Approaches with Flash technology
When video is played using Adobe's Flash technology, several approaches are possible.
Single SWF
One little-used approach is to embed the actual multimedia content (the video and audio streams) within the .swf file itself. In other words, the .swf file acts as the video container file.
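Whether a given file is such an SWF container can be checked from its first bytes: SWF files begin with the signature "FWS" (uncompressed), "CWS" (zlib-compressed), or "ZWS" (LZMA-compressed), while standalone Flash Video files begin with "FLV". A minimal sketch in Python:

```python
def identify_flash_file(first_bytes: bytes) -> str:
    """Return a rough file-type label from a file's leading bytes.

    SWF signatures: b"FWS" (uncompressed), b"CWS" (zlib-compressed),
    b"ZWS" (LZMA-compressed); Flash Video files start with b"FLV".
    """
    if first_bytes[:3] in (b"FWS", b"CWS", b"ZWS"):
        return "swf"
    if first_bytes[:3] == b"FLV":
        return "flv"
    return "unknown"
```

For example, `identify_flash_file(open(path, "rb").read(3))` distinguishes a saved .swf from a saved .flv regardless of its file extension.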
Examples
On ScreenCast.com, the user JasonBrianMerrill posted two examples of how to use __:
- gluemidi: "Video (swf) - 1.44MB"
- another posting by him: "Video (swf) - 1.87MB", titled "2009-12-17_1307"
SWF is just a player
More commonly (an approach originally popularized by YouTube), a small .swf Shockwave Flash animation file serves only as the player. This player file does not contain the video and audio streams; instead, playback of a separate video file is triggered (initiated and controlled) by the Flash animation. The .swf animation provides transport controls (pause, resume, fast-forward) as well as information about the video, links to related videos, preview thumbnails, and advertising overlays. These are the kinds of features easily implemented with ActionScript in Flash animations (text and other semantic vector content).
The original container format used by (then-Macromedia's) Shockwave Flash video capability was .flv (wikipedia:Flash Video), which used the Sorenson Spark video compression algorithm, typically with MP3 for the audio stream.
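The .flv container itself is simple: it opens with a 9-byte header holding the "FLV" signature, a version byte, a flags byte indicating whether audio and/or video tags are present, and a 32-bit header size. A minimal sketch of parsing that header (an illustration, not a full FLV demuxer):

```python
import struct

def parse_flv_header(data: bytes) -> dict:
    """Parse the 9-byte FLV file header.

    Layout: b"FLV" signature, 1-byte version, 1-byte type flags
    (bit 2 = audio present, bit 0 = video present), then a
    32-bit big-endian header size (normally 9).
    """
    if data[:3] != b"FLV":
        raise ValueError("not an FLV file")
    version = data[3]
    flags = data[4]
    header_size = struct.unpack(">I", data[5:9])[0]
    return {
        "version": version,
        "has_audio": bool(flags & 0x04),
        "has_video": bool(flags & 0x01),
        "header_size": header_size,
    }
```

The tag stream (audio, video, and script-data tags) follows this header; tools that capture FLV streams simply save those tags verbatim.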
Next (around the time Macromedia was bought by Adobe, circa 2005), a more efficient video codec was desired, and a deal was struck with On2 to license their VP6 video compression. VP6 was part of a series of video compression algorithms that On2 developed, originally for video conferencing. Google has since purchased On2 and developed VP9 (see more about Google's work on video codecs).
Soon, Adobe wanted Flash to be able to take advantage of the H.264 (AVC, MPEG-4 Part 10) video compression codec. This was a newer, more powerful compression scheme than the earlier Sorenson Spark and VP6 technologies, but it is more processing-intensive. In particular, Adobe wanted to add High Definition video capability to Flash. Adobe added this capability by allowing .mp4 video files to be played using generation 9 Flash technologies (see wikipedia:Flash Video#History).
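Such .mp4 files can be recognized by their leading box: a 32-bit big-endian size, the fourcc "ftyp", and a major-brand code such as "mp42" or "isom". A small sketch (this only inspects the first box, nothing more):

```python
import struct

def read_mp4_brand(data: bytes) -> str:
    """Return the major brand of an MP4 file's leading 'ftyp' box.

    MP4 files begin with a box whose layout is:
    4-byte big-endian size, 4-byte type (b"ftyp"), 4-byte major brand.
    """
    size = struct.unpack(">I", data[0:4])[0]
    box_type = data[4:8]
    if box_type != b"ftyp" or size < 16:
        raise ValueError("no leading ftyp box; not a plain MP4 file")
    return data[8:12].decode("ascii")
```

This is handy when a captured file has no extension: a "mp42"/"isom"-branded file is an MP4 container, not an FLV.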
HDS
More recently, Adobe has introduced HDS. (See Capture Internet Video#Adobe HDS in this article's parent page.) This allows a more packet-like approach: streaming portions/segments of video over an HTTP connection, as opposed to one single large video file such as an .mp4.
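An HDS stream is described by an F4M manifest, an XML file listing the available renditions; the client fetches the segments of whichever rendition it selects. The sketch below parses a simplified, hypothetical manifest (real F4M files carry more elements, such as bootstrap and metadata info, but the per-rendition `<media>` entries illustrate the multi-bitrate idea):

```python
import xml.etree.ElementTree as ET

# Adobe's F4M XML namespace.
F4M_NS = "{http://ns.adobe.com/f4m/1.0}"

# A hypothetical, heavily simplified manifest for illustration.
SAMPLE_MANIFEST = """<manifest xmlns="http://ns.adobe.com/f4m/1.0">
  <media url="video_500" bitrate="500"/>
  <media url="video_1500" bitrate="1500"/>
</manifest>"""

def list_renditions(manifest_xml: str) -> list:
    """Return (url, bitrate_kbps) pairs from an F4M manifest string."""
    root = ET.fromstring(manifest_xml)
    return [(m.get("url"), int(m.get("bitrate")))
            for m in root.findall(F4M_NS + "media")]
```

A capture tool would use the chosen rendition's URL as the base name for requesting the HTTP segments.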
"Sneak Peek: Future Adobe technology for HTTP streaming across multiple devices", posted April 11, 2011 by Kevin Towes; the comments on the article are also worth reading.
Adobe DevNet "Technology Center"
A good third-party technical overview: "Adobe HTTP Dynamic Streaming (HDS): What You Need to Know", which includes a comparison with Apple's technology, Apple HTTP Live Streaming (HLS).
Is this the same as what is described in these Wikipedia articles?
wikipedia:Dynamic Adaptive Streaming over HTTP (DASH)
- which is a type of wikipedia:Adaptive bitrate streaming?
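Whatever the packaging (HDS, HLS, or DASH), the core idea of adaptive bitrate streaming is the same: the client measures its throughput and, segment by segment, picks the highest rendition it can sustain. A hedged sketch of that selection logic (the bitrate list and safety margin here are illustrative assumptions, not any particular player's algorithm):

```python
def pick_rendition(measured_kbps: float, available_kbps: list,
                   margin: float = 0.8) -> float:
    """Choose the highest bitrate at most margin * measured throughput.

    Falls back to the lowest rendition if even that exceeds the budget.
    """
    budget = measured_kbps * margin
    candidates = [b for b in sorted(available_kbps) if b <= budget]
    return candidates[-1] if candidates else min(available_kbps)
```

For example, with renditions of 500, 1500, and 3000 kbps and a measured throughput of 2000 kbps, this sketch would select the 1500 kbps rendition.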
RTMP
Another way video is transmitted over a network using Flash technologies is with Adobe's own Real-Time Messaging Protocol (RTMP). When RTMP is used, the Flash animation (.swf file) that triggers video playback is sent to the user (client) over HTTP; however, the multimedia content (the video and audio streams) is transported using the RTMP protocol.
When video is transmitted over RTMP rather than HTTP, network debugging tools (such as a web browser's developer tools, or Firefox's Firebug add-on) will NOT detect these video streams, so they cannot be used to sniff/capture/save the videos. Tools that speak RTMP, such as rtmpdump and its rtmpsuck proxy, are needed instead (see below).
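An RTMP stream is identified by an rtmp:// URL, which such tools split into the pieces an RTMP client needs. A small sketch of that split (following the common convention, used by tools like rtmpdump, that the first path component is the application name and the rest is the play path):

```python
from urllib.parse import urlsplit

def split_rtmp_url(url: str) -> dict:
    """Split an rtmp:// URL into host, application, and play path.

    Convention: rtmp://host/app/playpath, where "app" is the server-side
    application name and "playpath" names the stream to play.
    """
    parts = urlsplit(url)
    if parts.scheme not in ("rtmp", "rtmpe", "rtmpt", "rtmps"):
        raise ValueError("not an RTMP URL")
    app, _, playpath = parts.path.lstrip("/").partition("/")
    return {"host": parts.hostname, "app": app, "playpath": playpath}
```

Note that the boundary between "app" and "playpath" can be ambiguous on some servers (an app name may itself contain a slash), so capture tools often let the user override both parts explicitly.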