Video compression
TV-quality video contains a huge amount of brightness information, color information and
picture detail… much too much to cram down a small IP pipe like the Internet. Therefore, it
needs to be converted to a form suitable for Internet or private network delivery. The technical
terms for this conversion are scaling and compression, which together reduce the amount of
network resources needed to convey a reasonable facsimile of the original picture and sound.
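To see why compression is unavoidable, a rough calculation of the raw data rate helps. The sketch below uses assumed standard-definition figures (720x480 pixels, 30 frames per second, 16 bits per pixel on average); the exact numbers vary by format, but the scale of the mismatch does not.

# Back-of-the-envelope raw data rate for TV-quality video (illustrative assumptions).
width, height = 720, 480          # standard-definition picture
frames_per_second = 30
bits_per_pixel = 16               # 8-bit samples, 4:2:2 chroma sampling

raw_bits_per_second = width * height * frames_per_second * bits_per_pixel
print(f"Uncompressed: {raw_bits_per_second / 1e6:.0f} Mbps")    # about 166 Mbps

# A typical Internet stream of the same picture might run at around 1.5 Mbps,
# so scaling plus compression must shrink the data on the order of 100-fold.
target_bits_per_second = 1.5e6
print(f"Reduction needed: about {raw_bits_per_second / target_bits_per_second:.0f}x")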
The Encoder and its Codecs
The little piece of wizardry inside the encoder that does the compression is called a Codec,
which is short for “COder-DECoder.” The name is a little misleading here, since all we care
about for the moment is the coder part. All compression codecs invite you to specify the
aforementioned factors of speed, size and quality in various ways, but not all codecs are
created equal. Codec technology has evolved considerably, improving the picture quality that
can be achieved at a given network bandwidth and picture size. In addition, there are popular
codecs and less popular codecs, each promoted by its creators for picture quality, suitability
to a particular application, compatibility with specific viewing devices, or compliance with
different international standards.
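One informal way to picture the size-versus-quality trade-off is to ask how many bits the codec can spend on each pixel of each frame. The sketch below is purely illustrative; the function and the numbers are assumptions for this guide, not settings exposed by any particular codec or by SCX.

def bits_per_pixel(bitrate_bps, width, height, fps):
    """Rough figure of merit: data available per pixel per frame.
    More bits per pixel generally means better quality; a more efficient
    codec needs fewer bits to reach the same quality."""
    return bitrate_bps / (width * height * fps)

# The same 640x480, 30 fps picture at two assumed stream sizes:
print(bits_per_pixel(2_000_000, 640, 480, 30))   # ~0.22 bits/pixel
print(bits_per_pixel(500_000, 640, 480, 30))     # ~0.05 bits/pixel - visibly softer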
You may recognize some of these codecs by name: Microsoft Windows Media®, MPEG-2, MPEG-4,
H.264, Adobe® Flash® and Adobe Flash Live, the video parts of Microsoft Silverlight™, 3GPP
for mobile phones, and so on. You have probably encountered all of them while watching video
on the Internet. This means that unless you have a specific streaming application (usually one
aimed at a closed audience, where you can define both the encoded format and the playback
experience), you will need an encoder that can handle all of the popular codecs, in any
combination, often at the same time. Better yet, you need an encoder that makes it easy to
control any or all of them through a single, common, easy-to-understand user experience.
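Conceptually, handling several codecs at the same time means running the same input through several encoding profiles at once. The sketch below only illustrates that idea; the profile names, fields and the start_encoding function are invented for the example and are not the Niagara SCX interface.

# Hypothetical encoding profiles: one live input, several simultaneous outputs.
profiles = [
    {"name": "WindowsMedia-700k", "codec": "wmv",  "width": 640,  "height": 480, "bitrate_kbps": 700},
    {"name": "H264-1500k",        "codec": "h264", "width": 1280, "height": 720, "bitrate_kbps": 1500},
    {"name": "3GPP-mobile",       "codec": "3gp",  "width": 320,  "height": 240, "bitrate_kbps": 200},
]

def start_encoding(source, profile):
    # Placeholder for whatever the real encoder does with one profile.
    print(f"Encoding {source} as {profile['name']}: "
          f"{profile['width']}x{profile['height']} @ {profile['bitrate_kbps']} kbps")

# The same source feeds every profile (shown serially here for simplicity).
for profile in profiles:
    start_encoding("live capture input", profile)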
The streaming server
The encoder creates the desired video and audio stream. The next steps take care of making the
stream available in sufficient volume for the anticipated audience and of giving viewers a way
to start the playback experience. The device that accepts the stream from the encoder (the
Uplink Stream) and makes it available to a mass audience is called a streaming server.
The server runs special software that accepts uplink streams from an encoder and manages
connection requests from hundreds or thousands of viewers. The software these servers run
comes from a variety of sources, including Microsoft Windows Media Server and many others,
and most of it can stream several different formats. The server can be a single machine if the
audience is small, say a couple of hundred viewers or so. This is often the case in Enterprise
and Education applications; in these smaller environments, you may wish to own your own media
server and manage its operations. However, if your application must reach a more global
audience, the “server” will in reality be a server farm: an array of interconnected servers,
often deployed in numbers around the globe.
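A quick bandwidth estimate shows why a few hundred viewers fit comfortably on one server while a global audience does not. The stream size and audience figures below are assumptions chosen for illustration.

stream_bitrate_mbps = 1.5   # assumed size of one outgoing stream

def outbound_bandwidth_mbps(viewers, bitrate_mbps=stream_bitrate_mbps):
    """Total bandwidth the server (or server farm) must sustain:
    every viewer receives their own copy of the stream."""
    return viewers * bitrate_mbps

print(outbound_bandwidth_mbps(200))      # 300 Mbps  - within reach of a single server
print(outbound_bandwidth_mbps(50_000))   # 75,000 Mbps (75 Gbps) - a job for a server farm or CDN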
There are companies, such as Akamai and LimeLight, that own and maintain vast networks of
such servers and make them available to you for a fee. These companies call their server array a
Content Delivery Network (CDN). The term has over time come to represent the service itself -