
How Video Works

Chapter 15: Everything You've Wanted to Know About Video Compression, by Diana Weynand & Marcus Weise

Chapter 15: Compression

Compression is the process of reducing data in a digital signal by eliminating redundant information. This process reduces the amount of bandwidth required to transmit the data and the amount of storage space required to store it. Any type of digital data can be compressed. Reducing the required bandwidth permits more data to be transmitted at one time.

Compression can be divided into two categories: lossless and lossy. In lossless compression, the restored image is an exact duplicate of the original with no loss of data. In lossy compression, the restored image is an approximation, not an exact duplicate, of the original.

Lossless Compression

Lossless compression is characterized by a complete restoration of all the data contained in the original image. Compressing a document is a form of lossless compression in that the restored document must be exactly the same as the original. It cannot be an approximation. In the visual world, lossless compression lends itself to images that contain large quantities of repeated information, for example, an image that contains a large area of one color, perhaps a blue sky. Computer-generated images or flat colored areas that do not contain much detail, e.g., cartoons, graphics, and 3D animation, also lend themselves to lossless compression.

One type of lossless compression commonly used in graphics and computer-generated images (CGI) is run-length encoding. These images tend to have large portions using the same colors or repeated patterns. Every pixel in a digital image is composed of the three component colors, red, green, and blue, and every pixel has a specific value for each color. Therefore, it takes three bytes of information, one byte for each color, to represent a pixel. Run-length encoding, rather than storing the RGB value for each individual pixel, groups each scan line into sections, or run-lengths, of identical pixel values. For example, one section of a line of video might consist of a row of 25 black pixels. This section would be run-length encoded as 25, 0, 0, 0. This translates as 25 pixels, each composed of R 0, G 0, and B 0, or black. The original image would have required 75 bytes (25 pixels × 3 bytes) to hold this data. When compressed using run-length encoding, the same data can be contained in four bytes.
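
To make the arithmetic above concrete, here is a minimal Python sketch of run-length encoding a single scan line of RGB pixels. The function name and the sample pixel values are illustrative only and are not taken from the chapter.

# Minimal sketch of run-length encoding one scan line of (R, G, B) pixels.
def run_length_encode(scanline):
    """Collapse consecutive identical (r, g, b) pixels into [count, r, g, b] runs."""
    runs = []
    for pixel in scanline:
        if runs and runs[-1][1:] == list(pixel):
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, *pixel])  # start a new run
    return runs

# A row of 25 black pixels followed by 3 white pixels.
line = [(0, 0, 0)] * 25 + [(255, 255, 255)] * 3
print(run_length_encode(line))        # [[25, 0, 0, 0], [3, 255, 255, 255]]

The 25 black pixels that would normally occupy 75 bytes collapse into the single run 25, 0, 0, 0, matching the example in the text.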


Lossy Compression

Video images generated by a camera are generally not suited to lossless compression techniques. Rarely are there long enough runs of the same pixel value in an image to make these techniques efficient. Compression used for active video is usually in the lossy category. With lossy compression, the restored image is an approximation of the original. When a lossy image is decompressed, the data discarded during compression is not restored, so the result does not match the original exactly.

To minimize the apparent loss of data, lossy compression techniques generally compress the parts of the image the human eye is less sensitive to, or that contain less critical image data. The human eye is more sensitive to changes in light levels, or luminance, than to changes in color, both hue and saturation. Within the color gamut, the human eye is more sensitive to the yellow-green-blue range. The human eye is also more sensitive to objects in motion than to still objects. Rabbits, for example, will freeze in the presence of a predator. They know instinctively that the eyes of predators, humans included, are more sensitive to objects in motion, so a rabbit that remains motionless is less likely to be seen.

In lossy compression, the data chosen to be compressed is the data that does not fall within the human sensitivity range or data that contains a great deal of motion. Two commonly used lossy compression techniques are JPEG and MPEG. These techniques, and variations of them, are described below.

JPEG Compression

JPEG compression was developed by the Joint Photographic Experts Group and defines the standards for compressing still images, such as graphics and photographs. In JPEG compression, the image data is separated into luminance and chrominance information. JPEG takes advantage of the human eye's greater sensitivity to changes in luminance than to changes in color by sampling the chroma, or color information, in the image half as often as the luminance. In this manner, the chrominance data is reduced by half. The total data can be reduced further by encoding redundant luminance information in the image. Any constant values that appear in the image can be encoded using the same run-length technique used in lossless compression.
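
The following Python sketch illustrates the idea of sampling chroma half as often as luminance. The RGB-to-YCbCr weights are the commonly used ITU-R BT.601 approximations, assumed here for illustration rather than taken from the chapter.

# Sketch: keep luminance for every pixel, chroma only for every other pixel.
def rgb_to_ycbcr(r, g, b):
    # Approximate BT.601 conversion (assumed coefficients).
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128
    return y, cb, cr

def subsample_line(rgb_line):
    """Sample chroma half as often as luminance along one scan line."""
    luma, chroma = [], []
    for i, (r, g, b) in enumerate(rgb_line):
        y, cb, cr = rgb_to_ycbcr(r, g, b)
        luma.append(y)
        if i % 2 == 0:                 # every other pixel keeps its chroma
            chroma.append((cb, cr))
    return luma, chroma

line = [(200, 30, 30), (198, 32, 31), (40, 90, 200), (42, 88, 199)]
luma, chroma = subsample_line(line)
print(len(luma), len(chroma))          # 4 luma samples, 2 chroma sample pairs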

Motion JPEG Compression

Motion JPEG, or M-JPEG, was developed from JPEG as a means of compressing moving images by treating each image as a single still picture. Only incremental changes occur between adjacent frames of video, as the quantifiable difference from frame to frame tends to be less than 5%. Treating each image as a single still rather than as part of continuous motion is an effective approach to compressing motion images.
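
As a rough illustration of how small the frame-to-frame difference can be, the sketch below counts the fraction of pixel values that change between two adjacent frames. The tiny 8 × 8 frames and the threshold value are made up for the example.

# Measure what fraction of pixel values change between two adjacent frames.
def changed_fraction(frame_a, frame_b, threshold=4):
    """Fraction of pixels whose value differs by more than `threshold`."""
    total = changed = 0
    for row_a, row_b in zip(frame_a, frame_b):
        for pa, pb in zip(row_a, row_b):
            total += 1
            if abs(pa - pb) > threshold:
                changed += 1
    return changed / total

frame1 = [[10] * 8 for _ in range(8)]       # an 8 x 8 gray frame
frame2 = [row[:] for row in frame1]
frame2[0][0] = 200                          # a single pixel changes
print(f"{changed_fraction(frame1, frame2):.1%}")   # 1.6%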

MPEG Compression

MPEG compression was developed by the Moving Picture Experts Group and defines the standards for compressing moving images. MPEG techniques establish the protocols for compressing, encoding, and decoding the data, but not the encoding methods themselves. The rules dictate the order of the data and what the data must contain, but not the method by which the data is derived. This allows for continuous improvement in encoding techniques without having to constantly change existing equipment. Unlike M-JPEG compression, MPEG takes maximum advantage of interframe similarity as the key to its compression techniques.
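
A highly simplified Python sketch of the interframe idea follows: the first frame is stored whole, and later frames are stored only as differences from their predecessor. Real MPEG encoders work block by block with motion estimation, so this is only meant to show why similar frames compress well; the frame data here is invented for the example.

# Store the first frame whole, then only per-frame differences.
def encode_sequence(frames):
    """Return the first frame plus a difference list for each later frame."""
    reference = frames[0]
    diffs = []
    for frame in frames[1:]:
        diffs.append([b - a for a, b in zip(reference, frame)])
        reference = frame
    return frames[0], diffs

def decode_sequence(first, diffs):
    """Rebuild the sequence by adding each difference to the previous frame."""
    frames = [first]
    for diff in diffs:
        frames.append([a + d for a, d in zip(frames[-1], diff)])
    return frames

first  = [10, 10, 10, 200]
second = [10, 10, 12, 200]                  # only one value changed
start, deltas = encode_sequence([first, second])
assert decode_sequence(start, deltas) == [first, second]
print(deltas)                               # [[0, 0, 2, 0]] -> mostly zeros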


Each line within each field of digital video contains 704 pixels. MPEG-1 compression uses one field per frame of video, sampled at 352 pixels per line. Using half resolution horizontally and every other scan line vertically creates a quarter-resolution image. MPEG-1 is the simplest form of motion compression. There is no detailed analysis of each individual image or of adjacent images. Consequently, no advantage is taken of redundant information that occurs within each frame and between adjacent frames. It is compression based on a simple mathematical scheme of sampling every other pixel on every other line.
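
The sampling pattern described above can be sketched in a few lines of Python: keep every other pixel on every other line, reducing a 704-pixel line to 352 pixels and producing a quarter-resolution image. The 480-line frame height used in the example is an assumption for illustration.

# Keep every other line, and every other pixel within each kept line.
def quarter_resolution(frame):
    """Halve resolution horizontally and vertically, giving a quarter-resolution image."""
    return [line[::2] for line in frame[::2]]

frame = [[x for x in range(704)] for _ in range(480)]
small = quarter_resolution(frame)
print(len(frame[0]), "->", len(small[0]))   # 704 -> 352 pixels per line
print(len(frame), "->", len(small))         # 480 -> 240 lines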

MPEG Variations

MPEG-2 compression can use a variety of computer algorithms, or mathematical formulas, to compress the images. These different algorithms are referred to as tools and can be used in combination to provide progressively more compression without loss of quality. As a result, MPEG-2 compression can produce a good-quality image using about 4% of the original video data. In addition, MPEG-2 is flexible and can support a wide variety of data rates, picture sizes, and compression qualities.
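
Keeping about 4% of the data corresponds to roughly a 25:1 compression ratio. The short calculation below assumes the common 270 Mb/s uncompressed serial digital (SD-SDI) rate as a starting point, which is not a figure given in the chapter.

# Quick arithmetic on the "about 4% of the original data" figure quoted above.
original_mbps = 270            # assumed uncompressed SD serial digital rate
kept_fraction = 0.04
compressed_mbps = original_mbps * kept_fraction
print(f"ratio ~ {1 / kept_fraction:.0f}:1, about {compressed_mbps:.1f} Mb/s")
# ratio ~ 25:1, about 10.8 Mb/s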

Each successive variation of MPEG compression, e.g., MPEG-4, MPEG-7, and so on, is more sophisticated in its ability to discern compressible data, allowing for increased compression without degrading the image. Different MPEG variations lend themselves to specific applications.




