Video Compression (Part 2 of 4)
Using Final Cut Pro and the Sorenson Video Codec
to Prepare QuickTime Video for Web Distribution

By Richard Lainhart

However, since Jordan wanted to sell his lessons, it didn't make sense in this case to try to stream the clips at 7KB/sec, the effective throughput of the typical 56K modem connection. First, video at that data rate is at too low a frame rate to see complex fingering patterns. Second, a student purchasing a video lesson would want to be able to download it for unlimited viewing offline anyway, and so should be willing to put up with longer connection times in order to own a copy of the clip. After some tests with the Sorenson codec, I decided to aim for an overall data rate of 35KB/sec for the QuickTime clips. With a 56K connection at 7KB/sec, this would give a download time of 5 times the length of the clip - that is, a 3-minute clip should download in 15 minutes, which didn't seem unreasonable.
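The download-time arithmetic above is simple enough to capture in a few lines. This is just a sketch of the back-of-the-envelope calculation, with the figures from the text (35KB/sec encoded rate, 7KB/sec connection throughput) as example inputs:

```python
def download_minutes(clip_minutes, data_rate_kbps, connection_kbps):
    """Estimated download time in minutes: the clip length scaled by
    the ratio of encoded data rate to connection throughput."""
    return clip_minutes * data_rate_kbps / connection_kbps

# A 3-minute clip at 35KB/sec over a 7KB/sec (56K modem) connection:
print(download_minutes(3, 35, 7))  # 15.0 minutes, 5x the clip length
```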

The Sorenson Video codec is capable of astounding quality at very low data rates, but like any compression scheme, it works best if you give it footage that's been optimized for the codec. Sorenson compresses video by comparing pixels in the frames with adjacent pixels, then grouping together similar pixels. It then compares adjacent frames, and groups similar blocks of pixels together across multiple frames. The less the pixels change from frame to frame, the more they can be compressed, which means that these pixels will take up less of the overall data rate of the clip. This leaves more compression headroom for the parts of the frame that do change a lot, so that the moving pixels don't have to be compressed as much to maintain the desired data rate.
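The principle at work - static pixels are nearly free, moving pixels are expensive - can be illustrated with a toy frame-differencing sketch. To be clear, this is not Sorenson's actual (proprietary) algorithm, just a minimal demonstration of why unchanged regions consume so little of the data rate:

```python
def encode_delta(prev_frame, frame):
    """Store only the (index, value) pairs that changed since the
    previous frame; static pixels cost nothing in the delta."""
    return [(i, v) for i, (p, v) in enumerate(zip(prev_frame, frame)) if p != v]

def decode_delta(prev_frame, delta):
    """Rebuild the new frame by applying the changed pixels to a copy
    of the previous frame."""
    frame = list(prev_frame)
    for i, v in delta:
        frame[i] = v
    return frame

# A locked-down shot: only 2 of these 8 "pixels" change between frames.
frame1 = [10, 10, 10, 10, 20, 20, 20, 20]
frame2 = [10, 10, 10, 10, 25, 30, 20, 20]
delta = encode_delta(frame1, frame2)
print(len(delta))                             # 2 changed pixels stored, not 8
print(decode_delta(frame1, delta) == frame2)  # True
```

If the camera had moved, every pixel would differ and the delta would be as large as the frame itself - which is exactly the situation the shooting advice below is designed to avoid.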

What this all means is that you can significantly improve the image quality of low data rate video if you shoot and edit it correctly. Web video must be compressed to very low data rates to download or stream in a reasonable time, but it's very rare that you see Web video that has been designed from the start for Web delivery. Almost always, highly compressed Web video starts out as broadcast video (and usually highly produced video, with lots of shots and fancy effects), with the result that it doesn't look as good as it could. When you move the camera during shots, as is standard in broadcast video, every pixel in the frame changes every frame, so when this footage is compressed, the codec has no static blocks of pixels to work with to reduce the data rate. Each pixel must then be compressed much more than it would have been if the camera hadn't been moving, which results in a greater degradation of the image. By simply locking down the camera for every shot, you create large areas of static pixels that can be highly compressed, so the other pixels can be compressed less than they would be otherwise.

The same applies to dissolves, wipes, and digital video effects - they all make pixels change every frame, so they make your video look worse when it's compressed for the Web. I decided from the outset of the project that I would mount the camera on a tripod and never move it, and I would use cuts only in the edit. Since I knew the final QuickTime clip would be less than full frame size, I also made sure to zoom as close in as possible to the subject of each shot. The smaller the frame size, the closer you want to be to your subject. Since this was a test, we used the natural lighting in Jordan's studio, although using real video lighting would result in a better image, since more even lighting creates larger areas of similar pixels. By the same token, I used the built-in mic in my camcorder to record the live audio, although the ideal scenario would have been to fit Jordan with a lavalier mic and take a stereo feed from his mixer. Since the Canon Optura doesn't have audio line inputs, this wouldn't have been possible anyway.

Read Part [1] [2] [3] [4]