The news articles about this made it sound like Netflix had invented their own H.264 encoder just to optimize bitrate levels, which would be a waste of time since x264 already has a constant quality mode (CRF) that chooses bitrate based on quality rather than quality based on bitrate.

If you read their tech blog, though, what they're actually doing is more interesting. Their problem is that they still need to define quantized quality levels that increase smoothly with available bandwidth. Previously they were using a linear bitrate ladder, which wasted bandwidth on low-complexity videos and didn't give enough to high-complexity ones, because video quality increases roughly logarithmically on a linear bitrate scale.
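To make the diminishing-returns problem concrete, here's a toy sketch. The logarithmic quality model and its constants are my own illustrative assumptions, not Netflix's numbers; the point is just that with evenly spaced bitrate rungs, most of the perceptual gain lands in the first couple of steps:

```python
import math

# Toy model: assume quality saturates logarithmically with bitrate.
# The constants here are made up purely for illustration.
def quality(bitrate_kbps):
    return 10 * math.log2(bitrate_kbps / 100)

# A linear bitrate ladder: evenly spaced rungs.
ladder = [1000, 2000, 3000, 4000, 5000]

for lo, hi in zip(ladder, ladder[1:]):
    gain = quality(hi) - quality(lo)
    print(f"{lo} -> {hi} kbps: +{gain:.2f} quality")

# Each extra 1000 kbps buys less quality than the last:
# 1000 -> 2000 kbps: +10.00 quality
# 2000 -> 3000 kbps: +5.85 quality
# 3000 -> 4000 kbps: +4.15 quality
# 4000 -> 5000 kbps: +3.22 quality
```

Under any model with this rough shape, evenly spaced rungs give big quality jumps at the bottom of the ladder and nearly indistinguishable ones at the top.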

Their innovation isn't that they came up with some new encoding algorithm, as the news suggested. It's that they figured out a method for finding encoding parameters that produce a linear quality scale while still pinning each quality level to a known bitrate. The problem with just using constant quality mode is that by default your bitrate is unbounded, so the video may suddenly jump from 1 Mbps to 5 Mbps to maintain quality, which, if you're streaming, means playback might stall and buffer at the start of any action scene.
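You can see this effect directly by bucketing packet sizes per second of a CRF-encoded file. This sketch assumes you have ffprobe installed and a file named crf_encode.mp4 (the filename and the one-second bucket size are my choices, not anything from Netflix's post):

```python
import subprocess
from collections import defaultdict

# Sum video packet sizes into one-second buckets to see how much
# a constant-quality (CRF) encode's bitrate swings scene to scene.
out = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "packet=pts_time,size",
     "-of", "csv=p=0", "crf_encode.mp4"],
    capture_output=True, text=True, check=True,
).stdout

buckets = defaultdict(int)
for line in out.splitlines():
    fields = line.split(",")
    if len(fields) < 2 or fields[0] == "N/A":
        continue
    buckets[int(float(fields[0]))] += int(fields[1])

for second in sorted(buckets):
    kbps = buckets[second] * 8 / 1000
    print(f"{second:4d}s: {kbps:8.0f} kbps")
```

On a typical CRF encode the per-second rate swings by several multiples between quiet dialogue and action scenes, which is exactly the buffering risk described above.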

So it seems like what they're doing is running a sample encode of each video at a linear series of constant quality settings, then taking the average bitrate produced by each of those sample encodes and using it to construct a bitrate ladder that is linear in quality but has a known bitrate at each rung. What they don't mention, and what I'd be curious to find out, is how they then encode the final videos: whether they're still using constant bitrate mode, just with algorithmically determined bitrate levels, which would still waste bandwidth on low-complexity scenes within a video, or whether they're actually using constant quality mode with a bitrate cap set at each level so the bitrate never rises beyond a certain point.
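Here's a rough sketch of how that sampling step could work with plain ffmpeg/x264. This is my reconstruction of the idea, not anything Netflix has confirmed: the CRF values, the sample.mkv input, the hardcoded duration, and the 2x VBV buffer are all illustrative choices. The capped variant at the end uses x264's real VBV options (maxrate/bufsize layered on top of CRF) for the second possibility described above:

```python
import os
import subprocess

SAMPLE = "sample.mkv"              # representative clip from the title (assumed input)
CRF_LADDER = [30, 27, 24, 21, 18]  # linear steps on x264's quality scale (illustrative)
DURATION_S = 120.0                 # clip length; in practice you'd read this with ffprobe

def avg_bitrate_kbps(path, duration_s):
    """Average bitrate of an encode, from file size and clip duration."""
    return os.path.getsize(path) * 8 / 1000 / duration_s

def sample_encode(crf, out_path):
    """Plain constant-quality encode: bitrate is whatever the content needs."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", SAMPLE, "-c:v", "libx264",
         "-crf", str(crf), "-an", out_path],
        check=True,
    )

# Step 1: probe each quality level and record the bitrate it produced.
ladder = []
for crf in CRF_LADDER:
    out_path = f"probe_crf{crf}.mp4"
    sample_encode(crf, out_path)
    ladder.append((crf, avg_bitrate_kbps(out_path, DURATION_S)))

# Step 2: the resulting (quality, bitrate) pairs are the per-title ladder.
for crf, kbps in ladder:
    print(f"CRF {crf}: ~{kbps:.0f} kbps")

# For the final encodes, the capped-CRF possibility would look like this:
# constant quality, but VBV-limited so bitrate never exceeds the rung.
def final_encode(crf, cap_kbps, out_path):
    subprocess.run(
        ["ffmpeg", "-y", "-i", SAMPLE, "-c:v", "libx264",
         "-crf", str(crf),
         "-maxrate", f"{cap_kbps:.0f}k",
         "-bufsize", f"{cap_kbps * 2:.0f}k",  # 2x buffer is a common rule of thumb
         "-an", out_path],
        check=True,
    )
```

Either way, the probe encodes are cheap relative to the full encoding pipeline, and they're the part that makes the ladder per-title rather than one-size-fits-all.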