"Custom magic algorithms" in this case being "off-the-shelf free software, using completely default parameters, that we've stolen and claimed is our magic algorithm".
I hate to burst your bubble, but "custom magic algorithms" actually exist. I was involved with a contract that accomplished the same end as mentioned in the article - compressed video data, playable within standard video players. The company wasn't achieving the same level of compression that this article is claiming, but they did manage to reduce video size while maintaining quality.
Sure, you can "improve compression" -- just compare yourself to an awful encoder! This is the strategy used by thousands of marketing whitepapers.
But x264 isn't an awful one; it's the best, leading the nearest competition by ~30% according to the most recent independent comparison (http://www.compression.ru/video/codec_comparison/h264_2011/). Every company loves to claim that their "custom magic algorithms" are amazing and magic, but when push comes to shove, nothing compares to what free software can put out, and this shows no signs of changing, at least not before the release of HEVC.
If you have a "magic algorithm" that does better than the current state of the art, send it in to be benchmarked! I mean, if you can do better than the encoder used by Google, YouTube, Netflix, Facebook, and thousands of other companies around the world, surely you'll be able to get some customers if you publicize this super-amazing-magic algorithm, right?
Or maybe, like the thousands of other companies claiming the same, your "magic algorithm" is bullshit -- or a good idea, just already implemented in dozens of other encoders out there too, and you're demonstrating its efficacy by comparing it to a junk heap like Quicktime.
I can assure you that we did our testing against x264, and that we produced encodings that were better than x264. You're also assuming that compression optimization can only occur in the encoder, which I assure you is false (damn you, NDA! You'll have to take my word on these statements).
As to the business of selling said encoders, I couldn't agree with you more: if you've got something that improves on the most widely used encoder, then surely customers should be clamoring to get hold of it. That is, if you're selling it properly.
However, I was merely a contractor, not making the business decisions. I worked with multiple contractors who thought much the same as you (myself included). And when you're taking investor money, telling them that you're going to open-source a technology that took 2+ years to develop, and hoping that you'll make money from it, is a great way to lose your investors' money.
> I can assure you that we did our testing against x264, and that we produced encodings that were better than x264
By what measurement, PSNR? x264 doesn't optimize for PSNR by default.
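This matters because x264's default psychovisual optimizations (psy-rd and friends) deliberately trade PSNR for perceptual quality -- x264 even ships a `--tune psnr` mode for exactly this kind of benchmark. For reference, PSNR is nothing more than log-scaled mean squared error; here's a minimal sketch in Python (treating frames as flat pixel lists is my simplification):

```python
import math

def psnr(ref, test, peak=255):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    assert len(ref) == len(test)
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical frames: no error at all
    return 10 * math.log10(peak ** 2 / mse)

# A frame uniformly off by 16 levels: MSE = 256, so PSNR is about 24 dB.
ref = [100] * 64
deg = [116] * 64
print(round(psnr(ref, deg), 2))  # -> 24.05
```

The point is that a frame can score well on this metric while looking blurry, and vice versa -- which is why an apples-to-apples comparison has to say which metric was used and whether each encoder was tuned for it.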
And when was this? x264 has improved dramatically in the past 4 years. In 2007, Mainconcept could beat x264 in many cases and Ateme's 2004 encoder was still sometimes better! There are cases where x264 has improved by a factor of 2 in this time period, or more.
> You're also assuming that compression optimization can only occur in the encoder
Do you mean prefiltering? Such a thing is a dishonest comparison, as you can prefilter before using any particular encoder, and there are whole frameworks built for exactly that purpose which are widely used with x264 -- and other encoders too.
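To illustrate why prefiltering is encoder-agnostic: a denoiser smooths away high-frequency noise before the encoder ever sees the frame, so every encoder downstream spends fewer bits. A toy 1-D box blur makes the point (`box_blur` and the sample data are mine, a stand-in for the real filters such frameworks provide):

```python
def box_blur(row, radius=1):
    """Simple 1-D box blur: average each pixel with its neighbors (clamped at edges)."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

noisy = [100, 140, 60, 120, 80, 130, 70, 110]
smoothed = box_blur(noisy)
# The smoothed row varies far less pixel-to-pixel, so *any* encoder
# compresses it better -- the gain is not specific to one encoder.
```

Which is exactly why crediting the encoder for a gain produced in the filter chain is a dishonest comparison: run the same filter in front of x264 and the "advantage" evaporates.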
> And when you're taking investor money, telling them that you're going to open-source a technology that took 2+ years to develop, and hoping that you'll make money from it, is a great way to lose your investors' money.
If your technology takes 2+ years to develop, your programmers are incompetent or your management is broken. Probably no single algorithm in x264's history has taken more than a few days to develop. Coming up with good ideas is a matter of thinking, combined with trial and error: once you have an idea that actually works, implementing it is dead trivial. The time-consuming part is the other 99 ideas you tried that didn't work so well -- and you can't plan for that.
I used the term "algorithm" as a crutch, but it seems likely there are tools out there that can do fantastic keyframe & data rate shaping.
The most apparent is whatever Apple's been using for years for their movie trailers. Not a single artifact, low data rate, etc. It's better than your average 2-pass. But who knows, the "magic" could simply be to start with uncompressed source...
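On what 2-pass rate shaping actually buys you: the first pass measures how hard each frame is to encode, and the second pass spends the bit budget where it's needed instead of evenly. A toy sketch (the proportional allocation and the complexity scores here are illustrative, not any real encoder's rate control):

```python
def allocate_bits(complexities, total_bits):
    """Toy two-pass rate control: give each frame a share of the total
    budget proportional to its first-pass complexity score."""
    total = sum(complexities)
    return [total_bits * c / total for c in complexities]

# First pass found a static scene, then a cut, then sustained motion
# (hypothetical scores); second pass shifts bits toward the hard frames.
complexities = [1.0, 1.0, 8.0, 5.0, 5.0]
budget = allocate_bits(complexities, 20000)
# Static frames get ~1000 bits each; the scene cut gets ~8000.
```

A 1-pass encoder has to guess at future complexity as it goes, which is why 2-pass output at the same file size tends to look cleaner -- and, as noted, starting from uncompressed source removes a whole generation of artifacts before rate control even enters into it.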
> However, I was merely a contractor, and not making the business decisions. I worked with multiple contractors that thought similar things to yourself
Do you realize you're not talking to some random contractor? You're talking to the guy behind x264.
If you don't know him, read his analysis of VP8 (note copyright footer):
Having been in the streaming industry since the mid-'90s, with heavy work in live and on-demand encoding, I've seen dozens of these companies make similar claims, usually in pursuit of investment dollars. For years running, open source or closed, none has stacked up well against x264 when judged by how people actually perceive video and by the resources required to produce the encoded content.
You really don't have to read past the first graph:
This is an informative study, and has been for the last seven years. If you have better compression that would let me, as a CDN, offer clients movie delivery with enough savings in bandwidth and storage to be worth retooling for, we're all ears. Again, I've talked to dozens upon dozens. None really had it. So far, given their original source, I could personally produce an even smaller x264 file that end users prefer.
You've just described the business model of a surprisingly large number of consultancies. One man's obvious is the next man's "magic". AFAICT businesses are happy to pay for this sort of thing, so I'm not sure who deserves blame here.