> In countries where patents on software algorithms are upheld, vendors and commercial users of products that use H.264/AVC are expected to pay patent licensing royalties for the patented technology that their products use. [...] All other royalties remain in place, such as royalties for products that decode and encode H.264 video.
By that logic, Linux isn't truly FOSS either, since it no doubt infringes on many software patents, and users of Linux have been sued over patents[1].
This is one of the main problems with software patents: they are so broad and numerous that it's impossible to write non-trivial software without infringing on something, often without even realising it.
There's a difference between willful and non-intentional infringement. If you use H.264 and you get sued by MPEG LA, you can be damn sure that they'll win and that you'll pay up a fortune for it.
On the other hand, patents really aren't so broad and those that are can be and have been overturned in court for being too general - with exceptions of course, of which you hear because they are so outrageous (like Amazon's 1-click patent).
In reality, it's really hard to prove that something is infringing on a patent, if the infringement wasn't intentional. In the Oracle versus Google case, Oracle was hoping they'd get Google with copyright infringement (of APIs no less), the patents involved being just in case the copyright infringement claims wouldn't work. Google won on all fronts.
The reason threats of patent infringement are so dangerous is that few companies are willing to make a stand and would rather settle.
VP8, as a spec, should be a bit better than H.264 Baseline Profile and VC-1. It’s not even close to competitive with H.264 Main or High Profile. If Google is willing to revise the spec, this can probably be improved.
VP8, as an encoder, is somewhere between Xvid and Microsoft’s VC-1 in terms of visual quality. This can definitely be improved a lot.
VP8, as a decoder, decodes even slower than ffmpeg’s H.264. This probably can’t be improved that much; VP8 as a whole is similar in complexity to H.264.
With regard to patents, VP8 copies too much from H.264 for comfort, no matter whose word is behind the claim of being patent-free. This doesn’t mean that it’s sure to be covered by patents, but until Google can give us evidence as to why it isn’t, I would be cautious.
VP8 is definitely better compression-wise than Theora and Dirac, so if its claim to being patent-free does stand up, it’s a big upgrade with regard to patent-free video formats.
VP8 is not ready for prime-time; the spec is a pile of copy-pasted C code and the encoder’s interface is lacking in features and buggy. They aren’t even ready to finalize the bitstream format, let alone switch the world over to VP8.
With the lack of a real spec, the VP8 software basically is the spec, and with the spec being "final", any bugs are now set in stone. Such bugs have already been found and Google has rejected fixes.
Google made the right decision to pick Matroska and Vorbis for its HTML5 video proposal.
> VP8, as a decoder, decodes even slower than ffmpeg’s H.264. This probably can’t be improved that much; VP8 as a whole is similar in complexity to H.264.
Funny that he himself wrote an ffmpeg VP8 decoder that was between 25 and 33% faster than the libvpx decoder, which had already been improved since his review. He said at the time it was faster than ffh264, even with more work still to be done.
Some recent Google tests suggest that the two ffmpeg decoders are basically the same speed on one core (a 2% advantage to VP8), rising to a 32% advantage for VP8 on 8 cores.
...thus pissing off everyone using Hangouts for tech support to people who are only ever going to have Internet Explorer installed. Thanks a bunch, Google.
I really want to crack some skulls together over the state of video compression codecs and OS support. Same thing happened with all the circle-jerking about HTML5 video - Flash had AS STANDARD a bunch of useful codecs (e.g. Screen Video) for which there is no good HTML5 replacement if you want to do a lossless screencast.
It's a free service. They do what they want with it.
I think supporting the Open Source codec is a good thing.
edit: also, from the article (first paragraph):
"The H.264 plugin will still be supported for browsers that do not support VP8: IE (of course), Safari, and older mobile clients."
So... there's that.
Being "open source" has nothing to do with it. The H.264 specs are open, there are plenty of FOSS tools available, and its supported ecosystem is much more robust. The only advantage to VP8, from a developer's perspective, is in being royalty-free. But the cost borne by Google in acquiring and developing that IP dwarfs any potential royalty to MPEG-LA ($6.5mm/yr max).
Make no mistake about it, this move is a play for power under the guise of good intentions.
That's a little disingenuous. H.264 ships with pretty much zero open source distros (the only exception I can think of is the software codecs in the AOSP images -- but even there I strongly suspect it works only because Google paid a license as part of a bigger deal) because of the patent restrictions.
That's not true of VP8. So the complaint upthread (about VP8 not working on IE) flips on its head: if you care about video working out of the box on open source OSes, then H.264 is a non-starter. Maybe you don't (I can pretty much guarantee that you don't, actually -- in fact from your tone and the "side" you picked I basically assume you're running iOS and OS X exclusively).
But some of us do care, and are quite grateful for VP8 making that possible. I don't feel like getting into an argument as to whether this constitutes a "power play", but it certainly seems like "good intentions" to me.
> from your tone and the "side" you picked I basically assume you're running iOS and OS X exclusively
Please don't mistake technical pragmatism for ideology. I'm a big supporter of software freedom, most of my work involves codecs like these, and it's almost always on Linux, which is my daily machine. Don't get me started on how fucked up iOS is (both technically and ideologically!).
There's no argument from me that software patents are a plague, and that does make codec distribution tricky. But Google isn't doing this so users can avoid the step of apt-get'ing ffmpeg.
Mozilla too has fought against H.264 since the beginning. They'll be thrilled that Google is taking this new step towards VP8.
Also, it really doesn't take much brain to figure out why H.264 is dangerous:
1) no open-source software can ship with H.264 (so Firefox cannot ship with H.264, unless it falls back to the OS's codecs, which they now do, or unless they cut a deal with MPEG LA). Mozilla is lucky because they do make money and could cut a deal, but most open-source projects don't have sponsors with pockets that big.
2) MPEG LA is not a real standards body, not like ISO. They are some firm in Denver. Those patents may be reasonably licensed (RAND) right now, but they can change those terms whenever they wish to do so. The only companies protected are those that had contributions in that pool (e.g. Apple, Microsoft)
And most importantly for people that care about the web - the web needs to stay open, where "open" in this case really doesn't mean open for companies and organizations with pockets full of millions of dollars.
But that specifically states that the maximum is per operating system.
Well, what if it turns out that's entirely too vague? What if MPEG-LA is telling Google that each ROM for each Android device counts as a different operating system?
This wasn't the case last I checked (with x264), especially if you consider video quality. Historically VP8 has been much slower, mostly because existing H.264 tools have had more time to mature.
I assume they mean power to decode, which makes sense if the article talks about decoding 10 other chat participants' video streams.
I also believe, based on discussions around WebRTC, that some H.264 advantages, like B-frames, don't apply to realtime encoding, so it's a much closer comparison than encoding a movie for streaming. The various submissions to the IETF mailing list arguing for one or the other on technical merit basically came down to a tie.
I wrote one of the first de/packetizers for VP8 RTP back when the spec was released. My impression was that the codec isn't really designed for network transmission. The probability table for the entire frame is sent up-front, so any loss of that makes the entire frame undecodable, and there's no slicing/IDR or NAL-like layer. It's fairly unworkable without a reliability or error-correction layer on top.
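A toy sketch of the failure mode described above. This is NOT the actual VP8 RTP payload format (the MTU value and chunking are illustrative assumptions); it just models the consequence of carrying the whole frame's probability tables in the first packet with no slice/NAL-style independently decodable units:

```python
# Toy illustration only -- NOT the real VP8 RTP payload format.
# It models the failure mode: the entropy-coder probability tables for
# the whole frame ride in the first chunk, so losing that chunk makes
# every remaining chunk of the frame undecodable.

MTU = 1200  # hypothetical payload bytes per RTP packet

def packetize(frame):
    """Split an encoded frame into MTU-sized chunks, header chunk first."""
    return [frame[i:i + MTU] for i in range(0, len(frame), MTU)]

def decodable(packets):
    """Without a slice/NAL-style layer, the frame is decodable only if
    the chunk carrying the probability tables (chunk 0 here) arrived."""
    return packets[0] is not None

pkts = packetize(bytes(5000))
print(decodable(pkts))               # all packets arrived
print(decodable([None] + pkts[1:]))  # first packet lost: whole frame is dead
```

Which is why, in practice, you end up bolting a reliability or FEC layer on top.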
More competition in the codec space is a very good thing, but VP8 is a relatively weak entry IMO.
> My impression was that the codec isn't really designed for network transmission
That seems unlikely since the major user of its predecessor VP7 was Skype (and before that VP6 was the "web codec" in Flash for a while). Maybe you could make a case that it has flaws that could make it better for this, but it seems silly to claim it was designed for a different use-case.
VP6 in Flash was always delivered losslessly over TCP, and never used for video capture in Flash. Skype is what makes VP8's shortcomings (for video chat) even more interesting, but On2's codecs show a very clear evolutionary path even from VP3. My guess is they didn't want to deviate too much from what was tried-and-true.
Doesn't change my opinion, though -- I think it's more the case of "Well, this works well enough for video chat, if you also do this and that" rather than "We designed this from the ground-up for low-latency, unreliable operation."
Read the article: they'll support both. Besides, Hangouts already requires a plugin on Windows, so they could include VP8 in the plugin... but let's bash Google instead.
If IE will ever support WebRTC, then they'll have to support VP8, too, which is the standard codec for WebRTC.
So how about Microsoft moves quicker in adopting open standards, instead of backing proprietary ones or dragging their feet? It took them 2 years even to decide they would use WebGL. I guess it will take them another 2 years for WebRTC.
Still hoping Daala can mature and take the throne. If its mathematics truly produce a next generation of compression efficiency in video, and it is open from xiph + Mozilla, that is the biggest kind of win.
FYI, there is no official standard video codec for WebRTC. For audio, the MTI codec is Opus. For video, none has been chosen. So far, the two browsers that implement the WebRTC API have chosen to support VP8, but that's not mandated by the standard.
So far lobbying, from MSFT amongst others, has prevented VP8 from being declared a mandatory codec for WebRTC. On the other hand, Chrome and Firefox are only implementing free codecs (which right now means VP8), so interoperability with the billions of deployed browser endpoints will require VP8 support.
I guess Microsoft's main concern is opening up another vector for patent lawsuits by shipping hundreds of millions of VP8 decoders. Google doesn't indemnify against VP8 infringements, so why would Microsoft pick up the tab when they're already paying for H.264? (Which they got sued over by Googorola, btw.)
Is there a reason why this is a link to Google Plus, which essentially links to GigaOM? The comments on Plus aren't what you would call top quality either.
That GigaOM article says "Google is phasing out the use of third-party code provided by the video conferencing technology vendor Vidyo", but this VentureBeat article says "Google will license Vidyo’s scalable video coding extensions for use in its free and open source VP9 video codec". Can these articles both be true?
"That’s why, moving forward, Vidyo actually wants to support WebRTC and open video codecs. The company is announcing Wednesday that it will contribute client-side scalable video coding technology to VP9, the next generation of Google’s open video codec. Vidyo will also cooperate with Google to incorporate some of its technology into enterprise versions of Hangouts."
So they "go HD" by switching to a format with worse compression quality. Sounds like a recipe for victory! (I can certainly understand switching to a royalty-free format, but I still found this funny nevertheless.)
On another note, I find it really annoying how people equate video resolution with video quality, when the only thing the resolution really tells you is how much detail the video could potentially have. Bitrate and encoding settings matter much more: at low bitrates (like what you'd tend to see in real-time video calls), an HD video can easily end up looking worse than a lower-resolution video at the same bitrate. This dual move to HD and the switch to a worse format (compression-wise) might just end up doing exactly that (though I haven't actually used Hangouts myself, so I can't speak for its current video quality).
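Back-of-the-envelope, the point is just arithmetic: at a fixed bitrate, quadrupling the pixel count quarters the bits available per pixel. The 500 kbps figure below is an assumed call-grade bitrate, not a Hangouts number:

```python
def bits_per_pixel(bitrate_kbps, width, height, fps=30.0):
    """Average bits available per pixel per frame at a given bitrate."""
    return bitrate_kbps * 1000.0 / (width * height * fps)

# The same 500 kbps stream at two resolutions (assumed numbers):
print(f"640x360  @ 500 kbps: {bits_per_pixel(500, 640, 360):.4f} bpp")
print(f"1280x720 @ 500 kbps: {bits_per_pixel(500, 1280, 720):.4f} bpp")
```

With a quarter of the bits per pixel, the encoder has to quantize much harder, which is why low-bitrate HD can look worse than SD at the same bitrate.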
I am a huge fan of Google Hangouts, however I am pretty sure my current bottleneck is the upload speed of the people talking with me ... how realistic is it to upload HD video content in real time?
[I am on all kind of European DSL connections, most of the time].
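As a rough feasibility check: the 1.5 Mbps target below is an assumed typical bitrate for 720p real-time video, not a published Hangouts figure, and the headroom factor is likewise a guess:

```python
TARGET_720P_MBPS = 1.5  # assumed typical 720p real-time bitrate,
                        # not a published Hangouts number

def uplink_ok(uplink_mbps, headroom=0.75):
    """Can this uplink sustain the target bitrate while keeping ~25%
    spare for audio, retransmissions, and other traffic?"""
    return uplink_mbps * headroom >= TARGET_720P_MBPS

for uplink in (0.5, 1.0, 2.0, 5.0):  # common ADSL/VDSL upload speeds
    print(f"{uplink:>4} Mbps up -> 720p send feasible: {uplink_ok(uplink)}")
```

On a typical ADSL line with ~1 Mbps up, sending 720p in real time is indeed a stretch; VDSL/cable uplinks of 2 Mbps and up are where it starts to work.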
For a lot of people there is going to be an encode (CPU) bottleneck as well. From trying to do webcasting a while ago, I can tell you that encoding HD video in realtime on last year's PCs was a major problem.
Then again, it takes this kind of thing to push hardware makers and ISPs forward. If a lot of people want to use google hangouts with HD, it will create a call for internet connections with fast upload speeds (rather than a bias towards download as we see now).
Were you webcasting in 1080p? This only supports 720p, which will be substantially less intensive. I think my phone can encode 720p in real-time. I'm pretty sure a mid-range laptop from the last couple years can do 720p at a reasonable bitrate.
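A quick way to see the difference in CPU load is the raw pixel throughput the encoder has to chew through. These numbers are pure arithmetic (assuming 4:2:0 subsampling, i.e. 1.5 bytes per pixel), not measurements of any particular encoder:

```python
def raw_throughput_mb_s(width, height, fps, bytes_per_pixel=1.5):
    """Uncompressed data rate the encoder must process
    (4:2:0 subsampling = 1.5 bytes per pixel)."""
    return width * height * fps * bytes_per_pixel / 1e6

print(f"720p30:  {raw_throughput_mb_s(1280, 720, 30):.1f} MB/s raw")
print(f"1080p30: {raw_throughput_mb_s(1920, 1080, 30):.1f} MB/s raw")
```

1080p is 2.25x the pixel throughput of 720p, so a machine that struggled with 1080p webcasting may still be fine at 720p.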
Yep. Even 2 year old machines (especially laptops) are unsuited. We have recently been testing Hangouts, Vidyo, Bluejeans, Polycom, Skype, Fuzebox.... and it was a lot of fun trying to diagnose degradation of user experience. Almost never was it network-related (except in cases with 8-10++ endpoints). It's much more common to be CPU-constrained than anything else.
I'd like to plug the Logitech C920 webcam, which not only has beautiful optics, terrific auto-focus, a wide viewing angle, and dual microphones, but also does the video encoding onboard.
When this was discussed on WebRTC mailing lists, it turned out that basically no video-conferencing software uses these onboard encoders: a) they're not tuned for real-time usage, and b) the software often has no access to the hardware anyway.
I've noticed the biggest determinant in quality is actually packet loss from wireless connections. Reduce that with a wired connection and it stays very smooth. Also, people prioritize clear audio way more than video. The real-time part is what makes it difficult.
Bitrate and resolution of a video encoding are two independent variables. I can encode 4K video at 50kbs or a 480p stream at 10mbs. That 4K stream is going to look like crap, but I can do it.
They may not bump up the bitrate exactly proportionately to the increase in resolution, so the effect may not be as pronounced as you think.
You mean how you can't see who is online on Hangouts? That is super annoying. I may decide how to contact someone based on their status. Are they idle/offline/busy?
Most people don't use statuses, and most of those who do abuse them (always away, busy, invisible, etc.), so I've always found them unreliable and useless. If you need to contact someone, just send the message; they'll answer when they can.
Will be interesting to see whether this will allow for realtime streaming of your desktop for "webinar" application. That's a market I would love to see a little disruption in from a big player.
No, but the resolution is terrible for sharing screens.
The other thing I should have mentioned is that I'm also concerned with sharing a hangout to YouTube for later viewing. That, right now, is only supported for 480p, which is completely unusable for screencasts that contain text.
Hangouts runs on a Google proprietary protocol. They are deprecating their FOSS Jingle library for XMPP video and audio conferencing, to force people to use their entire stack to use Hangouts.
I'm hoping Daala + Opus conferencing and streaming can enter Jingle in the next few years, see it mainlined in jabber, and that could be a good alternative. Self-host your own xmpp server if you want, or use a web service that provides it (like Talk did).
Did Hangouts start enforcing the true-names policy recently, or was it like that from the start? It makes it pretty useless, especially if you want a multi-party hangout
and/or don't know all your contacts to be full-on G+ believers.
Last time I checked, it didn't support full screen, which is a deal breaker for screen sharing. Hopefully they change it; who needs HD if it's not even full screen anyway?
http://arstechnica.com/gadgets/2013/08/google-hangouts-upgra...