> Real-time video is significantly different than still-frames. It's why video compression uses completely different algorithms than jpeg compression, for example.
You are right that video is different from still images: there is a lot of temporal redundancy that can be exploited to compress things to an acceptable level. But H.264 and the like are, at their root, a series of JPEG-like still frames (the intra-coded "I" frames), with various temporal compression techniques used to fill in between the I frames (P = predicted, B = bidirectionally predicted).
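If you want to see that structure for yourself, ffprobe can report the picture type of every frame in a stream. A minimal sketch, assuming ffprobe is installed and on the PATH, and using a hypothetical input.mp4:

```python
import subprocess
from collections import Counter

def frame_types(path):
    """Count I/P/B picture types for the first video stream of a file."""
    # ffprobe prints one pict_type (I, P, or B) per video frame
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "frame=pict_type", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line.strip().strip(",") for line in out.splitlines() if line.strip())

print(frame_types("input.mp4"))  # e.g. Counter({'B': 1800, 'P': 900, 'I': 60})
```

On a typical H.264 encode you should see far more P and B frames than I frames, which is exactly the temporal compression at work.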
Motion JPEG (MJPEG) is a thing, and it is often preferred in video editing over H.264 and the like because each frame is self-contained. It is essentially nothing but a stream of I frames (a rough check of that claim is sketched below).
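As a rough check, you can transcode a clip with ffmpeg's mjpeg encoder and confirm that every output frame reports as an I frame. Again a sketch, assuming the ffmpeg tools are installed; the file names are placeholders:

```python
import subprocess

# Transcode to Motion JPEG (video only), then list the picture types seen.
subprocess.run(
    ["ffmpeg", "-y", "-i", "input.mp4", "-c:v", "mjpeg", "-q:v", "3",
     "-an", "mjpeg_out.avi"],
    check=True,
)

types = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "frame=pict_type", "-of", "csv=p=0", "mjpeg_out.avi"],
    capture_output=True, text=True, check=True,
).stdout.split()

print({t.strip(",") for t in types})  # expected: {'I'} — every frame is self-contained
```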