Why do pictures from my movies look so bad?


Video versus Photo

Have you ever filmed a movie, only to find out later that you couldn't use a single one of its many images as a nice photo on your Snap! Website?

[Image: Newton's Cradle, showing the law of conservation of momentum.]

If you still wonder why the quality of a video image is so poor in comparison to just taking a still picture, then this article will most certainly help you understand the several reasons behind the problem.

First of all, know that there are now many different types of video cameras available, and each runs different software. Every brand claims to have created the best version ever, yet in reality each brand simply has a set of features better adapted to a specific situation. This being said, it would not actually be possible to just merge all the software and get the best of all worlds! The same property found in physics applies here: when you gain on one side, another has to give way.

Compression

The first artifacts come from the compression of data. Most cameras today record moving pictures in real time. This requires your camera to convert the incoming light to binary data (24-bit numbers in your computer), process the data, and save it to Flash memory, all at the speed of motion pictures.
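
To get a sense of why compression is unavoidable, here is a quick back-of-the-envelope sketch in Python (the frame size, color depth, and frame rate are illustrative assumptions, not any particular camera's specification):

```python
# Data rate of uncompressed video: a 640x480 frame, 24 bits
# (3 bytes) per pixel, at 30 frames per second.
width, height, bytes_per_pixel, fps = 640, 480, 3, 30

rate = width * height * bytes_per_pixel * fps
print(f"{rate / 1e6:.1f} MB/s uncompressed")  # about 27.6 MB/s
```

Sustaining tens of megabytes per second of writes to Flash memory is a lot to ask of a consumer camera, hence the compression.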

In most cases, the compression uses what we call MPEG (Moving Picture Experts Group). This group of scientists studies what the human eye can see and how to translate that into useful hardware and software. More or less, the MPEG compression relies on two features: (1) it works in small squares of 4x4 or 8x8 pixels; (2) it translates each square into a set of sinusoidal curves, applied to the color and gray scale information. What you need to understand here is that one square generally compresses very well, but its colors will be transformed to better match the chosen sinusoidals. Because of that, it may end up with colors that dramatically vary from the squares around it1.
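
If you want to see this idea in action, here is a minimal sketch using NumPy and SciPy. The block size, the number of coefficients kept, and the helper names are illustrative; a real MPEG codec also performs motion estimation, quantization tables, and entropy coding:

```python
# Approximate an 8x8 block with a handful of sinusoidal (DCT) curves.
# Each block is approximated on its own, which is why two neighboring
# blocks can land on slightly different colors: the "blocking" artifact.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D DCT of a block (type II, orthonormal)."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    """Inverse 2-D DCT."""
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

def compress_block(block, keep=10):
    """Keep only the `keep` largest DCT coefficients; zero the rest."""
    coeffs = dct2(block.astype(float))
    threshold = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < threshold] = 0.0
    return idct2(coeffs)

# A smooth gradient compresses very well: the residual error is tiny.
block = np.tile(np.linspace(0, 255, 8), (8, 1))
print(np.round(compress_block(block) - block, 1))
```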

Video Frames

Although many cameras take liberties, the motion picture speed is called the frame rate, and it should be no less than 24 images per second. This is roughly the speed at which the human eye stops perceiving individual changes in light and sees continuous motion instead. In the US, televisions have used 30 frames per second. In Europe, we use 25. Movies in theaters usually use 24, but at a much higher resolution, which compensates2.

When televisions were created, we had a speed problem in regard to the quantity of data that could be pushed to your TV screen. Because of that limit, we decided to limit the size of the picture. Old television sets show only about 640x480 pixels (the true resolution is higher, but this is what is generally considered visible; the European standard displays more pixels, with a resolution of 768x576).

[Image: Video Interlace uses two frames.]

Still, the speed was not enough to display the whole image. Making the image smaller was not a good option, so we found another solution: display half of an image (called a field). This means only 240 pixels in height, and thus half the data required on the airwaves for nearly the same quality.

Half of an image means losing a lot of resolution. To avoid having just 640x240 pixels on your TV, we instead send two half frames: an odd field (with the odd lines) and an even field (with the even lines). And since displaying only the odd or only the even field was not very good, we actually use what is called interlacing. In other words, the odd field displays all the odd lines on the screen (lines 1, 3, 5, etc.) and the even field displays the even lines.
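
In code, splitting a frame into its two fields is just a matter of taking every other line. Here is a minimal NumPy sketch (the frame content is fabricated for illustration):

```python
# Split an interlaced frame into its odd and even fields, then weave
# them back together.
import numpy as np

frame = np.arange(480 * 640).reshape(480, 640)  # stand-in for a 640x480 image

odd_field  = frame[0::2, :]   # lines 1, 3, 5, ... (0-based rows 0, 2, 4)
even_field = frame[1::2, :]   # lines 2, 4, 6, ...

# Weaving restores the full frame -- but only because both fields here
# come from the very same instant in time.
woven = np.empty_like(frame)
woven[0::2, :] = odd_field
woven[1::2, :] = even_field
assert (woven == frame).all()
```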

Okay... if you think this is already complicated, here is where the fun begins: time is of the essence! Video cameras are used to record motion, so you have to think of time. We cut the image in half in resolution, but not in time: each field represents a different point in the timeline. Say the odd field represents time T1 and the corresponding even field time T2. With 30 frames a second, that is 60 fields a second, so we know that:

T2 - T1 = 1/60th of a second.

I know what you're going to say: wow! 1/60th of a second is really not much. And in everyday life, that is true. However, to your eyes, it makes a difference. You can actually detect the difference between the two fields when they are displayed at that speed. A little faster would not make it any better.

Video Image3

Now, when you grab an image from one of your videos, your software gives you the corresponding two fields. Video is always saved in multiples of 2 fields, since you need 2 fields to fill the entire screen, so you won't have a problem in that regard.

[Image: A moving point in an interlaced video frame.]

However, remember that field 1 was shot at time T1 and field 2 was shot at time T2. This means the motion that occurred between field 1 and field 2 was recorded and is visible in your video image.

The picture on the right shows a point moving to the right. As you can see, I enlarged the lines in each field to show exactly what is happening in your image. Both fields show the same object at different times, and thus not at the same location. When you play this back as a video, you do not notice anything; actually, it will look like the point is moving smoothly from side to side. But when you take a still image from that video, the artifact clearly shows up.
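
You can reproduce the effect numerically. Here is a small sketch (all sizes and positions are made up for illustration) that weaves two fields shot at different times into one still frame:

```python
# Simulate the comb artifact: a point that moved between the two field
# capture times, woven into a single frame grab.
import numpy as np

h, w = 8, 16
field_t1 = np.zeros((h // 2, w), dtype=int)
field_t2 = np.zeros((h // 2, w), dtype=int)
field_t1[:, 4] = 1    # where the point was at time T1
field_t2[:, 8] = 1    # where the point is at time T2, 1/60 s later

frame = np.zeros((h, w), dtype=int)
frame[0::2, :] = field_t1    # odd lines come from T1
frame[1::2, :] = field_t2    # even lines come from T2
print(frame)   # the point shows up twice, split across alternating lines
```

This is exactly why deinterlacing filters exist: they throw away one field, or blend the two, to hide the time difference in a still image.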

Video Frames and Compression

Analog television broadcast over the airwaves is not compressed. It is expected to reach you as clean as it left the camera. Obviously, there is interference along the way...

[Image: Woman filming with her digital camera, by Thivierr.]

Now, in a cheap camera like the ones we get (i.e. not a $20K+ camera), the data gets compressed. Not only that, it compresses the data on a field basis. This means half an image is compressed, then the next half, then the next, etc., until you stop recording.

This creates artifacts between fields since, as I explained before, the compression will murder some of your colors, and that now applies both within each field and between the blocks of adjacent fields.
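
To see why compressing each field separately makes alternating lines disagree, consider plain quantization, the lossy step at the heart of such compression. A toy sketch (the step size and pixel values are made-up assumptions):

```python
# Quantizing each field independently can push two nearly identical
# colors to different values, so the odd and even lines shimmer.
def quantize(value, step=16):
    """Round a color value down to the nearest multiple of `step`."""
    return (value // step) * step

odd_pixel, even_pixel = 127, 129   # almost the same color, one per field
print(quantize(odd_pixel), quantize(even_pixel))  # 112 vs 128
```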

With a still subject and less compression, I still get artifacts. Why?

There are many factors at play. One of the hardest things to get right is the lighting of the subject. Assuming you got that part working, the next problem you will face is an overheated camera generating noise in the conversion of the analog data to digital. Other problems include features such as motion correction in the camera, which attempts to correct motion by recording at different speeds or interpolating pixels between fields. Although some of these features are good, they can alter images in unexpected ways.

Quite frankly, if you can, take still pictures when you know you will need them. Use the highest resolution available to you (around 4,000 x 4,000 pixels is really top notch!) and take the time to prepare the shot to get the best possible result.

  • 1. The artifacts resulting from compressing blocks are called blocking.
  • 2. The resolution in a movie theater motion picture is actually not measured in terms of digital pixels since it still uses film. The quality of the film used determines the resolution.
  • 3. The correct term is a Video Frame since, as explained in the previous paragraph, the image really is composed of two fields.