Thursday, December 07, 2006

How different is 1080i from 1080p really?

I've had a lot of people ask me about the new HD buzzwords floating around on the Internet and in store ads. There seems to be a lot of confusion about whether 1080p is better than 1080i by a large enough margin that the discerning consumer needs to pay attention to the difference. Most recently this question came in the form of an email, and I thought I'd share my answer.

Question: What is the loss of quality between 1080p and 1080i?

Answer: That actually depends on the source material and the monitor - it's a complicated question to answer, but all things being equal the loss would be half of the vertical resolution: only 540 of the 1080 horizontal lines are actually on screen at any one instant.

Another way of putting it:
1080i:
1920x540 pixels displayed at once

1080p:
1920x1080 pixels displayed at once

But this isn't an exact comparison.
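
If you want to put raw numbers on it, here's a quick bit of Python arithmetic (nothing more than the multiplication spelled out):

field_pixels = 1920 * 540     # what 1080i paints at any one instant
frame_pixels = 1920 * 1080    # what 1080p paints at any one instant
print(field_pixels, frame_pixels, frame_pixels // field_pixels)  # 1036800 2073600 2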

The Long Explanation:
Video recorded natively at 1080i is broken down into two separate fields (540 lines x 2). One field (commonly called Field A) consists of all of the even lines from the original frame and the other field (Field B) consists of all of the odd lines. The idea is that when these two fields are displayed one right after the other quickly enough, you appear to see the whole original frame (1080 lines) even though only 540 horizontal lines are ever drawn at a time. Essentially 1080i relies on tricking your eye into thinking it is seeing more than it actually is.
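
To make the field idea concrete, here's a toy sketch in Python with NumPy (my own illustration with stand-in data, not how any real encoder works) that splits a full frame into its two fields:

import numpy as np

# A stand-in for one full 1080-line frame (grayscale for simplicity)
frame = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)

field_a = frame[0::2, :]   # the even lines - 540 x 1920
field_b = frame[1::2, :]   # the odd lines  - 540 x 1920
print(field_a.shape, field_b.shape)   # (540, 1920) (540, 1920)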

The interlacing technique was developed with CRTs in mind. Most fixed-resolution displays (LCD/Plasma/DLP) can't actually display an interlaced picture (or rather don't, because it would look like crap on that type of display). Instead these displays have image processors that use a variety of techniques to take the interlaced signal and turn it into a progressive signal at whatever the native resolution of the display is. This is where interlacing really presents a problem, because when the material is shown on a progressive display it has to be "de-interlaced" first. The image processor has three basic options: line-doubling, interpolation, or field overlap.

Line-doubling is the easiest but it is wasteful - just take one of the two fields and show every line twice, discarding the other field entirely. Sure, it's using all of the available pixels, but you're really only seeing half of the picture.
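
In code, line-doubling is about one line of work (same toy NumPy setup as above, purely an illustration):

import numpy as np

field_a = np.random.randint(0, 256, size=(540, 1920), dtype=np.uint8)  # the field we keep

doubled = np.repeat(field_a, 2, axis=0)   # every line shown twice -> (1080, 1920)
# the other field is simply thrown away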

Interpolation is the hardest to do and it lowers fidelity - the image processor will "make up" the missing lines by averaging color and lighting info between each pair of lines in a single field and discard the other field entirely. The picture is a little clearer, but you've lost some fidelity because you're no longer seeing the true picture.
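
A naive version of that interpolation might look like this (again a toy NumPy sketch, not any particular chip's algorithm):

import numpy as np

field_a = np.random.randint(0, 256, size=(540, 1920), dtype=np.uint8)  # the field we keep

interp = np.empty((1080, 1920), dtype=np.float32)
interp[0::2] = field_a                                                 # real lines from the field
interp[1:-1:2] = (field_a[:-1].astype(np.float32) + field_a[1:]) / 2   # invented in-between lines
interp[-1] = field_a[-1]                                               # bottom line has no neighbor below
interp = interp.astype(np.uint8)                                       # the other field is never used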

Field overlap is the most problematic but provides the best fidelity. Simply taking the two separate fields and laying them on top of each other will give you the complete picture. However, if the material was originally recorded in an interlaced format (1080i for example), each field will have a separate time index. This means that objects in motion between fields will not line up correctly, which creates an effect called "combing" - the right and left edges of the moving object look like the teeth of a comb.
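
Field overlap (sometimes called "weave") is just re-interleaving the two fields, which is exactly why combing shows up (toy sketch again, with stand-in data):

import numpy as np

# Two fields captured roughly 1/60 of a second apart
field_a = np.random.randint(0, 256, size=(540, 1920), dtype=np.uint8)
field_b = np.random.randint(0, 256, size=(540, 1920), dtype=np.uint8)

woven = np.empty((1080, 1920), dtype=np.uint8)
woven[0::2] = field_a   # even lines, from time t
woven[1::2] = field_b   # odd lines, from time t + 1/60 s
# anything that moved between the two capture times now zig-zags line by line
# along its edges - that's the "combing" described above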

Most current image processors will use a combination of these techniques, constantly sampling the source material and trying to determine the best method. A lot of progressive scan DVD players have a Video Mode setting that will let you choose something like "Film, Video, Auto" so you can override the image processor if it isn't choosing the right technique.

The "Film" setting is of particular relavence. This is also sometimes called "3:2 pulldown" or "Inverse Telecine". Basically it is a smart version of the field overlap technique. When film material is transferred to video it is run through a machine called a Telecine. The Telecine basically takes each full frame of the film and breaks it down into separate fields to be played back on an interlaced display. Because the frame rate of film (24 frames per second) and interlaced video (~30x2 fields per second) have a constant ratio of 2:3 the Telecine machine uses that ratio to turn every two frames into five fields - that means that our of every five fields one of them will be a duplicate. When your TV or DVD player (it doesn't really matter where the image processor is) detects or is set to Inverse Telecine mode, it uses that known ratio to reassemble the original frames from the interleaved fields.

So the conclusion of the long answer is this:
--If your source material is video (i.e. it was captured/created in 1080i) then your resolution loss between 1080i and 1080p is 540 horizontal lines, because the missing lines will most likely be filled in with either made-up (interpolated) video information or line-doubling, making every other line redundant. This would be true of most TV shows and video games.

--If your source material was originally film (i.e. it was run through a Telecine machine to be made 1080i) and your TV has a decent image processor, then the individual fields will be reassembled into the full 1080p frames, resulting in essentially no loss of picture quality whatsoever.
