Bayer demosaicing algorithms induce orientation-dependent distortion of sharpness

fvdbergh

Senior Member
Joined
Aug 10, 2006
Messages
703
I have run some experiments with dcraw to investigate the effects of various Bayer demosaicing algorithms on image sharpness.

A very basic Bayer demosaicing algorithm (see http://www.cambridgeincolour.com/tutorials/camera-sensors.htm for a brief introduction) will effectively act as a low-pass (i.e., blurring) filter because it interpolates the missing colours from nearby neighbours. In smooth areas this is actually beneficial, since it helps to suppress image noise. On sharp edges (or detailed regions) it will obviously introduce unwanted blurring. Smarter demosaicing algorithms, such as Patterned Pixel Grouping (PPG) or Adaptive Homogeneity-Directed (AHD) interpolation, have crude mechanisms to detect edges. They can then avoid interpolating values across the edges, thus reducing blurring in areas with high detail or sharp edges. Unfortunately, because they use only local information (maybe one or two pixels away), they cannot estimate edge orientation very accurately, and hence appear to introduce varying amounts of blurring, depending on the orientation of edges in the image.
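To make the low-pass behaviour concrete, here is a minimal Python sketch (my own illustration, not dcraw's actual code) of bilinear interpolation of the green channel on an RGGB mosaic. Each missing green value is simply the average of its four green neighbours, which is exactly the averaging that blurs across a sharp edge:

```python
import numpy as np

def bilinear_green(bayer):
    """Interpolate the green channel of an RGGB mosaic by averaging the
    four nearest green neighbours at each red/blue site (image borders
    skipped for brevity). This averaging is what blurs across edges."""
    g = bayer.astype(float).copy()
    h, w = bayer.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # In RGGB, red sites are (even row, even col) and blue sites
            # (odd, odd); both lack a green sample, i.e. row%2 == col%2.
            if (y % 2) == (x % 2):
                g[y, x] = 0.25 * (bayer[y - 1, x] + bayer[y + 1, x] +
                                  bayer[y, x - 1] + bayer[y, x + 1])
    return g
```

A direction-aware algorithm such as PPG or AHD would instead compare the horizontal and vertical gradients at each site and average only along the direction with the smaller gradient, which is why it needs an (inevitably crude) local estimate of edge orientation.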

I set out to investigate this. The first step is to generate a synthetic image for each channel (Red, Green, and Blue). These images are generated with a known edge orientation, and a known edge sharpness (MTF50 value). The grayscale range of each image has been manipulated so that the black level and the intensity range differ between channels, to match more closely how a real Bayer sensor behaves. A little bit of Gaussian noise was added to each channel. Next, the three channels are combined to form a Bayer image. Each pixel in the Bayer image comes from only one of the three channels, according to a regular pattern, e.g., RGRGRG on even image rows, and GBGBGB on odd rows.
Here is an example:
6659644811_81ceb36ae8_z.jpg
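The combination step can be sketched as follows (a minimal Python illustration; the function name is mine, and the 2x2 RGGB tiling described above is assumed):

```python
import numpy as np

def make_bayer(r, g, b):
    """Combine three full-resolution channels into one RGGB Bayer mosaic:
    RGRGRG on even rows, GBGBGB on odd rows. Each output pixel keeps the
    value of exactly one input channel."""
    bayer = np.empty_like(g)
    bayer[0::2, 0::2] = r[0::2, 0::2]  # red sites: even rows, even columns
    bayer[0::2, 1::2] = g[0::2, 1::2]  # green sites on the red rows
    bayer[1::2, 0::2] = g[1::2, 0::2]  # green sites on the blue rows
    bayer[1::2, 1::2] = b[1::2, 1::2]  # blue sites: odd rows, odd columns
    return bayer
```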

The red, green and blue synthetic images were generated with different MTF50 values; in this case, red had edges with MTF50 = 0.15, G = 0.25, and B = 0.35. This is somewhat exaggerated. I did not simulate true chromatic aberration, but lateral chromatic aberration would create an effect similar to different MTF50 values in the different channels: since the simulated image is roughly black rectangles on a white background, lateral chromatic aberration would effectively cause the edges to blur. At any rate, I have seen that my Nikkor 35 mm f/1.8 prime lens is definitely softer in the red channel.

Using a set of these synthetic images with known MTF50 values, I could then measure the MTF50 values with and without Bayer demosaicing. The difference between the known MTF50 value for each edge and the measured MTF50 value is then expressed as a percentage (of the known MTF50 value). This was computed over 15 repetitions at each edge angle from 3 degrees through 43 degrees (other edge orientations can be reduced to this range through symmetry). The result is a plot like this:
6659012839_63ac810467_z.jpg
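The error statistic behind each box in the plot can be sketched like this (a hypothetical helper of my own, assuming NumPy; the box spans the interquartile range of the repetitions at one edge angle):

```python
import numpy as np

def mtf50_error_stats(measured, known_mtf50):
    """Express measured MTF50 values as a percentage error relative to the
    known MTF50 of the synthetic edge, and summarise the repetitions at
    one edge angle with the median and the middle 50% of the values."""
    errors = 100.0 * (np.asarray(measured) - known_mtf50) / known_mtf50
    q25, median, q75 = np.percentile(errors, [25, 50, 75])
    return median, (q25, q75)
```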

This clearly shows a few things:
1. Using the original synthetic red channel image (before any Bayer mosaicing) produces very accurate, unbiased results (blue boxes in the plot). The height of each box represents the width of the middle-most 50% of individual measurements, i.e., the typical spread of values.
2. Both PPG (black) and AHD (red) start off with a 5% error, which increases as the edge angle increases from 3 through 43 degrees. This upwards trend is the real problem -- a constant 5% error would have been fine. The values are positive, i.e., sharpness was higher than expected. This is because the green and blue channels both have higher MTF50 values, so some of that sharpness is bleeding through to the red channel.
3. A variant of the MTF measurement algorithm, designed to run on the raw Bayer mosaiced image, was run on the red sites only -- see the green boxes. Since this means only 25% of the pixels could be used, this effectively magnifies the effect of noise (remember, the synthetic images had a little bit of Gaussian noise added). This increases the spread of the errors, but the algorithm remains unbiased, regardless of edge orientation.
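Selecting only the red photosites is just a strided slice on the mosaic (a sketch assuming the RGGB layout described earlier; the function name is hypothetical):

```python
import numpy as np

def red_sites(bayer):
    """Return only the red photosites of an RGGB mosaic (even rows, even
    columns). Only 25% of the pixels survive, which magnifies the effect
    of noise, but no interpolated values are involved."""
    return bayer[0::2, 0::2]
```

The same slicing with offsets (0, 1), (1, 0) and (1, 1) yields the two green subgrids and the blue subgrid, respectively.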

The data is quite clear: both PPG and AHD distort the MTF50 measurements in an edge-dependent manner. These two algorithms are the most sophisticated ones included with dcraw. Other raw converters, such as ACR, may perform differently, and I intend to demonstrate their performance in future posts.

Similar plots of the other channels can be found here:
http://www.flickr.com/photos/73586923@N02/sets/72157628774569345/with/6659013357/

My next post will demonstrate how this affects actual photos, including how the on-camera Bayer demosaicing algorithms fare.
 
Effects of dcraw demosaicing algorithms on sharpness in real photos

In the first post on this thread I provided some evidence of orientation-dependent distortion of sharpness caused by Bayer demosaicing algorithms. Now we will look at some samples from real photos.

I used MTF Mapper (http://sourceforge.net/projects/mtfmapper/) to produce maps of both Sagittal and Meridional MTF50 values of a Nikkor 35 mm f/1.8 AF-S G lens mounted on a Nikon D7000 body. The actual photo looks like this:
6659171141_92c15da906_z.jpg


I captured this shot in both raw and JPEG. The .NEF was then processed with dcraw v9.10, with various demosaicing options.
First up, the results for Adaptive Homogeneity-Directed (AHD) interpolation (dcraw -q 3):
6659088277_c06cf89db3_b.jpg


This is the image that started this whole thread. While I was developing the MTF surface map rendering option for MTF Mapper, I obviously used synthetic images to check whether MTF Mapper was producing the desired output. Once I was satisfied that the output looked good, I tried running MTF Mapper on some .NEF shots developed with dcraw. The strange cross shapes that appeared in both the meridional and sagittal MTF maps did not look like what I was expecting, which got me thinking about possible causes.

I initially thought that it was caused by something in the CMOS sensor pixel geometry. Maybe the pixels were not really square, and some edge angles thus produced biased MTF50 estimates. I have yet to test this, but it seems that the demosaicing algorithms are responsible for the bulk of the distortion.

Next, I modified the slanted edge MTF estimation algorithm used in MTF Mapper to use only pixels from a specific filter colour, e.g., only the red sites. Using the dcraw -D option to extract the raw Bayer image from the .NEF files, I could then directly measure only the red channel MTF without going through a demosaicing algorithm first.
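The core of such a single-channel slanted-edge measurement can be sketched as follows. This is a heavily simplified, hypothetical version (my own simplification, not MTF Mapper's actual code): project each sample onto the edge normal, bin the projections into a supersampled edge spread function (ESF), differentiate to get the line spread function (LSF), and read MTF50 off the FFT of the LSF. Because it takes explicit (x, y) coordinates, it works equally well on the 25% of pixels that are red sites.

```python
import numpy as np

def mtf50_from_samples(xs, ys, values, edge_angle_deg, bin_width=0.25):
    """Estimate MTF50 (cycles/pixel) from an arbitrary subset of pixels,
    e.g. only the red sites of a Bayer mosaic.
    1. Project each sample onto the edge normal.
    2. Bin projected samples into a supersampled ESF (0.25-pixel bins).
    3. Differentiate the ESF to obtain the LSF, and apply a window.
    4. FFT the LSF and find the frequency where the MTF drops to 0.5."""
    theta = np.deg2rad(edge_angle_deg)
    normal = np.array([np.cos(theta), np.sin(theta)])  # unit normal to the edge
    d = xs * normal[0] + ys * normal[1]                # signed distance along normal
    bins = np.round((d - d.min()) / bin_width).astype(int)
    counts = np.maximum(np.bincount(bins), 1)          # guard against empty bins
    esf = np.bincount(bins, weights=values) / counts
    lsf = np.diff(esf) * np.hanning(len(esf) - 1)      # window reduces FFT leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(len(lsf), d=bin_width)
    below = np.nonzero(mtf < 0.5)[0]
    return freqs[below[0]] if len(below) else freqs[-1]
```

With 0.25-pixel bins the frequency axis extends to 2 cycles/pixel; a real implementation would also estimate the edge angle from the data instead of taking it as an input.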
The main disadvantage of this approach is that the red and blue filters cover only 25% of the pixels, meaning that effectively I have only 25% of the usual number of intensity observations around each edge. This has the effect of increasing the uncertainty in MTF50 estimates, and I still want to look at ways of improving performance --- maybe a test chart with larger squares will be the best solution. At any rate, here is the MTF surface of the raw red channel, without any demosaicing:
6659088623_1afd4d4cd4_b.jpg

Notice how the meridional MTF map now appears much more symmetrical. The left side still looks a bit low, but then again, I did not take every possible precaution to ensure that the sensor is perfectly parallel to the test chart. Maybe the lens is even slightly decentered. Time will tell.

The sagittal MTF map also looks much more radially symmetrical. Although it is somewhat skewed, it looks as if there is a donut-shaped region of maximum sharpness, something that is certainly possible if the field of this lens is not perfectly flat. The most important feature, though, is that the cross-shaped pattern seems to have disappeared, as we would expect based on the results derived from the synthetic images in the first post.

Next, I extracted the red channel from the in-camera JPEG. There are several important things to take into account with JPEGs, though. The chroma channels are downsampled by a factor of two, meaning that MTF values should actually decrease, since this involves a low-pass filter. Extracting only the red channel of a JPEG involves re-mixing the chroma and luminance channels, and then extracting the red channel from the resulting RGB image. This means that both the blue and green MTF values will be mixed in with the red channel MTF, so if the red channel is actually softer than the other two, then the JPEG process will appear to increase the (relative) MTF50 values somewhat. Anyhow, here is the image:
6659088997_5bd5ae77b3_b.jpg
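The mixing can be made explicit with the standard JFIF conversion (a sketch; `upsample_chroma` is a hypothetical stand-in for the decoder's chroma interpolation). The decoder reconstructs red as R = Y + 1.402 (Cr - 128), and since Y = 0.299 R + 0.587 G + 0.114 B, the full-resolution detail in the reconstructed red channel actually comes mostly from luma, i.e. from all three channels:

```python
import numpy as np

def upsample_chroma(cr_half):
    """Nearest-neighbour upsample of 4:2:0 chroma back to full resolution;
    a crude stand-in for the decoder's interpolation (a low-pass step)."""
    return np.repeat(np.repeat(cr_half, 2, axis=0), 2, axis=1)

def red_from_ycbcr(y, cr):
    """Reconstruct the red channel from JFIF YCbCr. Y carries green and
    blue detail at full resolution, so the recovered red channel mixes in
    sharpness from the other two channels."""
    return y + 1.402 * (cr - 128.0)
```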

The most disturbing thing about these MTF maps is the grid of bumps that appear. I cannot explain their presence at the moment, but I will try to get some answers at some point. We can see that the centre of the meridional MTF map seems broader, and the sagittal map appears quite flat (other than the local bumps). There is only a very slight hint of a horizontal and vertical stripe through the centre --- more testing will be required to see if this is just random coincidence, or a real feature. Anyhow, the in-camera JPEG does not appear to suffer from the distortion we saw with the dcraw AHD demosaicing interpolation, so whatever the camera is using, it seems to be of better quality.

In my next post I will add some more plots, including the other two dcraw demosaicing algorithms, and ACR, and maybe some of the algorithms in RawTherapee.
 
Effects of dcraw demosaicing algorithms on sharpness in real photos, continued

Same process as above, but with different demosaicing interpolation algorithms.

Bilinear interpolation, which really destroys sharpness (dcraw -q 0)
6665499803_a96eca4776_b.jpg


Variable Number of Gradients (VNG) interpolation (dcraw -q 1)
6665499985_1337816574_b.jpg


Patterned Pixel Grouping (PPG) interpolation (dcraw -q 2)
6665538249_553a546920_b.jpg


Lightroom 3.6 default demosaicing interpolation (sharpening set to 0, but it may still have applied some sharpening)
6665500115_d434cc778b_b.jpg


ViewNX2 default demosaicing (pretty sure it applied some sharpening)
6665500279_3ae96a2cc7_b.jpg



Discussion:
dcraw is recommended by LensTip, a Polish review site, mostly because you can be sure that dcraw does not apply any sharpening to the image. LensTip uses Imatest, which can operate on raw Bayer images, but it is not clear whether the actual review data is generated using the raw Bayer images or not. The results above, however, show that all four of the Bayer interpolation methods implemented in dcraw lead to orientation-specific distortion of sharpness.

The synthetic results presented in the first post in this thread give us a rough idea of the magnitude of this distortion, which appears to be less than 10% in the cases tested. This is negligible for real photography. If you are trying to analyse the performance of your lenses, though, you should avoid using the dcraw Bayer interpolation algorithms, because they clearly distort the shape of the MTF50 surface (which should be radially symmetrical).

Other raw converters perform reasonably well on the meridional MTF maps, with both Lightroom 3.6 and ViewNX2 producing reasonably symmetrical MTF maps, although both are plagued by local bumps, the cause of which is still a mystery. In the sagittal direction, both of these converters fare slightly worse than the raw Bayer measurements, with some very slight orientation-specific patterns appearing in the Lightroom images. ViewNX2 shows a clear diagonal cross-shaped distortion, which is undesirable.

For real world photography, the other features of a demosaicing algorithm are probably more important, e.g., the presence of colour Moiré patterns, rather than a 5-10% loss of sharpness in some regions of the image.
 