I have run some experiments with dcraw to investigate the effects of various Bayer demosaicing algorithms on image sharpness.
A very basic Bayer demosaicing algorithm (see http://www.cambridgeincolour.com/tutorials/camera-sensors.htm for a brief introduction) will effectively act as a low-pass (i.e., blurring) filter because it interpolates the missing colours from nearby neighbours. In smooth areas, this is actually beneficial, since it will help to suppress image noise. On sharp edges (or in detailed regions) this will obviously introduce unwanted blurring. Smarter demosaicing algorithms, such as Patterned Pixel Grouping (PPG) or Adaptive Homogeneity-Directed (AHD) interpolation, have crude mechanisms to detect edges. They can then avoid interpolating values across the edges, thus reducing blurring in areas with high detail or sharp edges. Unfortunately, because they use only local information (maybe one or two pixels away), they cannot estimate edge orientation very accurately, and hence appear to introduce varying amounts of blurring, depending on the orientation of edges in the image.
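To illustrate the low-pass behaviour, here is a minimal sketch (not the code used for these experiments) of the most basic form of interpolation: filling in the missing green values on an RGGB mosaic by averaging the four nearest green neighbours. Every interpolated value is a local average, which is exactly why edges get blurred.

```python
import numpy as np

def interpolate_green(bayer):
    """Fill in missing green values on an RGGB Bayer mosaic by
    averaging the four nearest green neighbours (bilinear
    interpolation). Interpolated pixels are local averages,
    so sharp edges are smoothed -- a low-pass filter."""
    g = bayer.astype(float).copy()
    h, w = bayer.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # In an RGGB mosaic, green sites are where (y + x) is odd;
            # at red/blue sites the green value is missing.
            if (y + x) % 2 == 0:
                g[y, x] = (g[y, x - 1] + g[y, x + 1] +
                           g[y - 1, x] + g[y + 1, x]) / 4.0
    return g
```

PPG and AHD differ from this sketch in that they first try to guess the edge direction and then average only along (not across) the edge.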
I set out to investigate this. The first step is to generate synthetic images for each channel (Red, Green, and Blue). These images are generated with a known edge orientation and a known edge sharpness (MTF50 value). The gray scale range of each image has been manipulated so that the black level and the intensity range differ between channels, to match more closely how a real Bayer sensor behaves. A little bit of Gaussian noise was added to each channel. Next, the three channels are combined to form a Bayer image. Each pixel in the Bayer image comes from only one of the three channels, according to a regular pattern, e.g., RGRGRG on even image rows, and GBGBGB on odd rows.
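The mosaicing step itself is straightforward. A minimal sketch (assuming the RGGB layout described above, with full-resolution channel images as input) looks like this:

```python
import numpy as np

def make_bayer(r, g, b):
    """Combine three full-resolution channel images into a single
    Bayer mosaic: RGRGRG on even rows, GBGBGB on odd rows (RGGB).
    Each output pixel is sampled from exactly one channel."""
    bayer = np.empty_like(np.asarray(g, dtype=float))
    bayer[0::2, 0::2] = r[0::2, 0::2]  # red sites on even rows
    bayer[0::2, 1::2] = g[0::2, 1::2]  # green sites on even rows
    bayer[1::2, 0::2] = g[1::2, 0::2]  # green sites on odd rows
    bayer[1::2, 1::2] = b[1::2, 1::2]  # blue sites on odd rows
    return bayer
```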
Here is an example:
The red, green and blue synthetic images were generated with different MTF50 values. In this case, red had edges with MTF50 = 0.15, G = 0.25, and B = 0.35. This is somewhat exaggerated. I did not simulate true chromatic aberration, but chromatic aberration would create an effect similar to different MTF50 values in the different channels: since the simulated image is roughly black rectangles on a white background, lateral chromatic aberration would effectively blur the edges. At any rate, I have observed that my Nikkor 35 mm f/1.8 prime lens is definitely softer in the red channel.
Using a set of these synthetic images with known MTF50 values, I could then measure the MTF50 values with and without Bayer demosaicing. The difference between the known MTF50 value for each edge, and the measured MTF50 value, is then expressed as a percentage (of the known MTF50 value). This was computed over 15 repetitions over edge angles from 3 degrees through 43 degrees (other edge orientations can be reduced to this range through symmetry). The result is a plot like this:
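The error metric plotted below is simply the deviation of the measured MTF50 from the known value, expressed as a percentage of the known value. A one-line sketch:

```python
def mtf50_error_percent(known, measured):
    """Deviation of a measured MTF50 value from the known
    (synthetic) value, as a percentage of the known value.
    Positive means the measurement overestimates sharpness."""
    return 100.0 * (measured - known) / known
```

For example, if the red channel was generated with MTF50 = 0.15 and the measurement comes back as 0.165, the error is +10%.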
This clearly shows a few things:
1. Using the original synthetic red channel image (before any Bayer mosaicing) produces very accurate, unbiased results (blue boxes in the plot). The height of each box represents the width of the middle-most 50% of individual measurements (the interquartile range), i.e., the typical spread of values.
2. Both PPG (black) and AHD (red) start off with a 5% error, which increases as the edge angle increases from 3 through 43 degrees. This upwards trend is the real problem -- a constant 5% error would have been fine. The values are positive, i.e., sharpness was higher than expected. This is because the green and blue channels both have higher MTF50 values, so some of that sharpness is bleeding through to the red channel.
3. A variant of the MTF measurement algorithm, designed to run on the raw Bayer mosaiced image, was run on the red sites only -- see the green boxes. Since this means only 25% of the pixels could be used, this effectively magnifies the effect of noise (remember, the synthetic images had a little bit of Gaussian noise added). This increases the spread of the errors, but the algorithm remains unbiased, regardless of edge orientation.
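Extracting only the red photosites from the mosaic is a simple strided slice; under the RGGB layout assumed earlier, a sketch looks like this:

```python
import numpy as np

def red_sites(bayer):
    """Extract only the red photosites from an RGGB Bayer mosaic:
    every second pixel on every second row, i.e. 25% of the
    pixels, at half the resolution in each dimension."""
    return bayer[0::2, 0::2]
```

With only a quarter of the samples available, each individual MTF measurement is noisier, which is why the green boxes in the plot are taller, even though their centres stay on zero.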
The data is quite clear: both PPG and AHD distort the MTF50 measurements in an edge-dependent manner. These two algorithms are the most sophisticated ones included with dcraw. Other raw converters, such as ACR, may perform differently, and I intend to demonstrate their performance in future posts.
Similar plots of the other channels can be found here:
http://www.flickr.com/photos/73586923@N02/sets/72157628774569345/with/6659013357/
My next post will demonstrate how this affects actual photos, including how the on-camera Bayer demosaicing algorithms fare.