12-27-2010, 01:12 PM
[quote name='Klaus' timestamp='1293434625' post='5220']
Even if we followed a linear approach - the results wouldn't be pure anyway. The micro-lenses as well as the characteristic of the photo-diodes of the sensor "pollute" the results, of course. Ultra-wides don't vignette quite as bad as the data suggests but who cares how a lens behaves without a camera ...
[/quote]
It's not just about linearity - what you're presenting is an unknown nonlinear function of a quantity that would otherwise be valuable and meaningful. Of course the sensor optics affect the results, since any measurement is based on data recorded from the sensor - and that's how it should be.
Problems with your approach:
1) lack of cross-comparability across brands and across review sites
2) dependence of the results on exposure
3) no way to estimate the significance of the vignetting quantitatively. If you reported vignetting on a calibrated EV scale, I could, for example, deduce how much the noise increases when vignetting is corrected in software. As it stands, I have no such possibility. Basically, 1 EV of vignetting means that to bring the corner pixels to the same luminosity as the center, the exposure there has to be doubled. Since this is done in software, all artifacts - noise, compression artifacts, CA, etc. - increase accordingly (see the sketch after this list). Because your scale is an unknown nonlinear function of the true EV scale, there is no way to figure out the magnitude of these increases without considerable additional information about the camera used, the settings used, and where the target's center fell on the luminosity scale.
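To make that point concrete, here is a minimal sketch (Python, with made-up numbers) of how a vignetting figure given in true EV translates directly into the digital gain, and hence the artifact amplification, applied by software correction - something your current scale does not let the reader compute:

[code]
# Minimal sketch, assuming the vignetting is reported in true EV.
# All numeric values are hypothetical, for illustration only.

vignetting_ev = 1.0                  # 1 EV light loss in the corner

# Software "correction" multiplies the corner pixels by this gain
gain = 2.0 ** vignetting_ev          # 1 EV -> x2, 2 EV -> x4, ...

# Whatever artifacts already sit in those pixels (noise, compression
# artifacts, CA fringes) are scaled by the same factor, because no
# extra light was collected - the existing data is simply amplified.
corner_noise_before = 3.5            # hypothetical noise level, DN
corner_noise_after = gain * corner_noise_before

print(f"gain = {gain:.2f}x, corner noise {corner_noise_before} -> {corner_noise_after} DN")
[/code]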
If you measured the true EV value of the vignetting (preferably as a function of distance from the center of the frame), one could calculate everything about its effect on practical results. The numbers you give now cannot be easily interpreted or compared with anything.
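A rough sketch of what I mean by "true EV as a function of distance from the center" - this assumes you have a linearized flat-field frame (raw data with the tone curve removed); the conversion is then just a log2 ratio against the center. The function and its parameters are mine, purely to illustrate the idea:

[code]
import numpy as np

def vignetting_profile_ev(flat_field, n_bins=50):
    """Radial vignetting profile in EV from a *linear* flat-field image.

    flat_field : 2-D array of linear sensor values (tone curve removed).
    Returns bin centers (0..1, fraction of the half-diagonal) and the
    EV loss per bin relative to the image center.
    """
    h, w = flat_field.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(y - cy, x - cx) / np.hypot(cy, cx)   # normalized radius

    # average a small patch around the center as the reference level
    center_val = flat_field[int(cy) - 5:int(cy) + 5,
                            int(cx) - 5:int(cx) + 5].mean()

    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    means = np.array([flat_field.ravel()[idx == i].mean()
                      for i in range(n_bins)])

    ev_loss = np.log2(center_val / means)   # positive = darker than center
    return (bins[:-1] + bins[1:]) / 2.0, ev_loss
[/code]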
In any quantitative image quality analysis, the crucial thing is to present quantities relative to subject contrast. For example, if we have a 1 stop lighting contrast on a portrait, we can look at a table that says 1 EV vignetting and conclude that the vignetting corresponds to a lighting difference of 1 stop. If you apply a higher-contrast tone curve (everybody who cares about their images adjusts the tone curve in post-processing on a per-image basis to optimize its visual appearance), it still remains 1 stop, whereas your numbers would change. Alter the exposure and your numbers vary again, but the real-world result stays at the same 1 stop.
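To illustrate why EV-based numbers are stable while output-pixel-value numbers are not, here is a toy example applying two different tone curves to the same 1 EV difference (the curves are just arbitrary gammas I picked, not any camera's actual rendering):

[code]
import numpy as np

center_linear = 0.40                   # hypothetical linear exposure at the center
corner_linear = center_linear / 2.0    # 1 EV darker - the physical fact

for gamma in (2.2, 3.0):               # two different "contrast" renderings
    center_out = center_linear ** (1.0 / gamma)
    corner_out = corner_linear ** (1.0 / gamma)
    pixel_value_drop = center_out - corner_out        # what a device-dependent scale sees
    ev_drop = np.log2(center_linear / corner_linear)  # what an EV scale sees
    print(f"gamma {gamma}: output-value drop = {pixel_value_drop:.3f}, EV drop = {ev_drop:.1f}")

# The output-value drop changes with the tone curve; the EV drop stays at 1.0.
[/code]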
I would highly recommend consulting someone with a degree in optics, image processing, or physics before designing a laborious experiment that turns out to be meaningless over time. It is a small amendment that could save a lot of work and help reduce the ghastly variability of results across sites and testers that exists today. Provided that you have kept the lighting in your setup constant and run the experiments with exactly the same settings over the years, you can still fix this by calibrating your scale for each camera with a gray patch series of known densities (which you can measure) and then correcting the published reports. I understand that complete cross-comparability across sites and brands is not possible for some quantities, but for vignetting there is no reason why it can't be done correctly.
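And roughly what I mean by fixing the existing data with a gray patch series: measure the patches' true densities in EV steps, shoot them with each camera under your standard test settings, fit the recorded values against the known EV values, and use that fit to map your historical readings back onto a true EV scale. The function and patch values below are hypothetical, only to show the shape of the procedure:

[code]
import numpy as np

def build_ev_calibration(recorded_values, known_ev):
    """Return a function mapping recorded pixel values -> true EV.

    recorded_values : pixel values measured from the gray patch series,
                      shot with the camera and settings used in the tests.
    known_ev        : independently measured densities of the same patches, in EV.
    """
    order = np.argsort(recorded_values)
    rv = np.asarray(recorded_values, float)[order]
    ev = np.asarray(known_ev, float)[order]
    return lambda pixel_value: np.interp(pixel_value, rv, ev)

# Hypothetical patch series: recorded values from one camera + measured densities
to_ev = build_ev_calibration(
    recorded_values=[18, 35, 66, 118, 197, 255],
    known_ev=[-5, -4, -3, -2, -1, 0])

# Convert an old-style reading (center vs. corner pixel values) to true EV
center, corner = 197, 118
true_vignetting_ev = to_ev(center) - to_ev(corner)
print(f"true vignetting = {true_vignetting_ev:.2f} EV")
[/code]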