11-23-2017, 08:17 AM
There is no solution to this, as long as any kind of camera is involved. When a digital camera is used for MTF tests, the test becomes a system test that includes characteristics of both the test camera (or its sensor, to be more precise) AND the raw converter used. There is no way around this, no matter whether you apply default, adjusted or (as they claim) no sharpening. The effect is the same, you just scale the results in the end.
We tried, BC. In fact, we see the effects every time we get a new test camera, since that camera is usually not compatible with any raw converter we have used in the past, sometimes has a new sensor technology or comes without an AA filter. If you use default sharpening, the MTF results usually go through the roof, with peak values way above the maximum resolution of the camera. So, in other words: your camera or your raw converter usually oversharpens quite a bit already.
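To put some numbers on the scaling effect: here is a tiny back-of-the-envelope sketch (completely made-up Gaussian MTF curves and a generic unsharp-mask-style boost, not our actual measurement chain). The very same hypothetical lens delivers quite different MTF50 figures depending only on the sharpening applied, and the ceiling oversharpened figures run into is the sensor's Nyquist limit (roughly 4000 LW/PH for a 24 MP APS-C sensor).

```python
# Illustration only, not our measurement code: the measured MTF is roughly the
# product of the lens MTF, the sensor/AA-filter MTF and the transfer function of
# whatever sharpening the raw converter applies.
import numpy as np

f = np.linspace(0.0, 4500.0, 4501)           # spatial frequency in LW/PH
mtf_lens   = np.exp(-(f / 3500.0) ** 2)      # made-up Gaussian lens MTF
mtf_sensor = np.exp(-(f / 3000.0) ** 2)      # made-up sensor + AA-filter MTF
nyquist    = 4000.0                          # ~24 MP APS-C: 4000 px height -> ~4000 LW/PH

def mtf50(freq, mtf):
    """First frequency at which the MTF drops below 0.5."""
    return freq[np.argmax(mtf < 0.5)]

for amount in (0.0, 0.5, 1.5):               # 0.0 = "no sharpening"
    usm = 1.0 + amount * (1.0 - np.exp(-(f / 2000.0) ** 2))   # USM-like boost, 1.0 at DC
    system = mtf_lens * mtf_sensor * usm
    print(f"sharpening amount {amount}: MTF50 ~ {mtf50(f, system):.0f} LW/PH "
          f"(sensor Nyquist ~ {nyquist:.0f})")
```

The absolute numbers are meaningless, of course; the point is that the same lens/sensor combination ends up with very different figures depending on nothing but the sharpening step.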
We aim much lower with our sharpening. For the D7200 reviews, to give you an idea of the amount of work that is required, I went through no fewer than 15 iterations to find the best parameters. As Klaus has pointed out several times in the past, we do so only to counterbalance the softening that results from the Bayer pattern interpolation, which is itself already an influence of the camera's sensor.
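For those wondering what "counterbalancing" means in practice: conceptually it is nothing more exotic than a mild unsharp mask. The sketch below uses arbitrary radius/amount values purely for illustration; it is not the converter or the parameter set actually used for the reviews.

```python
# Generic unsharp-mask sketch (illustration only): a small radius and a modest
# amount, aimed at restoring the acutance lost to Bayer demosaicing rather than
# boosting the MTF figures beyond what the lens/sensor actually deliver.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=0.7, amount=0.5):
    """sharpened = img + amount * (img - gaussian_blur(img, radius))"""
    img = np.asarray(img, dtype=np.float64)
    blurred = gaussian_filter(img, sigma=radius)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# Tiny synthetic check: a soft vertical edge gets steeper after the mild USM.
edge = gaussian_filter(np.tile(np.repeat([0.2, 0.8], 32), (64, 1)), sigma=1.5)
print("before:", edge[32, 30:34].round(3))
print("after: ", unsharp_mask(edge)[32, 30:34].round(3))
```

In the real reviews the parameters are tuned per camera (hence the 15 iterations for the D7200), but the principle stays the same: restore what the demosaicing smeared, not more.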
Even if you try to use the same parameters for all reviews (like "no sharpening"), you will not get comparable results. A camera without an AA filter, for example, will deliver higher results than one with. And with different converter releases (which you HAVE to use, unless you want to stick to completely outdated cameras), algorithms change, especially the demosaicing algorithms, and that can have all kinds of effects. Klaus had to learn this quite a while ago when Adobe released ACR 3 (I think) and MTF results suddenly dropped. Another example: when calibrating the D7200, I was surprised to get CA values that are lower (!) than on the D7000. It turned out that this was because of the RAW converter (the same one I used for the D7000 reviews, just a newer version), which consistently gives lower CA results (with CA correction switched off, of course).
So, no matter what approach you follow: as long as a camera sensor is involved, you're doing system tests, not pure lens tests. And because of the described influences, the results are not directly comparable, no matter what set of parameters you apply. Your only way out of this is the Cicala way, an optical bench. But then you get results that don't tell you anything about how a given lens performs on a given camera, or, more generally, at a certain level of sensor resolution.
Regarding the ratings: to be honest, I don't like them (and I'm not sure if Klaus' opinion is any different). In fact, there were no ratings at all for the first few years, but there were so many requests to introduce some kind of rating that we couldn't ignore them. I don't like the idea because ratings oversimplify things and in some cases, for example the Sigma 14 Art or the Nikkor AF-D 85/1.4, can't do a lens full justice. That's why we introduced field ratings, at least where there is a significant difference between the objective performance figures and the (usual) intended usage of a lens.
I also don't like the ratings because I often see our reviews reduced to them. The Sigma is considered a "bad lens" because it doesn't have many stars. That's not an issue of the star rating, though; it's an issue of those readers who reduce their review reading to just looking at the stars and are not willing to invest the time to read our 2- or 3-page (so, not really that long) reviews completely and thus get a much better overview of the performance of a tested lens. A star rating can't give you that if that's all you look at. But it can help to put things into perspective (for those who don't narrow their perspective and view, intentionally or unintentionally).
Editor
opticallimits.com