next PZ lens test report - Sigma 14mm f/1.8 DG HSM ART
#31
Quote:It again is clear (to me at least) that the way you guys approach the MTF tests with the sharpening makes for problematic results that no one can really evaluate.

The sharpening exaggerates sharpness and will not touch less sharp results, exaggerating the difference between corners and center, and also skewing the results twice when comparing, for instance, how the Nikkor 14-24mm f/2.8 did on 24 MP.

I also wonder if the R-ness of the 5DS R brings its own part of the problem here: fake sharpness from the lack of an AA filter skewing results upwards until the lens itself acts as the AA filter (in the corners).

 

Something worth investigating?

 

The 50 MP versus 21 MP results do show part of the sharpening issue(s): the 50 MP extreme corners get lower results than the 21 MP extreme corners.
 

In essence - you want us to process images in a way that no user would do out there ... sorry - won't happen.

If you think that the charts would "look" any different - hardly - we would simply adjust the scale accordingly.

Yes, the numbers would be different but the numbers ... well ...

 

I can only repeat that we tune the reference images to a neutral sharpness which Imatest identifies as neither over- nor undersharpened. That amount is less than the default sharpening.
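
(For reference: a linear unsharp mask without a threshold has a simple frequency-domain description - a textbook relation, not necessarily the exact pipeline used here. With USM amount $a$ and the Gaussian response $G_r(f)$ of the mask radius,

$$\mathrm{MTF}_{\text{measured}}(f) = \mathrm{MTF}_{\text{lens+sensor}}(f)\,\bigl[1 + a\,(1 - G_r(f))\bigr].$$

A "neutral" amount lifts the mid frequencies just enough to offset demosaicing softness; a larger amount can push the product above 1 at mid frequencies, i.e. visible edge overshoot - the kind of bump an over-/undersharpening check looks for.)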
#32
Quote:In essence - you want us to process images in a way that no user would do out there ... sorry - won't happen.

If you think that the charts would "look" any different - hardly - we would simply adjust the scale accordingly.

Yes, the numbers would be different but the numbers ... well ...

 

I can only repeat that we tune the reference images to a neutral sharpness which Imatest identifies as neither over- nor undersharpened. That amount is less than the default sharpening.
No, I want the lenses to be tested, not some specific sensor, a specific idea of post-processing, or a choice of RAW converter.

In the case of the Canon 11-24mm mentioned in this thread, a user would convert the image with DPP4 and its lens profile, getting much better corner sharpness than the USM sharpening in your MTF post-processing delivers.

The argument that you sharpen because a user would do that does not fly. 

 

Would the chart look different? Yes, of course it would. As JoJu and I have pointed out, the 21 MP and 50 MP Sigma 14mm f/1.8 "charts" do show the issue (the 21 MP test gets a higher MTF score than the 50 MP test - same lens...). Without the sharpening, the 21 MP wide-open corner bar would be lower than it is now, and the difference between sharp figures and less sharp figures would be far less pronounced in the 50 MP charts. You would see less of a hump in the charts.

 

Run a test for this Sigma without sharpening and put it next to the results you get now, just to get an idea of how the sharpening skews the results twice compared with the lower-resolution tests.

 

And the Imatest software can't distinguish between fake ("enhanced") sharpness from pixel edges and actual sharpness anyway, can it? That poses an issue that is not under your control.
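
To make this concrete, here is a minimal 1-D sketch (not OL's workflow - the blur sigmas, the USM radius/amount and the simple edge-based MTF50 estimator are all assumptions of mine, using numpy/scipy) showing how the same unsharp mask lifts an already-sharp edge far more than a soft one:

Code:
import numpy as np
from scipy.ndimage import gaussian_filter1d

def mtf50(edge):
    """Estimate MTF50 (cycles/pixel) from a 1-D edge profile."""
    lsf = np.gradient(edge) * np.hanning(edge.size)   # line spread function, windowed
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                      # normalise to DC
    freqs = np.fft.rfftfreq(edge.size, d=1.0)
    i = int(np.argmax(mtf < 0.5))                      # first frequency bin below 0.5
    f1, f2, m1, m2 = freqs[i - 1], freqs[i], mtf[i - 1], mtf[i]
    return f1 + (0.5 - m1) * (f2 - f1) / (m2 - m1)     # linear interpolation

def usm(profile, radius=1.0, amount=0.75):
    """Linear unsharp mask: x + amount * (x - gaussian_blur(x))."""
    return profile + amount * (profile - gaussian_filter1d(profile, radius))

step = (np.arange(257) > 128).astype(float)            # ideal edge
centre = gaussian_filter1d(step, 1.0)                  # mildly blurred "centre" edge
corner = gaussian_filter1d(step, 3.0)                  # heavily blurred "corner" edge

for name, edge in (("centre", centre), ("corner", corner)):
    print(f"{name}: MTF50 raw = {mtf50(edge):.3f}, "
          f"after USM = {mtf50(usm(edge)):.3f} cycles/pixel")

In this toy model the already-sharp "centre" edge gains considerably more MTF50 from the same USM settings than the soft "corner" edge does, so the centre-to-corner spread widens; how strong that effect is at the weaker, tuned sharpening level used here is exactly the open question.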
#33
Changing the preparation method after years of testing would mean retesting; otherwise the results would no longer be as comparable as they are today. I don't see the resources for that at a two-person test site. Usually I check lensrentals or lenscore if I want to know about MTF charts - with the advantage of lensrentals' batch testing. But honestly, I didn't care about test results for my last dozen purchases.

 

Once I rented a lens (to find out that the 85/1.4 Art is too massive); other times I visited a weekend event by Sigma or Fuji to try things out. Often I simply trusted Sigma or Fuji (and occasionally ended up with a disappointment). To me, testing falls short of daily practice. What good is excellent center sharpness if, a while after the purchase, the zoom ring starts to wobble or I find out the VR has some weak spots?

 

Last night I did some star shots with the 14/1.8. In the corners wide open I saw some butterfly-shaped lights. At 100% they look awful; in an A3 print they would hardly be noticeable. In one shot I thought I had focused at infinity - and trees some 3-4 meters away were pretty okay.

 

We are super spoilt by lenses which were literally impossible to produce only a decade ago.

#34
Lenstip doesn't use sharpening... within their site, the resolution scores take on meaning relative to the results of other lenses. I very much like their method.

 

Here sharpening is used, just as I use it myself (typically opening at 75%). OL's graphics also become familiar with use, in spite of the sharpening; the results on the 50 MP sensor are a struggle, though... so apart from just upsetting the site's apple cart, and with two sensor resolutions already tested...

 

  I would leave well alone!..

    

 

 

...besides, Lenstip's sample images almost all look soft; the top lenses, well, just less soft.

#35
There is no solution to this as long as any kind of camera is involved. Using a digital camera for MTF tests, the test becomes a system test that includes characteristics of both the test camera (or its sensor, to be more precise) AND the raw converter used. There is no way around this, no matter whether you apply default, adjusted or (as they claim) no sharpening. The effect is the same; you just scale the results in the end.

We tried, BC. In fact, we see the effects every time we get a new test camera, since that camera is usually not supported by any raw converter version we have used in the past, sometimes has a new sensor technology, or comes without an AA filter. If you use default sharpening, the MTF results usually go through the roof, with peak values way above the maximum resolution of the camera. In other words: your camera or your raw converter usually oversharpens quite a bit already.
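
(For context, a back-of-the-envelope figure of mine, not from the reviews: in line widths per picture height - the unit typically used for MTF50 charts - the theoretical ceiling of a sensor is its Nyquist frequency,

$$f_{\mathrm{Nyq}} = 0.5~\text{cycles/pixel} \;\Longleftrightarrow\; N_{\mathrm{Nyq}} = H~\text{LW/PH},$$

where $H$ is the picture height in pixels - roughly 5792 LW/PH for a 50 MP body such as the 5DS R with its 8688 × 5792 pixels. "Way above the maximum resolution of the camera" means measured values beyond that limit.)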

We aim much lower with our sharpening. To give you an idea of the amount of work required: for the D7200 reviews, I went through no fewer than 15 iterations to find the best parameters. As Klaus has pointed out several times in the past, we do so only to counterbalance the softening that results from the Bayer pattern interpolation - which is itself an influence of the camera's sensor.

Even if you try to use the same parameters for all reviews (like "no sharpening"), you will not get comparable results. A camera without an AA filter, for example, will deliver higher results than one with. And with different converter releases (which you HAVE to use, unless you want to stick to completely outdated cameras), algorithms change - especially the demosaicing algorithms - and that can have all kinds of effects. Klaus had to learn this quite a while ago when Adobe released ACR 3 (I think) and MTF results suddenly dropped. Another example: when calibrating the D7200 I was surprised to get CA values that are lower (!) than on the D7000. It turned out that this was because of the RAW converter (the same one I used for the D7000 reviews, just a newer version), which consistently gives lower CA results (with CA correction switched off, of course).

So, no matter which approach you follow: as long as a camera sensor is involved, you're doing system tests, not pure lens tests. And because of the influences described above, the results are not directly comparable, no matter what set of parameters you apply. Your only way out of this is the Cicala way, an optical bench. But then you get results that don't tell you anything about how a given lens performs on a given camera - or, more generically, at a certain level of sensor resolution.
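
The "system test" point can be written as a simple cascade - a standard idealisation that ignores non-linear steps such as demosaicing and thresholded sharpening:

$$\mathrm{MTF}_{\text{system}}(f) \approx \mathrm{MTF}_{\text{lens}}(f)\cdot\mathrm{MTF}_{\text{AA filter}}(f)\cdot\mathrm{MTF}_{\text{pixel}}(f)\cdot\mathrm{MTF}_{\text{converter}}(f).$$

A chart-based test can only ever measure the left-hand side; an optical bench isolates the lens term, at the price of saying nothing about a specific sensor resolution.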

Regarding the ratings: to be honest, I don't like them (and I'm not sure Klaus' opinion is any different). In fact, there were no ratings for the first few years, but there were so many requests to introduce some kind of rating that we couldn't ignore them. I don't like the idea because ratings oversimplify things and in some cases - for example the Sigma 14 Art or the Nikkor AF-D 85/1.4 - can't do a lens full justice. That's why we introduced field ratings, at least when there is a significant difference between the objective performance figures and the (usual) intended usage of a lens.
I don't like the ratings because I often see our reviews reduced to them. The Sigma is considered a "bad lens" because it doesn't have many stars. That's not an issue of the star rating, though; it's an issue of those readers who reduce their review reading to looking at the stars and are not willing to invest the time to read our 2- or 3-page (so, not really that long) reviews completely, which would give them a much better overview of the performance of a tested lens. A star rating can't give you that if it's all you look at. But it can help to put things into perspective (for those who don't narrow their perspective and view, intentionally or unintentionally).
Editor
opticallimits.com

#36
To my surprise, Capture One had a D850 in their list of supported cameras before I had the camera - and before it was available. But usually you're right, especially with exotic stuff. Or Fujifilm...

 

But no matter how many iterations you go through, the values on another system (body, lens, converter) will change again. Basically I would support the idea of no sharpening at all (what about noise reduction? Capture One uses a pretty strong one by default), and I suspect some companies are also fiddling with the RAW data before they write it to the SD card.

 

About the ratings: if there's a transparent system for ratings, why not? In fact, it is quicker than reading. At the same time I sometimes miss aspects which are important to me - but which do not influence the ratings. And when Klaus mentions that he would rate the Sigma even lower but went higher in the end because of the nice bokeh, then I conclude: the ratings are based more on feelings than on facts (which is normal, I'd say, as soon as one puts subjective points like handling or finish into the rating). Then "state of the art" also appears to be an issue. Years ago, Klaus said, this lens would have been outstanding - so why not today? I'd like to read about the reasons.

 

I see it from a reader's side: OL/PZ doesn't offer comparisons between lenses (like "how good are the corners of this lens at this focal length compared to another lens"). Reading through lens reviews and seeing some remarks about bad vignetting (don't care) or bad flare behaviour (care more, but don't agree with Klaus' findings) leaves me a bit clueless. If lenscore rates a lens, they use one sensor for all, so the results of the lenses are comparable, even if the system has other issues. Lensrentals' MTF curves also allow comparisons - but MTF charts are not the only aspect.

 

The inner design of the lens is not an issue for ratings - and not important to most users as long as it's working.

 

I don't care much about a 1/2 or 1 star difference. But I do care about a 1 1/2 star difference between a zoom that gets 4 stars while flaring like mad, and a prime with 1 1/3 stops more speed and AF that is fully adjustable at 4 different distances - this matters because sharp or not sharp is more important than 10 lines more resolution under lab conditions. I don't want to start a big debate, but that lens is not worse at 14mm f/2.8 than the 4-star Nikkor is. At least not (translating stars into percent is ridiculous, I know) nearly 40% worse (4 stars = 100%, 2.5 stars = 62.5%).

 

Regular readers also know that you're testing only one lens. Measuring 10 copies and getting sample variations also doesn't help much to verify my own copy.

 

It's all about "the best lens for the cash" - and it's not static, so the idea of a static rating itself is highly questionable and, in my eyes, a systematic flaw. That's why I basically don't care. As already said, I do not mind how you rate my lens, which I bought without waiting for anyone's verdict. Everything I ever bought has strong sides and weak ones. Use the former, avoid the latter. That was true decades ago and hasn't changed, although many people think the best lens is the best in all aspects - as if Usain Bolt were also a fast swimmer and a quick cyclist...

#37
Quote:To my surprise, Capture One had a D850 in their list of supported cameras before I had the camera - and before it was available. But usually you're right, especially with exotic stuff. Or Fujifilm...
The RAW converters I use for the D3x and D7000 reviews were released in 2011... I don't have high hopes that they support the D850 ;-)
 
Quote:But no matter how many iterations you go through, the values on another system (body, lens, converter) will change again.
That was the intended message ;-)
 
Quote:I suspect some companies are also fiddling with the RAW data before they write it to the SD card.
Sure. Leica does, for example. Not necessarily with any influence on MTF values, though.
 
Quote:About the ratings: if there's a transparent system for ratings, why not? In fact, it is quicker than reading. At the same time I sometimes miss aspects which are important to me - but which do not influence the ratings.
And that's the issue, as explained above already: there is no way to rate a lens and at the same time match everyone's priorities. That's why we write 2 or 3 pages of text and include all the graphs. If we honestly thought that a simple rating would do, we could summarize the whole site on a single page with just the lens names and ratings... something that has been requested very often already and that we have so far declined, for the same reasons mentioned when discussing review ratings in general.
 
Quote:And when Klaus mentions that he would rate the Sigma even lower but went higher in the end because of the nice bokeh, then I conclude: the ratings are based more on feelings than on facts (which is normal, I'd say, as soon as one puts subjective points like handling or finish into the rating).
Handling and finish have no influence on the optical rating. Numbers do, for the most part, and then some personal impressions or priorities - I can't rule that out. We're humans, in the end, and shutterbugs and gearheads on top. We try to be as neutral as possible, though.
 
Quote:I see it from a reader's side: OL/PZ doesn't offer comparisons between lenses (like "how good are the corners of this lens at this focal length compared to another lens").
I'm baffled, seriously. What else are the MTF charts there for?
 
Quote:If lenscore rates a lens, they use one sensor for all, so the results of the lenses are comparable, even if the system has other issues.
And what does this rating tell you if you want to know how the lens performs on a sensor that is very different?
 
Quote:I don't care much about a 1/2 or 1 star difference. But I do care about a 1 1/2 star difference between a zoom that gets 4 stars while flaring like mad, and a prime with 1 1/3 stops more speed and AF that is fully adjustable at 4 different distances - this matters because sharp or not sharp is more important than 10 lines more resolution under lab conditions. I don't want to start a big debate, but that lens is not worse at 14mm f/2.8 than the 4-star Nikkor is. At least not (translating stars into percent is ridiculous, I know) nearly 40% worse (4 stars = 100%, 2.5 stars = 62.5%).
I'm not sure why you keep coming back to this single point. Each lens is tested and judged across the whole specification set it offers. The Nikkor starts at f/2.8, and it is fairly great at its largest aperture already.

The Sigma starts with a bigger aperture and is thus rated across a larger aperture range. At f/1.8, it shows weaknesses at the borders and corners - which, by the way, I think can even be seen in the full-res sample you posted (the lady at the lake). I don't find the borders particularly impressive there and think they match our findings quite well.

If a lens offers a large aperture, we expect it to be great across the frame at that setting to receive a great rating. When it comes to MTF, it's as simple as that.

Besides: looking at the available data, I still have doubts that the Sigma matches the Nikkor at 14mm f/2.8.
Editor
opticallimits.com

#38
Quote:Regular readers also know that you're testing only one lens. Measuring 10 copies and getting sample variations also doesn't help much to verify my own copy.
Regular readers know that quite often we don't ;-) At least for my reviews there is often more than one review unit. It is not necessarily mentioned in the text, though.
 
Quote:It's all about "the best lens for the cash" - and it's not static, so the idea of a static rating itself is highly questionable
Right. Just what I said :-)
 
Quote:As already said, I do not mind how you rate my lens
To be honest, seeing how much effort you put into arguing against the rating in this particular case, one might get a slightly different impression ;-)
Editor
opticallimits.com

#39
https://photos.smugmug.com/Zeiss-FE-35mm...864-X3.jpg

 

The horizon seems tilted :-)

#40
Naa, that's just how horizons are down under ;-)
Editor
opticallimits.com
