To avoid hiding it at the bottom, I'll address Apedra first.
Quote:
if I well understood it is the same methodology adopted in the photozone tests
Interferometry is not performed by Photozone. Interferometry is a remarkably, painfully slow process when you evaluate multiple positions within the image field. To give you an idea, my research group has run a single trial test to evaluate our procedure for testing a particular freeform optical system on an interferometer. The procedure is fully automated, and it took two hours for that one position. The test re-runs itself if it gets bad data, but even with good data it took two hours. We will ultimately test 81 field points, and the system must be precisely moved by hand between the points, so it will take north of 200, potentially more like 300, hours to evaluate the entire system. That is a much more complex system than a photographic lens, but interferometric testing would still be far more time-consuming than MTF50 testing such as Imatest, which is what PZ uses.
It is also very expensive: for non-flat surfaces you must have a reference lens or surface matched to the focal length you are testing. It is possible to test non-flat systems with a 'stock' reference flat, but you must know a great deal about the system being tested for that to be accurate.
Quote:
how do you judge the three lenses (tamron 24-70 f2.8, sigma 24-105 f4 and nikon 24-120) in term of : sharpness, distortion, chromatics aberrations?
The Sigma 24-105 is tested on the 5D2, which has two fewer megapixels. If we compensate for that by scaling line pairs per picture height by (24/22), we get a very close approximation of what the Sigma would do on the D3x used by Photozone. This ignores differences in the resolving characteristics of the individual sensors, but none of these lenses are so good that the sensor provides the greatest contribution to the overall performance.
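To make that scaling concrete, here is a minimal sketch of the compensation. The (24/22) factor comes from the paragraph above; the sample lp/ph figures are invented purely for illustration:

```python
# Hedged sketch: normalizing MTF50 results (lp/ph) measured on the 5D2
# to the D3x's taller sensor. The (24/22) factor is the one used above;
# the sample lp/ph numbers below are made up for illustration only.
def normalize_lpph(lpph_5d2, factor=24 / 22):
    """Scale a line-pairs-per-picture-height figure by the sensor factor."""
    return lpph_5d2 * factor

sample_results = {"24mm center": 2100, "70mm center": 1950}  # hypothetical
normalized = {k: round(normalize_lpph(v)) for k, v in sample_results.items()}
```

This is only a first-order approximation, as noted above: it corrects for pixel count but not for the two sensors' differing resolving characteristics.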
With that in mind, the center of the frame of the Sigma is consistently the best at all focal lengths. The Tamron would win the off-center competition, though the Sigma does well and is competitive there. I would mention that the very extreme edge of the picture from the Sigma - perhaps a 50x50 px patch in each corner - is ruled by astronomical astigmatism and is, frankly, shit. But it is a very small area, so it may not affect your usage. The Nikon does the worst in terms of resolution away from the center.
Regarding distortion, the Tamron is the best corrected overall; the Sigma takes second, and the Nikon is worst.
When it comes to chromatic aberrations the Tamron does somewhat better than the Sigma, but neither is offensive. Nikon's 24-120 is very poorly corrected in that regard.
With that said, the Nikon would be the most reliable lens of the three, mechanically.
Sigma has made great strides since the death of their CEO when it comes to producing better lens designs, but their use of premium materials in their new "A" and "S" lenses does not assure premium quality. I would be very surprised if their lenses were as well designed mechanically as Canon's and Nikon's. It's possible, but I have enormous doubts.
That said, I must admit I have a bit of a personal bias or reservation against Sigma due to years of scummy tactics in the way of design theft. I haven't misrepresented the performance of their lens, but they stole their image stabilizer design from Nikon, they stole their autofocus motor design from Canon, and they stole their design method for the 35mm and 50mm "A" lenses from Zeiss.
Out of the three I would get the Tamron 24-70, but an argument could easily be made for the Sigma's greater zoom range. Of course, one can also be made for the Tamron's larger maximum aperture.
re: JoJu
Quote:
One can compute an already computed lens, but what does that say about it's manufacturing quality
Interferometric testing can only be done on a real, physical lens. Here is a poster on the subject created for presentation to freshmen optics students:
https://dl.dropboxusercontent.com/u/3928...Poster.pdf
Quote:
How does the analysis program gets it's formulas and coefficients?
Here is a page from slide notes on aberrations:
http://i.imgur.com/HQzqeU1.jpg
The equation/power series is for defocus in millimeters. The merit function of a lens is a similar function, but contains weighted terms for various aberrations and field positions (think center/mid/corner; field = distance from center).
Here is a page from slide notes on merit function terms:
http://i.imgur.com/9YrCrOy.jpg
Note that the aberrations have different coefficients. Third-order spherical, which is the dominant aberration in the RIM plot I posted above, is W040, for example. These terms sum to a merit function; an ideal lens has a merit function of 0, but that is an unattainable result.
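As a rough illustration of how those terms combine, here is a toy merit function built as a weighted sum of squared aberration coefficients. The W-names follow the notation above, but every value and weight here is invented, and real design software uses far richer operands than this:

```python
# Hedged sketch of a toy merit function: a weighted sum of squared wavefront
# aberration coefficients (W020 defocus, W040 third-order spherical, W222
# astigmatism) evaluated at a few field positions. All values are invented.
def merit(coeffs_by_field, weights):
    """Sum weighted squared aberration coefficients over all field points."""
    total = 0.0
    for field, coeffs in coeffs_by_field.items():
        for name, value in coeffs.items():
            total += weights.get(name, 1.0) * value ** 2
    return total

fields = {
    "center": {"W040": 0.05, "W020": 0.00},
    "corner": {"W040": 0.05, "W020": 0.10, "W222": 0.08},
}
weights = {"W040": 2.0, "W020": 1.0, "W222": 1.5}
score = merit(fields, weights)  # 0 would be the (unattainable) ideal
```

Optimization then amounts to adjusting the lens prescription until this number is as small as the design constraints allow.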
Back to rim plots for a moment...
An ideal RIM plot would be a straight line along the x-axis. RIM plots graph defocus versus position in the pupil of the lens, essentially defocus versus distance from the center of the lens. You generate a series of RIM plots at different field positions (distances from the center of the image) and analyze them to guide optimization when designing a lens. The goal is to reduce aberrations, and different aberrations take different forms. Here is a sample of the balancing of a few aberrations, again from slide notes:
http://i.imgur.com/nufW1oL.jpg
When you see that you have spherical aberration, there are measures that can be taken to reduce it; the same goes for coma, astigmatism, etc.
For example, the length of the central airspace in double-gauss lenses primarily affects the astigmatism and vignetting without changing spherical, coma, field curvature, polychromatic aberrations, or distortion much.
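To sketch what a RIM plot actually graphs, here is a toy one-dimensional model: the wavefront is a short power series in the normalized pupil coordinate, and the transverse ray error is proportional to its derivative. The coefficients are invented, and this is only a caricature of real ray-trace data:

```python
# Hedged sketch: a transverse ray intercept (RIM) curve modeled from a toy
# wavefront W(rho) = W020*rho^2 + W040*rho^4 (defocus + third-order
# spherical). The transverse error is ~ dW/drho. Coefficients are invented.
def ray_error(rho, W020=0.0, W040=0.05):
    """Transverse error ~ dW/drho for defocus plus third-order spherical."""
    return 2 * W020 * rho + 4 * W040 * rho ** 3

pupil = [i / 10 for i in range(-10, 11)]  # normalized pupil coordinate, -1..1
# Pure spherical aberration: a cubic S-curve through the origin.
curve = [ray_error(r) for r in pupil]
# Adding some defocus of the opposite sign partially balances spherical,
# flattening the curve toward the pupil edge (smaller error at rho = 1).
balanced = [ray_error(r, W020=-0.075) for r in pupil]
```

The "balancing" shown in the slide notes above is exactly this kind of trade: one aberration term is deliberately introduced to cancel part of another across the pupil.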
You could also optimize from synthetic interferograms generated by the computer; in fact, the interferograms I posted above (they are red...) are synthetic - they are the ideal result for the project my research group is working on right now. When we complete construction of the system we are building, we should get precisely that result.
The trouble with interferograms is that while aberrations have unique forms, they can quickly become so convoluted to the human eye that one has no idea which aberrations are present.
So, back to the quote...
Quote:
How does the analysis program gets it's formulas and coefficients?
A computer, luckily, is very skilled at untangling complex mathematical situations. The computer running the software for the interferometer takes the interference data, which gives defocus in terms of waves (multiples of ~630 nm). When several interferograms are taken, that defocus can be used to produce a map of defocus across the field, to which functions for all of the aberrations may be fit. The coefficients are determined by the best fit.
You may wonder, "If it's only a best fit it must not be very accurate," but historically interferometry has been remarkably accurate, and in the experiments preceding the one my research group is conducting right now, the real interferogram came within 2% of the predicted theory. And that is for a complex freeform situation; in a symmetrical system such as a photographic lens I would expect you could get within a tenth of a percent of the theoretical result.
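A minimal sketch of that fitting step, assuming a toy rotationally symmetric basis (defocus and third-order spherical terms) and noiseless synthetic data; real software fits a full two-dimensional Zernike-style basis to measured interferograms:

```python
# Hedged sketch: recovering aberration coefficients from wavefront data by
# linear least squares. The basis here is a toy pair of rotationally
# symmetric terms (rho^2 defocus, rho^4 spherical); the "measured" OPD is
# synthetic and noiseless, so the fit recovers the coefficients exactly.
import numpy as np

rho = np.linspace(0, 1, 50)                    # normalized pupil radius
true_W020, true_W040 = 0.30, -0.12             # "unknown" coefficients, waves
opd = true_W020 * rho**2 + true_W040 * rho**4  # synthetic measured OPD

A = np.column_stack([rho**2, rho**4])          # design matrix of basis terms
coeffs, *_ = np.linalg.lstsq(A, opd, rcond=None)
W020_fit, W040_fit = coeffs                    # best-fit coefficients
```

With real, noisy interferograms the fit is no longer exact, but as noted above, the residuals in practice can be remarkably small.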
Quote:
also can read the theory of an optical bench (also not to your taste, I guess ) but I haven't seen one in the flesh nor tried to work with one, beware.
There are many, many, many different things that can be referred to as an "optical bench" - I suspect you mean LensRentals' fancy tool from Optikos? I've actually spoken quite a bit with the lead engineer at Optikos who was very involved in the design of that machine; it's a cool thing.
I have no qualms with the optical bench LR has, because it is a 'pure' form of testing. In a nutshell, a tiny glass plate or other test chart printed at an absurd dpi is placed behind the lens. A very low-divergence collimated light source illuminates the chart, which is imaged backwards by the lens. A set of three diffraction-limited telecentric lenses re-images a small portion of that at each of three field angles onto sensors, and the result is used to compute MTF. Alternatively, a tiny, tiny hole can be imaged to see the point spread function of the lens (which the machine likely can't compute MTF from; it is possible, I suppose, but I was offered a job at Optikos if I could figure out how to get the MTF out of the PSF, so I do not think anyone has done it). In essence, their machine is three of these in one:
http://www.optikos.com/products/lenscheck-vis/
There are no confounding optics in this test which can play unfairly to specific weaknesses in lenses, only a physical limit to the resolution of the chart being imaged. I'm okay with that; because it is impossible to adjust defocus specifically in this case, eventually you will sample the dots used to print the chart, but that does not impact the performance of the lens other than hard-limiting resolving power.
Quote:
I guess (!) they use a big sensor and enlarge the picture given by the lens, sort of the way its' done by speed boosters on µ4/3 - and there the speed booster doesn't take away image quality.
The speed booster does still have aberrations; however, because the field is being shrunk by a fairly well-corrected lens, the defocus/aberrations are all reduced in magnitude. The speed booster is also a super-fast lens and is corrected for ideal resolution (untamed field curvature and distortion were traded to correct spherical, coma, and astigmatism) at the expense of less benchmark-visible image quality. A performance evaluation done with a speed booster behind the lens still only provides a generalization of the performance of the lens.
Quote:
I just ask what's the point of spending lots of money in it and go public with the results?
What's the point of many things? It is a cool experiment, but it is still quasi-systematically flawed. There is error in the measurement system, but because error can be positive or negative - positive + negative cancels to 0, while positive + positive compounds - the error is not truly systematic and cannot be computationally removed.
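To illustrate why a sign-flipping error cannot be removed while a true systematic bias can, here is a small simulation; all numbers are invented:

```python
# Hedged sketch: a constant (systematic) bias can be calibrated out by
# subtraction, but an error whose sign varies measurement-to-measurement
# cannot be removed by any single correction. Numbers are illustrative.
import random

random.seed(0)
true_value = 100.0
bias = 2.0  # systematic offset: identical every time, so removable

biased = [true_value + bias for _ in range(1000)]
calibrated = [m - bias for m in biased]  # bias subtracts out exactly

# Error of random sign: no single constant recovers true_value for every
# sample, because the offset points in a different direction each time.
mixed_sign = [true_value + random.choice([-1, 1]) * 2.0 for _ in range(1000)]
```

Averaging many such measurements can shrink the random component, but any single published number still carries it, which is the sense in which the flaw is "quasi-systematic."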
I also do not believe DxO tests every lens and every camera body together - so what is the point of publishing those results? But that is a rant for another day.