03-19-2015, 08:59 PM
Quote:Well, what would you expect then? Where are the processors, and what kind of sensors (talking of a unit with glass and a sensor grid), to grab all this brute-force stuff? Reading your text, I start asking myself whether a squirrel would take a whole forest in its mouth just because it expects to find some nuts or seeds in the collection. Brute force is what people do when they're not smart enough to hack a password or compromise a server. I don't think my path in photography will turn me into a hyper-collecting individual with no time to select what matters to me. I can't see the advantages.
I look at it differently. Maybe brute force wasn't the best description. Basically I'm saying: if the cost of doing something becomes low enough, why limit yourself? We might be on the verge of that anyway. Look at 4K video - that's roughly 8MP stills. How long ago were digital cameras outputting 8MP? Not that long ago. Within the limits of video capture, you could use that and never have to worry about catching the optimal moment again.
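The "4K is roughly 8MP" claim is easy to check. A minimal sketch of the arithmetic, assuming UHD 4K (3840x2160; DCI 4K at 4096x2160 comes out slightly higher):

```python
# Back-of-envelope: how many still-image megapixels does one 4K video frame carry?
# Assumes UHD 4K (3840x2160); DCI 4K (4096x2160) would be a touch more.
width, height = 3840, 2160
megapixels = width * height / 1e6
print(f"{megapixels:.1f} MP per frame")  # about 8.3 MP
```

So pulling single frames out of 4K footage really does give you roughly the stills resolution of a mid-2000s DSLR.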
Alternatively, have you ever taken a shot and thought, "if only I'd moved left a little bit"? If in doubt, get more data than you need. It's easier to bin what you don't need than to make up what's missing. You might argue we should get it right in the first place, which is essentially what we try to do now. But show me anyone with a 100% first-time keeper rate. They don't exist. Technology will get us ever closer to that.
Quote:It depends what you consider an "advantage". Regarding high ISO, you're right. Regarding high resolution, I see the Foveon 16MP beating the D810 36MP, switched to DX, by a remarkable margin. When it comes to detail and contrast, to texture and clarity, the Foveon sensor appears to be superior - but I want to run a couple of comparisons for my own comprehension of what's going on. When it comes to contrast range, things become difficult to compare. The Sigma software has one slider others don't, and within limits it does a bit of magic. But then, the whole software package is for people who dare to suffer...
I admit I've not kept up to date, but take the 15MP sensor of the SD1. Note I'm counting MP as non-upscaled output pixels, and I recognise that each of those has its own RGB data. This is good. I want this. If you were to compare it against a 15MP Bayer sensor's output, I would expect the Foveon to be superior when pixel peeping.
Take the worst case: say you have a subject area that is strong in blue but contains no red or green. The Foveon still outputs 15MP of useful data. The Bayer sensor would only sample it significantly at its blue photosites, 25% of the total, making it effectively a bit under 4MP. Turning it the other way, we'd need a 60MP Bayer sensor to equal it in that case.
In practice, we don't often shoot pure colours that exist in only one colour channel, so the benefit is smaller than that, but it's still there, and it varies with the colour mix. Typical APS-C sensors at around 24MP are probably close to the real-world break-even point. Using the full 36MP of the D8xx, or the 50MP of the 5Ds, would probably beat it in real-world situations.
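The worst-case arithmetic above can be sketched in a few lines. This is just a back-of-envelope illustration assuming the standard Bayer layout (50% green, 25% red, 25% blue photosites), not a model of real demosaicing, which recovers quite a bit more through interpolation:

```python
# Worst-case per-channel sampling: a subject containing only one colour channel.
# Assumes a standard Bayer mosaic: 50% green, 25% red, 25% blue photosites.
foveon_mp = 15.0  # SD1-class Foveon: full RGB data at every output pixel
bayer_fraction = {"red": 0.25, "green": 0.50, "blue": 0.25}

for channel, frac in bayer_fraction.items():
    # How much of a 15MP Bayer sensor directly samples this channel
    effective_mp = foveon_mp * frac
    # Bayer MP needed to match the Foveon for a pure single-channel subject
    equivalent_mp = foveon_mp / frac
    print(f"{channel}: 15MP Bayer samples {effective_mp:.2f}MP directly; "
          f"needs {equivalent_mp:.0f}MP to match a 15MP Foveon")
```

For a pure-blue subject that gives 3.75MP of direct samples and a 60MP Bayer equivalent, matching the figures above; green, at 50% of the mosaic, only needs a 30MP Bayer sensor to break even, which is why the real-world gap is smaller.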
Everyone wanted the original SD1, then died laughing at the asking price. The Merrill version was affordable, but it came too late and the Foveon buzz was gone by then. Maybe one of the fixed-lens cameras still makes some sense, but I've never been one to use them. If Sigma made an SD1 with an EF mount, I'd probably still buy one to play with.
Quote:"Computational photography" I did years ago, when I was rendering scenes, models and textures designed in CAD. Do you want to create artificial images? I don't, any more. I'd rather learn how to paint than how to create synthetic languages, but maybe I have a totally wrong idea of what you wrote about.
That's not what I'm suggesting at all. You still capture (data for) photos, but perhaps less directly and in a manner that requires more processing to get something we recognise.
Lytro could be an example of this: it gathers data that won't look recognisable until you compute what the light is doing and form it into an image.
As another example, closer to what I hope to see, look up the Very Large Telescope. The array can be used as separate telescopes, but also combined to increase resolving power. Now, I'm not suggesting we all need a telescope array to make images, but imagine if this could be shrunk in size and computational cost. Some day you might end up with an array of camera sensors on the back of a smartphone, able to produce images comparable to something much bigger.
<a class="bbc_url" href="http://snowporing.deviantart.com/">dA</a> Canon 7D2, 7D, 5D2, 600D, 450D, 300D IR modified, 1D, EF-S 10-18, 15-85, EF 35/2, 85/1.8, 135/2, 70-300L, 100-400L, MP-E65, Zeiss 2/50, Sigma 150 macro, 120-300/2.8, Samyang 8mm fisheye, Olympus E-P1, Panasonic 20/1.7, Sony HX9V, Fuji X100.