[quote name='Brightcolours' timestamp='1308766431' post='9413']
The idea behind lightfield capture is to capture the direction of the light. Not focal planes.
This is the reason why one can look behind subjects, somewhat, and can vary DOF and focal plane during the visualization of the data.
How exactly they capture the direction vector (together with intensity and colour) is not disclosed yet. Nor how high (or low) the data density will be.
*edit:
It is explained how the directional vectors are gathered, in this thesis:
[url="http://www.lytro.com/renng-thesis.pdf"]http://www.lytro.com/renng-thesis.pdf[/url]
Basically, if I understand correctly, a "conventional" sensor is used to capture many tiny images of the same scene, each image "fed" by its own micro lens.
This gives many images with views from slightly different viewpoints, from which the lightfield is calculated.
The resulting lightfield is then used to compute the different focal plane projections when viewing the image.
[/quote]
Thanks for that link - I just wish I had time to read it all :D.
It looks like this system uses a set of microlenses, one for each group of pixels. The principle used for focusing is ray-tracing, I see, which, in a way, virtualizes the different focal planes required for different distances in the object plane(s).
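To illustrate the refocusing idea, here is a minimal numpy sketch of the shift-and-sum approach described in the thesis: each sub-aperture view is shifted in proportion to its lens position and the chosen virtual focal plane, then averaged. All shapes and the `alpha` parameterization are hypothetical simplifications, not Lytro's actual implementation.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-sum refocusing over a 4D light field.

    lightfield: array of shape (U, V, S, T) -- one S x T sub-aperture
    image per (u, v) lens position. alpha selects the virtual focal
    plane; alpha = 1 keeps the original plane (no shift at all).
    """
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its distance from the
            # aperture centre (integer shifts, wrap-around ignored
            # for simplicity), then accumulate.
            du = int(round((u - U // 2) * (1 - 1 / alpha)))
            dv = int(round((v - V // 2) * (1 - 1 / alpha)))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

With `alpha = 1` this reduces to a plain average of all sub-aperture views, i.e. the image a conventional sensor would have recorded at the original focal plane.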
Reading further, it looks like it therefore effectively uses a faceted-eye approach - interesting, because it effectively stores a full tiny image for each set of pixels served by a microlens, which also means the effective linear resolution drops by roughly the square root of the number of sensels per microlens. Each tiny image is then indeed an image from a different ray direction and angle.
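The faceted-eye structure can be made concrete with a small numpy sketch: taking the same offset within every microlens block assembles one "facet", i.e. a view of the scene through one sub-aperture. The sensor and microlens dimensions below are hypothetical, chosen only to keep the reshape readable.

```python
import numpy as np

# Hypothetical sensor: 40 x 40 microlenses, 7 x 7 sensels under each.
raw = np.random.rand(280, 280)

# Split into per-microlens blocks, then reorder to (u, v, s, t):
# (u, v) indexes the position under a microlens, (s, t) the microlens.
blocks = raw.reshape(40, 7, 40, 7)
lightfield = blocks.transpose(1, 3, 0, 2)

# One sub-aperture view: the same sensel position under every lens.
view = lightfield[3, 3]
assert view.shape == (40, 40)
```

Note the resolution cost made explicit here: a 280 x 280 sensel array yields facet views of only 40 x 40 pixels each.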
I now understand why there will be consumer cameras first. 2.2 MP from a much larger sensel array is good enough for consumer pictures. In short, the after-the-fact focusing ability of such a system essentially limits the resolution one can obtain. For obtaining higher resolutions, one needs very high MP sensors, ones that aren't manufactured in commercial quantities yet (100 MP+).
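The arithmetic behind that claim is simple if one assumes roughly one output pixel per microlens; the 7 x 7 sensels-per-lens figure below is a hypothetical example, not a Lytro specification.

```python
def required_sensor_mp(target_mp, sensels_per_lens):
    """Sensor megapixels needed for a given output resolution,
    assuming roughly one output pixel per microlens."""
    return target_mp * sensels_per_lens

# Hypothetical: 7 x 7 = 49 sensels behind each microlens.
# A 2.2 MP refocusable image then needs a sensor of about 108 MP.
print(required_sensor_mp(2.2, 7 * 7))
```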
Kind regards, Wim
Gear: Canon EOS R with 3 primes and 2 zooms, 4 EF-R adapters, Canon EOS 5 (analog), 9 Canon EF primes, a lone Canon EF zoom, 2 extenders, 2 converters, tubes; Olympus OM-D 1 Mk II & Pen F with 12 primes, 6 zooms, and 3 Metabones EF-MFT adapters ....