Sigma SD1 ... wow
#21
[quote name='wim' timestamp='1285195185' post='3111']

My note was based on previous Foveon sensors and comparisons of test shots. The thing is that the Foveon still only has 1/3 of the number of actual sensels, or pixels, indicated by Sigma; they just read the full spectrum at each site. This gives them a slight edge over Bayer sensors, as I mentioned, of about 30% to 40% over their actual number of sensels (in the past, anyway), but certainly nothing like 3x that.



Regarding diffraction and sensors: I won't go into this, except maybe in a thread specifically dealing with it. Let me just note here that we never talked about diffraction limits with film and halide grains, so why should we all of a sudden with sensors? In my book it is a total non-issue, except for engineers specialized in the subject :D.



Kind regards, Wim

[/quote]

I look at it a slightly different way. A Foveon stack gives what I consider a true output pixel. The way Bayer-type sensors are sold is a marketing trick to make the numbers sound higher than they really are. But because it is what almost everyone uses, it sticks. The required interpolation means they can never be pixel-perfect sharp other than by using sharpening tricks.



And personally, I'm not bothered by diffraction; it's just another law of physics getting in the way. It is there when it is there. But in the specific case of this new sensor, I think it is a worthy consideration when people start pixel peeping. Will the sensor deliver the theoretical benefit or not? To answer that, you need to avoid scenarios where the limit is elsewhere. Because it isn't a Bayer-pattern sensor, we need to consider the differences.



At the end of the day, this could, and probably should, be better than a nominally similar Bayer sensor. Like everyone else I'll have to be patient and wait for samples to see if it can deliver.
<a class="bbc_url" href="http://snowporing.deviantart.com/">dA</a> Canon 7D2, 7D, 5D2, 600D, 450D, 300D IR modified, 1D, EF-S 10-18, 15-85, EF 35/2, 85/1.8, 135/2, 70-300L, 100-400L, MP-E65, Zeiss 2/50, Sigma 150 macro, 120-300/2.8, Samyang 8mm fisheye, Olympus E-P1, Panasonic 20/1.7, Sony HX9V, Fuji X100.
#22
[quote name='popo' timestamp='1285198592' post='3113']

I look at it a slightly different way. A Foveon stack gives what I consider a true output pixel. The way Bayer-type sensors are sold is a marketing trick to make the numbers sound higher than they really are. But because it is what almost everyone uses, it sticks. The required interpolation means they can never be pixel-perfect sharp other than by using sharpening tricks.[/quote]

Agreed :D.

Quote:And personally I'm not bothered with diffraction other than just another law of physics to get in the way. It is there when it is there. But in the specific case of this new sensor, I think it is a worthy consideration when people start pixel peeping. Will the sensor deliver the theoretical benefit or not? To answer that, you need to avoid possible scenarios where the limit is elsewhere. Because it isn't a bayer pattern sensor, we need to consider the differences.

It doesn't really matter. A lot has been written about this here and there, but some of it was patently wrong. The thing is that a lens resolves much more than any current sensor, so it really is a moot point. Higher MP counts will give us images with more detail, and the larger details will still be there too. A diffraction limit only means that you can't resolve more at a specific spatial frequency; what nobody ever seems to mention is that that spatial frequency is much higher on a high-MP sensor than on a lower-MP one, and that is really the only reason you reach the sensor's diffraction limit.



It is like saying that a lens is diffraction limited at f/4 (which good, fast lenses often are). That actually means the lens can resolve at least the spatial frequency allowed by the diffraction limit according to the Rayleigh criterion (9% contrast, if I am not mistaken), which amounts to a resolution of about 400 lp/mm. That would require a very small pixel size if you wanted to capture this resolution. However, there is always the formula for the maximum resolution of a system: the inverse of the total resolution equals the sum of the inverses of the component resolutions, so the system will always resolve less than its weakest component (unless limited by an AA filter, though, as this has an interesting side effect in lower-MP sensors).
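That inverse-addition rule can be put in a quick sketch (a common approximation; the 400 lp/mm lens and 100 lp/mm sensor figures are purely illustrative):

```python
def system_resolution(*parts_lp_mm):
    """Combine component resolutions (lp/mm) via inverse addition:
    1/R_system = 1/R_1 + 1/R_2 + ... (a common rule of thumb)."""
    return 1.0 / sum(1.0 / r for r in parts_lp_mm)

# A lens good for ~400 lp/mm paired with a sensor good for ~100 lp/mm
# yields a system resolution below either component on its own:
print(round(system_resolution(400, 100)))  # 80 lp/mm
```

Note how the result is always lower than the weakest link, which is the point being made above.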



And, BTW, at f/8 the diffraction limit, for example, is about 200 lp/mm, so even that is not nearly reached by any sensor yet (that would be approximately 140 MP).
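Those figures are easy to sanity-check with a sketch of the Rayleigh criterion (assuming green light of ~550 nm and Nyquist sampling at two pixels per line pair; the results land slightly below the round numbers quoted above):

```python
def diffraction_limit_lp_mm(f_number, wavelength_nm=550):
    """Rayleigh-criterion resolution limit in line pairs per mm:
    r = 1 / (1.22 * lambda * N)."""
    wavelength_mm = wavelength_nm * 1e-6
    return 1.0 / (1.22 * wavelength_mm * f_number)

def megapixels_needed(lp_mm, width_mm=36.0, height_mm=24.0):
    """MP needed on a full-frame sensor to sample lp_mm at Nyquist
    (two pixels per line pair)."""
    px_per_mm = 2.0 * lp_mm
    return (width_mm * px_per_mm) * (height_mm * px_per_mm) / 1e6

limit = diffraction_limit_lp_mm(8)   # ~186 lp/mm at f/8
print(f"f/8 limit: {limit:.0f} lp/mm, ~{megapixels_needed(limit):.0f} MP full frame")
```

With these assumptions you get roughly 186 lp/mm and ~120 MP, in the same ballpark as the "about 200 lp/mm" and "approximately 140 MP" figures above.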

Quote:At the end of the day, this could, and probably should, be better than a similar nominal bayer sensor. Like everyone else I'll have to be patient and wait for samples to see if it can deliver.

Yep, it should be better. Let's hope we will be able to see some images soon :D.



Ok, and now I ended up commenting on diffraction after all :D <ROFL>. Ah well :D.



Kind regards, Wim
Gear: Canon EOS R with 3 primes and 2 zooms, 4 EF-R adapters, Canon EOS 5 (analog), 9 Canon EF primes, a lone Canon EF zoom, 2 extenders, 2 converters, tubes; Olympus OM-D 1 Mk II & Pen F with 12 primes, 6 zooms, and 3 Metabones EF-MFT adapters ....
#23
[quote name='Pinhole' timestamp='1285126104' post='3077']

Unless I missed something, it doesn't shoot video. It seems a bit odd, considering it's more or less standard these days. I wonder what the target audience is?

[/quote]



I'm quite positive that their target audience is people who want to take pictures, as opposed to recording movies... :)
#24
[quote name='popo' timestamp='1285173651' post='3096']

The diffraction impact will also kick in earlier on the foveon...

[/quote]



I don't think so. I'd even think it's the opposite. There are 15 MP per layer, as opposed to 18 MP or whatever in a Bayer-type sensor.
#25
[quote name='BG_Home' timestamp='1285221985' post='3124']

I don't think so. I'd even think it's the opposite. There are 15 MP per layer, as opposed to 18 MP or whatever in a Bayer-type sensor.

[/quote]

I look at the minimum area required to create colour output. On a Bayer sensor, roughly speaking, you're looking at overlapping 2x2 blocks in order to get RGB output, so as a guide its effective full-colour linear resolution is perhaps half the linear pixel count.
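As a sketch of that heuristic (assuming a 3:2 sensor and the rough "half the linear pixel count" rule above; the exact loss depends on the demosaicing algorithm):

```python
import math

def linear_px(megapixels, aspect=3.0 / 2.0):
    """Horizontal pixel count for a given MP total and aspect ratio."""
    height_px = math.sqrt(megapixels * 1e6 / aspect)
    return aspect * height_px

nominal = linear_px(15)       # ~4743 px across for 15 MP at 3:2
bayer_colour = nominal / 2    # rough effective full-colour linear resolution
print(round(nominal), round(bayer_colour))  # 4743 2372
```

A Foveon layout with the same 15 million spatial sites would keep the full ~4743 px of full-colour linear resolution under this heuristic.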
#26
[quote name='popo' timestamp='1285224960' post='3128']

I look at the minimum area required to create colour output. On a Bayer sensor, roughly speaking, you're looking at overlapping 2x2 blocks in order to get RGB output, so as a guide its effective full-colour linear resolution is perhaps half the linear pixel count.

[/quote]

Diffraction has nothing to do with the layout of colour. Detail has nothing to do with colour, even.



Diffraction is caused by the lens; it appears in the projected image. That image gets sampled. Diffraction effects on an image are the same for ALL full-frame sensors, no matter what resolution and no matter what structure their RGB matrix has. They are even similar for a full-frame B&W sensor.

Diffraction effects on an image are also the same for all 1.5x crop sensors, no matter what resolution or type of sensor. The new Sigma has a 1.5x crop sensor.

The other Sigmas have a 1.7x crop sensor, which is smaller than a 1.5x crop sensor. So obviously, the older Sigmas will enlarge the effects of diffraction on the image more than the new one.
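That enlargement argument can be sketched numerically: at the same f-number the Airy disk has the same physical size, so on a smaller sensor it covers a larger fraction of the frame and is magnified more at the same print size (figures assume ~550 nm light and a 36 mm full-frame width; they are illustrative only):

```python
def airy_diameter_um(f_number, wavelength_nm=550):
    """Airy disk diameter (to first minimum) in micrometres:
    d = 2.44 * lambda * N."""
    return 2.44 * wavelength_nm * 1e-3 * f_number

def blur_fraction(f_number, crop_factor, ff_width_mm=36.0):
    """Airy diameter as a fraction of sensor width: the larger the
    crop factor, the larger the relative blur at equal print size."""
    sensor_width_um = ff_width_mm / crop_factor * 1000.0
    return airy_diameter_um(f_number) / sensor_width_um

for crop in (1.0, 1.5, 1.7):
    print(f"{crop}x crop: {blur_fraction(8, crop):.2e} of frame width")
```

The 1.7x crop fraction comes out larger than the 1.5x one, which is the point being made about the older Sigmas.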
#27
[quote name='Brightcolours' timestamp='1285229877' post='3143']

Diffraction has nothing to do with the layout of colour. Detail has nothing to do with colour, even.



Diffraction is caused by the lens; it appears in the projected image. That image gets sampled. Diffraction effects on an image are the same for ALL full-frame sensors, no matter what resolution and no matter what structure their RGB matrix has. They are even similar for a full-frame B&W sensor.

Diffraction effects on an image are also the same for all 1.5x crop sensors, no matter what resolution or type of sensor. The new Sigma has a 1.5x crop sensor.

The other Sigmas have a 1.7x crop sensor, which is smaller than a 1.5x crop sensor. So obviously, the older Sigmas will enlarge the effects of diffraction on the image more than the new one.

[/quote]

I am aware of the physics involved, but you've totally missed the point. I'm not the best at explaining stuff, so let's try again. While diffraction "happens" at the lens, it still has to be detected by the sensor before we see it. Due to the softening effects of the Bayer pattern and low-pass filter, it should become noticeable at the output-pixel level much "later" on a 15 MP Bayer sensor than on a 15x3 MP Foveon of the same area, for example.
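One rough way to put numbers on "later": compare the pixel pitch with the Airy disk. A common rule of thumb (an assumption here, not an exact threshold) is that diffraction starts to show at pixel level once the Airy diameter spans about two pixels. The sensor dimensions below are illustrative APS-C values:

```python
import math

def pixel_pitch_um(megapixels, width_mm, height_mm):
    """Pixel pitch in micrometres for a given sensor size and MP count."""
    px_per_mm = math.sqrt(megapixels * 1e6 / (width_mm * height_mm))
    return 1000.0 / px_per_mm

def onset_fstop(pitch_um, wavelength_nm=550):
    """f-number where the Airy diameter (2.44 * lambda * N) spans
    roughly two pixels -- a crude pixel-level visibility threshold."""
    return 2.0 * pitch_um / (2.44 * wavelength_nm * 1e-3)

# 15 million spatial sampling sites on a ~23.5 x 15.6 mm APS-C sensor:
pitch = pixel_pitch_um(15, 23.5, 15.6)
print(f"pitch {pitch:.1f} um, diffraction visible from about f/{onset_fstop(pitch):.1f}")
```

On a Bayer sensor the demosaicing and AA filter soften the per-pixel detail, pushing the aperture at which diffraction becomes visible at pixel level somewhat higher than this geometric estimate.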



I get a feeling we're all arguing for the same thing but expressed in totally different ways...
#28
[quote name='popo' timestamp='1285241406' post='3152']

I am aware of the physics involved, but you've totally missed the point. I'm not the best at explaining stuff, so let's try again. While diffraction "happens" at the lens, it still has to be detected by the sensor before we see it. Due to the softening effects of the Bayer pattern and low-pass filter, it should become noticeable at the output-pixel level much "later" on a 15 MP Bayer sensor than on a 15x3 MP Foveon of the same area, for example.



I get a feeling we're all arguing for the same thing but expressed in totally different ways...

[/quote]

The Bayer pattern does not actually soften the image... it is quite capable of recording luminance (where the detail lies) to a surprising degree.

The AA filter is not Bayer-specific... one can also put an AA filter in front of a Foveon sensor for the same reasons (not wanting to record false detail and moiré effects).



There are two ways to look at diffraction softening. At pixel level (where it makes little sense) and at image level.



At pixel level you probably are correct, the AA-filter "softening" may slightly cover up diffraction softening, but it will not be much at all.



But the low resolution of Sigma sensors has so far covered up diffraction softness anyway. So at pixel level, diffraction would be detectable on a 7D much sooner than on an SD14.



On image level (where diffraction actually does matter), there is and will be no difference between cameras with different types of sensors and different resolutions. Diffraction of the projected image will be the same with similarly sized sensors. A lower-resolution sensor will merely cover up diffraction softness due to its own lack of resolution.



A downside to the Foveon sensor will again be its noisy nature, unless they have radically changed the light "absorption" layers in the new sensor (I am not sure whether that would actually be possible).

Foveon sensors effectively record a smaller percentage of the light compared to Bayer CFA sensors. So... I do not expect the new sensor to deliver great high-ISO results, and I am curious whether they again went for a weak or absent AA filter.
#29
[quote name='Brightcolours' timestamp='1285242149' post='3153']

The Bayer pattern does not actually soften the image... it is quite capable of recording luminance (where the detail lies) to a surprising degree.

The AA-filter is not bayer specific... one can also put an AA-filter in front of a foveon sensor for the same reasons (not wanting to record false details and moire effects).



There are two ways to look at diffraction softening. At pixel level (where it makes little sense) and at image level.



At pixel level you probably are correct, the AA-filter "softening" may slightly cover up diffraction softening, but it will not be much at all.



But the low resolution of Sigma sensors has so far covered up diffraction softness anyway. So at pixel level, diffraction would be detectable on a 7D much sooner than on an SD14.



On image level (where diffraction actually does matter), there is/will be no difference between cameras with different types of sensors and different resolutions. Diffraction of the projected image will be the same with similar sized sensors. A lower resolution sensor merely will cover up diffraction softness due to its own lack of resolution.

[/quote]

Working backwards, I'm with you on the last point. I'm not normally a pixel peeper and recognise "good enough", and think too many people lose sight of the bigger picture where it matters.



In the case of the SD1 sensor, we do have something very different from everything else. At a technology level, I want to know what level of improvement it can give in the best case, even if in practice it won't necessarily deliver that in every case. So pixel peeping is worth doing initially to see what it really delivers. If you do so, you need to make sure the rest of the system isn't the limiting factor, hence the diffraction consideration.
#30
[quote name='popo' timestamp='1285243136' post='3155']

Working backwards, I'm with you on the last point. I'm not normally a pixel peeper and recognise "good enough", and think too many people lose sight of the bigger picture where it matters.



In the case of the SD1 sensor, we do have something very different from everything else. At a technology level, I want to know what level of improvement it can give in the best case, even if in practice it won't necessarily deliver that in every case. So pixel peeping is worth doing initially to see what it really delivers. If you do so, you need to make sure the rest of the system isn't the limiting factor, hence the diffraction consideration.

[/quote]

I understand that, and yet:



The AA filter will only add to the softening already there, it will not "mask" it.



The diffraction softening that each pixel samples will be the same for both a Bayer CFA sensor and a Foveon sensor. The Foveon pixels will never get softer than the Bayer ones.
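The "adds, never masks" point can be illustrated by modelling each softening source as an independent Gaussian blur; independent blurs combine in quadrature, so the total can only grow (a simplified model; real AA filters and Airy patterns are not Gaussian):

```python
import math

def combined_blur(*sigmas):
    """Independent Gaussian blurs add in quadrature:
    sigma_total = sqrt(sigma_1^2 + sigma_2^2 + ...)."""
    return math.sqrt(sum(s * s for s in sigmas))

# Diffraction blur of 2.0 (arbitrary units) plus an AA-filter blur of 1.0:
# the filter adds softening on top; it cannot subtract any.
print(round(combined_blur(2.0, 1.0), 2))             # 2.24
print(combined_blur(2.0, 1.0) > combined_blur(2.0))  # True
```

In this model the AA filter contributes extra sigma on top of the diffraction sigma, which is why it adds to, rather than hides, the diffraction softening.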