I don't understand
#1
It is well known that in a Bayer sensor the number of green cells is twice the number of red/blue cells, and it is said that this is because human eyes are more sensitive to green light. I interpret "human eyes are more sensitive to green light" as: for the same amount of green light and red/blue light, the green light looks brighter, e.g., by a factor of k > 1. If this is correct (and indeed it is), then suppose we arrange a sensor to have the same number of green/red/blue cells, with each cell equally sensitive to its own color. For the same amount of green/red/blue light, the cells of each color would then record the same value. When this is converted to an RGB image properly, the amount of green/red/blue would still be the same. Then, because "human eyes are more sensitive to green light", when we look at this image we would still perceive the green as brighter than the red/blue, by a factor of k. Based on this logic, I don't understand why a sensor needs twice as many green cells as red/blue cells.



A similar question applies to the conversion of a color image to a b&w image: usually during the conversion more weight is given to the green channel when calculating brightness (e.g., 60% green + 30% red + 10% blue). For a Bayer sensor more weight is already given to green (twice the number of green cells). Is that weight still not enough to compensate for the brightness difference?
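To make the weighting concrete, here is a tiny Python sketch (assuming the common Rec.601-style coefficients, which are roughly the 60/30/10 split above; exact weights vary between standards):

[code]
# Toy RGB-to-grey conversion using Rec.601-style luminance weights
# (roughly the 60% green / 30% red / 10% blue split mentioned above).
def rgb_to_grey(r, g, b):
    # Green gets the largest weight because our eyes are most sensitive to it;
    # the pixel values themselves are not altered, only the weighting.
    return 0.299 * r + 0.587 * g + 0.114 * b

print(rgb_to_grey(200, 200, 200))  # neutral grey stays 200.0
print(rgb_to_grey(0, 200, 0))      # pure green -> ~117, ends up brighter...
print(rgb_to_grey(0, 0, 200))      # ...than pure blue -> ~23 at the same intensity
[/code]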



Best regards,

Frank
#2
Brightness??



It is not about brightness. If you put a red, a blue, and two green cars next to each other, do the green cars appear brighter? No. But there are two of them! Still no... we just see two of them, not one extra-bright one.



That is exactly what happens with the sensor. Sampling green at a higher rate than blue and red is not the same as sampling green at the same rate but with a higher intensity.



It seems you are under the impression that for each image pixel one red, one blue, and two green photosites get grouped together, and that their values simply get added, so green ends up twice as bright. That is of course not the case. If it were, the horizontal and vertical resolution of the sensor would be halved.



What actually happens is that for each pixel, the captured colour-filtered light (whether the filter covered the red, blue or green part of the spectrum) gets combined with the other colours interpolated from neighbouring pixels. The intensity of the red, the blue and the green is left alone.



Green is just sampled at a higher frequency. The green data has more resolution, not a higher intensity or more brightness.
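To illustrate, here is a minimal Python sketch (a toy RGGB layout, not any camera's actual pipeline) showing that green simply has more samples, and that demosaicing fills in missing colours by interpolation without boosting intensity:

[code]
import numpy as np

# Toy RGGB Bayer layout: each photosite records one colour only,
# and green photosites are twice as numerous as red or blue ones.
H, W = 4, 6
pattern = np.array([["R", "G"],
                    ["G", "B"]])
mosaic = np.array([[pattern[y % 2, x % 2] for x in range(W)] for y in range(H)])

# Under uniform light every photosite records the same value (say 100),
# whatever its filter colour.
raw = np.full((H, W), 100.0)

# Count samples per colour: green has twice the sampling density,
# i.e. more spatial resolution for green, not a brighter green.
for c in "RGB":
    print(c, np.count_nonzero(mosaic == c))   # R: 6, G: 12, B: 6

# Demosaicing estimates the missing colours at each pixel from
# neighbouring photosites of that colour, e.g. the green value at a
# red site as the mean of its four green neighbours.
y, x = 2, 2                                    # a red photosite in this layout
green_here = np.mean([raw[y - 1, x], raw[y + 1, x], raw[y, x - 1], raw[y, x + 1]])
print(green_here)                              # 100.0 -- intensity unchanged
[/code]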



Also, the red, blue and green values of RGB pixels get stored next to each other in images. For images with a 16-bit bit depth, each colour channel has its own 16 bits, combining to 48 bits per pixel. For an 8-bit bit depth they combine to 24 bits per pixel. The calculated red, green and blue values do not influence each other there either.
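A tiny sketch of that bookkeeping, if it helps (illustrative Python only, not how any particular file format lays out its bytes):

[code]
# Three channels side by side: 8 bits per channel -> 24 bits per pixel,
# 16 bits per channel -> 48 bits per pixel.
def bits_per_pixel(bit_depth, channels=3):
    return bit_depth * channels

print(bits_per_pixel(8))    # 24
print(bits_per_pixel(16))   # 48

# Packing an 8-bit RGB pixel into one 24-bit integer: each channel keeps
# its own bits, so the values do not influence each other.
r, g, b = 10, 200, 30
packed = (r << 16) | (g << 8) | b
print((packed >> 16) & 0xFF, (packed >> 8) & 0xFF, packed & 0xFF)   # 10 200 30
[/code]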
#3
[quote name='Frank' timestamp='1346296101' post='19881']

It is well known that in a Bayer sensor the number of green cells is twice the number of red/blue cells, and it is said that this is because human eyes are more sensitive to green light. I interpret "human eyes are more sensitive to green light" as: for the same amount of green light and red/blue light, the green light looks brighter, e.g., by a factor of k > 1. If this is correct (and indeed it is), then suppose we arrange a sensor to have the same number of green/red/blue cells, with each cell equally sensitive to its own color. For the same amount of green/red/blue light, the cells of each color would then record the same value. When this is converted to an RGB image properly, the amount of green/red/blue would still be the same. Then, because "human eyes are more sensitive to green light", when we look at this image we would still perceive the green as brighter than the red/blue, by a factor of k. Based on this logic, I don't understand why a sensor needs twice as many green cells as red/blue cells.



A similar question applies to the conversion of a color image to a b&w image: usually during the conversion more weight is given to the green channel when calculating brightness (e.g., 60% green + 30% red + 10% blue). For a Bayer sensor more weight is already given to green (twice the number of green cells). Is that weight still not enough to compensate for the brightness difference?



Best regards,

Frank

[/quote]





To add a bit to what brightcolors said, first a link:

http://hyperphysics.phy-astr.gsu.edu/hba...e.html#c3b



If you look at this, you will see that, e.g., blue cone density is much lower. Furthermore, due to chromatic aberration, blue light is slightly out of focus in our eyes. If you have ever seen something illuminated only by dim pure blue light (neon), you will have noticed it looks blurry when no other cones or rods are triggered. So our blue vision is rather unsharp. Therefore, in theory, you don't need that many blue sensors, because we don't see that well at that light frequency.
#4
Hi BC and Photonius:



Thank you for your responses. I think I understand it now. The doubled number of green cells on a Bayer sensor is used to increase the resolution of the image in green and to reduce the noise in green, the color to which human eyes are most sensitive, not to increase the recorded intensity of green light.



However, I have another question: in a scene with very large contrast, people usually do not see as much contrast as a camera does, because human eyes respond to light differently than the camera sensor. As a result, the camera records an image with much larger contrast than what we saw with our naked eyes. Assume that the camera indeed faithfully recorded the contrast of the scene and produced an image that faithfully reflects that recorded contrast. Why, then, don't our eyes respond to the image the same way as to the real scene?



Best regards,

Frank
#5
[quote name='Frank' timestamp='1346335943' post='19891']

Hi BC and Photonius:



Thank you for your responses. I think I understand it now. The doubled number of green cells on a Bayer sensor is used to increase the resolution of the image in green and to reduce the noise in green, the color to which human eyes are most sensitive, not to increase the recorded intensity of green light.



However, I have another question: in a scene with very large contrast, people usually do not see as much contrast as a camera does, because human eyes respond to light differently than the camera sensor. As a result, the camera records an image with much larger contrast than what we saw with our naked eyes. Assume that the camera indeed faithfully recorded the contrast of the scene and produced an image that faithfully reflects that recorded contrast. Why, then, don't our eyes respond to the image the same way as to the real scene?



Best regards,

Frank

[/quote]



Well, if you look again at the eye link I gave, we actually only see really well in a narrow field of view, for which viewing is constantly optimized. Our eyes constantly adjust to the light with the iris and other mechanisms (the eye has an aperture, and can adjust its "ISO", so to speak; e.g., after some time in a dark place you start to see more). Think of it like a camera with a spot meter: as you move the camera (and the spot meter in the centre) around, the automatic exposure constantly adjusts depending on whether the spot falls on a dark or a bright area, while the camera still covers a large field. In the brain the whole thing gets assembled into something like an HDR image. And it's not perfect: if a bright light hits your eye at night, you can't see the rest of the dark scene.



The dynamic range of a day scene can be huge. The way to capture it with a camera is to expose for each part separately, like the eye does.

The problem comes later, when you view the result as a whole. You can of course create an HDR image that shows everything, but the dynamic range of the resulting image is never like that of the original scene, so it doesn't look right, because you have compressed the difference between black and white into a much narrower range.

With computer monitors, even with the back-lights, the dynamic range is still limited. Most monitors are actually only 8 bit per color channel, i.e. 256 levels, corresponding to roughly 8 f-stops.
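As a rough illustration (a toy global tone-mapping curve, not what any particular HDR software does), here is how a huge range of scene luminances gets squeezed into the 0-255 of an 8-bit display:

[code]
import numpy as np

# Scene luminances spanning about 17 stops (100 000:1) must be squeezed
# into the 0-255 range of an 8-bit monitor. The curve below is a simple
# Reinhard-style operator, L / (1 + L), purely for illustration.
scene = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])

normalised = scene / scene.mean()          # scale to the scene's average
mapped = normalised / (1.0 + normalised)   # compress highlights towards 1.0
display = np.round(255 * mapped).astype(np.uint8)

print(display)   # the 100 000:1 scene ratio collapses into 0-255,
                 # with the darkest steps squeezed together
[/code]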
  

