Digital cameras (either still or video) must overcome the problem of capturing the real world in full colour. Images are generally reproduced using an RGB
scheme. It makes sense, therefore, to use a similar scheme to capture the image at source.
Real world objects, of course, produce full-spectrum radiation. The use of RGB and other schemes relies on the human eye's inherent properties - that is, it senses red, green and blue (each in a fairly narrow band) plus overall intensity, and lets the brain interpolate the world from that (but that's a story for another node).
So the digital camera emulates the retina to a degree. It doesn't need to capture the raw photons and measure their frequency. It need merely capture enough information to allow a convincing image to be built later. The real world can be filtered down to some pretty simple components. But how to do it?
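The idea of filtering the full spectrum down to a few numbers can be sketched as follows. The Gaussian response curves and the sample spectrum below are illustrative stand-ins, not real sensor data:

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)  # visible range, in nm

def response(centre, width=40):
    """Toy Gaussian filter response centred on a wavelength (nm)."""
    return np.exp(-((wavelengths - centre) / width) ** 2)

# Illustrative red, green and blue filter curves (made-up centres)
filters = {"R": response(600), "G": response(540), "B": response(460)}

# A made-up spectral power distribution for some scene point
spectrum = np.exp(-((wavelengths - 550) / 80) ** 2)

# Each channel is just the spectrum weighted by its filter and summed:
# three numbers now stand in for the whole spectrum
rgb = {name: float(np.sum(spectrum * curve)) for name, curve in filters.items()}
print(rgb)
```

Here the greenish sample spectrum yields the largest green response, which is all a camera (or an eye) needs: enough information to rebuild a convincing colour later, not the raw spectrum itself.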
Some of the methods are:
- split the beam and filter in parallel
- capture multiple images and filter sequentially
- filter at the pixel level of a single image
All the options have advantages and disadvantages. The last has some key advantages for small devices: it's cheap and compact.
Now the issue of choosing a filter must be addressed. Again, we can look to the human eye. As mentioned above, the eye is sensitive to red, green, blue and intensity. In fact, it's more sensitive to green and everything is relative to intensity. So, we can model this with the filter we choose. If we divide our sensor into pixels (as is done in a CCD), we must choose what filter to place over each pixel. The Bayer filter is one of the three most common options.
Imagine if you will (or see below for URLs) a mask over the CCD that repeats the following pattern (two greens on one diagonal, red and blue on the other):

    G R
    B G
Each square of four pixels captures twice as much green as red or blue. Computational methods (see Bayer Pattern
for technicalities) are then used to produce an image file in conventional RGB format.
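The capture-then-reconstruct step can be sketched in a few lines. This is a deliberately crude reconstruction (binning each 2x2 square into one RGB pixel); real demosaicing interpolates to keep full resolution, and the G R / B G layout is just one of the pattern's rotations:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((4, 4, 3))  # a tiny "full colour" scene, values 0..1

# Build the mosaic: each pixel keeps only the channel its filter passes
mosaic = np.zeros((4, 4))
mosaic[0::2, 0::2] = scene[0::2, 0::2, 1]  # G
mosaic[0::2, 1::2] = scene[0::2, 1::2, 0]  # R
mosaic[1::2, 0::2] = scene[1::2, 0::2, 2]  # B
mosaic[1::2, 1::2] = scene[1::2, 1::2, 1]  # G

# Crudest possible reconstruction: treat each 2x2 square as one RGB
# pixel, averaging the square's two green samples
r = mosaic[0::2, 1::2]
b = mosaic[1::2, 0::2]
g = (mosaic[0::2, 0::2] + mosaic[1::2, 1::2]) / 2
rgb = np.dstack([r, g, b])
print(rgb.shape)  # (2, 2, 3): half-resolution RGB output
```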
One alternative worth mentioning here is the Complementary Colour Mosaic Filter, which uses the following mask:
The benefit here is that the filter passes twice as many photons, which follows from how a subtractive filter works. A green filter takes white light and eliminates two thirds of the visible spectrum (the eliminated portion would appear magenta); a magenta filter takes white light and eliminates only one third (the eliminated portion would appear green). In a complementary filter system you therefore detect twice as many photons for the same information, but the maths is marginally harder.
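The "marginally harder" maths can be sketched under an idealised model (my simplification, not anything from a real sensor's datasheet) in which each complementary filter simply passes two of the three primaries: C = G + B, M = R + B, Y = R + G. Solving those three equations recovers RGB:

```python
def cmy_to_rgb(c, m, y):
    # From C = G + B, M = R + B, Y = R + G:
    r = (m + y - c) / 2
    g = (c + y - m) / 2
    b = (c + m - y) / 2
    return r, g, b

# Round trip: start from known RGB, form the complementary readings,
# and recover the original values
r0, g0, b0 = 0.8, 0.5, 0.2
c, m, y = g0 + b0, r0 + b0, r0 + g0
print(cmy_to_rgb(c, m, y))  # recovers (0.8, 0.5, 0.2) up to float rounding
```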
laconic says there's a variation on the above that introduces a green filter to enhance the accuracy of green sensitivity (again, in keeping with the eye's response). I'll add a reference when available.
For how the eye detects colour, see the URLs below until I find an E2 node.