Woah! This is a technical node; I will try very hard not to bore you, my most favorite reader.

The Bayer Pattern is a method for recording color information with an imager that is sensitive only to the intensity of light. A mosaic of alternating color filters is placed over neighboring pixels in a 2 x 2 grid. Knowledge of the order and location of adjacent pixels and their color filters is used to interpolate the approximate color represented at a particular pixel location. This is calculated for every pixel in the image, trading a slight sacrifice in color purity and quality (slight to the eye; for image processing purposes the deviation is much more discernible) for a great deal of resolution and a large reduction in manufacturing cost. The alternative is to use three separate imagers arranged around a prism, each with its own color filter. The image is then combined in software and does not suffer some of the artifacts that occur when a pattern is used to achieve color. A pattern can be composed of many different arrangements and color filter combinations, but the Bayer Pattern is among the most popular. It is used primarily with CCD (Charge Coupled Device) and CMOS (Complementary Metal Oxide Semiconductor) imagers.
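
To make that concrete, here is a minimal sketch of what a sensor behind such a mosaic actually records: one intensity value per pixel, taken from whichever channel that pixel's filter passes. This is my own illustration, not anything from the original sources; the Python/NumPy code and the helper name simulate_bayer_capture are assumptions, and it uses the common RGGB arrangement described further down.

    import numpy as np

    def simulate_bayer_capture(rgb):
        """Return the single-channel raw frame an RGGB Bayer sensor would record
        from a full-color image `rgb` of shape (height, width, 3)."""
        height, width, _ = rgb.shape
        raw = np.zeros((height, width), dtype=rgb.dtype)
        raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red filter sites
        raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green filter sites
        raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green filter sites
        raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue filter sites
        return raw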

It was invented in 1976 by Bryce Bayer, then working for the Eastman Kodak Company, though it was built, as many things are, upon a greater history of similar work. There had previously been a good deal of work using cameras with greyscale sensitivity in combination with different light filters to reproduce color images, and this is simply a logical extension of that idea. Dr. Bayer noted that it should be possible to use either an RGB or a CMY color pattern; however, at the time the technology was not ready to make use of a CMY+G filter.


The Bayer Pattern is a 2 x 2 kernel which looks like this:

 --- ---
| R | G |
 --- ---
| G | B |
 --- ---

Used in repetition across an imager, it looks a little like this:

    x
     1   2   3   4   5   6  ... 
    --- --- --- --- --- --- ---
y 1| R | G | R | G | R | G | R |
    --- --- --- --- --- --- ---
  2| G | B | G | B | G | B | G |
    --- --- --- --- --- --- ---
  3| R | G | R | G | R | G | R |
    --- --- --- --- --- --- ---
  4| G | B | G | B | G | B | G |
    --- --- --- --- --- --- ---
 ..| R | G | R | G | R | G | R |
    --- --- --- --- --- --- ---
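
If you would rather reason about the layout in code than count squares, a tiny helper like the one below reproduces the diagram for this arrangement. The name filter_color is hypothetical and the coordinates are 1-based, x for the column and y for the row, just as above.

    def filter_color(x, y):
        """Return the color filter ('R', 'G' or 'B') over pixel (x, y), 1-based."""
        if y % 2 == 1:                        # odd rows: R G R G ...
            return 'R' if x % 2 == 1 else 'G'
        return 'G' if x % 2 == 1 else 'B'     # even rows: G B G B ...

    # Reprint the first four rows of the diagram above:
    for y in range(1, 5):
        print(' '.join(filter_color(x, y) for x in range(1, 8)))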

You will notice that there are twice as many green pixels as red or blue ones; this is done to increase the luminance detail of the image and to reproduce green (to which the human eye is more sensitive) more accurately. Since each pixel on the imager is sensitive to only one color, adjacent pixels are combined to reproduce the actual color value at that location. So let us determine the RGB triplet for location x(4), y(4) on our diagram imager.

G = ( g(x4,y3) + g(x4,y5) + g(x3,y4) + g(x5,y4)) / 4

R = ( r(x3,y3) + r(x5,y3) + r(x3,y5) + r(x5,y5)) / 4

B = b(x4,y4)
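
Here is a minimal sketch of that averaging in code, assuming raw is a 2-D NumPy array of sensor values indexed raw[y, x] with 0-based indices, so the diagram's x(4), y(4) becomes raw[3, 3]. The helper rgb_at_blue_site is hypothetical and only valid at interior blue-filtered sites of the layout above.

    import numpy as np

    def rgb_at_blue_site(raw, x, y):
        """Estimate the (R, G, B) triplet at a blue-filtered pixel (x, y)."""
        g = (raw[y - 1, x] + raw[y + 1, x] + raw[y, x - 1] + raw[y, x + 1]) / 4.0
        r = (raw[y - 1, x - 1] + raw[y - 1, x + 1] +
             raw[y + 1, x - 1] + raw[y + 1, x + 1]) / 4.0
        b = raw[y, x]
        return r, g, b

    # Stand-in sensor values, just to show the call:
    raw = np.arange(36, dtype=float).reshape(6, 6)
    print(rgb_at_blue_site(raw, 3, 3))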

This is of course only a brief introduction; there are many more sophisticated ways to derive precise color information and reduce artifacts than the simple method outlined here. However, even such basic information is useful in gaining an understanding of an image's origin and its potentially inherent flaws, especially when dealing with any sort of image processing.
