What is Colour?
Light is electromagnetic radiation of particular wavelengths. Humans can respond to only a small part of the electromagnetic spectrum, which we call the "visible spectrum", because of the nature of the chemical pigments and response capabilities of the photoreceptors of the retina. The visible spectrum lies between wavelengths of 400 and 700 nanometers (nm). Isaac Newton first showed that white light is made up of a mixture of many different colours, or different wavelengths, and that these can be separated using a prism.
Most of the light reaching the eyes is reflected off surfaces. So what is it about a green surface that makes it green? The answer is that some wavelengths are reflected and some are absorbed. A green surface is one which reflects medium wavelengths and absorbs longer and shorter wavelengths. So, surfaces are not "coloured" - they just have different reflective characteristics. This process is known as "subtractive colour mixing", where the "colour" is what remains after different parts of the illuminant light have been subtracted out. The same subtraction can be done using coloured filters (as in stage lights). In contrast, superimposing different wavelengths of light is "additive colour mixing", where the light reaching the eye is the sum of the different lights. Additive colour mixing is the process that occurs in a colour television. The mixing can be done by taking advantage of the limited spatial resolution of the retina (spatial summation), or alternatively by taking advantage of limited temporal resolution (temporal summation), using rapidly flickering pictures.
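The two kinds of mixing can be sketched in a few lines of Python, representing a light by three hypothetical red/green/blue components (illustrative values, not real colorimetry):

```python
def additive_mix(*lights):
    """Superimposed lights: the light reaching the eye is the
    component-wise sum of the individual lights."""
    return [sum(components) for components in zip(*lights)]

def subtractive_mix(illuminant, *filters):
    """Light passed through filters: each filter multiplies what is
    left of the illuminant, 'subtracting out' what it absorbs."""
    light = list(illuminant)
    for f in filters:
        light = [c * t for c, t in zip(light, f)]
    return light

# Superimposing red and green lights is seen as yellow:
mixed = additive_mix([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])  # [1.0, 1.0, 0.0]
# A green filter in front of white light transmits only green:
filtered = subtractive_mix([1.0, 1.0, 1.0], [0.0, 1.0, 0.0])  # [0.0, 1.0, 0.0]
```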
Perception of Colour:
So far, all that has been said concerns the physics of coloured light - how wavelengths are absorbed and reflected by surfaces, and subtractive and additive colour mixing. But what about the actual perception of different wavelengths?
Newton said that "The rays are not coloured", and it is true that light of 650 nm is not itself "red"; the subjective sensations of different colours are due to the properties of a particular perceptual system. So how are different wavelengths coded in the visual system?
The Young-Helmholtz theory of trichromacy proposes that colour vision rests on just three different cone types (cones being one type of photoreceptor in the retina). One line of evidence comes from the fact that we use three "dimensions" to describe any colour - intensity (strength of light/colour), hue (shade or tone of colour), and saturation (how much white light is present). These dimensions can each be manipulated without any effect on the others. Such a three-dimensional description of colour is a natural consequence of trichromacy. Furthermore, we can use microspectrophotometry to measure the light absorbed by the photopigment in a cone, and so obtain the spectral sensitivity curves of cones. We find that there are three types of cone, each with a different peak sensitivity.
Microspectrophotometry is a technique that has only been around for the last 30 years or so, but Thomas Young proposed trichromacy around 200 years ago! His evidence was that he could match almost any shade or hue with a mixture of different intensities of just three coloured lights. He found that two were insufficient to do this, but no more than three were needed. These "primaries" do not even have to be particular colours, nor do they have to be pure wavelengths. Young's demonstration shows that there are no more than three pathways of colour sensitivity.
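Young's matching experiment can be imitated numerically. Assuming hypothetical Gaussian cone sensitivities (all peaks, widths, and primary wavelengths below are invented for illustration), we can solve a 3x3 linear system for the intensities of three fixed primaries that excite the three cone types exactly as a given test light does:

```python
import math

def cone(wavelength, peak, width=80.0):
    """Hypothetical Gaussian spectral sensitivity curve."""
    return math.exp(-((wavelength - peak) ** 2) / (2 * width ** 2))

PEAKS = (440.0, 540.0, 570.0)  # short-, middle-, long-wavelength cones

def cone_triple(wavelength, intensity=1.0):
    """Responses of the three cone types to a monochromatic light."""
    return [intensity * cone(wavelength, p) for p in PEAKS]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(a, b):
    """Solve the 3x3 linear system a @ x = b by Cramer's rule."""
    d = det3(a)
    xs = []
    for col in range(3):
        m = [row[:] for row in a]
        for r in range(3):
            m[r][col] = b[r]
        xs.append(det3(m) / d)
    return xs

primaries = (450.0, 530.0, 620.0)  # three fixed monochromatic primaries
# Matrix column j holds the cone responses to unit-intensity primary j.
A = [[cone_triple(p)[i] for p in primaries] for i in range(3)]
target = cone_triple(555.0)  # the test light to be matched
weights = solve3(A, target)
# The weighted mix of the three primaries now excites the cones exactly
# as the test light does: the mix and the test light are metamers.
```

If a weight comes out negative the match is still meaningful: in a real matching experiment the "negative" primary is added to the test light's side instead of to the mixture.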
So how is Young's demonstration evidence of trichromacy? Let's consider just a single mechanism which has a spectral sensitivity curve, and which is maximally responsive to a single wavelength and less sensitive to others. A single mechanism cannot signal wavelength, because its output is always ambiguous: the response to a low intensity of the optimal wavelength is the same as the response to a higher intensity of a less optimal wavelength. This is the essence of Rushton's Principle of Univariance, which says that because a rod or cone has only one way of responding (it can only increase or decrease its rate of firing), it cannot signal two different things at once. Since a single photoreceptor cannot code for both intensity and wavelength, a cone on its own is both wavelength and intensity blind.
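The ambiguity behind univariance can be sketched in Python. The Gaussian sensitivity curve and its peak below are purely hypothetical; the point is only that two different wavelength/intensity pairs can produce exactly the same output:

```python
import math

def cone_response(wavelength_nm, intensity, peak_nm=550.0, width_nm=50.0):
    """A single cone's output: intensity scaled by a hypothetical
    Gaussian spectral sensitivity curve peaking at peak_nm."""
    sensitivity = math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width_nm ** 2))
    return intensity * sensitivity

# A dim light at the optimal wavelength...
r1 = cone_response(550.0, intensity=0.5)
# ...produces the same output as a brighter light at a less optimal
# wavelength (600 nm, where sensitivity has fallen to exp(-0.5)):
r2 = cone_response(600.0, intensity=0.5 / math.exp(-0.5))
# r1 == r2: from its output alone, the cone cannot tell the two apart.
```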
Now, let's consider having two different sorts of receptor with overlapping sensitivity curves. In the region of overlap, wavelength is indicated by the ratio of, or difference between, the two rates of firing. Outside the region of overlap, perception is essentially monochromatic. To express this differently, with two receptors there are two degrees of freedom, so such a system could in principle signal two different things - both intensity and wavelength. A dichromatic system like this is quite good, although people who have only two functioning types of cone are colour blind. Still, colour blindness was not even noticed until a few hundred years ago.
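The ratio code can be sketched the same way. Again the two Gaussian sensitivity curves are hypothetical; what matters is that the intensity factor cancels in the ratio, leaving a signal that depends only on wavelength:

```python
import math

def response(wavelength, intensity, peak, width=60.0):
    """Output of one receptor with a hypothetical Gaussian sensitivity."""
    return intensity * math.exp(-((wavelength - peak) ** 2) / (2 * width ** 2))

def firing_ratio(wavelength, intensity, peaks=(530.0, 560.0)):
    """Ratio of two overlapping receptors' outputs: the intensity
    factor cancels, so the ratio varies only with wavelength."""
    return (response(wavelength, intensity, peaks[0])
            / response(wavelength, intensity, peaks[1]))

# Same wavelength at very different intensities gives the same ratio,
# while different wavelengths give different ratios.
same = (firing_ratio(545.0, 1.0), firing_ratio(545.0, 10.0))
different = (firing_ratio(540.0, 1.0), firing_ratio(550.0, 1.0))
```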
However, there is a problem with a dichromatic system: there are many different combinations of wavelengths that produce the same ratio or difference in the rates of firing. These are "metamers" - indistinguishable mixtures of wavelengths. A trichromatic system greatly reduces the number of combinations of wavelengths that produce identical ratios of firing, although there are still hundreds of metamers. Any overlapping system has metamers.
Evidence from colour blindness fits with the notion of trichromacy. Colour blindness is caused by the lack of one of the three types of cone cells. Since there are three cone types, there are three types of colour-blind people: protanopes (those lacking long wavelength cones), deuteranopes (those lacking middle wavelength cones), and tritanopes (those lacking short wavelength cones). Protanopes, for instance, confuse shades of red and green, among others.
There is overwhelming evidence that our colour vision is trichromatic, but Ewald Hering proposed that instead of three colour mechanisms we have four, arranged as opponent processes: a red vs. green process, a blue vs. yellow process, and a black vs. white process. Evidence for this comes from afterimages. If we fixate for some time on a red spot, we get an afterimage of a green spot; likewise, if we fixate on a white spot, we get an afterimage of a dark spot. This is caused by the adaptation of specific cones. It is most acutely noticeable when we view the afterimage against a background containing all wavelengths (i.e., white), since the unadapted receptors then respond more strongly than the adapted ones.
Hering also claimed that the fact we cannot imagine a "reddish-green" or a "bluish-yellow" is evidence for such opponent processes.
Physiological evidence suggests that both Hering's theory and the trichromatic theory are correct: there is trichromacy at the receptor level, as there are three types of cones, and opponency at the LGN and cortical level. However, Hering's opponent process theory is actually trichromatic. There are three opponent mechanisms, and Hering was wrong that there is a separate "yellow" receptor - the "yellow" signal is obtained by adding the "green" and "red" signals together. After all, if the original information is encoded by three cone types, no more information can be obtained at a later level.
The signals are transformed across three stages, from the retina to the LGN and then to the cortex. In the retina there are long wavelength cones, middle wavelength cones, and short wavelength cones. In the LGN these signals are transformed into the opponent processes of red vs. green, blue vs. yellow, and black vs. white. Then in the cortex, signals are transformed to represent intensity (lightness), hue (colour), and saturation. These processes allow us to separate colour from the amount of light.
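A schematic version of this recoding, with made-up weights (the real neural weights differ), might look like:

```python
def cones_to_opponent(L, M, S):
    """Schematic recoding of cone signals into opponent channels.
    The weights are illustrative; the real neural weights differ."""
    red_green = L - M              # red vs. green
    blue_yellow = S - (L + M) / 2  # blue vs. yellow ("yellow" = L + M)
    black_white = L + M            # black vs. white (luminance)
    return red_green, blue_yellow, black_white

# A "yellow" light drives L and M cones equally, so the red-green
# channel is silent while the blue-yellow channel signals "yellow":
rg, by, bw = cones_to_opponent(0.8, 0.8, 0.1)  # rg = 0.0, by < 0
```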
The opponent processes in the LGN are arranged in centre-surround opponent receptive fields. In the cortex, the receptive fields are effectively double opponent.
How do we perceive the colour, or rather, the "reflectance characteristics", of surfaces? The problem here is that the light reflected from a surface depends on both its reflectance characteristic and the characteristics of the illumination. We cannot tell the colour of a surface from just a small area - a surface could be "blue", or it could be "white" illuminated by a blue light. We have good, but not perfect, colour constancy - it does break down under fluorescent lights or under a single wavelength light, but these are not natural conditions.
We may solve the problem of colour constancy by adjusting the gain of different cone systems over an area. So, if there is a lot of red light around, the visual system turns down the gain of all the long wavelength cones in that area. This is known as von Kries adaptation.
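A minimal sketch of von Kries adaptation, assuming we know the cone responses to the illuminant itself (e.g., measured off a white surface in the scene):

```python
def von_kries(cone_signals, illuminant_signals):
    """Divide each cone class's signal by that class's response to
    the prevailing illuminant, turning down over-stimulated cones."""
    return [s / i for s, i in zip(cone_signals, illuminant_signals)]

# Hypothetical L, M, S responses to a grey surface under reddish light:
surface_under_red = [0.9, 0.5, 0.3]
# Responses to the illuminant itself, off a white surface:
white_under_red = [1.8, 1.0, 0.6]
# After adaptation the reddish cast is gone: the estimate is neutral.
adapted = von_kries(surface_under_red, white_under_red)  # [0.5, 0.5, 0.5]
```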
There is another idea, however: we calculate the relative hue of different parts of the scene, thus compensating for the wavelength properties of the illuminating light. This is the basis of most colour constancy algorithms, including Land's retinex model.
The logic of this idea is as follows. To detect the lightness of a surface (white, grey, black, etc.), note that the relative amount of light reflected stays the same. For example, a sheet of white paper reflects 90% of the light falling on it, while a sheet of black paper reflects only 5%. Therefore, there is always a 90:5, or 18:1, ratio in the amount of light reflected from these two surfaces, no matter what the level of illumination. Similarly, to detect the colour of a surface, note that the relative amounts of particular wavelengths reflected off the surface are always the same. In Land's algorithm, boundaries are particularly important, because a change in colour across a sharp boundary is almost certainly due to a change in reflectance characteristics rather than a change in illumination.
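The invariance of the 18:1 ratio can be checked directly (the 90% and 5% reflectances are the figures from the example above):

```python
# Reflectances from the example: white paper returns 90% of the
# light falling on it, black paper only 5%.
white_reflectance, black_reflectance = 0.90, 0.05

ratios = []
for illumination in (10.0, 100.0, 1000.0):  # arbitrary units of light
    light_from_white = white_reflectance * illumination
    light_from_black = black_reflectance * illumination
    ratios.append(light_from_white / light_from_black)

# Every ratio is 18:1, whatever the illumination level.
```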
Double opponent cells, as found in the cortex, would seem to be particularly useful for achieving colour constancy since they detect the relative colour changes over the receptive field.
Should we see colour vision as a process of detecting the wavelengths reflected from local areas (based on trichromacy) and then "correcting" those signals to give us colour constancy? This is the traditional view. However, there is Land's view, which is that colour vision has evolved to detect the invariant characteristics of surface colours - i.e., their relative reflectances.
The question is: do we take the spectral composition of the illumination "into account", or do we merely have mechanisms which bypass the effects of the illumination?