In YUV encoding, the chrominance components
(U and V) are given equal bandwidth, though
usually less than the luminance (Y), since
the human visual system is more sensitive to
brightness than to color. However, it has
been shown that our eyes aren't equally
sensitive to the different chroma components
either - we detect differences along
red-to-cyan transitions more easily than
along magenta-to-green ones.
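
In digital form, this unequal treatment usually
shows up as chroma subsampling: Y is stored at
full resolution while U and V are stored at a
fraction of it. A minimal Python sketch of the
idea (the 2x2 averaging and the names here are
illustrative, not taken from the glossary):

import numpy as np

def subsample_chroma(y, u, v):
    # Keep Y untouched; average each 2x2 block
    # of U and V to halve their resolution.
    u_sub = (u[0::2, 0::2] + u[1::2, 0::2] +
             u[0::2, 1::2] + u[1::2, 1::2]) / 4.0
    v_sub = (v[0::2, 0::2] + v[1::2, 0::2] +
             v[0::2, 1::2] + v[1::2, 1::2]) / 4.0
    return y, u_sub, v_sub

# Tiny 4x4 example frame.
rng = np.random.default_rng(0)
y = rng.random((4, 4))
u = rng.random((4, 4))
v = rng.random((4, 4))
y2, u2, v2 = subsample_chroma(y, u, v)
print(y2.shape, u2.shape, v2.shape)  # (4, 4) (2, 2) (2, 2)
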
YIQ encoding takes advantage of this by
rotating the U and V components 33 degrees
around the luminance axis, yielding the new I
and Q components. Q can then be filtered (or
quantized, when dealing with digital media)
much more severely than I without the loss
being perceptible to the viewer.
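
A minimal Python sketch of that rotation and of
quantizing Q more coarsely than I (the step sizes
and sample values are arbitrary, chosen only to
illustrate the idea):

import math

THETA = math.radians(33.0)

def uv_to_iq(u, v):
    # Rotate the (U, V) chroma pair by 33 degrees
    # around the luminance axis to get (I, Q).
    i = -u * math.sin(THETA) + v * math.cos(THETA)
    q = u * math.cos(THETA) + v * math.sin(THETA)
    return i, q

def quantize(x, step):
    # Uniform quantization with the given step.
    return round(x / step) * step

i, q = uv_to_iq(0.12, -0.30)
i_hat = quantize(i, 0.02)  # fine step for I
q_hat = quantize(q, 0.10)  # coarse step for Q
print(i, q, i_hat, q_hat)
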
YIQ encoding was widely used in the early days of
NTSC, but modern broadcasting equipment
usually encodes equiband U and V.
Source: Some MPEG glossary I had lying
around.