As stated, the problem of quantisation of an image is related only to the image's histogram (not to the actual layout of the pixels). A common error function is the squared L2 distance: minimise the sum, over all pixels, of the squared distance between each pixel's original color and the color representing it in the quantised image.
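This error function can be evaluated directly from the histogram. A minimal sketch (the function name and array layout are my own, not from the original):

```python
import numpy as np

def quantisation_error(colors, counts, palette):
    """Weighted sum of squared distances from each distinct image color
    to the nearest palette color.

    colors:  (N, 3) distinct colors appearing in the image
    counts:  (N,)   histogram count of each color
    palette: (K, 3) the quantised palette
    """
    # Squared distance from every color to every palette entry: (N, K).
    d2 = ((colors[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
    # Each color is represented by its nearest palette entry;
    # weight the squared distance by how often the color occurs.
    return float((counts * d2.min(axis=1)).sum())
```

Because only distinct colors are iterated over, the cost depends on the histogram size, not the pixel count.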

Differentiating this error function, we can see that at a local minimum, the following holds:

- The average value of all the pixels represented by some color in the quantised image is equal to that color.
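In symbols (the notation is mine: $h(x)$ is the histogram count of color $x$, and $R_j$ is the set of colors represented by palette color $c_j$):

```latex
E = \sum_j \sum_{x \in R_j} h(x)\,\lVert x - c_j \rVert^2,
\qquad
\frac{\partial E}{\partial c_j}
  = -2 \sum_{x \in R_j} h(x)\,(x - c_j) = 0
\;\Longrightarrow\;
c_j = \frac{\sum_{x \in R_j} h(x)\, x}{\sum_{x \in R_j} h(x)}.
```

That is, each palette color must be the histogram-weighted average of the colors it represents.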

This suggests the following optimization algorithm, which works quite nicely in practice:

1. Pick some initial set of colors.
2. Divide the image histogram into the regions of points closest to each of the colors (this is a Voronoi tessellation of the color cube!).
3. Move each color to the average of the points in its region, weighted by the histogram.
4. Repeat from step 2 until the weighted sum-of-squares error is small enough.
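The steps above can be sketched as follows (a minimal illustration; the function name, fixed iteration count, and random initialisation from histogram entries are my own choices, not prescribed by the text):

```python
import numpy as np

def quantise_palette(colors, counts, k, iters=20, seed=0):
    """Iterative palette optimisation over an image histogram.

    colors: (N, 3) distinct colors in the image
    counts: (N,)   histogram count of each color
    k:      number of palette colors
    """
    rng = np.random.default_rng(seed)
    # Step 1: pick an initial set of colors (here: random histogram entries).
    palette = colors[rng.choice(len(colors), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Step 2: assign each color to its nearest palette entry
        # (the Voronoi regions of the palette in the color cube).
        d2 = ((colors[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
        nearest = d2.argmin(axis=1)
        # Step 3: move each palette entry to the histogram-weighted
        # average of the colors in its region.
        for j in range(k):
            mask = nearest == j
            if mask.any():
                w = counts[mask]
                palette[j] = (w[:, None] * colors[mask]).sum(axis=0) / w.sum()
    return palette
```

A fixed iteration count stands in for the convergence test of step 4; in practice one would stop once the weighted sum-of-squares error stops decreasing.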

Note that convergence to the global minimum is *not* guaranteed; in practice this does not matter.