A one-pass error-diffusion dithering algorithm
The presentation assumes pixel values are real numbers in the range [0,1]; implementations of course use integer (or byte) arithmetic. Generalising to color images is easy, and works very well in practice; the key is to use vector valued pixels, and compute and distribute an appropriate vector error. It is also possible to use several grayscale values, rather than just black and white.
To dither the input image, traverse it in raster order. For each pixel, output a black pixel if the corresponding input value is less than 0.5, and a white pixel otherwise. Now compute an error term for the pixel: the difference between the input value and 0.0 if a black pixel was output, or between the input value and 1.0 if a white pixel was output (in which case the term is negative). This term measures the excess blackness or whiteness of the output pixel compared to the input pixel; we'd like the average error over any area of the picture to be small (e.g. if a region is dark, with average value 0.2, we'd like 80% of the pixels in the corresponding output region to be black).
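The threshold-and-error step can be sketched as a small Python function (the document gives no code; the name quantize is my own):

```python
def quantize(value):
    """Threshold one pixel and compute its error term.

    value is a float in [0, 1].  Returns (output, error), where
    output is 0.0 (black) or 1.0 (white) and error is the excess
    blackness or whiteness to be diffused to the neighbours:
    positive when a black pixel was output, negative for white.
    """
    output = 0.0 if value < 0.5 else 1.0
    error = value - output
    return output, error
```

For example, a pixel of value 0.2 yields a black pixel with error +0.2, while 0.7 yields a white pixel with error -0.3.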
Now distribute the error to the 4 neighboring pixels which have not yet been processed, in the following proportions:
    prev   cur    7/16
    3/16   5/16   1/16
Here cur is the current pixel; prev and the pixels on the previous row have already been processed, so they cannot receive any error. Note that the sum of the distributed errors is exactly the error term, but that the pixel to the right (which won't receive any further error, since it is processed next) receives a much larger share than the other three. It is important to clamp pixel values to the range [0,1]: values below 0 should silently be set to 0, and values above 1 to 1. (When using e.g. byte arithmetic this is even more important, to prevent overflow.)
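Putting the pieces together, here is a minimal Python sketch of the algorithm as described, operating in place on a list of rows of floats in [0, 1] (the function name dither is my own):

```python
def dither(image):
    """Floyd-Steinberg dither, in place, on a list of rows of floats.

    Each pixel is thresholded at 0.5; its error is then distributed to
    the four unprocessed neighbours in the proportions 7/16, 3/16,
    5/16, 1/16, with the results clamped to [0, 1].
    """
    h = len(image)
    w = len(image[0]) if h else 0
    for y in range(h):
        for x in range(w):
            old = image[y][x]
            new = 0.0 if old < 0.5 else 1.0
            image[y][x] = new
            err = old - new
            # (dx, dy, weight): right, below-left, below, below-right.
            for dx, dy, weight in ((1, 0, 7/16), (-1, 1, 3/16),
                                   (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    v = image[ny][nx] + err * weight
                    image[ny][nx] = min(1.0, max(0.0, v))  # clamp
    return image
```

For instance, dither([[0.5, 0.5]]) outputs a white pixel first (error -0.5), pushes -7/32 onto its right neighbour, and so outputs a black pixel second.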
Clearly, the algorithm can be performed in a single pass over the image, keeping only two rows of pixels (the current and next rows). It is even possible to scale the image and dither it in the same pass, add some gamma correction, or correct for the fact that printed black dots are larger than white ones.
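The two-row formulation can be sketched as a Python generator that consumes rows one at a time, keeping only the current and next rows in memory (the names dither_stream and _process_row are my own):

```python
def _process_row(cur, nxt):
    """Dither cur in place, diffusing error rightward into cur and
    downward into nxt (which may be None on the last row)."""
    w = len(cur)
    for x in range(w):
        old = cur[x]
        new = 0.0 if old < 0.5 else 1.0
        cur[x] = new
        err = old - new

        def add(buf, i, weight):
            if buf is not None and 0 <= i < w:
                buf[i] = min(1.0, max(0.0, buf[i] + err * weight))

        add(cur, x + 1, 7/16)
        add(nxt, x - 1, 3/16)
        add(nxt, x,     5/16)
        add(nxt, x + 1, 1/16)
    return cur

def dither_stream(rows):
    """Yield dithered rows one at a time, holding only two rows."""
    it = iter(rows)
    try:
        cur = list(next(it))
    except StopIteration:
        return
    for incoming in it:
        nxt = list(incoming)
        yield _process_row(cur, nxt)
        cur = nxt
    yield _process_row(cur, None)
```

A row can only be emitted once the row below it has been read, since its error diffuses downward; this is why the generator stays one row behind its input.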
The Floyd-Steinberg algorithm has two main problems:
- No clear pattern of dots is formed in areas of uniform greyness. Compared with an ordered dithering algorithm, this gives images an uneven feel. Of course, the problem is worse when the dots are large.
- "Ghosting" is often observed: a sharp line dividing two even regions of very different greyness might re-appear faintly several rows below, due to weird destructive interference of the error terms.