Wavelets are a very simple class of mathematical transforms to apply to a signal or other chunk of data, typically for the purpose of compression or the like. They get their name because the decomposition of the function looks like a single period of a wave; for example, the simple Haar transform looks like a square wave.
The basic idea in a wavelet filter is that you have a function which, given 2^n pieces of data, produces the average of the data (the a - or average - channel) and the (2^n)-1 pieces of data needed to reconstitute the original 2^n (the d - or difference - channels). Typically the filter is designed so that if any of the d channels are lost, the overall integrity of the data is preserved. As such, after the transform is applied, the data is usually stored with the a channel first and the d channels coming afterwards. For a thorough wavelet transform, the process is repeated on the a channel until there is only one data element for a sum-total a channel, followed by a whole bunch of d channels. This way, as more of the data is, say, downloaded, the reconstruction becomes more accurate, since more of the d channels are present to reconstitute the original signal.
Probably the simplest wavelet transform is the Haar transform: given data points x1 and x2, it produces a=(x1+x2)/2 and d=x1-a. Then to reconstitute the original values, it is simply x1=(a+d) and x2=(a-d). Usually both channels are stored multiplied by 2 so that all of the precision is preserved. Extending this filter to higher dimensions is rather simple, though there are two ways to go about it (either apply the Haar transform to more points at a time, or apply it on each axis).
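To make that concrete, here's a minimal sketch of a one-level Haar transform and its inverse in Python (the function names are my own; the multilevel version just repeats the process on the a channel, as described above):

```python
def haar_forward(x):
    """One Haar level: pairwise averages (a channel) and differences (d channel)."""
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    d = [x[i] - a[i // 2] for i in range(0, len(x), 2)]
    return a, d

def haar_inverse(a, d):
    """Reconstitute the signal: x1 = a + d, x2 = a - d."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def haar_multilevel(x):
    """Repeat on the a channel until one element remains (assumes a
    power-of-two length). Returns the sum-total a channel plus the d
    channels, coarsest first, matching the storage order described above."""
    ds = []
    while len(x) > 1:
        x, d = haar_forward(x)
        ds.append(d)
    return x, ds[::-1]
```

For example, `haar_forward([5, 3, 8, 2])` gives a = [4.0, 5.0] and d = [1.0, 3.0], and `haar_inverse` recovers the original exactly. (For lossless integer storage you'd keep 2a and 2d instead, as noted above, so no fractional bits are lost.)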
Now, the astute will notice that the wavelet transform itself doesn't actually compress the data - it only rearranges it. Where the compression comes in is that the d channels tend to be very compressible, especially when patterns are involved, since the d channels will tend to have a very regular pattern even if the underlying a channel does not, so traditional windowing compression mechanisms such as good ol' LZ77 end up compressing the resulting data very nicely. As an example, with my own CODEC (which uses a modified Haar wavelet that isn't technically a wavelet, but exhibits many wavelet properties), an image of a multilayered checkerboard pattern compresses by about 128:1. (Plain LZ77 gets a mere 43.5:1 on the same image. Okay, so it's not a very representative image in general, but it's a nice case to exaggerate the difference.)
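A toy illustration of why the d channels compress so well (my own example, not from any particular codec): take a smooth ramp, where every sample is different, and after one Haar level the d channel collapses to a single repeated value:

```python
# A ramp: every sample differs, so there's no verbatim repetition to exploit.
x = list(range(256))

# One Haar level: pairwise averages and differences.
a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
d = [x[i] - a[i // 2] for i in range(0, len(x), 2)]

# The a channel still climbs (0.5, 2.5, 4.5, ...), but the d channel is
# the constant -0.5 repeated, which an LZ77-style window compresses to
# almost nothing.
```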
Of course, some of the other wavelets don't work in the same way, strictly speaking. They use different numbers of samples in the various channels, and do different things to the channels. Unfortunately, Haar is about all I know about wavelets. :)
The big disadvantage to wavelets is that, if made lossless, for every big win there's a huge loss in terms of compression ratio. For example, whereas plain LZ77 doesn't break down too badly with random data (it rarely gets bigger), wavelets tend to make the cases where LZ77 breaks down explode - one image of pure random data stays basically the same size under plain LZ77, whereas with the wavelet transform applied it expands at about 1:2. Similarly, some images which compress very nicely under LZ77 (a version of the self-portrait on my home node, for example) don't compress very well at all with the Haar transform (the wavelet CODEC gets "only" 1.95:1, while plain LZ77 gets 2.4:1).
Now, the best sort of wavelet compression is one where it adapts to the different levels and decides on a per-level basis how to compress it. Unfortunately, that is certainly non-trivial, and one of Michael Barnsley's patents probably covers it, too. :P
In the interest of establishing prior art in case nobody has thought of this (but it gets patented later), though, here's an algorithm, in pseudocode format, for doing this sort of multiresolution analysis:
- Determine the compressed size of the current level of detail with a variety of filters; these filters may (and should) recursively call this procedure for the chunk of data passed to them. Keep track of the minimum size.
- Split the current level into 2^n chunks (for n dimensions) and apply this algorithm to those smaller chunks.
- Write out whichever way puts out the smallest image.
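Here's a rough Python sketch of that search, under loud assumptions: `compressed_size` is a stand-in cost function (zlib here, not a real entropy coder), `filters` is whatever list of (name, transform) pairs you plug in, and the split step is shown for n=1 (halves) rather than general 2^n chunks:

```python
import zlib

def compressed_size(data: bytes) -> int:
    """Stand-in cost function: size of the chunk under a generic compressor."""
    return len(zlib.compress(data))

def best_encoding(data: bytes, filters, min_len=8):
    """Return (description, size) of the cheapest way found to store `data`."""
    # Option 0: store the chunk as-is.
    best_desc, best_size = "raw", compressed_size(data)
    # Option 1: try every filter on the whole chunk.
    for name, transform in filters:
        size = compressed_size(transform(data))
        if size < best_size:
            best_desc, best_size = name, size
    # Option 2: split into chunks and recurse on each.
    if len(data) >= 2 * min_len:
        mid = len(data) // 2
        ld, ls = best_encoding(data[:mid], filters, min_len)
        rd, rs = best_encoding(data[mid:], filters, min_len)
        if ls + rs < best_size:
            best_desc, best_size = ("split", ld, rd), ls + rs
    return best_desc, best_size
```

The returned description is the tag tree you'd write out alongside each chunk so the decoder knows which transform (or split) to undo.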
Now, that's kind of obtuse, but it's very black-boxy in design. For example, one of the filters would be a generic "wavelet" filter, which would try various wavelet transforms on the data and then apply the sum-total compression-determination algorithm to its a channel (so that channel can try a different filter, or be split up into 2^n chunks to be further processed with some other filter), or whatever.
Of course, each written-out chunk would have to be tagged with the transform applied to it so that it could be reconstituted when read back in.
Damnit, I want to implement this now. :) Unfortunately, determining the compressed size of a chunk of data is a bit non-trivial. :/