Nyquist's theorem is one of the most fundamental results in digital signal processing. It says that when sampling a signal, the sample rate needs to be at least twice the highest frequency present in the signal. Sampling at such a rate ensures that the original signal can be reconstructed perfectly from the digital samples.
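Written out symbolically (a standard formulation, with f_s for the sampling rate, f_max for the highest frequency present, and T = 1/f_s for the spacing between samples), the criterion and the textbook reconstruction formula look like this:

```latex
% Nyquist criterion: sampling rate versus the highest frequency present
% (strictly greater is needed if there is energy exactly at f_max)
f_s \ge 2 f_{\max}

% Whittaker-Shannon reconstruction of the signal from its samples x[n] = x(nT)
x(t) = \sum_{n=-\infty}^{\infty} x[n]\,\operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}
```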

A simple example: most humans can't hear sounds above 20 kHz. Thus, when sampling audio, 40,000 samples per second are sufficient to ensure that no audible sound is lost. Compact discs use 44,100 for good measure.

It is quite easy to prove the sampling theorem, or at least to make it plausible, using the convolution theorem. In brief, that theorem states that the Fourier transform of the convolution of two functions equals the product of their individual Fourier transforms, and, conversely, the transform of a product equals the convolution of the transforms.
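In symbols (writing F for the Fourier transform; depending on the transform convention the second identity may pick up a constant factor):

```latex
% Convolution theorem: convolution in one domain is multiplication in the other
\mathcal{F}\{f * g\} = \mathcal{F}\{f\} \cdot \mathcal{F}\{g\}
\qquad\text{and}\qquad
\mathcal{F}\{f \cdot g\} = \mathcal{F}\{f\} * \mathcal{F}\{g\}
```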

Now, sampling a function (in time or space or whatever) means multiplying it by a comb function. By the convolution theorem, this is the same as convolving the spectrum of the function in question (which is its Fourier transform) with the Fourier transform of the comb - which is, incidentally, another comb, but with inverse spacing: the finer it is in the time domain, the larger the distances in the frequency domain.
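Concretely, a comb with spacing T between its teeth transforms into a comb with spacing 1/T (up to an overall scale factor that depends on the transform convention):

```latex
% Dirac comb with spacing T in time ...
\operatorname{III}_T(t) = \sum_{n=-\infty}^{\infty} \delta(t - nT)
% ... transforms into a comb with spacing 1/T in frequency
\mathcal{F}\{\operatorname{III}_T\}(f) = \frac{1}{T}\sum_{k=-\infty}^{\infty} \delta\!\left(f - \frac{k}{T}\right)
```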

The comb is essentially a collection of delta functions arranged on a regular grid. Delta functions are very easy to convolve with: just imagine putting a copy of the other function around every delta peak. And here we are: obviously, if we want to recover our sampled function perfectly, the different copies of the spectrum must not overlap - which is to say that the distance between the delta functions (i.e. the sampling rate) must be at least twice the highest frequency in the function to be sampled!
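To spell out that last step: the copies of the spectrum sit at multiples of the sampling rate f_s = 1/T, and each copy extends f_max to either side of its centre, so neighbouring copies stay clear of each other exactly when

```latex
% Non-overlap of neighbouring spectral copies centred at 0 and at f_s
f_s - f_{\max} \ge f_{\max}
\quad\Longleftrightarrow\quad
f_s \ge 2 f_{\max}
```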

Ok, now did that make sense to anyone who doesn't already know what I'm talking about?
While the theorem might seem esoterically mathematical, it's actually simple common sense. (Isn't most of science just that?)
To understand it, we need to know what sampling means. An audio signal from a microphone or other source is a continuously varying voltage which is, at any point in time, proportional to the rapid pressure changes of the sound waves. When the signal is stored on a digital medium like a CD, it is measured at regular intervals (the sampling rate) and the levels of the samples are stored digitally. The more frequently the samples are taken, the more accurate the reconstruction of the sound will be when it is played back.
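For the digitally inclined, here is a minimal sketch of that "measure at regular intervals and store the levels" step (the 440 Hz test tone and the helper name record are made up for illustration; the 44,100 samples per second and 16-bit levels mirror what a CD does):

```python
import numpy as np

SAMPLE_RATE = 44100   # samples per second, as on an audio CD
BITS = 16             # each level stored as a 16-bit integer, as on a CD

def record(signal, duration):
    """Measure a continuous signal at regular intervals and store the levels digitally."""
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE   # the sampling instants
    levels = signal(t)                                          # measure the signal (range -1..1)
    return np.round(levels * (2 ** (BITS - 1) - 1)).astype(np.int16)

# A 440 Hz tone standing in for the continuous electrical signal from a microphone.
samples = record(lambda t: np.sin(2 * np.pi * 440 * t), duration=0.01)
print(len(samples), samples[:4])   # 441 samples for 10 ms of audio
```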
Now, take the simplest sound wave, as produced by a tuning fork: a sine wave of a single frequency. To reproduce any semblance of the sine wave, you would have to sample it at least twice per cycle (once at the peak and once at the trough), which would give a triangular wave of the same frequency. Of course, there is a slim chance that the two samples per cycle fall exactly where the wave crosses zero, but the odds of that are very low. Sampling at a lower rate, however, would always result in a waveform that resembles the original in neither shape nor frequency - the tone folds down to a lower "alias" frequency.
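A quick numerical illustration of that last point (a sketch only; the 6 kHz "tuning fork" tone and the two sample rates are picked just for the example). Sampled fast enough, the tone's spectral peak shows up where it belongs; sampled too slowly, it folds down to a different, lower frequency:

```python
import numpy as np

def dominant_frequency(tone_hz, sample_rate, duration=1.0):
    """Sample a pure tone and report where its spectral peak ends up."""
    n = int(sample_rate * duration)
    t = np.arange(n) / sample_rate
    samples = np.sin(2 * np.pi * tone_hz * t)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

print(dominant_frequency(6000, 20000))  # ~6000 Hz: more than two samples per cycle, no aliasing
print(dominant_frequency(6000, 8000))   # ~2000 Hz: under-sampled, the tone aliases to 8000 - 6000 Hz
```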

Simple! Right?
