To give a layman's definition:

"You cannot measure/observe something without changing that which you are measuring/observing."

Romantics and artists find this appealing because it appears to refute the scientific method. This is far from the truth. It simply means you must account for the effect of the measurement.

Source: Hawking, Stephen, A Brief History of Time, Bantam Books, New York, 1988. (Last updated 12/30/03.)


The best metaphor I've heard for the HUP is tomato seeds. Let's say that tomato seeds are our metaphorical particles, and in order to determine their position you must touch them with your knife. Well, what happens? As soon as you locate their position (you poke 'em), they start to move. In other words, the measurement of position directly affects velocity (being speed and direction). The effect is symmetrical, so attempting to measure velocity will naturally disturb position (roughly equivalent to the seed bouncing off your knife or something -- though I don't think tomato seeds ever reach such high velocities. At least not in my house (young man!).)

As for the Romantics, though Heisenberg didn't begin the Downfall of Scientific Method, he did end classical determinism. In the Newtonian world, it was still possible that the grand equation for all particles could be written out starting at the beginning of time and would tell exactly what things would be like in the future: determinism. But Uncertainty broke down the movement of particles to probability functions (known as wave functions). Now, instead of classical determinism, we can do no better than quantum determinism - meaning that if it's at all possible to write such equations, then they will have to come in the form of probabilities for where particles could be.

The HUP also explains why a transporter will never work (though those tricky people at Star Trek mention something about a Heisenberg Compensator. Harumph!!)

Mathematically expressed as:

    Δx Δp ≥ hbar/2

(Okay, yeah, trying to express mathematical formulae in HTML sucks eggs. I can't wait until Mozilla or something learns how to grok MathML. Or TeX.)

h-bar is Planck's constant divided by 2π.

Essentially what this means is that the more precisely you know something's position (x), the less you know about that object's momentum (p) and vice versa.

There's a nice web page about the Heisenberg Uncertainty Principle at
    http://tardis.svsu.edu/~slaven/uncertainty/uncertainty7.html

I found that an explanation based on control systems helped me the most. And since people use oscilloscopes to measure this stuff anyway, it's probably pretty close to the truth.

When you take the position of your particle, it looks like an impulse. When you take the velocity, it looks like a step function.

The problem is that while a pure impulse is accurate in position, its area is indeterminate. And where a step function tells you something about magnitude, it is homogeneous over time, and cannot be pinned down anywhere.

Whenever we measure a particle, we always (by definition) use some sort of filter that gives us the position and velocity of the particle to some degree, and these degrees of error are inherently dependent on each other.

Another way to see it: Think of a photograph of a bullet in flight. We can take a picture with a normal camera, and see a streak on the film where it passes. We can't easily measure its position, but by noting the time of exposure and measuring the streak, we can tell how fast it's going. If we take a picture with a high-speed device, we'll get a clear picture of the bullet's position, but no indication of how fast it's going. If there is streaking, then the speed measurement will be much less accurate than had the picture been taken at a slower speed, which gives a better sample. Our 'interference through measurement' that the romantics like so much is never even an issue.

In quantum mechanics _everything_ about the position and velocity of a particle is described by a complex-valued function of its position, the wavefunction psi(x). conj(psi(x))*psi(x), the square of the absolute value of psi, describes the probability density of finding the particle around the place x. Similarly, the probability density of finding the particle with a momentum around p is given by conj(psif(p))*psif(p), where psif(p) is the Fourier transform of psi(x). So a particle with a precisely defined position would have a wave function that looks like a 'spike', which is zero everywhere except where the particle is. This kind of function is called the "Dirac delta function" in math jargon. On the other hand, a particle with a precisely defined momentum would have a wave function that looks like a sine wave (its Fourier transform must be a Dirac delta function). So the uncertainty principle follows from the fact that the sine function and the Dirac delta function are not the same function. It is impossible to have a function that has a precisely defined frequency but is only non-zero at a single point.

In fact, one way to look at wavelets is that they are kind of 'compromises' in this respect, being moderately localized in both time and frequency. This is part of what makes them so useful in representing different kinds of data.

Curiously, it turns out that the bell curve that describes the normal distribution is the best compromise between localization in space and frequency! Used as a wave function, it gives the equality in the uncertainty principle inequality.
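If you want to see that trade-off with numbers rather than words, here is a minimal numerical sketch (the grid, the packet widths, and the hbar = 1 units are my own illustrative choices, not anything from the writeups above). It builds Gaussian wave packets of different widths, takes the discrete Fourier transform, and compares the spread in x with the spread in k:

    import numpy as np

    # Illustrative grid, working in units where hbar = 1
    N = 4096
    x = np.linspace(-100.0, 100.0, N, endpoint=False)
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
    dk = k[1] - k[0]

    def spreads(psi):
        """Standard deviations of |psi|^2 in x and of its Fourier transform in k."""
        psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize
        px = np.abs(psi)**2
        mx = np.sum(x * px) * dx
        sx = np.sqrt(np.sum((x - mx)**2 * px) * dx)

        phi = np.fft.fftshift(np.fft.fft(psi))                # momentum-space amplitude
        pk = np.abs(phi)**2
        pk = pk / (np.sum(pk) * dk)                           # normalize in k
        mk = np.sum(k * pk) * dk
        sk = np.sqrt(np.sum((k - mk)**2 * pk) * dk)
        return sx, sk

    for width in (0.5, 2.0, 8.0):                             # narrow to broad packets
        psi = np.exp(-x**2 / (4 * width**2))                  # Gaussian with sigma_x = width
        sx, sk = spreads(psi)
        print(f"sigma_x = {sx:.3f}   sigma_k = {sk:.3f}   product = {sx * sk:.3f}")

Squeezing the packet in x fattens it in k and vice versa, and for every width the product comes out at about 0.5, i.e. right at the hbar/2 floor (with hbar = 1), which is the sense in which the Gaussian is the best possible compromise.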

A comment on one or two of the writeups above:

The Energy-Time uncertainty relation is not a true uncertainty relation, as anyone familiar with P. W. Atkins' work on the subject may know. Rather, it is a consequence of the fact that the time-dependent Schrödinger equation is first order in time.

This fact is best highlighted by noting that in each of the other uncertainty relations, the two observables have readily identifiable operators associated with them. Energy does, of course: that's the Hamiltonian. But time has no operator in quantum mechanics. Instead, it is a parameter.

Also think of the odd consequences that such an uncertainty relation would have for relativistic quantum mechanics.

The common popularization (which appears several times in this node) that the uncertainty principle is about "measurement affecting that which is being measured" is incomplete. In fact, the UP also applies when no measurement by anyone is happening. It is, for example, used to prove that electrons can't exist within a nucleus: if they did, their location would be so exactly fixed that their momentum, by virtue of the UP, would become extremely uncertain. Momentum is mass multiplied by velocity, and an electron's mass is so small that the velocity derived from the uncertainty is many times that needed to escape the nucleus. QED.

The Heisenberg Uncertainty Principle is a very important and central part of quantum mechanics. It is one of the primary features that distinguishes quantum and classical mechanics and gives rise to "quantum weirdness". The most familiar form is the original position-momentum uncertainty relation, which can be stated as follows:

Δx Δp ≥ hbar/2

where Δx is the uncertainty in position, Δp is the uncertainty in momentum, and hbar is Dirac's constant, which is Planck's constant divided by 2π. hbar is small, but it is not zero, which tells you that for a minimum uncertainty state[1] Δx and Δp are inversely proportional. That means that if one is small the other must be large.

Those are the mathematical facts behind the basic relation, but understanding what it actually means takes some more work. The uncertainty principle is a frequent subject of modern physics abuse syndrome and is often even misunderstood by scientists. Below I will attempt to explain a little about what it means and what it does not mean. The position-momentum uncertainty relation is actually part of a more general mathematical relation in quantum mechanics, which is sometimes called the generalized uncertainty principle (to distinguish it from the specific one for position and momentum). There is also an energy-time uncertainty principle that is superficially similar but is not actually an example of the generalized uncertainty principle. First I will give an informal explanation of the uncertainty principle, then a more formal explanation and a proof. I will look at the topic from the standpoint of our modern understanding (and use roughly the Copenhagen Interpretation for those experts in the audience) not necessarily how early quantum physicists like Bohr and Heisenberg understood it.

Informal Explanation

What Exactly Does "Uncertainty" Mean in Quantum Mechanics?

Quantum mechanics usually doesn't tell us exactly what outcome a measurement performed on a physical system will have; it only tells us which outcomes are possible and how likely each outcome is. Even if you have two systems that are in identically the same state, quantum mechanics says you can get different results from making the same measurement on each system. For example, quantum theory tells you the probability of finding the electron in a hydrogen atom within a given distance of the nucleus, though it doesn't tell you where the electron will be specifically. If you were to take two hydrogen atoms in the ground state and measure the position of the electron in each atom, you would in general get different results. That is to say that the outcome of measurements in quantum mechanics is nondeterministic.

If you have such a probability distribution for an observable quantity O, you can define an expectation value, denoted ⟨O⟩, which is the value you'd get if you averaged the results of measurements made on many systems all in that quantum state. Since a range of different possible outcomes happen, there is a sort of natural spread of values around the expectation value that are still fairly likely to occur for any one measurement. This characteristic spread can be measured mathematically by finding the standard deviation of the set of possible outcomes, and that gives us the uncertainty in the value of O, which we'll call σO. So the uncertainty in O is the spread of possible values that might result from measuring O, and it gives us an idea of roughly how far we might expect any one measurement to differ from the average value of many measurements done on identical systems.
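As a concrete toy example of those two definitions (the observable and its probabilities below are entirely made up for illustration), the expectation value and the uncertainty are just the mean and standard deviation of the outcome distribution:

    import numpy as np

    # Hypothetical observable O with four possible measurement outcomes
    outcomes      = np.array([1.0, 2.0, 3.0, 4.0])   # possible values of O
    probabilities = np.array([0.1, 0.4, 0.4, 0.1])   # made-up probabilities, summing to 1

    expectation = np.sum(probabilities * outcomes)                     # <O>
    variance    = np.sum(probabilities * outcomes**2) - expectation**2
    sigma_O     = np.sqrt(variance)                                    # the uncertainty in O

    print(f"<O> = {expectation:.2f}, sigma_O = {sigma_O:.2f}")
    # <O> = 2.50, sigma_O ~ 0.81: a single measurement typically lands
    # within about 0.8 of the average over many identically prepared systems.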

The Meaning of the Position-Momentum Uncertainty Relation

Now that we have an idea of what "uncertainty" means, we can return to the position-momentum uncertainty relation. The relation

Δx Δp ≥ hbar/2

means the following: Suppose that you had a very large number of identically prepared quantum systems[2]. On half of the systems you measured the position, x, and on the other half you measured the momentum, p. Then you used the standard deviation to find the uncertainty in x (from your x measurements) and the uncertainty in p (from your p measurements). The uncertainty principle is saying that those two uncertainties would have the relationship given above. This is true for any possible quantum state. If you choose to put a system in a state where it has a very well defined position (very small spread of possible positions), then it will have a very poorly defined momentum (a very wide range of values). This is a statement about the fundamental nature of the quantum state. As Heisenberg puts it, "This indeterminateness is to be considered an essential characteristic of the electron, and not as evidence of the inapplicability of the wave picture."[3]
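Here is a hedged simulation of exactly that two-ensemble procedure for one particular state (a Gaussian wave packet; the width, the sample size, and the hbar = 1 units are my own assumptions, and the momentum spread used is the one quantum mechanics assigns to that packet). Half the copies get a position measurement, the other half a momentum measurement, and neither measurement touches the other ensemble:

    import numpy as np

    rng = np.random.default_rng(0)
    hbar = 1.0                       # illustrative units
    sigma_x = 0.7                    # position spread of an assumed Gaussian wave packet
    sigma_p = hbar / (2 * sigma_x)   # momentum spread that packet must have

    n = 100_000
    x_results = rng.normal(0.0, sigma_x, n)   # "measure" x on half of the identical copies
    p_results = rng.normal(0.0, sigma_p, n)   # "measure" p on the other half, never both on one copy

    dx = x_results.std()
    dp = p_results.std()
    print(f"Δx·Δp ≈ {dx * dp:.3f}   (bound: hbar/2 = {hbar / 2})")

The product of the two measured spreads hovers right at 0.5, the minimum the relation allows for this minimum-uncertainty state; any other state would give a larger product, never a smaller one.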

Implications and Misconceptions

Probably the most common misconception is that the Heisenberg Uncertainty Principle is equivalent to the statement, "You can't measure a system without changing it." If you look at the way I explained it above, the position and momentum measurements are being done on entirely separate systems, which happen to have started out in the same state, so it is not caused by the first measurement interfering with a second. In fact, we haven't said anything about what state the systems are in after a measurement.

Now, in quantum physics when you measure an observable of a system, the system undergoes what is sometimes called a wavefunction collapse[4], so that afterward it is in a new quantum state, and with most sorts of measurements, if the same measurement is repeated immediately you will always get the same result a second time. That means if you measure the position of that electron in the hydrogen atom to high precision and find it to be at a specific place, then if you measure the position again immediately you'll find it in the same place. Thus, after the first measurement the electron must have collapsed into a quantum state with a well defined position (even though it did not have one before the measurement). Then the uncertainty principle tells us that this new state must have a poorly defined momentum (a wide spread of possible values), so if we did try to measure the momentum of the electron after we measured its position, we'd just get some really uncertain nonsense (a wide and erratic spread of values). As a result we could not get any information about what the momentum originally was from those later momentum measurements. So it is true that a measurement of a particle's position will destroy any information about its momentum (and vice versa), but the point is that this is a result of the uncertainty principle together with the entirely separate assumption of wavefunction collapse. Since the uncertainty relation doesn't depend on making measurements on the same system, only on identically prepared systems, the uncertainty principle must be a statement about the inherent uncertainty in the original state.

Many people think that the uncertainty principle has to do with measuring devices just being flawed and not being able to make precise measurements, but notice that the uncertainty principle allows Δx to be as small as we like, as long as we don't care about Δp. Or we can have Δp as small as we like, as long as we don't care that Δx is large. Beyond that, notice that I've been insisting that the uncertainty depends on the quantum state of the system and haven't mentioned anything about the measuring apparatus. In fact, the uncertainty principle we've been discussing assumes the measuring apparatus can make a perfectly precise measurement. The uncertainty is in the intrinsic "fuzziness" of the system. We are not even taking into account actual measuring apparatus that have other sources of imprecision.

One often reads or sees "derivations" of the uncertainty principle from ideas like trying to observe atoms with a microscope and considering the recoil from each photon hitting the atom. When quantum mechanics was first developed, people often spoke of this idea and, indeed, Heisenberg discusses it[5]. This is seen by modern physicists as what might be called a "motivation": it motivates one to consider certain ideas, but it is not actually a well-formed proof. It seems you might reasonably think that perhaps a particle really does have both an exact position and momentum, and we just can't measure what they are. That sort of theory falls under a class of ideas called hidden variables theories, and it turns out that Bell's Theorem tells us that, under fairly general assumptions, there should be a testable difference between those sorts of theories and quantum theories. The details of that proof are beyond the scope of this writeup. Subsequent experiments suggest that the physical world agrees with quantum mechanics, so it appears that it's not just that we can't measure the position of the electron in a hydrogen atom precisely, it's that the electron really doesn't have a precisely defined position! That is weird in the extreme, but as far as we can tell it seems to be the truth. This also means that the uncertainty principle cannot be derived from classical mechanics, and while you can motivate why you might consider such an idea, you can't really "make sense of it" from classical mechanics alone.

Implications of de Broglie's Relation

In fact, once you know de Broglie's relation and the statistical interpretation, the momentum-position uncertainty relation comes fairly directly from the mathematical theory of waves. See the de Broglie's relation node for more details.
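In sketch form (this is just the standard wave-packet argument, restated here rather than taken from that node): de Broglie's relation says p = h/λ = hbar·k, where k = 2π/λ is the wavenumber, so Δp = hbar·Δk. Ordinary Fourier analysis already tells you that any classical wave packet satisfies Δx Δk ≥ 1/2, and multiplying through by hbar turns that purely mathematical statement into Δx Δp ≥ hbar/2.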

Where is the Uncertainty Principle in Everyday Life?

If the uncertainty relation is supposedly so fundamental, why didn't anyone notice it was there until the 20th century? Why can't you observe it when you're bowling or when two cars crash? This question relates to the fairly profound issue of correspondence between quantum and classical mechanics, which includes the interpretation of quantum mechanics and the study of quantum decoherence, but we can give a pretty good, simple answer for everyday situations. The answer? hbar is small. One way you can often get the "classical limit" in quantum mechanics is to set hbar = 0. In the case of the position-momentum uncertainty principle, that means Δx Δp ≥ 0, which is by definition true even in classical physics, since both of the uncertainties are non-negative numbers. Now, of course, in real life hbar is not zero, but it turns out that it's pretty small. hbar is approximately 10⁻³⁴ J·s. This means that if you have a 0.5 kg ball and measure its position to a precision of 1 μm (a fraction of the width of a human hair), the uncertainty principle implies that the uncertainty in velocity must be at least 10⁻²⁸ m/s. In everyday life there's no way you can tell the velocity to that sort of precision. To give some clue of how precise this is, consider that at 10⁻²⁸ m/s it would take more than 300 billion years for that object to move 1 nm (one millionth of a millimeter). So, the bottom line is that the minimum uncertainties imposed by the uncertainty principle are way too small to notice in everyday situations. You wouldn't expect to see them until you start doing very precise measurements or deal with very small systems.
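The arithmetic in that example is easy to check directly (the 0.5 kg ball and 1 μm precision are just the numbers quoted above):

    hbar = 1.054571817e-34    # reduced Planck constant, J*s
    m  = 0.5                  # kg, the ball from the example above
    dx = 1e-6                 # m, position known to 1 micrometre

    dv = hbar / (2 * m * dx)               # minimum velocity uncertainty from Δx·Δp ≥ hbar/2
    seconds_per_nm = 1e-9 / dv             # time to drift 1 nm at that speed
    years = seconds_per_nm / (365.25 * 24 * 3600)

    print(f"Δv ≥ {dv:.2e} m/s")                      # ~1e-28 m/s
    print(f"time to move 1 nm: {years:.2e} years")   # ~3e11 years, i.e. hundreds of billions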

The Energy-Time Uncertainty Principle

Oftentimes people quote an energy-time uncertainty principle, but it is not the same as the one we've been discussing, the one used for position and momentum. Although that statement is usually given in a similar mathematical form, it's different because time is not an observable in quantum mechanics the way position and momentum are. This should not be too surprising, since you don't measure a system to find its time like you would to find, say, its momentum. Rather, you think of time as existing independently of the system, as something that the observer keeps track of. For this reason there's some ambiguity about what you might mean by "uncertainty in time", and the same mathematical arguments cannot be used. If you define what you mean by "uncertainty in time" in a clever way, though, you can get a relation that looks superficially the same as the position-momentum uncertainty relation.

The Generalized Uncertainty Relation

The position-momentum uncertainty relation is actually just a special case of a more general theorem in quantum mechanics sometimes called the generalized uncertainty principle. The term "Heisenberg uncertainty principle" is used to refer either to the general principle or to the position-momentum uncertainty principle, though the latter was the one Heisenberg himself originally stated. The generalized uncertainty principle says that, in quantum mechanics, if you have two observable quantities of a system, then there will in general be some lower bound on the product of the uncertainties with which the two values can be known. Mathematically, for two observables A and B, ΔA ΔB ≥ L. In general, this lower bound is not zero, meaning that the smaller the uncertainty in one, the greater the uncertainty in the other. The lower limit is determined by the commutator of the operators representing the two observables, denoted [A,B], which roughly measures how much it matters in which order the two act on the system. So in general L is dependent on both of the observables being discussed and the state of the system. This is defined for any pair of observables, which can include things like position, momentum, energy, orbital angular momentum, and spin.

The form of the Heisenberg Uncertainty Principle that is normally discussed is the statement of it for position and momentum. For position x and momentum p the lower bound on uncertainty turns out to be independent of the state, and results in the uncertainty relation

Δx Δp ≥ hbar/2

In fact, there are many pairs of observables that obey exactly the same uncertainty relation. Two such observables are said to be "canonically conjugate" to one another, which is a term from classical mechanics (specifically Hamiltonian mechanics). Some examples of canonically conjugate pairs of observables are the following: position and momentum, the components of angular momentum along two perpendicular axes, and a component of linear polarization and a component of circular polarization of light. The generalized uncertainty principle is a very nice and useful mathematical result, but it is general enough that only so much can be said without resorting to a lot of math, which is what I will do now.

Formal Explanation

For an observable O, let ⟨O⟩ be the expectation value of O and let ΔO be the standard deviation of O, so (ΔO)² = ⟨O²⟩ - ⟨O⟩². For a complex number z, let conj(z) be the complex conjugate of z and |z|² be the complex norm squared, z·conj(z). Let adjoint(O) be the adjoint of O, also called the hermitian conjugate, and remember that all operators in this proof represent observables, so they are all hermitian (self-adjoint). Let the commutator of A and B be [A,B] = AB - BA. Finally, hbar is Planck's constant divided by 2π.

Given two observable quantities represented by operators A and B:

(ΔA)² (ΔB)² ≥ |⟨[A,B]⟩|²/4

This is the general statement of the Heisenberg Uncertainty Principle for any two quantities you can observe. The value of the lower limit of the uncertainty depends on what you're measuring, and it can be zero in some cases. This is not the same as the energy-time uncertainty relation, because this is stated for observables of the system as represented by operators on the Hilbert space of the system. Time is not an observable, and it is represented in quantum mechanics as a scalar parameter of the theory, an independent variable, not an operator. Also, the sense in which uncertainty is defined is different. There is extensive discussion of the meaning of these relations above, but it is worth pointing out that this theorem applies to the situation where observable A is measured on one group of systems and observable B is measured on an entirely different group of systems, as long as they are prepared in the same quantum state.
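As a sanity check rather than a proof, the general inequality is easy to verify numerically for a small system. The sketch below uses spin-1/2 angular momentum components Sx and Sy and an arbitrary made-up state (none of these choices come from the writeup; hbar is set to 1 for convenience):

    import numpy as np

    hbar = 1.0                                      # illustrative units
    # Spin-1/2 operators: S_i = (hbar/2) * (Pauli matrix)
    Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
    Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)

    # An arbitrary normalized spin state, chosen purely for illustration
    psi = np.array([0.8, 0.6 * np.exp(0.9j)])
    psi = psi / np.linalg.norm(psi)

    def expect(op):
        """<psi| op |psi> (complex in general)."""
        return psi.conj() @ op @ psi

    def uncertainty(op):
        """Standard deviation of a hermitian operator in the state psi."""
        return np.sqrt(expect(op @ op).real - expect(op).real ** 2)

    lhs = uncertainty(Sx) * uncertainty(Sy)
    commutator = Sx @ Sy - Sy @ Sx                  # equals i*hbar*Sz
    rhs = abs(expect(commutator)) / 2

    print(f"ΔSx·ΔSy = {lhs:.4f}   |<[Sx,Sy]>|/2 = {rhs:.4f}")

Whatever state you plug in, the first number never drops below the second, which is exactly what the theorem promises (here, unlike the position-momentum case, the bound depends on the state).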

Proof

For this proof, I will use Dirac notation, where a state of the system labeled ψ, a vector in the Hilbert space of the system, is denoted by the ket |ψ⟩ and its dual is denoted by the bra ⟨ψ|. The inner product of a bra φ with a ket ψ is then denoted ⟨φ|ψ⟩. Also, the expectation value of an observable O for a state ψ is ⟨O⟩ = ⟨ψ|O|ψ⟩, where O is a hermitian operator on the Hilbert space.

Let the system be in a state |ψ⟩, and consider two observables represented by the operators A and B. To prove the theorem as stated above, we will need to restate both sides of the inequality.

First, consider an operator δA that gives the deviation of A from its average value.

δA = A - ⟨A⟩

⟨δA²⟩ = ⟨(A - ⟨A⟩)²⟩ = ⟨A² - 2⟨A⟩A + ⟨A⟩²⟩ = ⟨A²⟩ - 2⟨A⟩⟨A⟩ + ⟨A⟩²

Thus, ⟨δA²⟩ = ⟨A²⟩ - ⟨A⟩² = (ΔA)²

And the same goes for B and δB.

Next we need to work out some facts about these deviation operators, δA and δB. We can fairly easily determine that they have the same commutation relations as A and B.

[δA,δB] = [A - ⟨A⟩,B - ⟨B⟩] = [A,B] - [⟨A⟩,B] - [A,⟨B⟩] + [⟨A⟩,⟨B⟩]

⟨A⟩ and ⟨B⟩ are just numbers so they commute with any other object; thus,

[δA,δB] = [A,B]

Also, both δA and δB are hermitian, since adjoint(A - ⟨A⟩) = adjoint(A) - conj(⟨A⟩) = A - ⟨A⟩, given that A is hermitian (which also implies that ⟨A⟩ must be a real number).

Now we can proceed with the meat of the proof.

⟨[δA,δB]⟩ = ⟨δAδB - δBδA⟩ = ⟨δAδB⟩ - ⟨δBδA⟩

⟨δBδA⟩ = conj(⟨adjoint(δA) adjoint(δB)⟩) = conj(⟨δAδB⟩)

since δA and δB are hermitian. Thus,

⟨[δA,δB]⟩ = ⟨δAδB⟩ - conj(⟨δAδB⟩) = 2 i Im(⟨δAδB⟩)

where Im(z) is the imaginary part of z. Now obviously

|⟨[δA,δB]⟩|² = 4 |Im(⟨δAδB⟩)|² ≤ 4 |⟨δAδB⟩|²

since the norm of the imaginary part can't be more than the norm of the entire complex number. Now we can pull a trick by writing

|⟨δAδB⟩|² = |⟨ψ|δAδB|ψ⟩|²

and considering that as the inner product of two states |ψA⟩ = δA |ψ⟩ and |ψB⟩ = δB |ψ⟩. Then by the Cauchy-Schwarz inequality, for any two vectors |φ⟩ and |ψ⟩,

|⟨φ|ψ⟩|² ≤ |⟨φ|φ⟩| |⟨ψ|ψ⟩|, so

|(⟨ψ|δA)(δB|ψ⟩)|² = |⟨ψA|ψB⟩|² ≤ |⟨ψA|ψA⟩| |⟨ψB|ψB⟩| = |⟨ψ|δAδA|ψ⟩| |⟨ψ|δBδB|ψ⟩| = |⟨δA²⟩| |⟨δB²⟩| = ⟨δA²⟩ ⟨δB²⟩

since ⟨δA²⟩ and ⟨δB²⟩ have to be positive because δA and δB are hermitian.

Now to put all this crazy crap together:

(ΔA)²(ΔB)² = ⟨δA²⟩ ⟨δB²⟩ ≥ |⟨δAδB⟩|² ≥ |⟨[δA,δB]⟩|²/4 = |⟨[A,B]⟩|²/4

So there it is. The proof itself is perhaps not too enlightening, which is why I saved it until last, but now you know. Of course, this is a modern proof based upon formalism developed after Heisenberg stated the uncertainty principle. His proofs were based on similar principles, but he was not thinking of things in terms of Hilbert spaces, and his original proofs actually didn't come up with the correct minimum limit, as he had Δx Δp ≥ hbar.


This was my very first node. I largely rewrote it (as of 11/04) and the original now resides at the bottom of my homenode. Let me know if you think this is an improvement. Any suggestions on how the node can be clearer (especially for people who don't know much about the subject) would be very welcomed.

Notes

  1. A "minimum uncertainty state" is one in which the product Δx and Δp is minimized, meaning that if you choose a value for one, then this state has the minimum possible for the other.
  2. A system is usually prepared in a certain quantum state through a series of operations and measurements of the system. For example, an electron may be prepared in a spin-up state by sending it through a Stern-Gerlach device (which separates a beam into two beams, one with spin up and one with spin down) and using only electrons from the spin-up beam.
  3. Heisenberg, p. 14.
  4. Unfortunately, the term "wavefunction collapse" can actually have a variety of meanings to experts. Here I just mean a projection onto an eigenstate of the measurement not making any statement about a dynamical process by which the state undergoes this projection. This is what von Neumann referred to as "process 1".
  5. Heisenberg, p. 21.

Sources

  • The Physical Principles of the Quantum Theory, Werner Heisenberg
  • Introduction to Quantum Mechanics, David J. Griffiths
  • Modern Quantum Mechanics, J. J. Sakurai

Well, there's probably nothing else that is so famous and yet causes so much confusion. The HUP lies at the heart of Quantum Mechanics (as Feynman puts it, "it protects Quantum Mechanics") and somehow it seems to have caught a lot of popular attention outside the physics community also.

Well, let's get the mathematics out of the way first. So here's the generalized uncertainty principle:
If A and B are any observables (Hermitian operators), define
dA = A - E(A), where E(A) means the expectation of A
dB = B - E(B)
then
E(dA²) E(dB²) ≥ (1/4) |E([A,B])|²
When A = x and B = p, this reduces to the usual Δx·Δp ≥ h/4π, because [x,p] = ih/2π.
Sakurai's book Modern Quantum Mechanics has a proof in the first chapter.

Okay, now let's take the WUs one by one. Socialist Wolf's WU first. Well, it is possible to determine the exact position and velocity that the particle had at a point in the past. So you can determine the position and then later determine the velocity and then say "look, at t = -10 seconds the particle was here and had this velocity". Feynman explains this nicely in the first chapter of the third volume of his Lectures. One way to look at this is "everything in the past is a particle, everything in the future is a wave". That's not what the uncertainty principle deals with, though.

Brazil's WU then. Here's the argument Heisenberg used first. Let's say you wish to measure the position of a small object accurately. So you must use light of a small wavelength to resolve the object. The smaller the wavelength, the larger the energy of the photon, so the larger the 'kick' imparted to the object. Thus the object will necessarily get disturbed if you try to measure its position accurately.
The problem with such an argument is that it assumes that there is some true position of the particle out there. This is what Heisenberg thought initially, though he was later convinced otherwise by Bohr.
What this means is that it is meaningless to speak of a dynamical characteristic in the absence of measurement. Dynamical characteristics exhibit themselves only as a result of measurement. The first chapter of Landau and Lifshitz deals with this if anyone's interested.
Thus the correct way to look at this is to say that if you make a measurement of position, the result of this measurement is necessarily probabilistic. You could find the particle here or you could find it there. The fuzziness involved (the standard deviation of the underlying probability distribution) is the uncertainty.

Finally, Bitter_Engineer's idea about a photograph is an interesting way of showing how you cannot measure position and velocity with a single measurement, but the uncertainty principle goes deeper than that. It has to do with the fuzzy probabilistic nature of Quantum Mechanics itself.

This writeup is a nonmathematical discussion of the Heisenberg Uncertainty Principle. The Heisenberg Uncertainty Principle is a mathematical inequality, but its philosophical interpretation is a fascinating and surprising aspect of our universe.

Some background information about quantum mechanics

Before the age of quantum mechanics, the goal of physics was to be able to deterministically predict the future universe given complete knowledge about the present universe. More practically, given the state of an isolated system in the universe, we should be able to predict how that system evolves with time. For example, if we want to predict where a bullet will land after we fire it, we shouldn't need to account for the gravitational pull of Jupiter. However, after accounting for Earth's gravity, air resistance, the curvature of the Earth, etc., we should be able to precisely determine where the bullet will land. Until the 20th century, physicists were very successful at developing equations that deterministically predicted the future.

In the early 20th century, physicists were puzzled by experiments that suggested that at the atomic/subatomic scale, deterministic predictions were impossible. Early such experiments were the Stern-Gerlach experiment and the double-slit experiment. Although deterministic predictions weren't possible, physicists found that probabilistic predictions were. Let's take the double-slit experiment as an example. Although it was not possible to know where an electron would be measured to land on a screen after going through two tiny slits, the probability of it being measured to land at a certain location could be predicted.

Physicists such as Pauli, Dirac, Schrodinger, and Heisenberg developed the mathematics of this probabilistic physics, and it became known as quantum mechanics. Many physicists (such as Albert Einstein) did not believe that the universe was truly probabilistic--they felt that we just didn't know everything there was to know. Experiments later in the century seem to prove that the universe really is probabilistic. Almost all physicists now accept the probabilistic nature of the universe as a fact.

While in general we cannot give the precise location, velocity, etc. of a particle, we can write functions that give the probabilities of measuring certain values for those "observables." A very common and important example, called the wavefunction and denoted by Ψ(r), describes the probability of measuring a particle to be at any location r in space. This function could be spread out over all space (suggesting that the particle is completely delocalized) or it could be concentrated in a tiny area (suggesting the particle is quite localized). Since the function describes probability (more exactly, probability density), all we require is that its integral over all space be 1, meaning there's a 100% chance that we'll measure the particle to be somewhere. We could also write a function Φ(v) that tells us the probability of measuring a particle to have some velocity v.
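As a small, hedged sketch of what "its integral over all space be 1" looks like in practice (one dimension, a Gaussian Ψ of arbitrary width, both chosen only for illustration):

    import numpy as np

    x = np.linspace(-10, 10, 10001)
    dx = x[1] - x[0]
    sigma = 1.3                                                 # arbitrary width
    psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

    prob_density = np.abs(psi) ** 2
    print(np.sum(prob_density) * dx)             # ~1.0: the particle is somewhere
    inside = np.abs(x) < sigma
    print(np.sum(prob_density[inside]) * dx)     # ~0.68: chance of finding it within one sigma of the centre

The same kind of bookkeeping works for Φ(v): square it, and the area under any stretch of the curve gives the probability of measuring a velocity in that range.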

The Heisenberg Uncertainty Principle

Finally we can discuss the Heisenberg Uncertainty Principle. Consider the function Ψ(r). From it we can find the average position (i.e. the average position measured for the particle if we could measure its position on many identically prepared copies). Furthermore, we can find a value called the uncertainty that describes the average difference in magnitude between the result of an actual measurement and the average measurement result*. Similarly, we can find the uncertainty in velocity from Φ(v).

* Uncertainty is actually defined as the root mean square of the difference, since the absolute value function is difficult to work with.

The Heisenberg Uncertainty Principle states that the product of the uncertainty of Ψ and the uncertainty of Φ must be larger than a nonzero constant (Planck's constant divided by 4π times the particle's mass). Philosophically, this means that a particle can NEVER have a precise location and velocity at the same time! Certainly this is a surprising result to everybody, but is it any more weird than the simple fact that we have to resort to probability functions to describe the position of a particle?

In reality, the Heisenberg Uncertainty Principle is a more general mathematical inequality that governs the product of uncertainties for all pairs of physical observables. Other writeups in this node go into more detail about this.

Imagine a point. What is the wavelength of this point? If you're confused by this, you should be: a point is not a wave, so asking about its "wavelength" is silly.

Imagine a wave -- an infinitely long sine curve with crests and troughs that repeat forever. Where is the wave? This is a meaningless question -- it is equally everywhere at once, and so nowhere in particular.

If something is a point, it can have a position, but not a wavelength. If something is a wave, it can have a wavelength (the distance between crests), but not a position. Thus, in this sense no object can ever have both a position and a wavelength, since to do so it would have to be two completely different objects at the same time.

None of this is particularly deep. What is deep, however, is that it turns out that momentum and wavelength are physically equivalent quantities -- knowing one is equivalent to knowing the other! In fact, in a sense our everyday notion of momentum simply doesn't exist, but is just an illusion created by the true thing, the property of wavelength.

This is a very strange idea to swallow, and the only reason that we do is simply that we have done lots and lots of experiments and they all point towards it being true. Once you have accepted it, however, the uncertainty principle becomes easy to see: something cannot have both a precisely defined position and a precisely defined momentum at the same time because it cannot be simultaneously both a point and a wave.

The equations presented in the other write-ups of this node just state this idea in more mathematical terms, with one additional insight: it is possible for something to have both a rough position and wavelength by existing in a state that is neither a point nor a wave but something in between -- a shape that is spread out over space and yet concentrated in a particular area. The mathematical relation basically gives you a trade-off between how point-like and how wave-like something can be at the same time, which translates into a relation for the precision with which we can simultaneously measure position and momentum.
