Roughly speaking, statistical mechanics is the physics of many particles. It is extremely useful for the following reason: Newton's laws, the basic equations of classical physics, can be solved easily and exactly for the case of two interacting particles. For three or more interacting particles, however, no general closed-form solution exists; only special cases yield to effort. So what happens when we want to characterize systems with 10^23 particles? Even the "brute force" method of computer simulation fails us here. So instead of asking a computer what to do, perhaps we should ask a casino owner or a political pollster.
In other words, we're not going to know the individual positions and velocities of each of the 10^23 particles, but we can make some very precise statements regarding their overall statistical behavior. As the gambler or the pollster will tell you, this behavior becomes more predictable as the number of particles grows. This principle is known as the law of large numbers, or the law of averages.
So why is it useful to know how a system with 10^23 particles can interact? Well, anyone familiar with Avogadro's number, 6.022 × 10^23, knows that it is essentially the constant of proportionality between the macroscopic and the microscopic world. It's the number of atomic mass units in one gram, and therefore 10^23 is, very roughly, the number of atoms in about a gram of matter. So almost any macroscopic system will have a ridiculously large number of particles, of this order. Therefore, any macroscopic system, any system of "normal" scale by our human standards, can and should be described using statistical mechanics.
A basic problem: The drunken sailor*
o
|
_______________|_______________
|__|__|__|__|__|__|__|__|__|__|
Imagine a drunken sailor standing on the sidewalk at a lamppost in the middle of a city block. He begins walking, and each step he takes can be either to the left or to the right. We assume the length of his step is always exactly one foot, and that he has an equal probability of stepping in either direction; thus, each step he takes is completely independent of the last. Where will he end up after he's taken N steps? The real question we should ask is: what is the probability of his being M feet to the right of the lamppost after N steps (where "-M feet to the right" means "M feet to the left")?
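Before solving this analytically, you can get a feel for the answer by brute force. Below is a minimal simulation sketch in Python; the function and variable names are my own choices, not part of the original problem.

import random
from collections import Counter

def drunken_sailor(N, trials):
    """Simulate `trials` independent N-step walks and tally the final
    displacement M (in feet to the right of the lamppost) of each."""
    counts = Counter()
    for _ in range(trials):
        M = sum(random.choice((-1, 1)) for _ in range(N))
        counts[M] += 1
    return counts

trials = 100_000
counts = drunken_sailor(N=20, trials=trials)
for M in sorted(counts):
    print(f"M = {M:+3d}: estimated probability {counts[M] / trials:.4f}")

Running this shows the sailor clustering near M = 0; note also that odd values of M never occur when N is even.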
Why is this a problem in statistical mechanics? Well, it deals with probabilities. An equivalent question would be: we have a group of N electrons, spaced far enough apart that they do not interact. Each has a spin in the z-direction of plus or minus one (in suitable units). What is the probability that the total spin of all the electrons is equal to M? This is identical to the drunken sailor problem, since we can regard the spin of each particle as a "step" in the walk, which either increases or decreases the value of M. Since the spins do not interact with each other, each spin direction is independent of every other one, and we have no way of predicting the value of a given spin. However, we can calculate the probability of a total spin M by the same method as in the drunken sailor problem, which follows:
Let's go back to the drunken sailor analogy, since that's more colorful and less politically correct. Presumably, after taking N steps, he has moved n1 steps to the right and n2 steps to the left. Note that,
N = n1 + n2
M = n1 - n2.
Now, if we were asking what the probability is of moving first n1 steps to the right and then n2 steps to the left, the probability would simply be:
(1/2) × (1/2) × (1/2) × ... = (1/2)^N.
However, we want to know the probability of taking those steps in any order. Therefore, we can simply multiply this number by the number of possible rearrangements of the steps. This is a simple matter of combinatorics. The number of possible permutations of N steps is N!, but since all of the right-steps are indistinguishable from one another, as are the left-steps, we divide by the number of possible permutations of the right-steps and of the left-steps. In the end, we get that the number of possible paths which involve n1 steps to the right and n2 steps to the left equals:
N! / (n1! n2!)
So the total probability of being M feet to the right of the lamppost is
P = (1/2)^N N! / (n1! n2!)
  = (1/2)^N N! / ( ((N + M)/2)! ((N - M)/2)! )
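As a sanity check, this formula is easy to evaluate exactly (again a Python sketch; math.comb computes the binomial coefficient N! / (n1! n2!)):

import math

def walk_probability(N, M):
    """P = (1/2)^N * N! / (n1! n2!), where n1 = (N + M)/2 right-steps
    and n2 = (N - M)/2 left-steps."""
    if (N + M) % 2 != 0 or abs(M) > N:
        return 0.0   # displacement unreachable in N unit steps
    n1 = (N + M) // 2
    return math.comb(N, n1) * 0.5 ** N

print(walk_probability(20, 0))   # ~0.1762, the peak
print(walk_probability(20, 4))   # ~0.1201, already noticeably smaller

The numbers agree with the simulation above to within statistical noise.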
As you might guess, this looks basically like a bell curve.
|
| |
| |||
| |||
| |||||
| |||||
| |||||||
| |||||||
| |||||||||
| |||||||||||
| |||||||||||||||
|_________|||||||||||||||||||||||__________
Statistical Description of Temperature and Entropy
Let's say you have some statistical system, like a box with a zillion particles inside of it. Okay, let's say there are 10^23 particles. Anyway, call this number N. We don't know the velocities of all the particles, but we do know the total energy of all the particles inside the box. How many possible configurations of particles (positions and velocities) are there which could have this energy? Obviously this is an incredibly large number. We call this number Ω(E). It is the number of possible states corresponding to a given energy. For reasons that will become clear later on, we will wish to work with a much more manageable quantity, the logarithm** of the number of states. This retains most of the properties of Ω(E), but where Ω(E) is of the astronomical order of e^(10^23), its logarithm is merely of order 10^23. So, we define S = k log(Ω(E)), where k is a numerical constant (Boltzmann's constant, with units of energy per unit temperature), and we call S the entropy of the system. Thus, the entropy is directly related to the number of possible states of a system with given parameters: the larger the number of possible states, the larger the entropy.
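For the spin system above, Ω can be written down exactly, which makes a nice illustration of why the logarithm is the manageable quantity. A sketch in Python (I set k = 1 purely for convenience):

import math

k = 1.0  # Boltzmann's constant, set to 1 here for illustration

def entropy_spins(N, n1):
    """S = k log(Omega) for N two-state spins, n1 of them 'up':
    Omega = N! / (n1! (N - n1)!).  Working with lgamma (the log of
    the gamma function) avoids ever forming the huge number Omega."""
    log_omega = (math.lgamma(N + 1) - math.lgamma(n1 + 1)
                 - math.lgamma(N - n1 + 1))
    return k * log_omega

# Omega itself would overflow any float for large N; its log is ~ N.
print(entropy_spins(10**6, 5 * 10**5))   # ~6.93e5, about N log 2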
Now let's say we have another, different box, which has a different amount of energy, E'. This box has a number of possible states equal to Ω'(E'). Note that this is a different function of the energy, because the box may be a different size, but the physical interpretation is still the same. Now, if we consider the two boxes as a total system with total energy E_T = E + E' (although the boxes do not interact), the total number of states is just the product of the two:
Ω_T(E_T) = Ω(E) Ω'(E_T - E)
Now, let's assume these two boxes can exchange energy (but not particles). Then we can ask: if the total energy remains constant at E_T, what final energies do the two boxes end up at?
Well, to be truthful, this is a probability question, and what we are really asking is: what is the most likely energy state for these two boxes to be in? Note that the probability of a given division of the energy must be proportional to the number of possible states available at that energy:
P(E) = C Ω(E) Ω'(E_T - E)
Now, these Ω functions are extremely large and increase at a fantastic rate with the energy. Therefore, this probability is extremely sharply peaked at its maximum. So asking for the energy of maximum probability is really asking what the final energy is going to be, to an incredibly small relative error (of order 10^-23).
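To see just how sharp the peak is, take a schematic model in which Ω(E) grows as E^N. This is a caricature chosen purely for illustration (real densities of states differ in detail, but share the explosive growth):

import math

N = N_prime = 300        # tiny "boxes" by thermodynamic standards
E_total = 2.0

def log_p(E):
    """log P(E) up to a constant, with Omega(E) ~ E**N for each box."""
    return N * math.log(E) + N_prime * math.log(E_total - E)

# Relative probability of a 10% excursion from the even split at E = 1.0:
print(math.exp(log_p(1.1) - log_p(1.0)))   # ~0.05, even for N = 300
# For N ~ 10**23, the same excursion is suppressed by roughly e**(-10**21).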
Now, we merely need to find the maximum of P(E). To do so, we employ a trick which simplifies the calculation immensely: we find the maximum of the logarithm of P. Since the logarithm is a monotonically increasing function, the maximum of log(P) occurs at the same point as the maximum of P. To find the maximum, we use elementary calculus and look for the point of zero slope:
d(log P)/dE = 0
But log P = log C + log Ω(E) + log Ω'(E_T - E)
So d(log P)/dE = d(log Ω(E))/dE - d(log Ω'(E'))/dE' = 0, where the minus sign appears because E' = E_T - E, so that dE'/dE = -1.
And so, the energies will settle down to a point where
d(log Ω(E))/dE = d(log Ω'(E'))/dE'
Now, define β(E) = d(log Ω(E))/dE and the result becomes:
β(E) = β'(E')
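In the same schematic model as before, β can be computed explicitly: Ω(E) ~ E^N gives β(E) = d(N log E)/dE = N/E, and the matching condition pins down the energy split. Again, just an illustrative sketch:

N, N_prime, E_total = 100, 300, 4.0

# beta(E) = N/E and beta'(E') = N'/E', so beta(E) = beta'(E') means
# N/E = N'/(E_total - E); solving for E:
E = E_total * N / (N + N_prime)
E_prime = E_total - E
print(E, E_prime)                  # 1.0 and 3.0
print(N / E, N_prime / E_prime)    # both betas equal 100.0

The energy divides itself in proportion to the sizes of the two systems, exactly where the probability in the previous sketch peaks.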
This matching condition β(E) = β'(E') gives the point at which the probability is maximized. Thus, every physical system can be described by some "beta" function such that, when two systems are put in contact, their betas approach a common value. Sound familiar yet? Now we define the quantity:
T = 1/(kβ), where k is the same constant as above.
Thus, T has the same sort of properties as β: two systems with different values of T will approach a common value of T when put in contact.
"T" is what we mean by the temperature of a system. It has an entirely statistical description. None of this discussion required that I was talking about a bunch of particles in a box; I could have been describing a series of particles with spins in a magnetic field, or any other statistical system with some energy. The temperature has a statistical meaning from which we can derive its thermodynamic meaning.
Note that, given the defining equation for β and the defining equation for T,
1/T = kβ = k d(log Ω(E))/dE = dS/dE.
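As a concrete illustration (quoting, not deriving, the standard result for a monatomic ideal gas): such a gas has Ω(E) proportional to E^(3N/2), so S = k log Ω = (3/2) N k log E + constant, and 1/T = dS/dE = (3/2) N k / E. Inverting gives E = (3/2) N k T: the temperature is, up to a constant, just the average energy per particle.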
Putting all of this together, the condition of maximum probability can be written as a condition of maximum total entropy:
S + S' is maximized
Which leads to the condition that the temperatures are equal:
T = T'
From all of these definitions, the basic laws of thermodynamics can be derived using only statistical calculations.
The derivation of the three laws of thermodynamics
1. The first law is not really derived; it's more a statement of the conservation of energy. The total energy of an isolated system must remain constant, and if a system is put in contact with another system, the change in its energy is equal to the work done on it plus the heat it absorbs; energy can't be created or destroyed. The heat is usually denoted by Q, and refers to the energy that goes into increasing the entropy of the system rather than into macroscopic work.
ΔE = W + Q
One might consider this first law as a definition of heat.
2. The second law is derived from the two equations we wrote above:
S + S' = maximum and 1/T = dS/dE.
The entropy of an isolated system (like our system of two boxes) always approaches a maximum, because this is the point of extremely sharply peaked maximum probability. Since our definition of heat implies it is the energy used to increase entropy, we have dE = dQ when no macroscopic work is done, and the relation 1/T = dS/dE becomes:
dS = dQ/T
This is the equation for a system which is not isolated (e.g. looking at one of our two boxes separately). When a system absorbs an amount of heat dQ, its entropy increases by the above equation.
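To see the second law in action, let heat flow between our two boxes: if a small amount of heat dQ leaves the hotter box (at temperature T_hot) and enters the colder one (at T_cold), the total entropy changes by dS_total = dQ/T_cold - dQ/T_hot = dQ (1/T_cold - 1/T_hot), which is positive whenever T_hot > T_cold. The flow stops increasing the entropy only when the two temperatures are equal, which is exactly the equilibrium condition derived above.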
3. The third law states that as T approaches its minimum (usually zero), the entropy approaches some constant value (it doesn't increase without bound or oscillate between two values). This is mostly a statement that there exists some minimum ground-state energy, E0, for which there exists some constant number of states. Thus, as E → E0, S → S0. The only additional piece of information comes with noting the connection between energy and temperature.
Using the equations above, it is a short exercise to notice that dT/dE > 0, and therefore energy is a monotonically increasing function of temperature, implying as the energy approaches its minimum, so must the temperature. Thus, as T approaches a minimum, S approaches a constant.
F. Reif's Fundamentals of Statistical and Thermal Physics was used often as a reference for this writeup. For further study, I highly recommend it as a clear, if dry, treatment of the subject.
*Apologies to any sailors reading this. I know you're not all drunks, so please don't get belligerent.
**I use the notation "log" for logarithm, where most people use "ln". In math and physics, I feel there's really no need to distinguish between natural and decimal logs, since we never use the latter. Hence, I use the one that looks more like the word "log".