7.5 The Chemical Potential¶
Huang, Statistical Mechanics 2ed, Section 7.5
The most confusing quantity in thermodynamics¶
I'm just going to say it. The chemical potential confused me more than anything else in statistical mechanics. Temperature? Fine. You feel hot, you feel cold. Pressure? Makes sense. Push on a wall. Energy? Sure. Things move, things interact.
But chemical potential? What even is it? It's not the energy per particle (that's a trap, and we'll get there). It's not the potential energy. It shows up with a weird sign convention. It's negative for an ideal gas at normal conditions, which feels wrong. And every textbook defines it as "the quantity conjugate to particle number" and moves on like that explains anything.
Here's what finally made it click for me: the chemical potential is the free energy cost of adding one more particle to the system. Not the energy cost. The free energy cost. That distinction is everything.
Why should you care?¶
If you've been following along, you've already used the chemical potential without fully understanding it. In section 7.3, it controlled the fugacity \(z = e^{\beta\mu}\). In section 7.4, I set \(\mu = -0.3\) eV for the GCMC simulation and got \(\langle N \rangle \approx 65\) atoms out. But where did that number come from?
This section answers three questions:
- What is \(\mu\), exactly? The thermodynamic definition, with the physical picture behind it.
- How do you calculate it? The ideal gas formula, which is the one we used for our GCMC simulation.
- Why is it negative? This trips everyone up. We'll make it make sense.
And along the way we'll see why the chemical potential is the key to understanding chemical reactions, phase equilibrium, and why particles flow from one region to another.
The bad intuition: "\(\mu\) is the energy per particle"¶
This is the most common misconception. The total energy is \(U\), you have \(N\) particles, so \(\mu = U/N\)?
Nope. Not even close.
Here's why. When you add a particle to a system, two things change:
- Energy goes up: The new particle brings kinetic energy and interacts with existing particles. That costs energy.
- Entropy goes up: There's now one more particle that can be anywhere in the box, in any momentum state. More microstates. More entropy.
The chemical potential is the balance between these two. It's the change in free energy \(A = U - TS\), not just the change in energy:

\[\mu = \left(\frac{\partial A}{\partial N}\right)_{T,V} = \left(\frac{\partial U}{\partial N}\right)_{T,V} - T\left(\frac{\partial S}{\partial N}\right)_{T,V}\]
The entropy gain from adding a particle to a dilute gas is large (lots of empty space to explore). So the \(-T\partial S / \partial N\) term is large and negative. It dominates. That's why \(\mu\) is negative for a dilute ideal gas. Adding a particle lowers the free energy because the entropy gain outweighs the energy cost.
And that's not weird. That's the system saying "yes, I want more particles, I have room." A negative \(\mu\) means the system is happy to accept more particles. A positive \(\mu\) means the system is full, and adding another particle would cost free energy.
One-line summary: \(\mu\) is the price tag for adding a particle, measured in free energy, not energy.
The thermodynamic definition¶
Huang starts from the generalized form of the first law. When particle number can change, the fundamental relation for the Helmholtz free energy picks up an extra term:

\[dA = -S\,dT - P\,dV + \mu\,dN\]
That's the definition. \(\mu\) is the thing multiplying \(dN\) in the free energy differential. Just like \(-P\) multiplies \(dV\) (pressure does work when volume changes) and \(-S\) multiplies \(dT\) (entropy carries heat when temperature changes), \(\mu\) multiplies \(dN\) (chemical potential does work when particle number changes).
From this you get two equivalent definitions (here \(G = A + PV\) is the Gibbs free energy):

\[\mu = \left(\frac{\partial A}{\partial N}\right)_{T,V} = \left(\frac{\partial G}{\partial N}\right)_{T,P}\]
Both definitions are used. The first (constant \(V\) and \(T\)) is natural for the canonical ensemble. The second (constant \(P\) and \(T\)) is natural for experiments at atmospheric pressure. They give the same \(\mu\).
And the first law of thermodynamics generalizes too:

\[dU = T\,dS - P\,dV + \mu\,dN\]
That \(\mu\,dN\) term is the energy brought in by the new particles, corrected for the entropy they carry. If you've been writing \(dU = TdS - PdV\) and wondering "but what if atoms can enter and leave?", this is the answer.
Key Insight
The chemical potential plays the same role for particle number that temperature plays for energy. Temperature controls the flow of energy between systems: heat flows from high \(T\) to low \(T\). Chemical potential controls the flow of particles: particles flow from high \(\mu\) to low \(\mu\). Two systems in equilibrium have the same \(T\) (thermal equilibrium) and the same \(\mu\) (chemical equilibrium). That's the principle behind every GCMC simulation: you set \(\mu\) equal to the reservoir's chemical potential, and particles flow until equilibrium is reached.
The ideal gas chemical potential¶
Ready? Let's do this.
Start from the ideal gas partition function (we derived this in section 6.6 with the correct \(N!\)):

\[Q_N = \frac{1}{N!}\left(\frac{V}{\Lambda^3}\right)^N\]
where \(\Lambda = \sqrt{2\pi\hbar^2 / mkT}\) is the thermal de Broglie wavelength. Use Stirling's approximation to get the free energy:

\[A = -kT\log Q_N = -NkT\left[\log\frac{V}{N\Lambda^3} + 1\right]\]
Now differentiate with respect to \(N\):

\[\mu = \frac{\partial A}{\partial N} = -kT\left[\log\frac{V}{N\Lambda^3} + 1\right] + kT = kT\log\frac{N\Lambda^3}{V}\]
Done. The ideal gas chemical potential:

\[\mu = kT\log\!\left(n\Lambda^3\right)\]
where \(n = N/V\) is the number density and \(\Lambda = \sqrt{2\pi\hbar^2/mkT}\) is the thermal wavelength.
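If you want to double-check the Stirling algebra without pushing symbols by hand, a few lines of sympy will do it (a quick sanity check on the derivation above, not part of Huang's argument):

```python
# Verify mu = dA/dN for the ideal gas symbolically.
# Lambda is treated as a constant: it depends on T, not on N.
import sympy as sp

N, V, kT, Lam = sp.symbols("N V kT Lambda", positive=True)

# Helmholtz free energy after Stirling: A = -N kT [log(V/(N Lambda^3)) + 1]
A = -N * kT * (sp.log(V / (N * Lam**3)) + 1)

mu = sp.diff(A, N)
expected = kT * sp.log(N * Lam**3 / V)

# the derivative collapses to kT log(N Lambda^3 / V), as claimed
assert sp.simplify(mu - expected) == 0
print(sp.simplify(mu))
```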
Let me unpack this. The argument of the logarithm is \(n\Lambda^3\), which is the number of particles in a cube of side \(\Lambda\). For classical gases at normal conditions, this number is tiny. For argon at room temperature: \(\Lambda \approx 0.16\) Å, so \(\Lambda^3 \approx 0.004\) Å\(^3\). At atmospheric pressure, \(n \approx 2.5 \times 10^{-5}\) Å\(^{-3}\). So \(n\Lambda^3 \approx 10^{-7}\).
The log of \(10^{-7}\) is about \(-16\). Times \(kT \approx 0.026\) eV at 300 K:

\[\mu \approx (0.026\ \text{eV}) \times (-16) \approx -0.42\ \text{eV}\]
Negative. Very negative. That's the entropy term dominating. The gas is so dilute that every new particle has a vast amount of empty phase space to explore. The system desperately wants more particles (from a free energy perspective).
And now you see why \(\mu\) gets less negative as you compress the gas (increase \(n\)). More particles means less room per particle, less entropy gain, higher \(\mu\). At some point \(\mu\) becomes positive. That's the system saying "I'm full. Stop adding particles."
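Here's the argon estimate as a short script, using standard SI constants (the point is the numbers, not the method):

```python
# Chemical potential of argon as an ideal gas at 300 K and 1 atm.
import math

hbar = 1.0545718e-34    # J s
kB   = 1.380649e-23     # J/K
eV   = 1.602176634e-19  # J
amu  = 1.66053906e-27   # kg

T = 300.0               # K
m = 39.948 * amu        # argon mass
P = 101325.0            # Pa (atmospheric)

Lam = math.sqrt(2 * math.pi * hbar**2 / (m * kB * T))  # thermal wavelength, m
n = P / (kB * T)                                       # ideal gas density, m^-3

mu = kB * T * math.log(n * Lam**3) / eV                # chemical potential, eV

print(f"Lambda   = {Lam * 1e10:.3f} A")   # ~0.16 A
print(f"n*Lam^3  = {n * Lam**3:.2e}")     # ~1e-7
print(f"mu       = {mu:.3f} eV")          # ~ -0.42 eV
```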
How I chose \(\mu = -0.3\) eV for the GCMC simulation¶
Here's the calculation I actually did for section 7.4. I wanted \(\langle N \rangle \approx 50\)-\(60\) argon atoms in a 30 \(\times\) 30 \(\times\) 30 Å box at 300 K. How do I pick \(\mu\)?
In the grand canonical ensemble, for an ideal gas:

\[\mathcal{Z}(z, V, T) = \sum_{N=0}^{\infty} z^N Q_N = \sum_{N=0}^{\infty} \frac{1}{N!}\left(\frac{zV}{\Lambda^3}\right)^N = e^{zV/\Lambda^3}, \qquad \langle N \rangle = z\frac{\partial}{\partial z}\log\mathcal{Z} = \frac{zV}{\Lambda^3} = \frac{e^{\beta\mu}V}{\Lambda^3}\]
This is just the ideal gas partition function with \(z^N\) weighting, summed over \(N\). Invert it:

\[\mu = kT\log\frac{\langle N\rangle \Lambda^3}{V}\]
Plug in the numbers:

- \(V = 30^3 = 27000\) Å\(^3\)
- \(\Lambda = 0.16\) Å for argon at 300 K, so \(\Lambda^3 \approx 0.004\) Å\(^3\)
- Target \(\langle N \rangle = 50\)

\[\mu = kT\log\frac{\langle N\rangle \Lambda^3}{V} = (0.026\ \text{eV}) \times \log\frac{50 \times 0.004}{27000} \approx -0.31\ \text{eV}\]

Rounded to \(\mu = -0.3\) eV. And the simulation gave \(\langle N \rangle = 64.5\), which is close to the ideal gas prediction (the small discrepancy comes from LJ interactions making it not exactly ideal).
That's it. For a dilute gas, you just invert the ideal gas formula. Pick the \(\langle N \rangle\) you want, solve for \(\mu\).
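In code, that inversion is two lines (same box, same \(\Lambda\); the round-trip check shows what \(\langle N \rangle\) the rounded value \(\mu = -0.3\) eV actually targets):

```python
# Pick mu for a target <N> by inverting the ideal gas formula,
# then round-trip: what <N> does mu = -0.3 eV correspond to?
import math

kT_eV = 0.02585        # kB*T at 300 K, in eV
Lam   = 0.16           # thermal wavelength of argon at 300 K, in A
V     = 30.0**3        # box volume, A^3
N_target = 50

# mu = kT log(<N> Lambda^3 / V)
mu = kT_eV * math.log(N_target * Lam**3 / V)
print(f"mu for <N>=50: {mu:.3f} eV")     # ~ -0.3 eV

# Round trip: <N> = e^{mu/kT} V / Lambda^3
mu_set = -0.3
N_pred = math.exp(mu_set / kT_eV) * V / Lam**3
print(f"<N> at mu = -0.3 eV: {N_pred:.1f}")  # ~60 atoms
```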
MD Connection
For dense systems, this ideal gas trick doesn't work. The chemical potential has two parts: \(\mu = \mu_{\text{ideal}} + \mu_{\text{excess}}\). The ideal part is the formula above (\(kT\log(n\Lambda^3)\)), which is easy. The excess part accounts for interactions (LJ forces, electrostatics, etc.), and that's the hard part. Methods like Widom insertion (insert a test particle, measure the energy change, average over configurations) or thermodynamic integration are needed. LAMMPS can compute these, but they require careful setup and long sampling. For our dilute gas at 300 K, \(\mu_{\text{excess}} \approx 0\), so the ideal gas formula was enough.
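To make Widom insertion concrete, here's a bare-bones sketch. Two loud caveats: it samples random (ideal-gas) configurations instead of equilibrated MD snapshots, which is only defensible at this dilution, and the LJ parameters for argon (\(\epsilon \approx 0.0103\) eV, \(\sigma = 3.4\) Å) are the usual textbook values, assumed here rather than taken from section 7.4:

```python
# Widom test-particle insertion: mu_excess = -kT log < e^{-U_test/kT} >,
# averaged over configurations and random insertion points.
import math
import random

random.seed(1)

kT = 0.02585             # eV at 300 K
eps, sig = 0.0103, 3.4   # assumed LJ parameters for argon (eV, A)
L, N = 30.0, 50          # box side (A), number of atoms

def lj(r2):
    s6 = (sig * sig / r2) ** 3
    return 4 * eps * (s6 * s6 - s6)

def insertion_energy(pos, test):
    """LJ energy of a test particle, minimum-image convention."""
    U = 0.0
    for p in pos:
        d2 = 0.0
        for a, b in zip(p, test):
            d = a - b
            d -= L * round(d / L)   # minimum image
            d2 += d * d
        U += lj(d2)
    return U

acc = 0.0
n_config, n_trial = 20, 200
for _ in range(n_config):
    # shortcut: random positions stand in for equilibrated snapshots
    pos = [[random.uniform(0, L) for _ in range(3)] for _ in range(N)]
    for _ in range(n_trial):
        test = [random.uniform(0, L) for _ in range(3)]
        acc += math.exp(-insertion_energy(pos, test) / kT)

mu_ex = -kT * math.log(acc / (n_config * n_trial))
print(f"mu_excess ~ {mu_ex:+.4f} eV")  # tiny compared to mu_ideal ~ -0.3 eV
```

At this density the answer is a few meV at most, which is why the ideal gas formula was enough in section 7.4; in a dense liquid the same estimator needs real equilibrated configurations and far more sampling.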
Why particles flow from high \(\mu\) to low \(\mu\)¶
Here's an analogy that might help. Think of \(\mu\) as water pressure in connected pipes. Water flows from high pressure to low pressure until the pressure equalizes.
Particles do the same with chemical potential. If two systems are connected (particles can move between them), particles flow from the system with higher \(\mu\) to the one with lower \(\mu\). At equilibrium, \(\mu_1 = \mu_2\).
That's why GCMC works. You set \(\mu\) in LAMMPS to match the reservoir's chemical potential. If the system currently has too few particles (its \(\mu\) is lower than the reservoir's), insertions are favored. If it has too many (its \(\mu\) is higher), deletions are favored. The system reaches equilibrium when its \(\mu\) matches the reservoir.
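You can watch this relaxation happen with a toy GCMC on a non-interacting gas (a sketch, not a LAMMPS input). For an ideal gas the standard insertion/deletion acceptance probabilities reduce to \(\min(1,\, zV/\Lambda^3(N+1))\) and \(\min(1,\, N\Lambda^3/zV)\), and \(\langle N \rangle\) settles at the reservoir value \(e^{\beta\mu}V/\Lambda^3\):

```python
# Minimal ideal-gas GCMC: start empty, let particles flow in from the
# reservoir at mu = -0.3 eV (argon numbers: 30 A box, 300 K).
import math
import random

random.seed(0)

kT, mu = 0.02585, -0.3        # eV
V, Lam3 = 30.0**3, 0.16**3    # A^3
zV = math.exp(mu / kT) * V / Lam3   # reservoir's target <N>

N = 0
samples = []
for step in range(200_000):
    if random.random() < 0.5:                       # attempt insertion
        if random.random() < min(1.0, zV / (N + 1)):
            N += 1
    elif N > 0:                                     # attempt deletion
        if random.random() < min(1.0, N / zV):
            N -= 1
    if step > 50_000:                               # discard burn-in
        samples.append(N)

mean_N = sum(samples) / len(samples)
print(f"target <N> = {zV:.1f},  measured <N> = {mean_N:.1f}")
```

The measured \(\langle N \rangle\) lands on the target (about 60 for these numbers) no matter where you start, which is the pipe analogy in action.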
This also explains phase equilibrium. A liquid and its vapor coexist when \(\mu_{\text{liquid}} = \mu_{\text{vapor}}\) at the same \(T\) and \(P\). That equality determines the vapor pressure curve.
Chemical equilibrium¶
Huang applies the chemical potential to chemical reactions, and the result is elegant.
Consider a reaction like \(2H_2 + O_2 \rightleftharpoons 2H_2O\). At equilibrium (constant \(T\) and \(V\)), the Helmholtz free energy is minimized. Any small change in the number of each species must leave \(A\) unchanged to first order.
Since the species are linked by stoichiometry (\(\delta N_i = \nu_i \, \delta N\), where \(\nu_i\) are the stoichiometric coefficients, positive for products and negative for reactants), minimizing \(A\) gives:

\[\sum_i \nu_i \mu_i = 0\]
That's the condition for chemical equilibrium. The chemical potentials of all species, weighted by their stoichiometric coefficients, must sum to zero. For the water reaction: \(2\mu_{H_2} + \mu_{O_2} = 2\mu_{H_2O}\).
This is powerful. If you know the chemical potential of each species (from the partition function, or from simulation), you can predict the equilibrium composition of any reaction. It's the bridge between statistical mechanics and chemistry.
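As a minimal illustration (a hypothetical isomerization A \(\rightleftharpoons\) B, not the water reaction), suppose a B molecule costs an internal energy \(\epsilon\). Minimizing the total free energy over the composition should reproduce the ratio \(n_B/n_A = e^{-\epsilon/kT}\) that follows from equating \(\mu_A = \mu_B\):

```python
# Toy chemical equilibrium: brute-force minimize A(N_A, N_B) at fixed
# N_A + N_B, compare with the Boltzmann ratio predicted by mu_A = mu_B.
import math

kT, eps = 0.02585, 0.05                  # eV (eps is a made-up offset)
N_tot, V, Lam3 = 1000, 27000.0, 0.004    # same box units as before

def free_energy(NA):
    """A = sum_i N_i kT [log(n_i Lam^3) - 1] + N_B * eps (Stirling form)."""
    NB = N_tot - NA
    A = NB * eps
    for Ni in (NA, NB):
        A += Ni * kT * (math.log(Ni * Lam3 / V) - 1)
    return A

# brute-force minimum over integer compositions
NA_star = min(range(1, N_tot), key=free_energy)
ratio = (N_tot - NA_star) / NA_star

print(f"minimizer ratio n_B/n_A = {ratio:.3f}")
print(f"Boltzmann  e^(-eps/kT)  = {math.exp(-eps / kT):.3f}")
```

The two numbers agree to within the integer granularity, which is the \(\sum_i \nu_i \mu_i = 0\) condition doing its job.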
What about antiparticles?¶
Huang has a beautiful digression here that I can't resist mentioning. Why is particle number conserved at all?
It isn't, fundamentally. Electrons and positrons can be created and destroyed in pairs: \(e^+ + e^- \rightleftharpoons 2\gamma\). But the net number (electrons minus positrons) is conserved. At low temperatures (\(kT \ll mc^2\)), there's not enough thermal energy to create pairs, so the number of each type is effectively fixed.
How low is "low"? The rest energy of an electron corresponds to \(6 \times 10^9\) K. For a proton, 2000 times higher. So unless you're inside a star, particle number conservation is an incredibly good approximation.
The chemical potential, from this perspective, is the Lagrange multiplier that enforces conservation of particle number. It's the "price" the system pays for the constraint of having a fixed number of particles. Deep stuff, but not something you need to worry about in your LAMMPS simulations.
Takeaway¶
The chemical potential \(\mu = \partial A / \partial N\) is the free energy cost of adding one particle. For an ideal gas, \(\mu = kT\log(n\Lambda^3)\), which is negative at normal conditions because the entropy gain from adding a particle to a dilute system outweighs the energy cost. Particles flow from high \(\mu\) to low \(\mu\), just like heat flows from high \(T\) to low \(T\). In GCMC simulations, you set \(\mu\) to match the reservoir, and the equilibrium \(\langle N \rangle\) falls out. For dilute systems, invert the ideal gas formula. For dense systems, you need Widom insertion or thermodynamic integration. The chemical potential is the bridge between statistical mechanics and chemistry, and once it clicks, half of thermodynamics suddenly makes sense.
Check Your Understanding
- You keep compressing an ideal gas, watching \(\mu = kT \log(n\Lambda^3)\) climb from deeply negative toward zero. At what density does \(\mu\) actually hit zero? And what does it physically mean when \(\mu\) crosses from negative to positive?
- Your friend insists \(\mu = U/N\). For an ideal gas, \(U/N = \frac{3}{2}kT\). But \(\mu = kT\log(n\Lambda^3)\), which is negative. Those aren't even the same sign. What's the piece your friend is forgetting?
- You set up a GCMC run where the reservoir has \(\mu = -0.3\) eV but the system's current effective \(\mu\) is \(-0.5\) eV. Which way do particles flow? Does the system gain or lose atoms?
- You want your GCMC box to equilibrate at 10 atoms instead of 50. Do you crank \(\mu\) more negative or less negative?