
Statistical Thermodynamics

 

Basics and Applications to Chemical Systems

 

Iwao Teraoka

Tandon School of Engineering

New York University

Brooklyn, NY, US

 


Preface

This book was born out of my long‐time wish that chemistry and chemical engineering students should learn statistical thermodynamics using a book written for them. Many of the books on that subject were written for physics students, and the contents are not appropriate for those who deal with molecules. There are a few good books that nonphysics students can rely on, but they look old‐fashioned.

Many research papers in chemistry, chemical engineering, materials science, biochemistry, and biophysics are written using the concepts of statistical thermodynamics, whether the authors of the papers are aware of it or not. The papers are mostly about molecules, and that is why the molecular‐level description of quantities observed in experiments has a big presence in these areas of science. Statistical mechanics offers the right tool for that purpose.

Admittedly, the concepts of statistical mechanics are not easy to grasp. The fundamental hypotheses are philosophical, and they are translated into equations. By applying the tools of statistical mechanics to different thermodynamic systems and solving practice problems, you will be able to get the hang of the fundamental concepts. All the chapters, except for Chapter 1, have practice problems at the end. It is essential to solve them. You can find the answers on Wiley's website: www.wiley.com/go/Teraoka_StatsThermodynamics.

One of the axioms in Confucius' famous book, the Analects, translates as follows: If you just learn and do not think, you will be left in the dark. If you just think and do not learn, you will be on a precarious footing. My translation may not be authentic, but those who study statistical mechanics should always be reminded of this axiom.


There are different ways to introduce the concepts to early‐stage learners. I like the approach adopted in Atlee Jackson's textbook, Equilibrium Statistical Mechanics [1], in which the partition function is derived elegantly by a simple method. Chapters 2–7 of this book mostly follow Atlee Jackson's book.

Some of the practice problems are borrowed from a Japanese book, Statistical Mechanics [2]. Its English translation is available [3].

The prerequisites for this book are the first semester of undergraduate physical chemistry and some math (calculus and linear algebra). In calculus, series and derivatives are far more important for statistical mechanics than integrals are. In linear algebra, 2 × 2 matrices are all that is required. Familiarity with probability theory will be a big help. If you are not strong in math, it will be a good idea to go through Appendix A first. Just as thermodynamics is the best place to practice partial differentiation, statistical mechanics is the best place to use the Taylor expansion.

The organization of the chapters in this book is shown below. The arrows indicate prerequisite relationships. Chapters 1–7 cover all the concepts and tools of statistical mechanics; the remaining chapters are applications to chemical systems.

[Chapter flowchart: arrows run from Chapter 1 through Chapter 7 in sequence; from Chapter 4 to Chapter 8; from Chapter 6 to Chapters 9, 10, 11, and 14; and from Chapter 11 to Chapters 12 and 13.]

The appendix is a collection of math formulas necessary for deriving the equations in this book. Physical constants are listed in Symbols and Constants.

This book does not include pairing of single‐strand DNA, liquid crystals, surface phenomena (other than adsorption), and liquid–vapor equilibria. I want to cover them in future revisions.

This book does not cover quantum statistical mechanics. The latter is most prominently applied to a system of electrons in metals and semiconductors. If you need to learn quantum statistical mechanics, you should read one of the good books written for physics students [4].

Acknowledgments

The photograph on the cover of this book was taken by Erika Teraoka in the Wiener Zentralfriedhof (Vienna Central Cemetery).

About the Companion Website

This book is accompanied by a companion website:

www.wiley.com/go/Teraoka_StatsThermodynamics

The website includes:

  • Answers to the practice problems in the book

 


Symbols and Constants

The numbers are from A Physicist's Desk Reference [1] and other sources.

Constant                   Symbol  Value
Avogadro's number          NA      6.022 137 × 10²³ mol⁻¹
Boltzmann constant         kB      1.380 658 × 10⁻²³ J K⁻¹
Vacuum permittivity        ε0      8.854 188 × 10⁻¹² F m⁻¹
Gas constant               R       8.314 510 J K⁻¹ mol⁻¹
Planck constant            h       6.626 070 × 10⁻³⁴ J s
Speed of light in vacuum   c       2.997 925 × 10⁸ m s⁻¹
Elementary charge          e       1.602 177 × 10⁻¹⁹ C

Reference

  1. Anderson, H.L. and Cohen, E.R. (1989). General section. In: A Physicist's Desk Reference (ed. H.L. Anderson). New York, NY: American Institute of Physics.

1
Introduction

Section 1.1 looks at the similarities and differences between classical thermodynamics and statistical thermodynamics. Then, in Section 1.2, we see several examples of phenomena that are beautifully described by statistical mechanics. Section 1.3 lists the notation practices adopted in this book.

1.1 Classical Thermodynamics and Statistical Thermodynamics

Classical thermodynamics, when applied to a closed system, starts with two fundamental laws. The first law of thermodynamics accounts for a balance of energy:

dU = d′Q + d′W  (1.1)

where the system receives heat d′Q and work d′W to change its internal energy by dU (see Figure 1.1). The prime in "d′" indicates that the quantity may not be a change in a thermodynamic variable, i.e. it is not necessarily expressed as a total differential. When the volume of the system changes from V to V + dV, d′W = −p dV, where p is the pressure.


Figure 1.1 A closed system receives heat d′Q and work d′W from the surroundings, changing its internal energy by dU.

The second law of thermodynamics expresses d′Q by a thermodynamic variable, but only when the change is reversible:

d′Q = T dS  (1.2)

where T is the temperature. The second law introduces the entropy S.

In classical thermodynamics, we try to find relationships between the macroscopic variables S, T, U, V, and p. The equation of state is one of those relationships. We also learn about different measures of energy, specifically, the enthalpy H, the Helmholtz free energy F, and the Gibbs free energy G. These measures of energy are convenient when we consider equilibria under different constraints. For example, at constant T and V, it is F that is minimized when the system is at equilibrium. Certainly, we can always maximize S of the universe (system + surroundings), but knowing the details of the surroundings is not feasible, nor is it of our concern. Rather, we want to focus on the system, although it is the maximization of the entropy of the universe that dictates the equilibrium of the system. F was devised for that purpose: if we minimize F of the system at given T and V, we are equivalently maximizing S of the universe. Indeed, with the surroundings acting as a heat reservoir at the same temperature T, and with d′W = 0 at constant V, dSuniv = dS − d′Q/T = dS − dU/T = −d(U − TS)/T = −dF/T ≥ 0, and therefore dF ≤ 0 in any spontaneous change. Likewise, G is minimized when the system's temperature and pressure are specified.

As you may recall, classical thermodynamics does not need to assume anything about the composition of the system – whether it is a gas or liquid, what molecules constitute the system, and so on. The system is a continuous medium, and it is uniform at all length scales if it consists of a single phase. In other words, there are no molecules in this view.

Statistical thermodynamics, in contrast, starts with a molecule‐level description of the system – what types of molecules make up the system, whether interactions are present between molecules, and, if they are, how the interaction depends on the distance between molecules, and so on. Furthermore, statistical thermodynamics specifies microscopic states of the molecules, for example, their positions and velocities. If the molecules are monatomic, specifying the positions and velocities may be sufficient for our purposes. When the molecules are diatomic, however, we need to specify the states of rotation and vibration as well. If the molecule is polyatomic, specifying these states becomes more complicated. Even for a system of monatomic molecules, specifying the positions and velocities requires an astronomical number of variables. Typically, the number is close to Avogadro's number. Listing and evaluating all the variables is a daunting task. Fortunately, evaluating thermodynamic variables such as U, F, and G does not require all the details. It is rather the averages of the microscopic variables that count in evaluating the thermodynamic variables, and that is where statistical thermodynamics comes in.

1.2 Examples of Results Obtained from Statistical Thermodynamics

Here, we take a quick look at some of the results of applying statistical thermodynamics to different systems.

1.2.1 Heat Capacity of Gas of Diatomic Molecules

Figure 1.2 shows how the molar heat capacity CV/n of a gas consisting of diatomic molecules changes with temperature T. There are two characteristic temperatures Θrot and Θvib (rotation and vibration). Each diatomic molecule has its own Θrot and Θvib, and some of them are listed in Table 1.1.


Figure 1.2 Molar heat capacity CV/n of a gas consisting of diatomic molecules, plotted as a function of temperature T. At T around Θrot, the characteristic temperature of rotation, CV/n increases from (3/2)R to (5/2)R; at around Θvib, the characteristic temperature of vibration, it further increases to (7/2)R.

Table 1.1 Characteristic temperature of rotation, Θrot, and characteristic temperature of vibration, Θvib, for some diatomic molecules.

Molecule Θrot (K) Θvib (K)
H2 87.6 6331
N2  2.88 3393
O2  2.08 2239

The molar heat capacity is (3/2)R at T ≪ Θrot, where R is the gas constant. We see this range only for H2; for the other gases, the boiling point is above Θrot. As T increases and surpasses Θrot, CV/n increases to (5/2)R. There is a broad range of temperatures over which CV/n stays nearly constant before it increases to (7/2)R as T exceeds Θvib. For most diatomic molecules that are gases at room temperature (RT), Θrot ≪ RT ≪ Θvib, and that is why a gas of diatomic molecules has CV/n = (5/2)R.
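To see how these numbers play out, below is a minimal numerical sketch in Python. It assumes the standard rigid‐rotor and harmonic‐oscillator results that this book derives in later chapters, and it takes the translational and rotational contributions at their high‐temperature values (valid because T ≫ Θrot at room temperature).

import numpy as np

R = 8.314510  # gas constant, J K^-1 mol^-1

def cv_vibration(T, theta_vib):
    # Vibrational contribution to CV/n for a harmonic oscillator
    # (a standard result, derived later in this book).
    x = theta_vib / T
    return R * x**2 * np.exp(x) / (np.exp(x) - 1.0)**2

# N2 at room temperature, with theta_vib = 3393 K from Table 1.1.
# Translation contributes (3/2)R; rotation contributes R because T >> theta_rot.
T = 298.0
cv = 1.5 * R + R + cv_vibration(T, 3393.0)
print(cv / R)  # ~2.50, i.e. CV/n = (5/2)R, as stated in the text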

1.2.2 Heat Capacity of a Solid

Figure 1.3 depicts the molar heat capacity CV/n of a molecular (nonionic) solid, plotted as a function of temperature T. At low temperatures, CV/n ∼ T³, and it increases to a plateau value of 3R as T increases. Vibration in a lattice (crystal) accounts for this heat capacity. Einstein attempted to explain the heat capacity in his 1907 paper [1]. His statistical model correctly predicted the plateau at 3R, but not the T³ dependence. It is Debye who explained the ∼T³ dependence by improving the Einstein model [2].


Figure 1.3 Molar heat capacity CV/n of a molecular solid, plotted as a function of temperature T. Close to T = 0, CV/n ∼ T³. With increasing T, CV/n approaches a plateau value of 3R.
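The Einstein model is easy to evaluate numerically. The sketch below uses the Einstein heat‐capacity formula (a standard result; the Einstein temperature of 150 K is a hypothetical value chosen for illustration) to show the plateau at 3R at high temperatures and the exponential falloff at low temperatures, which dies off faster than the observed T³ law.

import numpy as np

R = 8.314510  # gas constant, J K^-1 mol^-1

def cv_einstein(T, theta_E):
    # Einstein model: 3N identical harmonic oscillators of a single frequency.
    x = theta_E / T
    return 3.0 * R * x**2 * np.exp(x) / (np.exp(x) - 1.0)**2

theta_E = 150.0  # hypothetical Einstein temperature, K
for T in [15.0, 50.0, 150.0, 600.0]:
    print(T, cv_einstein(T, theta_E) / R)
# Approaches 3 as T increases; at low T it decays as exp(-theta_E/T),
# faster than the observed T^3 dependence, which is why Debye's
# improvement of the model was needed.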

1.2.3 Blackbody Radiation

Anything with a temperature T > 0 radiates. A blackbody is a perfect emitter of radiation (light) and also a perfect absorber. The radiation has different wavelength components and is not visible unless the wavelength falls in the visible range, 450–750 nm. When the radiation intensity is plotted as a function of wavelength λ, the curve peaks at some wavelength λpeak (see Figure 1.4). With increasing T, λpeak moves to a shorter wavelength, and the peak intensity increases. Stars exhibit different colors because of their different surface temperatures. The radiation from the sun peaks at around 500 nm (blue–green), since its surface temperature is around 5800 K. A red star has a lower temperature, and a white star (λpeak ≅ 300 nm) has a higher temperature.


Figure 1.4 Irradiance of a blackbody per unit wavelength at three temperatures (2000, 3000, and 4000 K), plotted as a function of wavelength λ. The temperature is indicated adjacent to each curve.

λpeak decreases as ∼T⁻¹ as T increases, which is called Wien's displacement law, discovered in 1893. The profile of the spectrum has tails at both ends. The long‐wavelength tail follows ∼λ⁻⁴, and the short‐wavelength tail follows ∼exp(−const./λ). The long‐wavelength tail was explained using classical electromagnetism, but classical theory could not explain the short‐wavelength tail or Wien's law. Max Planck proposed a photon hypothesis – light consists of energy particles called photons, each carrying energy inversely proportional to λ – in 1900 [3]. He succeeded in explaining the whole radiation spectrum.
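The sketch below evaluates the Planck spectrum numerically and locates λpeak at several temperatures, using the Planck formula and the constants listed in Symbols and Constants. The point is that λpeak scales as T⁻¹ (the product λpeak·T is nearly constant) and lands near 500 nm at the solar surface temperature.

import numpy as np

h = 6.626070e-34   # Planck constant, J s
c = 2.997925e8     # speed of light in vacuum, m s^-1
kB = 1.380658e-23  # Boltzmann constant, J K^-1

def planck(lam, T):
    # Spectral radiance per unit wavelength (Planck's law).
    return (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

lam = np.linspace(50e-9, 10e-6, 200000)  # 50 nm to 10 um
for T in [2000.0, 4000.0, 5800.0]:
    peak = lam[np.argmax(planck(lam, T))]
    print(T, peak * 1e9, "nm;", peak * T, "m K")
# peak*T is nearly constant (~2.9e-3 m K): Wien's displacement law.
# At 5800 K the peak is near 500 nm, as stated in the text.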

1.2.4 Adsorption

This example is more chemical than are the preceding examples. When a clean surface (glass, graphite, etc.) is exposed to a vapor, some molecules adsorb onto the surface (Figure 1.5a). The surface coverage θ (fraction of surface covered with the molecules) increases with an increasing partial pressure p of the vapor (see Figure 1.5b). The plot is called an adsorption isotherm, as it is taken at a constant temperature.


Figure 1.5 (a) A surface is in contact with a vapor, and some of the molecules adsorb onto the surface. (b) Surface coverage θ, plotted as a function of the partial pressure p of the vapor; θ approaches 1 at large p.

The adsorption phenomenon can be explained by reaction kinetics, but statistical thermodynamics provides a molecular‐level description of the isotherm. For example, we can estimate, from the experimentally obtained isotherm, the cohesive energy of adsorption per molecule.
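As a concrete illustration, the sketch below uses the Langmuir isotherm, θ = Kp/(1 + Kp), one common model of adsorption whose statistical‐mechanical derivation appears later in this book. The constant K, whose value below is hypothetical, encodes the adsorption energy per molecule; fitting K to an experimental isotherm is what allows that energy to be estimated.

def theta_langmuir(p, K):
    # Langmuir isotherm: surface coverage as a function of partial pressure p.
    return K * p / (1.0 + K * p)

K = 2.0  # hypothetical adsorption equilibrium constant, in reciprocal pressure units
for p in [0.01, 0.1, 1.0, 10.0, 100.0]:
    print(p, theta_langmuir(p, K))
# theta rises linearly at small p and saturates toward 1 at large p,
# reproducing the shape of the isotherm in Figure 1.5b.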

1.2.5 Helix–Coil Transition

A polypeptide is a polymer of identical amino acid residues. For example, poly(L‐lysine) is a polymer of L‐lysine. A polypeptide adopts a helix conformation, a random‐coil conformation, or a mixture of the two (a part of the chain is in the helix conformation); see Figure 1.6a. Which conformation the polymer takes depends on the environment, such as the temperature and the solvent. Figure 1.6b is a sketch of the percent helix as a function of temperature. The polypeptide here is a benzyl ester of poly(glutamic acid), which is therefore soluble in a polar organic solvent. At low temperatures, nearly all of the polymer is in the coil conformation; it gradually changes to an all‐helix conformation as the temperature rises. It may appear counterintuitive that the ordered state of the polypeptide is seen at high temperatures rather than at low temperatures.


Figure 1.6 (a) Polypeptide chain in helix conformation, in equilibrium with a chain in coil conformation. (b) Percent helix in a polypeptide chain, plotted as a function of temperature, in a polar organic solvent.

The helix conformation is made possible by hydrogen bonds between a donor (the H atom in an amide bond) and an acceptor (the O atom in an amide bond) several residues away along the chain. If there is a mechanism that supersedes this intrachain hydrogen bond, the chain may adopt a coil conformation. Such a mechanism exists when the solvent molecules form stronger hydrogen bonds with the H atoms and O atoms of the amide bonds. It is the competition between the two types of hydrogen bonds that gives rise to the inverted temperature dependence of the percent helix.
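The inverted temperature dependence can be mimicked by a toy all‐or‐none two‐state model, sketched below in Python; this is not the more detailed treatment developed later in the book, and the values of ΔH and ΔS are hypothetical. Both are taken positive for the coil → helix change, so the solvent‐bonded coil wins at low T and the helix wins at high T.

import numpy as np

R = 8.314510  # gas constant, J K^-1 mol^-1

def fraction_helix(T, dH, dS):
    # All-or-none two-state model of the coil -> helix transition.
    dG = dH - T * dS  # free energy of coil -> helix, per mole
    return 1.0 / (1.0 + np.exp(dG / (R * T)))

dH = 30.0e3  # hypothetical enthalpy change, J mol^-1
dS = 100.0   # hypothetical entropy change, J K^-1 mol^-1
for T in [250.0, 280.0, 300.0, 320.0, 350.0]:
    print(T, fraction_helix(T, dH, dS))
# The helix fraction rises with temperature in an S-shaped curve
# centered at T = dH/dS = 300 K, as in Figure 1.6b.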

1.2.6 Boltzmann Factor

You may have learned about the Boltzmann factor, exp(−ΔE/(RT)), where ΔE is the energy difference per mole, without being shown its derivation. The Boltzmann factor appears in many different situations. It appears, for example, in the barometric formula, p(h) = p(0) exp(−Mgh/(RT)), where p(h) is the pressure at altitude h, M is the molar mass of the gas, and g is the acceleration due to gravity. The Debye–Hückel theory for electrolyte solutions and the Gouy–Chapman theory for colloidal suspensions also use the Boltzmann factor. In nuclear magnetic resonance (NMR), the population of the up spin (+½) of a magnetic dipole μ in a magnetic field B relative to that of the down spin (−½) is exp(NAμB/RT), where NA is Avogadro's number.
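As a quick numerical check of the barometric formula (a sketch assuming an isothermal atmosphere; the molar mass of air and the altitudes below are illustrative values):

import numpy as np

R = 8.314510  # gas constant, J K^-1 mol^-1
g = 9.81      # acceleration due to gravity, m s^-2
M = 0.0290    # approximate molar mass of air, kg mol^-1
T = 298.0     # temperature, K (isothermal assumption)

def pressure_ratio(h):
    # Barometric formula: p(h)/p(0) = exp(-M*g*h/(R*T)).
    return np.exp(-M * g * h / (R * T))

for h in [0.0, 1000.0, 5000.0, 10000.0]:
    print(h, pressure_ratio(h))
# The pressure decays exponentially with altitude; the decay length
# RT/(Mg) is about 8.7 km at 298 K.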

Statistical thermodynamics derives the Boltzmann factor from fundamental hypotheses. We learn the hypotheses and the derivation in Chapter 4.

1.3 Practices of Notation

This section lists some practices used in this book.

  (1) Symbols: This book uses italic symbols for variables; roman (upright) symbols are not variables. For example, the base of the natural logarithm is the upright "e," not the italic "e." Likewise, the circumference ratio is the upright "π," not the italic "π."
  (2) O(xⁿ) represents a quantity proportional to xⁿ: It may include higher‐ or lower‐order terms. For example, the Taylor expansion of eˣ is 1 + x + ½x², up to O(x²).
  (3) Limit and asymptote: We strictly distinguish them. We often consider behaviors of thermodynamic functions at low temperatures and at high temperatures. We evaluate the limiting value of a function f(T) as T → 0, and this is the low‐temperature limit. We also evaluate the dominant term of the function when T is low and when T is high. Such a term retains the temperature dependence and is called an asymptote.

For example, the molar heat capacity CV/n of a molecular solid has the low‐temperature limit of zero, but its low‐temperature asymptote is CV/n ∼ T³. The high‐temperature limit of CV/n is 3R.

Let us consider the following function f(x) as a practical example.

f(x) = ln(1 + cosh x)  (1.3)

The small‐x limit is ln 2. There is no large‐x limit, as f(x) → ∞ as x → ∞. However, we can get the large‐x asymptote:

f(x) ≅ ln(eˣ/2) = x − ln 2  (1.4)

Figure 1.7 shows a plot of y = f(x) and its large‐x asymptote. The plot of y = f(x) runs close to the asymptote, which is a straight line.


Figure 1.7 Plot of y = f(x) = ln(1 + cosh x) (solid line) and its large‐x asymptote (dashed line).

The leading term of the large‐x asymptote is x, but including the constant makes the asymptote a better approximation. The asymptote, Eq. (1.4), neglects terms of O(e⁻ˣ).
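A short numerical check of the asymptote (a sketch; the sample points are arbitrary):

import numpy as np

def f(x):
    return np.log(1.0 + np.cosh(x))

for x in [1.0, 3.0, 6.0, 10.0]:
    asymptote = x - np.log(2.0)
    print(x, f(x), asymptote, f(x) - asymptote)
# The difference f(x) - (x - ln 2) equals ln(1 + 2*exp(-x) + exp(-2x))
# and decays roughly as 2*exp(-x), the neglected O(e^-x) of Eq. (1.4).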