What’s a fire, and why does it burn?

I was staring at a bonfire on a beach the other day and realized that I didn’t understand anything about fire and how it works. (For example: what determines its color?) So I looked up some stuff, and here’s what I learned.

Fire

Fire is a sustained chain reaction involving combustion, which is an exothermic reaction in which an oxidant, typically oxygen, oxidizes a fuel, typically a hydrocarbon, to produce products such as carbon dioxide and water, together with heat and light. A typical example is the combustion of methane, which looks like

\displaystyle \text{CH}_4 + 2 \text{ O}_2 \to \text{CO}_2 + 2 \text{ H}_2 \text{O}.

The heat produced by combustion can be used to fuel more combustion, and when that happens enough that no additional energy needs to be added to sustain combustion, you’ve got a fire. To stop a fire, you can remove the fuel (e.g. turning off a gas stove), remove the oxidant (e.g. smothering a fire using a fire blanket), remove the heat (e.g. spraying a fire with water), or interrupt the combustion chain reaction itself (e.g. with halon).

Combustion is in some sense the opposite of photosynthesis, an endothermic reaction which takes in light, water, and carbon dioxide and produces carbohydrates such as glucose.

It’s tempting to assume that when burning wood, the compounds being combusted are e.g. the cellulose in the wood. It seems, however, that something more complicated happens. When wood is exposed to heat, it undergoes pyrolysis (which, unlike combustion, doesn’t involve oxygen), which converts it to more flammable compounds, such as various gases, and these are what combust in wood fires.

When a wood fire burns for long enough it will lose its flame but continue to smolder, and in particular the wood will continue to glow. Smoldering involves incomplete combustion, which, unlike complete combustion, produces carbon monoxide.

Flames

Flames are the visible parts of a fire. As fires burn, they produce soot (which can refer to some of the products of incomplete combustion or some of the products of pyrolysis), which heats up, producing thermal radiation. This is one of the mechanisms responsible for giving fire its color. It is also how fires warm up their surroundings.

Thermal radiation is produced by the motion of charged particles: anything at positive temperature consists of charged particles moving around, so emits thermal radiation. A more common but arguably less accurate term is black body radiation; this properly refers to the thermal radiation emitted by an object which absorbs all incoming radiation. It’s common to approximate thermal radiation by black body radiation, or by black body radiation times a constant, because it has the useful property that it depends only on the temperature of the black body. Black body radiation happens at all frequencies, with more radiation at higher frequencies at higher temperatures; in particular, the peak frequency is directly proportional to temperature by Wien’s displacement law.

Everyday objects are constantly producing thermal radiation, but most of it is infrared – its wavelength is longer than that of visible light, and so is invisible without special cameras. Fires are hot enough to produce visible light, although they are still producing a lot of infrared light.

Another mechanism giving fire its color is the emission spectra of whatever’s being burned. Unlike black body radiation, emission spectra occur at discrete frequencies; this is caused by electrons producing photons of a particular frequency after transitioning from a higher-energy state to a lower-energy state. These frequencies can be used to detect elements present in a sample in flame tests, and a similar idea (using absorption spectra) is used to determine the composition of the sun and various stars. Emission spectra are also responsible for the color of fireworks and of colored fire.

The characteristic shape of a flame on Earth depends on gravity. As a fire heats up the surrounding air, natural convection occurs: the hot air (which contains, among other things, hot soot) rises, while cool air (which contains oxygen) falls, sustaining the fire and giving flames their characteristic shape. In low gravity, such as on a space station, this no longer occurs; instead, fires are only fed by the diffusion of oxygen, and so burn more slowly and with a spherical shape (since now combustion is only happening at the interface of the fire with the parts of the air containing oxygen; inside the sphere there is presumably no more oxygen to burn):

[Image: candle flames in normal gravity and in microgravity]

Black body radiation

Black body radiation is described by Planck’s law, which is fundamentally quantum mechanical in nature, and which was historically one of the first applications of any form of quantum mechanics. It can be deduced from (quantum) statistical mechanics as follows.

What we’ll actually compute is the distribution of frequencies in a (quantum) gas of photons at some temperature T; the claim that this matches the distribution of frequencies of photons emitted by a black body at the same temperature comes from a physical argument related to Kirchhoff’s law of thermal radiation. The idea is that the black body can be put into thermal equilibrium with the gas of photons (since they have the same temperature). The gas of photons is getting absorbed by the black body, which is also emitting photons, so in order for them to stay in equilibrium, it must be the case that at every frequency the black body is emitting radiation at the same rate as it’s absorbing it, which is determined by the distribution of frequencies in the gas. (Or something like that. I Am Not A Physicist, so if your local physicist says different then believe them instead.)

In statistical mechanics, the probability of finding a system in a microstate s, given that it’s in thermal equilibrium at temperature T, is proportional to

\displaystyle e^{- \beta E_s}

where E_s is the energy of state s and \beta = \frac{1}{k_B T} is thermodynamic beta (so T is temperature and k_B is Boltzmann’s constant); this is the Boltzmann distribution. For one possible justification of this, see this blog post by Terence Tao. This means that the probability is

\displaystyle p_s = \frac{1}{Z(\beta)} e^{-\beta E_s}

where Z(\beta) is the normalizing constant

\displaystyle Z(\beta) = \sum_s e^{-\beta E_s}

called the partition function. Note that these probabilities don’t change if E_s is modified by an additive constant (which multiplies the partition function by a constant); only differences in energy between states matter.

It’s a standard observation that the partition function, up to multiplicative scale, contains the same information as the Boltzmann distribution, so anything that can be computed from the Boltzmann distribution can be computed from the partition function. For example, the moments of the energy are given by

\displaystyle \langle E^k \rangle = \frac{1}{Z} \sum_s E_s^k e^{-\beta E_s} = \frac{(-1)^k}{Z} \frac{\partial^k}{\partial \beta^k} Z

and, up to solving the moment problem, this characterizes the Boltzmann distribution. In particular, the average energy is

\displaystyle \langle E \rangle = - \frac{\partial}{\partial \beta} \log Z.
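
Here is a minimal numerical sketch of this identity in Python (the three energy levels and the value of \beta are made up for illustration; units are arbitrary):

import numpy as np

# a hypothetical toy system with three microstates (energies in arbitrary units)
energies = np.array([0.0, 1.0, 2.5])
beta = 0.7  # thermodynamic beta, in matching arbitrary units

def log_Z(b):
    # log of the partition function Z(beta) = sum_s exp(-beta * E_s)
    return np.log(np.sum(np.exp(-b * energies)))

# Boltzmann probabilities p_s = exp(-beta * E_s) / Z
p = np.exp(-beta * energies - log_Z(beta))

# average energy computed directly, and as -d/d(beta) log Z (central difference)
avg_E_direct = np.sum(p * energies)
h = 1e-6
avg_E_from_Z = -(log_Z(beta + h) - log_Z(beta - h)) / (2 * h)
print(avg_E_direct, avg_E_from_Z)  # the two numbers agree to many digits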

The Boltzmann distribution can be used as a definition of temperature. It correctly suggests that in some sense \beta is the more fundamental quantity because it might be zero (meaning every microstate is equally likely; this corresponds to “infinite temperature”) or negative (meaning higher-energy microstates are more likely; this corresponds to “negative temperature,” which it is possible to transition to after “infinite temperature,” and which in particular is hotter than every positive temperature).

To describe the state of a gas of photons we’ll need to know something about the quantum behavior of photons. In the standard quantization of the electromagnetic field, the electromagnetic field can be treated as a collection of quantum harmonic oscillators each oscillating at various (angular) frequencies \omega. The energy eigenstates of a quantum harmonic oscillator are labeled by a nonnegative integer n \in \mathbb{Z}_{\ge 0}, which can be interpreted as the number of photons of frequency \omega. The energies of these eigenstates are (up to an additive constant, which doesn’t matter for this calculation and so which we will ignore)

\displaystyle E_n = n \hbar \omega

where \hbar is the reduced Planck constant. The fact that we only need to keep track of the number of photons rather than distinguishing them reflects the fact that photons are bosons. Accordingly, for fixed \omega, the partition function is

\displaystyle Z_{\omega}(\beta) = \sum_{n=0}^{\infty} e^{-n \beta \hbar \omega} = \frac{1}{1 - e^{-\beta \hbar \omega}}.
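
This geometric series is easy to check numerically (a sketch; the value of \beta \hbar \omega is an arbitrary choice):

import numpy as np

x = 0.8  # x = beta * hbar * omega, an arbitrary dimensionless value

# partition function of a single oscillator: truncated sum vs. closed form
Z_truncated = sum(np.exp(-n * x) for n in range(200))
Z_closed = 1.0 / (1.0 - np.exp(-x))
print(Z_truncated, Z_closed)  # agree, since the neglected tail e^{-200 x} is negligible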

Digression: the (wrong) classical answer

The assumption that n, or equivalently the energy E_n = n \hbar \omega, is required to be an integer here is the Planck postulate, and historically it was perhaps the first appearance of a quantization (in the sense of quantum mechanics) in physics. Without this assumption (so using classical harmonic oscillators), the sum above becomes an integral (where n is now proportional to the square of the amplitude), and we get a “classical” partition function

\displaystyle Z_{\omega}^{cl}(\beta) = \int_0^{\infty} e^{-n \beta \hbar \omega} \, dn = \frac{1}{\beta \hbar \omega}.

(It’s unclear what measure we should be integrating against here, but this calculation appears to reproduce the usual classical answer, so I’ll stick with it.)

These two partition functions give very different predictions, although the quantum one approaches the classical one as \beta \hbar \omega \to 0. In particular, the average energy of all photons of frequency \omega, computed using the quantum partition function, is

\displaystyle \langle E \rangle_{\omega} = - \frac{d}{d \beta} \log \frac{1}{1 - e^{-\beta \hbar \omega}} = \frac{\hbar \omega}{e^{\beta \hbar \omega} - 1}

whereas the average energy computed using the classical partition function is

\displaystyle \langle E \rangle_{\omega}^{cl} = - \frac{d}{d \beta} \log \frac{1}{\beta \hbar \omega} = \frac{1}{\beta} = k_B T.

The quantum answer approaches the classical answer as \hbar \omega \to 0 (so for small frequencies), and the classical answer is consistent with the equipartition theorem in classical statistical mechanics, but it is also grossly inconsistent with experiment and experience. It predicts that the average energy of the radiation emitted by a black body at a frequency \omega is a constant independent of \omega, and since radiation can occur at arbitrarily high frequencies, the conclusion is that a black body emits an infinite total amount of energy, which is of course badly wrong. This is (most of) the ultraviolet catastrophe.

The quantum partition function instead predicts that at low frequencies (relative to the temperature) the classical answer is approximately correct, but that at high frequencies the average energy becomes exponentially damped, with more damping at lower temperatures. This is because at high frequencies and low temperatures a quantum harmonic oscillator spends most of its time in its ground state, and cannot easily transition to its next lowest state, which is exponentially less likely. Physicists say that this “degree of freedom” (the freedom of an oscillator to oscillate at a particular frequency) gets “frozen out.” The same phenomenon is responsible for the failure of classical computations of specific heats, e.g. for diatomic gases such as oxygen.
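
The freezing out is easy to see numerically. The following sketch tabulates the ratio of the quantum average energy \hbar \omega / (e^{\beta \hbar \omega} - 1) to the classical value k_B T as a function of the dimensionless combination x = \beta \hbar \omega:

import numpy as np

# ratio of the quantum average energy to the classical value k_B T,
# as a function of x = beta * hbar * omega
for x in [0.01, 0.1, 1.0, 5.0, 20.0]:
    print(x, x / np.expm1(x))
# small x: ratio close to 1, so classical equipartition is a good approximation;
# large x: ratio ~ x * e^{-x}, the degree of freedom is exponentially "frozen out"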

The density of states and Planck’s law

Now that we know what’s going on at a fixed frequency \omega, it remains to sum over all possible frequencies. This part of the computation is essentially classical and no quantum corrections to it need to be made.

We’ll make a standard simplifying assumption that our gas of photons is trapped in a box with side length L subject to periodic boundary conditions (so really, the flat torus T = \mathbb{R}^3/L \mathbb{Z}^3); the choice of boundary conditions, as well as the shape of the box, will turn out not to matter in the end. Possible frequencies are then classified by standing wave solutions to the electromagnetic wave equation in the box with these boundary conditions, which in turn correspond (up to multiplication by c) to eigenvalues of the Laplacian \Delta. More explicitly, if \Delta v = \lambda v, where v(x) is a smooth function T \to \mathbb{R}, then the corresponding standing wave solution of the electromagnetic wave equation is

\displaystyle v(t, x) = e^{c \sqrt{\lambda} t} v(x)

and hence (keeping in mind that \lambda is typically negative, so \sqrt{\lambda} is typically purely imaginary) the corresponding frequency is

\displaystyle \omega = c \sqrt{-\lambda}.

This frequency occurs \dim V_{\lambda} times where V_{\lambda} is the \lambda-eigenspace of the Laplacian.

The reason for the simplifying assumptions above is that for a box with periodic boundary conditions (again, mathematically a flat torus) it is very easy to explicitly write down all of the eigenfunctions of the Laplacian: working over the complex numbers for simplicity, they are given by

\displaystyle v_k(x) = e^{i k \cdot x}

where k = \left( k_1, k_2, k_3 \right) \in \frac{2 \pi}{L} \mathbb{Z}^3 is the wave vector. (Somewhat more generally, on the flat torus \mathbb{R}^n/\Gamma where \Gamma is a lattice, wave numbers take values in the dual lattice of \Gamma, possibly up to scaling by 2 \pi depending on conventions). The corresponding eigenvalue of the Laplacian is

\displaystyle \lambda_k = - \| k \|^2 = - k_1^2 - k_2^2 - k_3^2

from which it follows that the multiplicity of a given eigenvalue - \frac{4 \pi^2}{L^2} n is the number of ways to write n as a sum of three squares. The corresponding frequency is

\displaystyle \omega_k = c \| k \|

and so the corresponding energy (of a single photon with that frequency) is

\displaystyle E_k = \hbar \omega_k = \hbar c \| k \|.
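
The multiplicities are easy to tabulate by brute force; here is a short sketch counting, for small n, the number of integer vectors (k_1, k_2, k_3) with k_1^2 + k_2^2 + k_3^2 = n (the cutoffs are arbitrary choices):

from collections import Counter

# representations of n as a sum of three squares (counting signs and order),
# i.e. the multiplicity of the Laplacian eigenvalue -(2*pi/L)^2 * n
N = 5
counts = Counter(
    a * a + b * b + c * c
    for a in range(-N, N + 1)
    for b in range(-N, N + 1)
    for c in range(-N, N + 1)
)
for n in range(10):
    print(n, counts[n])
# e.g. n = 1 has multiplicity 6 (the vectors +/- e_1, +/- e_2, +/- e_3),
# while n = 7 has multiplicity 0, since 7 is not a sum of three squares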

At this point we’ll approximate the probability distribution over possible frequencies \omega_k, which is strictly speaking discrete, as a continuous probability distribution, and compute the corresponding density of states g(\omega); the idea is that g(\omega) \, d \omega should correspond to the number of states available with frequencies between \omega and \omega + d \omega. Then we’ll do an integral over the density of states to get the final partition function.

Why is this approximation reasonable (unlike the case of the partition function for a single harmonic oscillator, where it wasn’t)? The full partition function can be described as follows. For each wavenumber k \in \frac{2\pi}{L} \mathbb{Z}^3, there is an occupancy number n_k \in \mathbb{Z}_{\ge 0} describing the number of photons with that wavenumber; the total number n = \sum n_k of photons is finite. Each such photon contributes \hbar \omega_k = \hbar c \| k \| to the energy, from which it follows that the partition function factors as a product

\displaystyle Z(\beta) = \prod_k Z_{\omega_k}(\beta) = \prod_k \frac{1}{1 - e^{- \beta \hbar c \| k \|}}

over all wave numbers k, hence that its logarithm factors as a sum

\displaystyle \log Z(\beta) = \sum_k \log \frac{1}{1 - e^{-\beta \hbar c \| k \|}}.

and it is this sum that we want to approximate by an integral. It turns out that for reasonable temperatures and reasonably large boxes, the integrand varies very slowly as k varies, so the approximation by an integral is very close. The approximation stops being reasonable only at very low temperatures, where as above quantum harmonic oscillators mostly end up in their ground states and we get Bose-Einstein condensates.
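
Here is a sketch of that comparison in a dimensionless form, writing a = \beta \hbar c \cdot \frac{2 \pi}{L} so that the summand at the wave vector k = \frac{2 \pi}{L} m is a function of a \| m \| (the values of a and of the lattice cutoff are arbitrary choices; the zero mode is skipped, since the summand is not defined at \omega = 0):

import numpy as np
from scipy.integrate import quad

a = 0.5   # a = beta * hbar * c * (2*pi/L); small a = hot gas and/or big box
N = 40    # lattice cutoff; terms with a * |m| >> 1 are exponentially small

# the lattice sum of log(1/(1 - e^{-a |m|})) over nonzero integer vectors m
rng = np.arange(-N, N + 1)
mx, my, mz = np.meshgrid(rng, rng, rng, indexing="ij")
norm = np.sqrt(mx**2 + my**2 + mz**2)
norm = norm[norm > 0]  # drop the zero mode
lattice_sum = np.sum(-np.log1p(-np.exp(-a * norm)))

# continuum approximation: one lattice point per unit cell of volume 1 in m-space,
# so the sum is approximately the integral over R^3, i.e. (4*pi/a^3) * I
I, _ = quad(lambda s: s**2 * (-np.log1p(-np.exp(-s))) if s > 0 else 0.0, 0, np.inf)
integral_approx = 4 * np.pi / a**3 * I

print(lattice_sum, integral_approx)  # agree to within roughly a percent at a = 0.5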

The density of states can be computed as follows. We can think of wave vectors as evenly spaced lattice points living in some “phase space,” from which it follows that the number of wave vectors in some region of phase space is proportional to its volume, at least for regions which are large compared to the lattice spacing \frac{2 \pi}{L}. In fact, the number of wave vectors in a region of phase space is exactly \frac{V}{8 \pi^3} times the volume, where  V = L^3 is the volume of our box / torus.

It remains to compute the volume of the region of phase space given by all wave vectors k with frequencies \omega_k = c \| k \| between \omega and \omega + d \omega. This region is a spherical shell with thickness \frac{d \omega}{c} and radius \frac{\omega}{c}, and hence its volume is

\displaystyle \frac{4 \pi \omega^2}{c^3} \, d \omega

from which we get that the density of states for a single photon is

\displaystyle g(\omega) \, d \omega = \frac{V \omega^2}{2 \pi^2 c^3}  \, d \omega.

Actually this formula is off by a factor of two: we forgot to take photon polarization into account (equivalently, photon spin), which doubles the number of states with a given wave number, giving the corrected density

\displaystyle g(\omega) \, d \omega = \frac{V \omega^2}{\pi^2 c^3} \, d \omega.
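
The lattice-point count behind this formula can be checked by brute force. Here is a sketch in units where the lattice spacing \frac{2 \pi}{L} is 1, so that \frac{V}{8 \pi^3} times a volume in phase space is just that volume (the radius and thickness of the shell are arbitrary choices):

import numpy as np

R, dR = 30.0, 1.0  # radius and thickness of the shell, in lattice units

# count integer wave vectors whose norm lies in [R, R + dR)
N = int(R + dR) + 1
rng = np.arange(-N, N + 1)
mx, my, mz = np.meshgrid(rng, rng, rng, indexing="ij")
norm = np.sqrt(mx**2 + my**2 + mz**2)
count = np.count_nonzero((norm >= R) & (norm < R + dR))

shell_volume = 4 * np.pi * R**2 * dR  # volume of the spherical shell
print(count, shell_volume)  # close, up to corrections of order dR/R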

The fact that the density of states is linear in the volume V is not specific to the flat torus; it’s a general feature of eigenvalues of the Laplacian by Weyl’s law. This gives that the logarithm of the partition function is

\displaystyle \log Z = \frac{V}{\pi^2 c^3} \int_0^{\infty} \omega^2 \log \frac{1}{1 - e^{- \beta \hbar \omega}} \, d \omega.

Taking its derivative with respect to \beta gives the average energy of the photon gas as

\displaystyle \langle E \rangle = - \frac{\partial}{\partial \beta} \log Z = \frac{V}{\pi^2 c^3} \int_0^{\infty} \frac{\hbar \omega^3}{e^{\beta \hbar \omega} - 1} \, d \omega

but for us the significance of this integral lies in its integrand, which gives the “density of energies”

\displaystyle \boxed{ E(\omega) \, d \omega = \frac{V \hbar}{\pi^2 c^3} \frac{\omega^3}{e^{\beta \hbar \omega} - 1} \, d \omega}

describing how much of the energy of the photon gas comes from photons of frequencies between \omega and \omega + d \omega. This, finally, is a form of Planck’s law, although it needs some massaging to become a statement about black bodies as opposed to about gases of photons (we need to divide by V to get the energy density per unit volume, then do some other stuff to get a measure of radiation).
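
As a numerical sanity check, the following sketch computes \langle E \rangle both by numerically differentiating the integral formula for \log Z above and by integrating the boxed density directly, and checks that they agree. The box volume and temperature are arbitrary choices, and the physical constants come from scipy.constants:

import numpy as np
from scipy.integrate import quad
from scipy.constants import hbar, c, k as k_B

V = 1.0     # box volume in m^3 (arbitrary)
T = 1000.0  # temperature in K (arbitrary; roughly a wood fire)

def log_Z(beta):
    # log Z = (V / (pi^2 c^3)) * integral of omega^2 * log(1/(1 - e^{-beta hbar omega}));
    # substitute s = beta * hbar * omega to make the integrand O(1)
    f = lambda s: s**2 * (-np.log1p(-np.exp(-s))) if s > 0 else 0.0
    I, _ = quad(f, 0, np.inf)
    return V / (np.pi**2 * c**3 * (beta * hbar)**3) * I

def total_energy(beta):
    # integral of the boxed density E(omega) d omega, with the same substitution
    f = lambda s: s**3 / np.expm1(s) if s > 0 else 0.0
    I, _ = quad(f, 0, np.inf)
    return V / (np.pi**2 * c**3 * hbar**3 * beta**4) * I

beta = 1.0 / (k_B * T)
h = 1e-6 * beta  # step for the central difference in beta
print(-(log_Z(beta + h) - log_Z(beta - h)) / (2 * h), total_energy(beta))
# both come out to about 7.6e-4 J at these values: the two computations agree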

Planck’s law has two noteworthy limits. In the limit as \beta \hbar \omega \to 0 (meaning high temperature relative to frequency), the denominator approaches \beta \hbar \omega, and we get

\displaystyle E(\omega) \, d \omega \approx \frac{V}{\pi^2 c^3} \frac{\omega^2}{\beta} \, d \omega = \frac{V k_B T \omega^2}{\pi^2 c^3} \, d \omega.

This is a form of the Rayleigh-Jeans law, which is the classical prediction for black body radiation. It’s approximately valid at low frequencies but becomes less and less accurate at higher frequencies.

Second, in the limit as \beta \hbar \omega \to \infty (meaning low temperature relative to frequency), the denominator approaches e^{\beta \hbar \omega}, and we get

\displaystyle E(\omega) \, d \omega \approx \frac{V \hbar}{\pi^2 c^3} \frac{\omega^3}{e^{\beta \hbar \omega}} \, d \omega.

This is a form of the Wien approximation. It’s approximately valid at high frequencies but becomes less and less accurate at low frequencies.

Both of these limits historically preceded Planck’s law itself.
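
The two regimes are easy to see side by side. The following sketch tabulates the ratios of the Rayleigh-Jeans and Wien forms to the full Planck density as a function of x = \beta \hbar \omega (the common prefactors cancel in the ratios):

import numpy as np

# Planck, Rayleigh-Jeans and Wien densities in dimensionless form, x = beta*hbar*omega
for x in [0.05, 0.5, 2.0, 10.0]:
    planck = x**3 / np.expm1(x)
    rayleigh_jeans = x**2
    wien = x**3 * np.exp(-x)
    print(x, rayleigh_jeans / planck, wien / planck)
# Rayleigh-Jeans is close to Planck for small x but blows up as x grows;
# the Wien approximation is close for large x but too small for small x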

Wien’s displacement law

This form of Planck’s law is enough to tell us at what frequency the energy E(\omega) is maximized given the temperature T (and hence roughly what color a black body of temperature T is): we differentiate with respect to \omega and find that we need to solve

\displaystyle \frac{d}{d \omega} \frac{\omega^3}{e^{\beta \hbar \omega} - 1} = 0.

or equivalently (taking the logarithmic derivative instead)

\displaystyle \frac{3}{\omega} = \frac{\beta \hbar e^{\beta \hbar \omega}}{e^{\beta \hbar \omega} - 1}.

Let \zeta = \beta \hbar \omega, so that we can rewrite the equation as

\displaystyle 3 = \frac{\zeta e^\zeta}{e^\zeta - 1}

or, with some rearrangement,

\displaystyle 3 - \zeta = 3e^{-\zeta}.

This form of the equation makes it relatively straightforward to show that there is a unique positive solution \zeta = 2.821 \dots, and hence that \beta \hbar \omega = \zeta, giving that the maximizing frequency is

\displaystyle \boxed{ \omega_{max} = \frac{\zeta}{\beta \hbar} = \frac{\zeta k_B}{\hbar} T}

where T is the temperature. This is Wien’s displacement law for frequencies. Rewriting in terms of wavelengths \ell = \frac{2 \pi c}{\omega} gives

\displaystyle \frac{2 \pi c}{\omega_{max}} = \frac{2 \pi c \hbar}{\zeta k_B T} = \frac{b}{T}

where

\displaystyle b = \frac{2 \pi c \hbar}{\zeta k_B} \approx 5.100 \times 10^{-3} \, \text{m} \cdot \text{K}

(the units here being meter-kelvins). This computation is typically done in a slightly different way, by first re-expressing the density of energies E(\omega) \, d \omega in terms of wavelengths, then taking the maximum of the resulting density. Because d \omega is proportional to \frac{d \ell}{\ell^2}, this has the effect of changing the \omega^3 from earlier to an \omega^5, so replaces \zeta with the unique positive solution \zeta' to

\displaystyle 5 - \zeta' = 5 e^{-\zeta'}

which is about 4.965. This gives a maximizing wavelength

\displaystyle \boxed{ \ell_{max} = \frac{2 \pi c \hbar}{\zeta' k_B T} = \frac{b'}{T} }

where

\displaystyle b' = \frac{2 \pi c \hbar}{\zeta' k_B} \approx 2.898 \times 10^{-3} \, \text{m} \cdot \text{K}.

This is Wien’s displacement law for wavelengths. Note that \ell_{max} \neq \frac{2 \pi c}{\omega_{max}}.
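
Both transcendental equations are straightforward to solve numerically, which also recovers the constants b and b'. A sketch, using scipy for the root-finding and the physical constants:

import numpy as np
from scipy.optimize import brentq
from scipy.constants import hbar, c, k as k_B

def peak_root(n):
    # the unique positive solution of n - z = n * e^{-z} (the bracket excludes z = 0)
    return brentq(lambda z: n - z - n * np.exp(-z), 0.1, n)

zeta = peak_root(3)        # about 2.821 (frequency version)
zeta_prime = peak_root(5)  # about 4.965 (wavelength version)

b = 2 * np.pi * c * hbar / (zeta * k_B)              # about 5.100e-3 m*K
b_prime = 2 * np.pi * c * hbar / (zeta_prime * k_B)  # about 2.898e-3 m*K
print(zeta, zeta_prime, b, b_prime)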

A wood fire has a temperature of around 1000 \, K (or around 700^{\circ} \text{C}), and substituting this in above produces wavelengths of

\displaystyle \frac{2 \pi c}{\omega_{max}} = \frac{5.100 \times 10^{-3} \, \text{m} \cdot \text{K}}{1000 \, K} = 5.100 \times 10^{-6} \, m = 5100 \, nm

and

\displaystyle \ell_{max} = \frac{2.898 \times 10^{-3} \, \text{m} \cdot \text{K}}{1000 \, K} = 2.898 \times 10^{-6} \, m = 2898 \, nm.

For comparison, the wavelengths of visible light range between about 750 \, nm for red light and 380 \, nm for violet light. Both of these computations correctly suggest that most of the radiation from a wood fire is infrared; this is the radiation that’s heating you but not producing visible light.

By contrast, the temperature of the surface of the sun is about 5800 \, K, and substituting that in produces wavelengths

\displaystyle \frac{2 \pi c}{\omega_{max}} = 879 \, nm

and

\displaystyle \ell_{max} = 500 \, nm

which correctly suggests that the sun is emitting lots of light all around the visible spectrum (hence appears white). In some sense this argument is backwards: probably the visible spectrum evolved to be what it is because of the wide availability of light at the particular frequencies the sun emits the most.

Finally, a more sobering calculation. Nuclear explosions reach temperatures of around 10^7 \, K, comparable to the temperature of the interior of the sun. Substituting this in produces wavelengths of

\displaystyle \frac{2 \pi c}{\omega_{max}} = 0.51 \, nm

and

\displaystyle \ell_{max} = 0.29 \, nm.

These are the wavelengths of X-rays. Planck’s law doesn’t just stop at the maximum, so nuclear explosions also produce even shorter wavelength radiation, namely gamma rays. This is solely the radiation a nuclear explosion produces because it is hot, as opposed to the radiation it produces because it is nuclear, such as neutron radiation.
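
All three worked examples can be reproduced from the constants b and b' computed above (the temperatures are the rough figures quoted in the text):

b, b_prime = 5.100e-3, 2.898e-3  # meter-kelvins, from the previous computation

for name, T in [("wood fire", 1.0e3), ("sun's surface", 5.8e3), ("nuclear explosion", 1.0e7)]:
    print(name, b / T, b_prime / T)  # peak wavelengths in meters
# wood fire:         ~5.1e-06 m and ~2.9e-06 m (infrared)
# sun's surface:     ~8.8e-07 m and ~5.0e-07 m (near-infrared and visible)
# nuclear explosion: ~5.1e-10 m and ~2.9e-10 m (X-rays)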

