Many properties of the universe appear to have been precisely adjusted, or “finely tuned,” to permit the existence of biological life. This phenomenon is known as cosmological fine-tuning. The mysteriously low value of the constant Λ, discussed on the previous page, is one example. Of the wide range of values Λ could be expected to have (based on the predictions of quantum field theory), only an incomprehensibly tiny fraction would permit the formation of stars and planets, both of which are necessary for life. Similarly, many other unexplained parameters of physics and cosmology fall within surprisingly narrow ranges that make the universe habitable. We can calculate how the universe would behave if these seemingly arbitrary numbers differed from their actual values, and only a narrow range of possible values would permit the existence of any living organisms at all. In other words, the life-permitting range is so narrow that even a minuscule adjustment to a parameter’s value would render the universe completely uninhabitable, not only for human beings but for any imaginable form of life. Such parameters are said to be fine-tuned for life.
Examples of cosmological fine-tuning can be classified into several main categories: the values of the physical constants, the masses of the elementary particles, and the initial conditions of the universe.
In what follows, we’ll consider a few noteworthy examples from each category. For a more complete list, see astrophysicist Hugh Ross’s RTB Design Compendium, Part 1, which catalogues 140 fine-tuned parameters.
Recall from chapter 2 that the fundamental laws of physics contain specific numbers, called physical constants, which are the same at all times and places throughout the universe. For example, Newton’s law of universal gravitation contains the gravitational constant G, which has the value 6.67 × 10⁻¹¹ N·m²/kg². The same constant G also appears in the field equations of Einstein’s theory of gravity, general relativity. In Newton’s theory, G describes the strength of the gravitational force; in Einstein’s theory, G describes the degree to which mass and energy distort the geometry of spacetime. However, neither Newton’s theory nor Einstein’s theory explains why G has the specific value it has. In fact, the value of G cannot be explained or predicted by any known laws of physics. To find the value of G, we have to measure it experimentally (as Henry Cavendish did in his famous experiment with lead spheres). For all the laws of physics say, in other words, G could have been much larger or smaller than it actually is.
Moreover, we can use the laws of physics to calculate how the history of the universe would have unfolded if G had a larger or smaller value. If G were much smaller than 6.67 × 10⁻¹¹ N·m²/kg², gravitational attraction would be too weak for galaxies, stars, and planets to form: the universe would contain only hydrogen and helium gas left over from the Big Bang. On the other hand, if G were significantly larger, stars would burn through their fuel too quickly to provide a stable energy source for life. In either case, no physical life forms of any kind could survive anywhere in the universe.
However, there is a narrow range of possible values for G that make gravity strong enough—but not too strong—for life-supporting stars and planets to exist. Just how narrow is this life-supporting range? To answer that question, we can compare the width of the life-supporting range to the whole spectrum of strengths of the known physical forces. Assuming that the minimum possible value for G is zero and that the maximum possible strength of gravity is comparable to the strongest known force (namely, the nuclear strong force), the fraction of the total range that yields a life-permitting universe is approximately 1/10³⁵. [Footnote: For further details and a more thorough explanation of this example, see Luke A. Barnes and Geraint F. Lewis, A Fortunate Universe: Life in a Finely-Tuned Cosmos (Cambridge: Cambridge University Press, 2016), 106–109; also Stephen C. Meyer, Return of the God Hypothesis: Three Scientific Discoveries that Reveal the Mind Behind the Universe (New York: HarperCollins, 2021), chapter 7, footnote 25.]
To visualize how narrow this life-permitting range is, imagine a measuring line spanning the width of the entire observable universe, with one end of the line representing zero and the other end representing the maximum possible value of G, where gravity becomes as strong as the nuclear strong force. The laws of physics allow the value of G to fall anywhere on that line; but, in order for the universe to permit life, the value of G must lie within a specific segment of the line. How wide is the life-permitting segment? On the universe-sized measuring line, the life-permitting range for G is only 9 nanometers wide—about the width of an average protein molecule! [Footnote: Here’s the math. The diameter of the observable universe is approximately 93 billion light-years, or about 9 × 10²⁶ meters. 9 × 10²⁶ × 1/10³⁵ = 9 × 10⁻⁹ meters, that is, 9 nanometers.] Lucky for us, the actual value of G does lie within that astonishingly narrow, life-permitting window.
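As a quick check, the footnote’s arithmetic can be reproduced in a few lines of Python (a sketch only; the diameter and fraction are the figures quoted above, and the variable names are just for illustration):

```python
# Scale the life-permitting fraction of G's possible range onto a
# measuring line as wide as the observable universe.
universe_diameter_m = 9e26        # ~93 billion light-years, from the text
life_permitting_fraction = 1e-35  # fraction of the range that permits life

segment_width_m = universe_diameter_m * life_permitting_fraction
segment_width_nm = segment_width_m * 1e9  # convert meters to nanometers

print(round(segment_width_nm, 6))  # 9.0 -- about the width of a protein
```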
Other physical constants are finely tuned in similar ways. For example, just as G describes the strength of gravity, other physical constants describe the strengths of the other three fundamental forces: electromagnetism, the nuclear strong force, and the nuclear weak force. All of these coupling constants, as they are called, are finely tuned to varying degrees. [Footnote: For a brief summary, see chapter 7 of Stephen C. Meyer, Return of the God Hypothesis: Three Scientific Discoveries that Reveal the Mind Behind the Universe (New York: HarperCollins, 2021). For more detailed explanations of these and other examples, see Luke A. Barnes and Geraint F. Lewis, A Fortunate Universe: Life in a Finely-Tuned Cosmos (Cambridge: Cambridge University Press, 2016).] For example, fine-tuning of the strong force constant is required for the production of carbon and oxygen in stars. If the strong force constant were just 0.4% higher (a change of less than half of one percent), stars would produce very little oxygen; if it were 0.4% lower, stars would produce very little carbon. [Footnote: For further explanation, see Barnes and Lewis, A Fortunate Universe, 118–119. This remarkable balance also requires fine-tuning of the masses of quarks (p. 120).] The reason for this involves a phenomenon called resonance. Recall that stars produce heavier elements by nuclear fusion, that is, by fusing lighter atomic nuclei together. Atomic nuclei act as waves with specific energy states, called resonances, which are analogous to the electron orbitals in Bohr’s model of the atom. These resonances depend sensitively on the value of the strong force constant. Carbon and oxygen nuclei happen to have resonances that allow efficient nuclear fusion of both elements. [Footnote: For further explanation, see Barnes and Lewis, A Fortunate Universe, 113–120.] The strength of the strong force just happens to be in the extremely narrow window required for stars to produce large quantities of both carbon and oxygen—two of the three most important chemical ingredients for life.
The third crucial ingredient for life, hydrogen, was produced in abundance during the Big Bang. This also required fine-tuning of the coupling constants, though not of any individual constant by itself. Rather, the relative strengths of all four fundamental forces are precisely balanced to yield an abundance of hydrogen. [Footnote: See Barnes and Lewis, A Fortunate Universe, 75–78.] In other words, the ratios between the various coupling constants exhibit fine-tuning. For another example of a finely tuned ratio, the ratio of the strength of electromagnetism to the strength of gravity must be finely tuned in order for stable stars to exist. [Footnote: See Barnes and Lewis, A Fortunate Universe, 110–111.] Likewise, the ratio of the strong nuclear force to electromagnetism is just right for allowing chemical complexity. [Footnote: See Barnes and Lewis, A Fortunate Universe, 71–75, and especially Figure 17 on page 75.]
None of these fortunate coincidences can be explained by any known laws of physics, since the laws don’t specify the values of the constants. From the perspective of our most successful theories, in other words, the values of the coupling constants are unexplained facts about our universe. Yet, amazingly, all of these unexplained numbers just happen to lie within incredibly narrow, life-permitting windows.
In addition to the physical constants, there are other unexplained quantities in physics that are similarly fine-tuned for life. For example, if the masses of the elementary particles differed significantly from their actual values, the universe would not support any form of chemical complexity, let alone the degree of complexity required for even the simplest forms of biological life. Like the coupling constants, the masses of the elementary particles are not specified by any known laws of physics, so we can use the laws to calculate what would happen if these seemingly arbitrary masses had been different. [Footnote: As explained in chapter 5, the masses of the elementary particles are determined by their interactions with the Higgs field: the more strongly a particle interacts with this field, the greater its mass. These interactions are described by the fundamental laws of quantum field theory, but the laws themselves don’t determine the strengths of the Higgs interaction. So, we can’t use the laws to calculate the masses of the elementary particles: we have to measure them.] As it turns out, even small variations in the masses of the elementary particles would have made the universe inhospitable to life.
Recall that atoms are made of three elementary particles: electrons, up quarks, and down quarks. Up and down quarks combine to form protons and neutrons, which in turn combine to form atomic nuclei. Diverse kinds of atomic nuclei join with electrons to form atoms, which in turn can combine in a multitude of ways to form molecules of enormous variety and complexity: the PubChem chemistry database lists over 100 million known chemical compounds (and counting). However, such variety and complexity is only possible because of the finely-tuned masses of the elementary particles. The stability of atoms depends delicately on the masses of the up quark, the down quark, and the electron.
Consider, for instance, the mass of the down quark. If this mass were increased by a factor of 3, all neutrons would decay into protons, so hydrogen would be the only stable element. A three-fold increase may sound like a big change, but keep in mind that the heaviest type of quark—the top quark—is 36 thousand times heavier than the down quark, so tripling the down quark’s mass would be a relatively minor adjustment in comparison to the wide range of masses quarks can have. The situation is even more precarious if we move in the opposite direction: decrease the mass of the down quark by just 8%, and there would be only neutrons. Either way, there would be no chemistry and, therefore, no life. [Footnote: For further explanation of this example, see Barnes and Lewis, A Fortunate Universe, 47–53.]
The fine-tuning of the down quark’s mass is especially striking when we compare it to the masses of other quarks. Picture a measuring line stretched out the length of a football field. One end of the line represents the mass of the lightest known quark (the up quark). The other end represents the mass of the heaviest known quark (the top quark). The mass of the down quark lies somewhere in between the ends of the football field, and it happens to be in just the right spot to allow for complex chemistry and life. Nudge it a quarter of an inch toward either end of the field, and all forms of chemical complexity are destroyed. [Footnote: The difference between the mass of the lightest quark and the heaviest quark (the up quark and the top quark, respectively) is about 17,000 times greater than the range of life-permitting (or chemistry-permitting) values for the down quark. If a 100-yard football field represents the full range of known quark masses, the life-permitting range for the down quark is less than a quarter of an inch on that scale.]
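The football-field analogy in the footnote can likewise be checked with a short calculation (a sketch; the 17,000-fold ratio is the figure quoted above):

```python
# Map the full range of known quark masses onto a 100-yard football field
# and find the width of the life-permitting slot for the down quark.
field_inches = 100 * 36  # a 100-yard field, converted to inches (3600)
range_ratio = 17_000     # quark mass range / life-permitting range (from the text)

slot_inches = field_inches / range_ratio
print(round(slot_inches, 3))  # 0.212 -- less than a quarter of an inch
```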
Even particles that seem irrelevant to biology have turned out to be fine-tuned for life in unexpected ways. For example, neutrinos are extremely light—a tiny fraction of the mass of an electron. However, neutrinos far outnumber atoms in the universe. There are about 340 million neutrinos per cubic meter of the universe, on average, compared to only 2 atoms per cubic meter. If neutrinos were just slightly heavier, their total mass would far outweigh matter, and this evenly-distributed mass throughout the universe would prevent the formation of stars and galaxies. [Footnote: For further discussion of this example, see Barnes and Lewis, A Fortunate Universe, 173–177.]
As we saw in Chapter 3, the universe began in a state of extremely low entropy, providing a steep entropy gradient to power the thermodynamic processes that sustain life. The likelihood of starting in such a low-entropy state just by chance, according to Oxford physicist Roger Penrose’s calculations, was 1 in 10^(10^123)—that is, 1 in 10 raised to the power 10¹²³. [Footnote: See this page of chapter 3 for a brief discussion of Penrose’s calculation. For further explanation, see Barnes and Lewis, A Fortunate Universe, 126–127 and 319.] This is one of several closely-related ways in which the initial conditions of the universe appear to have been finely tuned with incomprehensible precision.
A second way in which the early universe displays fine-tuning involves the average density of matter and energy just after the Big Bang. According to Einstein’s general theory of relativity, the density of matter and energy determines the curvature of spacetime, as explained in Chapter 6. The early universe had exactly the right density to give spacetime a “flat” (i.e., Euclidean) geometry on large scales. Even the slightest deviation from flatness would have been amplified over time, so the degree of fine-tuning required to yield a flat spacetime geometry becomes increasingly precise as we trace the history of the universe backward in time. This is known as the flatness problem.
The nearly perfect flatness of spacetime is necessary for a long-lived, life-sustaining universe; but there’s no obvious reason why the universe should have had exactly the right initial density to yield such a flat spacetime geometry. The degree of fine-tuning required at the moment of the Big Bang is incalculable. Just one nanosecond after the Big Bang, the universe was unfathomably dense, with an estimated trillion trillion kilograms of mass crammed into each cubic meter (10²⁴ kg/m³). At that moment, the requisite degree of fine-tuning was so extreme that if just one kilogram had been added to or subtracted from the trillion trillion, the universe would have deviated from flatness by now. [Footnote: See Barnes and Lewis, A Fortunate Universe, 165–167.]
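To put a number on the precision just described, a one-kilogram tolerance on a density of 10²⁴ kg/m³ works out as follows (a sketch using the figures quoted above):

```python
# The flatness fine-tuning at one nanosecond after the Big Bang:
# a tolerance of ~1 kg against a density of ~10^24 kg per cubic meter.
density_kg_per_m3 = 1e24  # estimated density one nanosecond after the Big Bang
tolerance_kg = 1.0        # allowed change per cubic meter, from the text

fractional_precision = tolerance_kg / density_kg_per_m3
print(fractional_precision)  # about 1e-24: one part in a trillion trillion
```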
Another closely-related issue is the horizon problem. Recall that the Cosmic Microwave Background (CMB) is the oldest light we can detect, a faint glow of microwave radiation emitted from the hydrogen plasma of the early universe. The CMB has nearly uniform temperature in all directions, with the hottest and coldest regions differing only by 0.0002 kelvins or so. [Footnote: See this NASA page for further explanation and a map of the CMB fluctuations.] These subtle temperature differences indicate that the distribution of matter and energy in the early universe had just the right amount of smoothness to allow for the formation of stars and galaxies. A bit smoother, and gravity wouldn’t have pulled matter together fast enough to form stars before the universe expanded too much; a bit less smooth, and almost all matter would quickly collapse into black holes. The finely tuned temperature distribution is especially puzzling because different regions of the CMB are outside each other’s “horizon” (i.e., outside each other’s light cones) and never could have influenced each other. This means that the nearly uniform temperature of the CMB can’t be explained simply by thermodynamic processes, like the process of heat transfer that results in thermodynamic equilibrium. It looks as though the universe already had exactly the right distribution of matter and energy from the start. [Footnote: For a fuller explanation of the horizon problem, see Barnes and Lewis, A Fortunate Universe, 168–170.]
Are the finely tuned values of the physical constants, elementary particle masses, and initial conditions just lucky coincidences, or is there some deeper explanation for these facts? It seems implausible that all these instances of cosmological fine-tuning are merely coincidental, yet there is no consensus on how to explain the fine-tuning scientifically. This is known as the fine-tuning problem.
One possible explanation for cosmological fine-tuning is that the universe was designed by an intelligent and powerful Creator who deliberately chose the physical laws, constants, and initial conditions to make the cosmos both beautiful and habitable. Even scientists with little sympathy toward religious belief have acknowledged that the impression of design is hard to overlook. [Footnote: For example, in his bestselling book A Brief History of Time (1988), the eminent physicist Stephen Hawking (who was an atheist) commented: “The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life” (p. 125).] For some scientists, the discovery of fine-tuning ultimately led to belief in God. Fred Hoyle, the atheist astronomer who coined the term “Big Bang” as a nickname for the cosmic origin event, was responsible for one of the earliest fine-tuning discoveries. In the late 1940s, he calculated that stars would produce very little carbon unless the nuclear strong force was fine-tuned to a very precise level, in which case stars could produce ten million times more carbon than they would otherwise generate. (See the fineprint section above for details.) Given the abundance of carbon in the universe, Hoyle suspected that the nuclear strong force was indeed finely tuned for carbon production, and subsequent experiments by physicist William Fowler confirmed Hoyle’s prediction. [Footnote: For the details of this remarkable story, see Barnes and Lewis, A Fortunate Universe, 115–117.] Hoyle later confessed that his atheism had been shaken by this discovery. Within a few decades, as many more examples of fine-tuning accumulated, his attitude toward the existence of God shifted profoundly. In 1981, Hoyle wrote:
A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question. [Footnote: Fred Hoyle, “The Universe: Past and Present Reflections,” Engineering and Science, November 1981, p. 12. In context, Hoyle is referring specifically to the case of fine-tuning that he discovered in collaboration with William Fowler, namely, the finely tuned resonances of carbon and oxygen mentioned above. The final paragraph of the article reads as follows: “From 1953 onward, Fowler and I have been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of ¹²C to the 7.12 MeV level in ¹⁶O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these [resonances] are just the two levels you would have to fix, and your fixing would have to be just about where these levels are actually found to be. Is that another put-up, artificial job? Following the above argument, I am inclined to think so. A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question.”]
In Chapter 12, I’ll argue that Hoyle’s “common sense” conclusion can be supported with a careful philosophical analysis of the evidence, employing the same methods of reasoning that scientists use to evaluate the evidential support for other theories and hypotheses. [Footnote: See the sections on Cosmological fine-tuning arguments for design, Probabilistic evidence, and My fine-tuning argument. The “methods of reasoning” to which I am referring here are the probabilistic models employed in Bayesian confirmation theory. For a fuller introduction to Bayesian confirmation theory, see also the chapters on Confirmation Theory and Evidence and Rationality in my online ebook Skillful Reasoning: An Introduction to Formal Logic and Other Tools for Careful Thought.] In other words, I contend that Hoyle was right: cosmological fine-tuning is strong evidence for a Creator. However, alternative interpretations of the fine-tuning evidence must also be considered in order to determine which view provides the most plausible explanation for these facts. The most popular alternative is the multiverse hypothesis, which we’ll examine on the next page.