A related argument for design involves a different kind of complexity known as specified complexity, also called complex specified information or functional information. A sequence of symbols or objects—for example, a sequence of printed letters on a page, or a sequence of nucleobases along a strand of DNA—is said to exhibit specified complexity (or to contain complex specified information, or functional information) when it meets both of the following criteria:

1. The sequence is unlikely to occur by chance (it has a low probability).
2. The sequence satisfies a functional requirement.
Spelling out each of these criteria with precision is no easy task, [Footnote: William Dembski provides a careful, mathematically rigorous exposition of specified complexity in his book The Design Inference: Eliminating Chance Through Small Probabilities (Cambridge: Cambridge University Press, 1998). For an accessible introduction to the concept and a discussion of its relevance to biology, see Stephen C. Meyer, Signature in the Cell: DNA and the Evidence for Intelligent Design (New York: HarperCollins, 2009), chapter 4.] but for present purposes an example will suffice to illustrate the crucial idea. If a sequence of ten letters from the English alphabet is selected at random, with all 26 letters equally likely to be chosen for each position in the sequence, the following two sequences have the same probability of occurring:
Each of these sequences has the same probability of being selected at random: 1/26^10, or approximately 0.000000000000007, that is, a probability of about 7 quadrillionths. (I created the first sequence using this random letter generator. The second sequence, as you may have guessed, I typed deliberately.) Thus, both sequences meet the first criterion: they are unlikely to occur by chance. However, only the second sequence meets the other criterion. It satisfies a functional requirement of the English language, conveying meaning to readers who understand the language.
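The arithmetic behind this figure is easy to check. The short sketch below computes the probability of any one specific 10-letter sequence under the stated assumptions (26 equally likely letters, chosen independently at each position):

```python
from fractions import Fraction

# Probability of one *specific* 10-letter sequence when each of the 26
# letters is equally likely, independently, at each of the 10 positions.
p = Fraction(1, 26**10)

print(26**10)    # 141167095653376 possible sequences
print(float(p))  # about 7.08e-15, i.e. roughly 7 quadrillionths
```

Note that the calculation is the same for both sequences: improbability alone does not distinguish the meaningful one from the gibberish.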
The term ‘specified complexity’ originated in the scientific literature in the early 1970s, prior to the ID movement, [Footnote: William Dembski attributes the first use of the term to Leslie Orgel, who used it in his 1973 book The Origins of Life. For further discussion of the term’s origin and usage, see Dembski, The Design Revolution: Answering the Toughest Questions about Intelligent Design (Downers Grove: InterVarsity Press, 2004), 81.] but the concept rose to prominence when mathematician William Dembski published his book The Design Inference in 1998. Drawing insights from forensic science, cryptography, and information theory, Dembski argued that under certain conditions specified complexity is a hallmark of intelligent causation, and he proposed a set of statistical criteria for identifying such instances. [Footnote: William Dembski, The Design Inference: Eliminating Chance Through Small Probabilities (Cambridge: Cambridge University Press, 1998).] Although Dembski did not explicitly endorse the application to biology in that book, in later writings many ID advocates—including Dembski himself—have employed the concept of specified complexity in an argument for biological design. [Footnote: For recent examples, see the essays by Stephen Meyer, Casey Luskin, and William Dembski (chapters 13, 15, 16, and 17) in The Comprehensive Guide to Science and Faith: Exploring the Ultimate Questions about Life and the Cosmos, edited by William Dembski, Casey Luskin, and Joseph Holden (Eugene: Harvest House Publishers, 2021).]
I will focus here on a version of the argument put forward by Stephen Meyer. In his 2009 book Signature in the Cell, Meyer points out that specified complexity, or functional information, may arise in several ways. Most obviously, human beings produce functional information in large quantities, as I am doing while I write this book. Some natural processes also produce functional information. It may arise just by chance, despite the improbability required by the first criterion above. [Footnote: Criterion 1 says that in order for a sequence to exhibit specified complexity, it must have a low probability of occurring by chance. However, low-probability sequences frequently do occur just by chance, and sometimes these unlikely sequences might also satisfy criterion 2.] Mutations may occasionally produce functional sequences of nucleobases along a strand of DNA, for example. However, the probability of a functional sequence decreases exponentially with the length of the sequence. Without the aid of natural selection, long sequences of functional information are exceedingly unlikely to arise by chance. (We’ll return to the role of natural selection momentarily.)
To appreciate this point, try using this random letter generator to make three-letter sequences, and see how many tries it takes to create an actual three-letter word. (It took me 18 tries before a word appeared: the word “YES.”) Then try six-letter sequences. Unless you are really lucky, you won’t see any six-letter words for a very long time. The Free Dictionary lists 92,061 six-letter words. However, there are 26^6 = 308,915,776 possible sequences of six letters, so your chance of getting a word on any given try is 92,061/308,915,776 or approximately 0.0003, which means you can expect about three meaningful words out of every ten thousand tries. The probability of getting several meaningful words in a row is much lower, and the probability of getting a whole sentence is almost incomprehensibly low.
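These numbers can be reproduced directly. The word count below is the figure cited above from The Free Dictionary, taken as given:

```python
# Chance that a random 6-letter sequence spells an English word, using
# the word count cited above (92,061 six-letter words, The Free Dictionary).
words = 92_061
sequences = 26**6            # 308,915,776 possible 6-letter sequences
p = words / sequences

print(sequences)             # 308915776
print(round(p, 6))           # 0.000298 -- about 3 words per 10,000 tries
print(round(1 / p))          # 3356 -- expected number of tries per word
```

The same calculation also shows the exponential decay mentioned above: the chance of two independent word-hits in a row is p squared, roughly one in eleven million.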
The calculation in the preceding example depends, of course, on what proportion of possible sequences of a given length are meaningful words (or phrases, or sentences) in English. Analogously, we need to know what proportion of possible DNA sequences of a given length are functional [Footnote: In this context, a functional DNA sequence is one that codes for a protein molecule that provides some advantage to the organism, that is, a protein favored by natural selection.] in order to calculate the probability that a functional sequence of that length could be produced just by chance, without the help of natural selection. As we saw in chapter 10, molecular biologist Douglas Axe found that for sequences of modest length (150 amino acids, corresponding to 150 codons or 450 nucleobases on a strand of DNA), only 1 in 10^74 of the possible sequences yield stable protein folds. The proportion of functional proteins in his experiments was even smaller. (See the prior discussion of the combinatorial problem.)
Although some other studies have yielded higher estimates for the proportion of functional sequences in information-bearing biomolecules like DNA, RNA, and proteins, the most optimistic results still reinforce the conclusion that long sequences of functional information are unlikely to arise by chance. Even with extravagantly generous assumptions, Meyer argues, chance processes are unlikely to have produced a sequence of functional information more than 500 bits in length during the entire history of the universe. [Footnote: Stephen C. Meyer, Signature in the Cell: DNA and the Evidence for Intelligent Design (New York: HarperCollins, 2009), 294; also footnote 27 on page 541.] (A bit is defined as the amount of information encoded in a single digit of a two-symbol code, such as the binary code used by computers.) In the genetic code of DNA, 500 bits corresponds to a sequence of just 250 nucleobases, or about 83 codons—enough to produce a small protein just 83 amino acids long.
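The conversion from bits to nucleobases and codons works as follows: each nucleobase is one of four possibilities, so it carries log2(4) = 2 bits, and three nucleobases make up one codon. A quick sketch of the arithmetic:

```python
import math

BITS_PER_BASE = math.log2(4)     # four possible nucleobases -> 2 bits each
BASES_PER_CODON = 3              # three nucleobases specify one amino acid

threshold_bits = 500             # Meyer's chance-production limit
bases = threshold_bits / BITS_PER_BASE
codons = bases / BASES_PER_CODON

print(int(bases))                # 250 nucleobases
print(round(codons))             # about 83 codons -> a protein ~83 amino acids long
```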
Once a self-replicating form of information (such as the self-replicating RNA molecules envisioned in the “RNA world” hypothesis) [Footnote: For discussion of the “RNA world” hypothesis, see Stephen C. Meyer, Signature in the Cell: DNA and the Evidence for Intelligent Design (New York: HarperCollins, 2009), chapter 14. This and other hypotheses about the origin of life will also be introduced briefly in a [coming soon] section in Chapter 10 of this ebook.] begins to proliferate, mutation and natural selection might act to lengthen an existing sequence of functional information, assuming longer sequences provide an advantage in some environments. However, the amount of specified complexity required to achieve self-replication in the first place undoubtedly exceeds the mere 500 bits of information that could have arisen just by chance. [Footnote: Nobody knows exactly how much complexity is required for a self-replicating system, but the simplest known self-replicators are genetically modified bacteria whose genomes have been pruned by artificially eliminating all genes that are inessential for self-replication. The current record-holder is a genetically modified bacterium with only 159,662 base pairs in its whole genome. (See this news article in Nature for details.) That’s equivalent to 319,324 bits of information, which is vastly more than the 500-bit maximum that could have been produced by chance in the entire history of the universe.] Moreover, the maximum amount of specified information that could have been produced on the early Earth would have been far less than 500 bits, though Meyer does not offer an estimate of that limit. Meyer argues that this presents an insurmountable problem for any theory of biological origins that does not include intelligent causation.
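The genome figure cited in the footnote converts to bits by the same rule of 2 bits per base pair. A small sketch, using only the numbers quoted above:

```python
# Information content of the pruned bacterial genome cited above
# (159,662 base pairs; each base pair is one of four bases -> 2 bits).
genome_bp = 159_662
genome_bits = genome_bp * 2

print(genome_bits)          # 319324 bits
print(genome_bits / 500)    # about 639 times the 500-bit chance threshold
```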
Neither law-governed natural processes nor random chance processes, nor any combination of the two, can produce the amount of specified complexity required even for the simplest forms of life. Self-organizational laws can explain only redundant order or repetitive patterns, such as the repeating arrangements of atoms in a crystal (which we might call specified simplicity), not specified complexity. Neither can random chance processes account for the specified complexity needed to achieve self-replication, as argued above. Theories that combine chance with law-like processes, such as the combination of mutation and natural selection, do no better. [Footnote: See chapters 13 and 14 of Meyer, Signature in the Cell for further discussion of this point.] Although mutation and natural selection together may increase the amount of specified complexity after a self-replicating organism (or a self-replicating molecule or system of molecules) comes into existence, this won’t help to explain the specified complexity required to get the process of evolution started in the first place. [Footnote: Meyer summarizes the point this way: “Thus, chance alone does not constitute a sufficient explanation for the de novo origin of any specified sequence or system containing more than 500 bits of (specified) information. Further, since systems characterized by complexity (a lack of redundant order) defy explanation by self-organizational laws, and since appeals to prebiotic natural selection presuppose, but do not explain, the origin of the specified information necessary to a minimally complex self-replicating system, intelligent design best explains the origin of the more than 500 bits of specified information required to produce the first minimally complex living system.” Signature in the Cell: DNA and the Evidence for Intelligent Design (New York: HarperCollins, 2009), 541 (footnote 27).] However, we do know of one possible cause which could explain the origin of life:
Experience teaches that whenever large amounts of specified complexity or information are present in an artifact or entity whose causal story is known, invariably creative intelligence—intelligent design—played a role in the origin of that entity. Thus, when we encounter such information in the large biological molecules needed for life, we may infer—based on our knowledge of established cause-and-effect relationships—that an intelligent cause operated in the past to produce the specified information necessary to the origin of life. [Footnote: Stephen C. Meyer, Signature in the Cell: DNA and the Evidence for Intelligent Design (New York: HarperCollins, 2009), 376-377.]
In his subsequent book Darwin’s Doubt (2013), Meyer extends the argument further. He contends that known evolutionary mechanisms also fail to explain the origin of the specified complexity required for major innovations that occurred during the history of life, especially the rapid proliferation of novel structures and body plans that appeared at the beginning of the Cambrian period, the so-called “Cambrian explosion.” The waiting times problem and the combinatorial problem, which we encountered in Chapter 10, loom large in this challenge to mainstream evolutionary thinking. Meyer also highlights aspects of the fossil record that seem to conflict with evolutionary expectations, especially the peculiar “top-down” pattern in which morphological disparity among phyla precedes small-scale diversity of species. (This will be explained further in my discussion of the fossil record in chapter 10 [coming soon].) If the information required for these new life forms originated from natural selection acting on random mutations, Meyer argues, we should expect species-level diversity to precede morphological disparity, rather than the other way around. [Footnote: Stephen C. Meyer, Darwin’s Doubt: The Explosive Origin of Animal Life and the Case for Intelligent Design (New York: HarperOne, 2013), Chapter 2.] For all of these reasons, Meyer contends, evolutionary mechanisms cannot account for the Cambrian explosion and other similar episodes in the history of life. The possibility of intelligent causation, on the other hand, fits the evidence well:
Conscious and rational agents have, as part of their powers of purposive intelligence, the capacity to design information-rich parts and to organize those parts into functional information-rich systems and hierarchies. We know of no other causal entity or process that has this capacity. Clearly, we have good reason to doubt that mutation and selection, self-organizational processes, or any of the other undirected processes cited by other materialistic evolutionary theories, can do so. Thus, based upon our present experience of the causal powers of various entities and a careful assessment of the efficacy of various evolutionary mechanisms, we can infer intelligent design as the best explanation for the origin of the hierarchically organized layers of information needed to build the animal forms that arose in the Cambrian period. [Footnote: Stephen C. Meyer, Darwin’s Doubt: The Explosive Origin of Animal Life and the Case for Intelligent Design (New York: HarperOne, 2013), 366.]
Meyer’s version of the argument from specified complexity has been challenged in several ways, but some popular criticisms seem to miss the point. A common response is to argue that natural selection and mutation, working together, can produce specified complexity. Some critics have also used clever analogies to illustrate how an unintelligent selection process (analogous to natural selection) in a computer program can produce large quantities of functional information. [Footnote: Here’s a representative example from evolutionary creationist Deborah Haarsma: “EC [evolutionary creationism] agrees with ID that DNA contains a pattern of highly specified information. EC disagrees, however, on the source of this information, seeing abundant evidence that this information can and does arise through natural processes. So where does the new information come from? From the environment. To illustrate this, imagine a simple computer program written to navigate a maze. At each fork in the maze, the program randomly picks a direction to turn (north, south, east, west). If it hits a wall, the program goes back and tries other combinations of turns, and eventually reaches the end of the maze. The program results in a list of the turns that work, a set of specified information. Where did the information come from? Not from the computer program, nor from the person who wrote the program. Rather, the information came from the environment of the maze. The information is now embedded in the ‘organism’ (the list of instructions) that ‘lives’ in that environment. Evolution is similar in that organisms incorporate information from their environment as they become more complex and better adapted to it.” Deborah B. Haarsma, “Response from Evolutionary Creation,” in J.B. Stump and Stanley N. Gundry (eds.), Four Views on Creation, Evolution, and Intelligent Design (Grand Rapids: Zondervan, 2017), 223-224. A similar maze analogy appears in this BioLogos article by Loren Haarsma.]
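The maze program Haarsma describes can be sketched in a few lines of code. The grid layout, function names, and start/exit positions below are my own invented example, not taken from her text: a blind, random search hits walls and backtracks, and the sequence of turns that survives is the “specified information” extracted from the maze’s structure.

```python
import random

# Toy version of the maze illustration: '#' is wall, 'S' start, 'E' exit.
# This particular grid is an invented example for demonstration only.
MAZE = [
    "#####",
    "#S..#",
    "#.#.#",
    "#..E#",
    "#####",
]
MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def find(ch):
    """Locate a character in the grid, returning (row, col)."""
    for r, row in enumerate(MAZE):
        if ch in row:
            return (r, row.index(ch))

def solve(pos, goal, visited):
    """Random depth-first search: try turns at random, back up at walls."""
    if pos == goal:
        return []
    visited.add(pos)
    for d in random.sample(list(MOVES), k=4):  # pick directions in random order
        r, c = pos[0] + MOVES[d][0], pos[1] + MOVES[d][1]
        if MAZE[r][c] != "#" and (r, c) not in visited:
            rest = solve((r, c), goal, visited)
            if rest is not None:
                return [d] + rest              # keep only the turns that worked
    return None                                # dead end: backtrack

path = solve(find("S"), find("E"), set())
print(path)  # a list of turns such as ['E', 'E', 'S', 'S']
```

Note that the surviving list of turns reflects the maze’s layout, not the program’s randomness; this is the sense in which critics say the information “comes from the environment.”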
These replies fail to recognize that Meyer is specifically addressing cases where mutation and natural selection are irrelevant or inadequate to explain the information in question. As Meyer emphasizes, natural selection can’t operate in a “prebiotic” context: that is, it can’t help to explain the origin of the first self-replicating organism (or the first self-replicating molecule or system of molecules), which surely required a high level of specified complexity to achieve self-replication in the first place. Moreover, natural selection doesn’t solve the combinatorial problem: it can’t “guide” mutations to produce novel protein-coding genes; it can only select variants of proteins that already exist. Similarly, the waiting times problem calls into question the ability of natural selection and mutation to produce the information required for novel structures and body plans within the narrow windows of time given by abrupt transitions in the fossil record, especially during periods of rapid innovation like the Cambrian explosion. (Other evolutionary processes, such as changes in gene-regulatory networks, probably played an important role in the Cambrian explosion, but that still doesn’t explain where the novel information came from.) Furthermore, if mutation and natural selection were responsible for the new information, the patterns of morphological change in the Cambrian explosion should be “bottom-up” rather than “top-down.”
A more pertinent objection to Meyer’s argument questions the empirical results cited in his premises, especially the proportion of functional to non-functional amino acid sequences measured in Douglas Axe’s experiments. [Footnote: See this page of chapter 10 for further discussion of the controversy surrounding Axe’s results.] Maybe Axe’s results were wildly inaccurate or, more plausibly, perhaps the proportion of functional to non-functional possibilities was much higher for the (unknown) prebiotic chemicals that produced the first self-replicating organism. Such speculative possibilities notwithstanding, however, it seems to me that Meyer is right: currently, our best empirical evidence regarding the origin of life and the Cambrian explosion favors design in both of those cases.
There may be other reasons for rejecting design hypotheses, of course. For example, we may be unable to find a plausible design hypothesis that yields accurate predictions in a wide range of cases, or we might have good philosophical or theological reasons for preferring alternative explanations. We’ll consider several design hypotheses, along with numerous objections to them, shortly. First, however, let’s consider one more way in which biology may provide evidence of intelligent design.