Recycling bad arguments: ENV on the origin of life

During the past few days, Dr. Brian Miller (whose Ph.D. is in physics) has written a series of articles on the origin of life over at Evolution News and Views:

Thermodynamics of the Origin of Life (June 19, 2017)
The Origin of Life, Self-Organization, and Information (June 20, 2017)
Free Energy and the Origin of Life: Natural Engines to the Rescue (June 22, 2017)
Origin of Life and Information — Some Common Myths (June 26, 2017)

Dr. Miller’s aim is to convince his readers that intelligent agency was required to coordinate the steps leading to the origin of life. I think his conclusion may very well be correct, but that doesn’t make his arguments correct. In this post, I plan to subject Dr. Miller’s arguments to scientific scrutiny, in order to determine whether Dr. Miller has made a strong case for Intelligent Design.

While I commend Dr. Miller for his indefatigability, I find it disappointing that his articles recycle several Intelligent Design canards which have been refuted on previous occasions. Dr. Miller also seems to be unaware of recently published online articles which address some of his concerns.

1. Thermodynamics of the Origin of Life

White smokers emitting liquid carbon dioxide at the Champagne vent, Northwest Eifuku volcano, Marianas Trench Marine National Monument. Image courtesy of US NOAA and Wikipedia.

In his first article, Thermodynamics of the Origin of Life, Dr. Miller critiques the popular scientific view that life originated in a “system strongly driven away from equilibrium” (or more technically, a non-equilibrium dissipative system), such as “a pond subjected to intense sunlight or the bottom of the ocean near a hydrothermal vent flooding its surroundings with superheated water and high-energy chemicals.” Dr. Miller argues that this view is fatally flawed, on three counts:

First, no system could be maintained far from equilibrium for more than a limited amount of time. The sun is only out during the day, and superheated water at the bottom of the ocean would eventually migrate away from any hydrothermal vents. Any progress made toward forming a cell would be lost as the system reverted toward equilibrium (lower free energy) and thus away from any state approaching life. Second, the input of raw solar, thermal, or other forms of energy actually increase the entropy of the system, thus moving it in the wrong direction. For instance, the ultraviolet light from the sun or heat from hydrothermal vents would less easily form the complex chemical structures needed for life than break them apart. Finally, in non-equilibrium systems the differences in temperature, concentrations, and other variables act as thermodynamic forces which drive heat transfer, diffusion, and other thermodynamic flows. These flows create microscopic sources of entropy production, again moving the system away from any reduced-entropy state associated with life. In short, the processes occurring in non-equilibrium systems, as in their near-equilibrium counterparts, generally do the opposite of what is actually needed.

Unfortunately for Dr. Miller, none of the foregoing objections is particularly powerful, and most of them are out-of-date. I would also like to note for the record that while the hypothesis that life originated near a hydrothermal vent remains popular among origin-of-life theorists, the notion that this vent was located at “the bottom of the ocean” is now widely rejected. Dr. Brian Miller appears to be unaware of this. In fact, as far back as 1988, American chemist Stanley Miller, who became famous when he carried out the Miller-Urey experiment in 1952, had pointed out that long-chain molecules such as RNA and proteins cannot form in water without enzymes to help them. Additionally, leading origin-of-life researchers such as Dr. John Sutherland (of the Laboratory of Molecular Biology in Cambridge, UK) and Dr. Jack Szostak (of Harvard Medical School) have discovered that many of the chemical reactions leading to life depend heavily on the presence of ultraviolet light, which only comes from the sun. This rules out a deep-sea vent scenario. That’s old news. Let us now turn to Dr. Miller’s three objections to the idea of life originating in a non-equilibrium dissipative system.

(a) Could a non-equilibrium system be kept away from equilibrium?

Miller’s first objection is that non-equilibrium systems can’t be maintained for very long, which would mean that life would never have had time to form in the first place. However, a recent BBC article by Michael Marshall, titled, The secret of how life on Earth began (October 31, 2016), describes a scenario, proposed by origin-of-life researcher John Sutherland, which would evade the force of this objection. Life, Sutherland believes, may have formed very rapidly:

Sutherland has set out to find a “Goldilocks chemistry”: one that is not so messy that it becomes useless, but also not so simple that it is limited in what it can do. Get the mixture just complicated enough and all the components of life might form at once, then come together.

In other words, four billion years ago there was a pond on the Earth. It sat there for years until the mix of chemicals was just right. Then, perhaps within minutes, the first cell came into existence.

“But how, and where?” readers might ask. One plausible site for this event, put forward by origin-of-life expert Armen Mulkidjanian, is the geothermal ponds found near active volcanoes. Because these ponds would be continually receiving heat from volcanoes, they would not return to equilibrium at night, as Dr. Miller supposes.

Another likely location for the formation of life, proposed by John Sutherland, is a meteorite impact zone. This scenario also circumvents Miller’s first objection, as large-scale meteorite impacts would have melted the Earth’s crust, leading to geothermal activity and a continual supply of hot water. The primordial Earth was pounded by meteorites on a regular basis, and large impacts could have created volcanic ponds where life might have formed:

Sutherland imagines small rivers and streams trickling down the slopes of an impact crater, leaching cyanide-based chemicals from the rocks while ultraviolet radiation pours down from above. Each stream would have a slightly different mix of chemicals, so different reactions would happen and a whole host of organic chemicals would be produced.

Finally the streams would flow into a volcanic pond at the bottom of the crater. It could have been in a pond like this that all the pieces came together and the first protocells formed.

(b) Would an input of energy increase entropy?

What about Miller’s second objection, that the input of energy would increase the entropy of the system, thus making it harder for the complex chemical structures needed for life to form in the first place?

Here, once again, recent work in the field seems to be pointing to a diametrically opposite conclusion. A recent editorial on non-equilibrium dissipative systems (Nature Nanotechnology 10, 909 (2015)) discusses the pioneering work of Dr. Jeremy England, who maintains that the dissipation of heat in these systems can lead to the self-organization of complex systems, including cells:

The theoretical concepts presented are not new — they have been rigorously reported before in specialized physics literature — but, as England explains, recently there has been a number of theoretical advances that, taken together, might lead towards a more complete understanding of non-equilibrium phenomena. More specifically, the meaning of irreversibility in terms of the amount of work being dissipated as heat as a system moves on a particular trajectory between two states. It turns out, this principle is relatively general, and can be used to explain the self-organization of complex systems such as that observed in nanoscale assemblies, or even in cells.

Dr. England, who is by the way an Orthodox Jew, has derived a generalization of the second law of thermodynamics that holds for systems of particles that are strongly driven by an external energy source, and that can dump heat into a surrounding bath. All living things meet these two criteria. Dr. England has shown that over the course of time, the more likely evolutionary outcomes will tend to be the ones that absorbed and dissipated more energy from the environment’s external energy sources, on the way to getting there. In his own words: “This means clumps of atoms surrounded by a bath at some temperature, like the atmosphere or the ocean, should tend over time to arrange themselves to resonate better and better with the sources of mechanical, electromagnetic or chemical work in their environments.”
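
For readers who would like to see the shape of England’s result, the bound he derives in his 2013 paper in the Journal of Chemical Physics is, as I understand it, roughly of the following form (the notation is my paraphrase, not a quotation from the paper):

```latex
% Sketch of England's generalized second law for a system driven between two
% macrostates I and II while coupled to a heat bath at inverse temperature beta.
\[
  \beta \,\langle \Delta Q \rangle_{\mathrm{I}\to\mathrm{II}}
  \;+\;
  \ln\!\left[\frac{\pi(\mathrm{II}\to\mathrm{I})}{\pi(\mathrm{I}\to\mathrm{II})}\right]
  \;+\;
  \Delta S_{\mathrm{int}}
  \;\ge\; 0
\]
% <Delta Q> is the average heat released into the bath during the transition
% from I to II, pi(.) is the probability of making the forward or reverse
% transition, and Delta S_int is the change in the system's internal entropy.
```

In words: the harder a transition is to reverse, the more heat it must dissipate. That is the link England draws between dissipation and the emergence of durable, organized structures.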

There are two mechanisms by which a system might dissipate an increasing amount of energy over time. One such mechanism is self-replication; the other, greater structural self-organization. Natalie Wolchover handily summarizes Dr. England’s reasoning in an article in Quanta magazine, titled, A New Physics Theory of Life (January 22, 2014):

Self-replication (or reproduction, in biological terms), the process that drives the evolution of life on Earth, is one such mechanism by which a system might dissipate an increasing amount of energy over time. As England put it, “A great way of dissipating more is to make more copies of yourself.” In a September paper in the Journal of Chemical Physics, he reported the theoretical minimum amount of dissipation that can occur during the self-replication of RNA molecules and bacterial cells, and showed that it is very close to the actual amounts these systems dissipate when replicating. He also showed that RNA, the nucleic acid that many scientists believe served as the precursor to DNA-based life, is a particularly cheap building material. Once RNA arose, he argues, its “Darwinian takeover” was perhaps not surprising…

Besides self-replication, greater structural organization is another means by which strongly driven systems ramp up their ability to dissipate energy. A plant, for example, is much better at capturing and routing solar energy through itself than an unstructured heap of carbon atoms. Thus, England argues that under certain conditions, matter will spontaneously self-organize. This tendency could account for the internal order of living things and of many inanimate structures as well. “Snowflakes, sand dunes and turbulent vortices all have in common that they are strikingly patterned structures that emerge in many-particle systems driven by some dissipative process,” he said.
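
The “theoretical minimum amount of dissipation” for self-replicators that Wolchover mentions above can also be written down compactly. As best I can reconstruct it from England’s paper (again, this is my paraphrase rather than a quotation), the bound is roughly:

```latex
% Sketch of England's bound for a self-replicator with growth rate g, decay
% (reversion) rate delta, heat q released per replication event, and internal
% entropy change Delta s_int, in a bath at inverse temperature beta.
\[
  \beta\, q + \Delta s_{\mathrm{int}} \;\ge\; \ln\!\left(\frac{g}{\delta}\right)
\]
```

A replicator that multiplies quickly and is hard to undo (large g/δ) must pay for that irreversibility in dissipated heat, which is why England regards self-replication as such an effective channel for dissipation.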

Dr. Jeremy England is an MIT physicist. Dr. Miller is also a physicist. I find it puzzling (and more than a little amusing) that Dr. Miller believes that increasing the entropy of a non-equilibrium dissipative system will inhibit the formation of complex chemical structures required for life to exist, whereas Dr. England believes the exact opposite!

(c) Would local variations within non-equilibrium dissipative systems prevent life from forming?

What of Dr. Miller’s third objection, that differences in temperature, concentrations, and other variables arising within a non-equilibrium dissipative system will generate entropy on a microscopic scale, creating disturbances which move the system away from a reduced-entropy state, which is required for life to exist? Dr. David Ruelle, author of a recent Arxiv paper titled, The Origin of Life seen from the point of view of non-equilibrium statistical mechanics (January 29, 2017), appears to hold a contrary view. In his paper, Dr. Ruelle refrains from proposing a specific scenario for the origin of life; rather, his aim, as he puts it, is to describe “a few plausible steps leading to more and more complex states of pre-metabolic systems so that something like life may naturally arise.” Building on the work of Dr. Jeremy England and other authors in the field, Dr. Ruelle contends that “organized metabolism and replication of information” can spontaneously arise from “a liquid bath (water containing various solutes) interacting with some pre-metabolic systems,” where the pre-metabolic systems are defined as “chemical associations which may be carried by particles floating in the liquid, or contained in cracks of a solid boundary of the liquid.” At the end of his paper, Dr. Ruelle discusses what he calls pre-biological systems, emphasizing their ability to remain stable in the face of minor fluctuations, and describing how their complexity can increase over time:

In brief, a pre-biological state is generally indecomposable. This means in particular that the fluctuations in its composition are not large. The prebiological state is also stable under small perturbations, except when those lead to a new metabolic pathway, changing the nature of the state.

We see thus a pre-biological system as a set of components undergoing an organized set of chemical reactions using a limited amount of nutrients in the surrounding fluid. The complexity of the pre-biological system increases as the amount of available nutrients decreases. The system can sustain a limited amount of disturbance. An excessive level of disturbance destroys the organized set of reactions on which the system is based: it dies.

Evidently Dr. Ruelle believes that non-equilibrium dissipative systems are resilient to minor fluctuations, and even capable of undergoing an increase in complexity, whereas Dr. Miller maintains that these fluctuations would prevent the formation of life.

If Dr. Miller believes that Dr. Ruelle is wrong, perhaps he would care to explain why.

2. The Origin of Life, Self-Organization, and Information

The three main structures phospholipids form spontaneously in solution: the liposome (a closed bilayer), the micelle and the bilayer. Image courtesy of Lady of Hats and Wikipedia.

I turn now to Dr. Miller’s second article, The Origin of Life, Self-Organization, and Information. In this article, Dr. Miller argues that there are profound differences between self-organizational order and the cellular order found in living organisms. Non-equilibrium dissipative systems might be able to generate the former kind of order, but not the latter.

The main reason for the differences between self-organizational and cellular order is that the driving tendencies in non-equilibrium systems move in the opposite direction to what is needed for both the origin and maintenance of life. First, all realistic experiments on the genesis of life’s building blocks produce most of the needed molecules in very small concentrations, if at all. And, they are mixed together with contaminants, which would hinder the next stages of cell formation. Nature would have needed to spontaneously concentrate and purify life’s precursors…

Concentration of some of life’s precursors could have taken place in an evaporating pool, but the contamination problem would then become much worse since precursors would be greatly outnumbered by contaminants. Moreover, the next stages of forming a cell would require the concentrated chemicals to dissolve back into some larger body of water, since different precursors would have had to form in different locations with starkly different initial conditions…

In addition, many of life’s building blocks come in both right and left-handed versions, which are mirror opposites. Both forms are produced in all realistic experiments in equal proportions, but life can only use one of them: in today’s life, left-handed amino acids and right-handed sugars. The origin of life would have required one form to become increasingly dominant, but nature would drive a mixture of the two forms toward equal percentages, the opposite direction.

(a) Why origin-of-life theorists are no longer obsessed with purity

Dr. Miller’s contention that contaminants would hinder the formation of living organisms has been abandoned by modern origin-of-life researchers, as BBC journalist Michael Marshall reports in his 2016 article, The secret of how life on Earth began. After describing how researcher John Sutherland was able to successfully assemble a nucleotide using a messy solution that contained a contaminant (phosphate) at the outset, leading Sutherland to hypothesize that for life to originate, there has to be “an optimum level of mess” (neither too much nor too little), Marshall goes on to discuss the pioneering experiments of another origin-of-life researcher, Jack Szostak, whose work has led him to espouse the same conclusion as Sutherland:

Szostak now suspects that most attempts to make the molecules of life, and to assemble them into living cells, have failed for the same reason: the experiments were too clean.

The scientists used the handful of chemicals they were interested in, and left out all the other ones that were probably present on the early Earth. But Sutherland’s work shows that, by adding a few more chemicals to the mix, more complex phenomena can be created.

Szostak experienced this for himself in 2005, when he was trying to get his protocells to host an RNA enzyme. The enzyme needed magnesium, which destroyed the protocells’ membranes.

The solution was a surprising one. Instead of making the vesicles out of one pure fatty acid, they made them from a mixture of two. These new, impure vesicles could cope with the magnesium – and that meant they could play host to working RNA enzymes.

What’s more, Szostak says the first genes might also have embraced messiness.

Modern organisms use pure DNA to carry their genes, but pure DNA probably did not exist at first. There would have been a mixture of RNA nucleotides and DNA nucleotides.

In 2012 Szostak showed that such a mixture could assemble into “mosaic” molecules that looked and behaved pretty much like pure RNA. These jumbled RNA/DNA chains could even fold up neatly.

This suggested that it did not matter if the first organisms could not make pure RNA, or pure DNA…

(b) Getting it together: how the first cell may have formed

Dr. Miller also argues that even if the components of life were able to form in separate little pools, the problem of how they came to be integrated into a living cell would still remain. In recent years, Dr. John Sutherland has done some serious work on this problem. Sutherland describes his own preferred origin-of-life scenario in considerable detail in a paper titled, The Origin of Life — Out of the Blue (Angewandte Chemie, International Edition, 2016, 55, 104-121):

Several years ago, we realised that … polarisation of the field was severely hindering progress, and we planned a more holistic approach. We set out to use experimental chemistry to address two questions, the previously assumed answers to which had led to the polarisation of the field: “Are completely different chemistries needed to make the various subsystems?” [and] “Would these chemistries be compatible with each other?”

With twelve amino acids, two ribonucleotides and the hydrophilic moiety of lipids synthesised by common chemistry, we feel that we have gone a good way to answering the first of the questions we posed at the outset. “Are completely different chemistries needed to make the various subsystems?” — we would argue no! We need to find ways of making the purine ribonucleotides, but hydrogen cyanide 1 is already strongly implicated as a starting material. We also need to find ways of making the hydrophobic chains of lipids, and maybe a few other amino acids, but there is hope in reductive homologation chemistry or what we have called “cyanosulfidic protometabolism.” …

The answer to the second question — “Would these chemistries be compatible with each other?” — is a bit more vague (thus far). The chemistries associated with the different subsystems are variations on a theme, but to operate most efficiently some sort of separation would seem to be needed. Because a late stage of our scenario has small streams or rivulets flowing over ground sequentially leaching salts and other compounds as they are encountered (Figure 17), it provides a very simple way in which variants of the chemistry could play out separately before all the products became mixed.

Separate streams might encounter salts and other compounds in different orders and be exposed to solar radiation differently. Furthermore streams might dry out and the residues become heated through geothermal activity before fresh inflow of water. If streams with different flow chemistry histories then merged, convergent synthesis might occur at the confluence and downstream thereof, or products might simply mix. It would be most plausible if only a few streams were necessary for the various strands of the chemistry to operate efficiently before merger. Our current working model divides the reaction network up such that the following groups of building blocks would be made separately: ribonucleotides; alanine, threonine, serine and glycine; glycerol phosphates, valine and leucine; aspartic acid, asparagine, glutamic acid and glutamine; and arginine and proline. Because the homologation of all intermediates uses hydrogen cyanide (1), products of reductive homologation of (1) — especially glycine — could be omnipresent.

Since then, Sutherland has written another paper, titled, Opinion: Studies on the origin of life — the end of the beginning (Nature Reviews Chemistry 1, article number: 0012 (2017), doi:10.1038/s41570-016-0012), in which he further develops his scenario.

(c) How did life come to be left-handed?

Two enantiomers [mirror images] of a generic amino acid which is chiral [left- or right-handed]. Image courtesy of Wikipedia.

The left-handedness of amino acids in living things, coupled with the fact that organisms contain only right-handed nucleotides, has puzzled origin-of-life theorists since the time of Pasteur. Does this rule out a natural origin for living things, as Dr. Miller thinks? It is interesting to note that Pasteur himself was wary of drawing this conclusion, according to a biography written by his grandson, and he even wrote: “I do not judge it impossible.” (René Dubos, Louis Pasteur: Free Lance of Science, Da Capo Press, Inc., 1950. p 396.)

Pasteur’s caution turns out to have been justified. According to a 2015 report by Claudia Lutz in Phys.org titled, Straight up, with a twist: New model derives homochirality from basic life requirements, scientists at the University of Illinois have recently come up with a simple model which explains the chirality found in living organisms in terms of just two basic properties: self-replication and disequilibrium.

The Illinois team wanted to develop a … model, … based on only the most basic properties of life: self-replication and disequilibrium. They showed that with only these minimal requirements, homochirality appears when self-replication is efficient enough.

The model … takes into account the chance events involving individual molecules — which chiral self-replicator happens to find its next substrate first. The detailed statistics built into the model reveal that if self-replication is occurring efficiently enough, this incidental advantage can grow into dominance of one chirality over the other.

The work leads to a key conclusion: since homochirality depends only on the basic principles of life, it is expected to appear wherever life emerges, regardless of the surrounding conditions.
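
To make the Illinois result a little more concrete, here is a toy stochastic simulation of my own, offered purely as an illustration of the mechanism; it is not the Illinois group’s model, and the rate constants and population sizes are arbitrary. Two mirror-image self-replicators, D and L, replicate autocatalytically from a shared substrate and decay back into it. When replication is efficient, chance fluctuations in which replicator happens to find its substrate first are amplified until one handedness drives the other to extinction:

```python
import random

def simulate(k_rep=1.0, k_deg=0.2, substrate=200, d0=5, l0=5,
             max_steps=1_000_000, seed=None):
    """Toy Gillespie-style simulation of competing chiral self-replicators.

    Reactions (rate constants are illustrative, not taken from any paper):
      D + S -> 2D    autocatalytic replication, propensity k_rep * D * S
      L + S -> 2L    mirror reaction,           propensity k_rep * L * S
      D -> S         decay back to substrate,   propensity k_deg * D
      L -> S         decay back to substrate,   propensity k_deg * L
    Returns the enantiomeric excess (D - L) / (D + L) when the run stops.
    """
    rng = random.Random(seed)
    D, L, S = d0, l0, substrate
    for _ in range(max_steps):
        if D == 0 or L == 0:            # one handedness has gone extinct
            break
        a = [k_rep * D * S, k_rep * L * S, k_deg * D, k_deg * L]
        r = rng.uniform(0.0, sum(a))
        if r < a[0]:
            D += 1; S -= 1              # a D molecule replicates
        elif r < a[0] + a[1]:
            L += 1; S -= 1              # an L molecule replicates
        elif r < a[0] + a[1] + a[2]:
            D -= 1; S += 1              # a D molecule decays back to substrate
        else:
            L -= 1; S += 1              # an L molecule decays back to substrate
    total = D + L
    return (D - L) / total if total else 0.0

if __name__ == "__main__":
    # With efficient replication, almost every run ends at an excess of +1 or -1.
    print([round(simulate(seed=i), 2) for i in range(10)])
```

Run it a few times and the enantiomeric excess almost always ends up at +1 or -1 (that is, homochirality), even though the two replicators obey exactly the same chemistry. In the published model the racemic state is also replenished by non-autocatalytic production of D and L, which is why homochirality there requires self-replication to be efficient enough to outpace it; this sketch leaves that term out for brevity.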

More recent work has provided a more detailed picture of how the single-handedness of life’s molecules may have arisen. In 2016, Rowena Ball and John Brindley proposed that since hydrogen peroxide is the smallest, simplest molecule to exist as a pair of non-superimposable mirror images, or enantiomers, its interactions with ribonucleic acids may have led to amplification of D-ribonucleic acids and extinction of L-ribonucleic acids. Hydrogen peroxide was produced on the ancient Earth, more than 3.8 billion years ago, around the time that life emerged. Ball explains how this favored the emergence of right-handed nucleotide chains, in a recent article in Phys.org:

It is thought that a small excess of L-amino acids was “rained” onto the ancient Earth by meteorite bombardment, and scientists have found that a small excess of L-amino acids can catalyse formation of small excesses of D-nucleotide precursors. This, we proposed, led to a marginal excess of D-polynucleotides over L-polynucleotides, and a bias to D-chains of longer mean length than L-chains in the RNA world.

In the primordial soup, local excesses of one or other hydrogen peroxide enantiomer would have occurred. Specific interactions with polynucleotides destabilise the shorter L-chains more than the longer, more robust, D-chains. With a greater fraction of L-chains over D-chains destabilised, hydrogen peroxide can then “go in for the kill”, with one enantiomer (let us say M) preferentially oxidising L-chains.

Overall, this process works in favour of increasing the fraction and average length of D-chains at the expense of L-species.
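
The arithmetic behind that last claim is easy to sketch. Suppose, purely for illustration (the kinetics in Ball and Brindley’s actual model are more involved), that D-chains and L-chains both grow at some common rate r, while the peroxide enantiomer in question destroys L-chains at a higher rate than D-chains, k_L > k_D. Then:

```latex
% Exponential amplification of an initially tiny bias under asymmetric decay.
\[
  D(t) = D_0\, e^{(r - k_D)t},
  \qquad
  L(t) = L_0\, e^{(r - k_L)t},
  \qquad
  \frac{D(t)}{L(t)} = \frac{D_0}{L_0}\, e^{(k_L - k_D)t}.
\]
```

Even a tiny difference between k_L and k_D is compounded exponentially, so the enantiomeric excess (D − L)/(D + L) is driven towards 1: the D-chains take over, just as Ball describes.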

An outdated argument relating to proteins

In his article, Dr. Miller puts forward an argument for proteins having been intelligently designed, which unfortunately rests on faulty premises:

Proteins eventually break down, and they cannot self-replicate. Additional machinery was also needed to constantly produce new protein replacements. Also, the proteins’ sequence information had to have been stored in DNA using some genetic code, where each amino acid was represented by a series of three nucleotides know as a codon in the same way English letters are represented in Morse Code by dots and dashes. However, no identifiable physical connection exists between individual amino acids and their respective codons. In particular, no amino acid (e.g., valine) is much more strongly attracted to any particular codon (e.g., GTT) than to any other. Without such a physical connection, no purely materialistic process could plausibly explain how amino acid sequences were encoded into DNA. Therefore, the same information in proteins and in DNA must have been encoded separately.

The problem with this argument is that scientists have known for decades that its key premise is false. Dennis Venema (Professor of Biology at Trinity Western University in Langley, British Columbia) explains why, in a Biologos article titled, Biological Information and Intelligent Design: Abiogenesis and the origins of the genetic code (August 25, 2016):

Several amino acids do in fact directly bind to their codon (or in some cases, their anticodon), and the evidence for this has been known since the late 1980s in some cases. Our current understanding is that this applies only to a subset of the 20 amino acids found in present-day proteins.

In Venema’s view, this finding lends support to a particular hypothesis about the origin of the genetic code, known as the direct templating hypothesis, which proposes that “the tRNA system is a later addition to a system that originally used direct chemical interactions between amino acids and codons.” Venema continues: “In this model, then, the original code used a subset of amino acids in the current code, and assembled proteins directly on mRNA molecules without tRNAs present. Later, tRNAs would be added to the system, allowing for other amino acids—amino acids that cannot directly bind mRNA — to be added to the code.” The model also makes a specific prediction: if it is correct, then “amino acids would directly bind to their codons on mRNA, and then be joined together by a ribozyme (the ancestor of the present-day ribosome).”

Venema concludes:

The fact that several amino acids do in fact bind their codons or anticodons is strong evidence that at least part of the code was formed through chemical interactions — and, contra [ID advocate Stephen] Meyer, is not an arbitrary code. The code we have — or at least for those amino acids for which direct binding was possible — was indeed a chemically favored code. And if it was chemically favored, then it is quite likely that it had a chemical origin, even if we do not yet understand all the details of how it came to be.

I will let readers draw their own conclusions as to who has the better of the argument here: Venema or Miller.

3. Free Energy and the Origin of Life: Natural Engines to the Rescue

Photograph of American scientist Josiah Willard Gibbs (1839-1903), discoverer of Gibbs free energy, taken about 1895. Image courtesy of Wikipedia.

In his third article, Free Energy and the Origin of Life: Natural Engines to the Rescue, Dr. Miller argues that the emergence of life would have required chemicals to move from a state of high entropy and low free energy to one of low entropy and high free energy. (Gibbs free energy can be defined as “a thermodynamic potential that can be used to calculate the maximum of reversible work that may be performed by a thermodynamic system at a constant temperature and pressure.”) However, spontaneous natural processes always tend towards lower free energy. An external source of energy doesn’t help matters, either, as it increases entropy. Miller concludes that life’s formation was intelligently directed:

Now, I will address attempts to overcome the free-energy barriers through the use of natural engines. To summarize, a fundamental hurdle facing all origin-of-life theories is the fact that the first cell must have had a free energy far greater than its chemical precursors. And spontaneous processes always move from higher free energy to lower free energy. More specifically, the origin of life required basic chemicals to coalesce into a state of both lower entropy and higher energy, and no such transitions ever occur without outside help in any situation, even at the microscopic level.

Attempted solutions involving external energy sources fail since the input of raw energy actually increases the entropy of the system, moving it in the wrong direction. This challenge also applies to all appeals to self-replicating molecules, auto-catalytic chemical systems, and self-organization. Since all of these processes proceed spontaneously, they all move from higher to lower free energy, much like rocks rolling down a mountain. However, life resides at the top of the mountain. The only possible solutions must assume the existence of machinery that processes energy and directs it toward performing the required work to properly organize and maintain the first cell.
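
Before responding, it may help to put the definition of free energy quoted above into symbols. This is standard textbook thermodynamics, not anything peculiar to Miller’s argument:

```latex
% Gibbs free energy and the criterion for spontaneity at constant T and P.
\[
  G = H - TS = U + PV - TS,
  \qquad
  \Delta G = \Delta H - T\,\Delta S
  \quad \text{(at constant } T \text{ and } P\text{)}.
\]
```

At constant temperature and pressure, a process is thermodynamically spontaneous only if ΔG ≤ 0; that is the “rocks rolling down a mountain” behaviour Miller invokes. The question at issue is whether coupled, driven reactions can push a sub-system uphill in free energy while the total entropy of system plus surroundings still increases.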

But as we saw above, Dr. Jeremy England maintains that increasing the entropy of a non-equilibrium dissipative system can promote the formation of complex chemical structures required for life to exist. Miller’s claim that external energy sources invariably increase the entropy of the system is correct if we consider the system as a whole, including the bath into which non-equilibrium dissipative systems can dump their heat. Internally, however, self-organization can reduce entropy, a fact which undermines Miller’s attempt to demonstrate the impossibility of abiogenesis.
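
The entropy bookkeeping behind this point can be made explicit. Splitting the entropy change of an open system into the entropy produced inside it and the entropy exchanged with its surroundings (the standard decomposition used in non-equilibrium thermodynamics):

```latex
% Entropy balance for an open system: internal production plus exchange term.
\[
  dS = d_i S + d_e S,
  \qquad
  d_i S \ge 0,
  \qquad
  d_e S = \frac{\delta Q}{T} \quad \text{(heat exchange, of either sign)}.
\]
```

The second law constrains only the production term d_iS. If a system exports enough entropy to its surroundings (for example, by dumping heat into a cooler bath), the exchange term d_eS can be negative enough that the system’s own entropy falls, even though the entropy of system plus bath rises. That loophole is the one which non-equilibrium dissipative structures, and on England’s account eventually cells, exploit.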

4. Origin of Life and Information — Some Common Myths

A game of English-language Scrabble in progress. Image courtesy of Wikipedia.

In his final installment, Origin of Life and Information — Some Common Myths, Dr. Miller takes aim at the view that a reduction in entropy is sufficient to account for the origin of biological information. Miller puts forward three arguments which he believes demonstrate the absurdity of this idea.

(a) Could a reduction in entropy generate functional information?

A bowl of alphabet soup nearly full, and nearly empty, spelling “THE END” in the latter case. Image courtesy of strawberryblues and Wikipedia.

Let’s look at Miller’s first argument, that a reduction in entropy could never account for the highly specific sequencing of amino acids in proteins, or nucleotides in DNA:

A common attempt to overcome the need for information in the first cell is to equate information to a reduction in entropy, often referred to as the production of “negative entropy” or N-entropy. … However, entropy is not equivalent to the information in cells, since the latter represents functional information. To illustrate the difference, imagine entering the kitchen and seeing a bowl of alphabet soup with several letters arranged in the middle as follows:

REST TODAY AND DRINK PLENTY OF FLUIDS

I HOPE YOU FEEL BETTER SOON

You would immediately realize that some intelligence, probably your mother, arranged the letters for a purpose. Their sequence could not possibly be explained by the physics of boiling water or the chemistry of the pasta.

… You would immediately recognize that a reduction in thermal entropy has no physical connection to the specific ordering of letters in a meaningful message. The same principle holds true in relation to the origin of life for the required sequencing of amino acids in proteins or nucleotides in DNA.

This, I have to say, is a fallacious argument, which I refuted in my online review of Dr. Douglas Axe’s recently published book, Undeniable. Dr. Miller, like Dr. Axe, is confusing functional information (which is found in living things) with the semantic information found in a message written with the letters of the alphabet, such as “REST TODAY AND DRINK PLENTY OF FLUIDS.” In fact, functional information is much easier to generate than semantic information, because it doesn’t have to form words, conform to the rules of syntax, or make sense at the semantic level:

The concepts of meaning and function are quite different, for reasons I shall now explain.

In order for an accidentally generated string of letters to convey a meaningful message, it needs to satisfy three very stringent conditions, each more difficult than the last: first, the letters need to be arranged into meaningful words; second, the sequence of words has to conform to the rules of syntax; and finally, the sequence of words has to make sense at the semantic level: in other words, it needs to express a meaningful proposition. For a string of letters generated at random to meet all of these conditions would indeed be fantastically improbable. But here’s the thing: living things don’t need to satisfy any of these conditions... The sequence of amino acids in a protein needs to do just one thing: it needs to fold up into a shape that can perform a biologically useful task. And that’s it. Generating something useful by chance – especially something with enough useful functions to be called alive – is a pretty tall order, but because living things lack the extra dimensions of richness found in messages that carry a semantic meaning, they’re going to be a lot easier to generate by chance than (say) instruction manuals or cook books… In practical terms, that means that given enough time, life just might arise.

Let me be clear: I am not trying to argue that a reduction in entropy is sufficient to account for the origin of biological information. That strikes me as highly unlikely. What I am arguing, however, is that appeals to messages written in text are utterly irrelevant to the question of how biological information arose. I might also add that most origin-of-life theorists don’t believe that the first living things contained highly specific sequences of “amino acids in proteins or nucleotides in DNA,” as Miller apparently thinks, because they probably lacked both proteins and DNA, if the RNA World hypothesis (discussed below) is correct. Miller is attacking a straw man.

(b) Can fixed rules account for the amino acid sequencing in proteins?

Dr. Miller’s second argument is that fixed rules, such as those governing non-linear dynamics processes, would be unable to generate the arbitrary sequences of amino acids found in proteins. These sequences perform useful biological functions, but are statistically random:

A related error is the claim that biological information could have come about by some complex systems or non-linear dynamics processes. The problem is that all such processes are driven by physical laws or fixed rules. And, any medium capable of containing information (e.g., Scrabble tiles lined up on a board) cannot constrain in any way the arrangement of the associated symbols/letters. For instance, to type a message on a computer, one must be free to enter any letters in any order. If every time one typed an “a” the computer automatically generated a “b,” the computer could no longer contain the information required to create meaningful sentences. In the same way, amino acid sequences in the first cell could only form functional proteins if they were free to take on any order…

…To reiterate, no natural process could have directed the amino acid sequencing in the first cell without destroying the chains’ capacity to contain the required information for proper protein folding. Therefore, the sequences could never be explained by any natural process but only by the intended goal of forming the needed proteins for the cell’s operations (i.e., teleologically).

Dr. Miller has a valid point here: it is extremely unlikely that fixed rules, by themselves, can explain the origin of biological information. Unfortunately, he spoils his case by likening the information in a protein to a string of text. As we have seen, the metaphor fails on three counts: a string of text must form meaningful words, obey the rules of syntax, and make sense at the semantic level, whereas a protein sequence need only fold into a shape that performs a useful task. Proteins contain functional information, not semantic information.

Dr. Miller also makes an illicit inference from the statement that “no natural process could have directed the amino acid sequencing in the first cell” to the conclusion that “the sequences could never be explained by any natural process,” but only by a teleological process of Intelligent Design. This inference is unwarranted on two counts. First, it assumes that the only kind of natural explanation for the amino acid sequences in proteins would have to be some set of fixed rules (or laws) directing their sequence, which overlooks the possibility that functional sequences may have arisen by chance. (At this point, Intelligent Design advocates will be sure to cite Dr. Douglas Axe’s estimate that only 1 in 10^77 sequences of 150 amino acids are capable of folding up and performing some useful biological function, but this figure is a myth.)

Second, teleology may be either intrinsic (e.g. hearts are of benefit to animals, by virtue of the fact that they pump blood around the body) or extrinsic (e.g. a machine which is designed for the benefit of its maker), or both. Even if one could show that a teleological process was required to explain the origin of proteins, it still needs to be shown that this process was designed by an external Intelligence.

(c) Can stereochemical affinity account for the origin of the genetic code?

Origin-of-life researcher Eugene Koonin in May 2013. Koonin is a Russian-American biologist and Senior Investigator at the National Center for Biotechnology Information (NCBI). Image courtesy of Konrad Foerstner and Wikipedia.

Dr. Miller then goes on to criticize the stereochemical affinity hypothesis for the origin of the genetic code, citing the work of Dr. Eugene Koonin:

A third error relates to attempts to explain the genetic code in the first cell by a stereochemical affinity between amino acids and their corresponding codons. According to this model, naturally occurring chemical processes formed the basis for the connection between amino acids and their related codons (nucleotide triplets). Much of the key research promoting this theory was conducted by biochemist Michael Yarus. He also devised theories on how this early stereochemical era could have evolved into the modern translation system using ribosomes, tRNAs, and supporting enzymes. His research and theories are clever, but his conclusions face numerous challenges.

…For instance, Andrew Ellington’s team questioned whether the correlations in these studies were statistically significant, and they argued that his theories for the development of the modern translation system were untenable. Similarly, Eugene Koonin found that the claimed affinities were weak at best and generally unconvincing. He argued instead that the code started as a “frozen accident” undirected by any chemical properties of its physical components.

I should point out that some of the articles which Miller links to here are rather old. For example, the article by Andrew Ellington’s team questioning the statistical significance of the correlations identified by Yarus dates back to 2000, while Koonin’s critique dates back to 2008. This is significant, as Yarus et al. published an article in 2009 presenting some of their strongest statistical evidence for the stereochemical model they were proposing:

Using recent sequences for 337 independent binding sites directed to 8 amino acids and containing 18,551 nucleotides in all, we show a highly robust connection between amino acids and cognate coding triplets within their RNA binding sites. The apparent probability (P) that cognate triplets around these sites are unrelated to binding sites is [about] 5.3 x 10^-45 for codons overall, and P [is about] 2.1 x 10^-46 for cognate anticodons. Therefore, some triplets are unequivocally localized near their present amino acids. Accordingly, there was likely a stereochemical era during evolution of the genetic code, relying on chemical interactions between amino acids and the tertiary structures of RNA binding sites. (Michael Yarus, Jeremy Joseph Widmann and Rob Knight, “RNA–Amino Acid Binding: A Stereochemical Era for the Genetic Code,” in Journal of Molecular Evolution, November 2009; 69(5):406-29, DOI 10.1007/s00239-009-9270-1.)
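
For readers wondering how a figure like P [about] 5.3 x 10^-45 is arrived at, the underlying logic is an enrichment test: count how often the cognate coding triplets fall inside the experimentally selected binding sites, and compare that count with what random placement would give. The sketch below is my own illustration of that general logic in Python; it is not a reconstruction of Yarus et al.’s actual statistical procedure, and the sequences, binding-site coordinates and triplet sets fed to it are entirely hypothetical:

```python
import random

def count_in_window(seq, start, length, triplets):
    """Count occurrences of any of the given triplets inside seq[start:start+length]."""
    window = seq[start:start + length]
    return sum(window[i:i + 3] in triplets for i in range(len(window) - 2))

def enrichment_p_value(records, n_shuffles=10_000, seed=0):
    """Permutation test: are cognate triplets enriched inside binding sites?

    `records` is a list of (sequence, site_start, site_length, cognate_triplets)
    tuples -- e.g. aptamer sequences selected to bind one amino acid, the mapped
    binding-site window within each, and that amino acid's codons or anticodons.
    The null model slides each binding-site window to a random position within
    its own sequence and re-counts; the p-value is the fraction of shuffles
    whose total count matches or beats the observed total.
    """
    rng = random.Random(seed)
    observed = sum(count_in_window(seq, start, length, trips)
                   for seq, start, length, trips in records)
    hits = 0
    for _ in range(n_shuffles):
        total = 0
        for seq, _start, length, trips in records:
            rand_start = rng.randrange(0, max(1, len(seq) - length + 1))
            total += count_in_window(seq, rand_start, length, trips)
        hits += (total >= observed)
    return (hits + 1) / (n_shuffles + 1)   # add-one so we never report exactly 0

if __name__ == "__main__":
    # Entirely made-up toy data: two "aptamer" sequences whose mapped binding-site
    # windows happen to be rich in the cognate triplet "AUG".
    toy = [
        ("GCGAUGAUGAUGCCGUAGCCGGCAUCCGAUCGGAU", 3, 9, {"AUG"}),
        ("UUAGCAUGAUGGCCAAUCGCGGCUAAGCGCAUAAU", 5, 8, {"AUG"}),
    ]
    print(enrichment_p_value(toy))
```

With hundreds of real binding sites and thousands of nucleotides, the same logic (applied with far more care about sequence composition and multiple testing) can drive the estimated probability of a chance association down to the astronomically small values Yarus and colleagues report.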

Dr. Miller also neglects to mention that while Dr. Eugene Koonin did indeed critique Yarus’ claims in the more recent 2017 article he linked to, Koonin actually proposed his own variant of the stereochemical model for the origin of the genetic code:

The conclusion that the mRNA decoding in the early translation system was performed by RNA molecules, conceivably, evolutionary precursors of modern tRNAs (proto-tRNAs) [89], implies a stereochemical model of code origin and evolution, but one that differs from the traditional models of this type in an important way (Figure 2). Under this model, the proto-RNA-amino acid interactions that defined the specificity of translation would not involve the anticodon (let alone codon) that therefore could be chosen arbitrarily and fixed through frozen accident. Instead, following the reasoning outlined previously [90], the amino acids would be recognized by unique pockets in the tertiary structure of the proto-tRNAs. The clustering of codons for related amino acids naturally follows from code expansion by duplication of the proto-tRNAs; the molecules resulting from such duplications obviously would be structurally similar and accordingly would bind similar amino acids, resulting in error minimization, in accord with Crick’s proposal (Figure 2).

Apparently, the reason why Koonin considers that “attempts to decipher the primordial stereochemical code by comparative analysis of modern translation system components are largely futile” is that “[o]nce the amino acid specificity determinants shifted from the proto-tRNAs to the aaRS, the amino acid-binding pockets in the (proto) tRNAs deteriorated such that modern tRNAs showed no consistent affinity to the cognate amino acids.” Koonin proposes that “experiments on in vitro evolution of specific aminoacylating ribozymes that can be evolved quite easily and themselves seem to recapitulate a key aspect of the primordial translation system” might help scientists to reconstruct the original code, at some future date.

Although (as Dr. Miller correctly notes) Koonin personally favors the frozen accident theory for the origin of the genetic code, he irenically proposes that “stereochemistry, biochemical coevolution, and selection for error minimization could have contributed synergistically at different stages of the evolution of the code [43] — along with frozen accident.”

Positive evidence against the design of the genetic code

But there is much more to Koonin’s article. In it, Koonin presents damning evidence against the hypothesis that the standard genetic code (or SGC) was designed. The problem is that despite the SGC’s impressive ability to keep the number of mutational and translational errors very low, there are lots of other genetic codes which are even better:

Extensive quantitative analyses that employed cost functions differently derived from physico-chemical properties of amino acids have shown that the code is indeed highly resilient, with the probability to pick an equally robust random code being on the order of 10^-7–10^-8 [14,15,61,62,63,64,65,66,67,68]. Obviously, however, among the ~10^84 possible random codes, there is a huge number with a higher degree of error minimization than the SGC [standard genetic code – VJT]. Furthermore, the SGC is not a local peak on the code fitness landscape because certain local rearrangements can increase the level of error minimization; quantitatively, the SGC is positioned roughly halfway from an average random code to the summit of the corresponding local peak [15] (Figure 1).

Let’s do the math. There are about 10^84 possible genetic codes. The one used by living things is in the top 1 in 100 million (or 1 in 10^8). That means that there are 10^76 possible genetic codes that are better than it. To make matters worse, it’s not even the best code in its local neighborhood. It’s not “on top of a hill,” as it were. It’s about half-way up the hill. Now ask yourself: if the genetic code were intelligently designed, is this a result that one would expect?
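
To give a flavour of how figures like “one random code in a hundred million is as robust as the SGC” are obtained, here is a small Monte Carlo sketch of my own. It follows the general recipe used in the literature: permute the amino-acid assignments among the synonymous codon blocks, score each code by how much an amino-acid property changes under single-nucleotide mutations, and ask what fraction of random codes score better. For simplicity I use the Kyte-Doolittle hydropathy scale as the property rather than the polar-requirement scale used in the published analyses, so the exact fraction it prints will not match the published figures:

```python
import random
from itertools import product

# Standard genetic code, built from the usual one-line representation
# (first codon base varies slowest, in the order T, C, A, G).
BASES = "TCAG"
AA_STRING = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
             "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
CODONS = ["".join(c) for c in product(BASES, repeat=3)]
SGC = dict(zip(CODONS, AA_STRING))        # codon -> one-letter amino acid, '*' = stop

# Kyte-Doolittle hydropathy values, standing in for the polar-requirement scale.
HYDROPATHY = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
              "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
              "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
              "Y": -1.3, "V": 4.2}

def code_cost(code):
    """Mean squared hydropathy change over all single-nucleotide substitutions
    that swap one amino acid for a different one (stop codons are ignored)."""
    total, n = 0.0, 0
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mut = code[codon[:pos] + b + codon[pos + 1:]]
                if mut == "*" or mut == aa:
                    continue
                total += (HYDROPATHY[aa] - HYDROPATHY[mut]) ** 2
                n += 1
    return total / n

def random_code(rng):
    """Random code with the SGC's block structure: the 20 amino acids are
    shuffled among the 20 synonymous codon blocks; stop codons stay put."""
    blocks = {}
    for codon, aa in SGC.items():
        blocks.setdefault(aa, []).append(codon)
    aas = [aa for aa in blocks if aa != "*"]
    shuffled = aas[:]
    rng.shuffle(shuffled)
    new_code = {c: "*" for c in blocks["*"]}
    for old_aa, new_aa in zip(aas, shuffled):
        for codon in blocks[old_aa]:
            new_code[codon] = new_aa
    return new_code

if __name__ == "__main__":
    rng = random.Random(0)
    sgc_cost = code_cost(SGC)
    trials = 10_000
    better = sum(code_cost(random_code(rng)) < sgc_cost for _ in range(trials))
    print(f"SGC cost: {sgc_cost:.3f}")
    print(f"Random codes beating the SGC: {better} of {trials}")
```

The published estimates do the same kind of counting with better-motivated cost functions and vastly more random codes; the point relevant to Miller is that the counting cuts both ways, since the same calculation that shows the SGC to be unusually robust also shows that an enormous number of codes would have been more robust still.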

It is disappointing that Dr. Miller fails to appreciate the significance of this evidence against design, presented by Koonin. Sadly, he never even mentions it in his article.

The RNA World – fatally flawed?

A comparison of RNA (left) with DNA (right), showing the helices and nucleobases each employs. Image courtesy of Access Excellence and Wikipedia.

Finally, Dr. Miller concludes with a number of critical remarks about the RNA world hypothesis. Before we proceed further, a definition of the hypothesis might be in order:

All RNA World hypotheses include three basic assumptions: (1) At some time in the evolution of life, genetic continuity was assured by the replication of RNA; (2) Watson-Crick base-pairing was the key to replication; (3) genetically encoded proteins were not involved as catalysts. RNA World hypotheses differ in what they assume about life that may have preceded the RNA World, about the metabolic complexity of the RNA World, and about the role of small-molecule cofactors, possibly including peptides, in the chemistry of the RNA World.
(Michael P. Robertson and Gerald F. Joyce, The Origins of the RNA World, Cold Spring Harbor Perspectives in Biology, May 2012; 4(5): a003608.)

In the article cited above, Robertson and Joyce summarize the evidence for the RNA World:

There is now strong evidence indicating that an RNA World did indeed exist on the early Earth. The smoking gun is seen in the structure of the contemporary ribosome (Ban et al. 2000; Wimberly et al. 2000; Yusupov et al. 2001). The active site for peptide-bond formation lies deep within a central core of RNA, whereas proteins decorate the outside of this RNA core and insert narrow fingers into it. No amino acid side chain comes within 18 Å of the active site (Nissen et al. 2000). Clearly, the ribosome is a ribozyme (Steitz and Moore 2003), and it is hard to avoid the conclusion that, as suggested by Crick, “the primitive ribosome could have been made entirely of RNA” (1968).

A stronger version of the RNA World hypothesis is that life on Earth began with RNA. In the article cited above, Robertson and Joyce are very frank about the difficulties attending this hypothesis, even referring to it as “The Prebiotic Chemist’s Nightmare.” Despite their sympathies for this hypothesis, the authors suggest that it may be fruitful to consider “the alternative possibility that RNA was preceded by some other replicating, evolving molecule, just as DNA and proteins were preceded by RNA.”

The RNA World hypothesis has its scientific advocates and critics. In a recent article in Biology Direct, Harold S. Bernhardt describes it as “the worst theory of the early evolution of life (except for all the others),” in a humorous adaptation of Sir Winston Churchill’s famous comment on democracy. Referee Eugene Koonin agrees, noting that “no one has achieved bona fide self-replication of RNA which is the cornerstone of the RNA World,” but adding that “there is a lot going for the RNA World … whereas the other hypotheses on the origin of life are outright helpless.” Koonin continues: “As Bernhardt rightly points out, it is not certain that RNA was the first replicator but it does seem certain that it was the first ‘good’ replicator.”

In 2009, Gerald Joyce and Tracey Lincoln of the Scripps Research Institute in La Jolla, California, managed to create an RNA enzyme that replicates itself indirectly, by joining together two short pieces of RNA to create a second enzyme, which then joins together another two RNA pieces to recreate the original enzyme. Although the cycle was capable of being continued indefinitely, given an input of suitable raw materials, the enzymes were only able to do their job if they were given the correct RNA strands, which Joyce and Lincoln had to synthesize.

The RNA world hypothesis received a further boost in March 2015, when NASA scientists announced for the first time that, using the starting chemical pyrimidine, which is found in meteorites, they had managed to recreate three key components of DNA and RNA: uracil, cytosine and thymine. The scientists used an ice sample containing pyrimidine exposed to ultraviolet radiation under space-like conditions, in order to produce these essential ingredients of life. “We have demonstrated for the first time that we can make uracil, cytosine, and thymine, all three components of RNA and DNA, non-biologically in a laboratory under conditions found in space,” said Michel Nuevo, research scientist at NASA’s Ames Research Center, Moffett Field, California. “We are showing that these laboratory processes, which simulate conditions in outer space, can make several fundamental building blocks used by living organisms on Earth.”

As if that were not enough, more good news for the hypothesis emerged in 2016. RNA is composed of four different chemical building blocks: adenine (A), guanine (G), cytosine (C), and uracil (U). Back in 2009, a team of researchers led by John Sutherland showed a plausible series of chemical steps which might have given rise to cytosine and uracil, which are also known as pyrimidines, on the primordial Earth. However, Sutherland’s team wasn’t able to explain the origin of RNA’s purine building blocks, adenine and guanine. At last, a team of chemists led by Thomas Carell, at Ludwig Maximilian University of Munich in Germany, has filled in this gap in scientists’ knowledge. They’ve found a synthetic route for making purines. To be sure, problems remain, as reporter Robert Service notes in a recent article in Science magazine (‘RNA world’ inches closer to explaining origins of life, May 12, 2016):

…Steven Benner, a chemist and origin of life expert at the Foundation for Applied Molecular Evolution in Alachua, Florida… agrees that the newly suggested purine synthesis is a “major step forward” for the field. But even if it’s correct, he says, the chemical conditions that gave rise to the purines still don’t match those that Sutherland’s group suggests may have led to the pyrimidines. So just how As, Gs, Cs, and Us would have ended up together isn’t yet clear. And even if all the RNA bases were in the same place at the same time, it’s still not obvious what drove the bases to link up to form full-fledged RNAs, Benner says.

I conclude that whatever difficulties attend the RNA World hypothesis, they are not fatal ones. Miller’s case for the intelligent design of life is far from closed.

Conclusion

James M. Tour is a synthetic organic chemist, specializing in nanotechnology. Dr. Tour is the T. T. and W. F. Chao Professor of Chemistry, Professor of Materials Science and NanoEngineering, and Professor of Computer Science at Rice University in Houston, Texas. Image courtesy of Wikipedia.

At the beginning of this post, I invited my readers to consider the question of whether Dr. Miller has made a strong scientific case for Intelligent Design. Now, I would happily grant that he has highlighted a number of difficulties for any naturalistic theory of the origin of life. But I believe I have shown that Dr. Miller’s positive case for Intelligent Design consists largely of trying to put a full stop where science leaves a comma. Dr. Miller seems to have ignored recent developments in the field of origin-of-life research, which remove at least some of the difficulties he alludes to in his articles. I think an unbiased reader would have to conclude that Miller has failed to demonstrate, to even a high degree of probability, the need for a Designer of life.

I’d like to finish with two quotes from an online essay (Origin of Life, Intelligent Design, Evolution, Creation and Faith) by Dr. James Tour, a very fair-minded chemist who has written a lot about the origin of life:

I have been labeled as an Intelligent Design (sometimes called “ID”) proponent. I am not. I do not know how to use science to prove intelligent design although some others might. I am sympathetic to the arguments and I find some of them intriguing, but I prefer to be free of that intelligent design label. As a modern-day scientist, I do not know how to prove intelligent design using my most sophisticated analytical tools— the canonical tools are, by their own admission, inadequate to answer the intelligent design question. I cannot lay the issue at the doorstep of a benevolent creator or even an impersonal intelligent designer. All I can presently say is that my chemical tools do not permit my assessment of intelligent design.

and

Those who think scientists understand the issues of prebiotic chemistry are wholly misinformed. Nobody understands them. Maybe one day we will. But that day is far from today.

Amen to both.

528 thoughts on “Recycling bad arguments: ENV on the origin of life”

  1. phoodoo: Hear, hear to that.

    I guess it depends on how you define “process”. Mung defines a process as something intelligently guided. But most people would regard, for example, weather as a process – evaporation, precipitation, fronts, high and low pressure zones, trade winds, etc. Other processes might include mountain building and plate tectonics, orbits and orbital precession, and many others.

    So either these are not processes, or “process” is so all-encompassing as to be meaningless, because there’s nothing that’s NOT a process.

  2. Alan Fox: And selection (natural, artificial and sexual) is a design process. Selection designs. But I suspect you are alluding to some other design process I’m unaware of. What would that be?

    That seems misleading to me. Natural and sexual selection are iterated biased filtering processes, where the bias is constituted by whatever traits are extinction-avoiding in that generation relative to that niche, whether by virtue of being better adaptations or by virtue of increasing progeny.

    (Personally I’ve never been entirely sure what “sexual selection” is or what evidence there is for it. It seems to be invoked whenever there’s a trait that seemingly can’t be explained by natural selection. That feels ad hoc to me.)

    Design is something quite different. The process of designing something — say, a paper airplane — involves a back-and-forth between theoretical principles and material constraints. There’s a lot of experimentation as one learns the principles of aerodynamics, the weight of paper, what folds will cause lift or stability, etc. You can design paper airplanes for distance, for altitude, for rolls, for aesthetics. The most passionate defenders of ID seem to imagine that the Designer conceives of some body-plan and then magically creates the genes necessary for producing it. But that’s not at all how we human beings design stuff. We don’t have brilliant ideas that magically happen. We have to tinker, experiment, refine, revise. Sometimes we’re disappointed by a seemingly great idea that doesn’t work; sometimes we’re surprised by a dumb idea that works great. Sometimes we make accidental discoveries just by tinkering.

    The less the intelligent designer is described as doing anything like that, the less grip we have on the very concept of “design” as a helpful explanation of biological phenomena.

    (This is one way of seeing Hume’s point when he attacks the argument from design in Enquiry Concerning Human Understanding and Dialogues On Natural Religion.)

  3. CharlieM: But it doesn’t support unguided evolution.

    Of course it does. Unguided evolution is the only process known to entail such a pattern.

    The cells in the developing human form a nested hierarchy: zygote, pluripotent pre-implantation stem cells, somatic stem cells, differentiated tissue cells. But we know that human development is not unguided, it follows a path towards a specific end.

    It is not a guided process just because it follows a path toward a specific end. The mere fact that you think so means nothing. Anyway, while it might be called a “guided process” in some senses of the word, the problem is that in this context you’re conflating “guided” with intelligently guided. I can see it as “guided” in a sense, but by no means is it intelligently guided.

    This demonstrates that guided processes can form nested hierarchies.

    Actually, it underscores the point that nested hierarchies should be expected from reproduction with differences.

    Glen Davidson

  4. Flint: So either these are not processes, or “process” is so all-encompassing as to be meaningless, because there’s nothing that’s NOT a process.

    One minor quibble — one can think that everything is a process but that the term ‘process’ is not meaningless, because we can surely conceive of things that aren’t processes. The whole point of process metaphysics is to say that the concept of a process is better than other concepts (e.g. things, substances) for understanding the nature of reality.

    OK, quibble over.

  5. CharlieM
    If you knew nothing about feathers, bacterial flagella or sharkskin and were shown these things for the first time would you not consider these very intelligently designed from an engineering perspective?

    No, why should I?

    If I knew nothing about tourmalines, diamonds, and rhodochrosite crystals, then saw them, I probably would think that they were designed.

    To be sure, the bacterial flagellum wouldn’t be so obviously organic, but feathers and sharkskin would seem like organic structures, both because of their composition and because they are in fact rather exquisite (something GAs, like evolution, can effect). They are not like cogs, nor even like the impressive circuitry of a computer chip.

    Kids, on the other hand, tend to see vacuum cleaners as living. After all, motion, notably internal motion, used to be considered the hallmark of life. That you want complexity to be the hallmark now hardly makes it so.

    Glen Davidson

  6. Kantian Naturalist: That seems misleading to me.

    Well, it’s not my intention to mislead anyone. It’s how I see it.

    Natural and sexual selection are iterated biased filtering processes, where the bias is constituted by whatever traits are extinction-avoiding in that generation relative to that niche, whether by virtue of being better adaptations or by virtue of increasing progeny.

    You’re making it too complicated. There is only differential reproduction. And biased reproduction results in changes in allele frequency.
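
    A minimal sketch of that claim (an illustration added here, not part of the comment; the population size, fitness bias, and random seed are arbitrary choices): a small reproductive bias is, by itself, enough to shift allele frequencies over the generations.

    # Illustrative toy Wright-Fisher-style simulation: the only ingredient is
    # biased (differential) reproduction, and the allele frequency changes.
    import random

    def allele_frequency_after(generations=100, pop_size=1000, start_freq=0.5, seed=42):
        """Return the frequency of allele 'A' after the given number of generations."""
        random.seed(seed)
        fitness = {"A": 1.05, "a": 1.0}   # a 5% reproductive bias in favor of 'A'
        count_a = int(pop_size * start_freq)
        pop = ["A"] * count_a + ["a"] * (pop_size - count_a)
        for _ in range(generations):
            # Each offspring copies a parent chosen with probability proportional
            # to the parent's fitness; reproduction is biased, nothing more.
            weights = [fitness[allele] for allele in pop]
            pop = random.choices(pop, weights=weights, k=pop_size)
        return pop.count("A") / pop_size

    print(allele_frequency_after())   # ends well above the starting 0.5

    With equal fitness values the frequency merely drifts; with the bias, the shift is systematic.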

    (Personally I’ve never been entirely sure what “sexual selection” is or what evidence there is for it. It seems to be invoked whenever there’s a trait that seemingly can’t be explained by natural selection. That feels ad hoc to me.)

    Seems hard to reject the idea when you look at Birds of Paradise.

    Design is something quite different. The process of designing something — say, a paper airplane — involves a back-and-forth between theoretical principles and material constraints. There’s a lot of experimentation as one learns the principles of aerodynamics, the weight of paper, what folds will cause lift or stability, etc. You can design paper airplanes for distance, for altitude, for rolls, for aesthetics. The most passionate defenders of ID seem to imagine that the Designer conceives of some body-plan and then magically creates the genes necessary for producing it. But that’s not at all how we human beings design stuff. We don’t have brilliant ideas that magically happen. We have to tinker, experiment, refine, revise. Sometimes we’re disappointed by a seemingly great idea that doesn’t work; sometimes we’re surprised by a dumb idea that works great. Sometimes we make accidental discoveries just by tinkering.

    Indeed. Trial and error is a fair description of design development in aircraft, cars, you name it.

    The less the intelligent designer is described as doing anything like that, the less grip we have on the very concept of “design” as a helpful explanation of biological phenomena.

    I was careful not to use the word “intelligent”.

  7. Flint: …because there’s nothing that’s NOT a process.

    Therefore everything is intelligently guided. Q.E.D.

  8. Flint: As I see it, the problem isn’t lack of knowledge or intelligence or education. The problem isn’t even that if evolution happens as currently proposed, the religious doctrine is incorrect. The problem is that science is inherently subversive because it implies that religious belief is superfluous. At best, it adds nothing of value and at worst it blinds its victims to even simple obvious things. And it does so with enough force that at the margin, some are willing to kill and die rather than grasp reality.

    I remain convinced that religious conviction is much like neck-stretching or foot binding – if done carefully and at just the right age, the effects become irreversible.

    I disagree, actually.

    First, I don’t think that there’s any conflict or contradiction between

    (1) understanding why evolutionary theory is a good explanation for biological phenomena and
    (2) committing oneself to a life guided by spiritual principles that are taken on ‘faith’.

    Second, the idea that science is “subversive” of religious convictions relies on a background picture or assumption of the Enlightenment vs the anti- or counter-Enlightenment. And that’s not false, but we should be really careful here.

    For one thing, many religious denominations have embraced the values of the Enlightenment to some degree or other. (Even Catholicism is not entirely immune! Without the Enlightenment, there would be no liberation theology!) Enlightenment values and ideals have been accepted, to varying degrees, by many versions of Christianity, Islam, and Judaism. The rise of religious fundamentalism — whether Christian, Hindu, Muslim, or Jewish — is a response to and a rejection of the Enlightenment (worries that skepticism leads to nihilism, that questioning naturalized social hierarchies deprives the privileged of their status and power, and so on).

    At the same time, whether or not science is aligned with the ideals of the Enlightenment is itself contingent. Are students taught how to think scientifically? Are they being taught the evidence for the claims that they’re supposed to learn? Do they actually work through the experiments themselves? Or are scientific theories presented to them as mere doctrines to be memorized and regurgitated on standardized tests?

    We see this all the time — people who claim to be skeptical of evolution, or climate change, or vaccination, or neuroscience, or (even!) a spherical Earth, just because they were never actually taught the evidence correctly to begin with, or how to evaluate it. And so when their acceptance of dogma is questioned, they aren’t able to articulate the reasons for their beliefs, and they become either ‘skeptical’ or uncritically embrace some pseudoscience.

  9. Flint,

    I guess it depends on how you define “process”. Mung defines a process as something intelligently guided. But most people would regard, for example, weather as a process – evaporation, precipitation, fronts, high and low pressure zones, trade winds, etc. Other processes might include mountain building and plate tectonics, orbits and orbital precession, and many others.

    So either these are not processes, or “process” is so all-encompassing as to be meaningless, because there’s nothing that’s NOT a process.

    Per Wiki

    A process is a set of activities that interact to achieve a result.

    Mung appears right. Damn 🙂

  10. Mung:

    If it were unguided it wouldn’t be describable as a process in the first place.

    phoodoo:

    Hear, hear to that.

    Flint:

    So either these are not processes, or “process” is so all-encompassing as to be meaningless, because there’s nothing that’s NOT a process.

    Mung:

    Therefore everything is intelligently guided. Q.E.D.

    Glen:

    “Subduction is a geological process that takes place at convergent boundaries of tectonic plates where one plate moves under another and is forced or sinks due to gravity into the mantle.”

    It’s Intelligent Subduction. Kind of like Intelligent Falling.

    What’s especially amusing is that Mung’s panteleological idea actually undermines ID as a scientific project, but he, phoodoo and colewd don’t realize that.

  11. colewd: Mung appears right. Damn

    LoL.

    Try this definition:

    A process is a set of activities that interact so as to not achieve any result.

  12. Kantian Naturalist: I disagree, actually.

    Um. Much to disentangle here…

    First, I don’t think that there’s any conflict or contradiction between

    (1) understanding why evolutionary theory is a good explanation for biological phenomena and
    (2) committing oneself to a life guided by spiritual principles that are taken on ‘faith’.

    I’m not sure why, or even whether, you are changing the subject here. I was thinking of science as a method of investigation, discovery, explanation, and understanding of the reality we live in. I agree this is not in any conflict with moral or spiritual principles, which lie almost entirely outside the boundary of science (except insofar as it can be demonstrated that those principles affect the world we live in). For me, science rests on observable evidence (raw data), while spiritual and moral values rest on an underlying notion of ideal social structure.

    Second, the idea that science is “subversive” of religious convictions relies on a background picture or assumption of the Enlightenment vs the anti- or counter-Enlightenment. And that’s not false, but we should be really careful here.

    For one thing, many religious denominations have embraced the values of the Enlightenment to some degree or other. (Even Catholicism is not entirely immune! Without the Enlightenment, there would be no liberation theology!) Enlightenment values and ideals have been accepted, to varying degrees, by many versions of Christianity, Islam, and Judaism. The rise of religious fundamentalism — whether Christian, Hindu, Muslim, or Jewish — is a response to and a rejection of the Enlightenment (worries that skepticism leads to nihilism, that questioning naturalized social hierarchies deprives the privileged of their status and power, and so on).

    I’m not familiar enough with this terminology to comment intelligently. Yes, the Vatican has accepted the heliocentric model.

    At the same time, whether or not science is aligned with the ideals of the Enlightenment is itself contingent. Are students taught how to think scientifically? Are they being taught the evidence for the claims that they’re supposed to learn? Do they actually work through the experiments themselves? Or are scientific theories presented to them as mere doctrines to be memorized and regurgitated on standardized tests?

    I’m not sure I would equate pedagogical problems with Enlightenment ideals. I would say that in the US, science is poorly presented — as you say, as a grab-bag of factoids to be memorized temporarily. But there are some very good teachers, in the US and in other industrialized nations. I probably had one or two. I (once again) suspect that scientific thinking is an acquired trait, very difficult to acquire after a childhood of learning received wisdom. So it’s distressingly common for people to regard science as just one authority battling among several others that were indoctrinated earlier in childhood.

    We see this all the time — people who claim to be skeptical of evolution, or climate change, or vaccination, or neuroscience, or (even!) a spherical Earth, just because they were never actually taught the evidence correctly to begin with, or how to evaluate it. And so when their acceptance of dogma is questioned, they aren’t able to articulate the reasons for their beliefs, and they become either ‘skeptical’ or uncritically embrace some pseudoscience.

    To me, the problem seems to run deeper – that these people simply have no context within which to grasp what evidence IS. All too often, I see the (implicit) notion of “evidence” as “whatever ratifies my preferences” — and if it does not, then it can’t be evidence! For too many people, the tentative and provisional nature of science, as an ongoing method of building better understanding, is the house built on sand. Conversely the rock of ages is composed of Truths, which may be entirely incorrect but which never change and can be relied on to be Truth (even if false) for one’s entire lifetime.

  13. Mung:

    We have actual lives to live.

    If you have time to be the most prolific commenter at TSZ, then you have time to do a frikkin’ Google search.

  14. Flint: I’m not familiar enough with this terminology to comment intelligently.

    Closet IDist. I knew it.

  15. Flint: I’m not sure why, or even whether, you are changing the subject here.

    Your claim was that science “implies that religious belief is superfluous.”

    That’s utter nonsense.

  16. Mung: Therefore everything is intelligently guided. Q.E.D.

    Philosophically, I have no objection to this or indeed any position that is inherently unknowable. It MIGHT be the case that everything from the quantum to the cosmic level is intelligently guided in the interests of purposes people are necessarily now and forever incapable of even beginning to grasp.

    But in that case, why the objection to “unguided” evolution but no objection to equally “unguided” weather? Why not accept that science is investigating at least some of the indirect mechanisms by which evolution and weather are guided?

  17. Mung: Your claim was that science “implies that religious belief is superfluous.”

    That’s utter nonsense.

    You’re right, the way I expressed it makes no sense. I was trying to say that religion contributes nothing whatsoever to our understanding of how the material universe works. The error the religious types make on this site is to project religious notions into areas where religion has no useful voice – and so they find themselves rejecting enormous bodies of consistent observation in favor of totally uninformed religious doctrines.

    Religion is definitely NOT superfluous in shaping our values.

  18. Flint: To me, the problem seems to run deeper – that these people simply have no context within which to grasp what evidence IS. All too often, I see the (implicit) notion of “evidence” as “whatever ratifies my preferences” — and if it does not, then it can’t be evidence! For too many people, the tentative and provisional nature of science, as an ongoing method of building better understanding, is the house built on sand. Conversely the rock of ages is composed of Truths, which may be entirely incorrect but which never change and can be relied on to be Truth (even if false) for one’s entire lifetime.

    I think that’s a big part of the difficulty, yes.

    One of the difficulties I seem to have with a few evolution ‘skeptics’ here is that they don’t seem to understand the difference between is predicted by and is consistent with. (Formally, the difference between implication and conjunction.)

    So they want to say that nested hierarchies are consistent with guided evolution, and that the reason why we think nested hierarchies are evidence of unguided evolution is because of our “worldview”.

    What they don’t seem to understand is that evolution by natural selection actually predicts that genes will be most parsimoniously arranged into nested hierarchies, which means that the data actually do support the theory.

    The fact that the data are also consistent with their non-explanation doesn’t weaken or undermine that justification.
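
    As an illustrative aside (a toy sketch added here, not part of the original comment; the tree depth, per-branch mutation counts, and random seed are arbitrary): descent with modification along a branching tree is enough, by itself, to produce a strictly nested pattern among the descendants.

    # Illustrative sketch: simulate reproduction with differences along a
    # branching tree, then check that the groups of tips sharing a mutation
    # are strictly nested (each pair is nested or disjoint, never overlapping).
    import itertools
    import random

    random.seed(1)
    mutation_id = itertools.count()

    def descend(inherited, depth):
        """Each lineage copies its parent's mutations and adds a few new ones of its own."""
        own = inherited | {next(mutation_id) for _ in range(random.randint(1, 3))}
        if depth == 0:
            return [own]                                            # a surviving tip
        return descend(own, depth - 1) + descend(own, depth - 1)    # the lineage splits in two

    tips = descend(set(), depth=5)                                  # 32 tips from one ancestor

    # For each mutation, the set of tips that inherited it.
    groups = {}
    for i, tip in enumerate(tips):
        for m in tip:
            groups.setdefault(m, set()).add(i)

    def nested_or_disjoint(a, b):
        return a <= b or b <= a or not (a & b)

    print(all(nested_or_disjoint(a, b)
              for a, b in itertools.combinations(groups.values(), 2)))   # prints True

    Lateral transfer or independent reuse of the same change would break the strict nesting, which is part of why the real data fit a hierarchy most parsimoniously rather than perfectly.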

  19. KN,

    One of the difficulties I seem to have with a few evolution ‘skeptics’ here is that they don’t seem to understand the difference between is predicted by and is consistent with. (Formally, the difference between implication and conjunction.)

    That’s not quite right, KN. Conjunction alone does not capture the notion of “A is consistent with B”, since to assert the conjunction would be to assert that A and B are both true. “Is consistent with” is more subtle than that, since A or B (or both) might be false even when “A is consistent with B” is true.
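
    To make the distinction concrete, here is a small propositional-logic sketch (an illustration with made-up propositions): “A predicts B” is entailment, holding in every valuation, while “A is consistent with B” only requires that some valuation makes both true, which can be so even if A and B are in fact both false.

    # Illustrative sketch: prediction as entailment vs consistency as joint satisfiability.
    from itertools import product

    ATOMS = ("h", "p", "q")          # h: "the pattern is observed"; p, q: auxiliary claims
    VALUATIONS = [dict(zip(ATOMS, values))
                  for values in product([True, False], repeat=len(ATOMS))]

    def predicts(a, b):
        """A predicts B: B holds in every valuation in which A holds (entailment)."""
        return all(b(v) for v in VALUATIONS if a(v))

    def consistent(a, b):
        """A is consistent with B: some valuation makes both hold (joint satisfiability)."""
        return any(a(v) and b(v) for v in VALUATIONS)

    theory = lambda v: v["h"] and v["p"]    # a theory that entails the pattern
    story = lambda v: v["q"]                # a claim that neither entails nor excludes it
    data = lambda v: v["h"]                 # the observed pattern

    print(predicts(theory, data))      # True:  the pattern is predicted
    print(predicts(story, data))       # False: the story does not predict it
    print(consistent(story, data))     # True:  yet the story is consistent with it

    Note that consistent() only quantifies over possible valuations; asserting the conjunction itself would be the further claim that both happen to be true, which is the point of the correction above.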

    So they want to say that nested hierarchies are consistent with guided evolution, and that the reason why we think nested hierarchies are evidence of unguided evolution is because of our “worldview”.

    Charlie got confused about prediction vs consistency earlier in the thread, while arguing against Flint that the nested hierarchy doesn’t support unguided evolution:

    Flint:

    Oddly enough, it’s the nested hierarchy that supports evolution. That is, evolution IMPLIES (entails, requires, demands) a nested hierarchy for all but lateral gene transfer. You have it backwards. The nested hierarchy is one of the strongest evidences for evolution.

    CharlieM:

    But it doesn’t support unguided evolution.

    The cells in the developing human form a nested hierarchy: zygote, pluripotent pre-implantation stem cells, somatic stem cells, differentiated tissue cells. But we know that human development is not unguided, it follows a path towards a specific end.

    This demonstrates that guided processes can form nested hierarchies.

    Charlie is multiply confused:

    1. The objective nested hierarchy does support unguided evolution, and it does so trillions of times better than it supports guided evolution, creationism, or other forms of ID.

    2. Charlie’s notion of guided vs unguided is off-kilter. To him, “follows a path toward a specific end” is equivalent to “guided”. But that’s as goofy as Mung’s idea that all processes are teleological. The boulder rolling down the hillside follows a path to a specific end, but it isn’t guided in the relevant sense.

    3. The fact that the objective nested hierarchy is consistent with guided evolution is true but only of minor importance, since any pattern is consistent with guided evolution.

    There’s a massive Dunning-Kruger problem with IDers. Too many of them think they understand the science when they don’t — and they’re too incompetent to recognize their mistakes, even when pointed out by more knowledgeable folks.

  20. keiths: any pattern is consistent with guided evolution.

    I’m always amused at the creationist response to the challenge to point to something, anything at all, that is NOT divinely guided. They always change the subject, probably recognizing that if everything is guided, the very notion of guidance becomes meaningless, because there is no Unguided to contrast it with, meaning that it cannot be recognized in the first place.

  21. colewd:

    Keiths is making a claim here that unguided evolution better supports the data than design, yet he cannot support the claim that unguided evolution can generate the nested hierarchy.

    Of course I can, and I did so in that very OP.

    What are you smoking, Bill?

  22. Flint,

    I’m always amused at the creationist response to the challenge to point to something, anything at all, that is NOT divinely guided. They always change the subject, probably recognizing that if everything is guided, the very notion of guidance becomes meaningless, because there is no Unguided to contrast it with, meaning that it cannot be recognized in the first place.

    Or if you’re Mung, colewd, or phoodoo, you blunder ahead with the idea that everything is teleological, not even realizing that this kills ID as a scientific project.

    Foot shots like that are entertaining.

  23. Flint: I’m always amused at the creationist response to the challenge to point to something, anything at all, that is NOT divinely guided. They always change the subject, probably recognizing that if everything is guided, the very notion of guidance becomes meaningless, because there is no Unguided to contrast it with, meaning that it cannot be recognized in the first place.

    Utterly hilarious.

    First, perhaps you should strive harder to understand the theist position so as to avoid caricatures of it. [Assuming that wasn’t your intent all along.]

    Second, you are in essence agreeing with the argument I and many other theists have made here, but seem to be oblivious to that fact. You have no scientific way to determine that some process is unguided. You assume it.

  24. Kantian Naturalist: What they don’t seem to understand is that evolution by natural selection actually predicts that genes will be most parsimoniously arranged into nested hierarchies, which means that the data actually do support the theory.

    There is no “theory of unguided evolution.”

  25. Mung: Second, you are in essence agreeing with the argument I and many other theists have made here, but seem to be oblivious to that fact. You have no scientific way to determine that some process is unguided. You assume it.

    If you want to play the game of science, then if you want to posit the existence of Something That Guides, you need to give a reason for why the model containing that posit should be preferred over models that don’t contain it. What does that posit explain that can’t be explained without it?

    Now, if you don’t want to do science, and just want to do metaphysics instead, that’s fine — that’s your business. But if you don’t have a better empirical explanation than evolutionary theory, then what’s the point of even pretending to care about science?

    Finally, can we just stop with this bullshit that theists can’t accept that evolutionary theory is a good explanation of life? That’s just obviously false. There are lots of theistic evolutionists. It’s a perfectly respectable position. Sure, young-earth creationists don’t like it because it would require them to interpret certain Biblical passages metaphorically rather than literally, but that’s none of my business.

    I don’t know if there are any theistic evolutionists at TSZ, but if there aren’t, I’d suspect it’s because they have better things to do with their time than argue with strangers on the Internet.

  26. Mung: There is no “theory of unguided evolution.”

    Well, this contrast between “guided evolution” and “unguided evolution” isn’t one of mine. I wouldn’t insist on drawing that distinction, or at least not in that way.

    As I’ve said numerous times, the only sense in which evolution is “unguided” is that there is no empirically detectable mechanism that first detects what phenotypes will be adaptive in an environment, and then causes the mutations that bring those phenotypes about.

    If you agree that there is no such empirically detectable mechanism that does that, then we don’t disagree on much else.

  27. Mung: There is no “theory of unguided evolution.”

    There’s also no theory of unfarted evolution. And while we’re at it, no evidence that Jesus’s resurrection was unfarted either.

  28. Kantian Naturalist: If you want to play the game of science, then if you want to posit the existence of Something That Guides, you need to give a reason for why the model containing that posit should be preferred over models that don’t contain it. What does that posit explain that can’t be explained without it?

    You appear to be missing the point. It’s not that we need to posit some He Who Guides in order to explain something. Given a model that we construct which leaves out He Who Guides, it in no way follows that the process being modeled is unguided.

    Yet you blithely proceeded from “the reason why we think nested hierarchies are evidence of unguided evolution is because of our ‘worldview’.”

    Well, yeah. You cannot demonstrate it logically or empirically.

    If you want to assert that nested hierarchies are evidence of unguided evolution you need to have something on your side to warrant the claim. It’s not logic. It’s not science. So religion. Obviously. 🙂

    But what would this site do without “unguided evolution”?

  29. Mung: Utterly hilarious.

    First, perhaps you should strive harder to understand the theist position so as to avoid caricatures of it. [Assuming that wasn’t your intent all along.]

    I think this applies equally to both of us. You seem as unable to understand the nontheistic view.

    Second, you are in essence agreeing with the argument I and many other theists have made here, but seem to be oblivious to that fact. You have no scientific way to determine that some process is unguided. You assume it.

    And here is a wonderful example. Science is indifferent to any notion of guidance or lack of guidance. Science is trying to understand how things work, guided or not. If (as is apparently the theistic view here at TSZ) any presumed guidance is inherently, in principle undetectable, then it lies outside the purview of science entirely. It is not assumed to be true or false, it’s not addressed in any way.

  30. Kantian Naturalist: As I’ve said numerous times, the only sense in which evolution is “unguided” is that there is no empirically detectable mechanism that first detects what phenotypes will be adaptive in an environment, and then causes the mutations that bring those phenotypes about.

    Is this the first time you’re saying it here, though? 😉

    I would love to see people here stop claiming that evolution is unguided and that science somehow supports that conclusion. We could move on to discussing more interesting topics.

  31. Flint: You seem as unable to understand the nontheistic view.

    The nontheistic view here at TSZ is that evolution is unguided. There has been post after post to that effect here. It’s hard to mistake it for anything else.

  32. Flint: Science is indifferent to any notion of guidance or lack of guidance. Science is trying to understand how things work, guided or not.

    Perhaps that’s because there is no scientific way to distinguish between those two options. If only the atheists here would take that to heart.

  33. Mung: You appear to be missing the point. It’s not that we need to posit some He Who Guides in order to explain something. Given a model that we construct which leaves out He Who Guides, it in no way follows that the process being modeled is unguided.

    Yet you blithely proceeded from “the reason why we think nested hierarchies are evidence of unguided evolution is because of our ‘worldview’.”

    Well, yeah. You cannot demonstrate it logically or empirically.

    If you want to assert that nested hierarchies are evidence of unguided evolution you need to have something on your side to warrant the claim. It’s not logic. It’s not science. So religion. Obviously.

    But what would this site do without “unguided evolution”?

    I still think this misses the point. Whether or not there is any invisible hand guiding evolution, the mechanisms of evolution as currently understood necessarily result in a nested hierarchy. This concern with guidance strikes me as counting angels on pinheads. And as KN points out, there are plenty of theistic evolutionists who regard the identified mechanisms of evolution (and the entailed patterns resulting from them) as insights into the means by which the guiding is done. In this view, evolution according to current theory is HOW the process is guided.

    But because such guidance cannot be ruled either in or out, it becomes irrelevant. I read this site as concerned with evolution’s mechanisms, and NOT whether there is some guiding hand or greater unguessable purpose behind them.

  34. Mung: Perhaps that’s because there is no scientific way to distinguish between those two options. If only the atheists here would take that to heart.

    I agree with you. Where is the sense in dueling unsupportable assertions?

  35. Mung: Is this the first time you’re saying it here, though?

    No, I’ve been saying it here on many occasions. But if this is the first time you’re hearing me say it, that still counts as progress.

    (By the way, that’s a characterization that I got from Sober by way of Plantinga.)

    I would love to see people here stop claiming that evolution is unguided and that science somehow supports that conclusion. We could move on to discussing more interesting topics.

    I would love it if people would stop saying that we can empirically detect either guidance or its absence.

  36. Flint: If (as is apparently the theistic view here at TSZ) any presumed guidance is inherently, in principle undetectable, then it lies outside the purview of science entirely.

    If (as is apparently the nontheistic view here at TSZ) any presumed lack of guidance is inherently, in principle undetectable, then it lies outside the purview of science entirely.

    It has never ceased to amaze me how ID critics here at TSZ can argue there’s no way to detect “design” but then also argue that they can detect “unguided.” Nonsense of a high order.

  37. Kantian Naturalist: (By the way, that’s a characterization that I got from Sober by way of Plantinga.)

    I actually have Sober’s Philosophy of Biology in front of me right now and was thinking of writing up an OP.

  38. Flint: Whether or not there is any invisible hand guiding evolution, the mechanisms of evolution as currently understood necessarily result in a nested hierarchy. This concern with guidance strikes me as counting angels on pinheads.

    Yes, it’s right at the semi-permeable membrane between science and metaphysics.

    And as KN points out, there are plenty of theistic evolutionists who regard the identified mechanisms of evolution (and the entailed patterns resulting from them) as insights into the means by which the guiding is done. In this view, evolution according to current theory is HOW the process is guided.

    Yes. Put otherwise, the disagreement between theistic evolutionists and atheistic evolutionists is not a scientific disagreement.

    But because such guidance cannot be ruled either in or out, it becomes irrelevant. I read this site as concerned with evolution’s mechanisms, and NOT whether there is some guiding hand or greater unguessable purpose behind them.

    I agree with all that, though I can see Mung’s point that some of the atheists here are rather keen on showing that evolutionary theory is on their side. It isn’t — not because evolutionary theory supports theism, but because evolutionary theory has nothing to say in support of either theism or atheism. It does rule out certain models of creationism, but that’s all.

  39. Flint: I agree with you. Where is the sense in dueling unsupportable assertions?

    Kantian Naturalist: I would love it if people would stop saying that we can empirically detect either guidance or its absence.

    Damn! I guess if you put enough people on ignore you can actually find someone to agree with!

    Now say something comes along like ID. And it claims to be able to detect design or intelligent guidance. Isn’t arguing that evolution is unguided playing right into their hands? Isn’t it an admission that it’s a question that can be settled by science?

    Seems that way to me.

  40. Mung:
    Damn! I guess if you put enough people on ignore you can actually find someone to agree with!

    Story of my life.

    Now say something comes along like ID. And it claims to be able to detect design or intelligent guidance. Isn’t arguing that evolution is unguided playing right into their hands? Isn’t it an admission that it’s a question that can be settled by science?

    Seems that way to me.

    Maybe. But my objection to ID has always been (and continues to be) that it has no way of operationalizing its core concepts — “design” and “intelligence” — that would allow anyone to verify its central claims.

    Put it this way: if ID were really interested in building testable models of its claims, wouldn’t it be taking a strong interest in the psychology of creativity? Of what’s actually going on at the cognitive level when someone designs, tinkers, creates, invents, rearranges? Wouldn’t we need to do that in order to construct a model of what intelligent design is?
