The Law of Conservation of Information is defunct

About a year ago, Joe Felsenstein critiqued a seminar presentation by William Dembski, “Conservation of Information in Evolutionary Search.” He subsequently discussed Dembski’s primary source with me, and devised a brilliant response, unlike any that I had considered. This led to an article at The Panda’s Thumb, due mostly to Felsenstein, though I contributed. Nine days after it appeared, Dembski was asked in a radio interview whether anyone was paying attention to his technical work. Surely a recipient of

[Image missing.]

qualifies as a someone. But Dembski changed the topic. And when the question came around again, he again changed the topic. Mind you, this isn’t how I know that Felsenstein blasted conservation of “information,” which is not information, in evolutionary “search,” which does not search. It’s how I know that Dembski knows.

Or, I should say, it’s how I first knew. The Discovery Institute has since employed Dembski’s junior coauthor, Winston Ewert, to quietly replace various claims, including the most sensational of them all (Dembski and Marks, “Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information,” 2010; preprint 2008):

Though not denying Darwinian evolution or even limiting its role in the history of life, the Law of Conservation of Information shows that Darwinian evolution is inherently teleological. Moreover, it shows that this teleology can be measured in precise information-theoretic terms.

Felsenstein realized that we could apply their measure to a simple model of evolution by natural selection, devoid of purpose, and obtain a large quantity. According to the model, evolution begins with a random genotype, and ends with a genotype fitter than all of its neighbors. The neighbors of a genotype are those that can arise from it by a single point mutation. In each step of the evolutionary process, a genotype is replaced by the fittest of its neighboring genotypes. The overall result of evolution is a sequence of genotypes that is unconstrained in how it begins, and highly constrained in how it ends. Each genotype in the sequence is not only fitter than all of the genotypes that precede it, but also fitter than all of their neighbors. That is, evolution successively constrains the genotype to smaller and smaller subsets of the space of genotypes. The final genotype is at the very least fitter than all of its neighbors. Equivalently, the minimum degree of constraint is the neighborhood size. Dembski and Marks mistake this for the degree of teleology (purpose) in evolution, and refer to it as active information. The gist of “conservation of information” is that teleology comes only from teleology. As Dembski said in his seminar presentation:

If you will, the teleology of evolutionary search is to produce teleology.
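To make the model concrete, here is a minimal sketch in Python (my own illustration, with a toy genome length and an arbitrary randomly generated landscape, not Felsenstein’s actual code):

    import random

    BASES = "ACGT"
    L = 5  # toy genome length; the real model uses much longer genomes

    random.seed(1)
    fitness = {}  # an arbitrary, hence arbitrarily rough, landscape

    def f(genotype):
        """Assign an arbitrary fitness to each genotype on first use."""
        if genotype not in fitness:
            fitness[genotype] = random.random()
        return fitness[genotype]

    def neighbors(genotype):
        """The 3L genotypes arising by a single point mutation."""
        return [genotype[:i] + base + genotype[i + 1:]
                for i in range(len(genotype))
                for base in BASES if base != genotype[i]]

    def evolve(genotype):
        """Replace the genotype by the fittest of its neighbors, step
        by step, until it is fitter than all of its neighbors."""
        while True:
            best = max(neighbors(genotype), key=f)
            if f(best) <= f(genotype):
                return genotype  # fitter than all of its neighbors
            genotype = best

    start = "".join(random.choice(BASES) for _ in range(L))
    print(start, "->", evolve(start))

However the process begins, it halts only at a genotype fitter than all of its neighbors, i.e., at a local maximum of the fitness landscape.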

Considering that the neighborhood size indicates only how many, not at all which, genotypes are eliminated in a single step of evolution, there can be no argument that constraint implies purpose.1 Ewert does not hazard an overt reply, but in fact responds by downgrading “active information” from a measure of teleology to a measure of bias. The new significance of “conservation of information” is this: if the constraint, er, bias of a natural process is not due to design, then nature itself must be constrained, er, biased.2 We have it from Ewert, writing on behalf of Dembski and Marks, that:

Of course, Darwinian evolution is not a teleological process and does not search for a goal [e.g., birds…] Whatever search or process might be in play, … it produces birds much more often than chance would otherwise lead us to predict. It is this bias towards producing a bird that we call active information. […] Having postulated Darwinian evolution, … the fact that birds exist has to be explained in terms of the initial configuration of the universe. The universe must have begun with a large amount of active information with respect to the target of birds.

Although “information” stumbles on, searching for brains to eat, the vital principle has departed from the Law of Conservation of Information (LCI). No more does LCI show what it shows. The credit for dispatching teleology goes entirely to Joe Felsenstein. You should have a look at his latest, “Why ID Advocates Downplay Our Disagreement With Them,” before watching me deliver a round to the frontal lobe of the Conservation of Information Theorem.

The credit for keeping things surreal goes entirely to the Discovery Institute. Replacing Dembski, a full-time senior fellow, with Ewert in an exchange with a renowned evolutionary geneticist is beyond bizarre. But it is perhaps no accident that the authorship of the response serves the same purpose as its rhetorical tactics, namely, to conceal the presence of huge concessions. What Ewert does, avoiding all signs of what he’s doing, is to salvage Dembski’s treatment of LCI in Being as Communion: A Metaphysics of Information (2014). Rather than identify a source, he speaks from authority. Rather than replace terms that convey precisely the misconceptions in the book, he explains matter-of-factly that they don’t mean what they seem to say. And rather than admit that Felsenstein and I set him and his colleagues straight on the meanings, Ewert proclaims that “These Critics of Intelligent Design Agree with Us More Than They Seem to Realize.” The way he trumps up agreement is to treat a single section of our article, which merely reiterates an old point regarding the smoothness of fitness landscapes, as though it were the whole. We actually focus on an arbitrary, and hence arbitrarily rough, landscape.

LCI, putatively a law of nature, putatively has a mathematical foundation. According to Being as Communion (p. 148):

A precise theoretical justification for the claim that natural selection is inherently teleological comes from certain recent mathematical results known as Conservation of Information (CoI) theorems.

Now the claim is that natural selection is inherently biased, and that something must account for the bias — either design or the initial “configuration” of the Universe (wink wink, nudge nudge) — given that bias is conserved. In short, CoI still applies, with the understanding that I is for bIas. Dembski places his work in the context of earlier analysis of search, and mentions a sometime theorist you’ve heard of before (p. 151):

Computer scientist Thomas English, in a 1996 paper, also used the term “Conservation of Information,” though synonymously with the then recently proved results by Wolpert and Macready about No Free Lunch (NFL). In English’s version of NFL, “the information an optimizer gains about unobserved values is ultimately due to its prior information of value distributions.”

I actually proved an NFL theorem more general than that of Wolpert and Macready, and used the term “conservation of information” to characterize an auxiliary theorem. Although I got the math right, what I wrote about it in plain language was embarrassingly wrong. I happened to emend my online copy of the paper a month before Dembski’s book appeared, adding a preface titled “Sampling Bias Is Not Information.” So, while it definitely was Felsenstein who left Dembski et al. no choice but to abandon teleology, it may be that I had some influence on their choice of a new position. In any case, it falls to me to explain why they are embarrassingly wrong in what they claim about math that they have gotten right.

The right approach, for a general readership, is to address only what is most obviously wrong, and to put as much as possible into pictures. We’ll be looking at broken sticks. We’ll even watch them breaking randomly to pieces. This is how Dembski et al. see the biases of an evolutionary process being determined, in the absence of design. CoI tells us something about the random length of a particular segment, selected before the stick breaks. But Felsenstein and I selected an outcome after modeling the evolutionary process. We targeted an outcome for which the bias was large. The bias was not large because we targeted the outcome. Even if we pretend that a broken stick determined the bias of the evolutionary process, CoI does not apply. The theorem that does apply has no name. It is the solution to Exercises 666-667 in a highly respected text of the 19th Century, Choice and Chance. Given that it bears the Number of the Beast, and comes from the Reverend William Allen Whitworth, I’m tempted to call it the Revelation Theorem. But I’ll avoid giving offense, and refer instead to the Broken Stick Theorem.

Breaking sticks

Dembski et al. believe that CoI applies to all physical events that scientists target for investigation. The gist of their error is easy to understand. A scientist is free to investigate any event whatsoever after observing what actually occurs in nature. But the CoI theorem assumes that a particular event is targeted prior to the existence of a process. This is appropriate when an engineer selects a process in order to generate a prespecified event, i.e., to solve a given problem. It is no coincidence that the peer-reviewed publications of Dembski et al. are all in the engineering literature. The assumption of the theorem does not hold when a scientist works in the opposite direction, investigating an event that tends to occur in a natural process. Put simply, there is a difference between selecting a process to suit a given target and selecting a target to suit a given process. The question, then, is just how big the difference is. How badly wrong is it to say that the CoI theorem characterizes conservation of bias in nature? Fortunately, the error can be conveyed accurately with pictures. What we shall see is not conservation, but instead unbounded growth, of the maximum bias (“active information”).


ETA: Text between the horizontal rules is an improved introduction to the technical material, developed in discussion here at TSZ. It comes verbatim from a comment posted a week ago. I’ve made clear all along my intent to respond to feedback, and improve the post. However, I won’t remove any of the original content, because that’s too easily spun into a retraction.

Dembski et al. represent natural processes abstractly. In their math, they reduce the evolutionary process to nothing but the chances of its possible outcomes. The CoI theorem is indifferent to what the possible outcomes actually are, in physical reality, and to how the process actually works such that the outcomes have the chances of occurrence that they do. Here I assume that there are only 6 possible outcomes, arbitrarily named 1, 2, 3, 4, 5, 6. The possible outcomes could be anything, and their names say nothing about what they really are. Each of the possible outcomes has a chance of occurrence that is no less than 0 (sure not to occur) and no greater than 1 (sure to occur). The chances of the possible outcomes are required to add up to 1.

As far as the CoI theorem is concerned, an evolutionary process is nothing but a list of chances that sum to 1. I’ll refer to the list of chances as the description of the process. The first chance in the description is associated with the possible outcome named 1, the second chance in the description is associated with the possible outcome named 2, and so forth. The list

    \[.1, \quad .3, \quad .1, \quad .2, \quad .1, \quad .2\]

is a valid description because each of the numbers is a valid chance, lying between 0 and 1, and because the total of the chances is 1. We can picture the description of the evolutionary process as a stick of length 1, broken into 6 pieces.

[Need a new figure here.]

Naming the segments 1, 2, 3, 4, 5, 6, from left to right, the length of each segment indicates the chance of the possible outcome with the corresponding name. Consequently, the depiction of the evolutionary process as a broken stick is equivalent to the description of the process as a list of the chances of its possible outcomes.
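In code, a description is nothing more than such a list, and its validity is a two-line check (a trivial sketch, reusing the chances listed above):

    from fractions import Fraction as F

    description = [F(1, 10), F(3, 10), F(1, 10), F(2, 10), F(1, 10), F(2, 10)]
    assert all(0 <= chance <= 1 for chance in description)  # valid chances
    assert sum(description) == 1                            # chances sum to 1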

You perhaps wonder how I would depict the evolutionary process as a broken stick if a “possible” outcome had absolutely no chance of occurring. And the answer is that I could not. There is no segment of length 0. In the CoI theorem, however, an outcome with chance precisely equal to 0 is effectively no possibility at all. Thus it is not misleading to say that Dembski et al. reduce the evolutionary process to a broken stick.

There are infinitely many ways to break our metaphorical stick into a given number of segments. Averaging over all of them, the lengths of the segments are

    \[\frac{1}{6}, \quad \frac{1}{6}, \quad \frac{1}{6}, \quad \frac{1}{6}, \quad \frac{1}{6}, \quad \frac{1}{6}.\]

That is, in the average description of an evolutionary process, the possible outcomes are uniform in their chances of occurrence. Dembski et al. usually advocate taking uniform chances as the standard of comparison for all processes (though they allow for other standards in the CoI theorem). Dembski and Marks go much further in their metaphysics, claiming that there exist default chances of outcomes in physical reality, and that we can obtain knowledge of the default chances, and that deviation of chances from the defaults is itself a real and objectively measurable phenomenon. Although I want to limit myself to illustrating how they have gone wrong in application of CoI, I must remark that their speculation is empty, and comes nowhere close to providing a foundation for an alternative science. Otherwise, I would seem to allow that they might repair their arguments with something like the Broken Stick Theorem.
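The claim about the average is easy to confirm by simulation (my illustration; breaking a unit stick at 5 independent, uniformly distributed points is the standard way to make all segmentations equally likely):

    import random

    def break_stick(n):
        """Break a unit stick into n segments at n - 1 uniform points,
        making all segmentations equally likely."""
        cuts = sorted(random.random() for _ in range(n - 1))
        points = [0.0] + cuts + [1.0]
        return [b - a for a, b in zip(points, points[1:])]

    trials = 100_000
    totals = [0.0] * 6
    for _ in range(trials):
        for i, length in enumerate(break_stick(6)):
            totals[i] += length
    print([round(t / trials, 3) for t in totals])  # all near 1/6 ≈ 0.167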

Taking uniform chance as the standard to which all evolutionary processes are compared, we naturally arrive at an alternative representation. We begin by writing the standard description a bit differently, multiplying each of the chances by 1.

    \[1 \times \frac{1}{6}, \quad 1 \times \frac{1}{6}, \quad 1 \times \frac{1}{6}, \quad 1 \times \frac{1}{6}, \quad 1 \times \frac{1}{6}, \quad 1 \times \frac{1}{6}.\]

Now we can write any description whatsoever by adjusting the multipliers, while leaving the fractions 1/6 just as they are. The trick is to multiply each of the chances in the description by 1, but with 1 written as 6 \times 1/6. For instance, the description

    \[\frac{1}{24}, \quad \frac{1}{3}, \quad \frac{1}{12}, \quad \frac{1}{4}, \quad \frac{1}{6}, \quad \frac{1}{8}\]

is equivalent to

    \[\frac{6}{24} \times \frac{1}{6}, \quad \frac{6}{3} \times \frac{1}{6}, \quad \frac{6}{12} \times \frac{1}{6}, \quad \frac{6}{4} \times \frac{1}{6}, \quad \frac{6}{6} \times \frac{1}{6}, \quad \frac{6}{8} \times \frac{1}{6}.\]

The multipliers

    \[\frac{6}{24}, \quad \frac{6}{3}, \quad \frac{6}{12}, \quad \frac{6}{4}, \quad \frac{6}{6}, \quad \frac{6}{8}\]

are the biases of the process, relative to the standard in which the chances are uniformly 1/6. The process is biased in favor of an outcome when the bias is greater than 1, and biased against an outcome when the bias is less than 1. For instance, the process is biased in favor of outcome 4 by a factor of 6/4 = 1.5, meaning that the chance of the outcome is 1.5 times as great as in the standard. Similarly, the process is biased against outcome 1 by a factor of 24/6 = 4, meaning that the chance of the outcome is 6/24 = 0.25 times as great as in the standard. The uniform standard is unbiased relative to itself, with all biases equal to 1.

The general rule for obtaining the biases of an evolutionary process, relative to the uniform standard, is to multiply the chances by the number of possible outcomes. With 6 possible outcomes, this is equivalent to scaling the broken stick to a length of 6. We gain some clarity in discussion of CoI by referring to the biases, instead of the chances, of the evolutionary process. The process is metaphorically a broken stick, either way. Whether the segment lengths are biases or chances is just a matter of scale. We shall equate the length of the stick to the number of outcomes, and thus depict the biases of the process, for and against the possible outcomes corresponding to the segments.
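In code, the rule is a one-liner (a sketch, applied to the example description above):

    from fractions import Fraction as F

    chances = [F(1, 24), F(1, 3), F(1, 12), F(1, 4), F(1, 6), F(1, 8)]
    biases = [len(chances) * c for c in chances]  # multiply by n = 6
    print([str(b) for b in biases])  # ['1/4', '2', '1/2', '3/2', '1', '3/4']
    print(sum(biases))               # 6, the length of the rescaled stick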


To make the pictures clear, we assume that the evolutionary process has only 6 possible outcomes. Let’s name the possibilities 1, 2, 3, 4, 5, and 6. The process is unbiased if none of the possibilities has a greater chance of occurring than does any other, in which case the chance of each possible outcome is 1/6. According to Dembski et al., if we deny that the biases of the process are due to design, then we essentially say that a stick of length 6 broke randomly into 6 segments, and that the lengths of the segments determined the biases. Suppose that the length of the 3rd segment of the broken stick is 2. Then the evolutionary process is biased in favor of outcome 3 by a factor of 2. The chance of the outcome is

    \[2 \times \frac{1}{6} = \frac{1}{3}.\]

Suppose that the length of the 5th segment is 1/4. Then the process is biased against outcome 5 by a factor of 4, and the chance of the outcome is

    \[\frac{1}{4} \times \frac{1}{6} = \frac{1}{24}.\]

These biases are what Dembski et al. refer to as active information. The term, in and of itself, begs the question of whether something actively formed the process with bias in favor of a desired outcome.


ETA: Text between the horizontal rules comes from an earlier attempt at improving the introduction to the technical material, developed in discussion here at TSZ. I’ve quoted a comment posted 17 days ago.

Dembski et al. do not allow that such deviations from the supposedly “natural” default of uniform chance might be brute facts of physical reality. There must be a reason for bias. If we do not allow that bias is possibly due to design of the process to serve a purpose, then Dembski et al. force on us the view that bias itself arises by chance. (This is multifariously outrageous, but for reasons that are not clearly tied to their math.) That is, the chances of the possible outcomes of the evolutionary process are determined by an antecedent process, which is also random. Talk about the chances of chances gets very confusing, very fast. So I say instead that the evolutionary process is randomly biased by a process that occurs before it does. The biases of the evolutionary process are just the chances of the 6 possible outcomes of the evolutionary process, multiplied by 6. Setting the chances randomly is equivalent to setting the biases randomly.

The broken stick is a conventional metaphor for probabilities that are themselves set randomly. (I follow Dembski in reserving the word chance for the probability of a physically random outcome.) The random lengths of the segments of the stick are the probabilities. The stick is ordinarily of unit length, because the probabilities must sum to 1. To visualize random biases, instead of random chances, I need only multiply the length of the stick by the number of possible outcomes, 6, and randomly break the stick into 6 pieces. Then the biases sum to 6.

I stipulate that the biasing process, i.e., stick breaking, is uniform, meaning that all possible biases of the evolutionary process are equally likely to arise. A tricky point is that Dembski et al. allow for uniform biasing, but do not require it. The essential justification of my approach is that I need consider only something, not everything, that they allow in order to demonstrate that the theorem does not apply to scientific investigation. What I consider is in fact typical. The uniform biasing process is the average of all biasing processes. Thus there can be no objection to my choice of it.

Dembski et al. refer to all random processes as “searches.” The term is nothing but rhetorical assertion of the conclusion they want to draw. The stick-breaking “search” (process), which determines the biases of the evolutionary “search” (process), is a visualization of what they call a “search for a search.” Dembski et al. allow for the biasing process itself to be biased by an antecedent process, in which case there is a “search for a search for a search.” In Being as Communion, Dembski avoids committing to Big Bang cosmology, and indicates that the regress of searches for searches might go back forever in time. Fortunately, we need not enter a quasi-mystical quagmire to get at a glaring error in logic.


Animation 1. In the analysis of Dembski, Ewert, and Marks, the biases of an evolutionary process are like control knobs, either set by design, or set randomly by another process. The random biasing process is like a stick breaking into pieces. The biases of an evolutionary process are the lengths of the segments of a broken stick. Here the number of possible outcomes of the evolutionary process is 6, and a stick of length 6 breaks randomly into 6 segments. No segmentation is more likely than any other. Before the stick starts breaking, we expect any given segment to be of length 1. But when a scientist investigates an evolutionary process, the stick has already broken. The scientist may target the outcome for which the bias is greatest, i.e., the outcome corresponding to the longest segment of a broken stick. With 6 possible outcomes, the expected maximum bias is 2.45. Generalizing to n possible outcomes, the expected maximum bias of a randomly biased evolutionary process is a logarithmic function of n. The quantity grows without bound as the number of possible outcomes of evolution increases. The Conservation of Information Theorem of Dembski et al. tells us that the greater the bias in favor of an outcome specified in advance, the less likely the bias is to have arisen by breaking a stick, no matter how many the possible outcomes of the evolutionary process. It depends on an assumption that does not hold in scientific study of evolution.

In the most important case of CoI, all possible segmentations of the stick have equal chances of occurring. Although the segments almost surely turn out to be different in length, they are indistinguishable in their random lengths. That is, the chance that a segment will turn out to be a given length does not depend on which segment we consider. This is far from true, however, if the segment that we consider depends on what the lengths have turned out to be. Dembski et al. neglect the difference in the two circumstances when they treat their theorem as though it were a law of nature. Here’s an example of what CoI tells us: the probability is at most 1/2 that the first segment’s length will turn out to be greater than or equal to 2. More generally, for any given segment, the probability is at most 1/b that the segment’s length will turn out to be greater than or equal to b. This holds for sticks of all lengths n, broken into n segments. Recall that the random segment lengths are the random biases of the evolutionary process. CoI says that the greater the bias in favor of an outcome specified in advance, the less likely the bias is to have arisen by breaking a stick. The result is not useful in implicating design of biological evolution, as it assumes that an outcome was targeted in advance. To apply CoI, one must know not only that an outcome was targeted prior to the formation of the evolutionary process, but also which of the possible outcomes was targeted.3
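Here is a Monte Carlo check of the bound (my own illustration): targeting the first segment in advance, with b = 2, the bound of 1/b holds with plenty of room to spare.

    import random

    def segments(n):
        """A stick of length n, broken uniformly at random into n segments."""
        cuts = sorted(random.random() for _ in range(n - 1))
        points = [0.0] + cuts + [1.0]
        return [n * (b - a) for a, b in zip(points, points[1:])]

    n, b, trials = 6, 2.0, 100_000
    hits = sum(segments(n)[0] >= b for _ in range(trials))
    print(hits / trials)  # about 0.13, below the CoI bound of 1/b = 0.5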

Figure 2. In this frame from Animation 1, the segments of 20 broken sticks are colored according to their original positions. The expected length of each segment is 1, though the random lengths are highly variable. According to CoI, the probability is at most 1/2 that the length of the blue segment will turn out to be 2 or greater. More generally, for any given segment, the probability is at most 1/b that the length of the segment will turn out to be greater than or equal to b. This does not hold if we specify a segment in terms of the outcome of the random segmentation of the stick. In particular, CoI does not apply to the longest segment.

Figure 3. In this frame from Animation 1, the segments of each of the 20 broken sticks have been sorted into ascending order of length, and recolored. The expected length of the longest (red) segment is 2.45. By the Broken Stick Theorem, the probability is .728 that at least one of the segments is of length 2 or greater. By misapplication of CoI, the probability is at most 1/2. For a stick of length n, the probability is greater than 1/2 that at least one of the n segments exceeds \ln n in length. There is no limit on the ratio of probability 1/2 to the faux bound of 1/\ln n.

The Broken Stick Theorem tells us quite a bit about the lengths of segments. What is most important here is that, for any given length, we can calculate the probability that one or more of the segments exceeds that length. For instance, the probability is 1/2 that at least one of the segments is of length 2.338 or greater. If you were to misapply CoI, then you would say that the probability would be no greater than 1/2.338, which is smaller than 1/2. A simple way to measure the discrepancy is to divide the actual probability, 1/2, by the CoI bound, 1/2.338. The result, 1.169, is small only because the illustration is small. There is no limit on how large it can be for longer sticks. Let’s say that the stick is of length n, and is broken into n segments. Then the probability is greater than 1/2 that at least one of the segments exceeds \ln n in length. Here \ln n is the natural logarithm of n. The details are not important. What matters is that we can drive the faux bound of 1 / \ln n arbitrarily close to 0 by making n large, while the correct probability remains greater than 1/2.
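Those numbers, too, are easy to check by simulation (again my illustration), this time targeting the longest segment after the stick has broken:

    import random

    def max_segment(n):
        """The longest segment of a stick of length n, broken uniformly
        at random into n segments."""
        cuts = sorted(random.random() for _ in range(n - 1))
        points = [0.0] + cuts + [1.0]
        return n * max(b - a for a, b in zip(points, points[1:]))

    trials = 100_000
    for b in (2.0, 2.338):
        hits = sum(max_segment(6) >= b for _ in range(trials))
        print(b, hits / trials)  # about .728 and .500, respectively

The misapplied CoI bounds, 1/2 and 1/2.338, are both exceeded.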

Cool, but nonessential: The relation of the expected lengths of the ordered segments of a broken stick to the harmonic numbers. Here E[B_{(i)}] is the expected value of B_{(i)}, the i-th smallest of the random segment lengths (biases). As it happens, the notation E[\cdot], widely used in probability and statistics, was introduced by William Allen Whitworth, who derived the Broken Stick Theorem.

    \begin{align*}
    E[{B}_{(6)}] &= \frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} = \mathcal{H}_6\\
    E[{B}_{(5)}] &= \phantom{\frac{1}{1} +\;\,} \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} = \mathcal{H}_6 - \mathcal{H}_1\\
    E[{B}_{(4)}] &= \phantom{\frac{1}{1} + \frac{1}{2} +\;\, } \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} = \mathcal{H}_6 - \mathcal{H}_2 \\
    E[{B}_{(3)}] &= \phantom{\frac{1}{1} + \frac{1}{2} + \frac{1}{3} +\;\, } \frac{1}{4} + \frac{1}{5} + \frac{1}{6} = \mathcal{H}_6 - \mathcal{H}_3 \\
    E[{B}_{(2)}] &= \phantom{\frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} +\;\, } \frac{1}{5} + \frac{1}{6} = \mathcal{H}_6 - \mathcal{H}_4 \\
    E[{B}_{(1)}] &= \phantom{\frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} +\;\, } \frac{1}{6} = \mathcal{H}_6 - \mathcal{H}_5 \\
    E[{B}_{(1)}] + \cdots + E[{B}_{(6)}] &= \frac{1}{1} + \frac{2}{2} + \frac{3}{3} + \frac{4}{4} + \frac{5}{5} + \frac{6}{6} = 6
    \end{align*}
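The pattern is easy to verify numerically (a sketch; the simulated averages of the sorted segment lengths line up with the harmonic-number differences):

    import random

    def sorted_segments(n):
        cuts = sorted(random.random() for _ in range(n - 1))
        points = [0.0] + cuts + [1.0]
        return sorted(n * (b - a) for a, b in zip(points, points[1:]))

    n, trials = 6, 100_000
    totals = [0.0] * n
    for _ in range(trials):
        for i, length in enumerate(sorted_segments(n)):
            totals[i] += length
    H = [sum(1 / j for j in range(1, k + 1)) for k in range(n + 1)]
    print([round(t / trials, 2) for t in totals])                # simulated
    print([round(H[n] - H[n - i], 2) for i in range(1, n + 1)])  # exact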

For large n, \mathcal{H}_n \approx \ln n + \gamma, where \gamma \approx 0.5772 is the Euler-Mascheroni constant. So the expected maximum bias (“active information”) of a randomly biased process is logarithmic in the number of possible outcomes. For large n,

    \[P(B_{(n)} > \ln n) \approx 1 - \frac{1}{e} \approx .6321.\]

The derivation is straightforward, but not brief. I decided that the loose bound

    \[P(B_{(n)} > \ln n) > \frac{1}{2}\]

better serves present purposes.
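Both the approximation and the loose bound are easy to check by simulation (my illustration, with n = 1000):

    import math, random

    def max_segment(n):
        cuts = sorted(random.random() for _ in range(n - 1))
        points = [0.0] + cuts + [1.0]
        return n * max(b - a for a, b in zip(points, points[1:]))

    n, trials = 1000, 20_000
    hits = sum(max_segment(n) > math.log(n) for _ in range(trials))
    print(hits / trials)  # about .63, near 1 - 1/e and well above 1/2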

Rather than simply argue that the analysis of Dembski et al. does not apply, I have identified a comparable analysis that does apply, and used it to quantify the error in misapplying their analysis. The expected maximum bias (“active information”) for a randomly biased process (“search”) grows without bound as the size of the space of possible outcomes (“search space”) increases. For n possible outcomes, the probability is greater than 1/2 that the maximum bias exceeds \ln n. According to CoI, the probability is at most 1 / \ln n that the bias in favor of a given outcome is \ln n or greater. The discrepancy is entirely a matter of whether a possible outcome is targeted in advance of generating the process (“hit this”), or the most probable outcome of the process is targeted after the fact (“this is what it hits”). It should be clear that a scientist is free to do the latter, i.e., to investigate the most probable outcome of a process observed in nature.4 In Dembskian terms, the active information measure permits us to inspect the distribution of arrows shot into a wall by a blind archer, and paint a target around the region in which the density of arrows is greatest. There is no requirement that the target have the detachable specification that Dembski emphasized in his earlier writings.

Why a bug is not a weasel

In 1986, Richard Dawkins published The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design, a response to William Paley’s Natural Theology: or, Evidences of the Existence and Attributes of the Deity; Collected from the Appearances of Nature (1802). Dembski’s career in ID is largely a response to Dawkins. Indeed, the highlights are cooptations of ideas in The Blind Watchmaker. Dawkins characterizes objects that are complicated, and seemingly designed, as “statistically improbable in a direction that is specified not with hindsight.” Dembski elaborates on the self-same property in The Design Inference: Eliminating Chance through Small Probabilities (1998), taking it as information imparted to objects by design. A not-with-hindsight specification is, in his parlance, detachable from the specified event (set of objects), called the target. Dembski usually refers to complicatedness as complex specified information or specified complexity, but sometimes also as specified improbability. The last term gives the best idea of how it contrasts with active information, the elevated probability of an event not required to have a detachable specification. In No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence, he states a Law of Conservation of Information for specified complexity. (As explained in Appendix 1 below, Dembski and Marks have not mentioned this LCI since stating their LCI for active information.)

[This section is not complete. I expect to add the guts over the next day or so, which means, as Joe can tell you, that you should expect them sometime in December. The gist is that specified complexity does not apply to Dawkins’ Weasel program. Dembski has made much of the meaningfulness of the target sentence. But the fact of the matter is that the model is the same for all target sentences comprising 28 uppercase letters and spaces. The target need not be specified. The measure of active information formalizes Dawkins’ comparison of the model to the proverbial monkeys at typewriters. It does not stipulate that the target have a detachable specification. Dembski and Marks seem to have thought that it was transparently obvious that Dawkins had selected a desired outcome in advance, and had informed (programmed) the evolutionary process to “hit the target.”]

Dembski discusses the Weasel program on pp. 176-180 of Being as Communion. Here is how he describes “search,” in general, and the Weasel program, in particular:

In The Blind Watchmaker, Dawkins purports to show how natural selection creates information. In that book, he gives his famous METHINKS IT IS LIKE A WEASEL computer simulation. A historian or literary scholar, confronted with the phrase METHINKS IT IS LIKE A WEASEL, would look to its human author, William Shakespeare, to explain it (the phrase is from Hamlet). An evolutionary theorist like Dawkins, by contrast, considers what it would take for an evolutionary process, simulated by an algorithm running on a computer, to produce this target phrase. All such algorithms consist of:

  1. an initialization (i.e., a place where the algorithm starts — for Dawkins the starting point is any random string of letters and spaces the same length as METHINKS IT IS LIKE A WEASEL);
  2. a fitness landscape (i.e., a measure of the goodness of candidate solutions — for Dawkins, in this example, fitness measures proximity to the target phrase so that the closer it is to the target, the more fit it becomes);
  3. an update rule (i.e., a rule that says where to go next given where the algorithm is presently — for Dawkins this involves some randomization to existing candidate phrases already searched as well as an evaluation of fitness along with selection of those candidates with the better fitness);
  4. a stop criterion (i.e., a criterion that says when the search has gone on long enough and can reasonably be ended — for Dawkins this occurs when the search has landed on the target phrase METHINKS IT IS LIKE A WEASEL).

Note that in these four steps, natural selection is mirrored in steps (2) and (3).

It is important to note that Dembski addresses algorithms, or designs of computer programs, in engineering terms, and does not address models (implemented by computer programs) in scientific terms. This amounts to a presumption, not a demonstration, that the computational process (running program) is designed to generate a desired outcome.

[What I hope to get across here is why Dembski et al. cannot misconstrue Felsenstein’s model, called the GUC Bug, as he does Dawkins’ model. Those of you who argue with ID proponents should put the tired old Weasel out to pasture, or wherever it is that old Weasels like to go, and give Felsenstein’s Killer Bug a try.]

Figure 4. Felsenstein’s GUC Bug model contrasts starkly with Dembski’s travesty of Dawkins’ Weasel program. There can be no argument that Felsenstein designed the model to hit a target, because we define the target in terms of the modeled process. The model implemented by the Weasel program is not terribly different. But it is terribly easy to brush aside the model, and focus upon the program. Then the claim is that Dawkins designed the program to hit a specified target with its output.

ID the future

It is telling, I believe, that Dembski gave a detachable specification, “teleological system/agent,” of the target for biological evolution in his seminar talk, and that Ewert gives a detachable specification of the event that he targets, birds, in his response to Felsenstein and me. Ewert addressed active information in his master’s thesis (2010), but developed a new flavor of specified complexity for his doctoral dissertation (2013; sequestered until 2018). He, Dembski, and Marks have published several papers on algorithmic specified complexity (one critiqued here, another here). Dembski indicates, in a footnote of Being as Communion, that he and Marks are preparing the second edition of No Free Lunch (presumably without changing the subtitle, Why Specified Complexity Cannot Be Purchased without Intelligence). My best guess as to what to make of this is that Dembski et al. plan to reintroduce specification in LCI Version 3. One thing is sure: ever mindful of the next judicial test of public-school instruction in ID, they will not breathe a hint that their publications on active information are any less weighty than gold. Ewert has demonstrated some of the revisionary tactics to come.

Appendix 1: Contradictory laws on the books

There actually have been two Laws of Conservation of Information. The first, featured in No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (Dembski, 2002), addresses the specified complexity, also known as the specified improbability, of an event. The second, featured in Being as Communion: A Metaphysics of Information (Dembski, 2014), addresses the active information of a process, supposedly necessary for “unnatural” elevation in probability of an event. Specified improbability is loosely the opposite of elevated probability. Dembski and Marks evidently saw better than to claim that both are conserved, as they have said nothing about the first law since coming up with the second. Although Dembski opens Being as Communion by indicating that it is the last book of a trilogy that includes No Free Lunch, his only mention of specified complexity is in a footnote listing examples of “materialist-refuting logic.” He also notes that he and Marks are preparing the second edition of No Free Lunch. To include both specified complexity and active information in the cutlery is to serve up free lunch. It equips the ID theorist to implicate design when an event is too improbable (relative to a probability induced by specification), and also when an event is too probable (relative to a probability asserted a priori).

Appendix 2: Remembrance of information past

Here I give ample evidence that the “search” really was supposed to search for the targeted event, and that “active information” really was supposed to account for its probability of success. I begin with two technical abstracts. If you find yourself getting bogged down, then read just the text I’ve highlighted. The first is for Dembski’s seminar talk (August 2014).

Conservation of Information (CoI) asserts that the amount of information a search outputs can equal but never exceed the amount of information it inputs. Mathematically, CoI sets limits on the information cost incurred when the probability of success of a targeted search gets raised from p to q (p < q), that cost being calculated in terms of the probability p/q. CoI builds on the No Free Lunch (NFL) theorems, which showed that average performance of any search is no better than blind search. CoI shows that when, for a given problem [targeted event], a search outperforms blind search, it does so by incorporating an amount of information determined by the increase in probability with which the search outperforms blind search. CoI applies to evolutionary search, showing that natural selection cannot create the information that enables evolution to be successful, but at best redistributes already existing information. CoI has implications for teleology in nature, consistent with natural teleological laws mooted in Thomas Nagel’s Mind & Cosmos.

Apart from hiding a law of nature under a bushel, this is not much different from the abstract of “Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information” [sic] (Dembski and Marks, 2010; preprint 2008).

LCI characterizes the information costs that searches incur in outperforming blind search. Searches that operate by Darwinian selection, for instance, often significantly outperform blind search. But when they do, it is because they exploit information supplied by a fitness function — information that is unavailable to blind search. Searches that have a greater probability of success than blind search do not just magically materialize. They form by some process. According to LCI, any such search-forming process must build into the search at least as much information as the search displays in raising the probability of success. More formally, LCI states that raising the probability of success of a search by a factor of q/p (> 1) incurs an information cost of at least log(q/p). [… Conservation of information] theorems provide the theoretical underpinnings for the Law of Conservation of Information. Though not denying Darwinian evolution or even limiting its role in the history of life, the Law of Conservation of Information shows that Darwinian evolution is inherently teleological. Moreover, it shows that this teleology can be measured in precise information-theoretic terms.
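For concreteness, here is the information cost computed from made-up numbers (mine, not theirs). Raising the probability of success from p = 10^{-9} to q = 1/2 incurs, according to LCI, an information cost of at least

    \[\log_2 \frac{q}{p} = \log_2 \frac{1/2}{10^{-9}} \approx 28.9 \text{ bits},\]

taking logarithms base 2 to express the cost in bits.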

The putative measure of teleology is log(q/p), the active information of the evolutionary search. Dembski also says in Being as Communion that a search is informed to find a target, not merely biased in favor of it.

A precise theoretical justification for the claim that natural selection is inherently teleological comes from certain recent mathematical results known as Conservation of Information (CoI) theorems [p. 148].

Simply put, searches, in finding targets output information. At the same time, to find targets, searches need to input information [p. 152].

CoI shows that successful search (i.e., one that locates a target) requires at least as much input of information as the search by its success outputs [p. 150].

The information that goes into formation of the search, to increase the probability that it finds the target, is active information. Returning to “Life’s Conservation Law” (Section 1, “The Creation of Information”):

Nature is a matrix for expressing already existent information. But the ultimate source of that information resides in an intelligence not reducible to nature. The Law of Conservation of Information, which we explain and justify in this paper, demonstrates that this is the case.

Dembski and Marks hold that the ultimate source of active information, which increases the probability that evolutionary search achieves a purpose, is supernatural intelligence. However, Ewert tells us that “active information,” regarded as bias instead of information, is not necessarily due to design.

The conservation of information does not imply a designer. It is not a fine-tuning argument. It is not our intent to argue that all active information derives from an intelligent source. To do any of those things, we’d have to introduce metaphysical assumptions that our critics would be unlikely to accept. Conservation of information shows only that whatever success evolutionary processes might have, it is due either to the original configuration of the universe or to design.

This reversal is not due to Ewert. He’s obviously adapting arguments in Being as Communion, though without citing a source.

Notes

1. Felsenstein and I give the bound for just the fittest of all genotypes. I’ve extended it to the set of all local maxima of the fitness landscape. We classify haploid organisms into genotypes according to their DNA bases in L positions of the genome. The neighbors of a genotype are the 3L genotypes that differ from it in exactly one of the L positions. We require that the fittest genotype of each neighborhood of K=3L+1 genotypes be unique. It follows immediately that at most one genotype per neighborhood is a local maximum of the fitness landscape, and that the ratio of the total number of genotypes to the number of local maxima is at least K. Evolution begins with a random genotype, proceeds along the path of steepest ascent on the landscape, and ends at a local maximum. The minimum degree of constraint on the final genotype in the process is K. This is also the minimum “active information” with respect to (targeting) the set of local maxima. That is, the probability is q = 1 that the final genotype is a local maximum. The uniform probability of the set of local maxima is p \leq 1/K. Finally, the active information, without conversion to a log scale, is q / p \geq K.

2. Although the term bias is technically acceptable — indeed, I have used it, and will continue to use it in contexts where constraint is inappropriate — Ewert earns scorn by abusing it in the most predictable of ways. The problem with referring to the bias of a natural process is that the general reader gets the idea that the process “naturally” ought to have behaved in some other way, and deviates only because something biased it. And thus the Designer enters through the back door, not by evidence or reason, but instead by rhetorical device. Usually, the meaning of bias is only that some of the possible outcomes of a process have different chances of occurring than do others. If this were always the case, then I would refer instead to the non-uniformity of the probability distribution on outcomes. By the way, I am not conflating all probabilities in scientific models with physical chances, as Dembski et al. generally do. Much of what is modeled as random in biological evolution is merely uncertain, not attributed to quantum chance. The vitally important topic of interpretations of probability, which Dembski deflects with a false analogy to interpretations of quantum mechanics (Being as Communion, p. 157), will have to wait for another post.

3. CoI applies more generally to events, meaning sets of possible outcomes. But that’s irrelevant to the logic, or lack thereof. For readers familiar with Dembski’s measure of specified complexity, I should mention that the measure of active information permits us to target any event whatsoever. There is no requirement that the event have a detachable specification. Dembski’s arguments to the effect that an event with a detachable specification might as well have been prespecified are irrelevant here.

4. What it means to investigate the most probable outcome of a process observed in nature is highly problematic. In particular, we generally cannot say anything sensible about the chances of possible outcomes of a one-shot process. Complex processes that have occurred once, and cannot be repeated, are what commonly interest evolutionary biologists. I should make it clear that I don’t agree with Dembski et al. that evolutionary biologists should make claims about the chances of this, that, and the other. I’m essentially playing along, to show that their math is not even applicable.

228 thoughts on “The Law of Conservation of Information is defunct”

  1. … it produces birds much more often than chance would otherwise lead us to predict.

    What exactly does that mean? How often would we expect birds? Since we got birds once, it seems that the only possibility for “less often” would be zero. But if our expectation (starting from what initial conditions?) is zero, how is one “much more often”, given an integer result? Very confusing.

  2. John Harshman:
    … it produces birds much more often than chance would otherwise lead us to predict.

    What exactly does that mean? How often would we expect birds? Since we got birds once, it seems that the only possibility for “less often” would be zero. But if our expectation (starting from what initial conditions?) is zero, how is one “much more often”, given an integer result? Very confusing.

    I have a fair idea of what he’s trying to say only because I know the formal definition of “active information.” He’s assuming that there’s a space of all “configurations” of matter. I have no idea of what that means, and I believe that the reason I have no idea is that it’s in fact meaningless. But give him that. Then what he means by birds is a subset of the configurations of matter. The outcome of a uniformly random configuration of matter is a bird with low chance p. The outcome of the process of biological evolution is a bird with relatively high chance q, i.e., q/p is large. The ratio b = q/p is the bias of the evolutionary process in favor of the event birds.

    The CoI theorem says, assuming that the process was randomly biased (with biases determined by a broken stick), that the chance of bias of b or greater in favor of a given event is at most 1/b. However, Ewert has selected the event after the fact, not in advance of formation of the evolutionary process. The CoI theorem does not apply, and thus he has no justification for his references to “conservation of information.”

  3. Tom English,

    So it’s the bias of evolution favoring birds instead of, for example, half a twinkie made of molybdenum or a small puddle of brown liquid, possibly creosote?

  4. Welcome, John. For those who don’t know, John is not only an excellent and experienced molecular systematist, well-versed in phylogenetic methods (his 1994 paper got two pages of coverage in my book), but has also been the voice of wisdom and sanity in innumerable online discussions of evolution.

  5. OK, I will get to the broken-sticks issue very soon. In the meantime, why is Ewert describing Active Information as involving picking one configuration of matter out of all possible configurations of matter?

    Dembski, Ewert, and Marks’s argument involves a space of genotypes (or of population genotypic compositions). Their “searches” are then all the ways of assigning probabilities to those, in effect all weighted averages whose weights are nonnegative and add up to 1. The fractions (p, q, etc.) are probabilities of those, under equiprobability.

    Why does that correspond with all possible configurations of matter? I can’t see that it possibly can. Most configurations of matter would be hard to achieve while maintaining physical laws (say putting most bits of metal in random places up in the air).

  6. John Harshman:
    Tom English,

    So it’s the bias of evolution favoring birds instead of, for example, half a twinkie made of molybdenum or a small puddle of brown liquid, possibly creosote?

    Yup. And a “physical” process that is unbiased in the sense of being uniformly distributed on a space of states has no physics, let alone chemistry, to speak of.

    Read Genesis 1:2-3 and John 1:1-4. There is no form and no life that is not due to active in-form-ation by the Word. I shrug off much of what Dembski says, and realized only recently that he was telling it straight when he said that ID is the Logos theology of the Gospel of John, restated in the idiom of information theory. I’m not one to attack ID simply because it’s religiously motivated. It’s not logically impossible to turn religious ideas into science. (I don’t mean to downplay the fabulous dishonesty of pretending to have science when you really have only secularized religious ideas as to how you might do science.) But I think it’s important to understand that there is a fairly strong constraint on how the claims of Dembski and Marks (and Boo Boo too) will evolve.

  7. Joe Felsenstein: OK, I will get to the broken-sticks issue very soon.

    Go for it. As you know, I want to do much more with the incomplete section than I’d originally planned. (Actually, I’ve already done much more than you know.) I’ve finally admitted to myself that I need to punt, and do another post. But I can’t give up wanting to put it in the coffin corner.

  8. Joe Felsenstein: In the meantime, why is Ewert describing Active Information as involving picking one configuration of matter out of all possible configurations of matter?

    Because it applies more generally than to biology. If you think about the “search for a search” regress, it really must, because abiotic processes “spawn” biotic processes.

    By the way, I avoided having to talk about the regress by going with uniform fragmentation of the stick (in one dimension). The biasing process is itself unbiased. If the fragmentation were not uniform, then the expected maximum bias of the evolutionary process would possibly go up, not down. So, to make a thoroughly confusing remark that I avoided in the post, I’ve given a logarithmic lower bound on the expected maximum bias.

    ETA: Wrong, wrong, wrong. A biased stick-breaking process might make all of the segments of equal length with probability 1.

  9. OK, let me grill Tom a bit on the broken stick model. You use this to assign lengths to pieces of the stick. If a process were to choose a random point on the stick, it would then be more likely to end up choosing a larger piece, and the math you cite shows that.

    But I see you choosing pieces at random, each with probability 1/6, rather than proportional to their lengths.

    You characterize Dembski’s statements by saying:

    We’ll even watch them breaking randomly to pieces. This is how Dembski et al. see the biases of an evolutionary process being determined, in the absence of design.

    Does Dembski invoke broken stick processes, or are you analogizing those to the processes he does describe? It’s the analogy of your process to “evolutionary search” that I need to understand.

  10. Joe Felsenstein: You use this to assign lengths to pieces of the stick. If a process were to choose a random point on the stick, it would then be more likely to end up choosing a larger piece, and the math you cite shows that.

    But I see you choosing pieces at random, each with probability 1/6, rather than proportional to their lengths.

    As detailed as my detailed response (next comment) is, I haven’t managed to work in an adequate response to this. I’ve run out of steam for it, and need to get back to work on the incomplete section of the opening post. But I don’t want to ignore what you’ve said here.

    All pieces of the broken stick determine biases of the evolutionary process. There is one segment for each of the possible outcomes. The evolutionary process has 6 outcomes, so there must be 6 biases. We break a stick of length 6 randomly into 6 pieces. The length of the first segment is the bias for possible outcome 1, …, and the length of the last segment is the bias for possible outcome 6 of the evolutionary process.

    When we measure the bias (“active information”) with respect to (targeting) an outcome, we implicitly measure the length of the corresponding segment of the metaphorical broken stick. [That’s good. Need to work it in somewhere.] There is no random selection of an outcome/segment. The CoI theorem assumes that the targeting (outcome/segment selection) is in advance of formation of the evolutionary process. The scientist, however, may do the targeting after observing the process, and may tacitly select the longest of the metaphorical stick segments precisely because it is the longest. You did something similar with the GUC Bug model, targeting the fittest genotype because you knew that the process was strongly biased in favor of it. (However, we are not saying that the biases of the model process come from a broken stick. That is the bizarre interpretation of Dembski et al., described in my next comment.) Considering why your example worked is what led to my particular response to the CoI theorem.

  11. Your GUC Bug model is defined on a space of genotypes of length 1000, of which there are 4^{1000}, or about 1.15 \times 10^{602}. That’s the number of possible outcomes of the evolutionary process. Though modest by biological standards, it’s impossible to handle in graphics. I reduced the number of possible outcomes to 6, a number big enough to illustrate the inapplicability of CoI, and small enough to allow for clear images.

    Dembski et al. go with a radically abstract notion of a process. In their math, a process is reduced to nothing but the chances of its possible outcomes. So a process with 6 possible outcomes is represented as 6 numbers, each between 0 and 1, that sum to 1. In practice, Dembski et al. regard a process with uniform chances, e.g.,

        \[\frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6} = 1,\]

    as devoid of design. They say, in essence, that the possible outcomes “naturally” have equal chances of occurring, and that deviation from the default is itself a physically real phenomenon that demands explanation. When the possible outcomes do not have equal chances of occurring, e.g.,

        \[\frac{1}{24} + \frac{1}{3} + \frac{1}{12} + \frac{1}{4} + \frac{1}{6} + \frac{1}{8} = 1,\]

    then the process is biased in favor of some outcomes (the chances 1/3 and 1/4 are greater than 1/6), and biased against others (the chances 1/24, 1/12, and 1/8 are less than 1/6). To obtain the corresponding biases, we simply divide each of the chances by 1/6, which is equivalent to multiplying each of them by 6:

        \[\frac{6}{24} + \frac{6}{3} + \frac{6}{12} + \frac{6}{4} + \frac{6}{6} + \frac{6}{8} = \frac{6}{1},\]

    which reduces to

        \[\frac{1}{4} + 2 + \frac{1}{2} + \frac{3}{2} + 1 + \frac{3}{4} = 6.\]

    Dembski et al. do not allow that such deviations from the supposedly “natural” default of uniform chance might be brute facts of physical reality. There must be a reason for bias. If we do not allow that bias is possibly due to design of the process to serve a purpose, then Dembski et al. force on us the view that bias itself arises by chance. (This is multifariously outrageous, but for reasons that are not clearly tied to their math.) That is, the chances of the possible outcomes of the evolutionary process are determined by an antecedent process, which is also random. Talk about the chances of chances gets very confusing, very fast. So I say instead that the evolutionary process is randomly biased by a process that occurs before it does. The biases of the evolutionary process are just the chances of the 6 possible outcomes of the evolutionary process, multiplied by 6. Setting the chances randomly is equivalent to setting the biases randomly.

    The broken stick is a conventional metaphor for probabilities that are themselves set randomly. (I follow Dembski in reserving the word chance for the probability of a physically random outcome.) The random lengths of the segments of the stick are the probabilities. The stick is ordinarily of unit length, because the probabilities must sum to 1. To visualize random biases, instead of random chances, I need only multiply the length of the stick by the number of possible outcomes, 6, and randomly break the stick into 6 pieces. Then the biases sum to 6.

    I stipulate that the biasing process, i.e., stick breaking, is uniform, meaning that all possible biases of the evolutionary process are equally likely to arise. A tricky point is that Dembski et al. allow for uniform biasing, but do not require it. The essential justification of my approach is that I need consider only something, not everything, that they allow in order to demonstrate that the theorem does not apply to scientific investigation. What I consider is in fact typical. The uniform biasing process is the average of all biasing processes. Thus there can be no objection to my choice of it.
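
    This does not prove the averaging claim, but a quick empirical check of a closely related property is easy, reusing random_biases from the sketch above: under uniform stick breaking, the expected bias for each outcome is 1, i.e., no outcome is favored on average.

        n_trials = 100_000
        totals = [0.0] * 6
        for _ in range(n_trials):
            for i, b in enumerate(random_biases()):
                totals[i] += b
        print([round(t / n_trials, 2) for t in totals])   # all close to 1.0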

    Dembski et al. refer to all random processes as “searches.” The term is nothing but rhetorical assertion of the conclusion they want to draw. The stick-breaking “search” (process), which determines the biases of the evolutionary “search” (process), is a visualization of what they call a “search for a search.” Dembski et al. allow for the biasing process itself to be biased by an antecedent process, in which case there is a “search for a search for a search.” In Being as Communion, Dembski avoids committing to Big Bang cosmology, and indicates that the regress of searches for searches might go back forever in time. Fortunately, we need not wade into a quasi-mystical quagmire to get at a glaring error in logic.
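
    In miniature, a “search for a search” is just two stages of sampling: first break the stick, then draw an outcome with the resulting chances. A sketch, again reusing random_biases (Dembski et al., of course, work at a far more abstract level):

        import random

        def sample_outcome():
            biases = random_biases()            # antecedent process: break the stick
            chances = [b / 6 for b in biases]   # convert biases back to chances
            return random.choices(range(6), weights=chances)[0]   # the evolutionary "search"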

  12. I invite all to give me feedback on the preceding comment. Something like it will go into the opening post, along with a complete “Why a Bug Is Not a Weasel” section.

  13. Thanks, that clears me up on the analogy you intended.

    I presume from this that you can, for the broken-stick model, compute the amount of Active Information that is present once the stick has been broken. It would be based on the unequal probabilities, assuming that the segments have probabilities proportional to their lengths.
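
    For what it’s worth, on the reading above, in which the biases are the segment lengths, that computation is just the base-2 logarithm of each bias, i.e., log2(p_i / (1/6)). A sketch with the biases from the worked example:

        import math

        biases = [0.25, 2.0, 0.5, 1.5, 1.0, 0.75]
        print([round(math.log2(b), 2) for b in biases])
        # [-2.0, 1.0, -1.0, 0.58, 0.0, -0.42] bits, per target outcome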

  14. You seem to have it backwards. Variation and selection do not define excess reproduction; they fine-tune it.

    If higher fecundity is favored, this assumes the organism had differential excess reproductive capability from the outset, otherwise there is nothing to favor. If all early rabbits produced 1-2 rabbitos, then the favoring would go downhill extremely fast. So you would have to assume that early rabbits reproduced in differing amounts, some 1-2, some 3-4, some 4-6, still others 8-12. And so finally the number 8-12 would be settled on as the optimum ratio for continuity.

    But why would early rabbits produce in differing ratios? The cost of a single reproductive session would already be high, but the cost of reproduction AND the subsequent post-natal care requirements of 8-12 offspring would be too high.

    Unless of course they had already been outfitted with the required tools, IOW having excess reproduction designed in. Again, variation and selection don’t work without it. Excess reproduction drives variation and selection, not the other way around.

    So whichever way you look at it, only a design perspective provides a satisfactory explanation for all these conundrums and paradoxes. Non-teleological step-wise incremental changes never get you out of the box.

    Allan Miller:
    Steve,

    It’s true. If net replacement in a sexual species is < 2, the species is headed for extinction. Therefore higher fecundity is favoured in organisms with higher mortality. The species left are those that were able to adapt to this selective pressure, which has the same effect as any other. Fecundity can be selected for, and optimised by selection alone.

    It’s really no different in overall effect from (say) predator defences, and needs no special separate Design Factor. Of course, you think predator defences were designed as well. Regardless, there is no separate requirement for fecundity on either paradigm.

  15. Steve:

    That is silly. If there is no excess reproduction, there is still reproduction, and selection can act on the amount of reproduction, making it more (or less) excess.

    Excess reproduction above 2 offspring per parent would usually be necessary because in many species there is a considerable mortality before the offspring reach reproductive age.

    Why do you think codfish produce billions of eggs? Are they just obsessive?

  16. Joe Felsenstein:
    Welcome, John. For those who don’t know, John is not only an excellent and experienced molecular systematist and someone well-versed in phylogenetic methods (his 1994 paper got two pages of coverage in my book), but he has also been the voice of wisdom and sanity in innumerable online discussions of evolution.

    Welcome endorsed!

  17. John Harshman:
    … it produces birds much more often than chance would otherwise lead us to predict.

    What exactly does that mean? How often would we expect birds? Since we got birds once, it seems that the only possibility for “less often” would be zero. But if our expectation (starting from what initial conditions?) is zero, how is one “much more often”, given an integer result? Very confusing.

    Missing from just about any ID calculation I have seen is an estimated expected value under the null of No Design.

  18. Steve: Actually, it makes perfect sense. Excess reproduction is the design element that does not require organisms to have foresight. The foresight is in the design. It doesn’t matter what the environmental conditions are at any given moment. Excess reproduction ensures that enough variation is enabled to meet any contingency.

    No it doesn’t.

    How much “excess” is required depends on the environment. “Anything over two” won’t “ensure…that enough variation is enabled to meet any contingency” if the contingency in question is a 1 in 10 survival rate. On the other hand, a much larger excess won’t be advantageous if it means that resources go into reproduction and the feeding of infants in an environment where predators are few but food is scarce.

    There is an optimum reproduction rate for a given environment, and the Darwinian mechanism is an optimiser. It doesn’t require foresight.

  19. I must have missed it: what is the evidence that natural selection and drift can produce what ID says requires a designer?

  20. Elizabeth: Missing from just about any ID calculation I have seen is an estimated expected value under the null of No Design.

    Then why don’t you provide that? You do realize that P(T|H) is something that you need to provide, right?

  21. Frankie:

    Elizabeth: Missing from just about any ID calculation I have seen is an estimated expected value under the null of No Design.

    Then why don’t you provide that? You do realize that P(T|H) is something that you need to provide, right?

    Well, that’s the whole point about why (the current definition of) Specified Complexity is a useless afterthought to arguments about Design, rather than being a powerful quantity that shows that Design must be the only viable explanation.

    The logic of showing that only Design can explain an adaptation is (currently):

    1. Observe the adaptation.
    2. By some argument, which Design advocates do not provide us, compute the probability P(T|H) that an adaptation as good as this, or better, can evolve by ordinary evolutionary processes.
    3. If that is smaller than the Universal Probability Bound, declare that Design is the only viable explanation.

    So far, this is a reasonable argument, provided one can do step 2. The argument is over at this point, except that …

    4. Oh yes. Design having already won, we also declare that Specified Complexity is present.

    So what important role does step 4 play? Nothing, except for making the whole argument sound mysterious and “mathy”.

    And what powerful tool do Design advocates provide us with? Zilch, nada, bupkes.
    They leave us to do step 2, and don’t provide any means of doing so. Plus they tack on a useless step 4, and then go around crowing about what a powerful concept Specified Complexity is.
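
    To underline the point, here is the whole usable procedure as a few lines of Python (a sketch of the logic above, not anyone’s published code; the bound is Dembski’s Universal Probability Bound of 10^-150, and the probability passed in is the step-2 quantity that Design advocates leave us to compute):

        UPB = 1e-150   # Dembski's Universal Probability Bound

        def design_inference(p_t_given_h):
            # p_t_given_h: the step-2 probability, which must be supplied from outside
            if p_t_given_h < UPB:
                return "Design (and, redundantly, 'Specified Complexity is present')"
            return "no Design inference"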

  22. Joe, you are simply confused and erecting strawmen. If your position had something then ID would be a non-starter. It is up to you and yours to provide a P(T|H) and you have failed.

  23. Frankie: what ID says requires a designer?

    What does ID say requires a designer? That seems to vary between the universe and a bacterial flagellum. That’s quite a range; care to get specific?

  24. Frankie: Joe, you are simply confused and erecting starwmen. If your position had something then ID would be a non-starter. It is up to you and yours to provide a P(T|H) and you have failed.

    Yet a little earlier in another thread Frankie quoted this:

    Darwinism, Design and Public Education page 92:

    1. High information content (or specified complexity) and irreducible complexity constitute strong indicators or hallmarks of (past) intelligent design.
    2. Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.
    3. Naturalistic mechanisms or undirected causes do not suffice to explain the origin of information (specified complexity) or irreducible complexity.
    4. Therefore, intelligent design constitutes the best explanations for the origin of information and irreducible complexity in biological systems.

    So there are two possible indicators of Design. One is Specified Complexity. Point 2 argues that we see “high information content (specified complexity)” in biological systems, and then point 3 argues that “naturalistic mechanisms” do not suffice to explain the origin of it. So someone is supposed to first look for SC, and if it is seen, then there is some reason to believe that naturalistic mechanisms cannot be responsible.

    Oops, problem: Specified Complexity, as defined by Dembski in his 2005-2006 paper, is defined as only present when natural evolutionary processes cannot with any reasonable probability produce the high level of “information”.

    So do we first assess whether Specified Complexity is present, then ask about the probability of producing that “high information content”? Or are we not even able to say that Specified Complexity is present until after we calculate the probability of the “high information content” being present under natural causes?

    Perhaps Frankie can explain what they (and he) mean.

    (And by the way, notice the wording that SC and IC “constitute strong indicators” of intelligent design. Indicators. Plural. So observing either one is a strong indicator. It is not required to observe both. I note this in case anyone tries to claim that IC is essential to the Design Inference.)

  25. In reconsidering the conservation law for information, it seems increasingly plausible to assume that information should be considered as the basic parameter for describing living systems and that a conservation law can in all cases be interpreted so as to be valid. It appears further to be capable of an unambiguous interpretation.

    – Herman R. Branson, Department of Physics, Howard University (1953)

    So Tom,

    Is a conservation law for information just a pipe dream? Or did DEM just happen to get it wrong? The idea, obviously, did not originate with DEM.

  26. Mung,

    There are many different definitions of information. A conservation law for one is not a conservation law for any of the others. References to “conservation of information” are fairly common in quantum mechanics. The term has a very precise meaning in that context. Googling “conservation of information” unitarity matrix, I get 1.67 million hits. The one listed first for me is this recent discussion at Physics Forums. The opening post is poor, but some of the responses are good.

    (Then again, physicists investigating deterministic chaos, i.e., nonlinear dynamical systems that are not random, speak of creation of information. Knowing the technical meaning, it seems to me more like gain of information by the observer than creation of information by nature. But I’ve had a physicist colleague insist that it really is creation of information. So… I am not a physicist.)

    Why do Dembski et al. not mention conservation of information in quantum theory? I suspect that they’re none too anxious to highlight the fact that they’re challenging fundamental physics, and not just biological evolution. But I really don’t want to stray into this sort of speculation, in a thread where I’m emphasizing a glaring error in logic. Also, I’m trying to refrain from Dembski-bashing for a while.

    Although my early claims, in the context of NFL, about conservation of information à la Shannon were bunk, I later dealt with algorithmic information (Kolmogorov complexity), and got things pretty much right. Basically, if you apply the function f to argument x, then there is a limit to how much the Kolmogorov complexity of the result y = f(x) can exceed that of x. And the upper bound on the difference is the Kolmogorov complexity of f.

        \[K(f(x)) \leq K(f) + K(x)\]

    (I’ve suppressed some technical details.) The earliest use of “conservation of information” in connection with a result like this, as best I can tell by Googling, was by Leonid Levin (early 1970’s). I don’t recall that Peter Medawar, whom Dembski likes to cite, mentions Kolmogorov complexity, but what he says about conservation of information is similar to the inequality I just wrote.
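
    Kolmogorov complexity is uncomputable, so nothing executable can demonstrate the inequality itself. But a crude analogy, with zlib’s compressed length standing in for K, conveys the flavor (my sketch; compressed length is emphatically not Kolmogorov complexity):

        import random
        import zlib

        def c(data):
            return len(zlib.compress(data))   # crude stand-in for K

        x = bytes(random.getrandbits(8) for _ in range(10_000))   # mostly incompressible
        fx = bytes(b ^ 0xFF for b in x)   # f: flip every bit, a very simple function

        print(c(x), c(fx))   # nearly equal: applying a simple f adds little "information"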

  27. Tom English,

    Winston Ewert, a software engineer at Google, Inc., indicates in a new paper that he is affiliated with the Biologic Institute, which is funded primarily by the Discovery Institute. That, in my mind, makes him a public person. Sad to say, the gloves will be coming off. But not in this thread. Here I stick to exposing the most obvious of errors in the claim that the Conservation of Information Theorem applies to nature.

    I look forward to your new thread. I read Ewert’s paper and there is plenty wrong with his approach. I’ll leave those details for the new topic, but I note this from his conclusion:

    “Thus far, the available evidence strongly supports the claims in this paper. Biological evolution cannot resolve the challenge of potentiating mutations.”

    Lenski has conclusively demonstrated otherwise. When your model doesn’t align with reality, most people adjust the model.

  28. With all due respect, Prof. Felsenstein, this seems muddled thinking. Variation and selection can’t act on a single offspring. What is selection if not a choice, implying two or more? Hence your earlier reply that excess reproduction is required for organisms to avoid extinction. This is why I said that non-teleological step-wise incremental change never gets out of the starting gate. It can’t traverse the gap between a single offspring and several offspring. Only design can do that.

    Why do codfish lay billions of eggs? Same reason insects produce thousands, rabbits produce several, and dolphins produce a single offspring. There appears to be a hierarchical arrangement of excess reproduction. From a design perspective, it makes sense if your goal is to develop an interlocking, interdependent, balanced ecosphere. To do that, organisms need to produce for the community in order to earn their keep, so to speak. So the codfish are supplying food to the marine food chain and guaranteeing their survival at the same time.

    Joe Felsenstein:
    Steve:

    That is silly. If there is no excess reproduction, there is still reproduction, and selection can act on the amount of reproduction, making it more (or less) excess.

    Excess reproduction above 2 offspring per parent would usually be necessary because in many species there is a considerable mortality before the offspring reach reproductive age.

    Why do you think codfish produce billions of eggs? Are they just obsessive?

  29. Steve:

    There appears to be a hierarchical arrangement of excess reproduction. From a design perspective, it makes sense if your goal is to develop an interlocking, interdependent, balanced ecosphere. To do that, organisms need to produce for the community in order to earn their keep, so to speak. So the codfish are supplying food to the marine food chain and guaranteeing their survival at the same time.

    Steve,

    Do you have any idea how many organisms die of starvation?

  30. Elizabeth,

    This is wrong-headed thinking.

    The biosphere is an integrated system. So reproduction is not solely about the organism’s survival, but about interdependence. Therefore, the excess reproduction is not a cost to the organism but in fact a benefit.

    As already mentioned, variation and selection, as the optimizer, are acting on something to optimize, which is excess reproduction. They cannot optimize a single offspring.

    Hence, excess reproduction is the driving force of a designed evolution. Foresight does not lie in the optimization process, but in the need for excess reproduction to 1) ensure that at least a single offspring survives to the next reproduction stage and 2) contribute to the food kitty.

    So again, foresight does not lie in organisms themselves but in design. The foresight is in choosing a way that integrates changing environments with changing organisms. The environment exhibits cyclical changes in temperature, humidity, pressure, and acidity.

    So organisms do not ‘take advantage’ of random variation. They respond to the environment with a barrage of variation in which one will hold. It’s a pretty damn robust design, as it has held for millions and millions of years.

    Elizabeth: No it doesn’t.

    How much “excess” is required depends on the environment. “Anything over two” won’t “ensure…that enough variation is enabled to meet any contingency” if the contingency in question is a 1 in 10 survival rate. On the other hand, a much larger excess won’t be advantageous if it means that resources go into reproduction and the feeding of infants in an environment where predators are few but food is scarce.

    There is an optimum reproduction rate for a given environment, and the Darwinian mechanism is an optimiser. It doesn’t require foresight.

  31. Nearly all living things on earth are single-celled. Most evolution has occurred in bacteria.

    How would you define excess reproduction in bacteria?

  32. Keiths,

    I’m not sure what you are getting at here. Starvation is part of life. It’s factored into the design. As I said, no matter what the environment presents, organisms overcome it through excess reproduction, which provides the variation that natural selection optimizes.

    Starvation is one of the by-products of that design. Yet it is not wasted, as organisms participate not only in their own survival but also in the survival of other organisms and the biosphere as a whole.

    keiths:
    Steve,

    Do you have any idea how many organisms die of starvation?

  33. Steve:
    Starvation is part of life. It’s factored into the design.

    LOL! So now starvation is part of “Design”. Was your Designer so incompetent he couldn’t figure out how to provide for one species without inflicting great damage and death on the members of another? Or is this just another case of an ID pusher claiming the hole was “designed” to exactly fit the puddle it contains?

    What is the ID explanation for the “boom-bust” cycle phenomenon often seen in predator-prey species relationships?

  34. Steve,

    So much for the “syncronization [sic] of excess reproduction at each level of life”.

    Are you now going to argue that the level of starvation is finely tuned? How many deaths by starvation does it take to make God the Designer happy?

  35. Reality:
    Alan Fox,

    This is the thread from which my comment was moved to Guano, even though my comment broke no rule. My comment should be put back.

    I have nothing against Tom English and I strongly support his efforts to expose IDiotic BS but there is no justification and no rule for giving him special status on this site. By doing that you are diminishing the status of everyone else here. Rules should not be arbitrarily made up because of who starts or comments in a thread.

    I wish I could have Alan move my comment that precipitated all of this to Guano. But it’s too late for that. I will not be taking off the gloves with Ewert. I don’t want to handle IDshit behavior, more vile than anything that ever issued from the cloaca of a penguin, anymore. I’ve got technically strong responses. And I’ve got a lot to learn about making them clear.

    What I’m trying to do is not higher in status. It’s qualitatively different from what we usually do at TSZ. And I’m not the only person ever to address technical matters here.

    I did not explain well how the stick breaking relates to the evolutionary process. I’m having trouble completing the post. But it at least announces a dramatic change in ID claims, coincidentally at the time when Dembski announces his (quasi-)retirement, and points readers to Joe’s post at PT.

  36. Petrushka,

    I think it is similar in bacteria. Bacterial colonies have been shown to act as a single organism. Therefore, they can be seen as a multi-cellular organism. The aggregate of each self-reproducing cell results in an extremely high frequency of reproduction, which has the same effect: creating at least one variation that will hold.

    So there is no need for foresight in the organism. The huge reproductive quantity will guarantee a successful variation. The random element is in which variation will eventually take. But there is no randomness in the designed element of high-frequency reproduction.

    Now does this always happen in each and every circumstance? No. But obviously it does work the majority of the time. A simple, robust design element. Hence, life continues its billions of years of existence.

    petrushka:
    Nearly all living things on earth are single-celled. Most evolution has occurred in bacteria.

    How would you define excess reproduction in bacteria?

  37. Keiths,

    Ah, you have an issue with death, I see. But is that not a religious question? Why would a designer create a design that includes death, you say? But if death is just a transition from one state to another, what is the problem?

    Are you protesting against God? But wait, wait……I thought you were an atheist? There are no Gods, remember?

    Hmmmm.

    keiths:
    Steve,

    So much for the “syncronization [sic] of excess reproduction at each level of life”.

    Are you now going to argue that the level of starvation is finely tuned? How many deaths by starvation does it take to make God the Designer happy?

  38. I’ve moved some off-topic comments, including some of mine and Tom’s, to the “Failure to Respond” thread, while Tom’s post is featured, so that ID proponents (maybe even Winston Ewert) can respond in the comments without the discussion being swamped by off-topic ones.

    I also moved some comments about moderation to the moderation issues thread.

    ETA: fixed messed-up link

  39. Hi Tom, please let me know if I’m understanding your article correctly. Are you saying the probability of randomly choosing a biased process is relatively high? E.g., in the stick analogy, if you select a broken stick at random, you’re very likely to have a stick with non-uniform breaks. So, based on this, can we conclude that the mere presence of bias without a specified target is not a good indicator of design?
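
    For instance, a quick simulation (my own check, with “strongly biased” set at the arbitrary threshold of some segment reaching at least twice the uniform length) suggests the answer is yes:

        import random

        def random_biases(n=6):
            breaks = sorted(random.uniform(0, n) for _ in range(n - 1))
            points = [0.0] + breaks + [float(n)]
            return [b - a for a, b in zip(points, points[1:])]

        trials = 100_000
        hits = sum(max(random_biases()) >= 2 for _ in range(trials))
        print(hits / trials)   # roughly 0.73: strong bias toward some outcome is typical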

  40. Elizabeth: Missing from just about any ID calculation I have seen is an estimated expected value under the null of No Design.

    Umm, it is up to you and yours to provide the numbers for the null. And you have failed.
