Trojan EleP(T|H)ant?

Winston Ewert has responded at ENV to my post The eleP(T|H)ant in the room. Thanks to Steve for the heads-up.

So, here we go:

Part I

First, Winston graciously acknowledges the eleP(T|H)ant:

I wrote here previously (“Design Detection in the Dark“) in response to blogger Elizabeth Liddle’s post “A CSI Challenge” at The Skeptical Zone. Now she has written a reply to me, “The eleP(T|H)ant in the room.” The subject of discussion is CSI, or complex specified information, and the design inference as developed by William Dembski.

CSI is the method by which we test chance hypotheses. For any given object under study, there are a variety of possible naturalistic explanations. These are referred to as the relevant chance hypotheses. Each hypothesis is a possible explanation of what happened to produce the object. If a given object is highly improbable under a given hypothesis, i.e. it is unlikely to occur, we say that the object is complicated. If the object fits an independent pattern, we say that it is specified. When the object exhibits both of these properties we say that it is an example of specified complexity. If an object exhibits specified complexity under a given hypothesis, we can reject that hypothesis as a possible explanation.

 

The design inference argues from the specified complexity in individual chance hypotheses to design. Design is defined as any process that does not correspond to a chance hypothesis. Therefore, if we can reject all chance hypotheses we can conclude by attributing the object under study to design. The set of all possible chance hypotheses can be divided into hypotheses that are relevant and those that are irrelevant. The irrelevant chance hypotheses are those involving processes that we have no reason to believe are in operation. They are rejected on this basis. We have reason to believe that the relevant chance hypotheses are operating, and these hypotheses may be rejected by our applying the criterion of specified complexity. Thus, the design inference gives us reason to reject all chance hypotheses and conclude that an object was designed.

 

Originally, Liddle presented a particular graphic image of unknown origin and asked whether it is possible to calculate the probability of its being the product of design. In reply, I pointed out that knowing the potential chance hypotheses is a necessary precondition of making a design inference. Her response is that we cannot calculate the probabilities needed to make the design inference, and even if we could, that would not be sufficient to infer design.

 

He then enumerates what he thinks are my errors:

Elizabeth Liddle’s Errors

At a couple of points, Liddle seems to misunderstand the design inference. As I mentioned, two criteria are necessary to reject a chance hypothesis: specification and complexity. However, in Liddle’s account there are actually three. Her additional requirement is that the object be “One of a very large number of patterns that could be made from the same elements (Shannon Complexity).”

 

This appears to be a confused rendition of the complexity requirement. "Shannon Complexity" usually refers to the Shannon Entropy, which is not used in the design inference. Instead, complexity is measured as the negative logarithm of probability, known as the Shannon Self-Information. But this description of Shannon Complexity would only be accurate under a chance hypothesis where all rearrangements of parts are equally likely. A common misconception of the design inference is that it always calculates probability according to that hypothesis. Liddle seems to be plagued by a vestigial remnant of that understanding.

Um, no – it's Dembski himself and his followers who seem to be plagued by that vestigial remnant.  Perhaps they should have it surgically removed.  In a recent talk (24th January 2013), which I partially transcribed, Dembski said:

Well, it turned out what was crucial for detecting design was this what I called Specified Complexity, that you have a pattern, where the pattern signifies an event of low probability, and yet the pattern itself is easily described, so it’s specified, but also low probability.  And I ended up calling it Specified Complexity, I don’t want to get into the details because this can be several lectures in itself, but so there was this marker, this sign of intelligence, in terms of specified complexity, it was a well-defined statistical notion, but it turned out it was also connected with various concepts  in information theory,  and as I developed Specified Complexity, and I was asked back in 05 to say what is the state of play of Specified Complexity, and I found that when I tried to cash it out in Information Theoretic terms, it was actually a form of Shannon Information,  I mean it had an extra twist in it, basically it had something called Kolmogorov complexity that had to be added to it.

Actually, Dembski is being a little curly here.  In his 2005 paper, he says:

It's this combination of pattern-simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance) that makes the pattern exhibited by (ψR) — but not (R) — a specification.

The "complexity" part of Specified Complexity is the "event-complexity" – P(T|H) – which can only be the part referred to as "Shannon information" in his talk. Moreover, in his 2005 paper, his worked examples of "event-complexity" are of Shannon entropy, although he does indeed say that when computing P(T|H), H must be "the relevant chance hypothesis", which of course will not necessarily be random independent draw.  But not only does Dembski spend no time showing us how to compute P(T|H) for anything other than random independent draw (i.e. the Shannon entropy), he goes to great lengths to show us examples where H is random independent draw, the negative log of which gives us the Shannon entropy. And the only attempts to calculate any version of CSI (including the various versions where function serves as the Specification) that I have seen have used random independent draw.
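To make that concrete: under the hypothesis of random independent draw from a uniform alphabet of k characters, a single pattern of length n has

\[ P(T\mid H_{\mathrm{uniform}}) = k^{-n}, \qquad -\log_2 P(T\mid H_{\mathrm{uniform}}) = n\log_2 k \ \text{bits}, \]

which is just the Shannon entropy of n independent uniform draws from that alphabet.  It is only under that very special H that the "complexity" term cashes out so neatly as Shannon information; for any other H (selection, drift, recombination…) you actually have to do the hard work of modelling the process.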

Moreover, the "specified" part of CSI is the "pattern-simplicity" part – the Kolmogorov part.  This is not Kolmogorov complexity but its negation – the more compressible (less Kolmogorov-complex) a pattern is, the more "specified" it is.  Dembski must be aware of this – perhaps he thought his 2013 audience would be happy to accept that "Kolmogorov complexity" has something to do with the Complexity in Specified Complexity, even though the Kolmogorov part is the Specification part, where more Specified = less complexity, and that "Shannon information" must be something to do with the specification (a sequence that means something, right?), whereas in fact that's the "complexity" (entropy) part. So he's doing some voodoo here.  In the 2005 paper, Dembski quite clearly defines Specified Complexity as a sequence that is improbable under some hypothesis (possibly random independent draw, in which case the answer would be "cashed out" in Shannon information), i.e. "complex", AND is readily compressible (has low Kolmogorov complexity, and is therefore specified).  I was recently taken to task by Barry Arrington for violating language by using Dembski's terminology.  Heh.  I think Dembski himself is exploiting his own language violations to squirt ink, here.
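For reference, and because it keeps getting lost in the verbiage, the quantity Dembski actually defines in the 2005 paper is (as near as I can render it):

\[ \chi = -\log_2\!\left[\,10^{120}\cdot \varphi_S(T)\cdot P(T\mid H)\,\right], \]

where φS(T) is (roughly) the number of patterns at least as easy to describe as T, 10^120 is his bound on the number of bit-operations available in the observable universe, and a "specification" – hence a design inference – is claimed whenever χ > 1.  The specificational work is done by φS(T), the simplicity/compressibility side; the "complexity" work is done by P(T|H) – the eleP(T|H)ant.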

But I do accept that the eleP(T|H)ant is there, in the 2005 paper, in the small print.  Indeed, I pointed it out.  It is not my vestigial organ that is causing the problem.  It’s precisely that vestigial organ I am calling the eleP(T|H)ant, and asking to be addressed.

Moving on:

As I emphasized earlier, the design inference depends on the serial rejection of all relevant chance hypotheses. Liddle has missed that point. I wrote about multiple chance hypotheses but Liddle talks about a single null hypothesis. She quotes the phrase “relevant [null] chance hypothesis”; however, I consistently wrote “relevant chance hypotheses.”

Fine.  I am all for serial rejection of specific null hypotheses. It's how science is usually done (although it has its problems).  In which case we can all continue to ignore the single eleP(T|H)ant that Dembski considers rejectable in one single equation, and get on with doing some science.

Liddle’s primary objection is that we cannot calculate the P(T|H), that is, the “Probability that we would observe the Target (i.e. a member of the specified subset of patterns) given the null Hypothesis.” However, there is no single null hypothesis. There is a collection of chance hypotheses. Liddle appears to believe that the design inference requires the calculation of a single null hypothesis somehow combining all possible chance hypotheses into one master hypothesis. She objects that Dembski has never provided a discussion of how to calculate this hypothesis.

Yes, indeed I do object.  Like you, Winston, I entirely agree that "there is no single null hypothesis". That's why Dembski's 2005 paper is so absurd – it implies that we can compute a pantechnicon null to represent "all relevant chance hypotheses" and plug the expected probability distribution under that null into his CSI formula.  Of course it can't be done.  You know it, and I expect that Dembski, being no fool, knows it too (just as I am sure he knows that "pattern simplicity" was a tactical error, hence his attempt to imply that "Kolmogorov complexity" is part of the CSI recipe, even though the Kolmogorov part is the simplicity part, and the "complexity" part is simple improbability, which takes the same value for a meaningful pattern as for the same pattern drawn "by chance").

But that is because Dembski’s method does not require it. Therefore, her objection is simply irrelevant.

Um, Dembski's method does require it. Nothing in that 2005 paper implies anything other than a single null.  Sure, that null is supposed to be H, namely

…the relevant chance hypothesis that takes into account Darwinian and other material mechanisms

But I see only one H.  Are you sure you are not seeing pink eleP(T|H)ants, Winston?

Unknown Probabilities

Can we calculate the probabilities required to reject the various chance hypotheses? Attempting to do so would seem pretty much impossible. What is the probability of the bacterial flagellum under Darwinian evolution? What is the probability of a flying animal? What is the probability of humans? Examples given of CSI typically use simple probability distributions, but calculating the actual probabilities under something like Darwinian evolution is extremely difficult.

Yes indeed.  Thank you, Winston.

Nevertheless, intelligent design researchers have long been engaged in trying to quantify those probabilities. In Darwin’s Black Box, Mike Behe argues for the improbability of irreducibly complex systems such as the bacterial flagellum. In No Free Lunch, William Dembski also offered a calculation of the probability of the bacterial flagellum. In “The Case Against a Darwinian Origin of Protein Folds,” Douglas Axe argues that under Darwinian evolution the probability of finding protein folds is too low.

Yes, IC is probably the best argument for ID.  Unfortunately, it's still terrible.  Dembski and Behe both made their bacterial flagellum calculations (well, assertions – I don't see much in the way of calculations in either book) without taking into account at least one possible route, i.e. that proposed by Pallen and Matzke. Pallen and Matzke's route may well be wrong, but when computing probabilities, at the very least, the probability of such a route should be taken into account.  Much more to the point, as Lenski and his colleagues have shown (both in their AVIDA program and in their E. coli cultures), being "IC" by any of the definitions given by Behe or Dembski is not a bar to evolvability, either in principle (AVIDA) or in practice (E. coli). Dembski writes, in No Free Lunch:

The Darwinian mechanism is powerless to produce irreducibly complex systems for which the irreducible core consists of numerous diverse parts that are minimally complex relative to the minimal level of function they need to maintain.

This claim has simply been falsified.  Of course it is still possible that the bacterial flagellum somehow couldn't have, or didn't, evolve.  But the argument that "the Darwinian mechanism" is in principle "powerless" to evolve an IC system is now known to be incorrect.  As for Axe's paper, essentially his claim is the same, namely that functional protein folds are deeply IC (using Behe's IC-pathway definition), and are therefore improbable by Darwinian mechanisms.  But he infers this via a similarly falsified claim:

So, while it is true that neither structure nor function is completely lost along the mutational path connecting the two natural sequences, natural selection imposes a more stringent condition. It does not allow a population to take any functional path, but rather only those paths that carry no fitness penalty.

It turns out there is no such constraint. "Natural selection", as we now know from both empirical observations and mathematical models, is merely a bias in the sampling, in any one generation, in favour of traits that increase the probability of reproductive success in the current environment.  As most mutations are near-neutral, many will become highly prevalent in the population, including some substantially deleterious ones (ones that "carry a fitness penalty").  So again, the argument that IC structures are unevolvable, or only evolvable by pathways that do not include deleterious steps, is simply unfounded. Falsified, in the strict Popperian sense.
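Just to illustrate what "merely a bias in the sampling" means in practice, here is a minimal toy simulation (a haploid Wright-Fisher model; the population size, selection coefficient and starting frequency are arbitrary illustrative choices, not a model of any real system).  Selection enters only as a weighting of the sampling probabilities, and a mildly deleterious allele nonetheless drifts to fixation in a non-trivial fraction of runs:

    import random

    def wright_fisher(N=200, s=-0.005, p0=0.1, max_gens=5000):
        """Haploid Wright-Fisher: 'selection' is just a bias in the sampling weights."""
        p = p0
        for _ in range(max_gens):
            if p in (0.0, 1.0):                                  # allele lost or fixed
                break
            w = p * (1 + s) / (p * (1 + s) + (1 - p))            # frequency after selection
            p = sum(random.random() < w for _ in range(N)) / N   # drift: N binomial draws
        return p

    random.seed(1)
    runs = 500
    fixed = sum(wright_fisher() == 1.0 for _ in range(runs))
    print(f"slightly deleterious allele fixed in {fixed}/{runs} runs")

Nothing in the sampling step "forbids" a path that carries a fitness penalty; it is merely made less probable.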

Again, moving on:

While it is unreasonable to calculate the exact probabilities under a complex chance hypothesis, this does not mean that we are unable to get a general sense of those probabilities. We can characterize the probabilities of the complex systems we find in biology, and as the above research argues those probabilities are very small.

It is indeed “unreasonable to calculate the exact probabilities under a complex chance hypothesis”.  Precisely.  But I’d go further.  It is even more unreasonable to make a rough guesstimate (I hate that word, but it works here).  The entire concept of CSI is GIGO.  If you start with the guesstimate that Darwinian mechanisms are unlikely to have resulted in T, then P(T|H) will be, by definition, small.  If you start with the guesstimate that Darwinian mechanisms could well have resulted in T, then P(T|H) will be, by definition, large.  Whether the output from your CSI calculation reaches your threshold (ah, yes, that threshold….)  for rejection therefore depends entirely on the Number You First Thought Of.
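To see just how completely the output tracks the Number You First Thought Of, here is a toy run of Dembski's 2005 formula (the φS(T) value of 10^5 is pure invention on my part, and the two P(T|H) values are the competing guesstimates):

    import math

    def chi(p_t_given_h, phi_s=1e5, resources=1e120):
        """Dembski (2005): chi = -log2( 10^120 * phi_S(T) * P(T|H) ); 'design' if chi > 1."""
        return -math.log2(resources * phi_s * p_t_given_h)

    print(chi(1e-200))   # guess that Darwinian mechanisms can't do it:  ~ +249 bits -> "design"
    print(chi(1e-50))    # guess that Darwinian mechanisms could do it:  ~ -249 bits -> no design

Same formula, same object, same threshold; the verdict is decided entirely by the guess that went in.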

And so, indeed, to get a defensible Number, we must turn to actual empirical research.  But the only "above research" cited is Behe's (scarcely empirical, rebutted by Pallen and Matzke, and containing no actual estimate), and Axe's, which needs to be taken together with the vast body of other empirical research into the evolution of protein folds.  And both are based on a falsified premise. Sure, one might be able to "get a general sense of those probabilities", but in order to make the strong claim that non-design must be rejected (at p < 10^-150!) we need something more than a "general sense of those probabilities".  Bear in mind that no scientist rejects ID, at least qua scientist. There is no way to falsify it (how would you compute the null?).  ID could well be correct.  But having a hunch that non-design routes can't do the job is not enough for an inference with that kind of p value.  No sir, not nohow.

In summary so far: Ewert has done little more than agree with me that we cannot reject non-design on the basis of the single pantechnicon null proposed in Dembski's 2005 paper, and would therefore presumably also agree that the quantity chi is uncomputable, and therefore useless as a test for design (which was the point of my original glacier puzzle). Instead he argues that research does suggest that Darwinian mechanisms are implausible, but he does so citing research that is based on the assumption that Darwinian mechanisms cannot produce IC systems (they can), or produce them by IC pathways (they can), or by IC pathways that include deleterious steps (they can).  Now, it remains perfectly possible that bacterial flagella and friends could not have evolved.  But the argument that they cannot have evolved because "natural selection… does not allow a population to take any functional path, but rather only those paths that carry no fitness penalty" fails, because the premise is false.

****************************************************************************************

Part II

Earman and Local Inductive Elimination

Above, I mentioned the division of possible chance hypotheses into the relevant and irrelevant categories. However, what if the true explanation is an unknown chance hypothesis? That is, perhaps there is a non-design explanation, but as we are ignorant of it, we rejected it along with all the other irrelevant chance hypotheses. In that case, we will infer design when design is not actually present.

 

Dembski defends his approach by appealing to the work of philosopher of physics John Earman, who defended inductive elimination. An inductive argument gives evidence for its conclusion, but stops short of actually proving it. An eliminative argument is one that demonstrates its conclusion by proving the alternatives false rather than proving the conclusion true. The design inference is an instance of inductive elimination: it gives us reason to believe that design is the best explanation.

Liddle objects that Dembski is not actually following what Earman wrote, and she quotes from Earman: “Even if we can never get down to a single hypothesis, progress occurs if we succeed in eliminating finite or infinite chunks of the possibility space. This presupposes of course that we have some kind of measure, or at least topology, on the space of possibilities.”

Dembski has not defined any sort of topology on the space of possibilities. He has not somehow divided up the space of all possible hypotheses and systematically eliminated some or all. Without that topology, Dembski cannot claim to have eliminated all the chance hypotheses.

 

True.  But nor can he claim to have eliminated any unless he actually calculates the eleP(T|H)ant for each.

However, Liddle does not appear to have understood what Earman meant. Earman was not referring to a topology over every conceivable hypothesis, but over the set of what we might call plausible hypotheses. In Earman’s approach, inductive elimination starts by defining the set of plausible hypotheses. We do not consider every conceivable hypothesis, but only those hypotheses which we consider plausible. Only then do we define a topology on the plausible hypotheses and work towards eliminating the incorrect possibilities.

 

Hang on. I venture to suggest that Winston has not understood what Earman meant.  Let me write it out again, with feeling:

Even if we can never get down to a single hypothesis, progress occurs if we succeed in eliminating finite or infinite chunks of the possibility space. This presupposes of course that we have some kind of measure, or at least topology, on the space of possibilities.

 

In other words, surely: you can only partition the total possibility space into plausible and implausible sections IF you first have “some kind of measure, or at least topology, on the space of possibilities.”  Then of course, you can pick off the plausibles one by one until you are done.  So my objection stands.  And if that isn’t what Earman meant, then I have no idea what he meant.  It seems pretty clear from his gravity example that that’s exactly what he meant – entire sections of hypothesis space could be eliminated first, leaving some plausibles.

In his discussion of gravitational theories, Earman points out that the process of elimination began by an assumption of the boundaries for what a possible theory would look like. He says:

 

Despite the wide cast of its net, the resulting enterprise was nevertheless a case of what may properly be termed local induction. First, there was no pretense of considering all logically possible theories.

Later, Earman discusses the possible objection that because not all logically possible theories were considered, it remains possible that true gravitational theory is not the one that was accepted. He says:

 

I would contend that all cases of scientific inquiry, whether into the observable or the unobservable, are cases of local induction. Thus the present form of skepticism of the antirealist is indistinguishable from a blanket form of skepticism about scientific knowledge.

In contrast to Liddle’s understanding, Earman’s system does not require a topology over all possible hypotheses. Rather, the topology operates on the smaller set of plausible hypotheses. This is why the inductive elimination is a local induction and not a deductive argument.

 

I think Winston is confused.  Certainly, Earman does not require that all hypotheses be evaluated, merely those that are plausible.  Which is fine.  However, he DOES require that first the entire set of possible hypotheses be partitioned into plausible and implausible, so that you can THEN operate "on the smaller set of plausible hypotheses".

So we could, for instance, eliminate at a stroke all theories that require, say, violation of the 2nd Law of thermodynamics; or hitherto undiscovered fundamental forces; or material designers, as those would only push back the problem of where they came from, and we are already short of cosmic time. We don't even need to worry about that herd of eleP(T|H)ants. And as Darwinian processes require self-replicators, we can eliminate Darwinian processes as an account of the first self-replicators.  And so on.  But that still leaves "Darwinian and other material processes", as Dembski correctly states in his 2005 paper.  And those null distributions have to be properly calculated if they are going to be rejected by null hypothesis testing. Which is not, of course, what Earman is even talking about – his entire book is about Bayesian inference.

Which brings me to my next point: Dembski's 2002 piece, cited by Ewert below, precedes his 2005 paper (and indeed No Free Lunch), and presents a Bayesian approach to ID, not a Fisherian one.  In the 2005 paper, Dembski goes to excruciating lengths to justify a Fisherian approach, and explicitly rejects a Bayesian one:

I’ve argued at length elsewhere that Bayesian methods are inadequate for drawing design inferences. Among the reasons I’ve given is the need to assess prior probabilities in employing these methods, the concomitant problem of rationally grounding these priors, and the lack of empirical grounding in estimating probabilities conditional on design hypotheses.

Indeed they do.  But Fisherian methods don't get him out of that responsibility (for grounding his priors); they merely allow him to sneak the priors in via a Trojan EleP(T|H)ant.
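To spell out why, in the standard odds form (my notation, not Dembski's):

\[ \frac{P(\mathrm{design}\mid T)}{P(\mathrm{chance}\mid T)} \;=\; \frac{P(T\mid \mathrm{design})}{P(T\mid H)} \times \frac{P(\mathrm{design})}{P(\mathrm{chance})}. \]

The same P(T|H) sits in the denominator of the likelihood ratio, and the priors Dembski says cannot be rationally grounded sit in the final factor.  Declining to write the priors down does not make them go away; it just smuggles them, and the eleP(T|H)ant, in inside the wooden horse.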

 

Furthermore, Dembski discusses the issue in “Naturalism’s Argument from Invincible Ignorance: A Response to Howard Van Till,” where he considers the same quote that Liddle presented:

In assessing whether the bacterial flagellum exemplifies specified complexity, the design theorist is tacitly following Earman’s guidelines for making an eliminative induction work. Thus, the design theorist orders the space of hypotheses that naturalistically account for the bacterial flagellum into those that look to direct Darwinian pathways and those that look to indirect Darwinian pathways (cf. Earman’s requirement for an ordering or topology of the space of possible hypotheses). The design theorist also limits the induction to a local induction, focusing on relevant hypotheses rather than all logically possible hypotheses. The reference class of relevant hypotheses are those that flow out of Darwin’s theory. Of these, direct Darwinian pathways can be precluded on account of the flagellum’s irreducible and minimal complexity, which entails the minuscule probabilities required for specified complexity. As for indirect Darwinian pathways, the causal adequacy of intelligence to produce such complex systems (which is simply a fact of engineering) as well as the total absence of causally specific proposals for how they might work in practice eliminates them.

Dembski is following Earman’s proposal here. He defines the boundaries of the theories under consideration. He divides them into an exhaustive partition, and then argues that each partition can be rejected therefore inferring the remaining hypothesis, design. This is a local induction, and as such depends on the assumption that any non-Darwinian chance hypothesis will be incorrect.

 

Yep, he is.  And he is using Bayesian reasoning: indirect Darwinian pathways are rejected because of his higher priors on something else, namely "the causal adequacy of intelligence to produce such complex systems" (which I would dispute, but let's say, arguendo, that he is justified), and on "the total absence of causally specific proposals", which is ludicrous.  We do not demand to know the causally specific pathway by which a rock descended from a cliff before we can infer that erosion plus gravity were the likely cause.  What we need to know is whether indirect Darwinian pathways can produce "IC" systems, and we know they can.  They are "causally adequate" (and a heck of a sight more "causally adequate" than an "intelligence" with no apparent physical attributes capable of assembling a molecule – intelligence doesn't make things, intelligent material beings do).

But all that is moot, because his 2002 argument is a Bayesian inference, not a Fisherian one.  If he wants to play the Fisherian game, then he's welcome to do it, but in that case he has to carefully construct the probability distribution of every null he wants to reject, and, if he succeeds, claim only to have rejected that null, not a load of other nulls he thinks he can get away with rejecting along with it.  Which then won't allow him to reject "non-design" – merely the nulls he has modelled.

Frankly, if I were an IDer I’d go down the Bayesian route. But, as Dembski says, it does present problems.

For Dembski.

At the end of the day, the design inference is an inductive argument. It does not logically entail design, but supports it as the best possible explanation.

 

“Best possible explanation” is a Bayesian inference, not a Fisherian one.  Fine.  But let’s bury CSI in that case.

It does not rule out the possibility of some unknown chance hypotheses outside the set of those eliminated.

 

It does not even rule out hypotheses within the set of those claimed to have been eliminated, because you can't do that unless you can compute that null.  Nothing Winston has written here gets him, or Dembski, off the hook of having to reject a Darwinian null, for a post-OoL system, because Darwinian processes pass any plausibility test.  So if you want to reject it, you have to compute it.  Which is impossible.  It would be like trying to reject the null that today's weather was the result of a set of causal chains X, where X is a set of specific hypothetical turbulent states, and then concluding that today's weather wasn't a result of a turbulent state. All we can say is that we know that turbulence leads to striking but unpredictable weather systems, and so we can't reject it as a cause of this one.

However, rejecting the conclusion of design for this reason requires the willingness accept an unknown chance hypothesis for which you have no evidence solely due to an unwillingness to accept design. It is very hard to argue that such a hypothesis is actually the best explanation.

Indeed.  But there is sleight of hand here.  "Darwinian processes" is a known "chance hypothesis" (Dembski's and your term, not mine), but the specific causal chain for any one observed system is probably unknown.  That doesn't mean we can ignore it, any more than we can reject ID because there are many possible ID hypotheses (front-loading; continuous intervention; fine-tuning; OoL only, whatever).  And indeed there is a double standard.  If we deem "intelligence" "causally adequate to produce such complex systems" because it "is simply a fact of engineering", despite the fact that we have no hypothesis for any specific causal pathway by which a putative immaterial engineer could make an IC system, then we can at the very least deem Darwinian processes "causally adequate", because we know that those processes can produce IC systems, by IC pathways that include deleterious steps.

Closing Thoughts

Liddle objects that we cannot calculate the probability necessary to make a design inference. However, she is mistaken because the design inference requires that we calculate probabilities, not a probability.

Well, there was slightly more to my objection than the number of hypotheses! And if Dembski agrees there must be more than one, I suggest he retract CSI as defined in his 2005 paper.

Each chance hypothesis will have it own probability, and will be rejected if that probability is too low. Intelligent design researchers have investigated these probabilities.

I have not seen a single calculation of such a probability that did not rest on the falsified premise that IC systems cannot evolve by indirect pathways, including those that include deleterious steps.

Liddle’s objections to Dembski’s appeal to Earman demonstrate that she is the one not following Earman. Earman’s approach involves starting assumptions about what a valid theory would look like, in the same way that any design inference makes starting assumptions about what a possible chance hypotheses would look like.

It also involves Bayesian inference, which Dembski specifically rejects. It also seems to involve rejecting a perfectly plausible hypothesis on falsified grounds.

In short, neither of Liddle’s objections hold water. Rather both appear to be derived from a mistaken understanding of Dembski and Earman.

 

Well, no. They remain triumphantly watertight; Winston has largely conceded their validity, by de facto agreeing with me that Dembski's definition of "chi" is useless, and thus his Fisherian rejection method invalid. Winston substitutes a Bayesian approach, in which he improperly eliminates perfectly plausible hypotheses (e.g. indirect Darwinian pathways for IC systems), and offers no method for calculating their probabilities, merely appealing to researchers who have failed to note that IC is not an impediment to evolution.  He appears himself to have misunderstood Earman, and far from my "misunderstanding" Dembski, Winston appears to be telling us that Dembski did not say what he categorically did say in 2005.  At any rate, if Dembski in his 2005 paper was really telling us that we needed to use a Bayesian approach to the Design Inference and that multiple nulls must be rejected in order to infer Design, then it's odd that his words appear to indicate the precise opposite.

 

93 thoughts on “Trojan EleP(T|H)ant?”

  1. Unknown Probabilities

    Can we calculate the probabilities required to reject the various chance hypotheses? Attempting to do so would seem pretty much impossible. What is the probability of the bacterial flagellum under Darwinian evolution? What is the probability of a flying animal? What is the probability of humans? Examples given of CSI typically use simple probability distributions, but calculating the actual probabilities under something like Darwinian evolution is extremely difficult.

    And with this simple concession, apparently unbeknownst to the very author, the entire case for ID from CSI collapses to the ground like the empty shell it always was.

    You can’t calculate “the odds of evolution”. So if you can’t calculate the odds of evolution, you can’t use CSI to reject it. *sigh*

  2. Can anyone enlighten me as to what a “design inference” actually means?

    Ewert says “Design is defined as any process that does not correspond to a chance hypothesis”

    “the design inference gives us reason to reject all chance hypotheses and conclude that an object was designed.”

    and

    “At the end of the day, the design inference is an inductive argument. It does not logically entail design, but supports it as the best possible explanation. It does not rule out the possibility of some unknown chance hypotheses outside the set of those eliminated.”

    He uses the phrase 16 or so times and yet there is no explanation of what it is or why it should be the default inference.

  3. Hm, what are the odds that a designer would produce juvenile platypus teeth (which are shed before any use for chewing) and the coccyx as an overly-complex muscle attachment? Or, the different macroevolutionary patterns found in gene-swapping organisms from those like vertebrates which rarely transfer genes horizontally? The patterns of life in general?

    Actually, the odds would be understood to be extremely low for any known designer. Which is why the designer is only God (they fault us for discussing theology when we speak of design limits–no, it’s the other way around, we’re discussing a non-god designer, as they insist it could be, and they don’t like it), since the idea is that we don’t know how God would do it. But then it becomes impossible to find evidence for that sort of designer, although they try to foist off functional complexity = design.

    But “mysterious occurrence” isn’t the same thing as design, in fact. “Miracle” and “design” are different terms, only they try to conflate the two by claiming that “mind” is something supernatural.

    Glen Davidson

  4. “Design is defined as any process that does not correspond to a chance hypothesis”

    “It [the design inference] does not rule out the possibility of some unknown chance hypotheses outside the set of those eliminated.”

    Taking these two statements together with the Dembski definition of “chance” as including known evolutionary mechanisms, it seems that “design”, by Ewert’s definition, is synonymous with “unknown.”

    The only reason to use the word “design” is to encourage equivocation between the operational definition used in his argument and the common English usage.

  5. Perhaps someone in the ID movement could provide an example of biological design in action. I would be particularly interested in seeing the steps a designer would take to design an irreducibly complex structure.

  6. Patrick:
    “Design is defined as any process that does not correspond to a chance hypothesis”

    “It [the design inference] does not rule out the possibility of some unknown chance hypotheses outside the set of those eliminated.”

    Taking these two statements together with the Dembski definition of “chance” as including known evolutionary mechanisms, it seems that “design”, by Ewert’s definition, is synonymous with “unknown.”

    The only reason to use the word “design” is to encourage equivocation between the operational definition used in his argument and the common English usage.

    Dembski always calls the opposite of the "Design" hypothesis the "Chance" hypothesis, which I think is very misleading.

    I call it the non-Design hypothesis. I assume the Design hypothesis is the hypothesis that a Designer was involved.

  7. "I assume the Design hypothesis is the hypothesis that a Designer was involved."

    That’s what Dembski et al. apparently intend for everyone to assume, but it does not follow from Ewert’s definition.

    Using his definition, “design” simply means “not corresponding to any known natural mechanism.” In short, “unknown.” The use of the word “design” is unnecessary and misleading.

    If any intelligent design creationist can come up with empirical evidence for a designer (and implementor), I’m all ears. What they have, though, is just a wordy way of saying “We don’t know how this came about.”

  8. Patrick:
    "I assume the Design hypothesis is the hypothesis that a Designer was involved."

    That’s what Dembski et al. apparently intend for everyone to assume, but it does not follow from Ewert’s definition.

    Using his definition, "design" simply means "not corresponding to any known natural mechanism." In short, "unknown." The use of the word "design" is unnecessary and misleading.

    If any intelligent design creationist can come up with empirical evidence for a designer (and implementor), I'm all ears. What they have, though, is just a wordy way of saying "We don't know how this came about."

    Exactly, and that’s the point I’ve been trying to make: that if you insist on using Fisherian hypothesis testing for such an idiotic purpose, the only thing you can conclude is that you’ve rejected the nulls you tested. “None of the above” and “design” are not exclusive categories.

    And appealing to Earman is hopeless because Earman isn’t talking about Fisherian hypothesis testing, he’s talking about Bayesian inferences where you are actually comparing two (or more) hypotheses.

    Obviously with a Bayesian inference you can cut down the number of hypotheses – indeed you can even compare two very low probability models and keep the slightly more probable one.

    But with Fisher, you are fucked.

  9. Lizzie: Dembski always calls the opposite of the "Design" hypothesis the "Chance" hypothesis, which I think is very misleading.

    Ewert’s misconceptions are all contained in his closing assertion:

    Liddle objects that we cannot calculate the probability necessary to make a design inference. However, she is mistaken because the design inference requires that we calculate probabilities, not a probability. Each chance hypothesis will have it own probability, and will be rejected if that probability is too low. Intelligent design researchers have investigated these probabilities.

    ID/creationists cannot calculate probabilities of “chance” hypotheses using misconceptions about atoms and molecules or misconceptions about natural selection.

    Their misconceptions about genetic algorithms, as well as their misuse of the laws of thermodynamics, lead them to a complete misunderstanding of the meaning of "chance" in the processes of evolution or in the processes of the formation of complex molecular structures.

    I suspect that the reason Ewert believes that “Intelligent design researchers have investigated these probabilities” is because he believes that coins, strings of letters, dice, and junkyard parts are proper representations of how atoms and molecules behave and how natural selection works.

    If that is one’s understanding of the current state of physics, chemistry, and biology, then all “chance hypotheses” are eliminated in one swoop; where “chance hypotheses” are anything produced by “materialistic” laws of science.

    If this is the case – and I am pretty sure it is – it is circular reasoning all over again. The ID position starts out by assuming that nature can’t produce anything as complex as a living organism because to them “chance” means their misconceptions about the second law of thermodynamics; and “Darwinists” cheat by “putting in the answer” into their genetic algorithm programs.

  10. So far as I’m aware the only non-design probabilities investigated by “ID researchers” have been those pertaining to the sudden appearance of socking great big proteins and/or the corresponding genes.
    A complete strawman, since no biologist ever supposed that such things appeared by magic from a pre-biotic soup.
    The probabilities of small functional oligonucleotides and oligopeptides appearing, from which bigger molecules evolved, are of course rather greater than an IDist would care to contemplate.

  11. Violent agreement! My favorite kind!

    Considering it a bit more, Ewert can’t even say “We don’t know how this came about.” As you note, they haven’t tested any nulls that reflect modern evolutionary mechanisms and they haven’t shown that they’re even able to do so (your eleP(T|H)ant).

    It seems his argument boils down to “I don’t unnerstan’ this evolution stuff, but I’m a’gin it.”

  12. I think a lot of the errors on the EIL side arise from insisting on characterising evolution as a search, in static space, for static (and rare) targets.

    Darwinian processes feel around configuration space, which changes as a function of the exploration itself, and which contains many contiguous solutions (because similar genotypes result in similar phenotypes), and is multidimensional (can be explored along many dimensions).

  13. Patrick:
    Violent agreement! My favorite kind!

    Considering it a bit more, Ewert can't even say "We don't know how this came about." As you note, they haven't tested any nulls that reflect modern evolutionary mechanisms and they haven't shown that they're even able to do so (your eleP(T|H)ant).

    It seems his argument boils down to “I don’t unnerstan’ this evolution stuff, but I’m a’gin it.”

    plus some mathy-lookin’ stuff

  14. Lizzie: plus some mathy-lookin’ stuff

    The mathy-lookin’ stuff is key. The idea that you can start with “I can’t calculate how likely this is to have evolved naturally”, stir a bunch of fancy math into your ignorance, and then conclude that the thing couldn’t have evolved naturally is just dumb. If you don’t know, you don’t know. It’s so dumb that it would be obvious to everyone, including the proponents, if they simply stopped and thought about what they were doing. The fancy math provides the distraction that lets them avoid noticing how absurd the whole enterprise is.

  15. I wouldn’t call it bogus, exactly. It does represent an extremely low probability. You want a number that is “big enough” by a considerable margin, and 500 bits seems to fit the bill.

    Bigger numbers mean fewer false positives but more false negatives. Smaller numbers mean more false positives but fewer false negatives.

    In the case of the design inference, false positives are worse than false negatives, so a bigger number is preferred.

    The problem is that ID proponents tend to think that there is something magical about the exact number and the way Dembski derived it.
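    For reference, my own back-of-the-envelope rendering of how Dembski gets the number (from memory, so treat the details with caution): multiply the estimated number of elementary particles in the observable universe by the maximum number of state changes per second (the inverse of the Planck time) by a generous allowance of seconds (about a billion times the age of the universe):

    \[ 10^{80} \times 10^{45} \times 10^{25} = 10^{150}, \qquad \log_2 10^{150} \approx 498 \ \text{bits}, \]

    which he rounds up to the 500-bit / 1-in-10^150 "universal probability bound".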

  16. Sure. As I’ve said often enough, I’d be perfectly happy with 5 sigma if the rest of the math was valid. In my field we publish at 2! and 500 bits works out at 22 sigma IIRC (can’t do it in my head).

    As you say, it’s just there as a bit of stage business. Makes the bunny seem even more amazing 🙂

  17. Without an accurate P(T|H), the whole business is doomed. Even with an accurate P(T|H), it’s circular, as we’ve been saying for what seems like forever.

    Dembski seems to realize this. I wonder why Ewert didn’t get the memo.

  18. Jet Black reminded me recently of Feynman’s expression “cargo cult science”. I’m sorry Winston, if you are reading this, but that’s exactly how ID math appears to me. It’s cargo cult math.

    It's not based on empirical data (or not much), and it certainly is not based on validated methods of data analysis. Nobody in science computes pantechnicon nulls that incorporate everything except the study hypothesis, nor do we serially reject bits of the non-study space. If we do use null hypothesis testing (and mercifully it's becoming less common, but it is still the workhorse of data analysis) we are very precise about what we expect if our hypothesis is true, and then construct a distribution under the null – i.e. under the assumption that that hypothesis is false. For example, if I have a hypothesis that people with schizophrenia will show a different degree of modulation of oscillatory brain activity by attentional demand than shown by healthy controls, my null is dead easy: there will be no difference. That way, if I find a difference that has a low probability under my null, I can reject my null and consider my hypothesis supported. Actually, constructing that null is not quite so "dead easy". I frequently run thousands of Monte Carlo iterations to ensure that I really have appropriately characterised my null, and know exactly how often I should expect my data were my hypothesis to be false.
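    In sketch form, it is no more mysterious than this (a toy permutation test on made-up numbers; the group sizes, values and iteration count are purely illustrative):

        import random

        def mean_diff(a, b):
            return sum(a) / len(a) - sum(b) / len(b)

        def permutation_test(group_a, group_b, n_iter=10000, seed=0):
            """Build the null ('no difference') by shuffling group labels many times,
            then ask how often a difference at least as big as the observed one arises."""
            rng = random.Random(seed)
            observed = abs(mean_diff(group_a, group_b))
            pooled = group_a + group_b
            n_a = len(group_a)
            extreme = 0
            for _ in range(n_iter):
                rng.shuffle(pooled)
                if abs(mean_diff(pooled[:n_a], pooled[n_a:])) >= observed:
                    extreme += 1
            return extreme / n_iter      # empirical p-value under the null

        patients = [0.9, 1.1, 0.8, 1.0, 0.7, 0.95]    # made-up modulation scores
        controls = [1.3, 1.2, 1.4, 1.1, 1.25, 1.35]
        print(permutation_test(patients, controls))

    The point is that the null distribution is constructed, explicitly, from a precise statement of what "my hypothesis is false" would look like – not plucked from a hunch about what seems improbable.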

    But you can only do that if you know what to expect if your hypothesis is true. Unless an ID hypothesis makes some actual predictions ("front-loading" conceivably might) it can't be used to generate an appropriate null. Nonetheless, IDers go through the motions of constructing a null equation, complete with a souped-up alpha criterion (22 sigma!), plug in some values derived from a hunch (a falsified hunch at that), and claim astronomical statistical significance!

    And then bellyache when it doesn’t get past peer-review.

    (Feeling distinctly grumpy with ID today. I think it’s the banning of keiths at UD. Harrumph. But I do think it’s time ID empirical scientists learned some proper data probability theory, and the ID probability geeks learned some proper empirical science).


  19. keiths:
    Without an accurate P(T|H), the whole business is doomed. Even with an accurate P(T|H), it's circular, as we've been saying for what seems like forever.

    Dembski seems to realize this. I wonder why Ewert didn't get the memo.

    I think he got the memo. That’s why he’s trying to say that Dembski didn’t really mean that H was a single hypothesis.

    I think Dembski realised he goofed, and is sending Winston out as cannon-fodder, on the off chance we won’t notice that Winston has a different script.

  20. “Feeling distinctly grumpy with ID today.”

    I noticed a bit less British reserve in your posts today. It’s a good look for you, you should cultivate it. 😉

  21. Patrick:
    “Feeling distinctly grumpy with ID today.”

    I noticed a bit less British reserve in your posts today. It's a good look for you, you should cultivate it. ;-)

    heh. It kind of amuses me (and my son!) that I have this reputation for politeness 🙂

  22. I was recently taken to task by Barry Arrington for violating language by using Dembski’s terminology.

    I had a similar conversation with Barry a few years ago. I pointed to some of Dembski’s examples of “specified complexity” that are descriptively simple, and Barry’s response was that I had “gone around the bend of linguistic sanity”.

  23. Done! Sheesh, I’m getting old.

    I got it right at the top, but I guess it got wired in wrong. And I used to be a good speller.

    Anyway, very glad to see you. This time will you stay for tea?

    A proper conversation would be nice, even if my manners do seem to have gone astray today!

  24. I don’t care what you say about me as long as you spell my name right.

    Mae West
    P.T. Barnum
    George M. Cohan
    Will Rogers
    W.C. Fields
    Mark Twain
    Oscar Wilde

  25. There is no reason to believe that ‘design’ and ‘chance’ are mutually exclusive or collectively exhaustive.

  26. petrushka:
    I don’t care what you say about me as long as you spell my name right.

    Mae West
    P.T. Barnum
    George M. Cohan
    Will Rogers
    W.C. Fields
    Mark Twain
    Oscar Wilde

    I do sympathise. I’m gutted to have made the same mistake twice. I hate it when people call me Elisabeth Liddell.

  27. Lizzie: I think he got the memo. That's why he's trying to say that Dembski didn't really mean that H was a single hypothesis.

    My understanding of Ewert's position is that there is not one H but many, and that Dembski has ruled out all the ones that explain the adaptation by natural selection. That leaves us imagining that there is some other explanation not involving Design. Ewert then argues that

    rejecting the conclusion of design for this reason requires the willingness accept an unknown chance hypothesis for which you have no evidence solely due to an unwillingness to accept design.

    All that depends on the assertion that Dembski has a way of ruling out natural selection as the explanation. I do not see Dembski having any such general method. Sure, he can argue that the presence of CSI rules out natural selection, but as we are only allowed to declare CSI to be present if we have already ruled out natural selection, that is a non-method.

  28. My impression is that Winston Ewert is a talented young computer scientist and mathematician, who needs a proper project.

    I suggest neuroimaging 🙂

  29. Winston:

    What is the probability of the bacterial flagellum under Darwinian evolution?

    In No Free Lunch, William Dembski also offered a calculation of the probability of the bacterial flagellum.

    Winston, are you under the impression that Dembski has calculated (or attempted to calculate) the probability of the flagellum arising by Darwinian evolution?

    It seems that you are. In a previous article, you say:

    In discussing a design inference for the bacterial flagellum, Dembski attempts a sketch of the probability of its arising through natural selection.

    How do you reconcile this with the fact that his calculations don’t take selection into account at all? They are based on the hypothesis that the flagellum just came together randomly, as Dembski himself acknowledges.

  30. r0bb:
    Winston:
    How do you reconcile this with the fact that his calculations don't take selection into account at all? They are based on the hypothesis that the flagellum just came together randomly, as Dembski himself acknowledges.

    Yes, you see, that is the quintessential creationist strawman of evolution that they can never let go of. “You believe in chance”.
    It can never be emphasized enough, and possibly deserves a post of its own to really flesh out why, but evolution is not like throwing dice and hoping/expecting to just miraculously end up on "the one specific function we wanted".

    There are two aspects to this strawman, actually, and I think it also exists partially because the creationists have trouble letting go of the idea that "that which has resulted from evolution was planned all along, otherwise it wouldn't have evolved". So in that respect, the probability that a flagellum in particular would evolve, out of the total space of phenotypical possibilities, is incalculably minuscule, and so the creationist reasons – it must have happened due to planning, designing and foresight, otherwise how could this specific thing have evolved?

    So they’re really making two fundamental mistakes. One is to ignore natural selection and thus reduce evolution to mere dice-rolls that miraculously happen to land on the “working flagellum” side from scratch.

    The other is that they can't seem to grasp that evolution simply finds "something, anything that works"; it doesn't care what this is and isn't planning for it.
    In a different environment, given different preconditions and contingencies, perhaps on another planet, the flagellum wouldn’t have evolved, but something else equally unlikely would have. Evolution would simply have followed a different path, with mutation and drift sampling phenotypical space, and selection retaining advantageous variations.

  31. Mr Ewert,

    If you get chance to enlarge on what a design inference might be, that would be super.

  32. All of this discussion seems to derive from the ID folks trying very hard not to say what they mean directly. We can see that “design” is functionally equivalent to “unknown”, but of course they selected the word “design” in the first place to imply a designer, the core of their religion.

    Similarly, “chance” is used to describe all carefully researched and well understood biological feedback processes, not because it describes such processes but because it does not, and implies no such processes occur.

    The effort to support a clearly unsupportable but equally clearly unrejectable foregone conclusion, is what comes out of all this. Choose words that assume the conclusion, equivocate like mad, create terms in one paper rendered meaningless in another paper and then quote the paper suitable to the argument, and on and on.

    It’s like arguing with the weather.

  33. “The effort to support a clearly unsupportable but equally clearly unrejectable foregone conclusion, is what comes out of all this. Choose words that assume the conclusion, equivocate like mad, create terms in one paper rendered meaningless in another paper and then quote the paper suitable to the argument, and on and on.”

    Don’t forget the “Retreat to UD when the questions get too pointed elsewhere, then return after a few weeks spouting the same refuted assertions.” technique. The goal is not to progress the discussion.
