Trojan EleP(T|H)ant?

Winston Ewert has responded at EnV to my post The eleP(T|H)ant in the room. Thanks to Steve for the heads-up.

So, here we go:

Part I

First, Winston graciously acknowledges the eleP(T|H)ant:

I wrote here previously (“Design Detection in the Dark“) in response to blogger Elizabeth Liddle’s post “A CSI Challenge” at The Skeptical Zone. Now she has written a reply to me, “The eleP(T|H)ant in the room.” The subject of discussion is CSI, or complex specified information, and the design inference as developed by William Dembski.

CSI is the method by which we test chance hypotheses. For any given object under study, there are a variety of possible naturalistic explanations. These are referred to as the relevant chance hypotheses. Each hypothesis is a possible explanation of what happened to produce the object. If a given object is highly improbable under a given hypothesis, i.e. it is unlikely to occur, we say that the object is complicated. If the object fits an independent pattern, we say that it is specified. When the object exhibits both of these properties we say that it is an example of specified complexity. If an object exhibits specified complexity under a given hypothesis, we can reject that hypothesis as a possible explanation.


The design inference argues from the specified complexity in individual chance hypotheses to design. Design is defined as any process that does not correspond to a chance hypothesis. Therefore, if we can reject all chance hypotheses we can conclude by attributing the object under study to design. The set of all possible chance hypotheses can be divided into hypotheses that are relevant and those that are irrelevant. The irrelevant chance hypotheses are those involving processes that we have no reason to believe are in operation. They are rejected on this basis. We have reason to believe that the relevant chance hypotheses are operating, and these hypotheses may be rejected by our applying the criterion of specified complexity. Thus, the design inference gives us reason to reject all chance hypotheses and conclude that an object was designed.


Originally, Liddle presented a particular graphic image of unknown origin and asked whether it is possible to calculate the probability of its being the product of design. In reply, I pointed out that knowing the potential chance hypotheses is a necessary precondition of making a design inference. Her response is that we cannot calculate the probabilities needed to make the design inference, and even if we could, that would not be sufficient to infer design.


He then enumerates what he thinks are my errors:

Elizabeth Liddle’s Errors

At a couple of points, Liddle seems to misunderstand the design inference. As I mentioned, two criteria are necessary to reject a chance hypothesis: specification and complexity. However, in Liddle’s account there are actually three. Her additional requirement is that the object be “One of a very large number of patterns that could be made from the same elements (Shannon Complexity).”


This appears to be a confused rendition of the complexity requirement. “Shannon Complexity” usually refers to the Shannon Entropy, which is not used in the design inference. Instead, complexity is measured as the negative logarithm of probability, known as the Shannon Self-Information. But this description of Shannon Complexity would only be accurate under a chance hypothesis where all rearrangements of parts are equally likely. A common misconception of the design inference is that it always calculates probability according to that hypothesis. Liddle seems to be plagued by a vestigial remnant of that understanding.

Um, no – it’s Dembski himself and his followers who seem to be plagued by that vestigial remnant. Perhaps they should have it surgically removed. In a recent talk (24th January 2013), which I partially transcribed, Dembski said:

Well, it turned out what was crucial for detecting design was this what I called Specified Complexity, that you have a pattern, where the pattern signifies an event of low probability, and yet the pattern itself is easily described, so it’s specified, but also low probability.  And I ended up calling it Specified Complexity, I don’t want to get into the details because this can be several lectures in itself, but so there was this marker, this sign of intelligence, in terms of specified complexity, it was a well-defined statistical notion, but it turned out it was also connected with various concepts  in information theory,  and as I developed Specified Complexity, and I was asked back in 05 to say what is the state of play of Specified Complexity, and I found that when I tried to cash it out in Information Theoretic terms, it was actually a form of Shannon Information,  I mean it had an extra twist in it, basically it had something called Kolmogorov complexity that had to be added to it.

Actually, Dembski is being a little curly here.  In his 2005 paper, he says:

It’s this combination of pattern-simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance) that makes the pattern exhibited by (ψR) — but not (R) — a specification.

The “complexity” part of Specified Complexity is the “event-complexity” – P(T|H) – which can only be the part referred to as “Shannon information” in his talk. Moreover, in his 2005 paper, his worked examples of “event-complexity” are Shannon entropy calculations, although he does indeed say that when computing P(T|H), H must be “the relevant chance hypothesis”, which of course will not necessarily be random independent draw. But not only does Dembski spend no time showing us how to compute P(T|H) for anything other than random independent draw (i.e. the Shannon entropy), he goes to great lengths to show us examples where H is random independent draw, so that the negative log of P(T|H) gives us the Shannon entropy. And the only attempts to calculate any version of CSI (including various versions where the specification is function) that I have seen have used random independent draw.

Moreover, the “specified” part of CSI is the “pattern-simplicity” part – the Kolmogorov part. This is not Kolmogorov complexity but its negation – the more compressible (less Kolmogorov-complex) a pattern is, the more “specified” it is. Dembski must be aware of this – perhaps he thought his 2013 audience would be happy to accept that “Kolmogorov complexity” has something to do with the Complexity in Specified Complexity, even though the Kolmogorov part is the Specification part, where more Specified = less complexity, and that “Shannon information” must have something to do with the specification (a sequence that means something, right?), whereas in fact that’s the “complexity” (entropy) part. So he’s doing some voodoo here. In the 2005 paper, Dembski quite clearly defines Specified Complexity as a sequence that is improbable under some hypothesis (possibly random independent draw, in which case the answer would be “cashed out” in Shannon information), i.e. “complex”, AND is readily compressible (has low Kolmogorov complexity, and is therefore specified). I was recently taken to task by Barry Arrington for violating language by using Dembski’s terminology. Heh. I think Dembski himself is exploiting his own language violations to squirt ink here.
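
To make the distinction concrete, here is a minimal Python sketch (my own illustration, not code from Dembski or anyone else; the zlib compression ratio is only a crude, computable stand-in for Kolmogorov simplicity, which is itself uncomputable):

    import math
    import random
    import zlib

    def event_complexity_bits(seq, alphabet_size):
        """-log2 P(T|H) when H is random independent draw from a uniform
        alphabet: the 'Shannon information' reading of the C in CSI."""
        return len(seq) * math.log2(alphabet_size)

    def compression_ratio(seq):
        """Crude proxy for pattern-simplicity: the more compressible the
        sequence, the lower its Kolmogorov complexity, and so (on the 2005
        account) the more 'specified' it is."""
        data = seq.encode()
        return len(zlib.compress(data, 9)) / len(data)

    random.seed(0)
    repetitive = "AB" * 100                                       # easily described
    scrambled = "".join(random.choice("AB") for _ in range(200))  # same length, same alphabet

    for name, seq in (("repetitive", repetitive), ("scrambled", scrambled)):
        print(name, event_complexity_bits(seq, 2), round(compression_ratio(seq), 2))
    # Both strings carry exactly 200 bits of "event-complexity" under random
    # independent draw; only the far more compressible one would count as "specified".

Which is exactly why glossing the Kolmogorov part as the Complexity, and the Shannon part as the Specification, gets the recipe backwards.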

But I do accept that the eleP(T|H)ant is there, in the 2005 paper, in the small print.  Indeed, I pointed it out.  It is not my vestigial organ that is causing the problem.  It’s precisely that vestigial organ I am calling the eleP(T|H)ant, and asking to be addressed.

Moving on:

As I emphasized earlier, the design inference depends on the serial rejection of all relevant chance hypotheses. Liddle has missed that point. I wrote about multiple chance hypotheses but Liddle talks about a single null hypothesis. She quotes the phrase “relevant [null] chance hypothesis”; however, I consistently wrote “relevant chance hypotheses.”

Fine. I am all for the serial rejection of specific null hypotheses; it’s how science is usually done (although it has its problems). In which case, we can all continue to ignore the single eleP(T|H)ant that Dembski considers rejectable in one single equation, and get on with doing some science.
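
In outline, serial rejection of that kind would look something like the following Python sketch (a schematic of my own, with a made-up ChanceHypothesis stand-in and asserted probabilities; it is not code from Dembski or Ewert):

    import math
    from dataclasses import dataclass

    @dataclass
    class ChanceHypothesis:
        """Hypothetical stand-in for a 'relevant chance hypothesis': just a
        name and an assumed probability of producing the target T."""
        name: str
        p_target: float

    def design_inference(hypotheses, target_is_specified, threshold_bits=500):
        """Schematic of the serial-rejection procedure described above (my
        paraphrase of Ewert's description, not published code)."""
        for H in hypotheses:
            surprisal = -math.log2(H.p_target)      # 'complexity' under H
            if not (target_is_specified and surprisal > threshold_bits):
                return f"cannot reject {H.name}: no design inference"
        return "all relevant chance hypotheses rejected: infer design"

    nulls = [
        ChanceHypothesis("random independent draw", 2.0 ** -1000),
        # The second number is the eleP(T|H)ant: nobody has shown how to
        # compute it, so here it is simply asserted.
        ChanceHypothesis("Darwinian evolution", 1e-200),
    ]
    print(design_inference(nulls, target_is_specified=True))

Assert a small enough number for that second entry and “design” follows – which is rather the point.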

Liddle’s primary objection is that we cannot calculate the P(T|H), that is, the “Probability that we would observe the Target (i.e. a member of the specified subset of patterns) given the null Hypothesis.” However, there is no single null hypothesis. There is a collection of chance hypotheses. Liddle appears to believe that the design inference requires the calculation of a single null hypothesis somehow combining all possible chance hypotheses into one master hypothesis. She objects that Dembski has never provided a discussion of how to calculate this hypothesis.

Yes, indeed I do object. Like you, Winston, I entirely agree that “there is no single null hypothesis”. That’s why Dembski’s 2005 paper is so absurd – it implies that we can compute a pantechnicon null to represent “all relevant chance hypotheses” and plug the expected probability distribution under that null into his CSI formula. Of course it can’t be done. You know it, and I expect that Dembski, being no fool, knows it too (just as I am sure he knows that “pattern simplicity” was a tactical error, hence his attempt to imply that “Kolmogorov complexity” is part of the CSI recipe, even though the Kolmogorov part is the simplicity part, and the “complexity” part is simple improbability, with equal value for a meaningful pattern and for the same pattern drawn “by chance”).

But that is because Dembski’s method does not require it. Therefore, her objection is simply irrelevant.

Um, Dembski’s method does require it. Nothing in that 2005 paper implies anything other than a single null. Sure, that null is supposed to be H, namely

…the relevant chance hypothesis that takes into account Darwinian and other material mechanisms

But I see only one H.  Are you sure you are not seeing pink eleP(T|H)ants, Winston?

Unknown Probabilities

Can we calculate the probabilities required to reject the various chance hypotheses? Attempting to do so would seem pretty much impossible. What is the probability of the bacterial flagellum under Darwinian evolution? What is the probability of a flying animal? What is the probability of humans? Examples given of CSI typically use simple probability distributions, but calculating the actual probabilities under something like Darwinian evolution is extremely difficult.

Yes indeed.  Thank you, Winston.

Nevertheless, intelligent design researchers have long been engaged in trying to quantify those probabilities. In Darwin’s Black Box, Mike Behe argues for the improbability of irreducibly complex systems such as the bacterial flagellum. In No Free Lunch, William Dembski also offered a calculation of the probability of the bacterial flagellum. In “The Case Against a Darwinian Origin of Protein Folds,” Douglas Axe argues that under Darwinian evolution the probability of finding protein folds is too low.

Yes, IC is probably the best argument for ID. Unfortunately, it’s still terrible. Dembski and Behe both made their bacterial flagellum calculations (well, assertions – I don’t see much in the way of calculations in either book) without taking into account at least one possible route, i.e. that proposed by Pallen and Matzke. Pallen and Matzke’s route may well be wrong, but when computing probabilities, at the very least, the probability of such a route should be taken into account. Much more to the point, as Lenski and his colleagues have shown (both in their AVIDA program and in their E. coli cultures), being “IC” by any of the definitions given by Behe or Dembski is not a bar to evolvability, either in principle (AVIDA) or in practice (E. coli). Dembski writes, in No Free Lunch:

The Darwinian mechanism is powerless to produce irreducibly complex systems for which the irreducible core consists of numerous diverse parts that are minimally complex relative to the minimal level of function they need to maintain.

This claim has simply been falsified. Of course it is still possible that the bacterial flagellum somehow couldn’t have, or didn’t, evolve. But the argument that “the Darwinian mechanism” is in principle “powerless” to evolve an IC system is now known to be incorrect. As for Axe’s paper, essentially his claim is the same, namely that functional protein folds are deeply IC (using Behe’s IC-pathway definition), and are therefore improbable by Darwinian mechanisms. But he infers this via a similarly falsified claim:

So, while it is true that neither structure nor function is completely lost along the mutational path connecting the two natural sequences, natural selection imposes a more stringent condition. It does not allow a population to take any functional path, but rather only those paths that carry no fitness penalty.

It turns out there is no such constraint. “Natural selection”, as we now know from both empirical observations and mathematical models, is merely a bias in the sampling, in any one generation, in favour of traits that increase the probability of reproductive success in the current environment. As most mutations are near-neutral, many will become highly prevalent in the population, including some substantially deleterious ones (ones that “carry a fitness penalty”). So again, the argument that IC structures are unevolvable, or are only evolvable by pathways that do not include deleterious steps, is simply unfounded. Falsified, in the strict Popperian sense.
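
To make the near-neutrality point concrete, here is a minimal Wright-Fisher-style Python sketch (my own illustration; the population size, selection coefficient and starting frequency are arbitrary, not figures from Axe, Behe or Lenski), showing that an allele carrying a small fitness penalty can nonetheless drift to fixation:

    import numpy as np

    def wright_fisher(N=100, s=-0.002, p0=0.01, generations=2000, rng=None):
        """Final frequency of a mildly deleterious allele (selection
        coefficient s < 0) in a haploid Wright-Fisher population of size N."""
        rng = rng or np.random.default_rng()
        p = p0
        for _ in range(generations):
            # Selection biases the sampling probability only slightly...
            p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
            # ...and drift (finite binomial sampling) does the rest.
            p = rng.binomial(N, p_sel) / N
            if p in (0.0, 1.0):
                break
        return p

    rng = np.random.default_rng(1)
    outcomes = [wright_fisher(rng=rng) for _ in range(2000)]
    # A handful of runs fix the allele despite its fitness penalty, close to
    # the neutral expectation of 1/N per starting copy.
    print("deleterious allele fixed in", sum(o == 1.0 for o in outcomes), "of 2000 runs")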

Again, moving on:

While it is unreasonable to calculate the exact probabilities under a complex chance hypothesis, this does not mean that we are unable to get a general sense of those probabilities. We can characterize the probabilities of the complex systems we find in biology, and as the above research argues those probabilities are very small.

It is indeed “unreasonable to calculate the exact probabilities under a complex chance hypothesis”. Precisely. But I’d go further: it is even more unreasonable to make a rough guesstimate (I hate that word, but it works here). The entire concept of CSI is GIGO. If you start with the guesstimate that Darwinian mechanisms are unlikely to have resulted in T, then P(T|H) will be, by definition, small. If you start with the guesstimate that Darwinian mechanisms could well have resulted in T, then P(T|H) will be, by definition, large. Whether the output from your CSI calculation reaches your threshold for rejection (ah, yes, that threshold…) therefore depends entirely on the Number You First Thought Of.
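
As I read the 2005 paper, the calculation behind that threshold is χ = -log2[10^120 · φS(T) · P(T|H)], with design inferred when χ > 1. A short Python sketch (with an arbitrary illustrative value standing in for φS(T)) shows how completely the verdict tracks the P(T|H) you feed in:

    import math

    def chi(p_T_given_H, phi_S=1e15):
        """chi = -log2(10^120 * phi_S(T) * P(T|H)), with design inferred when
        chi > 1. phi_S(T), the number of patterns at least as simply
        describable as T, is given an arbitrary illustrative value here."""
        return -math.log2(1e120 * phi_S * p_T_given_H)

    # Garbage in, gospel out: the verdict is fixed by the assumed P(T|H).
    for p in (1e-300, 1e-150, 1e-120):
        verdict = "design" if chi(p) > 1 else "chance not rejected"
        print(f"assumed P(T|H) = {p:g} -> chi = {chi(p):7.1f} -> {verdict}")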

And so, indeed, to get a defensible Number, we must turn to actual empirical research. But the only “above research” cited is Behe’s (scarcely empirical, rebutted by Pallen and Matzke, and containing no actual estimate) and Axe’s, which needs to be taken together with the vast body of other empirical research into the evolution of protein folds. And both are based on a falsified premise. Sure, one might be able to get “a general sense of those probabilities”, but in order to make the strong claim that non-design must be rejected (at p < 10^-150!) we need something more than a “general sense of those probabilities”. Bear in mind that no scientist rejects ID, at least qua scientist. There is no way to falsify it (how would you compute the null?). ID could well be correct. But having a hunch that non-design routes can’t do the job is not enough for an inference with that kind of p value. No sir, not nohow.

In summary so far: Ewert has done little more than agree with me that we cannot reject non-design on the basis of the single pantechnicon null proposed in Dembski’s 2005 paper, and would therefore presumably also agree that the quantity chi is uncomputable, and therefore useless as a test for design (which was the point of my original glacier puzzle). Instead he argues that research does suggest that Darwinian mechanisms are implausible, but he does so citing research that is based on the assumption that Darwinian mechanisms cannot produce IC systems (they can), or cannot do so by IC pathways (they can), or by IC pathways that include deleterious steps (they can). Now, it remains perfectly possible that bacterial flagella and friends could not have evolved. But the argument that they cannot have evolved because “natural selection… does not allow a population to take any functional path, but rather only those paths that carry no fitness penalty” fails, because the premise is false.

****************************************************************************************

Part II:

Earman and Local Inductive Elimination

Above, I mentioned the division of possible chance hypotheses into the relevant and irrelevant categories. However, what if the true explanation is an unknown chance hypothesis? That is, perhaps there is a non-design explanation, but as we are ignorant of it, we rejected it along with all the other irrelevant chance hypotheses. In that case, we will infer design when design is not actually present.


Dembski defends his approach by appealing to the work of philosopher of physics John Earman, who defended inductive elimination. An inductive argument gives evidence for its conclusion, but stops short of actually proving it. An eliminative argument is one that demonstrates its conclusion by proving the alternatives false rather than proving the conclusion true. The design inference is an instance of inductive elimination: it gives us reason to believe that design is the best explanation.

Liddle objects that Dembski is not actually following what Earman wrote, and she quotes from Earman: “Even if we can never get down to a single hypothesis, progress occurs if we succeed in eliminating finite or infinite chunks of the possibility space. This presupposes of course that we have some kind of measure, or at least topology, on the space of possibilities.”

Dembski has not defined any sort of topology on the space of possibilities. He has not somehow divided up the space of all possible hypotheses and systematically eliminated some or all. Without that topology, Dembski cannot claim to have eliminated all the chance hypotheses.


True. But nor can he claim to have eliminated any of them unless he actually calculates the eleP(T|H)ant for each.

However, Liddle does not appear to have understood what Earman meant. Earman was not referring to a topology over every conceivable hypothesis, but over the set of what we might call plausible hypotheses. In Earman’s approach, inductive elimination starts by defining the set of plausible hypotheses. We do not consider every conceivable hypothesis, but only those hypotheses which we consider plausible. Only then do we define a topology on the plausible hypotheses and work towards eliminating the incorrect possibilities.


Hang on. I venture to suggest that Winston has not understood what Earman meant.  Let me write it out again, with feeling:

Even if we can never get down to a single hypothesis, progress occurs if we succeed in eliminating finite or infinite chunks of the possibility space. This presupposes of course that we have some kind of measure, or at least topology, on the space of possibilities.


In other words, surely: you can only partition the total possibility space into plausible and implausible sections IF you first have “some kind of measure, or at least topology, on the space of possibilities.”  Then of course, you can pick off the plausibles one by one until you are done.  So my objection stands.  And if that isn’t what Earman meant, then I have no idea what he meant.  It seems pretty clear from his gravity example that that’s exactly what he meant – entire sections of hypothesis space could be eliminated first, leaving some plausibles.

In his discussion of gravitational theories, Earman points out that the process of elimination began by an assumption of the boundaries for what a possible theory would look like. He says:


Despite the wide cast of its net, the resulting enterprise was nevertheless a case of what may properly be termed local induction. First, there was no pretense of considering all logically possible theories.

Later, Earman discusses the possible objection that because not all logically possible theories were considered, it remains possible that true gravitational theory is not the one that was accepted. He says:


I would contend that all cases of scientific inquiry, whether into the observable or the unobservable, are cases of local induction. Thus the present form of skepticism of the antirealist is indistinguishable from a blanket form of skepticism about scientific knowledge.

In contrast to Liddle’s understanding, Earman’s system does not require a topology over all possible hypotheses. Rather, the topology operates on the smaller set of plausible hypotheses. This is why the inductive elimination is a local induction and not a deductive argument.


I think Winston is confused. Certainly, Earman does not require that all hypotheses be evaluated, merely those that are plausible. Which is fine. However, he DOES require that the entire set of possible hypotheses first be partitioned into plausible and implausible, so that you can THEN “operate… on the smaller set of plausible hypotheses”.

So we could, for instance, eliminate, at a stroke, all theories that require, say, violation of the 2nd Law of Thermodynamics, or hitherto undiscovered fundamental forces, or material designers (as that would only push back the problem of where they came from, and already we are short of cosmic time). We don’t need to even worry about that herd of eleP(T|H)ants. And as Darwinian processes require self-replicators, we can eliminate Darwinian processes as an account of the first self-replicators. And so on. But that still leaves “Darwinian and other material processes”, as Dembski correctly states in his 2005 paper. And those null distributions have to be properly calculated if they are going to be rejected by null hypothesis testing. Which is not of course what Earman is even talking about – his entire book is about Bayesian inference.

Which brings me to my next point: Dembski’s 2002 piece, cited by Ewert below, precedes his 2005 paper (and, indeed, No Free Lunch), and presents a Bayesian approach to ID, not a Fisherian one. In the 2005 paper, Dembski goes to excruciating lengths to justify a Fisherian approach, and explicitly rejects a Bayesian one:

I’ve argued at length elsewhere that Bayesian methods are inadequate for drawing design inferences. Among the reasons I’ve given is the need to assess prior probabilities in employing these methods, the concomitant problem of rationally grounding these priors, and the lack of empirical grounding in estimating probabilities conditional on design hypotheses.

Indeed they do. But Fisherian methods don’t get him out of that responsibility (grounding his priors); they merely allow him to sneak the priors in via a Trojan EleP(T|H)ant.


Furthermore, Dembski discusses the issue in “Naturalism’s Argument from Invincible Ignorance: A Response to Howard Van Till,” where he considers the same quote that Liddle presented:

In assessing whether the bacterial flagellum exemplifies specified complexity, the design theorist is tacitly following Earman’s guidelines for making an eliminative induction work. Thus, the design theorist orders the space of hypotheses that naturalistically account for the bacterial flagellum into those that look to direct Darwinian pathways and those that look to indirect Darwinian pathways (cf. Earman’s requirement for an ordering or topology of the space of possible hypotheses). The design theorist also limits the induction to a local induction, focusing on relevant hypotheses rather than all logically possible hypotheses. The reference class of relevant hypotheses are those that flow out of Darwin’s theory. Of these, direct Darwinian pathways can be precluded on account of the flagellum’s irreducible and minimal complexity, which entails the minuscule probabilities required for specified complexity. As for indirect Darwinian pathways, the causal adequacy of intelligence to produce such complex systems (which is simply a fact of engineering) as well as the total absence of causally specific proposals for how they might work in practice eliminates them.

Dembski is following Earman’s proposal here. He defines the boundaries of the theories under consideration. He divides them into an exhaustive partition, and then argues that each partition can be rejected therefore inferring the remaining hypothesis, design. This is a local induction, and as such depends on the assumption that any non-Darwinian chance hypothesis will be incorrect.


Yep, he is. And he is using Bayesian reasoning: he rejects indirect Darwinian pathways because of his higher priors on something else, namely “the causal adequacy of intelligence to produce such complex systems” (which I would dispute, but let’s say, arguendo, that he is justified), and on “the total absence of causally specific proposals”, which is ludicrous. We do not demand to know the causally specific pathway by which a rock descended from a cliff before we can infer that erosion plus gravity were the likely cause. What we need to know is whether indirect Darwinian pathways can produce “IC” systems, and we know they can. That makes them “causally adequate” (and a heck of a sight more “causally adequate” than an “intelligence” with no apparent physical attributes capable of assembling a molecule – intelligence doesn’t make things, intelligent material beings do).

But all that is moot, because his 2002 piece presents a Bayesian inference, not a Fisherian one. If he wants to play the Fisherian game, then he’s welcome to do so, but in that case he has to carefully construct the probability distribution of every null he wants to reject, and, if he succeeds, claim only to have rejected that null – not a load of other nulls he thinks he can get away with. Which then won’t allow him to reject “non-design”; merely the nulls he’s modelled.

Frankly, if I were an IDer I’d go down the Bayesian route. But, as Dembski says, it does present problems.

For Dembski.

At the end of the day, the design inference is an inductive argument. It does not logically entail design, but supports it as the best possible explanation.


“Best possible explanation” is a Bayesian inference, not a Fisherian one.  Fine.  But let’s bury CSI in that case.

It does not rule out the possibility of some unknown chance hypotheses outside the set of those eliminated.


It does not even rule out hypotheses within the set of those claimed to have been eliminated, because you can’t do that unless you can compute the null. Nothing Winston has written here gets him, or Dembski, off the hook of having to reject a Darwinian null for a post-OoL system, because Darwinian processes pass any plausibility test. So if you want to reject that null, you have to compute it. Which is impossible. It would be like trying to reject the null that today’s weather was the result of a set of causal chains X, where X is a set of specific hypothetical turbulent states, and then concluding that today’s weather wasn’t the result of a turbulent state. All we can say is that we know that turbulence leads to striking but unpredictable weather systems, and so we can’t reject it as a cause of this one.

However, rejecting the conclusion of design for this reason requires the willingness to accept an unknown chance hypothesis for which you have no evidence solely due to an unwillingness to accept design. It is very hard to argue that such a hypothesis is actually the best explanation.

Indeed. But there is a sleight of hand here. Darwinian processes are a known “chance hypothesis” (Dembski’s and your term, not mine), but the specific causal chain for any one observed system is probably unknown. That doesn’t mean we can ignore it, any more than we can reject ID because there are many possible ID hypotheses (front-loading, continuous intervention, fine-tuning, OoL only, whatever). And indeed there is a double standard. If we deem “intelligence” “causally adequate to produce such complex systems” because it “is simply a fact of engineering”, despite the fact that we have no hypothesis for any specific causal pathway by which a putative immaterial engineer could make an IC system, then we can at the very least deem Darwinian processes “causally adequate”, because we know that those processes can produce IC systems, by IC pathways that include deleterious steps.

Closing Thoughts

Liddle objects that we cannot calculate the probability necessary to make a design inference. However, she is mistaken because the design inference requires that we calculate probabilities, not a probability.

Well, there was slightly more to my objection than the number of hypotheses! And if Dembski agrees there must be more than one, I suggest that he retract CSI as defined in his 2005 paper.

Each chance hypothesis will have its own probability, and will be rejected if that probability is too low. Intelligent design researchers have investigated these probabilities.

I have not seen a single calculation of such a probability that did not rest on the falsified premise that IC systems cannot evolve by indirect pathways, including pathways with deleterious steps.

Liddle’s objections to Dembski’s appeal to Earman demonstrate that she is the one not following Earman. Earman’s approach involves starting assumptions about what a valid theory would look like, in the same way that any design inference makes starting assumptions about what a possible chance hypothesis would look like.

It also involves Bayesian inference, which Dembski specifically rejects. It also seems to involve rejecting a perfectly plausible hypothesis on falsified grounds.

In short, neither of Liddle’s objections hold water. Rather both appear to be derived from a mistaken understanding of Dembski and Earman.


Well, no. They remain triumphantly watertight; Winston has largely conceded their validity by de facto agreeing with me that Dembski’s definition of “chi” is useless, and thus that his Fisherian rejection method is invalid. Winston substitutes a Bayesian approach, in which he improperly eliminates perfectly plausible hypotheses (e.g. indirect Darwinian pathways for IC systems), and offers no method for calculating them, merely appealing to researchers who have failed to note that IC is not an impediment to evolution. He appears himself to have misunderstood Earman, and, far from my “misunderstanding” Dembski, Winston appears to be telling us that Dembski did not say what he categorically did say in 2005. At any rate, if Dembski in his 2005 paper was really telling us that we need to use a Bayesian approach to the Design Inference and that multiple nulls must be rejected in order to infer Design, then it is odd that his words appear to indicate the precise opposite.


93 thoughts on “Trojan EleP(T|H)ant?”

  1. Where could any discussion possibly progress to? We have an observed mechanism (evolution) derived from an extensive set of related observations – a huge body of evidence. And we have a proposed mechanism (poof) which waves away evidence as meaningless.

    The poof mechanism is impervious to all of these efforts to provide ever more evidence, ever more cogent reasoning, ever more focused and supportive studies. As Dawkins wrote, “no evidence, no matter how overwhelming, no matter how all-embracing, no matter how devastatingly convincing, can ever make any difference.”

    And we are supposed to “progress” this “discussion”? The very words give the game away – there can be no progress when one starts with Absolute Truth, and there can be no discussion when there’s nothing to discuss. There is too much qualitative difference. We are a discussion group, they are a congregation. We discuss, they preach. We play on unrelated fields, ships passing in the night.

    And so we can show, with endless carefully reasoned presentations, the fundamental flaws, incorrect assumptions, empirical falsehoods, bait-and-switches, misdirections, inconsistencies, the whole substance of the ID repertoire, and it doesn’t make a damn bit of difference. A school is not a church.

  2. “At the end of the day, the design inference is an inductive argument. It does not logically entail design, but supports it as the best possible explanation.”

    Surely he means abductive argument.

  3. Winston, your criticism is based on the theoretical definition of “specified complexity”/CSI.  As I’m sure you’re aware, there is quite a gap between theory and practice when it comes to calculating CSI.  In practice, examples of CSI calculations rarely mention a null hypothesis, much less multiple null hypotheses.  Instead, they tacitly assume a uniform distribution.  For example, here are Joe (normal font) and Kairosfocus (bold font) calculating the CSI in the definition of “aardvark”:

    A simple character count reveals 202 characters which translates into 1010 bits of information/ specified complexity.

    1010 bits > 500, therefor CSI is present.

    Now wrt biology each nucleotide has 2 bits.

    _______

    CALC: 7 bits per ASCII character (not 5 per Baudot character) * 202 characters = 1414 bits.

    Functionally specific, so S = 1.

    Chi_500 = 1414 * 1 – 500 = 914 bits beyond the solar system threshold.

    Designed.

    KF

    This is the norm, not the exception, even when Dembski himself calculates CSI.  Critics have been pointing this out for years (e.g. Richard Wein’s “The Uniform-Probability Method” and Elsberry and Shallit’s “causal-history-based interpretation vs. uniform probability interpretation”).  If the “uniform probability” interpretation of CSI is erroneous, then it’s an error that is committed by the ID community in general, including Dembski.
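
For what it’s worth, the arithmetic in that exchange reduces to a uniform-independent-draw calculation; here is a Python reconstruction of what is actually being computed (a reading of the comment above, not anyone’s published code):

    ASCII_BITS = 7      # bits per character, as assumed in the comment above
    N_CHARS = 202       # character count of the "aardvark" definition
    THRESHOLD = 500     # the "solar system threshold", in bits
    S = 1               # "functionally specific", by assertion

    info_bits = N_CHARS * ASCII_BITS         # 1414 bits
    chi_500 = info_bits * S - THRESHOLD      # 914 "bits beyond the threshold"

    # The implicit null: every 202-character ASCII string equally likely,
    # i.e. P(T|H) = 2**-1414 under random independent draw. No Darwinian (or
    # any other) chance hypothesis enters the calculation at any point.
    print(f"{info_bits} bits; Chi_500 = {chi_500} bits beyond the threshold")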

  4. Another question, Winston:

    The irrelevant chance hypotheses are those involving processes that we have no reason to believe are in operation. They are rejected on this basis.

    If a chance hypothesis confers a reasonably high probability on the observed event, does that count as a reason to believe that it is in operation?  Apparently not, because we can always dream up a plethora of otherwise-unknown hypotheses that would confer such probabilities, as Dembski has pointed out.

    So you must mean, “The irrelevant chance hypotheses are those involving processes that we have no reason [independent of the observed event] to believe are in operation.”  It’s interesting that the design hypothesis is exempt from this requirement of independent evidence, as Dembski argues in his Specification paper.  How do you justify the double standard?

  5. As best I’ve ever been able to tell, ID-style CSI is worthless, function-free bullshit. I said “ID-style CSI” because a real scientist named Leslie Orgel seems to have used the term ‘CSI’ decades before the ID-pushers got hold of it, but Orgel-style CSI is a very different beast than ID-style CSI…

    Anyway.

    If Ewert, or any other ID-pusher, would like to demonstrate that ID-style CSI is not worthless, not function-free, and not bullshit, they could do worse than determine how much CSI there is in various entities that we bloody well know to have been Designed. How much CSI is there in…

    …a VHS videotape recording of the original Star Wars film?
    …a CD-ROM recording of the original Star Wars film (i.e., does the medium on which it’s recorded make any difference to CSI)?
    …a CD-ROM recording of the ‘new and improved’ Star Wars film, the version in which Greedo shot first (i.e., do relatively minor tweaks to content make any difference to CSI)?
    …a ham sandwich (this may seem frivolous, but if ham sandwiches are Designed, surely they must have CSI)?
    …the Giant Dipper rollercoaster at the Santa Cruz Boardwalk?
    …an F-14 fighter plane?
    …any one specific performance of Beethoven’s Ninth Symphony?
    …any one other specific performance of Beethoven’s Ninth Symphony?
    …any one performance of the John Cage piece 4’33” ?

    Based on my hypothesis that ID-style CSI is worthless, function-free bullshit, I predict that no ID-pusher will provide CSI figures for any of the items on the above list. I further predict a high probability that any ID-pusher who bothers to respond to this post will disgorge a bunch of verbiage that boils down to we don’t even have to provide any CSI figures for that crap.

  6. r0bb: Instead, they tacitly assume a uniform distribution.

    You have to be careful here. Sometimes a uniform distribution is not assumed, but what is assumed is independent random draw, which is the important part.

    Kairosfocus, for example, seems to think that as long as you allow for non-uniform distributions of your “alphabet”, you’ve dealt with the independent draw part. Or something. Anyway, what he doesn’t model is any Darwinian null at all. I’ve never actually seen that attempted. Certainly it’s not what Dembski does in NFL.

    “[Liddle] is mistaken because the design inference requires that we calculate probabilities, not a probability. Each chance hypothesis will have its own probability, and will be rejected if that probability is too low.”

    Ewert is using Fisherian testing to reject multiple (“a collection of”) chance hypotheses. How, then, is he handling the notoriously difficult problem of adjusting for multiple tests?

    By way of (extreme) illustration:
    Winston arrives just as my family is starting a hand of bridge. Neither my mother (the bridge-player) nor my daughter (the statistician) are particularly surprised by the deal – it falls within the range of point and suit distributions one typically sees.
    Winston inspects the four hands and declares “Design!!”.
    “Huh?”
    “Let’s consider the chance hypotheses, sequentially. First chance hypothesis: first the six of clubs is dealt, then the ten of spades….” This first chance hypothesis is rejected, since it has a probability of 1 in 8 x 10^67.
    Some time later, Winston has enumerated all of the chance hypotheses, and concluded that each is similarly unlikely. He therefore concludes design.
    My daughter just shakes her head sadly.
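
The figure in that illustration is just 52!, and the multiple-comparisons problem it points at can be made explicit in a few lines of Python (a sketch; the Bonferroni correction is the standard textbook adjustment, not anything Ewert proposes):

    import math

    deals = float(math.factorial(52))   # equally likely orderings of a shuffled deck
    p_single = 1.0 / deals              # ~1.2e-68: the "1 in 8 x 10^67" above
    print(f"P(any one particular deal order) = {p_single:.2e}")

    # One test per candidate ordering means ~8e67 tests. The Bonferroni-corrected
    # per-test threshold falls below the p-value of every single ordering, so
    # none of the individual "rejections" survives the correction.
    alpha = 0.01
    per_test_threshold = alpha / deals
    print(f"Bonferroni per-test threshold = {per_test_threshold:.2e}")
    print("any ordering rejected after correction?", p_single < per_test_threshold)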

  8. DNA_Jock: Ewert is using Fisherian testing to reject multiple (“a collection of”) chance hypotheses. How, then, is he handling the notoriously difficult problem of adjusting for multiple tests?

    Excellent point. But his reference isn’t to Fisherian hypothesis testing. Ewert is comparing apples and oranges here. Dembski, in his 2005 paper, is absolutely clear that he favours a Fisherian hypothesis-testing model, and explicitly rejects Bayes. Yet both Dembski’s and Ewert’s defense of their null (nulls in the case of Ewert; a single null in the case of Dembski) involves an appeal to a Bayesian argument and method.

  9. DNA_Jock:
    “[Liddle] is mistaken because the design inference requires that we calculate probabilities, not a probability. Each chance hypothesis will have its own probability, and will be rejected if that probability is too low.”

    Ewert is using Fisherian testing to reject multiple (“a collection of”) chance hypotheses. How, then, is he handling the notoriously difficult problem of adjusting for multiple tests?

    By way of (extreme) illustration:
    Winston arrives just as my family is starting a hand of bridge. Neither my mother (the bridge-player) nor my daughter (the statistician) are particularly surprised by the deal – it falls within the range of point and suit distributions one typically sees.
    Winston inspects the four hands and declares “Design!!”.
    “Huh?”
    “Let’s consider the chance hypotheses, sequentially. First chance hypothesis: first the six of clubs is dealt, then the ten of spades….” This first chance hypothesis is rejected, since it has a probability of 1 in 8 x 10^67.
    Some time later, Winston has enumerated all of the chance hypotheses, and concluded that each is similarly unlikely. He therefore concludes design.
    My daughter just shakes her head sadly.

    HAhahahahaha, that was fucking brilliant.

  10. Lizzie: You have to be careful here. Sometimes a uniform distribution is not assumed, but what is assumed is independent random draw, which is the important part.

    Kairosfocus, for example, seems to think that as long as you allow for non-uniform distributions of your “alphabet”, you’ve dealt with the independent draw part. Or something. Anyway, what he doesn’t model is any Darwinian null at all. I’ve never actually seen that attempted. Certainly it’s not what Dembski does in NFL.

    Yeah, there are exceptions to their tendency to stick with uniform distributions.  I seem to recall Dembski calculating P(T|H) where H was a sequence of biased but i.i.d. coin flips.  That’s the only probability calculation I can remember him doing that didn’t involve a uniform distribution.  And, as you say, that’s not much of an improvement when the hypotheses that they should be testing involve a boatload of variables, known and unknown, with non-linear interdependencies and feedback loops.  They’re caught between the rock of intractable calculations and the hard place of naive and useless nulls, and they’ve chosen the latter.

    The Evo Info Lab has tried to justify the assumption of equiprobability using Bernoulli’s Principle of Insufficient Reason.  They have now backpedaled on that approach in their recent paper from the Cornell conference, but not enough to solve the root problem, which is that you can’t justify a conclusion of design simply by eliminating one or two “chance hypotheses”.
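
For reference, a biased-but-i.i.d. calculation of the kind recalled above looks like this in Python (an illustrative 100-flip target and arbitrary bias values, not Dembski’s actual example); changing the bias changes the number, but H is still independent draw:

    import math

    def neg_log2_p_iid(seq, p_heads):
        """-log2 P(T|H) when H is independent flips of a (possibly biased) coin.
        The bias moves the number around, but the hypothesis being tested is
        still independent draw -- nothing resembling a Darwinian null."""
        heads = seq.count("H")
        tails = len(seq) - heads
        return -(heads * math.log2(p_heads) + tails * math.log2(1 - p_heads))

    target = "H" * 70 + "T" * 30    # an arbitrary 100-flip "target" sequence
    for bias in (0.5, 0.7, 0.9):
        print(f"P(heads) = {bias}: -log2 P(T|H) = {neg_log2_p_iid(target, bias):.1f} bits")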

  11. On a positive note, I really appreciate the articles that Winston has posted at evolutionnews.org.  His writing is straightforward, without evasion, hand-waving, or equivocation.

  12. “. . . the Cornell conference . . . .”

    The “Ithaca conference” would be better, when identifying it by its location. Cornell had nothing to do with it other than renting a room, no matter how much the intelligent design creationists would like people to assume otherwise.

  13. All your argument boils down to, Lizzie, is that you think natural selection is a ‘plausible’ mechanism that can account for IC.

    But now you need to demonstrate ‘how’ plausible. Is it extremely plausible, arm-chair plausible, Hail Mary plausible? Matzke talks in terms of ‘tantalizing hints’ (which of course Minnoch took down rather handily by showing Matzke had it backwards). And in one of your comments, you characterize darwinian processes as ‘feeling around’ configuration space.

    Is this how we should approach understanding the degree of confidence we have in NS as a mechanism that accounts for IC?

    Yes, yes dear. I know, it’s just a figure of speech…

    Sooooo….. in a very small, particular sense we can say natural selection ‘feels’ the configuration space and so happens to chance upon the right combo every once in a while, holds on to those dollar chips until it saves enough to play a lincoln; of course that is if no diehard player happens to clip one or few of those dollar chips on the sly, making NS have to wait that much longer to climb the probability ladder, or even causing it to backslide into oblivious territory.

    But yeah, sure. Since the bacterial flagellum is here, it is prima facie evidence that NS did in fact play a lincoln and even a franklin; notwithstanding evidence to the contrary: that genomes engineer changes in response to environmental cues provides supporting evidence that genomes make use of NS, not that NS happens whether genomes like it or not.

    In a word, it is the genome that is in the driver’s seat. NS is in the back seat playing backgammon with the kids.

    At any rate, a Natural Selection Feels Your Pain bumper sticker would be a nice souvenir to collect.

  14. Better read again, Steverino. Lizzie’s arguments have gone completely over your head. Her opinion of the plausibility of natural selection to create IC structures isn’t part of them. Nor is any measure of the plausibility of natural selection to do that.

    IIRC she did mention the fact that evolutionary mechanisms have been proven to be capable of producing IC structures.

  15. Steve,

    Hi Steve

    I see you are not a fan of evolutionary theory. On the other hand, there is, as far as I am aware, no alternative scientific theory. ID proponents mention a ‘design inference’ without elaboration. Mr Ewert only mentions it as something we can assume, having dismissed other explanations. Without any explanation of what a ‘design inference’ entails, I find it hard to see where this gets you.

  16. Steve…you think natural selection is a ‘plausible’ mechanism that can account for IC.

    But now you need to demonstrate ‘how’ plausible.

    Lenski’s long-running experiment is widely considered to be a demonstration of the power of natural selection (and variation).

  17. “Design is defined as any process that does not correspond to a chance hypothesis.”

    I always thought it was the other way around: “chance” is defined as “not design.” We have some idea of what “design” means, so in principle we could partition the world into designed and not designed things (if only we could reliably tell the difference…).

    If instead “design” is defined as a complement of “chance,” and “chance” is any mechanistic explanation whatsoever, whether deterministic or stochastic (I suppose singular, irregular occurrences also ought be included under this rubric for completeness), then things couldn’t look worse for an ID proponent. If they have to positively exclude all of that before they can conclude design, their conclusion will never look plausible. That’s not just an elephant, but a legion of them!

  18. Since no one has ever proposed a method for biological design, it might be more useful to speak of intentional or unintentional.

    Behe, for example, would argue that Leishmaniasis is an intended product.

  19. All these are really just code words for “Magic!”, which this bizarre definition of design makes especially clear. There are design patterns that we can abstract from the way people design things. Ditto for intentionality. But that’s not what creationists want. They seem reluctant to constrain their Creator with any patterns.

  20. Actually Alan,

    At best Lenski’s experiments show the oscillating nature of traits as induced via NS.

    It demonstrates NS is a maintenance junkie, not a building contractor.

    This is at the core of the dispute. Contrary to what Lizzie is asserting, NS does not possess a contractor’s licence.

    On another note, we have Poenie thinking he has refuted Axe’s claim that Darwinian mechanisms are inadequate to explain proteins. But his argument rests on recombination, which he mentions is a phenomenon exhibited by sexually reproducing organisms, said sexual reproduction being unexplainable by NS. In addition to not being able to explain abiogenesis, it can’t explain endo-symbiosis, can’t explain proteins, can’t explain genes, can’t explain sensory systems, defense mechanisms, etc, etc, etc.

    So it is clear that NS didn’t create the starter kit, but simply is a component of a maintenance program that runs off the starter kit.

    Alan Fox: Lenski’s long-running experiment is widely considered to be a demonstration of the power of natural selection (and variation).

  21. The issue is that NS works off existing systems. It was/is incapable of creating itself.

    If this is not true, then anyone here should be able to explain the following by natural selection:

    1. endo-symbiosis
    2. sexual reproduction
    3. sensory systems
    4. defense systems
    5. digestive systems

    So far the only explanation for the above is that change happened to provide a survival advantage. That’s it. Nothing more.

    But what does that mean, survival advantage? Can you envision a survival advantage for early life that did not have the requisite systems in place to utilize any perceived advantage on your part?

    At the end of the day, it really does all boil down to an argument from fortuity.

    Rumraket: Yes, you see, that is the quintessential creationist strawman of evolution that they can never let go of. “You believe in chance”.
    It can never be emphasized enough, and possibly deserves a post of its own to really flesh out why, but evolution is not like throwing dice and hoping/expecting to just miraculously end up on “the one specific function we wanted”.

    There are two aspects to this strawman actually, and I think it also exists partially because creationists have trouble letting go of the idea that “that which has resulted from evolution was planned all along, otherwise it wouldn’t have evolved”. So in that respect, the probability that a flagellum evolved, out of the total space of phenotypical possibilities, is incalculably minuscule, and so the creationist reasons – it must have happened due to planning, designing and foresight, otherwise how could this specific thing have evolved?

    So they’re really making two fundamental mistakes. One is to ignore natural selection and thus reduce evolution to mere dice-rolls that miraculously happen to land on the “working flagellum” side from scratch.

    The other is that they can’t seem to grasp that evolution simply finds “something, anything that works”; it doesn’t care what this is and isn’t planning for it.
    In a different environment, given different preconditions and contingencies, perhaps on another planet, the flagellum wouldn’t have evolved, but something else equally unlikely would have. Evolution would simply have followed a different path, with mutation and drift sampling phenotypical space, and selection retaining advantageous variations.

  22. But what does that mean, survival advantage?

    I suggest you meditate on that for a little while.

    Can you envision a survival advantage for early life that did not have the requisite systems in place to utilize any perceived advantage on your part?

    Can you? I mean, that’s more relevant isn’t it?

    You can’t imagine. That’s all this is. A failure on your part.

    Please describe this “early life” in detail and I’ll tell you if I can imagine how a small change might provide a reproductive advantage without an array of requisite systems in place that are just waiting for that to happen and doing nothing otherwise (hint: sarcasm).

    In fact, please describe what a “requisite systems in place” might look like.

    Or, of course, you could explain how the designer got from “early life” to us.

    I suspect you’ll choose none of those options.

  23. Steve:

    Actually Alan,

    At best Lenski’s experiments show the oscillating nature of traits as induced via NS.

    I am not sure what you mean by “the oscillating nature of traits as induced via NS”, but it seems to imply that ‘natural selection’ (or, as I like to refer to it, environmental design – though it’s not catching on 🙁 ) causes variation. This is a misunderstanding. Evolutionary theory is based on the concept that imperfect copying of the genome (and other like processes, but we can come back to that) will result in genomic variety in individuals in a breeding population, such that differential breeding success will promote the survival of particular alleles (traits, if you like).

    It demonstrates NS is a maintenence junkie, not a building contractor.

    Again, not sure what you mean by this analogy. Passing a population through a sieve of survivability concentrates and fixes those genes in the population. Less successful combinations of genes will be lost. It is the reiterative process of ‘trial and error’ that can produce phenotypic ‘change over time’.

    This is at the core of the dispute. Contrary to what Lizzie is asserting, NS does not possess a contractor’s licence.

    Your analogy is not clear enough for me to be sure what you are disputing (other than the whole deal 🙂 )

  24. Steve:
    The issue is that NS works off existing systems. It was/is incapable of creating itself.

    Indeed. This is why it needs to be said loud and often that the theory of evolution does not attempt to explain the origin of life, just its diversity.

    If this is not true, then anyone here should be able to explain the following by natural selection:

    I doubt anyone here would claim to have all the answers to your list!

    1. endo-symbiosis

    The co-opting of cyanobacteria into chloroplasts is well illustrated merely by looking at transitional forms that exist today and I see no reason to suppose that these populations are not subject to selective processes.

    2. sexual reproduction

    I’d be interested in looking at what the current thinking on this is. Certainly sexual reproduction gives a huge boost to the selective process.

    3. sensory systems
    4. defense systems
    5. digestive systems

    I think these all follow from multicellularity and cell differentiation. Here again we can look at transitionals alive today to get clues as to what paths could have been followed. Clumped undifferentiated cells exist in nature (dinoflagellates seem a good example) so at all stages natural selection can act on these populations.

    Biology is a huge and constantly expanding field so, if you are genuinely interested, I am sure we can help you find current answers where they exist, however partially.

    I notice you did not pick up on my suggestion that a “design inference” is an utterly empty concept; merely something one is supposed to assume in the absence of other plausible explanations.

  25. Steve:

    1. endo-symbiosis
    2. sexual reproduction
    3. sensory systems
    4. defense systems
    5. digestive systems

    So far the only explanation for the above is that change happened to provide a survival advantage. That’s it. Nothing more.

    Your ignorance does not mean that others are equally ignorant.

    There is, you know, Google. Or even Google Scholar, with links to quite a few articles not behind paywalls. Or wiki-fucking-pedia even.

    Besides which, if you removed the blinkers you yourself could probably see the flaw in the insistence that a theory that has mountains of evidence supporting it must have the definitive answers to everything now.

  26. I’m struggling a little to understand Steve, here.

    It’s almost as if he’s imagining natural selection to be a mechanism built into the genome, rather than the result of a complex interaction between a population of organisms and its environment.
    At least, his analogies with contractors and the like suggest he sees it as something of an active agent.

  27. Alan Fox:
    damitall2,

    I’m sure this comment was intended for this thread.

    Alan, thanks!

    Perhaps I’m getting too old for the roiling cockpit of debate that is TSZ

  28. Bird beaks increasing and decreasing in size are a good example of oscillating traits. The genome adjusts for varying environmental conditions. It is obviously not the result of duplication errors. Duplication errors are searched out and repaired, eliminated as much as possible by the genome. So evolutionary theory, according to your definition, really has no role to play here.

    Alan Fox: I am not sure what you mean by “the oscillating nature of traits as induced via NS” but it seems to imply ‘natural selection’ (or as I like to refer to it environmental design – though it’s not catching on ) causes variation, This is a misunderstanding. Evolutionary theory is based on the concept that imperfect copying of the genome (and other like processes but we can come back to that) will result in genomic variety in individuals in a breeding population such that differential breeding success will promote the survival of particular alleles (traits, if you like).

    The process that maintains the genome is not the same as the one that built it. Since replication errors are eliminated by the genome, replication error cannot be the source of variation. Mutation is a controlled process, not a haphazard one.

    Alan Fox:Again, not sure what you mean by this analogy. Passing a population through a sieve of survivability concentrates and fixes those genes in the population. Less successful combinations of genes will be lost. It is the reiterative process of ‘trial and error’ that can produce phenotypic ‘change over time’.

  29. So OMagain, you seem to be saying one has to have a vivid imagination to believe in small, stepwise change. Er, yeah. That seems to be a true statement, OMagain.

    But the onus is on you to describe early life, as it is you who claims evolution works on the first life. Now the first simple life, if we take darwinian evolution to be true, did indeed start from humble beginnings. Just ask Lizzie. It was replicating RNA with inheritable variation, which of course led to the first simple cell, that simple cell consisting of a nucleus and lipid membrane. So there were no systems in place. And why should there be? It happened to emerge without them. And then it replicated. But if it was able to replicate alright without the systems, what was the point of complexification?

    So in fact, if evolution cannot explain the emergence of said systems, it is really an esoteric theory, explaining only the most rudimentary aspects of life’s development and maintenance.

    OMagain: I suggest you meditate on that for a little while.

    Can you? I mean, that’s more relevant isn’t it?

    You can’t imagine. That’s all this is. A failure on your part.

    Please describe this “early life” in detail and I’ll tell you if I can imagine how a small change might provide a reproductive advantage without an array of requisite systems in place that are just waiting for that to happen and doing nothing otherwise (hint: sarcasm).

    In fact, please describe what a “requisite systems in place” might look like.

    Or, of course, you could explain how the designer got from “early life” to us.

    I suspect you’ll choose none of those options.

  30. Love that mountains of evidence meme.

    Those mountains support only a superficial explanation of life.

    To be sure, I have not asked for a detailed explanation of each and every aspect of life’s development and maintenance.

    I am only asking for an explanation of the most important, core issues!!

    If Darwinian processes cannot explain these core developments, they don’t explain much.

    Remember, OoL refers only to the most rudimentary first proto-cell. This is Darwinian processes’ starting point, not the first multi-cellular organism complete with all the bells and whistles.

    Otherwise, darwinian processes have done no heavy lifting whatsoever.

    davehooke: Besides which, if you removed the blinkers you yourself could probably see the flaw in the insistence that a theory that has mountains of evidence supporting it must have the definitive answers to everything now.

  31. Steve: Since replication errors are eliminated by the genome, replication error cannot be the source of variation. Mutation is a controlled process, not a haphazard one.

    Wrong on both counts. Replication errors are not always eliminated, and more are constantly introduced by external influences such as toxins or ionising radiation, as well as internal factors such as polymerases of low fidelity. Cellular replication is usually an imperfect process – it’s one of the reasons why you are different from your parents.

    The claim that “mutation is a controlled process” requires evidence other than “because that is what I’ve been told, and want to believe”

  32. Steve: bird beaks increasing and decreasing in size is a good example of oscillating traits.

    Let me ask you, Steve:

    What is it that you think makes the traits oscillate, rather than move continuously in one direction?

  33. Love that designer meme. Evidence for Him – none. Evidence for evolution is accepted by almost all biologists.

    Those mountains are not an explanation for life at all. They are an explanation for the diversity of life. OOL is not in any way core to the explanation for the diversity of life. Zeus could have magicked the first life into existence and evolution would still be true, despite your risible assertions to the contrary.
