The eleP(T|H)ant in the room

The pattern that signifies Intelligence?

Winston Ewert has a post at Evolution News & Views that directly responds to my post here, A CSI Challenge, which is nice. Dialogue is good.  Dialogue in a forum where we can both post would be even better.  He is extremely welcome to join us here 🙂

 

In my Challenge, I presented a grey-scale photograph of an unknown item, and invited people to calculate its CSI.  My intent, contrary to Ewert’s assumption, was not:

…to force an admission that such a calculation is impossible or to produce a false positive, detecting design where none was present.

but to reveal the problems inherent in such a calculation, and, in particular, the problem of computing the probability distribution of the data under the null hypothesis: The eleP(T|H)ant in the room

In his 2005 paper, Specification: the Pattern that Signifies Intelligence, Dembski makes a bold claim: that there is an identifiable property of certain patterns that “Signifies Intelligence”.  Dembski spends the major part of his paper making three points:

    • He takes us on a rather painful walk-through of Fisherian null hypothesis testing, which generates the probability (the “p value”) that we would observe our data were the null to be true, and allows us to “reject the null” if our data fall in the tails of the probability distribution where the p value falls below our “alpha” criterion: the “rejection region”.
    • He argues that if we set the “alpha” criterion at which we reject a Fisherian null as 1/[the number of possible events in the history of the universe], no way, José, will we ever see the observed pattern under that null. To be honest, I’d be perfectly happy to reject a non-Design null at a much more lenient alpha than that.
    • He defines a pattern as being Specified if it is both
      • One of a very large number of patterns that could be made from the same elements (Shannon Complexity)
      • One of a very small subset of those patterns that can be described as simply as, or more simply than, the pattern in question (Kolmogorov compressibility)

He then argues that if a pattern is one of a very small specifiable subset of patterns that could be produced under some non-Design null hypothesis, and that subset is less than 1/[the number of possible events in the history of the universe] of the whole set, it has CSI and we must conclude Design.
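For reference, the bottom line of the 2005 paper can be written compactly (as I read it): with φ_S(T) counting the patterns describable at least as simply as the Target T, and 10^120 standing in for the replicational resources of the universe, Dembski’s “specified complexity” is

$$\chi = -\log_2\!\left[\,10^{120}\cdot \varphi_S(T)\cdot P(T\mid H)\,\right],$$

and Design is to be inferred when χ > 1. Every factor inside that bracket is manageable except one: P(T|H).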

The problem, however, as I pointed out in a previous post, Belling the Cat, is not computing the Specification (that is a bit of a problem, but not insuperable), nor deciding on an alpha criterion (as I said, I’d be perfectly happy with something much more lenient – after all, we frequently accept an alpha of .05 in my field, making appropriate corrections for multiple comparisons, and even physicists only require 5 sigma).  The problem is computing the probability of observing your data under the null hypothesis of non-Design.

Ewert points out to me that Dembski has always said that the first step in the three-step process of design detection is:

  1. Identify the relevant chance hypotheses.

  2. Reject all the chance hypotheses.

  3. Infer design.

and indeed he has.  Back in the old EF days, the first steps were to rule out “Necessity”, which can often produce patterns that are both complex and compressible (indeed, I’d claim my Glacier is one), as well as “Chance”, and to conclude, if these explanations were rejected, Design. And I fully understand why, for the sake of algebraic elegance, Dembski has decided to roll Chance and Necessity up together in a single null.

But the first task is not merely to identify the “relevant [null] chance hypothesis” but to compute the expected probability distribution of our data under that null, which we need in order to compute the probability of observing our data under that null, neatly written as P(T|H), and which I have referred to as the eleP(T|H)ant in the room (and, being rather proud of my pun, I have repeated it in this post title).  P(T|H) is the Probability that we would observe the Target (i.e. a member of the Specified subset of patterns) given the null Hypothesis.

And not only does Dembski not tell us how to compute that probability distribution, describing H in a throwaway line as “the relevant chance hypothesis that takes into account Darwinian and other material mechanisms”, but by characterising it as a “chance” hypothesis, he implicitly suggests that the probability distribution under a null hypothesis that posits “Darwinian and other material mechanisms” is not much harder to compute than that in his toy example, i.e. the probability distribution under the null that a coin lands heads and tails with equal probability, which can be readily computed using the binomial theorem.
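For the toy case, the whole calculation really does fit in a few lines. Here is a minimal sketch in Python (the numbers of tosses and heads are mine, purely for illustration):

```python
from math import comb

def p_value_at_least_k_heads(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the probability, under the
    fair-coin null, of a result at least as extreme as k heads."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative numbers: 100 tosses, 90 of them heads.
print(p_value_at_least_k_heads(100, 90))  # ~1.5e-17: reject the fair-coin null
```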

The “Darwinian and other material mechanisms” null, of course, is nothing like as tractable.  And what is worse is that using the Fisherian hypothesis testing system that Dembski commends to us, our conclusion, if we reject the null, is merely that we have, well, rejected the null.  If our null is “this coin is fair”, then the conclusion we can draw from rejecting this null is easy: “this coin is not fair”.  It doesn’t tell us why it is not fair – whether by Design, Skulduggery, or indeed Chance (perhaps the coin was inadvertently stamped with a head on both sides).  We might have derived our hypothesis from a theory (“this coin tosser is a shyster, I bet he has weighted his coin”), in which case rejecting the null (usually written H0), and accepting our “study hypothesis” (H1), allows us to conclude that our theory is supported.  But it does not allow us to reject any hypothesis that was not modelled as the null.

Ewert accepts this; indeed he takes me to task for misunderstanding Dembski on the matter:

We have seen that Liddle has confused the concept of specified complexity with the entire design inference. Specified complexity as a quantity gives us reason to reject individual chance hypotheses. It requires careful investigation to identify the relevant chance hypotheses. This has been the consistent approach presented in Dembski’s work, despite attempts to claim otherwise, or criticisms that Dembski has contradicted himself.

Well, no, I haven’t.  I’m not as green as I’m cabbage-looking.  I have not “confused the concept of Specified Complexity with the entire design inference”.  Nor, even, am I confused as to whether Dembski is confused. I think he is very much aware of the eleP(T|H)ant in the room, although I’m not so sure that all his followers are similarly unconfused – I’ve seen many attempts to assert that CSI is possessed by some biological phenomenon or other, with calculations to back up the assertion, and yet in those calculations no attempt has been made to compute P(T|H) under any hypothesis other than random draw.  In fact, I think CSI, or FCSI, or FCO are perfectly useful quantities when computed under the null of random draw, as both Durston et al. (2007) and Hazen et al. (2007) do. They just don’t allow us to reject any null other than random draw.  And this is very rarely a “relevant” null.
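To make “computed under the null of random draw” concrete, here is a minimal sketch of that kind of calculation in Python – Hazen-style functional information with made-up numbers, not Durston et al.’s actual pipeline:

```python
from math import log2

def functional_information(n_functional, n_total):
    """Hazen-style functional information: -log2 of the fraction of all
    possible sequences that perform the function, i.e. the improbability
    of hitting the functional subset under a uniform random-draw null."""
    return -log2(n_functional / n_total)

# Made-up numbers: a 100-residue protein (20**100 possible sequences),
# of which we suppose 10**20 perform the function in question.
print(functional_information(10**20, 20**100))  # ~366 bits
```

That figure is a perfectly good measure of improbability under random draw; it just says nothing about any null that includes selection.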

It doesn’t matter how “consistent” Dembski has been in his assertion that Design detection requires “careful investigation to identify the relevant chance hypothesis”. Unless Dembski can actually compute the probability distribution under the null that some relevant chance hypothesis is true, he has no way to reject it.

However, let’s suppose that he does manage to compute the probability distribution under some fairly comprehensive null that includes “Darwinian and other material mechanisms”.  Under Fisherian hypothesis testing, still, all he is entitled to do is to reject that null, not reject all non-Design hypotheses, including those not included in the rejected “relevant null hypothesis”.

Ewert defends Dembski on this:

But what if the actual cause of an event, proceeding from chance or necessity, is not among the identified hypotheses? What if some natural process exists that renders the event much more probable than would be expected? This will lead to a false positive. We will infer design where none was actually present. In the essay “Specification,” Dembski discusses this issue:

Thus, it is always a possibility that [the set of relevant hypotheses] omits some crucial chance hypothesis that might be operating in the world and account for the event E in question.

The method depends on our being confident that we have identified and eliminated all relevant candidate chance hypotheses. Dembski writes at length in defense of this approach.

But how does Dembski defend this approach?  He writes:

At this point, critics of specified complexity raise two objections. First, they contend that because we can never know all the chance hypotheses responsible for a given outcome, to infer design because specified complexity eliminates a limited set of chance hypotheses constitutes an argument from ignorance.

Yes, indeed, this critic does. But Dembski counters:

In eliminating chance and inferring design, specified complexity is not party to an argument from ignorance. Rather, it is underwriting an eliminative induction. Eliminative inductions argue for the truth of a proposition by actively refuting its competitors (and not, as in arguments from ignorance, by noting that the proposition has yet to be refuted). Provided that the proposition along with its competitors form a mutually exclusive and exhaustive class, eliminating all the competitors entails that the proposition is true.

OK, but…

But eliminative inductions can be convincing without knocking down every conceivable alternative, a point John Earman has argued effectively. Earman has shown that eliminative inductions are not just widely employed in the sciences but also indispensable to science.
Hold it right there.  When Earman makes his plea for eliminative induction, he says:

 Even if we can never get down to a single hypothesis, progress occurs if we succeed in eliminating finite or infinite chunks of the possibility space.  This presupposes of course that we have some kind of measure, or at least topology, on the space of possibilities.

Earman gives as an example a kind of “hypothesis filter” whereby hypotheses are rejected at each of a series of stages, none of which non-specific “Design” would even pass, as each requires candidate theories to make specific predictions.  Not only that, but Earman’s approach is in part a Bayesian one, an approach Dembski specifically rejects for design detection.  Just because Fisherian hypothesis testing is essentially eliminative (serial rejection of null hypotheses) does not mean that you can use it for eliminative induction when the competing hypotheses do not form an exhaustive class, and Dembski offers no way of doing so.

In other words, not only does Dembski offer no way of computing P(T|H) unless H is extremely limited, thereby precluding any Design inference anyway, he also offers no way of computing the topology of the space of non-Design Hypotheses, and thus no way of systematically eliminating them other than one-by-one, never knowing what proportion of viable hypotheses have been eliminated at any stage.  So his is, indeed, an argument from ignorance. Earman’s essay simply does not help him.

Dembski comments:

Suffice it to say, by refusing the eliminative inductions by which specified complexity eliminates chance, one artificially props up chance explanations and (let the irony not be missed) eliminates design explanations whose designing intelligences don’t match up conveniently with a materialistic worldview.

The irony misser here, of course, is Dembski.  Nobody qua scientist has “eliminated” a “design explanation”. The problem for Dembski is not that those with a “materialistic worldview” have eliminated Design, but that the only eliminative inductionist approach he cites (Earman’s) would eliminate his Design Hypothesis out of the gate. That’s not because there aren’t perfectly good ways of inferring Design (there are), but because by refusing to make any specific Design-based predictions, Dembski’s hypothesis remains (let the irony not be missed) unfalsifiable.

But until he deals with the eleP(T|H)ant, that’s a secondary problem.

 

Edited for typos and clarity

 

62 thoughts on “The eleP(T|H)ant in the room”

  1. Well, we know Dembski is arguing from a foregone conclusion, and constructing a system of rationalizations. And we know that Dembski knows that his foregone conclusion is unable to actually explain anything, rendering it incapable of making any predictions.

    I suppose it’s fun to poke holes in it, and find the errors in the rationalization. But underneath all this verbiage, both the motivations and the errors are fairly transparent. We know 2+2 does not equal four because God Said So. And therefore we can conclude that 2+2=22, which we knew all along anyway, because we’ve eliminated all other competing sums. Trust me.

  2. Lizzie,

    However, let’s suppose that he does manage to compute the probability distribution under some fairly comprehensive null that includes “Darwinian and other material mechanisms”.

    It’s ironic that ID proponents are always demanding mutation-by-mutation accounts of how this or that biological feature evolved, because that is the level of detail they must provide in order to justify the values they assign to P(T|H). It’s even worse for them, in fact, because P(T|H) must encompass all possible evolutionary pathways to a given endpoint.

    P.S. Winston’s last name is “Ewert”, with two E’s.

  3. I haven’t read it yet, but I would rather appreciate it if you’d correct the spelling of my last name. Thanks!

  4. A very good and pointed summary.

    When we discussed this before, I summarized the situation thus:

    a) We are trying to see whether natural selection and random mutation (and similar evolutionary forces) can explain why we have an adaptation that is as good as it is.

    b) If not, then we conclude for Design.

    So according to Dembski’s protocol we
    1. Look at the adaptation and what might have brought it about.
    2. See if we can rule out RM+NS.
    3. If we can, then we conclude for Design.
    and, oh yes, in that case, and in that case only, we declare that points 1, 2, 3 show that it has CSI.

    Now note that the declaration that it has CSI is simply an afterthought, and does not even occur until we have already reached stage 3. So the concept of CSI is not at all central to the design detection.

    There are many of Dembski’s friends at UD who have declared, loudly and proudly, that CSI is a property that can be detected without knowing how an object came about. And that having CSI shows that the object was designed.

    Apparently they have all been wrong all this time, which speaks for a charge of lack of clarity against Dembski’s works.

  5. Joe,

    There are many of Dembski’s friends at UD who have declared, loudly and proudly, that CSI is a property that can be detected without knowing how an object came about.

    They are technically correct. You don’t have to know how an object actually originated to decide that it has CSI, but you do have to know the probability that it could have been produced via “Darwinian and other material mechanisms” — P(T|H), in other words.

    The problem, as you say, is that CSI is an afterthought. You already have to know that something could not have evolved before you attribute CSI to it. Thus CSI is useless for demonstrating that something could not have evolved.

    I pointed this out to Dembski in 2006 and to many other ID proponents since then. I’ve never seen him, or them, acknowledge the circularity.

  6. But they already DO know something (life) could not have evolved. All that CSI life has is just a way of expressing and reflecting this knowledge.

    I hope we sometimes step back from the scientific approach where conclusions come last, to recognize that this simply isn’t the case for Dembski at all. For Dembski and all the UD folks, their conclusions drive their reasoning and not the other way around.

    Technically, the entire design inference is an afterthought.

  7. This is a very nice summary of the ID/creationist position.

    As Flint points out, they already know the “answer” they want. The game, therefore, is to bury those preconceptions under a pile of “math” while at the same time not learning enough science to see any “distracting” possibilities in the natural world.

    In fact getting fundamental scientific concepts wrong and foisting those misconceptions on the unsuspecting public simply makes ID/creationist arguments seem more plausible to those with little or no scientific education. The science – as ID/creationists tell it – can’t possibly get the job done, as anyone off the street can see (this was actually a recent argument by Sewell).

    The real story is pretty much the opposite of the Dembski narrative; knowing the relevant science puts more possibilities in front of us than we can check in a lifetime. Finding the “recipe” of life is a daunting task because there are so many possible lines to explore. Thus, part of the research involves checking other places that might harbor life. This is done in order to try to bracket the problem.

    I think Dembski’s filter might better be turned around. Eliminate all the deities that may or may not have been involved in the origins and evolution of life. There are not only thousands of them; there are thousands of interpretations of the characteristics of each of these deities.

    Since sectarians cannot agree among themselves – even to the point of centuries of bloodshed – what even one of these deities is like; we place deities at the extreme low probability end of the spectrum, leave the ID/creationist misconceptions and mischaracterizations of science behind, and carry on doing science with a clear conscience. Life is short; and there are many scientific avenues to explore.

    Just because ID/creationists are unable to come up with scientific research programs doesn’t mean that more knowledgeable and talented people can’t.

  8. Dembski is notorious for scoffing that

    ID is not a mechanistic theory, and it’s not ID’s task to match your pathetic level of detail in telling mechanistic stories.

    His statement was mocked for obvious reasons, but it was also unintentionally prophetic. He’s right that ID’s job isn’t to match evolution’s “pathetic level of detail” — ID has to exceed that level of detail in order to establish the value of P(T|H). Without a value for P(T|H), or at least a defensible upper bound on its value, the presence of CSI can never be demonstrated — by Dembski’s own rules.

    Think of what that would involve in the case of biology. You’d not only have to identify all possible mutational sequences leading to the feature in question — you’d also have to know the applicable fitness landscapes at each stage, which would mean knowing things like the local climatic patterns and the precise evolutionary histories of the other organisms in the shared ecosystem.

    If he didn’t realize it then, Dembski must certainly see by now that it’s a quixotic and hopeless task. That may be why he’s moved on to “the search for a search”.

  9. I don’t know if anyone else has read Earman’s essay/book (there’s a pdf of it, linked to in the OP), but it would be interesting to apply his eliminative induction to the essays in the “Cornell” collection 🙂

    For instance, we could eliminate all the YEC theories on the first pass. Probably most of the Separate Creation for Humans theories on a second, including Adam and Noah.

    We’d probably be left with Behe, by which time we would have eliminated a human-health-prioritising God.

    And we still wouldn’t have eliminated any non-materialist theory except on grounds of being incomplete, which all scientific theories are anyway.

  10. No, I haven’t yet replied to “Information, Past and Present”. I have been a little busy, but I will get to it. The issues there are (1) whether Dembski was clear about CSI having the P(T|H) step in writings such as the book No Free Lunch, and (2) whether he has any place in his Search For a Search where he shows that natural selection cannot have produced the adaptations that we see. I have been reading my way through NFL. Little matters like research, summer teaching, and grant-writing keep getting in the way.

    The remaining big issue is the one that you are so nicely dealing with here, whether the version of Dembski’s argument that he gives in his 2006 paper has any way of dealing with the EleP(T|H)ant in the room. As you argue, it doesn’t. And that relegates the CSI step to total irrelevance. Not one of these folks has demonstrated that we need the concept of CSI to do their design inference.

  11. Welcome to TSZ, Mr Ewert. Lizzie has already corrected the mis-spelling. Apologies that your comment lay pending for a while. Any further comment should appear immediately. Nothing personal, it’s the norm for any new registration.

  12. Winston Ewert has commented upthread but was held in moderation. So, no headsup needed.

  13. Richardthughes:
    Exactly so, and more. They need all *possible* accounts and paths.

    True, but even with that “pathetic level of detail” the lottery winner fallacy is still present. Considering one isolated biological artifact ignores the fact that evolutionary mechanisms are capable of generating a phenomenal range of outcomes. The implicit assumption that humans or bacterial flagella or beetles are a target or intended outcome is not valid.

  14. The important concept for both biologists and IDists is whether the history of a genome contains any implausible steps.

    Not knowing the history in pathetic detail makes this determination impossible.

  15. winstonewert:
    I haven’t read it yet, but I would rather appreciate it if you’d correct the spelling of my last name. Thanks!

    Yes, welcome!

    And apologies again for mis-spelling your name. I’m usually good at getting names right, but I should be less trusting of my own accuracy!

  16. I don’t see why the assumption that humans are an intended outcome is not valid. I don’t think validity is relevant when intention cannot be either established or discarded.

    But I don’t think ID people are concerned with a history of “plausible steps”. In their model, there was only a single step – poof – and whether or not you find this plausible is up to you.

    Evidence really doesn’t much matter to a model not built on evidence.

  17. Patrick:

    True, but even with that “pathetic level of detail” the lottery winner fallacy is still present. Considering one isolated biological artifact ignores the fact that evolutionary mechanisms are capable of generating a phenomenal range of outcomes. The implicit assumption that humans or bacterial flagella or beetles are a target or intended outcome is not valid.

    Their lottery winner fallacy has in recent years (especially since Edwards v. Aguillard in 1987) been focused on things like molecular assemblies, such as proteins and DNA. While this singles out the molecules of life – and replicating molecules in particular – it now places the ID issue among all molecular assemblies.

    And this is where ID/creationist arguments really get weird; they have to assert that certain molecular assemblies are due to “chance and necessity,” but others above a certain threshold of complexity require assistance from some intelligent input.

    Where along this chain of increasing complexity do the laws of physics and chemistry stop and intelligence have to take over in order to do the job that physics and chemistry “cannot do”?

    The sample space in all these CSI calculations is always an “ideal gas” of inert atoms that are supposed to just come together into some precisely specified configuration. Strings of letters and numbers are now used as stand-ins for atoms and molecules in these calculations.

    But if one is going to play this game, why not calculate the CSI of a rock (we did that here and on Panda’s Thumb)? Why can’t a specified rock be the target of an atomic/molecular assembly? We are at the atomic/molecular level now. Rocks and crystals self assemble; and crystals in particular can replicate under suitable conditions. In fact, DNA is a quasi-crystal; so why not include rocks?

    Where is the cutoff between assemblies that can occur due to “chance and necessity” – I really dislike that mischaracterization – and those that require intelligent guidance?

    Think about just how weird this ID picture is. There are all these atomic and molecular assemblies out there in the universe that can come together by “chance and necessity;” but there is this island of assemblies that suddenly are the result of ideal gases of inert atoms and molecules being jockeyed into specified positions.

    It’s as though atoms and molecules that are about to assemble into certain specified arrangements suddenly lose all their properties and have to seek help from some intelligent being. How do they do that?

  18. Intelligence (on observation) needs fuel – molecular fuel. We can certainly apply a certain amount of intent to generate assemblies that did not ‘just happen’. But we had lunch, and it wasn’t free.

    Even weirder is the insistence that the very existence of atoms themselves required intelligence to bring it about. I really don’t know how you’d distinguish that kind of ID!

  19. Mike Elzinga,

    Isn’t the essence of the Probability Bound calculation that they are arguing that a result this good (this far out into the tail of the distribution) cannot occur even once in the whole history of the universe? So that is the border between “chance and necessity” and design?

    Or do I misunderstand your argument.

  20. Flint: “I don’t see why the assumption that humans are an intended outcome is not valid. I don’t think validity is relevant when intention cannot be either established or discarded.”

    Fair enough. Let me rephrase as: The implicit assumption that humans or bacterial flagella or beetles are a target or intended outcome must be made explicit and supported. Otherwise the lottery winner fallacy is still present.

  21. Hi. I gave up looking at ID some years ago. But I came across your interesting blog, and couldn’t resist commenting.

    First, on “eliminative inductions”. I think we might reasonably consider various non-design hypotheses, reject them, and then infer design. But I suggest that if we’re justified in doing so it’s because we are (perhaps intuitively) weighing the merits of the design hypothesis against the plausibility of there being some further unconsidered explanation. Dembski wants us to accept the design hypothesis without considering its merits. No thanks. I like to consider the merits of a hypothesis before I accept it.

    Dembski is extremely vague about how to apply his method to real cases. The bacterial flagellum was supposed to be his flagship case. But in that case he hardly applied his own method. Instead he relied primarily on an argument from irreducible complexity. The one “chance hypothesis” he considered was purely random combination of parts, which no one seriously proposes, so it was effectively irrelevant. And even in considering that hypothesis he omitted most of the apparatus of his Fisher-based statistical method.

    I won’t address here the validity of that method for eliminating individual hypotheses. But it seems that Ewert has failed to apply it correctly.

    Ewert: In the case of the model used in “Specification: The Pattern that Signifies Intelligence,” we need to determine the specification resources. This is defined as being the number of patterns at least as simple as the one under consideration. We can measure the simplicity of the image by how compressible it is using PNG compression. A PNG file representing the image requires 3,122,824 bits. Thus we conclude that there are 2 to the 3,122,824th power simpler or equally simple images.

    In fact, Dembski defines the specificational resources as “the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T.” Surely Ewert should be calculating the complexity of the description of his chosen rejection region, not the complexity of the image. Unfortunately he doesn’t identify what rejection region (T) he is using, nor provide a probability calculation or a calculation of replicational resources. We have no way of checking his calculations.

    For two of his hypotheses Ewert calculates negative figures for so-called “specified complexity”, which correspond to probabilities greater than 1! If the untransformed numbers (before applying the -log2 transformation) are really probabilities, they shouldn’t exceed 1. If they’re not really probabilities, then there’s even less justification for applying this transformation than there was when Dembski claimed to be transforming probabilities into information measures. Either way, the transformation by -log2 is entirely superfluous, and this isn’t a genuine measure of complexity.

  22. Joe Felsenstein: Mike Elzinga, Isn’t the essence of the Probability Bound calculation that they are arguing that a result this good (this far out into the tail of the distribution) cannot occur even once in the whole history of the universe? So that is the border between “chance and necessity” and design? Or do I misunderstand your argument.

    On page 23 of his “Specification” paper Dembski uses a legitimate physics calculation by Seth Lloyd in Physical Review Letters that estimates that it takes 10^120 logical operations to specify the entire universe; which includes everything in it. Here is the sentence from page 23 of Dembski’s “Specification” paper.

    Theoretical computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history.

    So all Dembski is doing with his CSI is taking the logarithm to base 2 of Np, where N = 10^120, and then asserting that p has to be greater than or equal to 1/10^120 in order for there to be at least one occurrence of a specified event such as the origin of a living molecule.
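    In symbols, setting aside the specificational-resources factor, that assertion amounts to no more than

    $$-\log_2\!\left(N\,p\right) > 0 \;\Longleftrightarrow\; p < \frac{1}{N} = 10^{-120},$$

    so a positive “specified complexity” is just a restatement of the claim that p < 10^-120, and the quantity goes negative whenever Np exceeds 1.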

    Taking log to base 2 and calling it CSI simply obscures this simple calculation as well as the assertion Dembski is making in his calculations.

    Ironically, Lloyd’s calculation already includes life forms; which obviously means that p is much greater than 1/N because life forms are already an existing subset of the universe; but Dembski didn’t appear to notice this.

  23. I honestly find the Seth Lloyd thing (even if correct, which I understand is in doubt) completely irrelevant. As I’ve said, I’d be perfectly happy with a much more lenient rejection alpha. 5 sigma would do me fine.

    It’s how you actually compute the probability distribution that concerns me, not where you make the cut-off having computed it.

  24. I would hazard a speculation about that. I suspect that Dembski picked that number because it came from a paper in PRL.

    Lloyd’s estimate is interesting because it is a legitimate calculation that tries to get a handle on how much computing power is needed to do a complete simulation of the universe and its history.

    It’s a rough estimate at best; but it shows how information is used properly in physics and computing. For example, “information” is connected to entropy, and thereby properly to energy states by the amount of energy required to flip bits in a computer.

    If I am recalling correctly, in earlier attempts by ID/creationists, they didn’t want to leave any doubt about a bound on a probability (CSI), so they had some of their own estimates of how much “information” is contained in the universe. None of these estimates had anything to do with energy; they were just enumerations of configurations.

    By picking a paper from PRL that happened along at a propitious moment in the evolution of his calculations, Dembski borrows “legitimacy” from a paper in a prestigious physics journal.

    I don’t believe Dembski read or understood Lloyd’s paper; I think he just copied what he wanted from the abstract.

    That’s my speculation. Could be wrong, but it would be consistent with ID/creationist history and tactics.

  25. P.S. My point above about the bacterial flagellum was based on Dembski’s treatment of the subject in NFL. I’d forgotten that he returned to that subject in the “Specification” paper. There he attempts to apply the latest version of his statistical method to “an evolutionary chance hypothesis”. I think his attempt to measure specificational resources, based on the number of words in the description “bidirectional rotary motor-driven propeller”, is pretty weak. But more important, he is unable to calculate P(T|H). So he hasn’t managed to infer design in biology, even with his own method.

    I think this inability to calculate P(T|H) in the case that matters most to him is the reason why he has been moving away from this empirical method towards an attempt to make an in-principle argument based on search algorithms. He thinks that can free him of the pesky need to provide a probability calculation or any other empirical evidence. Of course, it’s pie in the sky.

  26. It’s how you actually compute the probability distribution that concerns me, not where you make the cut-off having computed it.

    I think your analysis shows very nicely the equivalent way of looking at the probabilities given the hypothesis of “chance” and necessity.

    At, say, your five-sigma alpha, there would have to be a large number of trials in order to produce an instance that far out in the tail of the distribution.

    If there hasn’t been enough time in the history of the universe to run enough trials that would produce a specified event, then the probability of that event is certainly far out on the tail of some distribution.

    But as you have pointed out, Dembski can’t tell us what that distribution looks like. He simply declares that chance configurations of certain atoms and molecules can’t occur in the history of the universe given some upper limit on how fast trials can take place. If I recall correctly, that frequency was something like the inverse of the Planck time.

  27. TSZ is back? Grats.

    Lizzie: In my Challenge, I presented a grey-scale photograph of an unknown item, and invited people to calculate its CSI.

    Why do you expect people to be able to calculate the CSI of some unknown item when you can’t even accurately calculate the CSI for your own CSI demonstration?

  28. Mung,

    “People”, including ID proponents, can’t determine the CSI of known biological items, much less unknown ones, because they can’t determine P(T|H).

    That’s the point of this thread. I suggest you read the OP and the comments.

    The reason that ID proponents can’t determine CSI in actual biological cases is that they can’t calculate P(T|H) when H is “the relevant chance hypothesis that takes into account Darwinian and other material mechanisms.”

    Instead, they typically compute P(T|H) using a much simpler H: the hypothesis that something came about through pure random luck with no selection. It’s much easier to use this H, but the answers are bogus because the simpler H is not the real H, which involves not only randomness but also non-random selection.

    Lizzie’s earlier post was written when she still believed that the H in P(T|H) was pure randomness. Most likely she believed it because so many ID supporters have (mistakenly) interpreted it that way.

    However, Dembski specifies that the H in P(T|H) is “the relevant chance hypothesis that takes into account Darwinian and other material mechanisms.”

    Can you calculate it for an actual biological feature, Mung?

    I remember that you had a lot of trouble grasping the concept of P(T|H) when we last discussed it. (At one point you even thought that T|H was a fraction, with T in the numerator and H in the denominator!)

    P(T|H) is what’s known as a conditional probability. You might want to grab a beginner’s book on probability and read about it before commenting further.

  29. Mung:
    TSZ is back? Grats.

    Lizzie: In my Challenge, I presented a grey-scale photograph of an unknown item, and invited people to calculate its CSI.

    Why do you expect people to be able to calculate the CSI of some unknown item when you can’t even accurately calculate the CSI for your own CSI demonstration?

    Well, Mung, the “people” in question—the people to whom Lizzie directed her query—are ID-pushers who are stridently insistent that yes, this CSI thingie damn well is a real thing, and this CSI thingie damn well is measurable, and this CSI thingie damn well can be used to distinguish between Design and Not-Design.
    This being the case, it makes perfect sense to ask CSI-lovin’ ID-pushers to, like, determine the CSI of an arbitrary target (a photographic image, in this case). And if someone who isn’t a CSI-lovin’ ID-pusher can’t use CSI the way that CSI-lovin’ ID-pushers claim CSI can be used… so fucking what? The question is whether or not CSI-lovin’ ID-pushers can use CSI the way that CSI-lovin’ ID-pushers claim CSI can be used!
    And in this case, it seems that CSI-lovin’ ID-pushers… couldn’t use CSI the way that CSI-lovin’ ID-pushers claim CSI can be used.
    Tell me, Mung: If the proponents of a putative ‘scientific theory’ can’t substantiate the claims they make in support of their putative ‘theory’, exactly why should anybody else give two wet farts in a hurricane about said putative ‘theory’?

  30. Could someone clarify this for me?

    Upthread it seems that several people are saying that every possible mutational sequence (sequence in time) occurring in a set of nucleotides represents an H in the sense of “P(T|H)”. That is to say, any investigator wanting to eliminate “chance and necessity” or any other non-design cause, would need to work through all possible mutational sequences to prove they couldn’t have done it?

  31. timothya,

    That is to say, any investigator wanting to eliminate “chance and necessity” or any other non-design cause, would need to work through all possible mutational sequences to prove they couldn’t have done it?

    It depends on what you mean by “work through”.

    Dembski’s approach depends on being able to eliminate all non-design explanations, so every possible non-design cause must at least be considered. However, it may be possible to reject some of them without doing a detailed analysis.

    For example, the probability of the vertebrate eye evolving in a single generation is vanishingly small. It’s not impossible, but the associated probability is so small as to be negligible. It will have almost no effect on the overall P(T|H) and can therefore be neglected.

    The problem for Dembski et al is that even without considering these vastly improbable outliers, the difficulty in calculating P(T|H) for a complicated biological structure is overwhelming. The required information is simply not available. That’s why IDers haven’t done it, and that’s why no one expects them to.

  32. CSI as a practical calculation in biology would appear to suffer fatally from combinatorial explosion, since results come from series of events. Going backwards, each genetic change A depends upon the prior situation B: P(A|B), the probability of A given B. Since B was itself the result of a prior (x|y), we have a nesting of probabilities P(A|(B|(C|(D| … )))), and could rapidly exceed the UPB for any series if ‘this’ is the target. But one would need to know the density of outcomes with this level of ‘surprise’ or greater in the (vast) overall space of event series to know how unlikely this one was.

    One could try applying the notion to one’s own genome, for example. One’s genetic constitution is the result of that of one’s parents, which in turn derives from theirs … each is a probabilistic sampling of a prior situation. You don’t have to go back very far before the number of ‘possible individuals’ who aren’t me exceeds the UPB – and keeps growing. Yet here I am, improbably.
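    A toy version of that back-of-the-envelope: the sketch below (Python, with numbers I am assuming purely for illustration – 46 bits of ‘surprise’ per conception from independent assortment of 23 chromosome pairs per parent, no recombination, no mutation, no pedigree collapse) shows how quickly one exact pedigree blows past Dembski’s 10^-120 bound.

```python
from math import log2

UPB_BITS = 120 * log2(10)   # Dembski's 10^-120 universal probability bound, in bits (~398.6)
BITS_PER_CONCEPTION = 46    # assumed: independent assortment of 23 chromosome pairs
                            # from each parent only; no recombination or mutation

def pedigree_improbability_bits(generations):
    """Total 'surprise' (in bits) of one exact pedigree outcome: every conception
    in a full pedigree of the given depth, assuming no pedigree collapse."""
    conceptions = 2 ** generations - 1
    return conceptions * BITS_PER_CONCEPTION

for g in range(1, 6):
    bits = pedigree_improbability_bits(g)
    print(g, bits, bits > UPB_BITS)
# By generation 4 (15 conceptions, 690 bits) the exact outcome is already
# 'more improbable' than the universal probability bound. Yet here we all are.
```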

  33. Very nicely articulated and more clear than my somewhat terse comment on the Lottery Winner fallacy. Evolutionary mechanisms operating in the environment we observe have a vast number of potential results. To be useful, Dembski’s calculations must include something like your “density of outcomes with this level of ‘surprise’ or greater.”

    That strikes me as at least as difficult to calculate as P(T|H) for a single artifact.

  34. Allan Miller: But one would need to know the density of outcomes with this level of ‘surprise’ or greater in the (vast) overall space of event series to know how unlikely this one was

    Well, essentially, this is what Dembski is getting at with his concept of “Specification”.

    But giving it a name doesn’t make it calculable.

  35. Considered more generally, essentially everything that happens in life is the result of a sequence of contingent prior events, every one of which is vanishingly unlikely, and the particular sequence even moreso. It takes only a very short time for everything that happens to exceed the UPB, from any given prior set of conditions. Trying to use math to eliminate all of reality from instant to instant as “too unlikely” is prima facie folly.

    Faced with all this, most of us here shrug because that’s how reality MUST operate. Some regard every event from the quantum level on up as being Divinely guided in real time (and who can say they’re wrong?) The UD people try to eliminate all this by (1) asserting POOF as the historical mechanism; and (2) constraining the scope of their vision so as to eliminate all the “noise” which is the substance of reality.

    At some point, “goddidit” morphed from a one-size-fits-all non-explanation of what’s not understood, into an actual reified “invisible superman” intelligent agent. Working backwards to demonstrate the reality of the imaginary is guaranteed to be both nonsense, and incurable.

  36. Mung:
    TSZ is back? Grats.

    Lizzie: In my Challenge, I presented a grey-scale photograph of an unknown item, and invited people to calculate its CSI.

    Why do you expect people to be able to calculate the CSI of some unknown item when you can’t even accurately calculate the CSI for your own CSI demonstration?

    Hi, Mung. As others have said, I calculated the CSI for the demonstration you linked to on the assumption that H was “random draw”. This is what Kairosfocus, for example, does here, as you can see from the fact that he takes (at the bottom of his OP) Durston et al’s fits for random-draw, without alteration, and interprets them as estimates of P(T|H).

    However, at the time when I wrote that OP, I had indeed overlooked the fact that Dembski adds the rider “that takes into account Darwinian and other material mechanisms” to his description of the “relevant chance hypothesis”.

    My challenge (which you are welcome to take up) is how you compute an appropriate P(T|H) for the “relevant Chance hypothesis” where this is not restricted to random-draw.

    For example, for a biological phenomenon.

  37. Well, essentially, this is what Dembski is getting at with his concept of “Specification”.

    “Specification” is Dembski’s attempt at dealing with the fact that vastly improbable things happen all the time. Problem is, specifications are usually too specific.

    For example, Dembski knows that he would be committing the lottery winner fallacy if he claimed that the bacterial flagellum, exactly as it appears today, was evolution’s “target”. Instead, he broadens the specification to include any “bidirectional rotary motor-driven propeller.”

    But this is still far too specific. Even “propulsion system” is too specific, because evolution didn’t set out to produce a propulsion system. Evolution’s only “target” is differential reproductive advantage, and even then the word “target” is too strong.

    Allan’s formulation is closer to what Dembski should have been shooting for:

    But one would need to know the density of outcomes with this level of ‘surprise’ or greater in the (vast) overall space of event series to know how unlikely this one was.

  38. It had occurred to me that this probability calculation of a specified target at the end of a long chain of contingencies was at the heart of what ID/creationists were actually trying to assert happens; but then I thought that was just too …, well, stupid. I figured they simply meant something like tornados-in-a-junkyard or things falling out of an ideal gas of inert stuff.

    But it appears you may be right; they really do calculate the probability of a specified event at the end of a chain of contingencies.

    My impression of how they “take into account Darwinian and other material mechanisms” is that they always misrepresent them with some type of emotionally loaded caricature that appeals to their sectarian base. These mechanisms can’t possibly work because they are the result of “materialistic thinking,” which rules out design from the beginning.

    It may be too much to expect that ID/creationist “arguments” would be consistent among themselves. Historically they appear to be the result of their latest emotional outbursts at “Darwinists,” “materialists,” and atheists.

    If there is anything consistent in ID/creationist thinking, it would be circularity. They already have the answer they want; they just have to court-proof it by making it look like science in order to get it into public education.

    Their latest push – if we are to take UD as a barometer of their thinking – appears to be making “materialism” to be a competing “philosophy” in the most pejorative sense. “Materialism” clouds the mind and makes it impossible for “materialists” to understand simple CSI calculations. According to UD, we haven’t deconstructed CSI here because we don’t understand CSI.

    The culture war continues.

  39. Me:

    But one would need to know the density of outcomes with this level of ‘surprise’ or greater in the (vast) overall space of event series to know how unlikely this one was.

    Taken in isolation, I now struggle to parse my own statement!

    The difficulty is in factoring in the role of selection. When there is a beneficial phenotype, the ‘surprising’ result is its extinction. But that still can happen with nonzero probability, because s values are typically small. Yet, because there are many different genetic paths to an equivalent phenotype, and many potential phenotypes that could be beneficial, the population has multiple bites at the cherry, and favourable accessible phenotypes are likely to arise.

    With a process that favours serial adaptation, the vast ‘random’ space is substantially reduced and channeled. Nonetheless, the challenge of calculating likelihood is in no way diminished, not least because historic benefit depends on lost information.
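    To put a number on “multiple bites at the cherry”, here is a minimal sketch (Python, purely illustrative figures) using Haldane’s classic ~2s approximation for the fixation probability of a new beneficial mutation:

```python
def p_fix_at_least_once(s, independent_origins):
    """Probability that at least one of several independent occurrences of the
    same beneficial mutation escapes stochastic loss, taking ~2s per origin
    (Haldane's approximation, valid for small s in a large population)."""
    return 1 - (1 - 2 * s) ** independent_origins

# Illustrative figures: s = 0.01, so each origin fixes only ~2% of the time,
# but with 200 independent origins the odds of eventual fixation are ~98%.
print(p_fix_at_least_once(0.01, 1))    # ~0.02
print(p_fix_at_least_once(0.01, 200))  # ~0.98
```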
