(5th April, 2013: stickying this, for a bit, as it has come up. Mung might like to comment).
Time to look at Dembski's concept of CSI in detail, I think.
His definitive paper to date on CSI is Specification: The Pattern That Signifies Intelligence. It is very clearly written and not very mathy, but, by the same token, it is a paper in which it is easy (IMO) to see where he goes wrong.
Here is the abstract:
ABSTRACT: Specification denotes the type of pattern that highly improbable events must exhibit before one is entitled to attribute them to intelligence. This paper analyzes the concept of specification and shows how it applies to design detection (i.e., the detection of intelligence on the basis of circumstantial evidence). Always in the background throughout this discussion is the fundamental question of Intelligent Design (ID): Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? This paper reviews, clarifies, and extends previous work on specification in my books The Design Inference and No Free Lunch.
It’s in eight sections, and the argument in brief goes like this:
We cannot conclude that a pattern was Designed just because it is one of a vast number of possible patterns; to conclude Design, it has to belong to a small subset of those patterns that conforms to some kind of specification. Fisher proposed that if a pattern of data fell in the tail of the probability distribution (aka Probability Density Function, PDF) of patterns of data under some null hypothesis, we could reject the null, but he didn't give a clear rationale for the cut-off point. Dembski suggests that if we have sequence data for which a very large number of sequences would be possible under a non-Design hypothesis, and those sequences are binned according to their "compressibility" (ease of description), then they will form a probability distribution with a tail consisting of a small subset of easy-to-describe patterns. Under the non-Design null hypothesis, these easy-to-describe patterns will occur only rarely, so if the number of opportunities for one to occur is small enough, observing one entitles us to reject non-Design. And if the number of opportunities required to give such a pattern a sporting chance of occurring even once exceeds the number of events that have occurred in the history of the universe, then we can confidently reject non-Design and infer Design.
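For concreteness, the quantitative criterion the paper arrives at, if I'm reading it correctly, is the "specified complexity"

$$\chi = -\log_2\bigl[\,10^{120}\cdot\varphi_S(T)\cdot P(T\mid H)\,\bigr],$$

where $T$ is the observed pattern, $H$ the relevant chance (non-Design) hypothesis, $P(T\mid H)$ the probability of $T$ under that hypothesis, $\varphi_S(T)$ the number of patterns at least as easy to describe as $T$, and $10^{120}$ his upper bound on the number of events in the observable universe; on his account a Design inference is warranted when $\chi > 1$.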
If any ID proponents think I have mischaracterised Dembski’s argument, I welcome your comments. But, assuming I have this broadly right, here are the problems as I see them:
- He does not attempt to characterise the probability distribution of his compressible sequences under his "non-Design" null; he simply assumes that only Design processes could reliably produce highly compressible patterns that would be improbable under a process assigning each element of the sequence independently of every other. He does not attempt to argue why this should be the case, and it demonstrably is not.
- He does not show how compressibility should be measured. (In fact Hazen et al, as discussed here, do IMO a much better job by substituting functional efficiency for compressibility, but their paper does not help Dembski's case.)
- He ignores the fact that the very easiest-to-describe sequences (e.g. rank-ordered sequences) are readily produced by non-Design sorting processes, yet can be highly "complex" (i.e. each is one of a vast number of possible sorted and unsorted sequences), e.g. the pebbles of Chesil Beach, graded by size along its length (see the sketch after this list).
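To make the third objection concrete, here is a minimal sketch (my own toy illustration, not anything from Dembski's paper or from Hazen et al): a blind pairwise-swap rule, a crude stand-in for the wave action that grades the pebbles at Chesil Beach, takes a uniformly random sequence and leaves it fully sorted. Using zlib compressed length as a rough proxy for ease of description, the output is dramatically more compressible than the input, yet nothing with foresight was involved.

```python
# A blind sorting process producing a highly compressible ("easy to describe") sequence.
# zlib compressed length stands in for ease of description, and a repeated pairwise-swap
# rule stands in for wave action grading pebbles by size; neither is Dembski's measure.
import random
import zlib

def compressed_length(seq):
    """Rough proxy for descriptive complexity: length of the zlib-compressed bytes."""
    return len(zlib.compress(bytes(seq)))

random.seed(1)
pebbles = [random.randrange(64) for _ in range(2000)]  # 2000 pebbles in 64 size classes, uniform at random

print("before:", compressed_length(pebbles))  # near-random input compresses poorly

# The "process": repeatedly scan the line of pebbles and swap any out-of-order neighbours.
# No foresight and no target pattern -- just a local rule applied until nothing changes.
changed = True
while changed:
    changed = False
    for i in range(len(pebbles) - 1):
        if pebbles[i] > pebbles[i + 1]:
            pebbles[i], pebbles[i + 1] = pebbles[i + 1], pebbles[i]
            changed = True

print("after: ", compressed_length(pebbles))  # the sorted output compresses to a small fraction of that
```

The sorted output is still one of a vast number of possible arrangements of those 2000 pebbles, so it is "complex" in the sense used above, and it is about as easy to describe as a sequence can be, yet the process that produced it had no foresight at all; the same example also undercuts the first objection's assumed null of element-by-element independence.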
Now, there may be various other ID papers proposing some kind of alternative to CSI that tackle some of these problems, but my point is that these three objections are fatal flaws in Dembski's concept, and that therefore any improvement has to tackle all three. But it would be interesting to see whether there is any disagreement about whether I have his argument right, and about what its flaws are.