There are two versions of Bill Dembski’s CSI metric. One version can be traced to his book No Free Lunch, published in 2002. Let us call that “CSI v1.0”.
Then in 2005 Bill published Specification: The Pattern That Signifies Intelligence, where he includes the version identifier “v1.22”, but perhaps it would be better to call the concepts in that paper CSI v2.0 since, like Windows 8, it has some radical differences from its predecessor and will come up with different results. Some end users of the concept of CSI prefer CSI v1.0 over v2.0.
It was very easy to estimate CSI numbers in version 1.0 and then argue later over whether the subjective patterns used to deduce CSI were independent and not postdictive. Trying to calculate CSI in v2.0 is cumbersome, and I don’t even try anymore. As a matter of practicality, when discussing the origin of life or biological evolution, ID-sympathetic arguments are framed in terms of improbability, not CSI v2.0. In contrast, calculating CSI v1.0 is a very transparent transformation from improbability to information: take the negative logarithm of the probability.
I = -log2(P)
In that respect, I think MathGrrl (whose real identity he revealed here) has scored a point with respect to questioning the ability to calculate CSI v2.0, especially when it would have been a piece of cake in CSI v1.0.
For example, take 500 coins, and suppose they are all heads. The CSI v1.0 score is 500 bits. The calculation is transparent and easy, and accords with how we calculate improbability. Try doing that with CSI v2.0 and justifying the calculation.
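To make the bookkeeping concrete, here is a minimal sketch in Python (purely my own illustration) of the CSI v1.0 arithmetic for the all-heads case:

```python
import math

# CSI v1.0 for 500 fair coins all showing heads:
# the probability of that exact configuration is (1/2)^500,
# and the information is I = -log2(P).
p_all_heads = 0.5 ** 500
csi_v1_bits = -math.log2(p_all_heads)
print(csi_v1_bits)  # 500.0 bits
```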
Similarly, with pre-specifications (specifications already known to humans, like the Champernowne sequences), if we found 500 coins in sequence that matched a Champernowne sequence, we could argue the CSI score is 500 bits as well. But try doing that calculation in CSI v2.0. For more complex situations, one might get different answers depending on whom you are talking to, because CSI v2.0 depends on the UPB and things like the number of possible primitive subjective concepts in a person’s mind.
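For readers unfamiliar with it, the base-2 Champernowne sequence is simply the binary representations of 1, 2, 3, … concatenated together (1 10 11 100 101 …). Here is a small sketch, again only illustrative, of checking a coin sequence against that pre-specification and scoring it in v1.0 terms:

```python
import math

def champernowne_bits(n_bits):
    """First n_bits digits of the base-2 Champernowne sequence:
    binary representations of 1, 2, 3, ... concatenated."""
    digits = ""
    k = 1
    while len(digits) < n_bits:
        digits += format(k, "b")
        k += 1
    return digits[:n_bits]

target = champernowne_bits(500)
observed = target  # suppose the 500 coins (H=1, T=0) match the pre-specification exactly
if observed == target:
    csi_v1_bits = -math.log2(0.5 ** 500)  # same arithmetic as the all-heads case
    print(csi_v1_bits)  # 500.0 bits
```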
The motivation for CSI v2.0 was to try to account for the possibility of slapping a pattern on after the fact and calling something “designed”. v2.0 was crafted to account for the possibility that someone might see a sequence of physical objects (like coins) and argue that the patterns in evidence were designed because he sees some pattern in the coins familiar to him but to no one else. The problem is that everyone has different life experiences, and each person will project his own subjective view of what constitutes a pattern. v2.0 tried to use some mathematics to create a threshold whereby one could infer, even if the recognized pattern was subjective and unique to the observer of a design, that chance would not be a likely explanation for the coincidence.
For example, if we saw a stream of bits which someone claims was generated by coin flips, but the bit stream corresponds to the Champernowne sequence, some will recognize the stream as designed and others will not. How then, given the differing subjective perceptions of each observer, can the problem be resolved? There are methods suggested in v2.0 which in and of themselves would not be inherently objectionable, but then v2.0 tries to quantify how likely the subjective perception is to arise by chance and convolves that calculation with the probability of the objects emerging by chance. Hence we mix the probability of an observer concocting a pattern in his head by chance with the probability that an event or object happens by chance, and after some gyrations out pops a CSI v2.0 score. v1.0 does not involve such heavy calculations regarding the random chance that an observer formulates a pattern in his head, and thus is more tractable. So why the move from v1.0 to v2.0? The v1.0 approach has limitations which v2.0 does not. However, I recommend that when v1.0 is available, use v1.0!
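For reference, and working from memory of the 2005 paper (so treat this as a paraphrase rather than a quotation), the v2.0 specified-complexity measure has roughly the form

χ = –log2[ 10^120 · φ_S(T) · P(T|H) ]

where P(T|H) is the probability of the observed pattern T under the chance hypothesis H, φ_S(T) counts the patterns the observer S could describe at least as simply as T, and the 10^120 factor stands in for the probabilistic resources of the observable universe. The φ_S(T) term is exactly where the observer’s subjectivity gets quantified, and it is the term that makes v2.0 so hard to compute in practice.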
The question of postdiction is an important one, but if I may offer an opinion: many designs in biology don’t require the exhaustive rigor attempted in v2.0 to determine whether our design inferences are postdictive (the result of our imagination) or whether the designed artifacts themselves are inherently evidence against a chance hypothesis. This can be done using simpler mathematical arguments.
For example, if we saw 500 fair coins all heads, would we actually have to consider human subjectivity when looking at the pattern and concluding it is designed? No. Why? We can make an alternative mathematical argument: an all-heads result is sufficiently inconsistent with the binomial distribution for randomly tossed coins that we can reject the chance hypothesis. Since the physics of fair coins rules out physics as the cause of the configuration, we can then infer design. There is no need in this case to delve into the question of subjective human specification to make the design inference. CSI v2.0 is not needed, and CSI v1.0, which says we have 500 bits of CSI, is sufficient.
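To give a sense of just how inconsistent all heads is with the binomial expectation, here is a rough sketch (my own illustration) of the standard-score argument:

```python
import math

# For 500 fair coins, the number of heads has mean n*p = 250 and
# standard deviation sqrt(n*p*(1-p)) ~ 11.2. Observing 500 heads sits
# roughly 22 standard deviations from the mean, far beyond any
# conventional threshold for rejecting the chance hypothesis.
n, p = 500, 0.5
mean = n * p
sd = math.sqrt(n * p * (1 - p))
z = (500 - mean) / sd
print(mean, round(sd, 2), round(z, 1))  # 250.0 11.18 22.4
```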
Where this method (v1.0 plus pure statistics) fails is in recognizing design in a sequence of coin flips that follows something like the Champernowne sequence. Here the question of how likely it is for humans to regard the Champernowne sequence as special in their minds becomes a serious question, and it is difficult to calculate that probability. I suppose that is what motivated Jason Rosenhouse to argue that the sort of specifications used by ID proponents aren’t useful for biology. But that is not completely true if the specifications used by ID proponents can be formulated without subjectivity (as I did in the example with the coins) 🙂
The downside of the alternative approach (using CSI v1.0 and pure statistics) is that it does not include the use of otherwise legitimate human subjective constructs (like the notion of a motor) in making design arguments. Some, like Michael Shermer or my friend Allen MacNeill, might argue that when we say something looks like a motor or a communication system or a computer, we are merely projecting our notions of design, and that the perception of design owes more to our projection than to any inherent design. But the alternative approach I suggest is immune to this objection, even though it is far more limited in scope.
Of course I believe something is designed if it looks like a motor (the flagellum), a telescope (the eye), a microphone (the ear), a speaker (some species of bird can imitate an incredible range of sounds), a sonar system (bat and whale sonar), an electric field sensor (sharks), a magnetic field navigation system (monarch butterflies), etc. The alternative method I suggest will not detect design in these objects quite so easily, since pure statistics are hard pressed to describe the improbability of such features in biology, even though it is so apparent these features are designed. CSI v2.0 was an ambitious attempt to cover these cases, but it came with substantial computational challenges in arriving at information estimates. I leave it to others to calculate CSI v2.0 for these cases.
Here is an example of using v1.0 in biology regarding homochirality. Amino acids can be left- or right-handed. Physics and chemistry dictate that left-handed and right-handed amino acids arise mostly (though not always) in equal amounts unless there is a specialized process (like living cells) that creates them. Stanley Miller’s amino acid soup experiments created mixtures of left- and right-handed amino acids; such a mixture is called racemic, in contrast to the homochiral variety (only left-handed) we find in biology.
Worse for the proponents of mindless origins of life, even homochiral amino acids will racemize spontaneously over time (some half-lives are on the order of hundreds of years), and they will deaminate. Further, when Sidney Fox tried to polymerize homochiral amino acids into protoproteins, they racemized due to the extreme heat; the process created many non-chains, and the chains it did create had few if any alpha peptide bonds. And even in the unlikely event that the amino acids polymerize in a soup, the resulting chains can undergo hydrolysis. These considerations are consistent with the familiar observation that when something is dead, it tends to remain dead and moves farther away from any chance of resuscitation over time.
I could go on and on, but the point is that we can provisionally say the binomial distribution I used for coins also applies to the homochirality in living creatures, and hence we can make the design inference and assert a biopolymer has at least -log2(1/2^N) = N bits of CSI v1.0 based on N stereoisomer residues. One might try to calculate CSI v2.0 for this case, but, being lazy, I will stick to the CSI v1.0 calculation. Easier is sometimes better.
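Here is a minimal sketch of that v1.0 arithmetic; the 100-residue chain length is just an illustrative figure I picked, not a measured protein:

```python
import math

def homochirality_bits(n_residues):
    # Assume each of the N chiral residues is independently left- or
    # right-handed with probability 1/2 under the chance hypothesis;
    # an all-left-handed chain then scores -log2((1/2)^N) = N bits.
    return -math.log2(0.5 ** n_residues)

print(homochirality_bits(100))  # 100.0 bits for a hypothetical 100-residue chain
```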
So how can the alternative approach (CSI v1.0 and pure statistics) detect design in something like the flagellum or the DNA encoding and decoding system? It cannot do so as comprehensively as CSI v2.0, but v1.0 can argue for design in the components. As I argued qualitatively in the article Coordinated Complexity – the key to refuting postdiction and single target objections, one can formulate observer-independent specifications (such as I did with the 500 coins being all heads) by appeal to pure statistics. I gave the example of how the FBI convicted cheaters who used false shuffles even though no formal specifications for design were asserted. They merely had to use common sense (which can be described mathematically as cross-correlation or autocorrelation) to detect the cheating.
Here is what I wrote:
The opponents of ID argue something along the lines of: “take a deck of cards and randomly shuffle it; the probability of any given sequence occurring is 1 out of 52 factorial, or about 1 in 8×10^67. Improbable things happen all the time; it doesn’t imply intelligent design.”
In fact, I found one such Darwinist screed here:
Creationists and “Intelligent Design” theorists claim that the odds of life having evolved as it has on earth is so great that it could not possibly be random. Yes, the odds are astronomical, but only if you were trying to PREDICT IN ADVANCE how life would evolve.
Ah, but what if the cards dealt from one random shuffle are repeated by another shuffle? Would you suspect Intelligent Design? A case involving this is reported on the FBI website: House of Cards
In this case, a team of cheaters bribed a casino dealer to deal cards and then reshuffle them into the same order in which they were previously dealt (no easy shuffling feat!). The cheaters would arrive at the casino, play the cards the dealer dealt, and secretly record the sequence of cards dealt out. Thus, when the dealer re-shuffled the cards and dealt them out in the exact same sequence as the previous shuffle, the team of cheaters knew what cards they would be dealt, giving them a substantial advantage. Not an easy scam to pull off, but they got away with it for a long time.
The evidence of cheating was confirmed by videotape surveillance because the first random shuffle provided a specification to detect intelligent design of the next shuffle. The next shuffle was intelligently designed to preserve the order of the prior shuffle.
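As a sketch of how the prior deal serves as the specification (the card orders below are placeholders, not data from the FBI case), one can simply check whether the next deal reproduces the previous one and score the match in v1.0 terms:

```python
import math

# If a second shuffle reproduces the first 52-card order exactly, the
# probability of that happening by chance is 1/52!, which in CSI v1.0
# terms is log2(52!) ~ 226 bits.
first_deal = list(range(52))   # order recorded from the first shuffle (placeholder)
second_deal = list(range(52))  # order dealt after the "re-shuffle" (placeholder)

if first_deal == second_deal:
    csi_v1_bits = math.log2(math.factorial(52))
    print(round(csi_v1_bits, 1))  # ~225.6 bits
```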
Biology is rich with self-specifying systems like the auto-correlatable sequence of cards in the example above. The simplest example is life’s ability to make copies of itself through a process akin to quine computing. Physics and chemistry make quine systems possible, but simultaneously improbable. Computers, as a matter of principle, cannot exist if they have no degrees of freedom permitting high improbability in some of their constituent systems (like computer memory banks).
We can see that the correlation between a parent organism and its offspring is not the result of chance, and thus we can reject the chance hypothesis for that correlation. One might argue that though the offspring (the copy) is not the product of chance, the process of copying is the product of a mindless copy machine. True, but we can then further estimate the probability of randomly implementing the particular quine computing algorithms that make it possible for life to act like a computerized copy machine. The act of a system making copies is not in and of itself spectacular (salt crystals do that), but the act of making improbable copies via an improbable copying machine? That is what is spectacular.
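For readers unfamiliar with the term, a quine is a program whose output is its own source code. The two-line Python example below (a standard textbook illustration, not anything specific to biology) shows the sense in which copying can be done via computation rather than via a simple template like a crystal:

```python
# The two lines below print themselves verbatim, which is the sense in
# which a quine "copies via computation".
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```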
I further pointed out that biology is rich with systems that can be likened to login/password or lock-and-key systems. That is, the architecture of the system is such that the components are constrained to obey a certain pattern or else the system will fail. In that sense, the targets for individual components can be shown to be specified without having to calculate the chances that the observer is randomly projecting subjective patterns onto the presumably designed object.
That is to say, even though there are infinite ways to make lock-and-key combinations, that does not imply that the emergence of a lock-and-key system is probable! Unfortunately, Darwinists will implicitly say, “there are an infinite number of ways to make life, therefore we can’t use probability arguments”, but they fail to see the error in their reasoning, as demonstrated with the lock-and-key analogy.
This simplified methodology using v1.0, though not capable of saying “the flagellum is a motor and therefore is designed”, is capable of asserting “individual components (like the flagellum assembly instructions) are improbable, hence the flagellum is designed.”
But I will admit that the step of invoking the login/password or lock-and-key metaphor is a step outside pure statistics, and making the design argument in the case of the login/password and lock-and-key metaphors more rigorous is a project for future study.
Mathgrrl, though we’re opponents in this debate, he strikes me a decent guy
The fact that life makes copies motivated Nobel Laureate Eugene Wigner to hypothesize a biotonic law in physics. That hypothesis was ultimately refuted. Life does not copy via a biotonic law but through computation (and the emergence of computation is not attributable to physical law in principle, just as software cannot be explained by hardware alone).