Barry Arrington has started a new thread over at UD, where he argues that it is perfectly okay if CSI (complex specified information) cannot be quantified.
I responded, in a comment, that he seems to be making the case that CSI is not a scientific concept. It occurred to me that folk here might want to have a discussion on what is required of a concept, for it to be part of a scientific theory.
It seems to me that, at the very least, the concept has to be defined precisely enough that one can form testable hypotheses. Moreover, these hypotheses need to be testable by independent researchers, and the tests need to provide some kind of reliability (i.e. reasonable agreement between results obtained by independent researchers). Perhaps that’s weaker than quantifiability. However, I think even that weaker requirement is a problem for CSI, as it is currently “defined”.
The topic is open for discussion.
Another goofy post from Barry. He doesn’t even get that he’s undermining the arguments his own side is making.
It’s not enough for CSI to be a meaningful but subjective concept. Dembski’s entire point in introducing CSI was to establish an objective indicator of design. (He failed, but that’s another story). Without an objective criterion, ID is reduced to “seems designed, therefore is designed” arguments.
Also, Barry is confused about what quantification means. Critics aren’t demanding the CSI value of Mt. Rushmore to a precision of 30 decimal places. We would be quite satisfied with the following:
1) A rigorous demonstration that any object with CSI greater than some fixed value Z must be the product of design, and
2) A rigorous demonstration that Mt. Rushmore’s (or any other object’s) CSI is greater than Z.
It doesn’t matter whether the CSI is Z+1, 10Z, 1,000Z or Z raised to the Zth power. As long as it’s greater than Z, then the object must be designed — but only if steps 1 and 2 are satisfied.
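If those two demands were ever met, the detector itself would be almost trivial to write. Here is a minimal sketch, assuming a hypothetical `csi_bits` function and a threshold `Z` — neither of which the ID side has actually supplied:

```python
# A sketch of the detector critics are asking for. Nothing here exists
# as working code in the ID literature: csi_bits() is a hypothetical
# oracle, and justifying Z is exactly step 1 of the demand above.

Z = 500  # roughly Dembski's "universal probability bound", in bits

def csi_bits(obj):
    # Step 2 requires a reproducible way to compute this for a real
    # object (Mt. Rushmore, a flagellum, ...). No such method exists.
    raise NotImplementedError("no agreed procedure for computing CSI")

def inferred_designed(obj):
    # Step 1 requires proof that exceeding Z entails design; granting
    # that, the exact value (Z+1, 10Z, Z**Z) is irrelevant.
    return csi_bits(obj) > Z
```

The point of the sketch is that the comparison is the easy part; everything contentious is hidden inside `csi_bits` and the justification of `Z`.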
We still await a successful argument of this kind. Meanwhile, Barry helpfully shoots his allies in the back.
Dembski’s description of CSI makes it very clear that he considers it a quantitative value with units of bits.
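For reference, the quantity Dembski defines in his 2005 paper “Specification: The Pattern That Signifies Intelligence” is explicitly numerical (quoting from memory, so check the paper):

```latex
% Dembski (2005): the specified complexity of a pattern T under a
% chance hypothesis H is
\chi = -\log_2\!\bigl[\,10^{120}\cdot\varphi_S(T)\cdot P(T\mid H)\,\bigr]
% where \varphi_S(T) counts the "specificational resources" (roughly,
% the number of patterns at least as simple as T) and 10^{120} bounds
% the number of bit operations available in the observable universe.
% A design inference is supposed to be licensed when \chi > 1.
```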
Whatever Barry is on about, it isn’t CSI.
Well, they don’t necessarily have to have a CSI meter with numeric output. They do need to have a method of ranking CSI and detecting whether one instance of CSI is larger than, smaller than, or equal to some other instance. This does, of course, immediately lead to numbering instances of CSI, but the numbering is a little artificial.
However, the ability to compare is required.
That’s certainly one option for a metric, but it doesn’t correspond to at least some of the variants of CSI discussed at UD.
The first thing the IDCists need to do is define the term. It would also be interesting to get them to explain why they don’t all use Dembski’s definition.
As far as I can tell, Dembski made a good-faith effort to quantify an intuition. At the start of the effort, “design” was kind of like Potter Stewart’s “obscenity” – not defined, but he knows it when he sees it.
So he asked himself, HOW does he know it? What is there about Designed objects that make their nature so clear to him? What do Designed objects seem to have in common, that give them away?
The underlying problem is the same in both above cases – the essence of both design and obscenity is INTENT, not content. What gives them away is the specification, the notion that such objects simply could not have occurred unless there was a specification. And specifications require intent.
Dembski may not have realized he was assuming his conclusions, because his conclusions are so obviously true in his mind. To him, THAT life is intentionally designed is beyond any rational doubt; showing this to anyone else, however, requires some useful metric. But that effort founders on quantifying intent. There is no “amount of intentional”: something is either intentional or it isn’t.
In the case of obscenity, this is a bit simpler – you can ask whoever created it what his intention was, or it might be clear from his context. That is, how it was presented, target audience, distribution channels, etc. In the case of Design, where there may not be any entity to have any intent, this becomes circular – he’s using the entity to establish the intent, and the intent to establish the entity.
I think Barry has conceded that CSI is not quantifiable. And in that case, the question is whether it’s meaningful at all. Barry is fighting a rearguard action to defend the notion that CSI exists at all. The irony is that his assumption that it exists will be accepted only by those who assume it exists! The circularity is inherent.
What separates design from non-design is neither content nor intent, but history.
History is what’s missing from ID. The history of the object, without which one cannot calculate probabilities. Darwin knew this. That’s why he prefigured Dembski’s Explanatory Filter by saying that any feature that was not the result of small, incremental change would ruin his theory. That’s why creationism relied on gaps for as long as possible.
Paley made a perfectly sound argument that living things are not the result of any non-directed process. What he failed to imagine was that there could be a natural process doing the directing. (I downloaded Natural Theology and have been reading it. It’s vastly better than any current ID argument. Not different, just more honest and more literate.)
That bit of imagining had to wait for Darwin and Wallace. The fact that two isolated people came up with the same idea at the same time tells me that it would have happened anyway, even if fifty years later.
When calculated as bits per protein coding string, CSI is necessarily post-hoc. Given two “new” coding strings, one of which codes a protein and one of which doesn’t, you cannot distinguish them by any method other than trying them (and chemistry is going to be faster than simulations of chemistry).
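A back-of-envelope calculation shows why. Under the uniform-sampling assumption, the bit count depends only on sequence length, so the functional and non-functional strings score identically. An illustrative sketch (my own, not anyone’s published code):

```python
import math

# Under a uniform sampling assumption, every specific sequence of a
# given length gets the same "bits" score, so the score alone cannot
# separate the string that folds into a working protein from the one
# that doesn't. Trying them (chemistry) remains the only test.

def uniform_bits(length, alphabet=20):
    # -log2 of the probability of one specific sequence
    return length * math.log2(alphabet)

coding = uniform_bits(100)      # a 100-residue string that works
noncoding = uniform_bits(100)   # one that doesn't
assert coding == noncoding      # identical score: the metric is blind
print(round(coding, 1))         # prints 432.2 (bits) either way
```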
The problem is going to be worse for regulatory sequences. Worse yet when you consider the net consequence for reproductive success.
So Dembski’s “specification” is always post-hoc. There is no royal road to biological design. It’s evolution or the Easter Bunny.
Life is the ultimate Edison laboratory, where every possible combination can be tried and tested. To borrow a science fiction meme, it’s the lathe of heaven. Or to mix metaphors, it’s the wheel that grinds slowly, but exceedingly fine.
Termites would likely place a higher value on the shack than they would on the cathedral.
The problem with any “measure” of CSI is that it lies in the eye of the beholder. Cats and dogs like things that humans don’t. Why is what cats and dogs value any less significant than what a human would value?
Different groups of people from different cultures and vastly different eras in human history see things of “value” or “significance” that other groups never see. Who saw significance in the shapes of constellations? Who decided which set of stars delineated which shape and which name? What does it mean to attach CSI to a constellation in the heavens?
One can stare into a thicket of bushes and branches, or at a mottled pattern in a carpet, or at a bunch of clouds, and see all kinds of “faces” and “creatures.” Those patterns are meaningful only to the human seeing them.
Dembski, Abel, et al., have attempted to elevate CSI to a mathematically quantifiable measure by alluding to arrangements of atoms and molecules randomly sampled with a uniform sampling distribution and placed in “specified” arrangements that are alleged to be infinitesimally probable. Who decides the “significance” of such arrangements? Dembski or Abel? Why? What criteria are used to place “value” or “significance” on an arrangement of atoms and molecules? Are they “significant” just because they are associated with some particular living creature?
What if one examines such molecules and doesn’t know they have anything to do with a living organism? Suppose urea had been discovered before it was known to have anything to do with living organisms? What is the “significance” of the arrangement of carbon and hydrogen in benzene? What CSI does it have?
CSI, no matter how one tries or doesn’t try to connect a number to it, is a completely subjective concept that permits endless arguments over its significance. That is why it is so attractive to sectarians who spend their lives arguing passionately over the meanings of meanings.
What you’ve called history, I’ve usually called context. By which I mean more than just history, but additionally the background information, interrelationships with other objects, the nature of the creator (his motivations, intentions, needs, limitations, tools, procedures, etc.)
In any case, the essential point is that Design cannot be inferred from examination ONLY of the object itself. Design is best considered a process, and not the result of a process.
Now Axel is grasping at straws in a comment to the UD thread, where he writes:
And yet he debunks his own point in the very next paragraph, where he writes:
Thanks for a good discussion, and please keep it up.
Having formulated general relativity (and made a few boners while doing so), he was on pins and needles waiting for the results of the solar eclipse observations.
It’s amusing to watch ID proponentsists making fun of string theory for being untestable.
I agree with you, but Dembski does not. From the paper I linked to above:
The CSI that requires context is not the true CSI.
Oh, I agree. My point was that this contextual knowledge is required, and Design is impossible to identify without it. But Dembski could not afford to admit this, and indeed was required to deny it, because he HAS no contextual knowledge. I think he got tired of being asked to calculate the CSI of objects he could not identify, since he’s been silent on this for a while now.
It occurs to me that the same questions could be asked of “intelligence”. I guess we can order people comparatively by their abilities at controlled tasks but is “intelligence” a meaningful scientific concept?
I’d say there’s a difference between “meaningful” and “well operationalized”. Certainly a dog can process more input than a nematode, consistent with the scope of adaptive behaviors open to the two. But it’s not easy to devise tests appropriate to the needs of dogs and nematodes that are not biased by human needs.
To steal from Arthur C Clarke, intelligence, when used by a Design advocate, is indistinguishable from magic.
1) A rigorous demonstration that any object with CSI greater than some fixed value Z must be the product of design,
And that’s the real problem. Perhaps we should not discuss it in this thread, but I have argued elsewhere that Dembski’s CSI is an OK measure (for simple models) of a genotype being so high on a fitness scale that it could not have gotten there by pure random mutation. And I have also argued there that it could have gotten there by natural selection, and that his Law of Conservation of Complex Specified Information is both unproven and of the wrong form to establish that the CSI must have come from Design (or cannot have come from natural selection).
In my view that’s the problem with the Design Inference that uses CSI. The problem is not in the definition of CSI, it’s in the assertion that it could not have arisen by natural selection.
Meanwhile, back to debates about whether CSI is OK. That’s what this thread is about, and I respect that. But I just wanted to blurt this out.
But the question of context or whether a sequence could be the result of incremental change is the central question, the only question worth asking, and CSI can’t address it.
CSI itself can’t address it. But when you add in Dembski’s Law of Conservation of Complex Specified Information, CSI shows that there must be Design. If, that is, Dembski’s LCCSI is provable and is formulated so as to show that natural selection cannot get you to CSI starting from no-CSI.
But alas for Dembski’s argument, the LCCSI is both not-proven and formulated in a way that changes the specification between the Before and the After. So his Design Inference doesn’t work.
I have been arguing that this is where the hole(s) in Dembski’s argument are — not in the alleged meaninglessness of CSI. I think that, in simple models of evolution, CSI is OK. Leslie Orgel was not silly when he introduced the concept of Specified Information.
Anyway, most people here seem to be convinced that the weakness of Dembski’s argument is in the concept of CSI itself. You’re wrong about that but I should respect the topic of the OP and not belabor this point here.
ID advocates have to argue that functional space is so sparse that it can’t be bridged.
Their metaphor for search space is a combination lock with only one correct combination. Biologists would assert that there are vastly more correct combinations. So many, in fact, that most changes to the combination have no effect at all.
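The two pictures can be put side by side with a toy calculation. The 1-in-10^11 figure below is borrowed from Keefe and Szostak’s ATP-binding protein experiment and is purely illustrative:

```python
import math

# Toy contrast between the ID "combination lock" picture and the
# biologists' picture, using illustrative numbers.

L, A = 100, 20                   # 100-residue protein, 20 amino acids
total = A ** L                   # size of the whole sequence space

# Lock picture: exactly one functional combination
lock_fraction = 1 / total

# Biologists' picture: suppose (as in Keefe & Szostak's experiment)
# about 1 in 10^11 random sequences performs the function -- sparse,
# but astronomically denser than the single-combination lock.
bio_fraction = 1e-11

# How many orders of magnitude separate the two pictures?
print(math.log10(bio_fraction / lock_fraction))  # ~119
```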
So there is a hypothetical universe in which Dembski’s argument makes sense. It just doesn’t seem to be this one. I agree that it is important to discuss why CSI doesn’t work in the real world, and that reason may not be that it can’t be calculated.
To me it seems that it is wrong because it assumes its conclusion. It tries to prove that evolution can’t produce complex things, but starts with the assumption that evolution can’t produce complex things. It’s the same problem with the Explanatory Filter.
Certainly there are legitimate research programs that use a mathematical definition of “information” or “functional information” in order to quantify the changes that take place in molecular structures that are involved in the establishment of some specified function. There is nothing wrong with attempting to quantify such ideas in the search for objective measures of evolving structures. While I personally might question the use of the word “information” as a concept that triggers misconceptions in this context, it’s no worse than the physicist’s use of words like “color” or “up” or “down” or “top” or “bottom” when referring to quarks.
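One concrete example of such a program is the “functional information” measure of Hazen, Szostak and colleagues. A minimal sketch (my own illustration, not their code):

```python
import math

# "Functional information" (Hazen et al., 2007): given a function and
# an activity threshold, I = -log2(F), where F is the fraction of all
# sequences meeting the threshold. Note that the measure is defined
# only relative to a chosen function: the "specification" is supplied
# by the investigator, not read off the molecule.

def functional_information(n_meeting_threshold, n_total):
    F = n_meeting_threshold / n_total
    return -math.log2(F)

# Illustrative: if 1 sequence in 4096 in a library clears the
# activity threshold, the function carries 12 bits.
print(functional_information(1, 4096))  # prints 12.0
```

Unlike CSI, this quantity is measurable in principle because the function, the threshold, and the sequence population are all stated up front.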
We have been through the writings of Abel and Dembski a number of times on this site and on PT. These writings turn out to be extremely superficial in their use of logarithms and the label “information” as they try to make something trivial appear to be deep and profound. The use of such concepts also contributes to misconceptions and confusion.
What is more, not one of the ID/creationist camp followers – who never hesitate to copy/paste this material as “powerful arguments” for ID – appears to be able to comprehend, let alone articulate, what any of it means. Any attempt to get them to do a calculation reveals that they don’t even understand high school algebra. Yet, to an ID/creationist camp follower, a logarithm is an advanced math concept that brings out the jealousy of “evilutionists” who are too stupid to grasp what only “geniuses” like Dembski and Abel can.
In the 1980s, the ID socio/political movement morphed out of “scientific” creationism both as an attempt to get around the courts and to appear “scholarly.” I have a strong suspicion that the leaders of the ID/creationist movement recognize how their followers use the “theoretical works” of ID in attacking secular science. They certainly have had plenty of feedback from members of the science community; they know, yet they keep encouraging it.
What the ID/creationist leaders can’t achieve by subjecting themselves to the processes of peer-reviewed science they attempt to achieve through political means by inspiring their awed followers to wear down the opposition; poor rubes who don’t know that the “impressive weapons” they carry into battle are made of papier-mâché and who don’t recognize when they have had their arms and legs cut off.
Apparently Joe G is still reading and he has a little cry over at his blog:
“Earth to Neil- your position, evolutionism, cannot be quantified. There isn’t anything that is defined precisely enough to allow for forming a testable hypothesis.” (That’s the non-sweary bit)
Oh Noes, Joe!
And unfortunately he’s now showing his misunderstanding by committing a
Perhaps we should thank Joe for publicizing our thread.
Neil, unfortunately Joe’s readership is small, and largely looking for comedy.