I see that in the unending “TSZ and Jerad” thread, Joe has written in response to R0bb:
Try to compress the works of Shakespeare - CSI. Try to compress any encyclopedia - CSI. Even Stephen C. Meyer says CSI is not amenable to compression.
A protein sequence is not compressible - CSI.
So please reference Dembski and I will find Meyer’s quote.
To save Robb the effort: using Specification: The Pattern That Signifies Intelligence by William Dembski, which is his most recent publication on specification, turn to page 15, where he discusses the difference between two bit strings, (ψR) and (R). (ψR) is the bit stream corresponding to the integers in binary (clearly easily compressible). (R), to quote Dembski, “cannot, so far as we can tell, be described any more simply than by repeating the sequence”. He then goes on to explain that (ψR) is an example of a specified string whereas (R) is not.
This conflict between Dembski’s definition of “specified” which he quite explicitly links to low Kolmogorov complexity (see pp 9-12) and others which have the reverse view appears to be a problem which most of the ID community don’t know about and the rest choose to ignore. I discussed this with Gpuccio a couple of years ago. He at least recognised the conflict and his response was that he didn’t care much what Dembski’s view is – which at least is honest.
Yes, it’s clear that Dembski and most ID advocates are quite confused about the relationship between Kolmogorov complexity and the bogus concept of CSI. In my paper with Elsberry we point out that Dembski associates CSI with low Kolmogorov complexity (highly compressible strings). But strings with low Kolmogorov complexity are precisely those that are “easy” to produce with simple algorithmic procedures (in other words, likely to occur from some simple natural algorithm). By contrast, organismal DNA (for example) doesn’t seem that compressible; experiments show long strings of organismal DNA are often poorly compressible, say only by about 10% or so. This is, in fact, good evidence that organismal DNA arose through a largely random process.
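The contrast being argued over is easy to demonstrate with an off-the-shelf compressor. Compressed size is only a crude upper bound on Kolmogorov complexity, and the two inputs below (a (ψR)-style counting sequence and random bytes) are illustrative choices, not data from anyone's paper:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size (lower = more compressible)."""
    return len(zlib.compress(data, 9)) / len(data)

# (psi R)-style string: the binary expansions of 1, 2, 3, ... concatenated,
# stored one character per bit.  Highly patterned, so it compresses well.
psi_r = "".join(format(n, "b") for n in range(1, 2000)).encode()

# Random bytes of the same length: effectively incompressible; zlib's
# output is in fact slightly larger than the input.
random_bytes = os.urandom(len(psi_r))

print(f"psi_R ratio:  {compression_ratio(psi_r):.2f}")
print(f"random ratio: {compression_ratio(random_bytes):.2f}")
```

On Dembski's page-15 usage, (ψR) counts as specified precisely because it admits a short description; the "incompressible = specified" reading runs the other way.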
A rather odd response from Joe.
Dembski’s paper is quite clearly and explicitly about what “specified” means when used in the context of complex specified information. It is what the whole paper is about.
Since ID advocates like to compare the genetic code to computer code, one wonders why the designer failed to implement CRC checking and a bit of redundancy in the code. The actual error correcting machinery is woefully inadequate if the goal is something other than evolution. I can copy a computer file through a million generations without error, even though media are imperfect.
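The point about copying a file through a million generations without error rests on checksumming: media are imperfect, but a mismatch is detected and the copy retried. A minimal sketch using CRC-32 (the stored sequence and the flipped bit are arbitrary examples):

```python
import zlib

# A stored "sequence" and its checksum, recorded at write time.
data = bytearray(b"ATCGGCTA" * 64)
checksum = zlib.crc32(bytes(data))

# An imperfect medium flips a single bit during copying.
data[100] ^= 0x01

# On read-back, the checksum mismatch exposes the corruption, so the
# copier can re-read rather than propagate the error.
corrupted = zlib.crc32(bytes(data)) != checksum
print(f"corruption detected: {corrupted}")
```

CRC-32 detects every single-bit error by construction, which is exactly the redundancy the comment notes is missing from the genetic machinery.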
Now if the mutation system is in itself part of a designed adaptive system, as Shapiro argues, that implies the designer relies on the landscape having gradients.
Goodness it is difficult having a debate with Joe – the issue under debate seems to drift all over the place. This is the latest:
Leaving aside the fact I only just joined the debate and he has never referred me to anything, I was demonstrating that Dembski defines “specified” (as in CSI) one way and he defines it another. For Joe to repeat his definition doesn’t contribute to this.
Joe, you need to take up the issue of compressibility with William Dembski, not me. I am only pointing out that your definitions are incompatible.
Joe
Uhm – Joe, have you read Dembski’s paper? (I have a feeling you will avoid answering this.)
You’ve noticed! Maybe Joe and Mung should have a natter, since they operate in the same milieu. Mung reckons a string of all 1’s has high CSI; Joe reckons completely random sequences do (given that they are the least compressible). Maybe they are both right, and any string can have high CSI? Rendering CSI a handy term for ‘it’s-a-string’?
Joe
Well that is great. Can you then explain what Dembski does mean by specified if he does not mean compressible?
I am not sure what I am ignoring of what you wrote. I accept that you and many other ID proponents define “specified” in such a way that specified things are incompressible. My single point is that this definition conflicts with the most recent work on the subject by the leading ID thinker.
I didn’t want to get into a silly dispute over what is meant by “referring someone to something”. It seems to me that just because I have quoted something you wrote it doesn’t mean you referred me to it – but who cares?
Joe
That’s true. So what? We were discussing different concepts of specification (yours and Dembski’s). You and Dembski agree that specification is only one part of CSI. Where you disagree is whether something that is specified is compressible. Or do you think that Dembski says in the paper that specified things are not compressible? I would be truly interested to know where in the paper you think he says this.
I am more tolerant of Orgel’s concept of specified information than Jeff is. In a simple genetic algorithm model there is some relevant scale, and Dembski’s use of it to define CSI is to define a region of genotypes far enough out on that scale that pure mutational processes would never once, even in the whole history of the Universe, produce a value that extreme.
But what scale? The relevant one (which Dembski does allow) is fitness, or at any rate one which expresses a degree of adaptation. We could instead define an arbitrary one such as the number of purple spots on the organism, but that would be of no interest. The whole purpose of Dembski’s Design Inference argument is that he thinks that there is a conservation theorem that prevents natural selection from getting organisms as far out on the fitness scale, as highly adapted as they are in real life. (And he’s wrong about the theorem showing that, as Jeff and I have argued.)
Dembski’s Kolmogorov argument is just one other possible scale — we rank organisms according to the smallness of the computer program that can produce them. That seems to have nothing to do with fitness or adaptation. An organism that was a perfect sphere might be the winner, though it would not be well-adapted. So I think it is as silly as using the number of purple spots as the specification.
At that point in Dembski’s argument he is extremely fuzzy about why he wants to use this criterion, what it accomplishes. For example he simply has no discussion of the probability that a random genotype will be a program that does the job.
So I think the Kolmogorov Complexity criterion Dembski invokes should just be tossed, along with the Purple Spots Criterion.
Bless you, Mark, for trying. But why anyone would want to try to reason with Joe, a person who seems to be irrational, is beyond my understanding.
You are right. I feel a fool for trying.
Mark,
As long as the creationists have Joe, Mung, KF, UB and gpuccio on their side, we’re guaranteed many more Dover rulings! 🙂
I think I can understand the problem here. What Joe and Dembski are both doing is looking at the object in question and deciding whether it was Designed. If the answer is yes, then it must have high CSI, otherwise the CSI must be substantially lower.
Now here, we start to run into problems. Some objects which they regard as obviously Designed are very difficult to compress, while others compress easily and significantly. There doesn’t seem to be a consistent pattern with respect to compressibility.
Nor can we simply discard compressibility as a component of specification, because if we do, what remains is all too obviously arbitrary and subjective. The fact is, the specification cannot be determined from examining an object, since there is no way to know whether the object meets the specification. So Joe and Dembski are like Justice Potter Stewart trying to specify pornography. They know it when they see it, but the ONLY thing multiple Designed objects have in common is Joe’s (or Dembski’s) determination based on what they decide to believe about it.
And so here we are. Joe realizes that high compressibility can’t be the measure of specification, because far too many simple, repetitive, clearly non-Designed objects compress very tightly. But Dembski realizes that low compressibility can’t be it either, because the genuinely random and patternless cannot be compressed, and THAT is surely not Designed either.
And if compressibility is orthogonal to specification, what DO we look at? There must be SOME identifiable and measurable hallmarks of Design, or else we’re left only with subjective preference. Like, you know, religion.
Flint,
It’s even worse than that. An object can conform to multiple specifications, in which case it simultaneously possesses multiple CSI values, all equally valid:
One common test of creativity in children is to ask them how many different uses they can think up for some object, like a brick. Some children can think of a dozen, some of them quite creative. The number of uses to which bricks have been put is probably quite large. Similarly, you can probably find half a dozen or more objects used for purposes other than what was probably originally intended (chairs used as doorstops, coins used as shims, whatever) around your home in a few minutes.
The very idea of CSI lacks any real-world referent.
So the point was they are trying to paste post hoc rationalizations onto foregone conclusions derived from religious precepts, to make them smell scientistical. They can hardly help but know this.
Per Dembski’s muddled terminology (emphasis added):
Joe:
So when you say “Try and compress x – high CSI”, you aren’t linking CSI with compressibility? The point is that ‘random’ strings – with no pattern at all – are the least compressible. Of course they are not CSI, because they are random. But they are minimally compressible. So CSI lies in that nice middle ground between minimally compressible and non-random? It’s not so much that we ignore what you wrote, it is just incoherent.
You tell us, sunshine! It’s your (ID’s) bloody concept! You claimed to generate it by making a string of 1’s. Elect one of your number to give a coherent presentation of it on which you all agree.
Allan – I wish you the best of luck with this.
I have a theory that Joe and Mung are really quite rational and are doing this as a kind of cruel tongue-in-cheek wind up.
The simple fact is you can’t do a useful probability calculation without assuming something about the history and context of the sequence.
By themselves, all sequences are equally probable.
What leads us to suspect that some are improbable is their usefulness. Usefulness seems to be associated with the term specification.
There is an assumption not backed up by evidence that usefulness is unimaginably rare, and that useful sequences have no close neighbors. This is contradicted by the existence of alleles, by the work of Lenski and by the work of Thornton.
ID therefore is assuming its conclusion. It is trying to prove that current configurations could not have been reached via stepping stones by invoking probability calculations that depend on there being no stepping stones.
Either they are, or I am! 🙂
Interesting that, regardless of what is said about CSI, no-one pipes up and tells a fellow-IDer that they have it all wrong. Yet skeptics of this hard-to-pin-down property do nothing but misrepresent, distort, lie, equivocate, bluff and bluster … well, everyone needs a hobby!
That means Dembski is treating specified complexity as a quantity, not a boolean.
Dembski fails at the get-go:
If we see objects lined up the same way and, not knowing how the pattern arose, we then assume a uniform probability distribution, we conclude design. But they could just as easily be molecules in a crystal aligned in the same direction. It seems you do have to know “how they arose”.
Dembski’s problem seems pretty obvious. He KNOWS that his Designer did it, but he has no way to demonstrate this. No mechanisms at all. So he has to work backwards, showing that his Designer MUST have done it because there are no possible alternatives. And to do this, he must eliminate a mechanism widely recognized as producing such Designs through normal operation. What he can NOT do, under any circumstances, is concede that the process visibly and inexorably producing such Designs as we watch, is even possible.
So if he should happen to accidentally admit that a specification PRECEDES a Design, which it must do, that means knowledge of an object’s history is necessary. And he has no history.
So where Dembski fails is BEFORE the get-go. He assumes his conclusions out of religious necessity. These foregone conclusions are not negotiable, questionable, even examinable. They are given. They are also false. So the challenge is to demonstrate that a falsehood is true because it MUST be true, lest Dembski’s faith be misguided.
I’m sure he’s sincere in his efforts. My question is whether such resounding and repeated (and obvious) failures CAN be visible to him. So far, he has resolutely ignored all critics.
Joe is apparently defining SC as a quantity, but CSI as a Boolean. That would make his first statement incoherent as he would be comparing apples and oranges, but that seems to be his understanding.
Dembski’s definition doesn’t involve function. Presumably, the “separate functional system” would be equivalent to a semiotic agent’s description. The problem is that one can never know that a given sequence is non-compressible. A sequence may appear non-compressible (random), but have a simple description beyond our knowledge. In that situation, you would initially conclude design, but when provided the ‘key’, discover that the initial conclusion was a false positive.
You are assuming that Joe is capable of understanding. I remain unconvinced that he isn’t an undergraduate Markov text generator that has been allowed to run amok.
Yes, yes, I’ll see myself to Guano….
So what are the units for SC? What are the units for CSI? This statement seems to indicate that CSI is Boolean.
So, CJYman is the primary source for the definition CSI? Does Dembski define CSI? If so, where?
Zachriel,
This is probably the only time I’ll ever defend a statement of Joe’s, but I think that in this instance his statement is at least coherent. Whether it comports with Dembski’s intent, I have no idea.
Joe isn’t saying that CSI is a boolean. He’s saying that SC and CSI are commensurable quantities. An SC value below a certain threshold is not CSI, while a value of SC above that threshold is CSI.
It’s like saying that a speed below 65 mph is not an illegal speed, while a speed above 65 is an illegal speed. They’re measured in the same units, but one speed qualifies as illegal while the other doesn’t.
Okay.
But then, he says this:
He may mean they have the same calculation of value, but that wouldn’t make them synonymous. It would make CSI a subset of SC.
Shorter Mung: Compressible, might be CSI.
Shorter Joe: Non-compressible, required for CSI.
Zachriel,
Yep. But at least he got one statement right. That’s progress.
Except that you offered it as a definition. This is easy to resolve, though. Just provide a clear and unambiguous definition of CSI.
Asking nicely for a definition and some example calculations worked so well the last time it was tried, after all.
gpuccio shows how to calculate dFSCI in practice.
Finally, a definitive recipe we’ve all been waiting for!
He makes a good point about the relevance of compressibility to biological strings, though then rather blatantly ‘smuggles in’ a function relating to the existence of the transcription/translation system. The total dFSCI is that of the string itself plus the ‘decoding’ system. So there can never be translated strings that do not have dFSCI, whatever they contain, since they go through the ribosome/mRNA/tRNA/aaRS system! Any individual string’s complexity and functionality is utterly dwarfed by that of the ‘system’.
By analogy, all of Lizzie’s organisms have high dFSCI because they run inside a complex, designed computer, and are handled by a program that is more complex than they are … which I know is how many ID-ers like to play it – you can’t look at evolution within the system without first accounting for the system – but it does seem decidedly fishy to me. You can’t investigate accounting without explaining accountants?
Yes. The probability of each event can be used to calculate entropy. For instance, the toss of a weighted coin has less than one bit of entropy.
Yes. It’s part of the basic definition: entropy is the negative of the sum, over all events, of the probability of each event times the log of that probability. The calculated entropy, then, depends on our knowledge of the events. If we know nothing, each symbol may be considered to have equal probability. If we know that the message represents a text in English, then, because some letters are more common than others, the entropy is less than it would be otherwise. Indeed, human observers can often guess the content of messages even when more than half the letters are missing. (See Sajak & White, W h – – l o f F – – – – n -.)
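That definition is short enough to compute directly. A minimal sketch (the 90/10 coin weighting is just an example):

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)) over events with nonzero probability, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

fair = shannon_entropy([0.5, 0.5])       # exactly 1 bit
weighted = shannon_entropy([0.9, 0.1])   # less than 1 bit
print(f"fair coin:     {fair:.3f} bits")
print(f"weighted coin: {weighted:.3f} bits")
```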
Heh.
Well, that establishes that there are conflicting definitions.
We only gave the thread a cursory view, but from what we did read, it was presumably Dembski’s definition; a long sequence which has a simple description (the function), but is unlikely due to chance alone (a uniform probability distribution).
Turns out that the planets keep moving whether we want them to or not.
From your question, most people would assume you are referring to five independent binary digits, but if the next symbol were “9”, then that assumption would be shown to be in error. They might then assume they are decimals digits, but if the next symbol were “a”, then that assumption would also be shown to be in error.
In an engineering context, it is generally assumed binary bits are independent, and are then subject to lossless compression after the fact.
So your answer to Dembski’s rhetorical question, “Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?” is no.
That seems to contradict what you just said. Is CSI necessarily non-compressible?
Also, we can never know that a given sequence is non-compressible. A sequence may appear non-compressible (random), but have a simple description beyond our knowledge. In that situation, you would initially conclude design, but when provided the ‘key’, discover that the initial conclusion was a false positive.
χ = –log2[10^120 · ϕS(T) · P(T|H)]
Dembski, Specification: The Pattern That Signifies Intelligence 2005.
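For what it's worth, the formula as published is directly computable once ϕS(T) and P(T|H) are supplied. A sketch using made-up toy inputs (ϕS(T) = 1 and a 500-bit uniform chance hypothesis are invented for illustration, not values from the paper):

```python
import math

def chi(phi_s, p_t_given_h):
    """chi = -log2(10^120 * phi_S(T) * P(T|H)), per Dembski (2005).
    Computed in log space so the 10^120 factor cannot overflow."""
    return -(120 * math.log2(10) + math.log2(phi_s) + math.log2(p_t_given_h))

# Toy case: phi_S(T) = 1 and P(T|H) = 2^-500 under a uniform chance hypothesis.
value = chi(1, 2 ** -500)
print(f"chi = {value:.1f}")  # positive, i.e. past Dembski's chi > 1 cutoff
```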
Gpuccio said, “We also have to consider if any known necessity mechanism can explain what we observe, completely or in part.” That clearly indicates we have to consider the mechanism of how a pattern arose.
Joe:
Joe, it would be very interesting for you to “show your working” as it were. Could you show what your inputs and outputs were? I’d be very interested if you were able to substantiate this very clear claim (of plugging in the numbers) with some, oh I don’t know, evidence. http://www.uncommondescent.com/intelligent-design/conservation-of-information-made-simple-at-env/#comment-436151
GP has clarified my point, and says that I have misinterpreted this:
as implying that he would always take the ‘software’ into account, including translation/transcription. I am happy to correct that.
nonetheless,
I doubt that this would be particularly instructive – or rather, very conclusive. It is presently difficult-to-impossible to distinguish which features of known replicating entities are primary, and which secondary, for events that took place prior to LUCA. We only know what is universal in her descendants.
The minimal known reproducing beings are protein-based, and tend to be parasitic. Even ignoring that parasitism, we simply have no handle on what a minimal reproducer would have consisted of. I certainly strongly doubt that protein is primary, as I have argued (with equally little effect!) elsewhere.
It means people can be discussing CSI, but referring to different things entirely.
I would like to point out that having two or more dogmas does not imply that any of them are correct. There are hundreds of religions, but this does not guarantee that any are true.
What makes a position dogmatic is not its correctness or incorrectness, but its imperviousness to evidence. In the case of the various versions of CSI, it is the adherence to the fallacy of assuming the conclusion.
CSI, in its ID garb, cannot exist if there is a natural process that can generate the structure in question. ID advocates argue that some structures could not have arisen naturally because they are too complex (which implies CSI).
If you wish to break out of this fallacy you must demonstrate the inability of evolution to operate continually.
I find it amusing that Wallace is preferred over Darwin by ID advocates, but the title of Wallace’s original paper is On the Tendency of Varieties to Depart Indefinitely From the Original Type.
Where is the evidence against this indefinite tendency?
Joe
I thought ID was only about detecting design? So now that you are looking at organisms the correct way what additional information can you provide? Nothing? Thought so.
What’s that got to do with ID? And I ask once again, what is the “it” in doingit?
What is it that I’m claiming they can do? Is pretending that you are responding to a claim I’m making really the best way you can think of to try and redirect attention away from the fact that you have not and cannot respond to the actual questions being asked?
Huh? What’s that got to do with anything? What am I equivocating about, specifically?
If that’s the case then why are your writings not more famous than his?
Once again I ask you for that pseudocode. I believe I can build a program that will output what would commonly be known as CSI. I’m asking you how I could go about checking that output? How can I measure that “CSI”?
I think that you can logically demonstrate that any objective measure of CSI would enable a GA to produce CSI. All you have to do is use the CSI definition to compare the quantity of CSI in child sequences and favor the better sequences in reproduction.
I might point out that this is what is accomplished in natural selection, but that would be mean.
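That argument can be sketched directly: plug any computable "CSI" score into a GA as the selection criterion and the GA will climb it. The score below (matches to a fixed phrase, Dawkins-weasel style) is a stand-in for whatever objective measure is offered, not anyone's actual CSI definition:

```python
import random

random.seed(0)  # deterministic run, for illustration only

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"  # stand-in "specification"

def score(genome):
    """Placeholder 'CSI' measure: any computable score could go here."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
start = score(best)
for generation in range(1000):
    # Selection: keep whichever child (or the parent) scores highest.
    best = max([mutate(best) for _ in range(100)] + [best], key=score)
    if score(best) == len(TARGET):
        break

print(generation, score(best), best)
```

The only work the GA needs from the measure is a comparison between child sequences, which is exactly what an "objective measure of CSI" would have to supply.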
It is becoming obvious that Dembski’s “true stroke of genius” in “defining” CSI was in his taking logarithms to base 2 of ratios of the cardinalities of sets.
Watching the UD crowd grapple with this “advanced math” suggests that log base 2 is the ultimate obfuscator in any arguments with “Darwinists.” To the UD crowd, log base 2 is so advanced that no “Darwinist” can possibly understand the “proofs” of intelligent design.
It is quite amazing that people who never learned high school level science can feel so brilliant at log base 2 math to “prove” things about design in science.
Reality is cruel to the aggressively, willfully ignorant. You are but its instrument. 😉
Based on the original post, you count the number of sequences that meet the specification and compare it to the number of possible sequences. This is consistent with Dembski’s definition.
To see if islands of function are isolated, you count the number of sequences that are beneficial or neutral and which are within one mutational hop of the current configuration. Mutation type TBA.
So far the count has never been less than one, even in Axe’s PhD experiment.
So the probability that a neutral or beneficial change will be found from any existing position is one. That is the best approximation from available evidence.
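The ratio count described above is easy to make concrete. A toy sketch (the length-10 strings and the "at least eight 1s" specification are invented for illustration):

```python
import math
from itertools import product

# Hypothetical specification: length-10 bitstrings with at least eight 1s.
matching = sum(1 for bits in product("01", repeat=10)
               if bits.count("1") >= 8)
total = 2 ** 10

# Specified information in bits, assuming a uniform distribution:
# -log2(matching / total), i.e. rarer specifications score higher.
info = -math.log2(matching / total)
print(f"{matching} of {total} sequences match: {info:.2f} bits")
```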
Mung
Thanks for those links, but I’ve already built one of those. What I don’t have is a “CSI calculation”. That’s for your side to provide. You really don’t get that? A few comments down gpuccio says:
So on the one hand we have you seemingly telling me that CSI cannot be calculated and on the other hand gpuccio is saying it’s apparently deadly to my “beliefs”, whatever they might be.
I’m not arguing any possible thing to evade the concept.
You and Joe are.
Can’t you see that?
I’m just asking you how I can determine what the value of CSI is for a given string. Yet you turn that around onto me? Good luck with that.
Shakespeare has CSI then? Show your working…..
Mung,
Why don’t you run the output through the Explanatory Filter?
Good question. That’s exactly what you need to answer. Ask Joe!
It seems to depend on who you ask. If you’ve been reading this very thread you’ll see that.
What is your definition?