Conflicting Definitions of “Specified” in ID

I see that in the unending TSZ and Jerad Thread Joe has written in response to R0bb

Try to compress the works of Shakespeare- CSI. Try to compress any encyclopedia- CSI. Even Stephen C. Meyer says CSI is not amenable to compression.

A protein sequence is not compressible- CSI.

So please reference Dembski and I will find Meyer’s quote

To save R0bb the effort: using Specification: The Pattern That Signifies Intelligence by William Dembski, which is his most recent publication on specification, turn to page 15, where he discusses the difference between two bit strings, (ψR) and (R). (ψR) is the bit stream corresponding to the integers in binary (clearly easily compressible). (R), to quote Dembski, “cannot, so far as we can tell, be described any more simply than by repeating the sequence”. He then goes on to explain that (ψR) is an example of a specified string whereas (R) is not.

This conflict between Dembski’s definition of “specified”, which he quite explicitly links to low Kolmogorov complexity (see pp. 9–12), and other ID definitions, which take the reverse view, appears to be a problem that most of the ID community don’t know about and the rest choose to ignore.  I discussed this with Gpuccio a couple of years ago. He at least recognised the conflict, and his response was that he didn’t care much what Dembski’s view is – which at least is honest.
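As a rough illustration of the compressibility gap Dembski describes on page 15, here is a sketch in Python. Note the hedges: zlib’s compressed size is only a crude upper bound on Kolmogorov complexity, and the string length here is arbitrary.

```python
import zlib
import random

# A (psi-R)-style string: the positive integers written in binary and
# concatenated.  It has a short description, so it should be highly
# compressible in the sense zlib approximates.
psi_r = "".join(format(n, "b") for n in range(1, 2000))

# An (R)-style string: random bits of the same length, which (almost
# certainly) has no description much shorter than itself.
random.seed(0)
r = "".join(random.choice("01") for _ in range(len(psi_r)))

def ratio(s):
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(s.encode())) / len(s)

print(f"psi_R compresses to {ratio(psi_r):.0%} of its length")
print(f"R     compresses to {ratio(r):.0%} of its length")
```

The patterned string compresses noticeably further than the random one, which is the distinction the paper builds its definition of “specified” on.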

261 thoughts on “Conflicting Definitions of “Specified” in ID”

  1. Yes, it’s clear that Dembski and most ID advocates are quite confused about the relationship between Kolmogorov complexity and the bogus concept of CSI.  In my paper with Elsberry we point out that Dembski associates CSI with low Kolmogorov complexity (highly compressible strings). But strings with low Kolmogorov complexity are precisely those that are “easy” to produce with simple algorithmic procedures (in other words, likely to occur from some simple natural algorithm).  By contrast, organismal DNA (for example) doesn’t seem that compressible; experiments show long strings of organismal DNA are often poorly compressible, say only by about 10% or so.  This is, in fact, good evidence that organismal DNA arose through a largely random process.
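    What near-incompressibility looks like can be sketched with a synthetic stand-in (this is not real genomic data; it only shows the baseline behaviour against which the roughly-10% figure for real DNA is measured): a random base sequence packed at the 2-bits-per-base floor gains essentially nothing further from a general-purpose compressor.

```python
import random
import zlib

# Synthetic stand-in only: a uniformly random base sequence.
random.seed(1)
seq = "".join(random.choice("ACGT") for _ in range(40000))

# Pack at 2 bits per base, the naive information-theoretic floor.
code = {"A": 0, "C": 1, "G": 2, "T": 3}
packed = bytearray()
for i in range(0, len(seq), 4):
    b = 0
    for ch in seq[i:i + 4]:
        b = (b << 2) | code[ch]
    packed.append(b)

# For a random sequence the extra saving is essentially zero;
# real genomes reportedly manage only around 10%.
saving = 1 - len(zlib.compress(bytes(packed))) / len(packed)
print(f"extra compression beyond 2 bits/base: {saving:.1%}")
```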

  2. A rather odd response from Joe.

    Mark Frank is confused. Above I was discussing COMPLEX SPECIFIED INFORMATION and he links to a paper about SPECIFICATION ONLY, in an attempt to refute what I said.

    SPECIFICATION is the S in CSI, Mark.

    Dembski’s paper is quite clearly and explicitly about what “specified” means when used in the context of complex specified information.  It is what the whole paper is about. 

  3. Since ID advocates like to compare the genetic code to computer code, one wonders why the designer failed to implement CRC checking and a bit of redundancy in the code. The actual error correcting machinery is woefully inadequate if the goal is something other than evolution. I can copy a computer file through a million generations without error, even though media are imperfect.

    Now if the mutation system is in itself part of a designed adaptive system, as Shapiro argues, that implies the designer relies on the landscape having gradients.

  4. Goodness it is difficult having a debate with Joe – the issue under debate seems to drift all over the place.  This is the latest:

     

    To Mark Frank- once again I refer you to my examples in 186- none of which are compressible and all of which exhibit CSI.

    How do YOU deal with those facts, Mark? By ignoring them, as usual…

    Leaving aside the fact I only just joined the debate and he has never referred me to anything, I was demonstrating that Dembski defines “specified” (as in CSI) one way and he defines it another.  For Joe to repeat his definition doesn’t contribute to this. 

    Joe, you need to take up the issue of compressibility with William Dembski, not me.  I am only pointing out that your definitions are incompatible.

  5. Joe

    The most likely explanation is that YOUR interpretation of Dembski is wrong.

    Uhm – Joe have you read Dembski’s paper?  (I have a feeling you will avoid answering this)

  6. Goodness it is difficult having a debate with Joe – the issue under debate seems to drift all over the place.

    You’ve noticed! Maybe Joe and Mung should have a natter, since they operate in the same milieu. Mung reckons a string of all 1’s has high CSI; Joe reckons completely random sequences do (given that they are the least compressible). Maybe they are both right, and any string can have high CSI? Rendering CSI a handy term for ‘it’s-a-string’?

  7. Joe

    Yes I have read the paper. Nice of YOU to ignore everything I have said

    Well that is great.  Can you then explain what Dembski does mean by specified if he does not mean compressible?

    I am not sure what I am ignoring of what you wrote.  I accept that you and many other ID proponents define “specified” in such a way that specified things are incompressible. My single point is that this definition conflicts with the most recent work on the subject by the leading ID thinker.

    I didn’t want to get into a silly dispute over what is meant by “referring someone to something”. It seems to me that just because I have quoted something you wrote it doesn’t mean you referred me to it – but who cares?

  8. Joe

    SPECIFICATION IS NOT CSI. Specification is only one part of CSI- ie the S.

    That’s true.  So what?  We were discussing different concepts of specification (yours and Dembski’s).  You and Dembski agree that specification is only one part of CSI. Where you disagree is over whether something that is specified is compressible. Or do you think that Dembski says in the paper that specified things are not compressible?  I would be truly interested to know where in the paper you think he says this.

  9. I am more tolerant of Orgel’s concept of specified information than Jeff is. In a simple genetic algorithm model it amounts to some relevant scale, and Dembski’s use of it to define CSI is to define a region of genotypes far enough out on that scale that pure mutational processes would never once, even in the whole history of the Universe, produce a value that extreme.

    But what scale? The relevant one (which Dembski does allow) is fitness, or at any rate one which expresses a degree of adaptation. We could instead define an arbitrary one such as the number of purple spots on the organism, but that would be of no interest. The whole purpose of Dembski’s Design Inference argument is that he thinks that there is a conservation theorem that prevents natural selection from getting organisms as far out on the fitness scale, as highly adapted as they are in real life. (And he’s wrong about the theorem showing that, as Jeff and I have argued.)

    Dembski’s Kolmogorov argument is just one other possible scale — we rank organisms according to the smallness of the computer program that can produce them. That seems to have nothing to do with fitness or adaptation. An organism that was a perfect sphere might be the winner, though it would not be well-adapted. So I think it is as silly as using the number of purple spots as the specification.

    At that point in Dembski’s argument he is extremely fuzzy about why he wants to use this criterion, what it accomplishes. For example he simply has no discussion of the probability that a random genotype will be a program that does the job.

    So I think the Kolmogorov Complexity criterion Dembski invokes should just be tossed, along with the Purple Spots Criterion. 

  10. Bless you, Mark, for trying.  But why anyone would want to try to reason with Joe, a person who seems to be irrational, is beyond my understanding. 

  11. Mark,

    As long as the creationists have Joe, Mung, KF, UB and gpuccio on their side, we’re guaranteed many more Dover rulings! :)

     

     

  12. I think I can understand the problem here. What Joe and Dembski are both doing is looking at the object in question and deciding whether it was Designed. If the answer is yes, then it must have high CSI, otherwise the CSI must be substantially lower.

    Now here, we start to run into problems. Some objects which they regard as obviously Designed are very difficult to compress, while others compress easily and significantly. There doesn’t seem to be a consistent pattern with respect to compressibility.

    Nor can we simply discard compressibility as a component of specification, because if we do, what remains is all too obviously arbitrary and subjective. Fact is, the specification cannot be determined from examining an object since there is no way to know whether the object meets the specification. So Joe and Dembski are like Justice Potter Stewart trying to specify pornography. They know it when they see it, but the ONLY thing multiple Designed objects have in common is Joe’s (or Dembski’s) determination based on what they decide to believe about it.

    And so here we are. Joe realizes that high compressibility can’t be the measure of specification, because far too many simple repetitive clearly non-Designed objects compress very tightly. But Dembski realizes that low compressibility can’t be it either, because the genuinely random and patternless cannot be compressed and THAT is surely not Designed either.

    And if compressibility is orthogonal to specification, what DO we look at? There must be SOME identifiable and measurable hallmarks of Design, or else we’re left only with subjective preference. Like, you know, religion.   

  13. Flint,

    Nor can we simply discard compressibility as a component of specification, because if we do, what remains is all too obviously arbitrary and subjective. Fact is, the specification cannot be determined from examining an object since there is no way to know whether the object meets the specification.

    It’s even worse than that. An object can conform to multiple specifications, in which case it simultaneously possesses multiple CSI values, all equally valid:

    By speaking of “bits of CSI”, IDers also invite the unwary to conclude that CSI is an intrinsic property of an object. They reinforce this notion when they speak of objects “containing” a certain number of bits of CSI. In reality, CSI is not intrinsic and can only be determined relative to a specified function. An object with n functions has n CSI values, one for each target space.

    CSI is just probability in a cheap tuxedo, to borrow a metaphor. And it’s a probability of non-design, with design as the default.

  14. One common test of creativity in children is to ask them how many different uses they can think up for some object, like a brick.  Some children can think of a dozen, some of them quite creative. The number of uses to which bricks have been put is probably quite large. Similarly, you can probably find half a dozen or more objects used for purposes other than what was probably originally intended (chairs used as doorstops, coins used as shims, whatever) around your home in a few minutes.

    The very idea of CSI lacks any real-world referent. 

    So the point was they are trying to paste post hoc rationalizations onto foregone conclusions derived from religious precepts, to make them smell scientistical. They can hardly help but know this.

  15. Per Dembski’s muddled terminology (emphasis added): 

    Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?

    a specification is that the pattern is easily described but the event it denotes is highly improbable and therefore very difficult to reproduce by chance.

    ϕS(T) = the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T… ϕS(T) defines the specificational resources that S associates with the pattern T.

    Thus, for a pattern T, a chance hypothesis H, and a semiotic agent S for whom ϕS measures specificational resources, the specificity σ is given as follows: σ = –log2[ ϕS(T)·P(T|H)].

    In addition, we need to factor in what I call the replicational resources associated with T, that is, all the opportunities to bring about an event of T’s descriptive complexity and improbability by multiple agents witnessing multiple events. If you will, the specificity ϕS(T)·P(T|H) (sans negative logarithm) needs to be supplemented by factors M and N where M is the number of semiotic agents (cf. archers) that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen (cf. arrows). 

    Moreover, we define the logarithm to the base 2 of M·N· ϕS(T)·P(T|H) as the context dependent specified complexity of T given H, the context being S’s context of inquiry: χ~ = –log2[M·N· ϕS(T)·P(T|H)].

    We thus define the specified complexity of T given H (minus the tilde and context sensitivity) as χ = –log2[10^120 · ϕS(T)·P(T|H)].
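    For what it is worth, the final formula is easy to evaluate once ϕS(T) and P(T|H) are stipulated. Here is a sketch in Python, working in log space so tiny probabilities don’t underflow; every input below is hypothetical, chosen only for illustration.

```python
import math

# Sketch of Dembski's specified complexity,
#   chi = -log2(10^120 * phi_S(T) * P(T|H)),
# computed in log space to avoid floating-point underflow.

def chi(phi_s, log2_p):
    """phi_s: specificational resources phi_S(T);
    log2_p: log2 of P(T|H), e.g. -1000 for a uniform 1000-bit string."""
    return -(120 * math.log2(10) + math.log2(phi_s) + log2_p)

# A 1000-bit pattern with (hypothetically) phi_S(T) = 10^5 under a
# uniform chance hypothesis: chi is far above 1, so on this account
# it would count as a specification.
print(chi(10**5, -1000))

# A pattern no more improbable than 10^-120 never clears the threshold.
print(chi(1, -120 * math.log2(10)))
```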

  16. Joe:

    And Allan Miller chimes in:

    Mung reckons a string of all 1’s has high CSI; Joe reckons completely random sequences do (given that they are the least compressible).

    Nope, completely random sequences do not have CSI. As I said you guys ignore what I write and make stuff up.

    So when you say “Try and compress x – high CSI”, you aren’t linking CSI with compressibility? The point is that ‘random’ strings – with no pattern at all – are the least compressible. Of course they are not CSI, because they are random. But they are minimally compressible. So CSI lies in that nice middle ground between minimally compressible and non-random? It’s not so much that we ignore what you wrote, it is just incoherent.

    Mung: And I see Allan misrepresent me as well.
    High CSI? What’s that? Low CSI? What’s that?
    How much CSI makes for high CSI and how little CSI makes for low CSI?

    You tell us, sunshine! It’s your (ID’s) bloody concept! You claimed to generate it by making a string of 1’s. Elect one of your number to give a coherent presentation of it on which you all agree.

  17. Allan – I wish you the best of luck with this.

    I have a theory that Joe and Mung are really quite rational and are doing this as a kind of cruel tongue-in-cheek wind up.

  18. The simple fact is you can’t do a useful probability calculation without assuming something about the history and context of the sequence.

    By themselves, all sequences are equally probable.

    What leads us to suspect that some are improbable is their usefulness. Usefulness seems to be associated with the term specification.

    There is an assumption not backed up by evidence that usefulness is unimaginably rare, and that useful sequences have no close neighbors. This is contradicted by the existence of alleles, by the work of Lenski and by the work of Thornton.

    ID therefore is assuming its conclusion. It is trying to prove that current configurations could not have been reached via stepping stones by invoking probability calculations that depend on there being no stepping stones.

  19. I have a theory that Joe and Mung are really quite rational and are doing this as a kind of cruel tongue-in-cheek wind up.

    Either they are, or I am! :)

    Interesting that, regardless of what is said about CSI, no-one pipes up and tells a fellow-IDer that they have it all wrong. Yet skeptics of this hard-to-pin-down property do nothing but misrepresent, distort, lie, equivocate, bluff and bluster … well, everyone needs a hobby!

  20. From Addendum 1: One final difference that should be pointed out regarding my past work on specification is the difference between specified complexity then and now. In the past, specified complexity, as I characterized it, was a property describing a relation between a pattern and an event delineated by that pattern. Accordingly, specified complexity either obtained or did not obtain as a certain relation between a pattern and an event. In my present treatment, specified complexity still captures this relation, but it is now not merely a property but an actual number calculated by a precise formula (i.e., χ = –log2[10^120 · ϕS(T)·P(T|H)]). This number can be negative, zero, or positive. When the number is greater than 1, it indicates that we are dealing with a specification.

    That means Dembski is treating specified complexity as a quantity, not a boolean.

     

  21. Dembski fails at the get-go:

    Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?

    a specification is that the pattern is easily described but the event it denotes is highly improbable and therefore very difficult to reproduce by chance.

    If we see objects lined up the same way and, not knowing how the pattern arose, assume a uniform probability distribution, then we conclude design. But they could just as easily be molecules in a crystal aligned in the same direction. It seems you do have to know “how they arose”.

     

  22. Dembski’s problem seems pretty obvious. He KNOWS that his Designer did it, but he has no way to demonstrate this. No mechanisms at all. So he has to work backwards, showing that his Designer MUST have done it because there are no possible alternatives. And to do this, he must eliminate a mechanism widely recognized as producing such Designs through normal operation. What he can NOT do, under any circumstances, is concede that the process visibly and inexorably producing such Designs as we watch, is even possible.

    So if he should happen to accidentally admit that a specification PRECEDES a Design, which it must do, that means knowledge of an object’s history is necessary. And he has no history.

    So where Dembski fails is BEFORE the get-go. He assumes his conclusions out of religious necessity. These foregone conclusions are not negotiable, questionable, even examinable. They are given. They are also false. So the challenge is to demonstrate that a falsehood is true because it MUST be true, lest Dembski’s faith be misguided.

    I’m sure he’s sincere in his efforts. My question is whether such resounding and repeated (and obvious) failures CAN be visible to him. So far, he has resolutely ignored all critics.     

  23. Joe: CSI is a special case of specified complexity, which would mean all CSI is SC but not all SC = CSI.

    Joe: CSI and SC are different manifestations of the same thing.

    Joe: You have to see if the quantity is there to qualify as CSI.

    Joe is apparently defining SC as a quantity, but CSI as a Boolean. That would make his first statement incoherent as he would be comparing apples and oranges, but that seems to be his understanding.

    Joe: “If it is Shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information.”–CJYman

    Dembski’s definition doesn’t involve function. Presumably, the “separate functional system” would be equivalent to a semiotic agent’s description. The problem is that one can never know that a given sequence is non-compressible. A sequence may appear non-compressible (random), but have a simple description beyond our knowledge. In that situation, you would initially conclude design, but when provided the ‘key’, discover that the initial conclusion was a false positive.

     

  24. Joe is apparently defining SC as a quantity, but CSI as a Boolean. That would make his first statement incoherent as he would be comparing apples and oranges, but that seems to be his understanding.

    You are assuming that Joe is capable of understanding.  I remain unconvinced that he isn’t an undergraduate Markov text generator that has been allowed to run amok.

    Yes, yes, I’ll see myself to Guano….
     

  25. Zachriel: Joe is apparently defining SC as a quantity, but CSI as a Boolean.

    Joe: Nope.

    So what are the units for SC? What are the units for CSI? This statement seems to indicate that CSI is Boolean.

    Joe: “If it is Shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information.” — CJYman

    So, CJYman is the primary source for the definition of CSI? Does Dembski define CSI? If so, where? 
     

  26. Zachriel,

    Joe is apparently defining SC as a quantity, but CSI as a Boolean. That would make his first statement incoherent as he would be comparing apples and oranges, but that seems to be his understanding

    This is probably the only time I’ll ever defend a statement of Joe’s, but I think that in this instance his statement is at least coherent. Whether it comports with Dembski’s intent, I have no idea.

    Joe isn’t saying that CSI is a boolean. He’s saying that SC and CSI are commensurable quantities. An SC value below a certain threshold is not CSI, while a value of SC above that threshold is CSI.

    It’s like saying that a speed below 65 mph is not an illegal speed, while a speed above 65 is an illegal speed. They’re measured in the same units, but one speed qualifies as illegal while the other doesn’t.

  27. keiths: Joe isn’t saying that CSI is a boolean. He’s saying that SC and CSI are commensurable quantities. An SC value below a certain threshold is not CSI, while a value of SC above that threshold is CSI.

    Okay. 

    But then, he says this:

    R0bb: You said that “all CSI is SC but not all SC = CSI” and “CSI and SC are different manifestations of the same thing.” These indicate that the terms are not synonymous. Agreed?

    Joe: Disagree.

    He may mean they have the same calculation of value, but that wouldn’t make them synonymous. It would make CSI a subset of SC.

  28. Mung (quoting Dembski): It follows that the collection of nonrandom [algorithmically compressible] sequences has small probability among the totality of sequences so that observing a nonrandom sequence is reason to look for explanations other than chance.

    Joe (quoting CJYman): “If it is Shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information.”–CJYman 

    Shorter Mung: Compressible, might be CSI.
    Shorter Joe: Non-compressible, required for CSI.
     
     

  29. Zachriel,

    He may mean they have the same calculation of value, but that wouldn’t make them synonymous. It would make CSI a subset of SC.

    Yep. But at least he got one statement right. That’s progress.

  30. Joe: when cjyman said:

    “If it is Shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information,” it does not mean that every instance of CSI has to be like that. He is saying if that is what you have then you have CSI.

    Except that you offered it as a definition. This is easy to resolve, though. Just provide a clear and unambiguous definition of CSI. 

  31. gpuccio shows how to calculate dFSCI in practice. 

    In the end, I believe we can safely affirm dFSCI for the works of Shakespeare anyway.

    Finally, a definitive recipe we’ve all been waiting for!

  32. He makes a good point about the relevance of compressibility to biological strings, though then rather blatantly ‘smuggles in’ a function relating to the existence of the transcription/translation system. The total dFSCI is that of the string itself plus the ‘decoding’ system. So there can never be translated strings that do not have dFSCI, whatever they contain, since they go through the ribosome/mRNA/tRNA/aaRS system! Any individual string’s complexity and functionality is utterly dwarfed by that of the ‘system’.

    By analogy, all of Lizzie’s organisms have high dFSCI because they run inside a complex, designed computer, and are handled by a program that is more complex than they are … which I know is how many ID-ers like to play it – you can’t look at evolution within the system without first accounting for the system – but it does seem decidedly fishy to me. You can’t investigate accounting without explaining accountants?

  33. Mung: Do probability calculations enter into the determination of “Shannon information”?

    Yes. The probability of each event can be used to calculate entropy. For instance, the toss of a weighted coin has less than one bit of entropy. 

    Mung: Do probability calculations enter into the determination of “Shannon information”?

    Yes. It’s part of the basic definition: entropy is the negative of the sum of the products of the probability of each event and the log of that probability. The calculated entropy, then, depends on our knowledge of the events. So if we know nothing, then each symbol may be considered to have equal probability. If we know that the message represents text in English, then, because some letters are more common than others, the entropy is less than it would be otherwise. Indeed, human observers can often guess the content of messages even when more than half the letters are missing. (See Sajak & White, W h – – l o f F – – – – n -.) 

    R0bb: Yes, algorithmic compression. What kind of compression did you think I was talking about? Hydraulic?

    Heh.

    gpuccio: First of all, I am aware that Dembski considers compressibility as a form of specification. He may be right, but very simply I have never considered it as a form of functional specification in my discussions about biology.

    Well, that establishes that there are conflicting definitions. 

    Mung:  What definition was Lizzie using, …

    We only gave the thread a cursory view, but from what we did read, it was presumably Dembski’s definition; a long sequence which has a simple description (the function), but is unlikely due to chance alone (a uniform probability distribution). 

    Mung:  and where were you in that thread? 
    http://theskepticalzone.com/wp/?p=576

    Turns out that the planets keep moving whether we want them to or not.

    Mung: Do you think Shannon information can just be read off any old sequence? How much “Shannon information” is in the following sequence: 00101

    In order to calculate the “amount of information” in that sequence in Shannon terms, what did you either know or assume?

    From your question, most people would assume you are referring to five independent binary digits, but if the next symbol were “9”, then that assumption would be shown to be in error. They might then assume they are decimal digits, but if the next symbol were “a”, then that assumption would also be shown to be in error. 
     
    In an engineering context, it is generally assumed binary bits are independent, and are then subject to lossless compression after the fact. 
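    The definition quoted above (“the negative of the sum of the products of the probability of each event and the log of that probability”) is a one-liner; the coin weights and the reading of “00101” below are only illustrative of how the answer depends on the assumed source.

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries one bit per toss; a weighted coin carries less.
fair = entropy_bits([0.5, 0.5])
weighted = entropy_bits([0.9, 0.1])
print(fair, weighted)

# "00101" read as five independent fair bits would carry 5 bits;
# under a different assumed source, the figure changes.
print(5 * fair)
```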
     

  34. gpuccio: As I have stated many times, it is not enough to compute the maximum functional information in that string (the ratio of the target space to the search space). We also have to consider whether any known necessity mechanism can explain what we observe, completely or in part.

    So your answer to Dembski’s rhetorical question, “Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?” is no. 
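    gpuccio’s “ratio of the target space to the search space”, expressed in bits, is a one-line calculation; the counts below are purely illustrative, since estimating real target-space sizes is the entire point of contention.

```python
import math

def functional_information(target_count, search_count):
    """gpuccio-style functional information: -log2 of the fraction
    of the search space that performs the function."""
    return -math.log2(target_count / search_count)

# Purely illustrative: if 1 sequence in 2^40 performed the function,
# the string would carry 40 bits of functional information.
print(functional_information(1, 2**40))
```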

  35. Joe: I offered what cjyman said about CSI to support my claim pertaining to CSI being not algorithmically compressible.

    That seems to contradict what you just said. Is CSI necessarily non-compressible? 

    Also, we can never know that a given sequence is non-compressible. A sequence may appear non-compressible (random), but have a simple description beyond our knowledge. In that situation, you would initially conclude design, but when provided the ‘key’, discover that the initial conclusion was a false positive.

    Joe: Please reference Dembski stating that, because I have provided CSI that is not that.

    χ = –log2[10^120 · ϕS(T)·P(T|H)]
    Dembski, Specification: The Pattern That Signifies Intelligence 2005.

    Joe: Nope, that does NOT follow from what he said.

    Gpuccio said, “We also have to consider whether any known necessity mechanism can explain what we observe, completely or in part.” That clearly indicates we have to consider the mechanism of how a pattern arose. 

  36. Joe:

    Also the fact that when I plug in the values I say are correct, the equations [Dembski and Marks’ equations] actually work, tells me I am right.

    Joe, it would be very interesting for you to “show your working” as it were. Could you show what your inputs and outputs were? I’d be very interested if you were able to substantiate this very clear claim (of plugging in the numbers) with some, oh I don’t know, evidence. http://www.uncommondescent.com/intelligent-design/conservation-of-information-made-simple-at-env/#comment-436151

  37. GP has clarified my point, and says that I have misinterpreted this:

    So, if we want to apply that to the works of Shakespeare, you can reduce the functional complexity of the original observed string (the works of S themselves) by calculating the total complexity of:

    a) The compressed string that you obtained

    +

    b) The software that can expand it into the original observed string.

    as implying that he would always take the ‘software’ into account, including translation/transcription. I am happy to correct that.

    nonetheless,

    But, if we are debating OOL, then the whole complexity of the minimal known reproducing beings should be taken into consideration.

    I doubt that this would be particularly instructive – or rather, very conclusive. It is presently difficult-to-impossible to distinguish which features of known replicating entities are primary, and which secondary, for events that took place prior to LUCA. We only know what is universal in her descendants. 

    The minimal known reproducing beings are protein-based, and tend to be parasitic. Even ignoring that parasitism, we simply have no handle on what a minimal reproducer would have consisted of. I certainly strongly doubt that protein is primary, as I have argued (with equally little effect!) elsewhere.

  38. Zachriel: Well, that establishes that there are conflicting definitions.

    gpuccio: And so? That is good evidence of intellectual vitality and non-dogmatism in the ID field!

    It means people can be discussing CSI, but referring to different things entirely.

     

  39. I would like to point out that having two or more dogmas does not imply that any of them are correct. There are hundreds of religions, but this does not guarantee that any are true.

    What makes a position dogmatic is not its correctness or incorrectness, but its imperviousness to evidence. In the case of the various versions of CSI, it is the adherence to the fallacy of assuming the conclusion.

    CSI, in its ID garb, cannot exist if there is a natural process that can generate the structure in question. ID advocates argue that some structures could not have arisen naturally because they are too complex (which implies CSI).

    If you wish to break out of this fallacy, you must demonstrate that evolution is unable to operate continually.

    I find it amusing that Wallace is preferred over Darwin by ID advocates, but the title of Wallace’s original paper is On the Tendency of Varieties to Depart Indefinitely From the Original Type.

    Where is the evidence against this indefinite tendency?

  40. Joe

    It adds the correct way of looking at organisms- just as archaeology adds the correct way of looking at a group of rocks, i.e. Stonehenge.

    I thought ID was only about detecting design? So now that you are looking at organisms the correct way what additional information can you provide? Nothing? Thought so.

    Or perhaps OMTWO can provide a testable hypothesis for blind and undirected chemical processes doingit.

    What’s that got to do with ID? And I ask once again, what is the “it” in doingit?

    Doing what YOU claim they can- try any bacterial flagellum.

    What is it that I’m claiming they can do? Is pretending that you are responding to a claim I’m making really the best way you can think of to try and redirect attention away from the fact that you have not and cannot respond to the actual questions being asked?

    Only intellectual cowards equivocate, and you do it continually. ID is NOT anti-evolution.

    Huh? What’s that got to do with anything? What am I equivocating about, specifically?

    Saw it and read it. Darwin didn’t know anything and he argued against a strawman. IOW he was intellectually dishonest.

    If that’s the case then why are your writings not more famous than his?

    But thanks for proving that you are not only a waste of time but also a waste of skin…

    Once again I ask you for that pseudocode. I believe I can build a program that will output what would commonly be known as CSI. I’m asking you how I could go about checking that output? How can I measure that “CSI”?

  41. http://theskepticalzone.com/wp/?p=1331&cpage=1#comment-16494

    I think that you can logically demonstrate that any objective measure of CSI would enable a GA to produce CSI. All you have to do is use the CSI definition to compare the quantity of CSI in child sequences and favor the better sequences in reproduction.

    I might point out that this is what is accomplished in natural selection, but that would be mean.
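    The argument above is easy to sketch in code: a generic GA that takes *any* objective CSI measure as its fitness function and favors the higher-scoring child sequences in reproduction. This is a minimal illustration, not anyone's published algorithm; the match-count measure at the bottom is a hypothetical stand-in, and any objective CSI calculation could be dropped in its place unchanged.

```python
import random

def evolve(csi_measure, alphabet, length, pop_size=100,
           generations=500, mutation_rate=0.05):
    """Generic GA: any objective CSI measure can serve as the fitness function."""
    pop = [''.join(random.choice(alphabet) for _ in range(length))
           for _ in range(pop_size)]
    for _ in range(generations):
        # Favor the child sequences that score higher on the supplied measure.
        pop.sort(key=csi_measure, reverse=True)
        parents = pop[:pop_size // 2]
        # Each parent yields one mutated child; parents are kept (elitism).
        children = [''.join(random.choice(alphabet)
                            if random.random() < mutation_rate else c
                            for c in p)
                    for p in parents]
        pop = parents + children
    return max(pop, key=csi_measure)

# Hypothetical stand-in measure: matches to a target "specification".
TARGET = "METHINKS"
measure = lambda s: sum(a == b for a, b in zip(s, TARGET))
best = evolve(measure, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ",
              length=len(TARGET))
```

    The point is that `evolve` never looks inside `csi_measure`: whatever definition of CSI is supplied, selection will accumulate it.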

  42. It is becoming obvious that Dembski’s “true stroke of genius” in “defining” CSI was in his taking logarithms to base 2 of ratios of the cardinalities of sets.

    Watching the UD crowd grapple with this “advanced math” suggests that log base 2 is the ultimate obfuscator in any arguments with “Darwinists.” To the UD crowd, log base 2 is so advanced that no “Darwinist” can possibly understand the “proofs” of intelligent design.

    It is quite amazing that people who never learned high school level science can feel so brilliant at log base 2 math to “prove” things about design in science.

  43. I might point out that this is what is accomplished in natural selection, but that would be mean.

    Reality is cruel to the aggressively, willfully ignorant.  You are but its instrument.  ;-)

  44. you count the number of sequences that meet the specification and compare it to the number of possible sequences.

    To see if islands of function are isolated, you count the number of sequences that are beneficial or neutral and which are within one mutational hop of the current configuration. Mutation type TBA.

    So far the count has never been less than one, even in Axe’s PhD experiment.

    So the probability that a neutral or beneficial change will be found from any existing position is one. That is the best approximation from the available evidence.
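    The counting procedure described above can be made concrete. Assuming point substitutions as the mutation type (the comment leaves the type TBA) and a toy match-count fitness of my own invention, the sketch below enumerates every sequence within one mutational hop and counts those that are neutral or beneficial:

```python
def one_hop_neighbors(seq, alphabet):
    """All sequences within one point substitution of seq."""
    for i, c in enumerate(seq):
        for a in alphabet:
            if a != c:
                yield seq[:i] + a + seq[i + 1:]

def accessible_count(seq, fitness, alphabet):
    """Count one-hop neighbors that are neutral or beneficial."""
    base = fitness(seq)
    return sum(1 for n in one_hop_neighbors(seq, alphabet)
               if fitness(n) >= base)

# Hypothetical toy fitness: matches to a fixed target sequence.
target = "GATTACA"
fit = lambda s: sum(a == b for a, b in zip(s, target))
n = accessible_count("GATCACA", fit, "ACGT")
```

    Here "GATCACA" differs from the target at one position, and three of its 21 one-hop neighbors are neutral or beneficial, so the count is again at least one.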

  45. Mung

    Let us know when you find the CSI calculation.

    Thanks for those links, but I’ve already built one of those. What I don’t have is a “CSI calculation”. That’s for your side to provide. You really don’t get that? A few comments down gpuccio says:

    They probably know all too well that CSI is deadly to their beliefs, and would argue any possible thing to evade the concept.

    So on the one hand we have you seemingly telling me that CSI cannot be calculated and on the other hand gpuccio is saying it’s apparently deadly to my “beliefs”, whatever they might be.
    I’m not arguing any possible thing to evade the concept.
    You and Joe are.
    Can’t you see that?
    I’m just asking you how I can determine what the value of CSI is for a given string. Yet you turn that around onto me? Good luck with that.

    Shakespeare has CSI then? Show your working…

  46. Mung,

    I examined Elizabeth’s program closely and I don’t see where she even attempts to calculate CSI in it. So how do you suppose she knows she generated CSI?

    Why don’t you run the output through the Explanatory Filter?

    Does “less than one” indicate “less” CSI? How much less?

    Good question. That’s exactly what you need to answer. Ask Joe!

    So in what sense is any one of them more or less compressible? Do they all then have the exact same CSI?

    It seems to depend on who you ask. If you’ve been reading this very thread you’ll see that.

  47. gpuccio: Or to slightly different aspects of the same thing. Or to different definitions of similar concepts. 

    What is your definition?  
