Conflicting Definitions of “Specified” in ID

I see that in the unending TSZ and Jerad thread, Joe has written in response to R0bb:

Try to compress the works of Shakespeare- CSI. Try to compress any encyclopedia- CSI. Even Stephen C. Meyer says CSI is not amenable to compression.

A protein sequence is not compressible- CSI.

So please reference Dembski and I will find Meyer’s quote

To save R0bb the effort.  Using Specification: The Pattern That Signifies Intelligence by William Dembski, which is his most recent publication on specification, turn to page 15, where he discusses the difference between two bit strings (ψR) and (R). (ψR) is the bit stream corresponding to the integers in binary (clearly easily compressible).  (R), to quote Dembski, “cannot, so far as we can tell, be described any more simply than by repeating the sequence”.  He then goes on to explain that (ψR) is an example of a specified string whereas (R) is not.

This conflict between Dembski’s definition of “specified”, which he quite explicitly links to low Kolmogorov complexity (see pp 9–12), and others who take the reverse view appears to be a problem which most of the ID community don’t know about and the rest choose to ignore.  I discussed this with Gpuccio a couple of years ago. He at least recognised the conflict, and his response was that he didn’t care much what Dembski’s view is – which at least is honest.

261 thoughts on “Conflicting Definitions of “Specified” in ID”

  1. Yes, it’s clear that Dembski and most ID advocates are quite confused about the relationship between Kolmogorov complexity and the bogus concept of CSI.  In my paper with Elsberry we point out that Dembski associates CSI with low Kolmogorov complexity (highly compressible strings). But strings with low Kolmogorov complexity are precisely those that are “easy” to produce with simple algorithmic procedures (in other words, likely to occur from some simple natural algorithm).    By contrast, organismal DNA (for example) doesn’t seem that compressible; experiments show long strings of organismal DNA are often poorly compressible, say only by about 10% or so.   This is, in fact, good evidence that organismal DNA arose through a largely random process.
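
    As a quick editorial illustration of that compressibility point (mine, not Shallit and Elsberry’s), here is a minimal Python sketch using zlib as a crude stand-in for Kolmogorov complexity; note that compressed size only gives an upper bound on Kolmogorov complexity, which is uncomputable:

    ```python
    import random
    import zlib

    def compression_ratio(s: bytes) -> float:
        """zlib-compressed size as a fraction of the original size."""
        return len(zlib.compress(s, 9)) / len(s)

    random.seed(0)
    n = 100_000
    repetitive = b"01" * (n // 2)  # highly patterned: low Kolmogorov complexity
    random_bytes = bytes(random.getrandbits(8) for _ in range(n))  # patternless

    print(f"repetitive: {compression_ratio(repetitive):.1%}")    # well under 1%
    print(f"random:     {compression_ratio(random_bytes):.1%}")  # ~100%, no gain
    ```

    The patterned string collapses to almost nothing while the random one does not budge, which is exactly the tension: Dembski ties specification to strings of the first kind, while organismal DNA behaves much more like the second.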

  2. A rather odd response from Joe.

    Mark Frank is confused. Above I was discussing COMPLEX SPECIFIED INFORMATION and he links to a paper about SPECIFICATION ONLY, in an attempt to refute what I said.

    SPECIFICATION is the S in CSI, Mark.

    Dembski’s paper is quite clearly and explicitly about what “specified” means when used in the context of complex specified information.  It is what the whole paper is about. 

  3. Since ID advocates like to compare the genetic code to computer code, one wonders why the designer failed to implement CRC checking and a bit of redundancy in the code. The actual error correcting machinery is woefully inadequate if the goal is something other than evolution. I can copy a computer file through a million generations without error, even though media are imperfect.

    Now if the mutation system is in itself part of a designed adaptive system, as Shapiro argues, that implies the designer relies on the landscape having gradients.

  4. Goodness it is difficult having a debate with Joe – the issue under debate seems to drift all over the place.  This is the latest:

     

    To Mark Frank- once again I refer you to my examples in 186- none of which are compressible and all of which exhibit CSI.

    How do YOU deal with those facts, Mark? By ignoring them, as usual…

    Leaving aside the fact I only just joined the debate and he has never referred me to anything, I was demonstrating that Dembski defines “specified” (as in CSI) one way and he defines it another.  For Joe to repeat his definition doesn’t contribute to this. 

    Joe, you need to take up the issue of compressibility with William Dembski, not me.  I am only pointing out that your definitions are incompatible.

  5. Joe

    The most likely explanation is that YOUR interpretation of Dembski is wrong.

    Uhm – Joe have you read Dembski’s paper?  (I have a feeling you will avoid answering this)

  6. Goodness it is difficult having a debate with Joe – the issue under debate seems to drift all over the place.

    You’ve noticed! Maybe Joe and Mung should have a natter, since they operate in the same milieu. Mung reckons a string of all 1s has high CSI; Joe reckons completely random sequences do (given that they are the least compressible). Maybe they are both right, and any string can have high CSI? Rendering CSI a handy term for ‘it’s-a-string’?

  7. Joe

    Yes I have read the paper. Nice of YOU to ignore everything I have said

    Well that is great.  Can you then explain what Dembski does mean by specified if he does not mean compressible?

    I am not sure what I am ignoring of what you wrote.  I accept that you and many other ID proponents define “specified” in such a way that whatever is specified is incompressible. My single point is that this definition conflicts with the most recent work on the subject by the leading ID thinker.

    I didn’t want to get into a silly dispute over what is meant by “referring someone to something”. It seems to me that just because I have quoted something you wrote it doesn’t mean you referred me to it – but who cares?

  8. Joe

    SPECIFICATION IS NOT CSI. Specification is only one part of CSI- ie the S.

    That’s true.  So what?  We were discussing different concepts of specification (yours and Dembski’s).  You and Dembski agree that specification is only one part of CSI. Where you disagree is whether something that is specified is compressible. Or do you think that Dembski says in the paper that specified things are not compressible?  I would be truly interested to know where in the paper you think he says this.

  9. I am more tolerant of Orgel’s concept of specified information than Jeff is. In a simple genetic algorithm model it is defined on some relevant scale, and Dembski’s use of it to define CSI is to define a region of genotypes far enough out on that scale that pure mutational processes would never once, even in the whole history of the Universe, produce a value that extreme.

    But what scale? The relevant one (which Dembski does allow) is fitness, or at any rate one which expresses a degree of adaptation. We could instead define an arbitrary one such as the number of purple spots on the organism, but that would be of no interest. The whole purpose of Dembski’s Design Inference argument is that he thinks that there is a conservation theorem that prevents natural selection from getting organisms as far out on the fitness scale, as highly adapted as they are in real life. (And he’s wrong about the theorem showing that, as Jeff and I have argued.)

    Dembski’s Kolmogorov argument is just one other possible scale — we rank organisms according to the smallness of the computer program that can produce them. That seems to have nothing to do with fitness or adaptation. An organism that was a perfect sphere might be the winner, though it would not be well-adapted. So I think it is as silly as using the number of purple spots as the specification.

    At that point in Dembski’s argument he is extremely fuzzy about why he wants to use this criterion, what it accomplishes. For example he simply has no discussion of the probability that a random genotype will be a program that does the job.

    So I think the Kolmogorov Complexity criterion Dembski invokes should just be tossed, along with the Purple Spots Criterion. 

  10. Bless you, Mark, for trying.  But why anyone would want to try to reason with Joe, a person who seems to be irrational, is beyond my understanding. 

  11. Mark,

    As long as the creationists have Joe, Mung, KF, UB and gpuccio on their side, we’re guaranteed many more Dover rulings! :)


  12. I think I can understand the problem here. What Joe and Dembski are both doing is looking at the object in question and deciding whether it was Designed. If the answer is yes, then it must have high CSI, otherwise the CSI must be substantially lower.

    Now here, we start to run into problems. Some objects which they regard as obviously Designed are very difficult to compress, while others compress easily and significantly. There doesn’t seem to be a consistent pattern with respect to compressibility.

    Nor can we simply discard compressibility as a component of specification, because if we do, what remains is all too obviously arbitrary and subjective. Fact is, the specification cannot be determined from examining an object, since there is no way to know whether the object meets the specification. So Joe and Dembski are like Justice Potter Stewart trying to specify pornography. They know it when they see it, but the ONLY thing multiple Designed objects have in common is Joe’s (or Dembski’s) determination based on what they decide to believe about it.

    And so here we are. Joe realizes that high compressibility can’t be the measure of specification, because far too many simple repetitive clearly non-Designed objects compress very tightly. But Dembski realizes that low compressibility can’t be it either, because the genuinely random and patternless cannot be compressed, and THAT is surely not Designed either.

    And if compressibility is orthogonal to specification, what DO we look at? There must be SOME identifiable and measurable hallmarks of Design, or else we’re left only with subjective preference. Like, you know, religion.   

  13. Flint,

    Nor can we simply discard compressibility as a component of specification, because if we do, what remains is all too obviously arbitrary and subjective. Fact is, the specification cannot be determined from examining an object since there is no way to know whether the object meets the specification.

    It’s even worse than that. An object can conform to multiple specifications, in which case it simultaneously possesses multiple CSI values, all equally valid:

    By speaking of “bits of CSI”, IDers also invite the unwary to conclude that CSI is an intrinsic property of an object. They reinforce this notion when they speak of objects “containing” a certain number of bits of CSI. In reality, CSI is not intrinsic and can only be determined relative to a specified function. An object with n functions has n CSI values, one for each target space.

    CSI is just probability in a cheap tuxedo, to borrow a metaphor. And it’s a probability of non-design, with design as the default.

  14. One common test of creativity in children is to ask them how many different uses they can think up for some object, like a brick.  Some children can think of a dozen, some of them quite creative. The number of uses to which bricks have been put is probably quite large. Similarly, you can probably find half a dozen or more objects used for purposes other than what was probably originally intended (chairs used as doorstops, coins used as shims, whatever) around your home in a few minutes.

    The very idea of CSI lacks any real-world referent. 

    So the point was they are trying to paste post hoc rationalizations onto foregone conclusions derived from religious precepts, to make them smell scientistical. They can hardly help but know this.

  15. Per Dembski’s muddled terminology (emphasis added): 

    Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?

    a specification is that the pattern is easily described but the event it denotes is highly improbable and therefore very difficult to reproduce by chance.

    ϕS(T) = the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T… ϕS(T) defines the specificational resources that S associates with the pattern T.

    Thus, for a pattern T, a chance hypothesis H, and a semiotic agent S for whom ϕS measures specificational resources, the specificity σ is given as follows: σ = –log2[ ϕS(T)·P(T|H)].

    In addition, we need to factor in what I call the replicational resources associated with T, that is, all the opportunities to bring about an event of T’s descriptive complexity and improbability by multiple agents witnessing multiple events. If you will, the specificity ϕS(T)·P(T|H) (sans negative logarithm) needs to be supplemented by factors M and N where M is the number of semiotic agents (cf. archers) that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen (cf. arrows). 

    Moreover, we define the logarithm to the base 2 of M·N· ϕS(T)·P(T|H) as the context dependent specified complexity of T given H, the context being S’s context of inquiry: χ~ = –log2[M·N· ϕS(T)·P(T|H)].

    We thus define the specified complexity of T given H (minus the tilde and context sensitivity) as χ = –log2[10^120 · ϕS(T)·P(T|H)].
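
    To make the final formula concrete, here is a toy worked example (an editorial sketch; the value of ϕS(T) is simply assumed, since Dembski gives no general procedure for computing it):

    ```python
    from math import log2

    def chi(phi_s: float, p: float) -> float:
        """Dembski's specified complexity: chi = -log2(10^120 * phi_S(T) * P(T|H))."""
        return -log2(10.0 ** 120 * phi_s * p)

    # Toy case: one particular 500-bit string under a fair-coin chance hypothesis.
    p = 2.0 ** -500   # P(T|H)
    phi_s = 1e5       # assumed number of patterns at least as simply describable
    print(chi(phi_s, p))  # ~84.8; since chi > 1, this counts as a "specification"
    ```

    Notice how much work the unexplained ϕS(T) estimate does: raise it to around 10^30 and χ drops below 1, and the design inference evaporates.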

  16. Joe:

    And Allan Miller chimes in:

    Mung reckons a string of all 1s has high CSI; Joe reckons completely random sequences do (given that they are the least compressible).

    Nope, completely random sequences do not have CSI. As I said you guys ignore what I write and make stuff up.

    So when you say “Try and compress x – high CSI”, you aren’t linking CSI with compressibility? The point is that ‘random’ strings – with no pattern at all – are the least compressible. Of course they are not CSI, because they are random. But they are minimally compressible. So CSI lies in that nice middle ground between minimally compressible and non-random? It’s not so much that we ignore what you wrote, it is just incoherent.

    Mung: And I see Allan misrepresent me as well.
    High CSI? What’s that? Low CSI? What’s that?
    How much CSI makes for high CSI and how little CSI makes for low CSI?

    You tell us, sunshine! It’s your (ID’s) bloody concept! You claimed to generate it by making a string of 1s. Elect one of your number to give a coherent presentation of it on which you all agree.

  17. Allan – I wish you the best of luck with this.

    I have a theory that Joe and Mung are really quite rational and are doing this as a kind of cruel tongue-in-cheek wind up.

  18. The simple fact is you can’t do a useful probability calculation without assuming something about the history and context of the sequence.

    By themselves, all sequences are equally probable.

    What leads us to suspect that some are improbable is their usefulness. Usefulness seems to be associated with the term specification.

    There is an assumption not backed up by evidence that usefulness is unimaginably rare, and that useful sequences have no close neighbors. This is contradicted by the existence of alleles, by the work of Lenski and by the work of Thornton.

    ID therefore is assuming its conclusion. It is trying to prove that current configurations could not have been reached via stepping stones by invoking probability calculations that depend on there being no stepping stones.

  19. I have a theory that Joe and Mung are really quite rational and are doing this as a kind of cruel tongue-in-cheek wind up.

    Either they are, or I am! :)

    Interesting that, regardless of what is said about CSI, no-one pipes up and tells a fellow-IDer that they have it all wrong. Yet skeptics of this hard-to-pin-down property do nothing but misrepresent, distort, lie, equivocate, bluff and bluster … well, everyone needs a hobby!

  20. From Addendum 1: One final difference that should be pointed out regarding my past work on specification is the difference between specified complexity then and now. In the past, specified complexity, as I characterized it, was a property describing a relation between a pattern and an event delineated by that pattern. Accordingly, specified complexity either obtained or did not obtain as a certain relation between a pattern and an event. In my present treatment, specified complexity still captures this relation, but it is now not merely a property but an actual number calculated by a precise formula (i.e., χ = –log2[10^120 · ϕS(T)·P(T|H)]). This number can be negative, zero, or positive. When the number is greater than 1, it indicates that we are dealing with a specification.

    That means Dembski is treating specified complexity as a quantity, not a boolean.

     

  21. Dembski fails at the get-go:

    Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?

    a specification is that the pattern is easily described but the event it denotes is highly improbable and therefore very difficult to reproduce by chance.

    If we see objects lined up the same way and, not knowing how the pattern arose, assume a uniform probability distribution, then we conclude design. But they could just as easily be molecules in a crystal aligned in the same direction. It seems you do have to know “how they arose”.

     

  22. Dembski’s problem seems pretty obvious. He KNOWS that his Designer did it, but he has no way to demonstrate this. No mechanisms at all. So he has to work backwards, showing that his Designer MUST have done it because there are no possible alternatives. And to do this, he must eliminate a mechanism widely recognized as producing such Designs through normal operation. What he can NOT do, under any circumstances, is concede that the process visibly and inexorably producing such Designs as we watch, is even possible.

    So if he should happen to accidentally admit that a specification PRECEDES a Design, which it must do, that means knowledge of an object’s history is necessary. And he has no history.

    So where Dembski fails is BEFORE the get-go. He assumes his conclusions out of religious necessity. These foregone conclusions are not negotiable, questionable, even examinable. They are given. They are also false. So the challenge is to demonstrate that a falsehood is true because it MUST be true, lest Dembski’s faith be misguided.

    I’m sure he’s sincere in his efforts. My question is whether such resounding and repeated (and obvious) failures CAN be visible to him. So far, he has resolutely ignored all critics.     

  23. Joe: CSI is a special case of specified complexity, which would mean all CSI is SC but not all SC = CSI.

    Joe: CSI and SC are different manifestations of the same thing.

    Joe: You have to see if the quantity is there to qualify as CSI.

    Joe is apparently defining SC as a quantity, but CSI as a Boolean. That would make his first statement incoherent as he would be comparing apples and oranges, but that seems to be his understanding.

    Joe: “If it is shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information.”–CJYman

    Dembski’s definition doesn’t involve function. Presumably, the “separate functional system” would be equivalent to a semiotic agent’s description. The problem is that one can never know that a given sequence is non-compressible. A sequence may appear non-compressible (random), but have a simple description beyond our knowledge. In that situation, you would initially conclude design, but when provided the ‘key’, discover that the initial conclusion was a false positive.
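
    That false-positive scenario is easy to stage. In the hedged sketch below (an editorial illustration), the byte stream defeats zlib completely, yet its full description (the generating program plus a seed) is only a few lines:

    ```python
    import random
    import zlib

    # Looks random to a general-purpose compressor...
    random.seed(42)
    stream = bytes(random.getrandbits(8) for _ in range(100_000))

    ratio = len(zlib.compress(stream, 9)) / len(stream)
    print(f"{ratio:.1%}")  # ~100%: zlib finds nothing to squeeze, yet the
                           # 'key' (this script and its seed) is tiny
    ```

    A compressor’s failure therefore never certifies incompressibility; it only reports that this particular compressor found no pattern.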

     

  24. Joe is apparently defining SC as a quantity, but CSI as a Boolean. That would make his first statement incoherent as he would be comparing apples and oranges, but that seems to be his understanding.

    You are assuming that Joe is capable of understanding.  I remain unconvinced that he isn’t an undergraduate Markov text generator that has been allowed to run amok.

    Yes, yes, I’ll see myself to Guano….
     

  25. Zachriel: Joe is apparently defining SC as a quantity, but CSI as a Boolean.

    Joe: Nope.

    So what are the units for SC? What are the units for CSI? This statement seems to indicate that CSI is Boolean.

    Joe: “If it is shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information.” — CJYMan

    So, CJYman is the primary source for the definition of CSI? Does Dembski define CSI? If so, where? 
     

  26. Zachriel,

    Joe is apparently defining SC as a quantity, but CSI as a Boolean. That would make his first statement incoherent as he would be comparing apples and oranges, but that seems to be his understanding

    This is probably the only time I’ll ever defend a statement of Joe’s, but I think that in this instance his statement is at least coherent. Whether it comports with Dembski’s intent, I have no idea.

    Joe isn’t saying that CSI is a boolean. He’s saying that SC and CSI are commensurable quantities. An SC value below a certain threshold is not CSI, while a value of SC above that threshold is CSI.

    It’s like saying that a speed below 65 mph is not an illegal speed, while a speed above 65 is an illegal speed. They’re measured in the same units, but one speed qualifies as illegal while the other doesn’t.

  27. keiths: Joe isn’t saying that CSI is a boolean. He’s saying that SC and CSI are commensurable quantities. An SC value below a certain threshold is not CSI, while a value of SC above that threshold is CSI.

    Okay. 

    But then, he says this:

    R0bb: You said that “all CSI is SC but not all SC = CSI” and “CSI and SC are different manifestations of the same thing.” These indicate that the terms are not synonymous. Agreed?

    Joe: Disagree.

    He may mean they have the same calculation of value, but that wouldn’t make them synonymous. It would make CSI a subset of SC.

  28. Mung (quoting Dembski): It follows that the collection of nonrandom [algorithmically compressible] sequences has small probability among the totality of sequences so that observing a nonrandom sequence is reason to look for explanations other than chance.

    Joe (quoting CJYman): “If it is shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information.”–CJYman 

    Shorter Mung: Compressible, might be CSI.
    Shorter Joe: Non-compressible, required for CSI.

  29. Zachriel,

    He may mean they have the same calculation of value, but that wouldn’t make them synonymous. It would make CSI a subset of SC.

    Yep. But at least he got one statement right. That’s progress.

  30. Joe: when cjyman said:

    “If it is shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information,” it does not mean that every instance of CSI has to be like that. He is saying that if that is what you have, then you have CSI.

    Except that you offered it as a definition. This is easy to resolve, though. Just provide a clear and unambiguous definition of CSI. 

  31. gpuccio shows how to calculate dFSCI in practice. 

    In the end, I believe we can safely affirm dFSCI for the works of Shakespeare anyway.

    Finally, a definitive recipe we’ve all been waiting for!

  32. He makes a good point about the relevance of compressibility to biological strings, though then rather blatantly ‘smuggles in’ a function relating to the existence of the transcription/translation system. The total dFSCI is that of the string itself plus the ‘decoding’ system. So there can never be translated strings that do not have dFSCI, whatever they contain, since they go through the ribosome/mRNA/tRNA/aaRS system! Any individual string’s complexity and functionality is utterly dwarfed by that of the ‘system’.

    By analogy, all of Lizzie’s organisms have high dFSCI because they run inside a complex, designed computer, and are handled by a program that is more complex than they are … which I know is how many ID-ers like to play it – you can’t look at evolution within the system without first accounting for the system – but it does seem decidedly fishy to me. You can’t investigate accounting without explaining accountants?

  33. Mung: Do probability calculations enter into the determination of “Shannon information”?

    Yes. The probability of each event can be used to calculate entropy. For instance, the toss of a weighted coin has less than one bit of entropy. 

    Mung: Do probability calculations enter into the determination of “Shannon information”?

    Yes. It’s part of the basic definition; entropy is the negative of the sum of the products of the probability of each event and the log of that probability. The calculated entropy, then, depends on our knowledge of the events. So if we know nothing, then each symbol may be considered to have equal probability. If we know that the message represents a text in English, then, because some letters are more common than others, the entropy is less than it would be otherwise. Indeed, human observers can often guess the content of messages even when more than half the letters are missing. (See Sajak & White: W h _ _ l  o f  F _ _ t _ n _.) 
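
    That definition translates directly into code; a minimal sketch:

    ```python
    from math import log2

    def entropy_bits(probs) -> float:
        """Shannon entropy: H = -sum of p * log2(p) over events with p > 0, in bits."""
        return -sum(p * log2(p) for p in probs if p > 0)

    print(entropy_bits([0.5, 0.5]))  # fair coin: 1.0 bit per toss
    print(entropy_bits([0.9, 0.1]))  # weighted coin: ~0.47 bits, less than one
    ```

    Feeding in English unigram letter frequencies instead of a uniform 1/26 gives roughly 4.2 bits per letter rather than log2(26) ≈ 4.7, which is the point about knowledge lowering the calculated entropy.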

    R0bb: Yes, algorithmic compression. What kind of compression did you think I was talking about? Hydraulic?

    Heh.

    gpuccio: First of all, I am aware that Dembski considers compressibility as a form of specification. He may be right, but very simply I have never considered it as a form of functional specification in my discussions about biology.

    Well, that establishes that there are conflicting definitions. 

    Mung:  What definition was Lizzie using, …

    We only gave the thread a cursory view, but from what we did read, it was presumably Dembski’s definition; a long sequence which has a simple description (the function), but is unlikely due to chance alone (a uniform probability distribution). 

    Mung:  and where were you in that thread? 
    http://theskepticalzone.com/wp/?p=576

    Turns out that the planets keep moving whether we want them to or not.

    Mung: Do you think Shannon information can just be read off any old sequence? How much “Shannon information” is in the following sequence: 00101

    In order to calculate the “amount of information” in that sequence in Shannon terms, what did you either know or assume?

    From your question, most people would assume you are referring to five independent binary digits, but if the next symbol were “9”, then that assumption would be shown to be in error. They might then assume they are decimal digits, but if the next symbol were “a”, then that assumption would also be shown to be in error. 
     
    In an engineering context, it is generally assumed that bits are independent, and they are then subject to lossless compression after the fact. 
     

  34. gpuccio: As I have stated many times, it is not enough to compute the maximum functional information in that string (the ratio of the target space to the search space). We also have to consider if any known necessity mechanism can explain what we observe, completely or in part.

    So your answer to Dembski’s rhetorical question, “Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?” is no. 

  35. Joe: I offered what cjyman said about CSI to support my claim pertaining to CSI being not algorithmically compressible.

    That seems to contradict what you just said. Is CSI necessarily non-compressible? 

    Also, we can never know that a given sequence is non-compressible. A sequence may appear non-compressible (random), but have a simple description beyond our knowledge. In that situation, you would initially conclude design, but when provided the ‘key’, discover that the initial conclusion was a false positive.

    Joe: Please reference Dembski stating that, because I have provided CSI that is not that.

    χ = –log2[10^120 · ϕS(T)·P(T|H)]
    Dembski, Specification: The Pattern That Signifies Intelligence, 2005.

    Joe: Nope, that does NOT follow from what he said.

    Gpuccio said, “We also have to consider if any known necessity mechanism can explain what we observe, completely or in part.” That clearly indicates we have to consider the mechanism of how a pattern arose. 

  36. Joe:

    Also the fact that when I plug in the values I say are correct, the equations [Dembski and Marks’ equations] actually work, tells me I am right.

    Joe, it would be very interesting for you to “show your working” as it were. Could you show what your inputs and outputs were? I’d be very interested if you were able to substantiate this very clear claim (of plugging in the numbers) with some, oh I don’t know, evidence. http://www.uncommondescent.com/intelligent-design/conservation-of-information-made-simple-at-env/#comment-436151

  37. GP has clarified my point, and says that I have misinterpreted this:

    So, if we want to apply that to the works of Shakespeare, you can reduce the functional complexity of the original observed string (the works of S themselves) by calculating the total complexity of:

    a) The compressed string that you obtained

    +

    b) The software that can expand it into the original observed string.

    as implying that he would always take the ‘software’ into account, including translation/transcription. I am happy to correct that.

    nonetheless,

    But, if we are debating OOL, then the whole complexity of the minimal known reproducing beings should be taken into consideration.

    I doubt that this would be particularly instructive – or rather, very conclusive. It is presently difficult-to-impossible to distinguish which features of known replicating entities are primary, and which secondary, for events that took place prior to LUCA. We only know what is universal in her descendants. 

    The minimal known reproducing beings are protein-based, and tend to be parasitic. Even ignoring that parasitism, we simply have no handle on what a minimal reproducer would have consisted of. I certainly strongly doubt that protein is primary, as I have argued (with equally little effect!) elsewhere.

  38. Zachriel: Well, that establishes that there are conflicting definitions.

    gpuccio: And so? That is good evidence of intellectual vitality and non dogmatism in the ID field!

    It means people can be discussing CSI, but referring to different things entirely.

     

  39. I would like to point out that having two or more dogmas does not imply that any of them are correct. There are hundreds of religions, but this does not guarantee that any are true.

    What makes a position dogmatic is not its correctness or incorrectness, but its imperviousness to evidence. In the case of the various versions of CSI, it is the adherence to the fallacy of assuming the conclusion.

    CSI, in its ID garb, cannot exist if there is a natural process that can generate the structure in question. ID advocates argue that some structures could not have arisen naturally because they are too complex (which implies CSI).

    If you wish to break out of this fallacy you must demonstrate the inability of evolution to operate continually.

    I find it amusing that Wallace is preferred over Darwin by ID advocates, but the title of Wallace’s original paper is On the Tendency of Varieties to Depart Indefinitely From the Original Type.

    Where is the evidence against this indefinite tendency?

  40. Joe

    It adds the correct way of looking at organisms- just as archaeology adds the correct way of looking at a group of rocks, ie Stonehenge.

    I thought ID was only about detecting design? So now that you are looking at organisms the correct way what additional information can you provide? Nothing? Thought so.

    Or perhaps OMTWO can provide a testable hypothesis for blind and undirected chemical processes doingit.

    What’s that got to do with ID? And I ask once again, what is the “it” in doingit?

    Doing what YOU claim they can- try any bacterial flagellum.

    What is it that I’m claiming they can do? Is pretending that you are responding to a claim I’m making really the best way you can think of to try and redirect attention away from the fact that you have not and cannot respond to the actual questions being asked?

    Only intellectual cowards equivocate and you do it continually. ID is NOT anti-evolution.

    Huh? What’s that got to do with anything? What am I equivocating about, specifically?

    Saw it and read it. Darwin didn’t know anything and he argued against a strawman. IOW he was intellectually dishonest.

    If that’s the case then why are your writings not more famous than his?

    But thanks for proving that you are not only a waste of time but also a waste of skin…

    Once again I ask you for that pseudocode. I believe I can build a program that will output what would commonly be known as CSI. I’m asking you how I could go about checking that output? How can I measure that “CSI”?

  41. http://theskepticalzone.com/wp/?p=1331&cpage=1#comment-16494

    I think that you can logically demonstrate that any objective measure of CSI would enable a GA to produce CSI. All you have to do is use the CSI definition to compare the quantity of CSI in child sequences and favor the better sequences in reproduction.

    I might point out that this is what is accomplished in natural selection, but that would be mean.
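
    The demonstration can be sketched in a few lines of Python. The scoring function below is a placeholder of my own (no objective CSI measure having been supplied); swap in any computable definition and the argument goes through unchanged:

    ```python
    import random

    def csi_score(genome: str) -> float:
        """Placeholder for any objective, computable CSI measure.
        Here: fraction of 'A' characters, purely for illustration."""
        return genome.count("A") / len(genome)

    def evolve(pop_size=100, length=100, generations=200, mu=0.01):
        alphabet = "ACGT"
        pop = ["".join(random.choice(alphabet) for _ in range(length))
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=csi_score, reverse=True)   # favor higher "CSI"
            parents = pop[: pop_size // 2]
            pop = ["".join(random.choice(alphabet) if random.random() < mu else c
                           for c in random.choice(parents))
                   for _ in range(pop_size)]
        return max(csi_score(g) for g in pop)

    random.seed(0)
    print(evolve())  # climbs far above the ~0.25 expected by chance
    ```

    Whatever the measure, as long as it is computable it can serve as a fitness function, so “no GA can produce CSI” and “CSI is objectively measurable” cannot both be true.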

  42. It is becoming obvious that Dembski’s “true stroke of genius” in “defining” CSI was in his taking logarithms to base 2 of ratios of the cardinalities of sets.

    Watching the UD crowd grapple with this “advanced math” suggests that log base 2 is the ultimate obfuscator in any arguments with “Darwinists.” To the UD crowd, log base 2 is so advanced that no “Darwinist” can possibly understand the “proofs” of intelligent design.

    It is quite amazing that people who never learned high school level science can feel so brilliant at log base 2 math to “prove” things about design in science.

  43. I might point out that this is what is accomplished in natural selection, but that would be mean.

    Reality is cruel to the aggressively, willfully ignorant.  You are but its instrument.  ;-)

  44. you count the number of sequences that meet the specification and compare it to the number of possible sequences.

    To see if islands of function are isolated, you count the number of sequences that are beneficial or neutral and which are within one mutational hop of the current configuration. Mutation type TBA.

    So far the count has never been less than one, even in Axe’s PhD experiment.

    So the probability that neutral or beneficial change will be found from any existing position is one. That is the best approximation from the available evidence.

  45. Mung

    Let us know when you find the CSI calculation.

    Thanks for those links, but I’ve already built one of those. What I don’t have is a “CSI calculation”. That’s for your side to provide. You really don’t get that? A few comments down gpuccio says:

    They probably know all too well that CSI is deadly to their beliefs, and would argue any possible thing to evade the concept.

    So on the one hand we have you seemingly telling me that CSI cannot be calculated and on the other hand gpuccio is saying it’s apparently deadly to my “beliefs”, whatever they might be.
    I’m not arguing any possible thing to evade the concept.
    You and Joe are.
    Can’t you see that?
    I’m just asking you how I can determine what the value of CSI is for a given string. Yet you turn that around onto me? Good luck with that.

    Shakespeare has CSI then? Show your working…..

  46. Mung,

    I examined Elizabeth’s program closely and I don’t see where she even attempts to calculate CSI in it. So how do you suppose she knows she generated CSI?

    Why don’t you run the output through the Explanatory Filter?

    Does “less than one” indicate “less” CSI? How much less?

    Good question. That’s exactly what you need to answer. Ask Joe!

    So in what sense is any one of them more or less compressible? Do they all then have the exact same CSI?

    It seems to depend on who you ask. If you’ve been reading this very thread you’ll see that.

  47. gpuccio: Or to slightly different aspects of the same thing. Or to different definitions of similar concepts. 

    What is your definition?  
     

  48. gpuccio: Biological strings are scarcely compressible. 

    That is not quite correct. While standard compression routines, such as those suitable for compressing text or pictures, are not effective, it is still possible to compress biological sequences. 
     
    Adjeroh & Nan, On Compressibility of Protein Sequences, Proceedings of the Data Compression Conference 2006. 

    Also, http://data-compression.info/Corpora/ProteinCorpus/ 

    This relates to our conversation with Joe earlier. The problem with defining CSI in terms of non-compressibility is that it can lead to false positives. Indeed, the more ignorant one is, the more false positives.

  49. I see gpuccio has returned to his standard argument. RM+NS fails because it can’t create dFSCI, and dFSCI is that which RM+NS can’t create because there’s just too much of it.

  50. Isn’t gpuccio’s argument that dFCSI can’t be put into the genome by natural selection because an organism capable of replication has to be there to have natural selection, and such an organism already has dFCSI, so that is the source of the dFCSI?

    That argument implies that if an organism (having its initial dFCSI) evolves for a while by RM+NS and makes adaptation X which has enough extra SI to constitute dFCSI, that the dFCSI comes from the initial SI. Then if it continues and also achieves adaptation Y which also has enough extra SI to constitute dFCSI, that too comes from that initial complement of SI.

    And so on: the initial SI keeps getting converted to make the dFCSI of each successive adaptation. This does not make sense to me: it is “the gift that keeps on giving”, too much so. Perhaps I misunderstand, but it seemed that in the previous discussion of gpuccio’s argument, whenever the genome ended up containing dFCSI because of a particular adaptation, gpuccio kept saying that that dFCSI was already there since the organism was capable of replication.

  51. When pinned down, gpuccio always reverts back to the argument that protein domains are irreducible. He bolsters that by arguing there is a level beyond which they appear to have no cousin sequences and therefore must have been poofed into existence in their current form.

  52. So if the “information” is already there – who cares what kind of information it is called; it’s too confusing to keep track of all the sectarian versions of information – the question that no ID/creationist has ever answered is, “Just how does this information push atoms and molecules around?”

    “If this information doesn’t push atoms and molecules around, then what is the mechanism by which this information gets to those atoms and molecules so that they “know” where to go?” Does information push the laws of physics and chemistry around? If so, how; what is the mechanism?

    Why can’t ID/creationists answer these questions? Where along the chain of complexity does information kick in and take over from the laws of physics and chemistry? And which is it: semiotics or information?

  53. Joe F:

    Isn’t gpuccio’s argument that dFCSI can’t be put into the genome by natural selection because an organism capable of replication has to be there to have natural selection, and such an organism already has dFCSI, so that is the source of the dFCSI?

    In comments to me, having made a similar interpretation, GP denied that this is his argument. Once the replication system or translation or whatever is in place, we take that dFSCI-to-date as a given, and apply the metric to the ‘extra’ dFSCI within a particular Time Span.

  54. Mung, 
    To clarify.

    KF claims that CSI is generated billions of times a day. Every message on a message board has a value for CSI.

    When I (or Lizzie) claim that we can write a program that can output CSI, the onus is not on us to define what CSI is. The onus is on you to test the output from the program and determine the level of CSI present, if any. After all, I might be just making it all up!

    That might seem strange to you, but consider this: If ID claims to detect design via CSI then it’s irrelevant whether I believe my program can output CSI or not, as you can simply test its output and determine if it does in fact produce CSI or not. 

    So for you to say, as you seem to have by linking to the OP where Lizzie’s CSI generator was described, that “CSI is real, look, Lizzie claims to generate it and if she’s generating it she must know the definition” is a pathetic attempt at misdirection.

    If you can really determine design from CSI then you don’t need any further information than the output of the program. 

    If KF can say that every message on the internet is an example of intelligent design and has a measurable value for CSI then you can’t stop at messages you don’t know the origin of and say “well, just no way to tell” as that shows that you only indicate CSI is present when you already know something is designed. 

    “This string of letters and punctuation makes sense and therefore is unlikely to have come about by chance” is one thing. Yet what if the message is in a language you don’t understand? No CSI? It might just be random for all you know, yet you claim to be able to detect design. 

    So detect it already! 

  55. gpuccio: Biological strings are scarcely compressible. 

    “Scarcely” is probably too strong, but it was just an aside, and probably not relevant to the main point.

    gpuccio: As I commented about Hamlet, you can certainly compress the text somewhat, but you would still need the compressed sequence plus the decompressing algorithm to get Hamlet. 

    Sure, but the compression routine can usually be made proportionally smaller by extending the text, rendering the size of the compression routine negligible. That’s rarely an issue for a text the size of Hamlet, but if so, then try The Oxford Shakespeare: The Complete Works
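
    The arithmetic behind that amortization, with invented numbers for illustration: suppose the text compresses to 40% of its size and the decompression program occupies a fixed 50 KB.

    ```python
    # Total description = compressed text + fixed-size decompressor.
    c, D = 0.40, 50_000  # assumed compression ratio and decompressor size (bytes)
    for n in (10_000, 1_000_000, 100_000_000):  # original text sizes in bytes
        print(f"{n:>11,} bytes: {c + D / n:.3f}x the original")
    # 10 KB:  5.400x (the decompressor dominates)
    # 1 MB:   0.450x
    # 100 MB: 0.400x (the decompressor is negligible)
    ```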
     

  56. Yet confronted by Elizabeth’s GA program, gpuccio was not willing to acknowledge that the amount of SI increased in that program. gpuccio’s argument was that the dFCSI was already there because Elizabeth had made the program’s organisms able to reproduce.

    That’s when we all started arguing about intelligently designed computer simulations of unintelligent natural processes.

    This seems to me to be a big contradiction. When an organism has dFCSI and can reproduce, gpuccio says that we can count the “extra” SI put into the genome by an adaptation. But when the genomes are in a GA, gpuccio refused to count the extra SI that was put into those genomes. There all the SI was said to be coming from the original SI put in when the GA was set up.

    Again, am I misunderstanding gpuccio’s argument? How? 

  57. gpuccio: dFSCI is the form of CSI that I explicitly define. The definition is more or less as follows:

    We’ll number your points for reference. 

    gpuccio: #1) Any material object whose arrangement is such that a string of digital values can be read in it according to some code, and for which string of values a conscious observer can objectively define a function, objectively specifying a method to evaluate its presence or absence in any digital string of information, is said to be functionally specified (for that explicit function).

    It’s not important, but what is the function of Hamlet? 

    gpuccio: #2) The complexity (in bits) of the target space (the set of digital strings of the same or similar length that can effectively convey that function according to the definition), divided by the complexity in bits of the search space (the total number of strings of that length) is said to be the functional complexity of that string for that function.

    Again, just as an aside, how many permutations of words have the same function as Hamlet? Keep in mind the many, many versions of Hamlet. Seems intractable, especially given the lack of a clear functional specification. 
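
    For what it’s worth, once the two spaces are counted, the ratio in #2 reduces to a one-line calculation (expressed in bits via a negative log2, which seems to be the intent). The numbers below are invented purely for illustration:

    ```python
    from math import log2

    def functional_complexity_bits(target_size: float, search_size: float) -> float:
        """-log2(|target space| / |search space|), per gpuccio's ratio."""
        return -log2(target_size / search_size)

    # Invented example: a 100-residue protein (search space 20^100), with an
    # assumed 1e40 sequences able to perform the defined function.
    print(functional_complexity_bits(1e40, 20.0 ** 100))  # ~299 bits
    ```

    On these made-up numbers the string would clear the 150-bit threshold in #3; the entire burden rests on estimating the target space, which is the intractable part.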

    gpuccio: #3) Any string that exhibits functional complexity higher than some conventional threshold, that can be defined according to the system we are considering (500 bits is an UPB; 150 bits is, IMO, a reliable Biological Probability Bound, for reasons that I have discussed) is said to exhibit dFSCI.

    Let’s grant that Hamlet has high functional complexity, per your definition. 

    gpuccio: #4) It is required also that no deterministic explanation for that string is known.

    So if we are ignorant, we are more likely to judge it to be design. This is nothing but a gap argument. 

    gpuccio: #5) Any object whose origin is known that exhibits dFSCI is designed (without exception).

    Of course. You just defined dFSCI in #4 as something with no known “deterministic explanation”. How could it be otherwise? 

    If we didn’t know the origin of nylonase, for instance, you would conclude design. Discovering its plausible evolutionary origin, you would then realize it was a false positive. But you could still say #5, because we would just shrink the universe of dFSCI to accommodate our findings. 
     
    Frankly, you don’t even need the math, just #4 & #5: 

    Any object whose origin is known that exhibits dFSCI is designed = 
    Any object whose origin is known that exhibits (no known deterministic origin) is designed =

    If we already know the origin and that origin is not deterministic, then design. 

  58. Joe Felsenstein

    Again, am I misunderstanding gpuccio’s argument? How?

    His position shifts all the time. If you wait long enough, he will eventually agree with you on technical things. Not on the big picture, which is independent of technical arguments.

  59. gpuccio:

    It has all to do with dFSCI. Protein domains:
    a) Have high functional complexity (therefore cannot arise in a purely random system)
    AND
    b) Are irreducible to simpler functional naturally selectable intermediates, and therefore cannot be explained by the only available necessity mechanism, NS.

    Remo Rohs and Gorka Lasso:

    This paper provides new insights into the evolution of the symmetry of protein domains and into protein engineering. The authors show that the widely adopted domain duplication and divergence model is not the only source for domain evolution. A new evolutionary model is described, according to which a particular subdomain can lead to the assembly of a new symmetry-based protein domain by combining several repeats of the same subdomain. The latter implies that modular evolution is an ongoing process.

    Unlike Joe, I will not read an abstract and argue that the issue is settled. I will, however, argue that your claim of irreducibility is probably wrong and based entirely on the absence of a “pathetic level of detail” in the evolutionary history of sequences. This is probably true of all claims of irreducibility.

    gpuccio:

    The simple explanation for the nested hierarchy is that it is easier for the designer to modify what already exists than to redo everything from scratch. Is that so difficult to understand?

    That seems to have two unrelated problems. It violates the ID code of not discussing the motives and attributes of the Designer, and it makes no sense. An omniscient being, or one that can assemble long strings of functional DNA, anticipating its function within a changing ecosystem, would not have the kind of limitations characteristic of mere mortal designers. At any rate it makes no sense to assign attributes to invisible imaginary magicians. Except as an ad hoc rationalization.

  60. That kairosfocus character lays out his “definitive” argument over at UD; and it demonstrates why ID/creationism cannot even explain the existence of galaxies, stars, the periodic table, compounds, liquids, and solids.

    This is a pretty good example of why it would take far more than 6000 words just to deconstruct all the ID/creationist misconceptions about basic chemistry, physics, and biology. Then one would have to start all over again to try to bring them up to speed on all the science they stopped learning since middle school.

    In a very rare inkling of insight, an ID/creationist, Sal Cordova, recognized something was wrong with Granville Sewell’s paper on the second law of thermodynamics. He recognized this just based on his classical understandings of thermodynamics alone.

    When Sal tried to take that insight directly to the people over at UD, he was angrily rebuffed by KF and by Sewell, as well as by others. And how was Sal “proven wrong?” The crowd over at UD found a textbook on statistical mechanics, written back in the 1980s, that attempted to apply an “information theory perspective” to statistical mechanics.

    “Information” is the great, mysterious concept of ID/creationism on which all ID/creationist arguments appear to hinge. It has to be “information” because “information” is connected with “intelligence.” “Information” overcomes all. It overcomes uniform random sampling of huge sample spaces of inert things that have to assemble into complex structures that are specified ahead of time. Therefore, intelligent design.

    To mal-appropriate a line from “The Music Man”: “INFORMATION! With a capital I, and that rhymes with pi, and that stands for Intelligence!”

  61. Mung: If genomes were just random assemblages, what sort of objective nested hierarchy would that result in?

    With random sequences of significant length, widely divergent hierarchies would typically have similar, albeit weak, degrees of fit.

    If, however, you were to start with a single sequence of significant length, and subject the sequence to replication with variation, and assuming reasonable mutation rates, then it would form an objective fit to a single nested hierarchy, and you would be able to reconstruct the lines of descent with reasonable accuracy.
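
    That claim is simple to simulate; a rough editorial sketch with arbitrary parameters:

    ```python
    import random

    ALPHABET = "ACGT"

    def mutate(seq: str, mu: float = 0.02) -> str:
        """Copy a sequence with per-site mutation probability mu."""
        return "".join(random.choice(ALPHABET) if random.random() < mu else c
                       for c in seq)

    def hamming(a: str, b: str) -> int:
        return sum(x != y for x, y in zip(a, b))

    random.seed(1)
    root = "".join(random.choice(ALPHABET) for _ in range(1_000))

    # Two lineages split from the root, then each splits again.
    left, right = mutate(root), mutate(root)
    a, b = mutate(left), mutate(left)     # sisters within the left clade
    c, d = mutate(right), mutate(right)   # sisters within the right clade

    print("a-b (within): ", hamming(a, b))   # small
    print("c-d (within): ", hamming(c, d))   # small
    print("a-c (between):", hamming(a, c))   # roughly twice as large
    ```

    Within-clade distances come out systematically smaller than between-clade distances, which is the nested, tree-like signal that descent with variation leaves and that random assemblages lack.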

  62. Perhaps someone who is allowed to post there should ask him from where he got the list of configurations that can occur without magical intervention, and how that list was assembled.

    Without such a list you cannot separate configurations into designed and non-designed.

    Perhaps the list is stored and indexed in the Library of Babel.

  63. gpuccio: Points 2-4 are intended to explain how dFSCI is defined and measured.

    That’s right. #2-4 are the definition. As long as the definition is self-consistent and not conflated with other definitions, then it is what it is. 

    gpuccio: Point 5 is a completely different thing. 

    That’s right. #5 is a conclusion. 

    Per #4, anything with dFSCI has no known deterministic explanation; therefore if something with dFSCI has a known explanation, that explanation can’t be deterministic—by definition. #2 and #3 are superfluous to the vacuous tautology. They’re just window dressing. 

  64. gpuccio: It is possible to describe descriptive information (like Hamlet) in term of an explicit function, such as: a text that can convey all the information about the story, the characters, the meaning, and if we want even the emotion and the beauty.

    Sure. That’s easily put into quantitative terms.  

  65. kairosfocus: Take a protein. How much can its string vary without disastrous loss of function? If not a lot, then it is specifically functional. (In short, we are in zones T when we have relatively narrow sets of possible configs in a much larger space, that will work.) 

    Sure, let’s take a protein, say a random sequence that weakly binds to ATP. The specified complexity would be low, as these proteins are relatively common in sequence space. Now, let’s replicate and mutagenize the sequences, and select those with the most binding function. The specified complexity has increased. After repeated generations, CSI. 

  66. Joe: Umm replication is STILL the thing you need to explain. By just using replication you expose your desperation.

    We’re just concerned with defining and measuring CSI at this point.
     

  67. Joe: How are you defining nested hierarchy? 

    The usual way, as a hierarchical ordering of nested sets. 

    Joe: It is already a given that ancestor-descendant relationships form non-nested hierarchies.

    Our comment referred to the pattern of offspring. 

    Joe: List your criteria for each level and each set, please.

    That would depend on the specific history, of course. It’s something very easy to verify for anyone interested. 

    Zachriel: If you were to start with a single sequence of significant length, and subject the sequence to replication with variation, and assuming reasonable mutation rates, then it would form an objective fit to a single nested hierarchy, and you would be able to reconstruct the lines of descent with reasonable accuracy.
     

  68. OMTWO: “When I (or Lizzie) claim that we can write a program that can output CSI, the onus is not on us to define what CSI is. The onus is on you to test the output from the program and determine the level of CSI present, if any. After all, I might be just making it all up!”

    Mung: “LOL! And you probably ARE making it up.”

    Probably? :)

    Mung, are you saying you don’t know for sure?

    Mung, are you saying you don’t know how to test whether CSI is present?


  69. Mung: And the connection to biological reality is?

    We’re discussing definitions of CSI, which is supposedly a signature of design. As such, we need a clear metric. Gpuccio provided a definition of what he calls “dFSCI”, which, unfortunately, includes design in its definition, so can’t be used to argue for design. 

    Eric Anderson: Indeed Shannon “information” is not even true information in any meaningful sense of the word; certainly not in the CSI sense we are interested in for technology, communications, bioinformatics, etc. 

    Um, Shannon Information is the theoretical backbone of information technology and communication systems. 

  70. This is not my area of expertise, but Shannon information seems tied to measures of bandwidth, and the various versions of CSI seem intended to measure meaning. I don’t see much prospect for a measure of meaning.

  71. Joe: With your “definition” you need to define hierarchical ordering and nested sets.

    A nested set is one which is a subset of another. More generally, a nested set model is one where any two sets are either disjoint or one is a subset of the other. Hierarchy refers to whether sets are contained or containing.

    This is off-topic for this thread. If someone wanted to start a new thread, we could continue this discussion there. Not sure it would be productive, though.

  72. It’s certainly reasonable to say that Shannon Information isn’t what they mean when discussing ID, but it isn’t reasonable to say it’s not meaningful in terms of technology and communications.

  73. Joe: “Thank you Robb,

    So you did NOT compress the text, but a digital representation of the text. Got it. Not the same thing and your bait-n-switch is more than a tad dishonest.”

    Where does this leave kairosfocus and his example of ASCII characters?

    If a digital representation of the characters we type is NOT CSI, then whatever kairosfocus sees typed on his computer screen is NOT CSI.

    Where does this leave gpuccio’s argument about dFSCI?

    By using a digital representation of FSCI, is it still an example of CSI or is gpuccio being more than a tad dishonest?


  74. gpuccio,

    This is in response to your comment 320 on the UD thread.

    To Zachriel (at TSZ):

    That’s right. #5 is a conclusion.

    Are you kidding?

    #5 is not a conclusion. It is an independent empirical observation.

    You keep using that word. I do not think it means what you think it means. — Inigo Montoya

    You have defined dFSCI as follows:

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    You have also stated that the mechanisms of the modern synthesis are a “deterministic explanation” under your definitions.

    You therefore cannot claim that your #5 is an empirical observation when there is no possible empirical observation that could lead to a conclusion that dFSCI is present in an artifact known to have evolved. The lack of dFSCI is a direct consequence of your definition, nothing else.

    A more interesting question is whether or not evolution can generate functional complexity, by your definition, in excess of 150 bits. If it can, as numerous examples in these threads suggest, then whether you call it dFSCI or not is immaterial — evolution will have been shown to be a sufficient explanation for our actual empirical observations.
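
    As a rough sketch of the arithmetic behind that 150-bit threshold (my own illustration, using the Hazen/Szostak notion of functional information, i.e. −log2 of the fraction of sequences that perform the function; the numbers below are assumptions, not measurements):

    #include <stdio.h>
    #include <math.h>

    /* Illustrative only: functional information in the Hazen/Szostak sense
       is -log2(fraction of sequences that perform the function). The
       fractions used here are assumed inputs, not measured values. */
    static double functional_bits(double functional_fraction)
    {
        return -log2(functional_fraction);
    }

    int main(void)
    {
        /* If 1 in 2^150 sequences is functional, that is exactly 150 bits. */
        printf("1 in 2^150 functional: %.1f bits\n",
               functional_bits(pow(2.0, -150.0)));

        /* A fully specified stretch of 35 residues (20 amino acids per
           position) carries about 35 * log2(20), roughly 151 bits. */
        printf("35 fixed residues: %.1f bits\n", 35.0 * log2(20.0));
        return 0;
    }

    On that accounting, a fully constrained stretch of only about 35 residues already clears 150 bits, which is why the examples in these threads are directly relevant.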

  75. gpuccio: #4) It is required also that no deterministic explanation for that string is known.

    gpuccio: #5) Any object whose origin is known that exhibits dFSCI is designed (without exception).

    gpuccio: #5 is not a conclusion. It is an independent empirical observation.

    No, because you have defined dFSCI as something without a known deterministic explanation, hence any object with dFSCI whose origin is known can’t have a deterministic explanation — by definition.

    Try removing that clause of the definition and see what you are left with.

    gpuccio: You are obviously referring to the shameful Szostak paper.

    Shameful? Seriously?!

    gpuccio: The only algorithm present in biological contexts is NS.

    And natural selection can often select for very specific functions, just like in Szostak’s experiment. A simple example is the evolution of antibiotic resistance which is often seen in natural settings.

    gpuccio: It is. That’s how it can be done.

    a) We define the function as the ability to convey the full set of meanings in the original text (we can refer to a standard version, for objectivity).

    b) We prepare 1000 detailed questions about various parts of the text.

    c) We define the following procedure to measure our function: the function will be considered as present if, and only if, an independent observer, given the text, is able to answer correctly all the questions.

    How many Hamlets are possible? The set is as broad as human imagination. Only as a thought-experiment is it possible to count them.

  76. In reading through this, particularly Mung’s question about the amount of Shannon information in 00101, it struck me that some of the folks at UD are sneaking in an assumption of context as the specification. In other words, there appears to be a Post Hoc Ergo Propter Hoc assumption in the assignment of CSI, as in – “Because DNA contains information about how an organism should develop, that’s what it’s supposed to do.” The “suppose” then is taken as the context/specification/intent. There does not appear to be any awareness that many biological functions can be adapted for a variety of conditions/contexts.

  77. gpuccio:

    A smart designer, I would say. Maybe not omniscient or omnipotent, but certainly smart.

    would not have the kind of limitations characteristic of mere mortal designers

    But he could certainly have other kinds of limitations.

    As long as you realize that you have invented an imaginary entity having exactly the attributes needed to fulfill your fantasy.

    In detective fiction, say in some serious work of literature like Scooby Doo, your designer would be a ghost or evil spirit.

  78. It has become abundantly clear that the people over at UD have absolutely no clue about what any kind of information is. And they certainly don’t know anything about Shannon “entropy,” Shannon “information,” Shannon “uncertainty,” or any of the different names they call it. They think taking a logarithm to base 2 endows a calculation with “information” even though they can’t tell anyone what this “information” is about, what it does, or what the mechanism is for how it pushes atoms and molecules around.

    Not one of those characters over at UD has any idea what goes on in the world of signal and image processing. They have never done any signal and image processing; and they wouldn’t have a clue about how signals and images are processed. They are just making stuff up as they go; as is easily discernable by the fact that they have been mud wrestling and word-gaming for something like 50 years now without converging on anything. It has been all smoke and mirrors and primitive grunting for the entire 50 years.

    So all anyone is going to get from those characters over at UD is immature sneering, name-calling, mooning, the finger, feces hurling, taunting, and repeated mimicking of any and all critiques of ID/creationism and of the people who offer those critiques.

    The equation for Shannon entropy is H = −∑_i p_i log2(p_i),

    where p_i is the probability of the occurrence of the ith event, and the sum is over all these events. It is a very general equation that pops up frequently in analyses of the probabilities of ensembles of events.

    We went over the behavior of this equation on an earlier thread. As was pointed out there, this is (minus) the average of the logarithms of the probabilities. And because all the probabilities have to add up to 1, this average reaches its maximum when all those probabilities are equal. Thus, all this formula does is take its maximum value when all events are equally probable.
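
    Here is a quick numerical check of that claim, as a sketch of my own; nothing here is specific to ID:

    #include <stdio.h>
    #include <math.h>

    /* Shannon entropy H = -sum_i p_i * log2(p_i); terms with p_i = 0
       contribute nothing and are skipped. */
    static double shannon_entropy(const double *p, int n)
    {
        double h = 0.0;
        for (int i = 0; i < n; i++)
            if (p[i] > 0.0)
                h -= p[i] * log2(p[i]);
        return h;
    }

    int main(void)
    {
        double uniform[4] = { 0.25, 0.25, 0.25, 0.25 }; /* all equally probable */
        double skewed[4]  = { 0.97, 0.01, 0.01, 0.01 }; /* one dominant event   */

        printf("uniform: %.3f bits\n", shannon_entropy(uniform, 4)); /* 2.000  */
        printf("skewed:  %.3f bits\n", shannon_entropy(skewed, 4));  /* ~0.242 */
        return 0;
    }

    The uniform distribution over four events gives the maximum, log2(4) = 2 bits; making one event dominant drives the value down, exactly as described.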

    There is nothing weird going on here; there is no “magic information” that is being conveyed other than the fact that this calculation becomes smaller as some events become more probable than others in the ensemble of events.

    In fact, one doesn’t even have to use a logarithm; simply looking at the products of those probabilities gets a similar result. The logarithm is both a convenience and, in certain contexts such as statistical mechanics, it establishes a relationship to other variables that describe the system under study. It depends on the context in which the equation is used.

    Ask an ID/creationist what that means and he can’t tell you. He can’t tell you where the knowledge about those probabilities comes from. He can’t tell you how this equation is used in signal and image processing. He can’t tell you how it is used in statistical mechanics. He simply doesn’t have a clue! To an ID/creationist, this is just a big, bamboozling, advanced-math equation that somebody called “entropy” or “information” or “uncertainty;” but he can never tell you what it means or how it makes ID/creationism a science.

    If you try to explain it to an ID/creationist, all you will get is feces hurling in return. It has been this way for decades; it never changes.

  79. But of course the ID folks don’t really need to understand the derivation or application of any of this, because for them “information” is some ineffable something-or-other that can only be created by their god, so it’s a ritual incantation, a shibboleth that identifies them as followers of the One True Faith.

    The application is actually quite straightforward: Decide whether something requires their god, dub it “information”, and conclude that because it’s information, it must have been created by their god. What else is there to know?

  80. Flint said: But of course the ID folks don’t really need to understand the derivation or application of any of this, because for them “information” is some ineffable something-or-other that can only be created by their god, so it’s a ritual incantation, a shibboleth that identifies them as followers of the One True Faith.

    Watching the churning over there at UD is a bit like watching some kind of bizarre acting routine in which the writers can’t write, the actors can’t act, the producers can’t produce, and nobody knows what is supposed to happen.

    It’s neither a tragicomedy nor a comical tragedy. It’s a thoroughly screwed-up version of the Keystone Cops or the Three Stooges being done by people with pompous egos, no sense of humor, and complete certainty that they are THE masters of all knowledge in the universe.

    It might be funny if it were a single, sick routine being done on Saturday Night Live. Instead, it plods on endlessly as it churns itself into an infinite regress of grotesque caricatures of itself that just become nauseating to watch. I’m not sure that even Monty Python could capture it. It doesn’t stay funny; it just gets sicker.

  81. gpuccio: The only algorithm present in biological contexts is NS.

    Not even. The ‘algorithm’, such as it is, is essentially the processes ‘survive’ and ‘reproduce’ in each individual. When you have a set of individuals following that algorithm, higher-level constraints winnow the results in a finite world – there is not enough room for everybody, which impinges upon the ‘survive’ process. The results are winnowed whether NS is in operation or not.

  82. gpuccio: To Zachriel (at TSZ) 

    Those were onlooker’s comments. 

    gpuccio: “deterministic explanation” … They are a RV + NS (where NS is the deterministic part of the algorithm) 

    That may be the source of confusion. You had seemed to be including evolution as a deterministic process (taken broadly). However, evolution is not purely deterministic, but includes random elements. For that matter, neither are evolutionary algorithms. (If you want to be pedantic, you can use a true-random generator.)

    So evolutionary algorithms can generate dFSCI, per your definition #2-4.
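
    To make the RV + NS split concrete, here is a minimal toy (our own sketch, not anyone’s quoted program): random variation proposes the changes, and a fixed, deterministic acceptance rule does all the selecting.

    #include <stdio.h>
    #include <stdlib.h>

    #define GENOME_LEN 64  /* toy genome: 64 bits  */
    #define TARGET_FIT 60  /* stop at this fitness */

    /* Deterministic fitness: count of 1-bits (a stand-in for any
       fixed selective criterion). */
    static int fitness(const unsigned char *g)
    {
        int f = 0;
        for (int i = 0; i < GENOME_LEN; i++)
            f += g[i];
        return f;
    }

    int main(void)
    {
        unsigned char genome[GENOME_LEN];
        srand(42);

        for (int i = 0; i < GENOME_LEN; i++)
            genome[i] = rand() % 2;        /* random starting point (RV) */

        while (fitness(genome) < TARGET_FIT) {
            int before = fitness(genome);
            int pos = rand() % GENOME_LEN; /* random variation (RV)      */
            genome[pos] ^= 1;
            if (fitness(genome) < before)  /* deterministic culling (NS) */
                genome[pos] ^= 1;          /* revert the harmful change  */
        }
        printf("reached fitness %d\n", fitness(genome));
        return 0;
    }

    The only deterministic part is the acceptance test; everything the test chooses among is generated randomly.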

  83. Hold it. That can’t be right. 

    gpuccio: The concept is very simple: dFSCI that cannot be explained by any known mechanism warrants a design inference.

    Your definition referred to a deterministic mechanism. 

    gpuccio: The concept is very simple: dFSCI that cannot be explained by any known mechanism warrants a design inference. Why? Because dFSCI is a very good indicator of design (100% specificity in empirical tests). 

    Heh. You couldn’t have stated the God of the Gaps more explicitly. Per your own statements, there are some sequences with “functional complexity”, and some of these sequences have known causes! But you still conclude that those that don’t must be designed. And when another gap is filled, you simply remove it from the class and claim your definition never fails!

  84. Shorter gpuccio:

    1. Take a bucket of complex sequences.

    2. Throw out the ones that are explained by a “known mechanism”.

    3. Amazing!  Of the sequences that are left, not a single one is explained by a known mechanism!

    4. Later you discover a mechanism that can explain one of the remaining sequences.

    5. Throw it out of the bucket and return to step #3. 

    Come on, gpuccio.  You can do better than this. 

  85. It occurs to me that no one on either side of the debate knows the history of sequences, or how far removed they are from random sequences. No one knows how many stepwise mutations separate a minimally functional sequence from a highly specialized one.

    There is no grammar or syntax that we understand.

    So it makes no sense to count bases. Length of sequence does not imply meaning. This is what I had in mind when I tried to distinguish between bandwidth and meaning. DNA does not lend itself to quantifying meaning.

  86. Petrushka notes: No one knows how many stepwise mutations separate a minimally functional sequence from a highly specialized one.

    This pretty much gets to the point. All this posturing about the “improbabilities” of specified structures and functions is totally irrelevant, even at the simplest levels of complexity.

    Given a bunch of oxygen and hydrogen, what prediction does one make about the properties and functions that emerge when they are put into the same volume of space and allowed to do whatever they do? How do you even predict what they will do without having seen it?

    Will anyone predict that a function that emerges from this will be to erode huge canyons on planetary objects? Will they predict that within a very narrow temperature range that it will be instrumental in the leaching of salts out of rocks? Will they predict that within an even narrower temperature range that it will split rocks? Will they predict that it will be a solvent for millions of other compounds as well? Will they even predict snowflakes?

    Water has thousands of properties and functions that are not predictable by knowing the properties of hydrogen and oxygen. Properties and function emerge not only from the increased complexity itself, but from the interactions of emergent properties with other emergent properties extant in the environment.

    What possible prediction can anyone make about far more complex molecules and their environments without already having considerable experience with complex molecules along with the benefit of hindsight and experience? What possible prediction can one make about the properties and functions that emerge from all the atoms that make up a biomolecule in the presence of water within a narrow temperature range?

    ID/creationist log base 2 math is a pretentious child’s game compared with the real world of chemistry, physics, and biology. ID/creationists just sneer at chemistry, physics, and biology; they don’t have to learn any of it. All they need to know is how to take a logarithm to base 2 of the ratios of the cardinalities of sets of non-interacting objects and suddenly they know all; and they can pompously “predict” what will NOT happen. This is ID/creationism in a nutshell.

  87. Mike,

    Oddly enough, I see a steady, entirely predictable pattern. Like a book of problems with the answers in the back. The answers might be wildly wrong, or unrelated to the problems, but they all use the same book and the answers are Defined Truth. If they don’t fit the problems, the problems are wrong.

    Seriously, you know what they’re going to say in each instance sure as sunrise. By now, you’ve noticed that the answers never change. You can, by now, predict exactly what response you’ll get and you’ll never be wrong.

    They’re like Joseph Heller’s soldier who saw everything twice. Hold up one finger, he sees two. Hold up two, he sees two. Hold up three, he sees two. You know what’s supposed to happen, and it always does.   

  88. Flint said: They’re like Joseph Heller’s soldier who saw everything twice. Hold up one finger, he sees two. Hold up two, he sees two. Hold up three, he sees two. You know what’s supposed to happen, and it always does.

    But, as we all know, the answer is 42. :-)

  89. Flint:

    Your observation got me to thinking about the new window dressing over at UD.

    That site has always been a pathetic scene of kvetching and self-pity about the cabal of bad old scientists throughout the entire world that rejects them and gets in the way of their winning the Nobel Prize or being the intellectual power houses of society.

    Now they have apparently adopted those two blackguards, Mung and Joe, to sit all day and throw feces, belch, fart, and moon everyone in the world.  Apparently that is their major talent; and with nothing else to do in life, what better exposure (pun intended) can two such blackguards have?  They have become the face of UD and its true feelings.  Indeed, the answer must always be two; how obvious!  They are no longer even faking the intellectualism.

    Maybe there is some humor in all that after all.

  90. Mung: ” Yes, I see you do the same thing as Lizzie. You don’t actually calculate CSI.”

    //—————————-

    keiths: // program stops when this fitness threshold is exceeded
    #define FITNESS_THRESHOLD 1.0e60

    while (genome_array[0].fitness < FITNESS_THRESHOLD) …”

    //——————————

    Let me relabel this for you Mung.

    #define CSI_THRESHOLD 1.0e60

    while (genome_array[0].dFSCI < CSI_THRESHOLD) { … }

    return CSI_TRUE;
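
    Filled out into something that actually compiles (my reconstruction of the rhetorical point, not keiths’ real program; the genome struct is hypothetical), the relabeling changes nothing but the name:

    #include <stdio.h>

    #define CSI_THRESHOLD 1.0e60  /* same number, new label */

    /* Hypothetical stand-in for the genome type: one number is enough,
       because the threshold test is just a numeric comparison. */
    struct genome { double dFSCI; };

    int main(void)
    {
        struct genome genome_array[1] = { { 1.0 } };

        /* Grow the quantity until it crosses the cutoff; whether we call
           it "fitness" or "dFSCI" changes nothing about the loop. */
        while (genome_array[0].dFSCI < CSI_THRESHOLD)
            genome_array[0].dFSCI *= 2.0; /* stand-in for one generation */

        printf("threshold exceeded: CSI declared\n");
        return 0;
    }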

  91. Mung: And then they think that if they can just generate enough Shannon Information that it qualifies as CSI.

    A randomizer is sufficient to generate Shannon Information. Clearly CSI is meant to represent something else. The problem is getting a consistent metric. 

    Mung: Don’t blame your sloppy use of language on language.

    We used the accepted terminology. 

    natural selection: a natural process that results in the survival and reproductive success of individuals or groups best adjusted to their environment and that leads to the perpetuation of genetic qualities best suited to that particular environment.

    Zachriel: Per your own statements [gpuccio], there are some sequences with “functional complexity” and that some of these sequences have known causes! But you still conclude that those that don’t must be designed.

    Mung: That’s false. 

    Actually, that’s precisely how we read gpuccio’s statements. He defines functional complexity, excludes those with known causes, then concludes the remaining sequences are designed. Keiths summarized it above. 

  92. What makes you think he can do better? This is all that ID and creationism are: gussied-up gaps. The trick is to surround the gap with enough verbiage that you lose track of what is being done.

  93. Mung: “Finally, an actual string to analyze.

    So, can someone please post a program to algorithmically compress and decompress this string?

    And can some give me a description of it that doesn’t just consist of the string itself?

    H H H T H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H T H H H T H H H T H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H T H H H H T H H H H T H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H T H H H H T H H H T H H H H T”

    Mung, are you saying you don’t know how to compress this string?
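
    Since the string is nothing but runs of H’s separated by T’s, run-length coding compresses it losslessly. A minimal sketch of my own (the run counts are the compressed form; it assumes the string ends in T, as Mung’s does):

    #include <stdio.h>

    /* Compress: print the length of each run of H's that ends in a T,
       so "H H H T H H H H T" becomes "3 4". */
    static void compress(const char *s)
    {
        int run = 0;
        for (; *s; s++) {
            if (*s == 'H') run++;
            else if (*s == 'T') { printf("%d ", run); run = 0; }
            /* spaces are skipped */
        }
        printf("\n");
    }

    /* Decompress: for each count, emit that many H's and then a T. */
    static void decompress(const int *runs, int n)
    {
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < runs[i]; j++) printf("H ");
            printf("T ");
        }
        printf("\n");
    }

    int main(void)
    {
        compress("H H H T H H H T H H H H T"); /* prints: 3 3 4 */
        int runs[] = { 3, 3, 4 };
        decompress(runs, 3); /* prints: H H H T H H H T H H H H T */
        return 0;
    }

    On Mung’s full string the counts are almost all 4s with a scattering of 3s, so the compressed form is a small fraction of the original, exactly the kind of low Kolmogorov complexity this thread has been discussing.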

  94. gpuccio: The “known causes” have nothing to do with the assessment of dFSCI. The requisites to assess dFSCI are two (as I have said millions of times):

    a) High functional information in the string (excludes RV as an explanation)

    b) No known necessity mechanism that can explain the string (excludes necessity explanation)

    Previously, you said “no deterministic explanation for the string is known”. Now you use “necessity mechanism”. We suggested there was confusion with your terminology. Is evolution a necessity mechanism? You seem to imply so when you exclude protein relatives from the set of dFSCI.
