A list of things for which CSI has been determined

Intelligent Design advocates are still talking about CSI and determining its value.

CSI measures whether an event X is best explained by a chance hypothesis C, or some specification S.

So I’d like this thread to be a list of biological entities and the value of CSI that has been determined for each.

If no entries are made, then I believe that would demonstrate that CSI might in principle measure X, Y, or Z, but that it never actually has done so.

Out of interest, what is the CSI of a bacterial flagellum?

133 thoughts on “A list of things for which CSI has been determined”

  1. keiths: The CSI being discussed is Dembski’s 2002 version, though Eric keeps trying to change the subject.

    But to what exactly is he trying to change the subject? I’m trying to be charitable to Eric: I assume he has a reason for what he posts, and that he understands the concerns with multiple definitions of CSI and the need to pick one, or to justify explicitly why he does not need to.

    Same charitableness for my bit on the xor stuff. I assume Eric knows the difference between xor and a permutation, so why would he go to an xor? I agree it does not make sense when considering efficiency of either implementation or specification of f. Maybe there is some theoretical reason to pick a sequence of simple XORs that Eric had in mind? I admit that seems to be extreme charitableness.

    Bruce: what ASC argument against stochastic evolution is Eric referring to?

    Keith: He’s referring to this paper: “On the Improbability of Algorithmic Specified Complexity”

    Right, I knew about that paper, but neither “design” nor “stochastic” appears, according to my text search of that paper. Besides providing that bound on ASC, the paper (e.g., in the conclusion) claims that ASC measures “how well a probability explains a given event”.

    In the Game of Life paper, that phrase changes to
    [start of quote]
    “Objects with high ASC defy explanation by the stochastic process model. Thus, we expect objects with large ASC are designed rather than arising spontaneously. Note, however, we are only approximating the complexity of patterns and the result is only probabilistic.” (p. 586)
    [end of quote]
    But going from “wrong probability model” to “it must be designed” seems to omit some steps. Like, what other probability distribution is possible given what science tells us about the world?

    That issue reminds me of the issue with Dembski’s CSI version that depends first on determining the probability that the event/system can be explained by evolution.

    I understand you to have noted that issue for the G of L paper by criticizing the chosen probability distribution as ignoring the physics of the game.

  2. keiths:

    Bruce: what and how the principle of maximum entropy is being used (is there an ASC paper on that to your knowledge?)
    Keith: I’m unaware of any ASC paper on that topic.

    Looking more closely at the G of L paper, I found this at the end of the introduction section:
    [start of quote]
    “Use of KCS complexity has been used elsewhere to measure meaning in other ways. Kolmogorov sufficient statistics [2], [23] can be used in a two-part procedure. First, the degree to which an object deviates from random is ascertained. What remains is algorithmically random. The algorithmically nonrandom portion of the object is then said to capture the meaning of the object [24]. The term meaning here is solely determined by the internal structure of the object under consideration and does not directly consider the context available to the observer as is done in ASC”
    [end of quote]
    I think that “algorithmic randomness” is related to the maximum entropy principle that Eric name dropped in this thread and in an OP. Because he did not explain why that mattered to ASC/CSI, I think “name dropping” is the right description of his usage of these terms.

    I’m not sure if that claimed advantage of adding context to the standard approach, mentioned at the end of the quote, is really an advantage. I need to go back and review what you and Tom said about that in the other thread.

  3. Random thought:

    Suppose I generate a few thousand bits of data via radioactive decay.

    Is there any computational or logical process for ensuring that this sequence does not occur in the expansion of pi?

  4. petrushka,

    Suppose I generate a few thousand bits of data via radioactive decay.

    Is there any computational or logical process for ensuring that this sequence does not occur in the expansion of pi?

    No. And though it hasn’t been proven, mathematicians suspect that every possible finite sequence of digits will occur somewhere in the decimal expansion of pi.
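
    A search can confirm occurrence but never rule it out. Here is a minimal sketch of such a search (assuming the mpmath library; the target sequence and the digit count are arbitrary choices of mine):

    ```python
    # Hedged sketch: look for a digit sequence in the early decimal digits
    # of pi. Finding it proves occurrence; not finding it proves nothing,
    # since the expansion is infinite.
    from mpmath import mp

    N = 100_000                            # digits to examine (arbitrary)
    mp.dps = N + 10                        # working precision, with a small guard
    pi_digits = mp.nstr(mp.pi, N + 1)[2:]  # N digits after the decimal point

    target = "271828"                      # hypothetical example sequence
    pos = pi_digits.find(target)
    print(f"found at digit {pos + 1}" if pos >= 0
          else f"not found in the first {N} digits")
    ```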

  5. OMagain,

    So, will you be assessing my claim? If so, how? What is it I need to provide to you?

    I cannot assess your claim until you support it.

    From known empirical evidence you cannot, so I have to assume it is false at this point unless you bring new information to the table.

  6. keiths:

    The CSI being discussed is Dembski’s 2002 version, though Eric keeps trying to change the subject. The 2002 version of CSI is the one which Dembski claimed was conserved — a claim that Eric has been unable to defend, as we’ve seen.

    Bruce:

    But to what exactly is he trying to change the subject?

    To Montañez’s version of CSI, rather than Dembski’s. That’s why he keeps talking about “normalization by kardis”.

    I assume Eric knows the difference between xor and a permutation, so why would he go to an xor?

    Just sloppiness, as far as I can tell. He remembered that Joe was scrambling the image, but he didn’t remember that it was being done by permutation — and he didn’t bother to check:

    Using ASC notation, and assuming f is essentially the XOR mask of say 100×100 bits, and X is the scrambled b/w image of 100×100 pixels…
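
    For concreteness, here is a minimal sketch of the XOR-mask scrambling Eric describes (the image and mask are stand-ins of my own). Like Joe’s permutation, XOR with a fixed mask is reversible, so it destroys the specification while conserving the underlying information:

    ```python
    # Hedged sketch: scramble a 100x100 bit image by XOR with a same-size
    # mask. XOR with a fixed mask is its own inverse: applying it twice
    # restores the original exactly.
    import random

    N = 100 * 100
    rng = random.Random(0)
    image = [rng.randint(0, 1) for _ in range(N)]  # stand-in for the b/w image
    mask = [rng.randint(0, 1) for _ in range(N)]   # the XOR mask Eric posits

    scrambled = [b ^ m for b, m in zip(image, mask)]
    restored = [b ^ m for b, m in zip(scrambled, mask)]  # XOR again to undo
    assert restored == image
    ```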

  7. Bruce,

    But going from “wrong probability model” to “it must be designed” seems to omit some steps. Like, what other probability distribution is possible given what science tells us about the world?

    Yes. In Dembski’s 2005 version of CSI, he’s explicit that P(T|H) must account for “Darwinian and other material mechanisms”. Somehow that lesson was forgotten when it came to ASC.

    That issue reminds me of the issue with Dembski’s CSI version that depends first on determining the probability that the event/system can be explained by evolution.

    I understand you to have noted that issue for the G of L paper by criticizing the chosen probability distribution as ignoring the physics of the game.

    Right. They make this confession in the paper:

    In order to approximate the probabilities, we will assume that the probability of a pattern arising is about the same whether or not the rules of the Game of Life are applied, i.e., the rules of the Game of Life do not make interesting patterns much more probable than they would otherwise be.

    I commented:

    They’ve run head-on into the problem that has bedeviled [Dembski’s] specified complexity all along: no one can calculate the relevant probabilities. If these guys can’t even estimate the probabilities in a vastly simplified universe like the Game of Life, what hope do they have of applying ASC to the real world?

    Just imagine trying to make this argument about the real world: “The laws of nature make it difficult to predict the probability of meaningful patterns arising from our initial conditions, so we’re just going to assume that they don’t make any difference, and that interesting patterns aren’t much more probable with the laws of nature than without.”
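
    To make the worry concrete, here is a toy probe of their assumption (entirely my own construction; the grid size, density, step count, and target pattern are arbitrary): compare how often a small pattern appears in a uniformly random grid versus after the Game of Life rules have run for a while.

    ```python
    # Hedged sketch: does applying Conway's Game of Life rules change the
    # frequency of a small target pattern (a "blinker"), relative to a
    # random grid? All parameters are arbitrary choices for illustration.
    import random

    W = H = 32
    BLINKER = [(0, 0), (0, 1), (0, 2)]  # horizontal triple: a period-2 oscillator

    def step(grid):
        """One Game of Life update on a toroidal W x H grid."""
        new = [[0] * W for _ in range(H)]
        for r in range(H):
            for c in range(W):
                n = sum(grid[(r + dr) % H][(c + dc) % W]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
                new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
        return new

    def contains_blinker(grid):
        """True if the blinker occurs anywhere, with a dead one-cell border."""
        for r in range(H):
            for c in range(W):
                live = {((r + dr) % H, (c + dc) % W) for dr, dc in BLINKER}
                box = {((r + dr) % H, (c + dc) % W)
                       for dr in range(-1, 2) for dc in range(-1, 4)}
                if all(grid[i][j] for i, j in live) and \
                        not any(grid[i][j] for i, j in box - live):
                    return True
        return False

    def trial(steps):
        grid = [[int(random.random() < 0.3) for _ in range(W)] for _ in range(H)]
        for _ in range(steps):
            grid = step(grid)
        return contains_blinker(grid)

    for steps in (0, 20):
        hits = sum(trial(steps) for _ in range(100))
        print(f"after {steps:2d} steps: blinker frequency ~ {hits / 100:.2f}")
    ```

    If the two frequencies differ appreciably, the paper’s “rules don’t matter” assumption is already shaky in this vastly simplified universe.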

  8. colewd:
    OMagain,

    I cannot assess your claim until you support it.

    From known empirical evidence you cannot, so I have to assume it is false at this point unless you bring new information to the table.

    LOL! Good old Bill. Demands the exact thing from others he refuses to provide for his own ID nonsense.

    Looks like Bill’s ID claims have failed miserably again. Too bad. 🙂

  9. Not saying it was Eric’s intent, but the “XOR with a one-time pad” function has the ‘benefit’ that its information content scales with the size of the image (as described by Eric), whereas a permutation function could be rather simple and still obliterate the CSI of arbitrarily large images, AIUI.
    Either way, citing an XOR obscures the fact that the function need not scale with the data acted upon.

  10. Jock,

    Not saying it was Eric’s intent, but the “XOR with a one-time pad” function has the ‘benefit’ that its information content scales with the size of the image (as described by Eric), whereas a permutation function could be rather simple and still obliterate the CSI of arbitrarily large images, AIUI.

    Joe’s permutation function was also tied to the size of the image. Here’s his June 1st description:

    Let me add that, to show that Specified Information is not conserved, we can use a simpler example, which will be found in my 2007 article (Google with terms “Dembski” and “Felsenstein” to find it).

    We have a digital image, an array of 10,100 0’s and 1’s, which looks like a flower (it was in fact made from a photo of a flower). And we take a permutation of the integers 1 through 10,100 and we scramble it. We know the permutation, so we can in principle unscramble it any time we want. The information is conserved, since we can reverse the permutation at will and the image will return, unchanged.

    But if we use the specification “looks like a flower” the original image has it, and the scrambled image doesn’t. So although information is conserved Specified Information isn’t. The amount of it can either increase or decrease (depending on which direction we’re going), while all that time, the Shannon information is conserved.

    Again, I hope that EricMH is convinced by this example that the conservation of information does not apply to Specified Information.
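
    In code form, Joe’s example looks something like this minimal sketch (with stand-in data; the real image was 10,100 bits made from a flower photo). The scramble is lossless and reversible, yet it destroys the “looks like a flower” specification:

    ```python
    # Hedged sketch of Joe's permutation scramble: Shannon information is
    # conserved (the scramble is invertible), while the "looks like a
    # flower" specification is destroyed.
    import random

    N = 10_100
    image = [random.randint(0, 1) for _ in range(N)]  # stand-in flower bitmap

    rng = random.Random(42)            # pseudorandom, as Joe describes
    perm = list(range(N))
    rng.shuffle(perm)

    scrambled = [image[perm[i]] for i in range(N)]

    # Build the inverse permutation and unscramble at will.
    inverse = [0] * N
    for i, p in enumerate(perm):
        inverse[p] = i
    restored = [scrambled[inverse[i]] for i in range(N)]
    assert restored == image           # the information was there all along
    ```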

  11. Not sure what “security” is in this context. I didn’t say (until later) how the permutation was chosen (it was with pseudorandom numbers). Lots of other permutations would have done the job of destroying CSI while also able to “create” it when going the other way. Applying a random dot pattern with an XOR function would do much the same things.

  12. I was alluding to cryptographic ‘security’. I forgot that you stipulated that the size of the permutation block be equal to the size of the message.
    We agree that a smaller block would be sufficient to destroy any CSI, whether f is an XOR or a permutation.

  13. DNA_Jock,

    One doesn’t need “security”. I could have used a permutation that kept odd-numbered columns unchanged and replaced the even-numbered columns with those same columns in reverse order.

    I am pretty sure that would destroy like-a-flower-ness. Plus its inverse is easy: it’s the same function applied again.
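
    Something like this rough sketch (the grid is a stand-in of my own):

    ```python
    # Hedged sketch: a self-inverse column scramble. Odd-numbered columns
    # (counting from 1) stay put; even-numbered columns are reversed among
    # themselves. Reversal is an involution, so scrambling twice is the
    # identity.
    import random

    def scramble(grid):
        ncols = len(grid[0])
        even_cols = [c for c in range(ncols) if (c + 1) % 2 == 0]
        mapping = dict(zip(even_cols, reversed(even_cols)))
        return [[row[mapping.get(c, c)] for c in range(ncols)] for row in grid]

    grid = [[random.randint(0, 1) for _ in range(10)] for _ in range(6)]
    assert scramble(scramble(grid)) == grid  # its own inverse, as claimed
    ```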

  14. keiths: To Montañez’s version of CSI, rather than Dembski’s. That’s why he keeps talking about “normalization by kardis”.

    Yes, I noticed those brief references, but his subsequent posts switched to ASC rather than trying to show how Montañez’s CSI solved the issues Joe raised.

    Has there ever been any discussion of that Montañez paper in TSZ?

    I’ll leave it to Joe to demonstrate the issues with ASC when it comes to science, but I’m still curious about Eric’s name dropping post, particularly the stuff starting with the “things like ID” bulleted list.

    http://theskepticalzone.com/wp/correspondences-between-id-theory-and-mainstream-theories/

    Being charitable again, I assume Eric has something in mind when he claims ID is just re-using those ideas.

    But how do all those concepts in that OP fit together with ID in general, and with ASC or the Montañez paper in particular, at least in Eric’s thinking?

  15. keiths:
    Just imagine trying to make this argument about the real world: “The laws of nature make it difficult to predict the probability of meaningful patterns arising from our initial conditions, so we’re just going to assume that they don’t make any difference, and that interesting patterns aren’t much more probable with the laws of nature than without.”

    Exactly. This is why we will never see a real-life, worked-out ID example of CSI, SI, SCI, ASC, or whatever flavour of alphabet soup is current this month. It simply cannot be done, because the probabilities involved cannot be estimated to any reasonable degree.

    This emperor has no clothes and all they do is try to befuddle us with hocus-pocus math. You actually don’t have to be a math guru to see right through it all.

    Eric, next time you visit, please provide a worked-through biological example of this metric that you claim demonstrates design. If you can’t do it yourself, then go and ask one of your ID buddies. If they can’t do it either, then please explain to us what the fecking use is of all of this.

    In the absence of a positive response, why would you expect anyone to take it seriously?

  16. faded_Glory:

    Eric, next time you visit, please provide a worked-through biological example of this metric that you claim demonstrates design.

    I predict he’ll fall back on this excuse:

    I choose not to talk about it because I don’t know much about the details of biology. I choose to talk about areas in my expertise.

    …which would be a bit more believable if he were consistent about it.

    fG:

    If you can’t do it yourself, then go and ask one of your ID buddies. If they can’t do it either, then please explain to us what the fecking use is of all of this.

    Eric doesn’t like talking about that, either. The response I quoted above came after I asked him about Dembski’s failed attempt at showing that the flagellum was designed.

  17. Bruce,

    Has there ever been any discussion of that Montañez paper in TSZ?

    It’s been mentioned more than once, but I’m not aware of any extended discussions of it.

    Being charitable again, I assume Eric has something in mind when he claims ID is just re-using those ideas.

    It’s a defense strategy. When people attack ID, Eric likes to pretend that they are attacking mainstream ideas. That’s the basis for his Fields Medal babbling, for instance.

    What he doesn’t realize is that his strategy is self-defeating. If there were nothing new in ID, then Dembski, Marks and Ewert would be plagiarists presenting old ideas as if they were original. On the other hand, if there is something new to ID, then critics can criticize ID without thereby criticizing the mainstream ideas from which ID borrows.

  18. BruceS: Has there ever been any discussion of that Montañez paper in TSZ?

    I’ll take a shot at chewing through the leather straps. Perhaps you’ll see an OP in the next day or two. (Apologies to Joe for ignoring another matter.)

  19. Bruce,

    In light of recent discussions, it’s worth pointing out that Montañez’s paper uses the CSI described in Dembski (2005), for which Dembski never claimed conservation, and not the version in Dembski (2002), for which he did.

    Eric has said that Dembski’s 2002 claim is correct, which is why Joe and I don’t let him get away with changing the subject to Montañez.

  20. keiths: Montañez’s paper uses the CSI described in Dembski (2005)

    Actually, it doesn’t, though Montañez claims that it does. That’s the main thing I hope to explain in an OP. Note that Dembski (2005) assigns specified complexity to the set T containing all of the possible outcomes that match pattern T. (Dembski denotes the pattern and the set of matching outcomes identically.) Montañez assigns specified complexity to individual outcomes.

    Suppose that “object” is in the vocabulary of the semiotic agent, and that all possible outcomes are objects. Suppose also that the actual outcome is very low in probability. Dembski (2005) will not say that a particular low-probability object is high in specified complexity, because it is the probability (= 1) of an object that he considers. Montañez will say that a low-probability object is high in semiotic specified complexity, because he mistakenly puts the (very low) probability of the particular outcome in place of the probability (= 1) of the set of objects [outcomes] matching the pattern “object”.


  21. For the curious, Tom is talking about the following equation from Montañez’s paper. Note that Montañez is using p(x) in place of Dembski’s P(T|H).
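
    The image of the equation has not survived here, so the following is my reconstruction from memory of the two papers, not a verbatim citation. Dembski (2005) assigns specified complexity to the pattern/set T:

    $$\chi = -\log_2\!\left[\,10^{120}\cdot \varphi_S(T)\cdot P(T\mid H)\,\right]$$

    while Montañez’s kardis form assigns it to an individual outcome x:

    $$SC(x) = -\log_2 \kappa(x) = -\log_2\frac{r\,p(x)}{\nu(x)}$$

    The substitution Tom describes is putting the probability p(x) of the single outcome where Dembski has the probability P(T|H) of the whole set of matching outcomes; the two can differ enormously.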

  22. keiths: If there were nothing new in ID, then Dembski, Marks and Ewert would be plagiarists presenting old ideas as if they were original. On the other hand, if there is something new to ID, then critics can criticize ID without thereby criticizing the mainstream ideas from which ID borrows.

    There is something new in ID, but what critics criticize are the mainstream ideas from which ID borrows.

  23. EricMH:

    There is something new in ID, but what critics criticize are the mainstream ideas from which ID borrows.

    This thread indicates otherwise. Those 20+ errors are original to Ewert, Dembski and Marks.

  24. graham2:
    I’ve come a bit late to the party. Is there a list of things with calculated CSI?

    It’s a null set as far as biology goes.

  25. keiths: This thread indicates otherwise. Those 20+ errors are original to Ewert, Dembski and Marks.

    The one ‘new’ criticism there is by RichardHughes, and it is the same criticism as Dr. Felsenstein’s and Dr. English’s: he uses a different probability distribution than the original chance hypothesis, and claims victory. All this criticism does is substantiate the ASC claim that it can falsify the chance hypothesis, so it is not really a criticism but a corroboration.

    Additionally, RH’s beef is really with randomness deficiency, which ASC is based on. Another piece of mainstream mathematics that you can earn a Fields Medal by refuting. Like I say, you guys are aiming way too low, wasting your time with ID writers when you could be taking on the entire mathematical establishment.

  26. You’re a terrible bluffer, Eric.

    That thread is full of unanswered, detailed criticisms, not of mainstream mathematics, but of the claims that Ewert, Dembski, and Marks make in their Game of Life paper.

    Where are your rebuttals?

  27. EricMH: Another piece of mainstream mathematics that you can earn a Fields Medal by refuting. Like I say, you guys are aiming way too low, wasting your time with ID writers when you could be taking on the entire mathematical establishment.

    Hey Eric, why don’t you submit your mathematical proof that evolution is impossible to any mainstream science journal? A Fields Medal is nothing compared to the Nobel Prize you’re sure to earn.

    Or maybe deep down you realize you’re just an egotistical empty drum making lots of meaningless noise and don’t want to embarrass yourself any further.

  28. Adapa: Hey Eric, why don’t you submit your mathematical proof that evolution is impossible to any mainstream science journal?

    Depends what you mean exactly, but I’d be happy to.

    If by ‘evolution is impossible’ you mean evolution cannot generate ASC, then that is already proven in ‘Improbability of ASC’. Evolution is a stochastic process, so generates a probability distribution over possible events. We set the chance hypothesis to that probability distribution, and by the ‘improbability of ASC’ theorem evolution cannot generate X bits of ASC with probability better than 2^-X.

    There you go!
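
    In symbols (my paraphrase; see the paper for the precise statement): with conditional Kolmogorov complexity K(x | C) for context C, define

    $$ASC(x, C, P) = -\log_2 P(x) - K(x \mid C)$$

    and the theorem is that, for X drawn from P,

    $$\Pr\big[ASC(X, C, P) \ge \alpha\big] \le 2^{-\alpha}$$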

  29. EricMH,

    This sounds like a refutation of the ‘tornado in a junkyard’ scenario, not a refutation of evolution through stepwise changes and cumulative selection, where the added complexity between successive steps can actually be quite minor. Surely the probability distribution of each single step is vastly different from the probability distribution of the final end product when that is regarded as a one-step occurrence? Moreover, the probability distributions of each step are by no means necessarily identical, so how do you go about specifying each one of these?
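
    A toy illustration of the gap, as a minimal sketch (a Dawkins-style “weasel” setup; all parameters are my own arbitrary choices): the one-shot probability of the 28-character target is 27^-28, roughly 10^-40, yet cumulative selection over small steps reaches it in a modest number of generations.

    ```python
    # Hedged sketch: cumulative selection vs one-shot chance. Each
    # generation makes 100 mutant copies and keeps the best match, so the
    # per-step distribution is nothing like the one-shot distribution.
    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def mutate(s, rate=0.04):
        return "".join(random.choice(CHARS) if random.random() < rate else ch
                       for ch in s)

    def score(s):
        return sum(a == b for a, b in zip(s, TARGET))

    current = "".join(random.choice(CHARS) for _ in TARGET)
    generation = 0
    while current != TARGET:
        generation += 1
        offspring = [mutate(current) for _ in range(100)]
        current = max(offspring + [current], key=score)  # keep the best
    print(f"reached the target in {generation} generations")
    ```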

  30. EricMH: Depends what you mean exactly, but I’d be happy to.

    If by ‘evolution is impossible’ you mean evolution cannot generate ASC, then that is already proven in ‘Improbability of ASC’. Evolution is a stochastic process, so generates a probability distribution over possible events. We set the chance hypothesis to that probability distribution, and by the ‘improbability of ASC’ theorem evolution cannot generate X bits of ASC with probability better than 2^-X.

    There you go!

    When will you be publishing this evolution killing evidence and going for your Nobel Prize?

    With your huge ego and minuscule knowledge of actual evolutionary biology you’re what the British refer to as “too clever by half”.

    faded_Glory: This sounds like a refutation of the ‘tornado in a junkyard’ scenario, not a refutation of evolution through stepwise changes and cumulative selection, where the added complexity between successive steps can actually be quite minor. Surely the probability distribution of each single step is vastly different from the probability distribution of the final end product when that is regarded as a one-step occurrence? Moreover, the probability distributions of each step are by no means necessarily identical, so how do you go about specifying each one of these?

    The successive steps scenario still sets up a probability distribution over outcomes, so the main argument still stands.
