Circularity of using CSI to conclude Design?

At Uncommon Descent, Winston Ewert, a coauthor of William Dembski and Robert Marks, has made a post conceding that using Complex Specified Information to conclude that the evolution of an adaptation is improbable is in fact circular. This was argued at UD by “Keith S.” (our own “keiths”) in recent weeks. It had long been asserted by various people here, and was argued in posts here by Elizabeth Liddle in her “Belling the Cat” and “EleP(T|H)ant in the room” series of posts (here, here, and here). I had posted at Panda’s Thumb on the same issue.

Here is a bit of what Ewert posted at UD:

CSI and Specified complexity do not help in any way to establish that the evolution of the bacterial flagellum is improbable. Rather, the only way to establish that the bacterial flagellum exhibits CSI is to first show that it was improbable. Any attempt to use CSI to establish the improbability of evolution is deeply fallacious.

I have put up this post so that keiths and others can discuss what Ewert conceded. I urge people to read his post carefully. There are still aspects of it that I am not sure I understand. What for example is the practical distinction between showing that evolution is very improbable and showing that it is impossible? Ewert seems to think that CSI has a role to play there.

Having this concession from Ewert may surprise Denyse O’Leary (“News” at UD) and UD’s head honcho Barry Arrington. Both of them have declared that a big problem for evolution is the observation of CSI. Here is Barry in 2011 (here):

All it would take is even one instance of CSI or IC being observed to arise through chance or mechanical necessity or a combination of the two. Such an observation would blow the ID project out of the water.

Ewert is conceding that one does not first find CSI and then conclude from this that evolution is improbable. Barry and Denyse O’Leary said the opposite — that having observed CSI, one could conclude that evolution was improbable.

The discussion of Ewert’s post at UD is interesting, but maybe we can have some useful discussion here too.

210 thoughts on “Circularity of using CSI to conclude Design?”

  1. I should also add that I am glad that we are discussing different definitions of CSI. I see two-and-a-half definitions:

    1. Orgel’s original specified information. Let’s call it O-SI. It measures how far out on some scale, such as fitness, the adaptation is: that is, how unlikely it is that a simple mutational process would produce it.

    Dembski’s original (2001) CSI is declared when you are so far out on that scale that the mutational process (or a monkey typing on a four-letter AGCT typewriter) would not be expected to produce that much SI even once in the history of the universe. Let’s call that O-CSI. Dembski then has a Law of Conservation of Complex Specified Information (LCCSI) that was supposed to show that you couldn’t get that far by processes such as natural selection. Alas for that argument, the LCCSI turned out not to prove that.

    2. Dembski’s (2006) CSI. That is what we are calling D-CSI. It builds the P(T|H) term into the definition of CSI and says that the processes in H include all naturalistic processes, including natural selection. It was described as a clarification of O-CSI, but the use of the LCCSI with O-CSI would never have made sense under that clarification. So as far as I can see, it is a different definition of CSI than O-CSI.

    3. Alt-CSI. “Well-matched parts”. I need to find the definition of this. That is why I am saying that we have two-and-a-half definitions of CSI.

  2. petrushka,

    O-CSI is an interesting measure of how much information has been built into the genome by natural processes. I actually defined a version of it, “adaptive information”, in 1978. It only refutes evolution if you have the Law of Conservation of Complex Specified Information, and if that law actually works, which it doesn’t.

    D-CSI is a conclusion, drawn from other data, whose nature is not explained.

    Alt-CSI: ??

  3. What that means is that he only accepts one possible solution in an evolutionary lineage. He is estimating the probability that an organism will have precisely the genetic sequence it has, as derived from a purely random sequence, within a limited number of trials. No incremental approach is allowed, and worse, the target is treated as the one and only sequence that is functionally relevant. The only way he imagines a sequence can be reached is by randomization, and all he considers is the conclusion. It really is a gussied-up version of the ‘747 in a junkyard’ argument that old-school creationists still use.

    Okay, here’s my stab at a better definition of CSI.

    Take two homologous sequences. I hope I use the word right. I mean two sequences that are different but produce the same phenotype.

    Count the differences and estimate how many steps it would require to have evolved from a common ancestor. Is the result compatible with the available time?

    We have Lenski’s LTEE as a baseline. I’m not aware of any other experiment that has actually recorded neutral evolution over time in pathetic detail.

  4. Hi Joe,

    First of all, apologies to you and keith for the link mess-up. Evidently the second link didn’t copy properly and I ended up just pasting the first one again. Barry’s partial quote was taken from this comment I made in another thread.

    Now, that having been said, as I explained in my comment to Keith, both of those (being the same comment) are about Dembski-CSI. My discussion of the Alt-CSI is in this very thread, in my comment to Patrick that was posted just before my first comment to you in this recent run starting a few days ago. I also discussed it a bit in a comment to R0bb over at UD later in the same thread as the extensive comment to him about Dembski-CSI that I linked to earlier in this thread.

    For petrushka’s sake, I will paste here the portion of that other comment at UD that is relevant to Alt-CSI so he doesn’t have to click over there:

    I will say that I think your confusion on this issue is not entirely your own fault. Different proponents of ID have sometimes used the term Complex Specified Information in different contexts. But as confusing as it can be, it is also understandable because, for example, it is perfectly sensible to speak of something like DNA having Complex Specified Information [when] using the term “Complex” according to its more common meaning of “having many well-matched parts”. In this case, one would be using CSI as a descriptive term for one or more features of a system rather than as a calculated value of the system’s improbability on chance hypotheses. And if this is what one means, that a system has many well-matched parts, that it matches an independent specification, and that it has some kind of semiotic dimension, what descriptive term could be more apt than “Complex Specified Information”? Personally, I think this is the more intuitive context in which to use the term CSI, which is why I think it would be more helpful if the CSI related to improbability was renamed for clarity to replace the “complex” with “highly improbable” or something of that nature.

    Joe, you further said:

    Obviously I followed the wrong links. The second of them was supposed to lead to the comment in which you discussed why use of Dembski’s CSI to conclude that something was unlikely under naturalistic processes was not circular. Which are the right ones to find that, and where will I find your statement of alt-CSI?

    Huh? I don’t know where you’re getting that bold bit from. I never said that using Dembski-CSI to conclude that something was unlikely under naturalistic processes was not circular. Again, that would be backwards. You determine that Dembski-CSI (or a high amount of it) is present only after you have already determined that, among other things, something was unlikely under naturalistic processes.

    Rather, what I said was that it is not circular to use the presence of Alt-CSI to conclude that something is unlikely under naturalistic processes, because Alt-CSI is not a measure of improbability but an observable attribute of a system that is believed to be unlikely, in principle, under naturalistic processes based on the way those processes are believed to work. Now, you could try to argue that the conclusion based on Alt-CSI is wrong, or that it wouldn’t actually require anything that is unlikely, in principle, based on the way known naturalistic processes are believed to work, etc., but it is not circular.

  5. Hi petrushka,

    petrushka:
    I’m browsing on a phone and tablet. It really isn’t possible to follow your argument across several different websites. I would appreciate a summary argument.

    My argument – or explanation – about Alt-CSI is here, in this very thread, in my last comment to Patrick.

    I don’t understand how unlikelihood is calculated.

    Unlikelihood (probability) is calculated (or assumed to have been calculated) under Dembski-CSI. Alt-CSI may calculate complexity in the sense of “well-matched parts”, or information, but not probability. At least not at the macro scale.

    HeKS

  6. HeKS,

    Are you sure you want to spend the last days of your vacation discussing this? The thread will still be here after you arrive home.

    Alt-CSI may calculate complexity in the sense of “well-matched parts”, or information, but not probability. At least not at the macro scale.

    I think Richard and petrushka were hoping that you would describe how it is calculated and provide an example.

    After that, you can show us how you establish (and justify) the alt-CSI threshold above which the design inference is made.

  7. HeKS: Unlikelihood (probability) is calculated (or assumed to have been calculated) under Dembski-CSI. Alt-CSI may calculate complexity in the sense of “well-matched parts”, or information, but not probability. At least not at the macro scale.

    I thought we had agreed that Dembski CSI can’t be calculated at all until you rule out incremental evolution.

    I don’t see what “well matched parts” has to do with anything. You have to rule out well matched parts being the result of evolution. All these variations of CSI and irreducible complexity require you first to rule out evolution.

    If you try to use them to rule out evolution, your argument is circular.

  8. You do realize, of course, that Paley put forth a rather good argument for “well matched parts.” His discussion is still better than anything current IDists are presenting.

    And likewise, you must realize that Darwin was responding to Paley.

    So unless you get into biochemistry and do a better job at it than Behe’s Edge, you haven’t made any progress.

  9. HeKS,

    Barry’s challenge related to the validity and logic of Dembski’s CSI argument, which is that if some effect is both highly improbable on any relevant naturalistic hypothesis and also matches an independent specification (i.e. has a high CSI value on all chance hypotheses), it is vastly more reasonable to conclude it was designed than that it came about through unguided processes.

    In other words:

    1. Determine the probability of a specified target T under naturalistic assumptions.

    2. If T is so vanishingly improbable under naturalistic assumptions that design is the only reasonable explanation, attribute CSI to it.

    3. If T exhibits CSI, conclude that T is vanishingly improbable under naturalistic assumptions and that design is the only reasonable explanation.

    The circularity is obvious.

    Also note that CSI serves no role in that argument except as a label for things that we’ve already determined must have been designed. Designed things are designed.

    Now let’s look at Barry’s challenge, which was to provide an example of unguided natural processes producing 500 or more bits of CSI.

    An outcome has 500 bits of CSI only if it is vanishingly improbable under unguided naturalistic assumptions — so improbable that even if there were billions of parallel universes, you wouldn’t expect it to happen even once.

    So Barry’s challenge amounts to this:

    Show me an example of unguided natural processes doing something that is vanishingly improbable for unguided natural processes to do.

    The circularity and the vacuity are obvious.

    As Joe put it earlier:

    Upon being made aware of it [an apparent example of unguided processes producing CSI], we (or Dembski) would then modify our assessment that the pattern contained CSI. So it would still be true that assessing whether a pattern is so improbable under natural processes of evolution as to be implausible is something you do before declaring it to have CSI.
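For readers who want the 500-bit threshold keiths describes in concrete terms: the specified-complexity measure is just the negative log2 of the target's probability under a chance hypothesis, and 500 bits corresponds to Dembski's "universal probability bound" of roughly 1 chance in 10^150. A minimal sketch of that arithmetic (my own illustration, not code from any ID source):

```python
import math

def csi_bits(p: float) -> float:
    """Specified complexity in bits: the negative log2 of the
    target's probability under the chance hypothesis considered."""
    return -math.log2(p)

# The 500-bit threshold corresponds to a probability of 2**-500,
# roughly 3e-151, on the order of Dembski's universal probability
# bound of about 1 in 10**150.
threshold_bits = 500
p_at_threshold = 2.0 ** -threshold_bits

# An event must be at least this improbable under EVERY relevant
# chance hypothesis before 500+ bits of CSI are attributed to it;
# the improbability is established first, the CSI label comes after.
assert csi_bits(p_at_threshold) == threshold_bits
print(f"P at 500 bits ~ {p_at_threshold:.3e}")
```

The probability is the input and the bit count is a restatement of it, which is why relabeling the improbability as "CSI" adds nothing to the argument.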

  10. petrushka:
    Is O-CSI anything like Durston’s dFIASCO?

    Richardthughes:
    petrushka,

    Durston’s FSC is measured in Fits, and Peezus (blessed be his name) does a good job on it here:

    http://scienceblogs.com/pharyngula/2009/01/31/durstons-devious-distortions/

    this was interesting / cryptic, too:
    http://www.uncommondescent.com/intelligent-design/durston-contd/

    If I understand correctly, Durston is using Hazen et al.’s “Functional Information”, which is a version of Orgel’s Specified Information. That underlies Dembski’s O-CSI.

    petrushka: Okay, here’s my stab at a better definition of CSI.

    Take two homologous sequences. I hope I use the word right. I mean two sequences that are different but produce the same phenotype.

    Count the differences and estimate how many steps it would require to have evolved from a common ancestor. Is the result compatible with the available time?

    I see no similarity between this definition and anybody’s definition of CSI. For example, none of them discuss comparing two genomes (or two sequences).

    Will comment on the latest from HeKS later today but have to do my day job for now.

  11. Joe Felsenstein: I see no similarity between this definition and anybody’s definition of CSI. For example, none of them discuss comparing two genomes (or two sequences).

    I wasn’t trying to paraphrase any IDist definition of CSI. I was trying to make one that makes sense. It’s just a thought experiment to test my own understanding.

  12. Richardthughes: Durston’s (FSC) is measured in Fits and Peezus (blessed be his name) does a good job on it…

    There was also this post by Kirk Durston at UD in August 2013. He responded to comments (including from Lizzie and a couple from me). He said, for example:

    No structural protein biologist thinks that stable 3D structures are common in protein sequence space. The consensus is that they are extremely rare.

    and a final assertion before quitting the thread:

    Finally, nobody in the field thinks that stable, 3D structures are common in sequence space. Please note I am talking about 3D structural proteins, which biological life seems to need, not some random sequence that has a lab-defined ‘function’ of merely binding to something. I repeat, nobody in the field thinks that sequences forming stable, 3D structures are common in sequence space.

  13. Alan Fox: Finally, nobody in the field thinks that stable, 3D structures are common in sequence space.

    Then why are there alleles? Why are there neutral and nearly neutral mutations? Why does Lenski assert that adaptive evolution continues indefinitely?

    Lenski is nobody. Wagner is nobody. Etc.

  14. Richardthughes: I think this is the part most of us doubt.

    How so? In some cases the calculation of the complexity or the information may be fairly straightforward, and I’ve already given such an example. But my point in saying “may calculate” is that no particular calculation is required to observe the presence of Alt-CSI. Look at a car with the panels stripped away and all the parts and wires exposed that are working together to make the car function as intended and you will not doubt that the presence of Alt-CSI is an observable, empirical fact without having to pull out a scientific calculator. The same goes for a paragraph of meaningful text. Where you see a lot of different parts working interdependently to fulfill a functional purpose or convey a meaningful message, you are observing Alt-CSI. Oftentimes it is further possible to calculate the complexity of the system in some way, or the amount of information present in it, but you don’t need to do that in order to recognize that you are observing Alt-CSI.

    And if at any point I happen to seem like I’m getting a little impatient, you’ll have to forgive me, but virtually every question that has been asked of me on this topic in the past few days was already answered in some detail in my comment to Patrick in this thread a few days ago. I’m getting the feeling that nobody actually read my comment to him, because all the questions put to me have essentially been about tiny individual aspects of issues that I addressed at some length (and I think with a fair bit of clarity) in that post.

    Want to know what I mean by Alt-CSI? See the post to Patrick.
    Want to know what I mean by complex? See the post to Patrick.
    Want to know what I mean by function? See the post to Patrick.
    Want to know how this form of CSI might be calculated? See the post to Patrick.
    Want to know what types of probability relate to it and how they factor in? See the post to Patrick.

    My post to Patrick was over 10 pages long and covered a lot of issues. It’s one thing if someone doesn’t understand some comment I made in it and they want some clarification, but virtually all of the questions have been asking me to explain this stuff from scratch as though I hadn’t already done so.

  15. Hi HeKS,
    Your patience is appreciated. It may be you ‘get’ it and we don’t. Be gentle with your students.

    Taking a step back I think there is a continuum going from

    1. It looks designed (to me)

    (through various justifications of complexity and or improbability)

    through:

    (N). Here is the statistical likelihood of this winning hand

    to:

    (X). Here is the statistical likelihood of any winning hand.

    With (X) we can then perhaps test against a chance hypothesis and (N) is certainly on the path to (X). (X) would be a requirement for science IMHO.

    I’ve read the supporting posts and I can’t see the design detection getting out of the conceptual. Obviously this is problematic, because some of the conceptual hurdles may be insurmountable (the math!).

    Where would you place new CSI on the continuum?

    Thanks!

    Rich
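Rich's distinction between (N) and (X) can be made concrete with a card example (my own illustration; the numbers are not from the thread): the likelihood of one particular pre-specified hand differs from the likelihood of any hand in the "winning" class, and only the latter is the right quantity to test against a chance hypothesis.

```python
from math import comb

# Total 5-card poker hands from a 52-card deck.
total_hands = comb(52, 5)  # 2,598,960

# (N): likelihood of THIS particular winning hand,
# e.g. the royal flush in spades, specified after the fact.
p_specific = 1 / total_hands

# (X): likelihood of ANY winning hand in the class,
# e.g. any of the four royal flushes. This is the quantity
# a chance hypothesis should actually be tested against.
p_any_royal = 4 / total_hands

print(p_specific, p_any_royal)
```

Here the class is only four times larger than the single outcome, but for biological "targets" the class of functionally acceptable outcomes can be vastly larger than any one sequence, which is why (N)-style calculations overstate the improbability.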

  16. Or perhaps another way:

    You think (Z) is designed and I do not.

    You think (Z) is complex, and I agree

    You think (Z) is unlikely to come about by natural forces, and I think yes spontaneously, but no through a recursive process like evolution.

    You tell me (Z) is full of new CSI. How is this a new argument? It’s a new word, but does it have new meaning (other than “looks designed”)? Does it have new methods and entailments?

    Thanks!

  17. HeKS: Look at a car with the panels stripped away and all the parts and wires exposed that are working together to make the car function as intended and you will not doubt that the presence of Alt-CSI is an observable, empirical fact without having to pull out a scientific calculator.

    But this argument is of no value if the object is evolving. You do in fact need pathetic detail to argue it could not have evolved.

  18. petrushka: I thought we had agreed that Dembski CSI can’t be calculated at all until you rule out incremental evolution.

    I don’t mean to be nitpicky, but for the sake of clarity, you don’t technically “rule out” evolution per se. You determine its degree of probability or improbability, which leads to your CSI calculation on that hypothesis. The amount of Dembski-CSI is hypothesis-dependent, so the same effect, event or system would have different amounts of Dembski-CSI based on the naturalistic hypothesis under consideration. The effect would only be inferred to be designed if it had a large amount of Dembski-CSI under every relevant chance hypothesis and also conformed to a specification. Furthermore, as Ewert has said, Dembski’s CSI argument assumes you have already determined the chance hypotheses to be improbable explanations. It is a conditional argument that tells you you have to determine the probability of the chance hypotheses, but it doesn’t tell you what specific way you must do that. Other arguments or methods must be used to establish that the naturalistic hypotheses are improbable. Dembski-CSI is not a complete argument for ID and is not intended to be. Hence the comment from Ewert that I posted here earlier:

    The problem is that people like Keith attempt to critique specified complexity as though it were a complete argument by itself.

    You really would probably benefit from reading the comment thread to Ewert’s post over at UD.

    I don’t see what “well matched parts” has to do with anything. You have to rule out well matched parts being the result of evolution. All these variations of CSI and irreducible complexity require you first to rule out evolution.

    If you try to use them to rule out evolution, your argument is circular.

    Using the presence of something like irreducible complexity to argue for the implausibility of evolutionary explanations is not circular. The point of such arguments is to show that there are aspects of biological systems that are, for one reason or another, not conducive to evolutionary explanations because they would require steps that would be very difficult, if not functionally impossible, for known evolutionary mechanisms to achieve, such that you would at best have to rely on dumb luck or, even worse, be consistently foiled by purifying selection. As I’ve said, you can debate whether or not these arguments are correct, but they are not circular.

  19. keiths, to HeKS:

    I think Richard and petrushka were hoping that you would describe how it [“alt-CSI”] is calculated and provide an example.

    After that, you can show us how you establish (and justify) the alt-CSI threshold above which the design inference is made.

    HeKS:

    I’m getting the feeling that nobody actually read my comment to him, because all the questions put to me have essentially been about tiny individual aspects of issues that I addressed at some length (and I think with a fair bit of clarity) in that post.

    Want to know what I mean by Alt-CSI? See the post to Patrick.
    Want to know what I mean by complex? See the post to Patrick.
    Want to know what I mean by function? See the post to Patrick.
    Want to know how this form of CSI might be calculated? See the post to Patrick.

    HeKS,

    I reread your extremely lengthy comment to Patrick, but it doesn’t answer my questions.

    On the question of how to measure alt-CSI, you wrote:

    Now, when it comes to directly measuring the functional complexity I’ve been describing, I’m honestly not exactly sure what would be the best way to measure it. There are ways to measure the complexity of systems, but to the best of my knowledge there is no single way to measure this kind of complexity that holds across all systems. I’m not an expert in this area, so I’m not the best person to provide an answer to this question.

    You don’t describe how alt-CSI is calculated. You don’t provide an example. Instead, you admit that you don’t know how to calculate it, and you suggest that we ask someone else!

    My next question was:

    After that, you can show us how you establish (and justify) the alt-CSI threshold above which the design inference is made.

    Here’s the closest you came to addressing that issue in your comment to Patrick:

    And yet, at a certain point I think pretty much anyone would agree that something is, indeed, complex in this sense. Would you disagree that a computer, or a monitor, or a printer, or a car is complex in this sense, and in a way that is functionally-specified in the sense I’ve described above? We might debate whether or not something with three or four parts is truly complex (depending on how the parts fit together), but we’d likely agree that something that consists of 10 parts working together is complex. Certainly it seems very likely we would agree that something consisting of hundreds of interrelated parts is very complex. I think we’d probably also agree that a system consisting of several separate but interdependent subsystems, each of which rely on tens or hundreds of interrelated parts, is also very highly complex.

    You don’t establish, or justify, a complexity threshold above which design can be inferred.

    Why do you think alt-CSI is useful if a) you don’t know how to measure it, by your own admission, and b) you don’t know how much of it would be required for a design inference, even if you could measure it?

  20. HeKS:

    I haven’t yet done the readings you suggested, but one question arises out of your exchange with petrushka and with keiths, here.

    Is there some reason why phenotypes that are “complex” by your criteria cannot have evolved by natural processes, particularly natural selection? (Or why their evolution is at least extremely improbable?)

  21. HeKS: The point of such arguments is to show that there are aspects of biological systems that are, for one reason or another, not conducive to evolutionary explanations because they would require steps that would be very difficult, if not functionally impossible, for known evolutionary mechanisms to achieve

    Behe tried that argument in Edge of Evolution, and failed. Do you know something that Behe doesn’t know?

  22. keiths:
    HeKS,

    In other words:

    1. Determine the probability of a specified target T under naturalistic assumptions.

    2. If T is so vanishingly improbable under naturalistic assumptions that design is the only reasonable explanation, attribute CSI to it.

    3. If T exhibits CSI, conclude that T is vanishingly improbable under naturalistic assumptions and that design is the only reasonable explanation.

    The circularity is obvious.

    No, what’s obvious is that you aren’t accurately representing Dembski-CSI or Barry’s challenge. Let’s rework your steps in a way that accurately reflects Dembski-CSI and Barry’s challenge.

    1. Determine the amount of Dembski-CSI associated with a given specified target under all known relevant chance hypotheses. The lower the probability on a given chance hypothesis, the higher the amount of CSI associated with the target under that hypothesis.

    2. If the probability of the specified target is very low on every relevant naturalistic hypothesis, such that the CSI associated with the target is above a certain threshold (i.e. 500 bits) on each hypothesis, infer that design is the best causal explanation (due to the combination of both improbability and specification) based on the current state of our knowledge of cause-and-effect relationships in the real world.
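The two steps HeKS lays out amount to a decision rule: compute CSI under every relevant chance hypothesis and infer design only if a specified target clears the threshold on all of them, i.e. on the hypothesis that gives it the best chance. A sketch of that rule (the hypothesis names and probabilities are invented for illustration):

```python
import math

THRESHOLD_BITS = 500  # the 500-bit design-inference threshold

def design_inferred(chance_hypotheses: dict, specified: bool) -> bool:
    """HeKS's reworked steps 1-2: compute CSI = -log2(P) under each
    relevant chance hypothesis; infer design only if the target is
    specified AND its CSI exceeds the threshold on EVERY hypothesis."""
    if not specified:
        return False
    csi = {h: -math.log2(p) for h, p in chance_hypotheses.items()}
    return min(csi.values()) > THRESHOLD_BITS

# Invented probabilities: a single hypothesis (e.g. cumulative
# selection) that makes the target reasonably probable is enough
# to block the design inference.
print(design_inferred({"pure chance": 2.0**-600, "selection": 2.0**-20}, True))
print(design_inferred({"pure chance": 2.0**-600, "selection": 2.0**-900}, True))
```

Note that the per-hypothesis probabilities are inputs that must be supplied by some other argument, as Ewert's quoted remark also says; the procedure itself does not produce them.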

    Your #3 has nothing to do with anything. It’s an extra step that is not required in Dembski’s CSI argument. It may seem plausible to your readers because it’s made possible by your misstatement of #2, but it is ultimately just something that you’ve erroneously tacked on to give the impression of circularity.

    Also note that CSI serves no role in that argument except as a label for things that we’ve already determined must have been designed. Designed things are designed.

    The only thing for me to note is that you’ve come to this wrong conclusion because you have misunderstood and misstated the original argument.

    Now let’s look at Barry’s challenge, which was to provide an example of unguided natural processes producing 500 or more bits of CSI.

    An outcome has 500 bits of CSI only if it is vanishingly improbable under unguided naturalistic assumptions — so improbable that even if there were billions of parallel universes, you wouldn’t expect it to happen even once.

    I have no awareness of this billions of parallel universes clause.

    So Barry’s challenge amounts to this:

    Show me an example of unguided natural processes doing something that is vanishingly improbable for unguided natural processes to do.

    The circularity and the vacuity are obvious.

    Well, I actually offered two ways one could respond to Barry’s challenge in the very comment that Joe originally referenced and Barry agreed with me. Here’s what I said:

    [T]he reasoning goes that if some effect is calculated to display a high degree of CSI on all chance hypotheses – or, put another way, is found to match an independent specification and also be astronomically improbable with respect to every known natural process that might be proposed to explain it – then design is tentatively considered to be a better explanation of the effect (being the only kind of cause known to be capable of producing it) than an appeal to extreme good fortune that would not be expected to happen even once in the entire history of the universe.

    There are at least two ways this inference could be falsified: [i.e., Barry’s challenge could be met]:

    1) A natural process could be discovered that shows the effect not to be improbable, thereby falsifying the claim that it demonstrates [a large amount of] CSI; or 2) A natural process could be demonstrated to bring about specified effects that are highly improbable with respect to that particular natural process, thereby falsifying the claim that [a large amount of] CSI implies design for similar and lesser degrees of complexity (improbability).

    In other words, to show that the Dembski-CSI argument simply doesn’t work in principle, show that natural processes can produce effects that match an independent specification and also happen to be sufficiently improbable with respect to those processes to generate a CSI measurement greater than 500 bits. Or to show that it doesn’t help in practice, show that targets that ID proponents think are improbable under known natural processes aren’t really improbable after all (i.e. show that there’s nothing in the biological world that would legitimately generate a CSI measurement that exceeds Barry’s threshold of 500 bits, making the challenge irrelevant to the issue of biological evolution and ID).

    There is nothing circular or vacuous about this. I don’t see anything in Barry’s challenge that contradicts Ewert’s own explanations about Dembski-CSI and Ewert didn’t draw attention to any issues with the challenge when I talked about it with him by email. If there’s some flaw in Barry’s challenge, it is something other than the criticisms you have leveled at it.

    As Joe put it earlier:

    Upon being made aware of it [an apparent example of unguided processes producing CSI], we (or Dembski) would then modify our assessment that the pattern contained CSI. So it would still be true that assessing whether a pattern is so improbable under natural processes of evolution as to be implausible is something you do before declaring it to have CSI.

    I don’t know where Joe is getting that from, but it’s certainly not consistent with Ewert’s presentation of the concept or even the logic of Dembski-CSI. The comparison of a specified effect to a chance hypothesis attempting to explain it will generate a Dembski-CSI measurement in bits based on the degree of improbability exhibited by the effect under that hypothesis. The improbability of the target on a given hypothesis hinges on the nature of the hypothesized naturalistic process. It is whatever it is. It’s either highly improbable on the hypothesis or it isn’t. Likewise, an effect either legitimately matches a specification or it doesn’t. If it does, sticking your fingers in your ears and shaking your head isn’t going to make a difference. Because of all this, if a natural process generates an effect that is highly improbable given that process and it happens to match a specification, then it will generate a large Dembski-CSI measurement no matter who does or does not want it to. The logic of the methodology and the argument doesn’t allow for one to make an arbitrary claim that there is no large amount of CSI present in the effect simply because it was generated by a non-intelligent process. If you or Joe think it does then you should drop Ewert a line and get a response from him directly.

  23. You really seem to be ignoring the fact that you can’t calculate CSI until you determine the probability under the “chance” hypothesis.

  24. Keith,

    From my comment to Patrick:

    Now, you ask how exactly can we measure functionally specified information and using what units. I would say that when it comes to something like molecular machines, we can at least begin by looking to the genetic code that specifies their parts lists. There would be approximately 2 bits of genetic information per base pair of nucleotides (1 member of the pair specifies the other but there are 4 options to choose from). We may be able to add another bit per base pair signaling methylated or non-methylated where appropriate (and there are other types of epigenetic information, but we’ll ignore them for this purpose).

    On this basis one could determine the number of base pairs that impact the folding of the individual protein parts and their ability to do their job. This would give a low-ball measure of information in bits, since there’s also the matter of assembly instructions for molecular machines, which would likely include further epigenetic information (and perhaps further genetic information) as well. Nonetheless, this would give us somewhere to start. That said, I’m not really in a position to personally figure out those numbers.

    Now, when it comes to directly measuring the functional complexity I’ve been describing, I’m honestly not exactly sure what would be the best way to measure it. There are ways to measure the complexity of systems, but to the best of my knowledge there is no single way to measure this kind of complexity that holds across all systems. I’m not an expert in this area, so I’m not the best person to provide an answer to this question.

    I’ve plainly admitted that I am, unfortunately, not a math person (though if anyone is aware of any good online resources I’m happy to accept suggestions). I’m also not an expert in this field. I see no point in pretending to know what I don’t know or taking on the role of determining or trying to teach others highly technical issues that are not within my areas of expertise and that I don’t have a firm grasp on myself. I understand the logic of the argument. I don’t do the math. If you want to criticize me for that or for suggesting that it would be wiser to get those details from someone who has a more extensive background in the field then you are free to do so.

    Also, regarding an Alt-CSI threshold, the point I was making to Patrick was that there comes a point apart from any calculation or measurement that one simply recognizes the presence of this Alt-CSI as an empirical observable fact. As for whether there would be a single numerical threshold for such CSI in order to make the design inference, that would probably depend on whether there’s a single consistent method of measuring complexity across all types of systems, which as I’ve said, I don’t think there is. That’s not a failing of ID. It’s just a present reality of studies into system complexity in general.

    Apart from any such numerical threshold, as I’ve said, I think most ID proponents would say that what determines the need for a design inference, in addition to the need for a specification, is whether the particular system would seem to require stages that would, in principle, seem to be highly problematic for known evolutionary mechanisms on any kind of reasonable timescale. If such stages would be required then it would be reasonable to conclude that evolutionary explanations are unlikely to be correct.
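    [Ed.: The per-base-pair accounting HeKS sketches above (roughly 2 bits per base pair, plus an optional bit per methylatable site) can be put into a few lines of code. The function name and the example counts below are hypothetical, used only to make the arithmetic concrete:]

```python
import math

def genetic_info_bits(num_base_pairs, methylatable_sites=0):
    """Low-ball information estimate per the accounting described above:
    ~2 bits per base pair (4 nucleotide options), plus 1 bit per site
    that can signal methylated/non-methylated."""
    bits_per_bp = math.log2(4)  # 4 options -> 2 bits
    return num_base_pairs * bits_per_bp + methylatable_sites

# A hypothetical 1,000 bp coding region with 50 methylatable sites:
print(genetic_info_bits(1000, 50))  # 2050.0
```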

  25. petrushka:
    You really seem to be ignoring the fact that you can’t calculate CSI until you determine the probability under the “chance” hypothesis.

    No, I’m not ignoring that. And I’m not sure why you think I am since I’ve said that multiple times. Knowing the probability is necessary to make the calculation. What I have been saying, however, is that the structure of the Dembski-CSI argument assumes that the probability or improbability of a specified target has already been calculated. The means used to establish that improbability is open and is not directly determined by Dembski’s argument.

  26. Well gentlemen, it’s 4:10am here and I’m going to bed. I’ll check in tomorrow if I get a chance but, to be honest, it’s getting a little exhausting trying to provide substantive responses to 4 or so people by myself and I’m sure I’m missing some stuff since every time I post a comment I notice 5 other comments directed at me have gone up as well. I head home Saturday morning and then early next week I jump head first into a programming project that, if the last iteration is any guide, will turn me into a hermit for a few months (the last one had me working 15-17 hr days, 7 days per week, for almost a full month). I’ll check in as I can but it may be months again before I have any time to really participate (and I need to give some of my attention to UD). I’m sorry if I failed at being clear on any issues or didn’t have time to respond to something. It wasn’t intentional.

    Take care,
    HeKS

  27. You have invested an enormous amount of time and effort avoiding the only thing we have asked you to do. Show us the calculation for alt-csi.

  28. petrushka:
    You have invested an enormous amount of time and effort avoiding the only thing we have asked you to do. Show us the calculation for alt-csi.

    Note that HeKS’s first step is “Determine the amount of Dembski-CSI associated with a given specified target under all known relevant chance hypotheses”

    I.e. it fails at step one.

  29. This is really quite easy. There are no stages that are problematic for evolution. Next question.

  30. petrushka, to HeKS:

    You have invested an enormous amount of time and effort avoiding the only thing we have asked you to do. Show us the calculation for alt-csi.

    He’s already admitted that he doesn’t know how to measure or calculate it.

    He goes on to argue that it doesn’t need to be calculated, nor does a threshold need to be determined, because

    …regarding an Alt-CSI threshold, the point I was making to Patrick was that there comes a point apart from any calculation or measurement that one simply recognizes the presence of this Alt-CSI as an empirical observable fact.

    Petrushka got it right:

    So if it looks [sufficiently] complicated to you, it’s designed.

  31. I would just add that there is an inaccuracy in HeKS’s statement of how one uses D-CSI (Dembski’s post-2005 version).

    One does not calculate the “amount of CSI”. Dembski has us calculate the amount of specified information (SI). If it exceeds the threshold that is calculated from the Universal Probability Bound, then we get to say that CSI is present. That threshold is 500 bits.

    Dembski’s elaborate-looking 2006 formula used to establish whether CSI is present amounts to doing this comparison.

    So CSI is not a number that you calculate. It is a yes/no statement based on comparing two numbers.
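    [Ed.: The yes/no comparison described above can be made concrete in a few lines. This is a sketch of the threshold test only, not Dembski’s full 2006 formula, and the function names are invented for illustration:]

```python
import math

UPB_THRESHOLD_BITS = 500  # threshold derived from the Universal Probability Bound

def specified_information_bits(p):
    """Specified information (in bits) of a specified event with
    probability p under the chance hypothesis being tested."""
    return -math.log2(p)

def csi_present(p):
    """CSI is a yes/no verdict: SI compared against the 500-bit threshold."""
    return specified_information_bits(p) > UPB_THRESHOLD_BITS

print(csi_present(2 ** -400))  # False (400 bits of SI, below threshold)
print(csi_present(2 ** -600))  # True  (600 bits of SI, above threshold)
```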

  32. I’m trying to be charitable, but it appears that alt-CSI boils down to one or more of the following.

    1. It exists because I say so. (Durston)
    2. It exists because it’s really complex and has all kinds of interlocking wizbangs. (Paley)
    3. It exists because the coding string is really long. (Durston)
    4. It exists because there is an unbridgeable gap between the ancestral coding sequence and the current sequence. (Behe’s Edge)
    5. Unbridgeable gaps exist because islands of function are isolated. (KF)
    6. If 4 and 5 are false, then the functional landscape is designed. (Dembski, Search for a Search)
    7. Something else, TBA

  33. Responding to Joe:

    The concept of CSI exists to prove or support one of the following:

    1. Coding sequences are too long to have arisen by natural means.
    2. Coding sequences could not have arisen by natural means for some other reason.

    The point is that CSI as a concept exists to argue that natural means are insufficient. CSI as a conclusion can only exist if one has demonstrated by other arguments that natural means are insufficient.

    The concept of CSI is circular or irrelevant, or both.

  34. We’ve gone from:

    It’s designed because design is self-evident

    to

    It’s designed because it’s full of CSI. CSI is self-evident.

  35. It’s designed because evolution is insufficient.
    Evolution is insufficient because it cannot produce CSI.
    CSI is present because evolution is insufficient.

    You can make it moar sciency by adding intermediate arguments to fill the gaps.

    While making the previous posts it occurred to me that B-CSI (Behe CSI) could be called micro-CSI to distinguish it from Durston CSI.

    Micro-CSI is doable by natural means. When it falls over Behe’s Edge, you have macro-CSI.

  36. “You can make it moar sciency by adding intermediate arguments to fill the gaps.” – which helps prop up the courtier’s reply.

  37. You guys are such kidders.

    As I’ve said a few times now, there is a difference between calculating the complexity of Alt-CSI and the information of Alt-CSI. When it comes to the former, there is no single way to measure complexity across all types of systems that I’m aware of, and so there can be no single numerical threshold of complexity that is required. That some system is complex in the sense of having many well-matched parts that work together is simply an observable fact that is unlikely to be questioned as soon as you get any kind of system that consists of more than a couple parts. When it comes to biological systems, there’s no ambiguity in this regard.

    Now, when it comes to calculating the information of Alt-CSI (since, you know, we’re talking about Complex Specified Information), I already gave an example of how you would go about doing that for some molecular machine that should be none too surprising. As for the informational threshold required of Alt-CSI to infer design, I’ve never heard it as being any different than the Dembski-CSI threshold of 500 bits. In both cases you’re talking about information that is specified to fulfill a useful function. In Alt-CSI, the C is primarily a descriptor of an observable attribute of some effect, system, block of code, etc., rather than a specific measure. As such, there is no need for an additional numerical threshold of complexity over and above the required information threshold.

    HeKS

  38. HeKS: I already gave an example of how you would go about doing that for some molecular machine that should be none too surprising.

    Can you do this for a biological system? Can you show us a worked example? You could start with something simple, like a virus. (Unless you think viruses aren’t designed 🙂 )

  39. HeKS:
    You guys are such kidders.
    As I’ve said a few times now, there is a difference between calculating the complexity of Alt-CSI and the information of Alt-CSI. When it comes to the former, there is no single way to measure complexity across all types of systems that I’m aware of, and so there can be no single numerical threshold of complexity that is required. That some system is complex in the sense of having many well-matched parts that work together is simply an observable fact that is unlikely to be questioned as soon as you get any kind of system that consists of more than a couple parts. When it comes to biological systems, there’s no ambiguity in this regard.

    Now, when it comes to calculating the information of Alt-CSI (since, you know, we’re talking about Complex Specified Information), I already gave an example of how you would go about doing that for some molecular machine that should be none too surprising. As for the informational threshold required of Alt-CSI to infer design, I’ve never heard it as being any different than the Dembski-CSI threshold of 500 bits. In both cases you’re talking about information that is specified to fulfill a useful function. In Alt-CSI, the C is primarily a descriptor of an observable attribute of some effect, system, block of code, etc., rather than a specific measure. As such, there is no need for an additional numerical threshold of complexity over and above the required information threshold.
    HeKS

    Can evolution produce CSI? If so, why bother?

    If not, how do you know that evolution cannot produce it?

    You are producing a lot of verbiage without responding to the only question that matters.

  40. Perhaps a simpler question. What is the point of calculating the information content of a coding string if the string can be produced by evolution?

    And if you are saying the string cannot be produced by evolution, how do you know?

  41. petrushka: Can evolution produce CSI?

    This is the question. This is why I wanted William to address some specific points about Lenski’s work. Points that could only have gone badly for him as they required him to take a position and explain it.

    If we take before and after snapshots then if *INSERT CURRENT ITEM HERE* can be measured by IDists then I’d expect them to be falling over themselves to do it. It proves their case!

    I’d not expect them to say, as they do, that “all that stuff that happened was just rearrangement of existing stuff or breaking stuff” – if evolution can add what they claim it cannot then that’s that for that argument. Seems eminently testable to me, for the generic case. They just have to actually provide the mechanism to do the before and after calculation.

    HeKS: I already gave an example of how you would go about doing that for some molecular machine that should be none too surprising

    I have a good feeling about this ID science project that might, just might, be starting up. Let the rest of em know will ya! 🙂

  42. If you want an honest estimate of the information in a novel DNA sequence, you would need to know how far it has diverged from its nearest ancestor that did something different.
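    [Ed.: One crude way to operationalize this suggestion, assuming aligned, equal-length sequences and ignoring indels, back-mutations, and substitution models entirely (the function and the example sequences are made up for illustration):]

```python
def divergence_bits(ancestor, descendant):
    """Rough estimate: ~2 bits per base that differs from the nearest
    ancestral sequence that did something different. Ignores indels,
    back-mutations, and any substitution model."""
    if len(ancestor) != len(descendant):
        raise ValueError("sequences must be aligned and equal length")
    diffs = sum(a != d for a, d in zip(ancestor, descendant))
    return 2 * diffs

print(divergence_bits("ACGTACGT", "ACGAACGT"))  # 2 (one substitution)
```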

  43. petrushka:
    Responding to Joe:

    The concept of CSI exists to prove or support one of the following:

    1. Coding sequences are too long to have arisen by natural means.
    2. Coding sequences could not have arisen by natural means for some other reason.

    The point is that CSI as a concept exists to argue that natural means are insufficient. CSI as a conclusion can only exist if one has demonstrated by other arguments that natural means are insufficient.

    The concept of CSI is circular or irrelevant, or both.

    petrushka:
    If you want an honest estimate of the information in a novel DNA sequence, you would need to know how far it has diverged from its nearest ancestor that did something different.

    However logical it may seem to look at divergence of coding sequences, Dembski’s argument does not specifically mention coding sequences. It has a scale (implicitly, fitness) and in its original form compares the position of the organism to the distribution of values you would get when the genomes are randomly generated strings. The distinction between coding and noncoding sequences is not invoked.

Leave a Reply