Does gpuccio’s argument that 500 bits of Functional Information implies Design work?

On Uncommon Descent, poster gpuccio has been discussing “functional information”. Most of gpuccio’s argument is a conventional “islands of function” argument. Not being very knowledgeable about biochemistry, I’ll happily leave that argument to others.

But I have been intrigued by gpuccio’s use of Functional Information, in particular gpuccio’s assertion that if we observe 500 bits of it, this is a reliable indicator of Design, as here, at about the 11th sentence of point (a):

… the idea is that if we observe any object that exhibits complex functional information (for example, more than 500 bits of functional information) for an explicitly defined function (whatever it is) we can safely infer design.

I wonder how this general method works. As far as I can see, it doesn’t work. There would seem to be three possible ways of arguing for it, and in the end two don’t work and one is just plain silly. Which of these is the basis for gpuccio’s statement? Let’s investigate …

A quick summary

Let me list the three ways, briefly.

(1) The first is the argument using William Dembski’s (2002) Law of Conservation of Complex Specified Information. I have argued (2007) that this is formulated in such a way as to compare apples to oranges, and thus is not able to reject normal evolutionary processes as explanations for the “complex” functional information.  In any case, I see little sign that gpuccio is using the LCCSI.

(2) The second is the argument that the functional information indicates that only an extremely small fraction of genotypes have the desired function, and the rest are all alike in totally lacking any of this function.  This would prevent natural selection from following any path of increasing fitness to the function, and the rareness of the genotypes that have nonzero function would prevent mutational processes from finding them. This is, as far as I can tell, gpuccio’s islands-of-function argument. If such cases can be found, then explaining them by natural evolutionary processes would indeed be difficult. That is gpuccio’s main argument, and I leave it to others to argue with its application in the cases where gpuccio uses it. I am concerned here, not with the islands-of-function argument itself, but with whether the design inference from 500 bits of functional information is generally valid.

We are asking here whether, in general, observation of more than 500 bits of functional information is “a reliable indicator of design”. And gpuccio’s definition of functional information is not confined to cases of islands of function, but also includes cases where there would be a path along which function increases. In such cases, seeing 500 bits of functional information, we cannot conclude that the function is extremely unlikely to have arisen by normal evolutionary processes. So the general rule that gpuccio gives fails: it is not reliable.

(3) The third possibility is an additional condition that is added to the design inference. It does not simply define “complex functional information” as a case where we can define a level of function that makes the probability of the set less than 2^{-500}. It adds the declaration that unless the set of genotypes is effectively unreachable by normal evolutionary processes, we do not call the pattern “complex functional information”. That additional condition allows us to safely conclude that normal evolutionary forces can be dismissed, by definition. But it leaves the reader to do the heavy lifting: the reader has to determine that the set of genotypes has an extremely low probability of being reached, and once they have done that, the further step of declaring that the genotypes have “complex functional information” adds nothing to our knowledge. CFI becomes a useless add-on that sounds deep and mysterious but tells you nothing beyond what you already know. And there seems to be some indication that gpuccio does use this additional condition.

Let us go over these three possibilities in some detail. First, what is the connection of gpuccio’s “functional information” to Jack Szostak’s quantity of the same name?

Is gpuccio’s Functional Information the same as Szostak’s Functional Information?

gpuccio acknowledges that gpuccio’s definition of Functional Information is closely connected to Jack Szostak’s definition of it. gpuccio notes here:

Please, not[e] the definition of functional information as:

“the fraction of all possible configurations of the system that possess a degree of function >=
Ex.”

which is identical to my definition, in particular my definition of functional information as the
upper tail of the observed function, that was so much criticized by DNA_Jock.

(I have corrected gpuccio’s typo of “not” to “note”, JF)

We shall see later that there may be some ways in which gpuccio’s definition
is modified from Szostak’s. Jack Szostak and his co-authors never attempted any use of his definition to infer Design. Nor did Leslie Orgel, whose Specified Information (in his 1973 book The Origins of Life) preceded Szostak’s. So the part about design inference must come from somewhere else.
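
To make Szostak’s definition concrete, here is a minimal sketch (the ensemble, activity values, and cutoff are invented purely for illustration, and the function name is my own):

```python
import math

def functional_information(activities, ex):
    """Szostak-style FI: -log2 of the fraction of all sequences in the
    ensemble whose measured function is >= the cutoff Ex."""
    n_functional = sum(1 for a in activities if a >= ex)
    if n_functional == 0:
        return float("inf")  # no sequence meets the cutoff
    return -math.log2(n_functional / len(activities))

# Toy ensemble: 1024 sequences, only 4 of which meet the cutoff.
activities = [1.0] * 4 + [0.0] * 1020
print(functional_information(activities, ex=0.5))  # -log2(4/1024) = 8.0 bits
```

On this definition, 500 bits simply means that fewer than one in 2^{500} of all possible sequences meets the cutoff; nothing in the formula itself says anything about how function is distributed among the remaining sequences, or whether paths of increasing function exist.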

gpuccio seems to be making one of three possible arguments:

Possibility #1 That there is some mathematical theorem that proves that ordinary evolutionary processes cannot result in an adaptation that has 500 bits of Functional Information.

Use of such a theorem was attempted by William Dembski, his Law of Conservation of Complex Specified Information, explained in Dembski’s book No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002). But Dembski’s LCCSI theorem did not do what Dembski needed it to do. I have explained why in my own article on Dembski’s arguments (here). Dembski’s LCCSI changed the specification between the before and after states of the evolutionary process, and so he was comparing apples to oranges.

In any case, as far as I can see gpuccio has not attempted to derive gpuccio’s argument from Dembski’s, and gpuccio has not directly invoked the LCCSI, or provided a theorem to replace it.  gpuccio said in a response to a comment of mine at TSZ,

Look, I will not enter the specifics of your criticism to Dembski. I agre[e] with Dembski in most things, but not in all, and my arguments are however more focused on empirical science and in particular biology.

While thus disclaiming that the argument is Dembski’s, gpuccio nonetheless associates the argument with Dembski here by saying that

Of course, Dembski, Abel, Durston and many others are the absolute references for any discussion about functional information. I think and hope that my ideas are absolutely derived from theirs. My only purpose is to detail some aspects of the problem.

and by saying elsewhere that

No generation of more than 500 bits has ever been observed to arise in a non design system (as you know, this is the fundamental idea in ID).

That figure being Dembski’s, this leaves it unclear whether gpuccio is or is not basing the argument on Dembski’s. But gpuccio does not directly invoke the LCCSI, or try to come up with some mathematical theorem that replaces it.

So possibility #1 can be safely ruled out.

Possibility #2. That the target region in the computation of Functional Information consists of all of the sequences that have nonzero function, while all other sequences have zero function. As there is no function elsewhere, natural selection for this function then cannot favor sequences closer and closer to the target region.

Such cases are possible, and usually gpuccio is talking about cases like this. But gpuccio does not require them in order to have Functional Information. gpuccio does not rule out that the region could be defined by a high level of function, with lower levels of function in sequences outside of the region, so that there could be paths allowing evolution to reach the target region of sequences.
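
The point can be shown with a deliberately simple simulation (the genome length, fitness function, and mutation scheme below are all invented for illustration): when function is graded outside the target region, an uphill walk readily reaches a target whose frequency among all genotypes is vanishingly small.

```python
import random

random.seed(1)
L = 64               # the all-ones target has frequency 2**-64, i.e. 64 bits of FI
genome = [0] * L     # start with no function at all

def fitness(g):
    return sum(g)    # graded function: every additional 1 raises the function level

steps = 0
while fitness(genome) < L:
    mutant = genome[:]
    mutant[random.randrange(L)] ^= 1   # flip one randomly chosen bit
    if fitness(mutant) >= fitness(genome):
        genome = mutant                # keep neutral or beneficial changes
    steps += 1

print(fitness(genome), steps)  # the walk reaches the full-function target
```

Scaled up to sequences longer than 500 positions, the same uphill walk would reach a target with more than 500 bits of Szostak-style FI, which is why gradedness outside the target region defeats the general design inference.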

An example in which gpuccio recognizes that lower levels of function can exist outside the target region is found here, where gpuccio is discussing natural and artificial selection:

Then you can ask: why have I spent a lot of time discussing how NS (and AS) can in some cases add some functional information to a sequence (see my posts #284, #285 and #287)

There is a very good reason for that, IMO.

I am arguing that:

1) It is possible for NS to add some functional information to a sequence, in a few very specific cases, but:

2) Those cases are extremely rare exceptions, with very specific features, and:

    3) If we understand well what are the feature[s] that allow, in those exceptional cases, those limited “successes” of NS, we can easily demonstrate that:

4) Because of those same features that allow the intervention of NS, those scenarios can never, never be steps to complex functional information.

Jack Szostak defined functional information by having us choose a cutoff level of function, which picks out the set of sequences whose function is greater than that cutoff, without any condition that the other sequences have zero function. Neither did Durston impose such a condition. And as we’ve seen, gpuccio associates his argument with theirs.

So this second possibility could not be the source of gpuccio’s general assertion about 500 bits of functional information being a reliable indicator of design, however much gpuccio concentrates on such cases.

Possibility #3. That there is an additional condition in gpuccio’s Functional Information, one that does not allow us to declare it to be present if there is a way for evolutionary processes to achieve that high a level of function. In short, if we see 500 bits of Szostak’s functional information, and if it can be put into the genome by natural evolutionary processes such as natural selection then for that reason we declare that it is not really Functional Information. If gpuccio is doing this, then gpuccio’s Functional Information is really a very different animal than Szostak’s functional information.

Is gpuccio doing that? gpuccio does associate his argument with William Dembski’s, at least in some of his statements. And William Dembski has defined his Complex Specified Information in this way, adding the condition that it is not really CSI unless it is sufficiently improbable that it be achieved by natural evolutionary forces (see my discussion of this here, in the section on “Dembski’s revised CSI argument” that refers to Dembski’s statements here). And Dembski’s added condition renders use of his CSI a useless afterthought to the design inference.

gpuccio does seem to be imposing a similar condition. Dembski’s added condition comes in via the calculation of the “probability” of each genotype. In Szostak’s definition, the probabilities of sequences are simply their frequencies among all possible sequences, with each counted equally. In Dembski’s CSI calculation, we are instead supposed to compute the probability of the sequence arising given all evolutionary processes, including natural selection.
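
The contrast can be made explicit with a toy calculation (the genotypes and the process weights below are invented purely for illustration): counting genotypes equally gives one number, while weighting them by the probability that some process produces them gives quite another.

```python
import math

genotypes = ["AA", "AB", "BA", "BB"]
target = {"AA"}  # the functional set

# Szostak-style: every genotype counted equally.
uniform_fi = -math.log2(len(target) / len(genotypes))

# Dembski-style: probability of the target under a process that favors it
# (these weights are made up; they stand in for "all evolutionary processes").
process_prob = {"AA": 0.7, "AB": 0.1, "BA": 0.1, "BB": 0.1}
process_surprisal = -math.log2(sum(process_prob[g] for g in target))

print(uniform_fi)         # 2.0 bits
print(process_surprisal)  # about 0.51 bits
```

When a process strongly favors the target, a large value of uniform-counting FI can coexist with quite modest improbability under the actual process, which is exactly why the added condition has to smuggle the evolutionary calculation back in.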

gpuccio has a similar condition in the requirements for concluding that complex
functional information is present:  We can see it at step (6) here:

If our conclusion is yes, we must still do one thing. We observe carefully the object and what we know of the system, and we ask if there is any known and credible algorithmic explanation of the sequence in that system. Usually, that is easily done by excluding regularity, which is easily done for functional specification. However, as in the particular case of functional proteins a special algorithm has been proposed, neo darwininism, which is intended to explain non regular functional sequences by a mix of chance and regularity, for this special case we must show that such an explanation is not credible, and that it is not supported by facts. That is a part which I have not yet discussed in detail here. The necessity part of the algorithm (NS) is not analyzed by dFSCI alone, but by other approaches and considerations. dFSCI is essential to evaluate the random part of the algorithm (RV). However, the short conclusion is that neo darwinism is not a known and credible algorithm which can explain the origin of even one protein superfamily. It is neither known nor credible. And I am not aware of any other algorithm ever proposed to explain (without design) the origin of functional, non regular sequences.

In other words, you, the user of the concept, are on your own. You have to rule out that natural selection (and other evolutionary processes) could reach the target sequences. And once you have ruled it out, you have no real need for the declaration that complex functional information is present.

I have gone on long enough. I conclude that the rule that observation of 500 bits of functional information allows us to conclude in favor of Design (or at any rate, to rule out normal evolutionary processes as the source of the adaptation) simply does not exist. Or if it does exist, it is a useless add-on to an argument that draws that conclusion for some other reason, leaving the really hard work to the user.

Let’s end by asking gpuccio some questions:
1. Is your “functional information” the same as Szostak’s?
2. Or does it add the requirement that there be no function in sequences that
are outside of the target set?
3. Does it also require us to compute the probability that the sequence arises as a result of normal evolutionary processes?

1,971 thoughts on “Does gpuccio’s argument that 500 bits of Functional Information implies Design work?”

  1. Mung: Some things are just blatantly obvious. Haven’t we already had this conversation?

    You have blindly asserted that before, yes. Still no support given.

    If you can just declare it blatantly obvious, I can just flat out contradict it. Is that really where we are?

    Explain yourself, what is it that makes it “obvious” to you?

  2. DNA_Jock,

    People who are equating FI with a real-world probability and therefore committing the TSS fallacy.
    gpuccio
    colewd
    Szostak

    Read the paper mate.

  3. That’s cute, colewd,

    I suspect that you cannot tell the difference between

    the fraction of all
    possible configurations of the system that possess a degree of
    function >= Ex.
    [Hazen, 2007]

    which is what FI is a measure of, and

    the probability that a given sequence will encode a molecule with greater than any
    given degree of function.
    [Szostak, 2003; as edited by DNA_Jock]

    which is NOT what FI is a measure of.
    Have a more careful look at what Szostak actually wrote, and you might realize that he doesn’t make the mistake you do.

    There’s a reason I included the qualifier ‘real-world’ colewd: because I have read Szostak.

  4. DNA_Jock,

    There’s a reason I included the qualifier ‘real-world’ colewd: because I have read Szostak.

    I know why you used the qualifier, yet there are many who have estimated functional information through a probability equation derived empirically. Our friend Arthur Hunt wrote an article about this, and so did Jack Szostak. Let’s also not forget the Hayashi paper.

    The basic equation that Szostak established is a probability equation, and its purpose is to set the framework for estimating real-world functional information. As long as the estimates are made in good faith, your TSS label is bogus, or you are damning all historical science.

  5. DNA_Jock: I am curious, how does your “minimum level” differ from my “threshold”, if at all?

    I doubt that it differs at all. It’s where the plane intersects. Below the plane doesn’t mean non-functional, it just means below the threshold set by the person who wants to calculate the FI as a measure of system complexity

  6. colewd: The basic equation that Szostak established is a probability equation and its purpose is to set the framework for estimating real world functional information.

    Avida isn’t real-world.

  7. DNA_Jock: Naughty boy, Mung. I explained to you why he was not. Please behave better.

    I was referring to the following comment of yours, which clearly includes what dazz was doing.

    DNA_Jock: Of course one changes the threshold as one goes along. One is merely using an alternative way of describing altitude. And of course that constitutes TSS.

  8. colewd: The basic equation that Szostak established is a probability equation and its purpose is to set the framework for estimating real world functional information. As long as the estimates are made in good faith your TSS label is bogus or you are damning all historical science.

    Thank you colewd, for such an excellent distillation of the fallacies.
    Sentence 1:
    The FI equation is NOT a probability equation. It represents a ratio. You could equate that ratio with a probability if and only if all elements are equiprobable (which is an abstraction). Hence my use of the phrase “real-world probability“. I used the qualifier on “probability”, not “functional information”.
    WTF?
    In 2003, Jack used the word “random”, meaning (in the context he used it) equiprobable. He assumed that the readership of Nature would be numerate.
    Sentence 2:
    Now, I wouldn’t make any assumptions about ‘good faith’; it is a simple fact that you cannot look at your data and then devise a valid Fisherian test for it. I covered this with gpuccio years ago, but he has not cottoned on yet. Both the “historical sciences” and clinical research (and statisticians everywhere) have found ways around this particular issue. IDists, not so much. And there’s a reason for that.

  9. Mung,

    Still being a naughty boy Mung – you claimed that I said dazz was ‘committing’ TSS.
    Which I made clear he was not. Adjusting the threshold of course comprises TSS; it only becomes a fallacy that one could “commit” if you misuse the metric.
    He hasn’t, and I have never implied otherwise.

  10. Mung: Avida isn’t real-world.

    Seems to me that avida really does exist in the real world, insofar as the computers on which the program runs are real. It is its very own instance of evolution, which is in many ways highly analogous to how biological evolution works.

  11. Mung: Is there some disagreement about whether gpuccio specified a function or not? You seem to be accepting that he has done. So where’s the problem?

    The problem is that gpuccio insists that 500 bits of this pointless I-decided-what-the-function-is-after-the-fact FI demonstrates that a protein has been designed. But I suspect that even a protein neutrally evolving by genetic drift will at some point have diverged enough from its ancestral state to make the cut. I would like to have some function in my function, please.

  12. DNA_Jock: Still being a naughty boy Mung – you claimed that I said dazz was ‘committing’ TSS.

    Would it make you happier if I said that he was adjusting the threshold and of course that comprises TSS? So he painted the target around the tightest grouping of hits but he didn’t commit the fallacy because he never claimed to be a sharpshooter. That about it?

  13. DNA_Jock: The FI equation is NOT a probability equation. It represents a ratio.

    Which can be viewed as a probability. And everyone seems to be treating it as such because it’s additive. And if it’s not additive, why is everyone treating it as if it is?

    The whole OP and much of the debate in this thread is constructed around the idea that you can take this bit of FI, add another bit of FI to it, and another to that, until you get 500 bits, and voila! evolution can generate 500 bits of FI. Therefore gpuccio is wrong.

  14. DNA_Jock,

    Now, I wouldn’t make any assumptions about ‘good faith’; it is a simple fact that you cannot look at your data and then devise a valid Fisherian test for it. I covered this with gpuccio years ago, but he has not cottoned on yet. Both the “historical sciences” and clinical research (and statisticians everywhere) have found ways around this particular issue. IDists, not so much. And there’s a reason for that.

    On the Fisherian test, I agree. What are the ways around it and why are IDists not in compliance?

  15. Mung,
    Yes. I’m happy now. You should apologize to dazz, though.

    Mung,

    No. Are you insane? Probabilities aren’t “additive”. (Unless they are mutually exclusive, which obviously doesn’t apply here.)

    Ratios aren’t additive either. But if you take the logs, then they are.
    (And no, the logs of probabilities aren’t “additive” either, unless the probabilities are independent, which obviously doesn’t apply here).
    Think, Mung.

  16. colewd: On the Fisherian test, I agree. What are the ways around it and why are IDists not in compliance?

    Well, you should check out the link I provided, Bill:
    pre-specifying the analysis and the criteria for rejection of the null, are key.
    Sadly, IDists cannot even formulate the appropriate null.

    Hint: “What is the probability that a random choice amongst equiprobable sequences (the IDist workhorse) would yield a sequence this unusual?” is not an appropriate null. Dr. Dembski has explained why.

    Oy vey.

  17. Rumraket: I don’t see how this whole information-gibberish even advances the design argument at all. It doesn’t tell us anything meaningful or useful. It just appears like fancy technobabble designed to make the idea of a “design inference” sound sciency. It’s pretension.

    Nothing in ID makes sense except in the light of history.

    I wish it were not so. But I have found, over and over, that technical anomalies of ID are attributable to strategic decisions made by leaders of the ID movement — especially the law professor Phillip Johnson — in the first decade. In my opinion, the most important book on ID is Creationism’s Trojan Horse: The Wedge of Intelligent Design (2004). I went a long time without reading it, and I’m telling you now that it was a big mistake.

    Johnson is generally honest in what he says about the movement. His dishonesty is in what he neglects to say. When Johnson says that he saw the creation-evolution debate encapsulated in two books, Evolution: A Theory in Crisis and The Blind Watchmaker, you definitely should believe him. However, you should not make anything of his non-mention of Paley, whose Natural Theology Dawkins attacked. It’s obvious that some major components of ID come from the first two chapters of Natural Theology. Those chapters are secular, but the ID movement is not going to associate itself with a theological treatise.

    There’s a lot of yimmer-yammer about information, including genetic information, in The Blind Watchmaker. I don’t know who first read into it the claim that evolution creates information, but I suspect that it was Dembski, not Johnson. In Chapter 1, Dawkins identifies a property of complicated objects that give the appearance of design, “statistically improbable in a direction specified not by hindsight alone” (I’m quoting from memory, so that may be a bit off). In essence, the ID movement said, “Hey, we can work with that.” Dembski tried to establish that the property is not just the appearance of design, but in fact a “reliable marker” of design. In his dissertation in history and philosophy of science, The Design Inference, he referred to the property as specified complexity (alternatively, complex specified information). However, he did not attribute specified complexity to Dawkins. From the perspective of the ID movement, it would have been better if he had made it clear that he was confronting Dawkins head on. One reasonably wonders whether Dembski’s dissertation committee would have approved if he had given a complete explanation of what he was doing.

    There’s little doubt that Dembski was stepping carefully. Only after he had defended his dissertation, and had gotten a publisher for it, did he reveal that he had his very own law of nature. The Law of Conservation of [Complex Specified] Information first appeared in “Intelligent Design as a Science of Information,” on “The Philosophy Page” of Science in Christian Perspective (1997). The ID movement went on to make up stories about the suppression of ID by materialists who rejected information. Funny thing is, while the ID movement was busy making up stories, physicists actually did a fair amount of work on information. (Funnier yet, reports of the work have appeared in the science news outlets that Big Zero Leary monitors, but have not made their way to UD.) I’m inclined to say that there’s no stronger indictment of ID than its total oblivion to the science of information.

    When Dembski (later joined by Marks) talks about conservation of information, he’s invariably engaged in arguments from improbability, with improbability expressed on a logarithmic scale. To repeat what I’ve said a number of times, in communication theory (the context in which Shannon developed information theory), there is considerably more to information than log transformation of improbability. As for “functional information,” I note that Hazen et al. put the term in scare quotes, as I just have, and indicate repeatedly that they have developed a measure of complexity. If they indicate somewhere that they have developed a measure of information, then I have missed it. Szostak clearly was thinking of information in his original note. But that was just a one-page sketch of an interesting idea. What Szostak and coworkers say after further development of the idea trumps what Szostak initially said.

  18. DNA_Jock: Well, you should check out the link I provided, Bill:
    pre-specifying the analysis and the criteria for rejection of the null, are key.
    Sadly, IDists cannot even formulate the appropriate null.

    I mentioned way up the thread that gpuccio had abandoned Dembski’s requirement that a target have a detachable specification. The basic idea of a detachable specification is that, although it is given after observing an outcome, it might as well have been given beforehand (as in Dawkins’s “statistically improbable in a direction specified not with hindsight alone” characterization of complicated objects that give the appearance of design).

    We have no need to argue whether Dembski produced a workable notion of detachability (though Marks, Dembski, and Ewert in fact have said nothing about detachability since rebranding specified complexity as a measure of meaningful information). It’s enough to note, as you do, that Dembski knew that something was required, and that UD’s “functional information” meme has nothing at all to say about the matter.

    By the way, you can see an entirely legitimate attempt to do much the same as Dembski was trying to do in Gurevich and Passmore, “Impugning Randomness, Convincingly.”

  19. Tom English: What Szostak and coworkers say after further development of the idea trumps what Szostak initially said.

    To be clear, if Hazen et al. were to say now that they did not intend to rule out the interpretation of “functional information” as information, I would argue confidently that they were wrong.

  20. DNA_Jock: Sadly, IDists cannot even formulate the appropriate null.

    I thought that the null was that evolution can’t do it.

  21. Tom English: As for “functional information,” I note that Hazen et al. put the term in scare quotes, as I just have, and indicate repeatedly that they have developed a measure of complexity.

    Bingo.

    But if you think of it in terms of number of instructions needed to specify that level of complexity there is a clear connection to information.

    The formalism of “functional information,” which relates the information content of a complex system to its degree of function, provides a quantitative approach to modeling the origin and evolution of patterning in living and nonliving systems.

    https://hazen.carnegiescience.edu/research/complexity

  22. Mung: I thought that the null was that evolution can’t do it.

    Oh dear.
    The null is that evolution can do it. In other words, the null is that the results observed are consistent with any explanation drawn from the set of “all relevant chance hypotheses”. So, before an IDist can even consider doing a Fisherian test, they need to model the distribution of outcomes expected under all possible naturalistic scenarios.
    Good luck!

  23. Mung,

    That’s right! Perhaps you could explain to Bill the meaning of “arbitrary” here. He won’t believe me…

  24. Tom English: As for “functional information,” I note that Hazen et al. put the term in scare quotes, as I just have, and indicate repeatedly that they have developed a measure of complexity.

    Mung: Bingo.

    But if you think of it in terms of number of instructions needed to specify that level of complexity there is a clear connection to information.

    I expect such nebulosity from Eric Holloway and Jonathan Bartlett (and, to a lesser degree, Winston Ewert), not from you. You’ve been paying close attention to the definition of “It,” functional information. Now “it” suddenly becomes a loose idea, to be thought about this, that, or the other way. Obviously, if “it” were not what “it” is, then “it” would be something different. Obviously, also, somebody wants “it” to be algorithmic specified complexity.

    If you want to compare functional information to algorithmic specified complexity, then spell things out. Please don’t smuggle in an “it” that the two measures both “really are.” The differences between the “functional information” meme at UD and the published papers on algorithmic specified complexity are irreconcilable.

    (I am no friend of Winston Ewert, but when he and kairosfocus were in conflict at UD, some years ago, I was on his side. The “ID” mythology that has emerged at UD is incoherent and otherwise preposterous.)

    Hazen (quoted by Mung): The formalism of “functional information,” which relates the information content of a complex system to its degree of function, provides a quantitative approach to modeling the origin and evolution of patterning in living and nonliving systems.

    https://hazen.carnegiescience.edu/research/complexity

    (Emphasis added by me.) Thanks for pointing me to this, Mung. I have not managed to run down the reference “Hazen (2008).” Have you?

    It’s easy to see that Hazen is wrong to say that the measure of “functional information” gives the quantity of information contained by a complex system. [EDIT: It might well be that he is thinking in terms of transmitting the description of the system as a message. Then the notion of “information content” is different from what I’ve assumed in the following. It’s always a bad idea to parse short passages like the one quoted above, and that’s why I went looking for Hazen (2008).] The proportion of systems with a given degree of function depends on how we define the class of systems (e.g., do we consider all Avida programs, or just those of length 300?). So we are at best talking about how much information is required to identify elements of a subset of a set that we have defined. This quantity of information (provisionally accepted as such) does not inhere in the systems in the subset.

    We’re back to Szostak’s good-as-gold remark that functional information is a property of the ensemble.

    [EDIT: I shouldn’t have responded to the quote of Hazen (2008) without reading the publication (whatever it is). But it is worth noting that there are at least two senses of information content, and that it’s important to keep them straight.]

  25. I think the date was mistyped on Hazen’s website, the publication is from 2009.

    Hazen RM 2009: The emergence of patterning in life’s origin and evolution.
    Int J Dev Biol. 2009;53(5-6):683-92. doi: 10.1387/ijdb.092936rh

    Abstract
    Three principles guide natural pattern formation in both biological and non-living systems: (1) patterns form from interactions of numerous individual particles, or agents, such as sand grains, molecules, cells or organisms; (2) assemblages of agents can adopt combinatorially large numbers of different configurations; (3) observed patterns emerge through the selection of highly functional configurations. These three principles apply to numerous natural processes, including the origin of life and its subsequent evolution. The formalism of functional information, which relates the information content of a complex system to its degree of function, provides a quantitative approach to modeling the origin and evolution of patterning in living and nonliving systems.

  26. dazz, I apologize for accusing you of being a sharpshooter when all you were doing was painting targets on the side of a barn.

  27. Tom English: You’ve been paying close attention to the definition of “It,” functional information.

    I don’t think anyone can be faulted for thinking that FI is a measure of information.

  28. colewd: So does 500 bits of FI falsify the null?

    H0: evolution can’t do it

    Ha: I told you, evolution can’t do it

  29. Corneel,

    H0: evolution can’t do it

    H0: Evolution can do it.

    Observation of X bits of functional information.

    Evolution hypothesis rejected. Design hypothesis validated.

  30. colewd:
    Corneel,

    H0: Evolution can do it.

    Observation of X bits of functional information.

    Evolution hypothesis rejected. Design hypothesis validated.

    Why?

  31. Rumraket,

    Why?

    X bits is defined as a number where the chance of finding function through a random search is exceedingly low.

  32. colewd: X bits is defined as a number where the chance of finding function through a random search is exceedingly low.

    Let’s rewind a bit. Did you read what DNA_Jock wrote?

    pre-specifying the analysis and the criteria for rejection of the null, are key.
    Sadly, IDists cannot even formulate the appropriate null.

    Hint: “What is the probability that a random choice amongst equiprobable sequences (the IDist workhorse) would yield a sequence this unusual?” is not an appropriate null. Dr. Dembski has explained why.

  33. colewd: X bits is defined as a number where the chance of finding function through a random search is exceedingly low.

    No it isn’t, lol. Not in Gpuccio’s usage, not anywhere. Where do you get this?

  34. Corneel,

    Rewind a little further.
    JOCK:

    Oh dear.
    The null is that evolution can do it.

    For evolution to “do It” it needs to find function through a random search. If that function is found then it can improve it, theoretically.

  35. colewd: Jacks or better to open.

    Eukaryotic cell division.

    Neither of those are being measured by gpuccio’s FI.

  36. colewd: Eukaryotic cell division.

    … and this one doesn’t need to be found through a random search.

  37. Corneel,

    Neither of those are being measured by gpuccio’s FI.

    Pieces are. He is measuring pieces such as the ubiquitin system proteins and the PRP8 spliceosome protein.

    For cell division to occur and selection to be possible you need the FI to build these proteins reliably.

    If we look at cell division as the minimum to get eukaryotic evolution going what do we need in place to get the party started?

    X bits of functional information.

  38. Corneel,

    … and this one doesn’t need to be found through a random search.

    You are right. The design inference does not require a random search.
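The random-search point in the exchange above can be shown with a toy sketch (my own illustration, not from the thread): a Weasel-style cumulative-selection loop reaches a 20-bit target in a handful of generations, where a blind random search would need on the order of 2**20 independent draws. "Evolution can do it" is not the same null as "random search can do it."

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

TARGET = "1" * 20  # a 20-bit "function"; blind search expects ~2**20 draws

def fitness(s):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Copy s, rewriting each bit with a random value at probability `rate`."""
    return "".join(random.choice("01") if random.random() < rate else c for c in s)

# Cumulative selection: keep the best of parent-plus-offspring each generation.
parent = "".join(random.choice("01") for _ in range(len(TARGET)))
generations = 0
while parent != TARGET:
    brood = [parent] + [mutate(parent) for _ in range(50)]
    parent = max(brood, key=fitness)
    generations += 1

print(generations)  # typically tens of generations, not ~2**20 draws
```

Including the parent in the brood makes fitness non-decreasing, so the loop always converges; the contrast with pure random sampling is many orders of magnitude.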

  39. colewd: Pieces are. He is measuring pieces such as the ubiquitin system proteins and the PRP8 spliceosome protein.

    He (gpuccio) can’t calculate the FI for an optimized protein (since other peaks exist), but no matter, as that FI value is irrelevant. You guys need to calculate the proportion of sequences that carry a minimally selectable function (and not necessarily the function you are measuring, just one that can lead to the function of interest). Gpuccio has no idea whatsoever about how to do that, so he has decided to not even try. He would rather simply declare victory.

    For cell division to occur and selection to be possible you need the FI to build these proteins reliably.

    Hmm. How reliably, and why?

    If we look at cell division as the minimum to get eukaryotic evolution going what do we need in place to get the party started?

    A coupla different prokaryotes, perhaps? Applying your tornado-in-a-junkyard math to endosymbiosis seems like a poor tactical choice, mate.

  40. colewd: If we look at cell division

    The archaeal and bacterial ancestors of eukaryotes were capable of cell division. What’s the problem? You seem to think that, from one cell division to the next, a fully formed modern eukaryote arose from something completely unlike it, yet emerged incapable of cell division, which then had to evolve.

    I can barely imagine how confused one must be to think like that.
