Does gpuccio’s argument that 500 bits of Functional Information implies Design work?

On Uncommon Descent, poster gpuccio has been discussing “functional information”. Most of gpuccio’s argument is a conventional “islands of function” argument. Not being very knowledgeable about biochemistry, I’ll happily leave that argument to others.

But I have been intrigued by gpuccio’s use of Functional Information, in particular gpuccio’s assertion that if we observe 500 bits of it, this is a reliable indicator of Design, as here, at about the 11th sentence of point (a):

… the idea is that if we observe any object that exhibits complex functional information (for example, more than 500 bits of functional information ) for an explicitly defined function (whatever it is) we can safely infer design.
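For scale, here is a quick back-of-the-envelope sketch (purely illustrative; the only assumption is the standard 20-letter amino acid alphabet) of what 500 bits corresponds to:

```python
from math import log2, ceil

BITS = 500
fraction = 2.0 ** -BITS          # fraction of sequence space implied by 500 bits
bits_per_residue = log2(20)      # each amino acid position carries at most ~4.32 bits
min_length = ceil(BITS / bits_per_residue)

print(f"fraction of sequence space: {fraction:.3g}")     # about 3.05e-151
print(f"minimum protein length: {min_length} residues")  # 116
```

So even a protein with a single functional sequence would have to be at least about 116 residues long before it could, in principle, exhibit 500 bits of functional information.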

I wonder how this general method works. As far as I can see, it doesn’t. There would seem to be three possible ways of arguing for it, and in the end two don’t work and one is just plain silly. Which of these is the basis for gpuccio’s statement? Let’s investigate …

A quick summary

Let me list the three ways, briefly.

(1) The first is the argument using William Dembski’s (2002) Law of Conservation of Complex Specified Information. I have argued (2007) that this is formulated in such a way as to compare apples to oranges, and thus is not able to reject normal evolutionary processes as explanations for the “complex” functional information.  In any case, I see little sign that gpuccio is using the LCCSI.

(2) The second is the argument that the functional information indicates that only an extremely small fraction of genotypes have the desired function, and the rest are all alike in totally lacking any of this function.  This would prevent natural selection from following any path of increasing fitness to the function, and the rareness of the genotypes that have nonzero function would prevent mutational processes from finding them. This is, as far as I can tell, gpuccio’s islands-of-function argument. If such cases can be found, then explaining them by natural evolutionary processes would indeed be difficult. That is gpuccio’s main argument, and I leave it to others to argue with its application in the cases where gpuccio uses it. I am concerned here, not with the islands-of-function argument itself, but with whether the design inference from 500 bits of functional information is generally valid.

We are asking here whether, in general, observation of more than 500 bits of functional information is “a reliable indicator of design”. And gpuccio’s definition of functional information is not confined to cases of islands of function, but also includes cases where there would be a path along which function increases. In such cases, seeing 500 bits of functional information, we cannot conclude that it is extremely unlikely to have arisen by normal evolutionary processes. So the general rule that gpuccio gives fails, as it is not reliable.

(3) The third possibility is an additional condition added to the design inference. It does not simply define “complex functional information” as a case where we can define a level of function that makes the probability of the set less than 2^{-500}; it declares that unless the set of genotypes is effectively unreachable by normal evolutionary processes, we don’t call the pattern “complex functional information” at all. That additional condition allows us to conclude that normal evolutionary forces can be dismissed, by definition. But it leaves the reader to do the heavy lifting: the reader has to determine that the set of genotypes has an extremely low probability of being reached. And once they have done that, the additional step of declaring that the genotypes have “complex functional information” adds nothing to our knowledge. CFI becomes a useless add-on that sounds deep and mysterious but tells you nothing you did not already know. And there seems to be some indication that gpuccio does use this additional condition.

Let us go over these three possibilities in some detail. First, what is the connection of gpuccio’s “functional information” to Jack Szostak’s quantity of the same name?

Is gpuccio’s Functional Information the same as Szostak’s Functional Information?

gpuccio acknowledges that gpuccio’s definition of Functional Information is closely connected to Jack Szostak’s definition of it. gpuccio notes here:

Please, not[e] the definition of functional information as:

“the fraction of all possible configurations of the system that possess a degree of function >=
Ex.”

which is identical to my definition, in particular my definition of functional information as the
upper tail of the observed function, that was so much criticized by DNA_Jock.

(I have corrected gpuccio’s typo of “not” to “note”, JF)
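To make the quoted definition concrete, here is a toy sketch (my illustration, not gpuccio’s or Szostak’s; the two-letter alphabet and the match-counting “function” are invented) that computes Szostak-style functional information by exhaustive enumeration:

```python
from itertools import product
from math import log2

def functional_information(alphabet, length, function, threshold):
    """-log2 of the fraction of all possible sequences whose
    degree of function is >= the threshold Ex."""
    total = len(alphabet) ** length
    n_functional = sum(1 for seq in product(alphabet, repeat=length)
                       if function(seq) >= threshold)
    return -log2(n_functional / total)

# Invented "function": how many positions match an arbitrary target.
target = ("a", "b", "a", "b")
degree = lambda seq: sum(s == t for s, t in zip(seq, target))

# Only the target itself scores 4, so the fraction is 1/16 and FI = 4 bits.
print(functional_information("ab", 4, degree, 4))  # 4.0
```

Note that the calculation depends only on the fraction of sequences at or above the cutoff, with every sequence counted equally; nothing in it asks how the observed sequence arose.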

We shall see later that there may be some ways in which gpuccio’s definition
is modified from Szostak’s. Jack Szostak and his co-authors never attempted any use of his definition to infer Design. Nor did Leslie Orgel, whose Specified Information (in his 1973 book The Origins of Life) preceded Szostak’s. So the part about design inference must come from somewhere else.

gpuccio seems to be making one of three possible arguments:

Possibility #1. That there is some mathematical theorem that proves that ordinary evolutionary processes cannot result in an adaptation that has 500 bits of Functional Information.

Use of such a theorem was attempted by William Dembski, his Law of Conservation of Complex Specified Information, explained in Dembski’s book No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002). But Dembski’s LCCSI theorem did not do what Dembski needed it to do. I have explained why in my own article on Dembski’s arguments (here). Dembski’s LCCSI changed the specification before and after evolutionary processes, and so he was comparing apples to oranges.

In any case, as far as I can see gpuccio has not attempted to derive gpuccio’s argument from Dembski’s, and gpuccio has not directly invoked the LCCSI, or provided a theorem to replace it.  gpuccio said in a response to a comment of mine at TSZ,

Look, I will not enter the specifics of your criticism to Dembski. I agre with Dembski in most things, but not in all, and my arguments are however more focused on empirical science and in particular biology.

While thus disclaiming that the argument is Dembski’s, gpuccio does on the other hand associate the argument with Dembski, saying here that

Of course, Dembski, Abel, Durston and many others are the absolute references for any discussion about functional information. I think and hope that my ideas are absolutely derived from theirs. My only purpose is to detail some aspects of the problem.

and by saying elsewhere that

No generation of more than 500 bits has ever been observed to arise in a non design system (as you know, this is the fundamental idea in ID).

That figure being Dembski’s, this leaves it unclear whether gpuccio is or is not basing the argument on Dembski’s. But gpuccio does not directly invoke the LCCSI, or try to come up with some mathematical theorem that replaces it.

So possibility #1 can be safely ruled out.

Possibility #2. That the target region in the computation of Functional Information consists of all of the sequences that have nonzero function, while all other sequences have zero function. As there is no function elsewhere, natural selection for this function then cannot favor sequences closer and closer to the target region.

Such cases are possible, and usually gpuccio is talking about cases like this. But gpuccio does not require them in order to have Functional Information. gpuccio does not rule out that the region could be defined by a high level of function, with lower levels of function in sequences outside of the region, so that there could be paths allowing evolution to reach the target region of sequences.

An example in which gpuccio recognizes that lower levels of function can exist outside the target region is found here, where gpuccio is discussing natural and artificial selection:

Then you can ask: why have I spent a lot of time discussing how NS (and AS) can in some cases add some functional information to a sequence (see my posts #284, #285 and #287)

There is a very good reason for that, IMO.

I am arguing that:

1) It is possible for NS to add some functional information to a sequence, in a few very specific cases, but:

2) Those cases are extremely rare exceptions, with very specific features, and:

3) If we understand well what are the feature that allow, in those exceptional cases, those limited “successes” of NS, we can easily demonstrate that:

4) Because of those same features that allow the intervention of NS, those scenarios can never, never be steps to complex functional information.

Jack Szostak defined functional information by having us choose a cutoff level of function, defining the set of sequences whose function exceeds that cutoff, without any condition that the other sequences have zero function. Neither did Durston impose such a condition. And as we’ve seen, gpuccio associates his argument with theirs.

So this second possibility could not be the source of gpuccio’s general assertion about 500 bits of functional information being a reliable indicator of design, however much gpuccio concentrates on such cases.

Possibility #3. That there is an additional condition in gpuccio’s Functional Information, one that does not allow us to declare it to be present if there is a way for evolutionary processes to achieve that high a level of function. In short, if we see 500 bits of Szostak’s functional information, and if it can be put into the genome by natural evolutionary processes such as natural selection then for that reason we declare that it is not really Functional Information. If gpuccio is doing this, then gpuccio’s Functional Information is really a very different animal than Szostak’s functional information.

Is gpuccio doing that? gpuccio does associate his argument with William Dembski’s, at least in some of his statements. And William Dembski has defined his Complex Specified Information in this way, adding the condition that it is not really CSI unless it is sufficiently improbable that it could be achieved by natural evolutionary forces (see my discussion here, in the section on “Dembski’s revised CSI argument”, which refers to Dembski’s statements here). And Dembski’s added condition renders use of his CSI a useless afterthought to the design inference.

gpuccio does seem to be making a similar condition. Dembski’s added condition comes in via the calculation of the “probability” of each genotype. In Szostak’s definition, the probabilities of sequences are simply their frequencies among all possible sequences, with each being counted equally. In Dembski’s CSI calculation, we are instead supposed to compute the probability of the sequence given all evolutionary processes, including natural selection.

gpuccio has a similar condition in the requirements for concluding that complex functional information is present. We can see it at step (6) here:

If our conclusion is yes, we must still do one thing. We observe carefully the object and what we know of the system, and we ask if there is any known and credible algorithmic explanation of the sequence in that system. Usually, that is easily done by excluding regularity, which is easily done for functional specification. However, as in the particular case of functional proteins a special algorithm has been proposed, neo darwininism, which is intended to explain non regular functional sequences by a mix of chance and regularity, for this special case we must show that such an explanation is not credible, and that it is not supported by facts. That is a part which I have not yet discussed in detail here. The necessity part of the algorithm (NS) is not analyzed by dFSCI alone, but by other approaches and considerations. dFSCI is essential to evaluate the random part of the algorithm (RV). However, the short conclusion is that neo darwinism is not a known and credible algorithm which can explain the origin of even one protein superfamily. It is neither known nor credible. And I am not aware of any other algorithm ever proposed to explain (without design) the origin of functional, non regular sequences.

In other words, you, the user of the concept, are on your own. You have to rule out that natural selection (and other evolutionary processes) could reach the target sequences. And once you have ruled it out, you have no real need for the declaration that complex functional information is present.

I have gone on long enough. I conclude that the rule that observation of 500 bits of functional information allows us to conclude in favor of Design (or at any rate, to rule out normal evolutionary processes as the source of the adaptation) is simply nonexistent. Or if it does exist, it is as a useless add-on to an argument that draws that conclusion for some other reason, leaving the really hard work to the user.

Let’s end by asking gpuccio some questions:
1. Is your “functional information” the same as Szostak’s?
2. Or does it add the requirement that there be no function in sequences that
are outside of the target set?
3. Does it also require us to compute the probability that the sequence arises as a result of normal evolutionary processes?

1,971 thoughts on “Does gpuccio’s argument that 500 bits of Functional Information implies Design work?”

  1. Mung: Why would evolution?

    As a theistic evolutionist why don’t you attempt to answer that question yourself first?

  2. J-Mac:
    Yeah.. So does nonsense..

    1200 comments and pending…

    Thank you for that penetrating analysis that is sure to be persuasive to the reader.

  3. Alan Fox: Yet still no suggestion from gpuccio or anyone else how “gpuccio’s argument that 500 bits of Functional Information implies Design” works.

    No, it’s been clarified. There was a clause which amounts to “implies Design, except in those cases where ordinary evolutionary forces can achieve 500 Bits of FI.”

    One can’t argue with that …

  4. Rumraket: Thank you for that penetrating analysis that is sure to be persuasive to the reader.

    It’s why Intelligent Design and its variants are doing so well.

  5. Joe Felsenstein: No, it’s been clarified. There was a clause which amounts to “implies Design, except in those cases where ordinary evolutionary forces can achieve 500 Bits of FI.”

    One can’t argue with that …

    Well, no, I guess not. 😉

  6. Corneel,

    As to the redefining of the 500-bit rule by adding the new it-has-to-be-introduced-in-one-transition requirement; I will wait until gpuccio makes up his mind on this one (I suspect he considers both statements to be equivalent, because he regards function as binary on-off)

    If you can demonstrate that mutation generated 500 bits of FI then the 500 bit rule gets forced to the 750 bit rule 🙂

  7. colewd: If you can demonstrate that mutation generated 500 bits of FI then the 500 bit rule gets forced to the 750 bit rule

    The “rule” doesn’t accomplish anything. In the forms it has been presented and defended it is either just an argument from ignorance, or based on extremely implausible assumptions, or a blind assertion.

  8. Rumraket,

    The “rule” doesn’t accomplish anything. In the forms it has been presented and defended it is either just an argument from ignorance, or based on extremely implausible assumptions, or a blind assertion.

    It’s based on the evidence that a conscious mind can generate 500, 5000 or 50000 bits of information repeatably and accurately. Currently there is no alternative to this mechanism for generating FI from scratch of any amount.

  9. colewd: It’s based on the evidence that a conscious mind can generate 500, 5000 or 50000 bits of information repeatably and accurately. Currently there is no alternative to this mechanism for generating FI from scratch of any amount.

    See, this is the blind assertion part.

    And the fact that ID can generate it in principle doesn’t mean it actually did.

  10. colewd:
    Rumraket,

    It’s based on the evidence that a conscious mind can generate 500, 5000 or 50000 bits of information repeatably and accurately. Currently there is no alternative to this mechanism for generating FI from scratch of any amount.

    You guys should gather in a yearly creotard convention to share with other creoturds how many pages of “functional information” you’ve been able to type throughout the year, hence pushing the threshold further and further. Unfortunately for you, you’re incapable of posting anything new and original, but I’m sure other IDiots will be up for the task… for a while… maybe

  11. Alan Fox: Well, no, I guess not.

    Welcome back Alan!
    I’d thought you’d retired from this nonsense to paint in France the visuques…

  12. colewd: Currently there is no alternative to this mechanism for generating FI from scratch of any amount.

    “Of any amount”? Such as mutation carrying a sequence from one which has less function to one which has more function? Just a single base change making a single amino acid substitution?

    Lots of examples of that. Wherever there is a sequence that can have mutations that can reduce function, we could also have a mutation that can go the other way. Every position in the genome can have mutations, from any base to any other base.

    Saying what colewd did is like saying that you can’t ever, in principle, have a typographical error that ends up changing a passage that is incorrect by one letter back to one that is correct. That “To bf or not to be” cannot ever have a typographic error introduced that makes it become “To be or not to be”.

  13. Joe Felsenstein,

    “Of any amount”? Such as mutation carrying a sequence from one which has less function to one which has more function? Just a single base change making a single amino acid substitution?

    The term I used was “scratch”, as this example requires the pre-existence of functional information to make it work, and retrieves a function rather than creating one.

    A conscious mind can start from a blank sheet of paper and best explains the large jumps we are observing.

  14. Joe Felsenstein,

    That “To bf or not to be” cannot ever have a typographic error introduced that makes it become “To be or not to be”.

    Again, your example assumes the pre-existence of “to bf or not to be”. To get to that point alone from random change is pretty challenging.

  15. colewd: A conscious mind can start from a blank sheet of paper and best explains the large jumps we are observing.

    The only conscious minds we are familiar with belong to us, and they cannot act in such ways: over geological time periods, invisibly, without even the possibility of detection, and within all boundaries of error.

    In addition, such a mind would presumably need to know in some way, other than trial and error, what the changes they are making would result in. No human has that sort of mind, or we’d be designing drugs without the need to test them first. We’d be curing diseases on the first attempt. The travelling salesman would just *know* the optimum route.

    Of course, when you say “conscious mind” it’s understood you mean “abrahamic god”. So carry on, carry on….

  16. colewd: Again, your example assumes the pre existence of “to bf or not to be”. To get to that point alone from random change is pretty challenging.

    What if the rules of grammar were such that all sentences did something and that sentences can combine with other sentences and the results of that combination also did something? Sounds a lot like chemistry to me…

    And once again we are back to the idea that there’s something special about “to bf or not to be” when what it does is just one of a near-infinite array of functions. When all sentences have function, your objections are moot.

    For your analogy to work properly, all sentences must “work”, all possible sentences must communicate some sort of meaning.

  17. colewd: Again, your example assumes the pre existence of “to bf or not to be”. To get to that point alone from random change is pretty challenging.

    Noted you don’t dispute the idea you can get from that sentence to “to be or not to be” with a random mutation. Pathetic.

  18. Rumraket: In the forms it has been presented and defended it is either just just an argument from ignorance, or based on extremely implausible assumptions, or a blind assertion.

    Are you talking about the claim that evolution can generate 500 bits of FI?

  19. Rumraket: And the fact that ID can generate it in principle doesn’t mean it actually did.

    Not “in principle” Rumraket, in actuality.

    Or are you now arguing that even ID cannot generate 500 bits of FI. Because I don’t think I’ve heard anyone raise that as an objection to gpuccio’s argument yet. So that would at least be something new and different.

    It seems to me that everyone was granting gpuccio’s premise.

  20. colewd: A conscious mind can start from a blank sheet of paper and best explains the large jumps we are observing.

    Except that gpuccio’s examples don’t start from a blank sheet of paper, they start with a presumed ancestral set of sequences which had some function.

    ETA: C’mon Bill. Your objections are missing the point. 🙁

  21. Mung:

    ETA: C’mon Bill. Your objections are missing the point. 🙁

    God created Bill so that Mung would finally have someone to condescend to.

  22. keiths: God created Bill so that Mung would finally have someone to condescend to.

    Must be true, coming from the expert. 🙂

  23. Mung: Not “in principle” Rumraket, in actuality.

    It is only in actuality when they design something. For hypotheticals it’s in principle. Human designers have in actuality designed entities that exhibit lots of FI, and they could in principle do it again in the future.

    And the only intelligent designers we know of are human designers, who weren’t around 700 million years ago, and who even at our technological level couldn’t design and manufacture a large multicellular organism. Probably not even a single-celled one.

    So for another intelligent designer, purportedly designing the entities from actual biology, it’d be in principle.

    Or are you now arguing that even ID cannot generate 500 bits of FI. Because I don’t think I’ve heard anyone raise that as an objection to gpuccio’s argument yet. So that would at least be something new and different.

    It seems to me that everyone was granting gpuccio’s premise.

    Thanks for reminding me that Gpuccio hasn’t actually defined intelligence, so I think that needs to happen first. Nor even design. It’s all just referred to loosely, as if we really know what it is or whether the thing he calls intelligent design is even possible. What are its mechanisms and capabilities?

  24. Mung: Are you talking about the claim that evolution can generate 500 bits of FI?

    Hahahaha, No, the 500 bits rule of Gpuccio.

  25. Mung,

    Except that gpuccio’s examples don’t start from a blank sheet of paper, they start with a presumed ancestral set of sequences which had some function.

    Think it possible that you may be mistaken 🙂

  26. Rumraket: Thanks for reminding me that Gpuccio hasn’t actually defined intelligence so I think that needs to happen first.

    Pretty sure he has, so I think you’re wrong about that.

  27. Mung: Pretty sure he has, so I think you’re wrong about that.

    Yeah. Quantum interfacing from jeebus ville

  28. OMagain:

    colewd: Again, your example assumes the pre existence of “to bf or not to be”. To get to that point alone from random change is pretty challenging.

    Noted you don’t dispute the idea you can get from that sentence to “to be or not to be” with a random mutation. Pathetic.

    We have well-adapted sequences, like the “To be or not to be” quote. They can suffer mutations and be one letter off (or one amino acid off). That is easily verifiable. Now colewd states a principle that they (somehow) cannot mutate back the other way, despite evidence that all sites in the genome are subject to mutation to all other possible bases (letters), including ones that reverse the effect of the previous mutation.

    How colewd can deny that this is possible, I know not.

  29. Joe:

    How colewd can deny that this is possible, I know not…

    With Jeebus, all things are possible, including Bill’s denial.

  30. colewd: What if you don’t have well-adapted sequences?

    Given some non-zero level of function, then the chance of a random mutation resulting in a gain of function (meaning higher fitness) is even bigger than if the sequence was not well-adapted.

    C’mon Bill. This is getting pathetic. Just admit that random mutation can, in principle, result in increases in FI.

  31. Mung: Given how arbitrary “function” and “degree of function” can be I would urge extreme caution.

    Arbitrary my elbow. If all the malate dehydrogenases in your body decided to no longer accept oxaloacetate / malate as substrates, you would quickly change your mind about that being the same function. Within minutes most likely.

    The Dave / Dale part matters quite a lot. The analogy is just meant to obfuscate the fact that the LDH truly performs a novel function.

  32. Corneel: just admit that random mutation can, in principle, result in increases in FI.

    Note that the issue is not whether on average, in a population, mutation can ever result in an increase of FI. The assertion is that it cannot be shown that a single mutation event can ever result in a sequence that has increased FI.

    Of course there is one case in which that increase cannot happen — when all sequences have exactly the same level of “function”. Which is not a very interesting case.
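    A tiny sketch (my own, using Szostak’s fraction-based definition) of why that flat case is degenerate:

```python
from math import log2

def fi(fraction):
    # Szostak-style FI for a set occupying this fraction of sequence space
    return -log2(fraction)

# Flat landscape: every sequence has exactly the same function level X.
# Any threshold at or below X is met by ALL sequences (fraction 1),
# so FI = -log2(1) = 0 bits; a threshold above X is met by NONE,
# and FI is undefined. Either way, no mutation can increase FI.
```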

  33. keiths: With Jeebus, all things are possible, including Bill’s denial.

    This is good news, for even keiths might be converted. 🙂

  34. Corneel: Given some non-zero level of function, then the chance of a random mutation resulting in a gain of function (meaning higher fitness) is even bigger than if the sequence was not well-adapted.

    I’m sorry, but this makes no sense.

  35. Joe Felsenstein: Of course there is one case in which that increase cannot happen — when all sequences have exactly the same level of “function”. Which is not a very interesting case.

    So if all the sequences have the same level of function, and we set our threshold level of function just above that, and one of those sequences changes to now meet or exceed that threshold, that would not be an “increase” in FI?

    Have you told Rumraket?

  36. Corneel: The analogy is just meant to obfuscate the fact that the LDH truly performs a novel function.

    But that’s not a very interesting case, as in that case there would be no increase in FI.

  37. Mung: I’m sorry, but this makes no sense.

    D’oh

    Given some non-zero level of function, then the chance of a random mutation resulting in a gain of function (meaning higher fitness) is even bigger than if the sequence was not well-adapted.

  38. Corneel: …just admit that random mutation can, in principle, result in increases in FI.

    I don’t see how. What I do see is a lot of confused people. When calculating FI you are supposed to take into account the entire space of sequences. Can we agree on that?

    So this supposed new and better sequence you are talking about is already in the space of possible sequences and its degree of function is already accounted for. You (and others) are acting as if it’s not already present and accounted for, but it is.

    No increase in FI.

  39. Mung: But that’s not a very interesting case, as in that case there would be no increase in FI.

    We see an increase in substrate preference for pyruvate of several orders of magnitude. How is that not going to result in an increase in FI for this specific function? More importantly, how is the emergence of a novel adaptive trait not going to be interesting? You seemed to be mighty disappointed that no new organs sprouted with every speciation event in the human evolution thread.

    Mung: I don’t see how. What I do see is a lot of confused people. When calculating FI you are supposed to take into account the entire space of sequences. Can we agree on that?

    So this supposed new and better sequence you are talking about is already in the space of possible sequences and its degree of function is already accounted for. You (and others) are acting as if it’s not already present and accounted for, but it is.

    I think I see confused people as well. Well, one at least.

    I am trying very hard, but I fail to see what good is the presence of some optimal sequence in the space of possible sequences, when it is not present in the actual population. Is this some kind of Zen thing?

  40. Joe Felsenstein,

    The assertion is that there it cannot be shown that a single mutation event can ever result in a sequence that has increased FI.

    If there is no FI in the system how can a mutation find it?

  41. Corneel: I am trying very hard, but I fail to see what good is the presence of some optimal sequence in the space of possible sequences, when it is not present in the actual population. Is this some kind of Zen thing?

    It’s some kind of FI thing. All sequences of the same length are in the population. For each sequence you assign or otherwise determine the level of function. You decide whether it is at or above your threshold of function. That’s how you calculate FI. Talk of sequences mutating is utterly irrelevant.

  42. Rumraket: It’s -log2(n/a^L)

    where n is the total number of sequences that meet the minimum threshold for function. While a is alphabet size, and L is sequence length of the enzyme in question.

    a^L includes all the possible sequences.

    ETA: In tszFI it is a^L – sequences that don’t actually exist.
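    In code, that formula is just one line; the numbers below are hypothetical, chosen only to show the scale (nothing here comes from an actual protein):

```python
from math import log2

def fi_bits(n, a, L):
    """FI = -log2(n / a^L): n functional sequences out of a^L possible."""
    return -log2(n / a ** L)

# Hypothetical: a 150-residue protein, a = 20 amino acids.
# The space holds 20^150 (about 1e195) sequences; even if 1e40 of them
# met the functional threshold, the FI would still exceed 500 bits.
print(fi_bits(10 ** 40, 20, 150))  # about 515.4
```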

  43. Mung: So if all the sequences have the same level of function, and we set our threshold level of function just above that, and one of those sequences changes to now meet or exceed that threshold, that would not be an “increase” in FI?

    He just told you that all sequences have the same level of function. Then it is not possible that one sequence can change and meet a threshold beyond the one that all sequences have.

    If all sequences have a level of function = X, then a sequence can’t change and have a level of function >X, because the sequence it changed into would also just have level of function = X.

  44. Mung:

    Joe Felsenstein: Of course there is one case in which that increase cannot happen — when all sequences have exactly the same level of “function”. Which is not a very interesting case.

    So if all the sequences have the same level of function, and we set our threshold level of function just above that, and one of those sequences changes to now meet or exceed that threshold, that would not be an “increase” in FI?

    Have you told Rumraket?

    If all possible sequences had the same level of function, and we set our threshold just above that, and one of our sequences changes to now meet or exceed that threshold, then that would be a major logical contradiction.

    Logic having been contradicted, most likely the universe would instantly disappear.
