Does gpuccio’s argument that 500 bits of Functional Information implies Design work?

On Uncommon Descent, poster gpuccio has been discussing “functional information”. Most of gpuccio’s argument is a conventional “islands of function” argument. Not being very knowledgeable about biochemistry, I’ll happily leave that argument to others.

But I have been intrigued by gpuccio’s use of Functional Information, in particular gpuccio’s assertion that if we observe 500 bits of it, this is a reliable indicator of Design, as here, at about the 11th sentence of point (a):

… the idea is that if we observe any object that exhibits complex functional information (for example, more than 500 bits of functional information) for an explicitly defined function (whatever it is) we can safely infer design.

I wonder how this general method works. As far as I can see, it doesn’t work. There would seem to be three possible ways of arguing for it, and in the end, two don’t work and one is just plain silly. Which of these is the basis for gpuccio’s statement? Let’s investigate …

A quick summary

Let me list the three ways, briefly.

(1) The first is the argument using William Dembski’s (2002) Law of Conservation of Complex Specified Information. I have argued (2007) that this is formulated in such a way as to compare apples to oranges, and thus is not able to reject normal evolutionary processes as explanations for the “complex” functional information.  In any case, I see little sign that gpuccio is using the LCCSI.

(2) The second is the argument that the functional information indicates that only an extremely small fraction of genotypes have the desired function, and the rest are all alike in totally lacking any of this function.  This would prevent natural selection from following any path of increasing fitness to the function, and the rareness of the genotypes that have nonzero function would prevent mutational processes from finding them. This is, as far as I can tell, gpuccio’s islands-of-function argument. If such cases can be found, then explaining them by natural evolutionary processes would indeed be difficult. That is gpuccio’s main argument, and I leave it to others to argue with its application in the cases where gpuccio uses it. I am concerned here, not with the islands-of-function argument itself, but with whether the design inference from 500 bits of functional information is generally valid.

We are asking here whether, in general, observation of more than 500 bits of functional information is “a reliable indicator of design”. And gpuccio’s definition of functional information is not confined to cases of islands of function, but also includes cases where there would be a path along which function increases. In such cases, seeing 500 bits of functional information, we cannot conclude that it is extremely unlikely to have arisen by normal evolutionary processes. So the general rule that gpuccio gives fails, as it is not reliable.

(3) The third possibility is an additional condition that is added to the design inference. It does not simply define “complex functional information” as a case where we can define a level of function that makes the probability of the set less than 2^{-500}; it also declares that unless the set of genotypes is effectively unreachable by normal evolutionary processes, we don’t call the pattern “complex functional information”. That additional condition allows us to conclude, by definition, that normal evolutionary forces can be dismissed. But it leaves the reader to do the heavy lifting, as the reader has to determine that the set of genotypes has an extremely low probability of being reached. And once they have done that, they will find that the additional step of concluding that the genotypes have “complex functional information” adds nothing to our knowledge. CFI becomes a useless add-on that sounds deep and mysterious but actually tells you nothing except what you already know. And there seems to be some indication that gpuccio does use this additional condition.

Let us go over these three possibilities in some detail. First, what is the connection of gpuccio’s “functional information” to Jack Szostak’s quantity of the same name?

Is gpuccio’s Functional Information the same as Szostak’s Functional Information?

gpuccio acknowledges that gpuccio’s definition of Functional Information is closely connected to Jack Szostak’s definition of it. gpuccio notes here:

Please, not[e] the definition of functional information as:

“the fraction of all possible configurations of the system that possess a degree of function >=
Ex.”

which is identical to my definition, in particular my definition of functional information as the
upper tail of the observed function, that was so much criticized by DNA_Jock.

(I have corrected gpuccio’s typo of “not” to “note”, JF)

We shall see later that there may be some ways in which gpuccio’s definition is modified from Szostak’s. Jack Szostak and his co-authors never attempted any use of his definition to infer Design. Nor did Leslie Orgel, whose Specified Information (in his 1973 book The Origins of Life) preceded Szostak’s. So the part about design inference must come from somewhere else.
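To make the quoted definition concrete, here is a toy computation of Szostak-style functional information (my own illustrative sketch, with a made-up system and degree-of-function, not anything from Szostak’s papers):

```ruby
# Szostak-style functional information for a toy system:
# FI = -log2(fraction of all possible configurations whose degree of
# function is >= the threshold Ex).
configs  = (0...2**8).map { |i| i.to_s(2).rjust(8, "0") } # all 8-bit strings
degree   = ->(s) { s.count("1") }  # made-up "degree of function": count of 1s
ex       = 7                       # threshold level of function
fraction = configs.count { |s| degree.call(s) >= ex }.to_f / configs.size
fi       = -Math.log2(fraction)    # 9 of 256 qualify, so about 4.83 bits
```

Raising the threshold Ex shrinks the qualifying fraction and so raises FI, which is the sense in which FI measures how demanding the specified level of function is.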

gpuccio seems to be making one of three possible arguments:

Possibility #1 That there is some mathematical theorem that proves that ordinary evolutionary processes cannot result in an adaptation that has 500 bits of Functional Information.

Use of such a theorem was attempted by William Dembski, with his Law of Conservation of Complex Specified Information, explained in Dembski’s book No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002). But Dembski’s LCCSI theorem did not do what Dembski needed it to do. I have explained why in my own article on Dembski’s arguments (here). Dembski’s LCCSI changed the specification before and after evolutionary processes, and so he was comparing apples to oranges.

In any case, as far as I can see gpuccio has not attempted to derive gpuccio’s argument from Dembski’s, and gpuccio has not directly invoked the LCCSI, or provided a theorem to replace it.  gpuccio said in a response to a comment of mine at TSZ,

Look, I will not enter the specifics of your criticism to Dembski. I agre with Dembski in most things, but not in all, and my arguments are however more focused on empirical science and in particular biology.

While thus disclaiming that the argument is Dembski’s, on the other hand gpuccio does associate the argument with Dembski here by saying that

Of course, Dembski, Abel, Durston and many others are the absolute references for any discussion about functional information. I think and hope that my ideas are absolutely derived from theirs. My only purpose is to detail some aspects of the problem.

and by saying elsewhere that

No generation of more than 500 bits has ever been observed to arise in a non design system (as you know, this is the fundamental idea in ID).

That figure being Dembski’s, this leaves it unclear whether gpuccio is or is not basing the argument on Dembski’s. But gpuccio does not directly invoke the LCCSI, or try to come up with some mathematical theorem that replaces it.

So possibility #1 can be safely ruled out.

Possibility #2. That the target region in the computation of Functional Information consists of all of the sequences that have nonzero function, while all other sequences have zero function. As there is no function elsewhere, natural selection for this function then cannot favor sequences closer and closer to the target region.

Such cases are possible, and usually gpuccio is talking about cases like this. But gpuccio does not require them in order to have Functional Information. gpuccio does not rule out that the region could be defined by a high level of function, with lower levels of function in sequences outside of the region, so that there could be paths allowing evolution to reach the target region of sequences.

An example in which gpuccio recognizes that lower levels of function can exist outside the target region is found here, where gpuccio is discussing natural and artificial selection:

Then you can ask: why have I spent a lot of time discussing how NS (and AS) can in some cases add some functional information to a sequence (see my posts #284, #285 and #287)

There is a very good reason for that, IMO.

I am arguing that:

1) It is possible for NS to add some functional information to a sequence, in a few very specific cases, but:

2) Those cases are extremely rare exceptions, with very specific features, and:

3) If we understand well what are the feature that allow, in those exceptional cases, those limited “successes” of NS, we can easily demonstrate that:

4) Because of those same features that allow the intervention of NS, those scenarios can never, never be steps to complex functional information.

Jack Szostak defined functional information by having us define a cutoff level of function to define a set of sequences that had function greater than that, without any condition that the other sequences had zero function. Neither did Durston. And as we’ve seen gpuccio associates his argument with theirs.

So this second possibility could not be the source of gpuccio’s general assertion about 500 bits of functional information being a reliable indicator of design, however much gpuccio concentrates on such cases.

Possibility #3. That there is an additional condition in gpuccio’s Functional Information, one that does not allow us to declare it to be present if there is a way for evolutionary processes to achieve that high a level of function. In short, if we see 500 bits of Szostak’s functional information, and if it can be put into the genome by natural evolutionary processes such as natural selection then for that reason we declare that it is not really Functional Information. If gpuccio is doing this, then gpuccio’s Functional Information is really a very different animal than Szostak’s functional information.

Is gpuccio doing that? gpuccio does associate his argument with William Dembski’s, at least in some of his statements.  And William Dembski has defined his Complex Specified Information in this way, adding the condition that it is not really CSI unless it is sufficiently improbable that it be achieved by natural evolutionary forces (see my discussion of this here, in the section on “Dembski’s revised CSI argument” that refers to Dembski’s statements here). And Dembski’s added condition renders use of his CSI a useless afterthought to the design inference.

gpuccio does seem to be making a similar condition. Dembski’s added condition comes in via the calculation of the “probability” of each genotype. In Szostak’s definition, the probabilities of sequences are simply their frequencies among all possible sequences, with each being counted equally. In Dembski’s CSI calculation, we are instead supposed to compute the probability of the sequence given all evolutionary processes, including natural selection.

gpuccio has a similar condition in the requirements for concluding that complex functional information is present. We can see it at step (6) here:

If our conclusion is yes, we must still do one thing. We observe carefully the object and what we know of the system, and we ask if there is any known and credible algorithmic explanation of the sequence in that system. Usually, that is easily done by excluding regularity, which is easily done for functional specification. However, as in the particular case of functional proteins a special algorithm has been proposed, neo darwininism, which is intended to explain non regular functional sequences by a mix of chance and regularity, for this special case we must show that such an explanation is not credible, and that it is not supported by facts. That is a part which I have not yet discussed in detail here. The necessity part of the algorithm (NS) is not analyzed by dFSCI alone, but by other approaches and considerations. dFSCI is essential to evaluate the random part of the algorithm (RV). However, the short conclusion is that neo darwinism is not a known and credible algorithm which can explain the origin of even one protein superfamily. It is neither known nor credible. And I am not aware of any other algorithm ever proposed to explain (without design) the origin of functional, non regular sequences.

In other words, you, the user of the concept, are on your own. You have to rule out that natural selection (and other evolutionary processes) could reach the target sequences. And once you have ruled it out, you have no real need for the declaration that complex functional information is present.

I have gone on long enough. I conclude that the rule that observing 500 bits of functional information allows us to conclude in favor of Design (or at any rate, to rule out normal evolutionary processes as the source of the adaptation) simply does not exist. Or if it does exist, it exists as a useless add-on to an argument that draws that conclusion for some other reason, leaving the really hard work to the user.

Let’s end by asking gpuccio some questions:
1. Is your “functional information” the same as Szostak’s?
2. Or does it add the requirement that there be no function in sequences that are outside of the target set?
3. Does it also require us to compute the probability that the sequence arises as a result of normal evolutionary processes?

1,971 thoughts on “Does gpuccio’s argument that 500 bits of Functional Information implies Design work?”

  1. Mung: His Response:

    It’s always a transition.

    So if you have 499 bits and one bit gets added you don’t then infer design because now you are at 500 bits. The transition itself must consist of at least 500 bits.

    You are not being entirely consistent here. On the one hand you are adamant that we use a “strict interpretation” of Hazen-Szostak functional information, but on the other one you are perfectly fine with gpuccio saying that functional information has been “carried over” from MDH to LDH. Excuse me? What function might that be? Would that be sequence similarity, fitness, “enzyme capability”? NOT lactate dehydrogenase activity in any case.

    As to the redefining of the 500-bit rule by adding the new it-has-to-be-introduced-in-one-transition requirement; I will wait until gpuccio makes up his mind on this one (I suspect he considers both statements to be equivalent, because he regards function as binary on-off).

  2. Rumraket: But here’s the problem: Mung, Gpuccio, and Bill Cole can just say “nah, I don’t like fitness as the arbiter of the minimum threshold for a biological function”, and we’re back to just throwing opinions at each other.

    If they reject fitness as an appropriate measure of function, then they should also refrain from using purifying selection as an argument that some optimal configuration has been reached. That doesn’t make sense.

  3. colewd: I just did a calculation using Gpuccio’s method at uniprot. The total FI count of LDH initially was 100 bits as only 6% is conserved between rabbits and bacteria.

    We were discussing lactate dehydrogenase in Apicomplexa (Plasmodium and such). It has a different evolutionary history from that in animals. Are you sure you looked at the proper protein?

  4. gpuccio@UD

    Now, Dave and Dale are of course the two substrates: for example, malate and lactate. The existing phrase is the dehydrogenase structure: the two enzymes, as said, “share a fold and catalytic mechanism”. That corresponds to the information in the phrase about the hair, etc.

    But the two enzymes also “possess strict specificity for their substrates”: in our example, that specificity is the name.

    The name is all that changes. And the change is rather simple. It is 5 bits in the language example, it is certainly more in the enzymes example, but not much more.

    This is mighty interesting.

    So gpuccio has admitted that a promiscuous enzyme can switch substrate by evolutionary mechanisms. This seems to run counter to his stubborn refusal to admit that such a thing was possible in E3 ubiquitin-protein ligases. So what happened to all the specificity and strict control he claimed was present in the ubiquitin system? Do E3 ubiquitin-protein ligases use different biochemistry than metabolic enzymes?

  5. Oh and can somebody please tell the man that

    “The boy called Dave is from London, and he has brown hair and brown eyes. He is less than 30 years old.”

    and

    “The boy called Dale is from London, and he has brown hair and brown eyes. He is less than 30 years old.”

    communicate different messages and therefore have a different function. After all, the first sentence tells us absolutely nothing about Dale, right? So if he wants to calculate the FI involved in the transition to this novel function (giving us information about Dale), he should use the difference between zero (first sentence) and the FI of the complete latter sentence.

  6. colewd: I just did a calculation using Gpuccio’s method at uniprot. The total FI count of LDH initially was 100 bits as only 6% is conserved between rabbits and bacteria.

    Rofl how the hell are you calculating FI?

    It’s -log2(n/a^L), where n is the total number of sequences that meet the minimum threshold for function, a is the alphabet size, and L is the sequence length of the enzyme in question.

    At most we could say we know of a few tens of thousands of variants of LDHs. But let’s just be charitable and say we know of one million different variants. And there are 20 different possible amino acids, and the sequence is about 330 long.

    -log2(10^6/20^330) = 1406 bits.

    How many sequences would it take to reduce it to 100 bits?

    -log2((10^399)/(20^330)) = 100 bits.

    So 10^399 sequences would be required. There are not that many LDH enzymes known, I’m quite certain of that.
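    Those numbers check out in log space (a sketch with a helper name of my own; 20^330 is far too big for Float division, so compute FI as L*log2(a) - log2(n) instead):

    ```ruby
    # FI = -log2(n / a^L), computed as L*log2(a) - log2(n) to avoid overflow.
    def fi_bits(log2_n, a, len)
      len * Math.log2(a) - log2_n
    end

    fi_bits(Math.log2(1_000_000), 20, 330)  # 10^6 variants  -> about 1406 bits
    fi_bits(399 * Math.log2(10), 20, 330)   # 10^399 variants -> about 100.8 bits
    ```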

  7. But Bill, here’s another problem: None of us actually know the total amount of sequences that meet the minimum threshold for function. We see that a handful of sequences are used in organisms on Earth, but are those the only ones possible? That’d be like saying there are no other possible species of living organism besides the ones we happen to know have existed on Earth.

  8. What I conclude from gpuccio’s responses is that the observation of 500 bits of FI does not allow us to conclude for Design, in spite of gpuccio saying that it does in his Sharpshooter post.

    In a population with more than one sequence, natural selection can change the frequencies of the sequences until one sequence replaces another. The individual changes of FI are small, as the average FI of sequences in the population can change by, say, 0.001 of a bit in one generation.

    Instead, gpuccio only concludes for Design when there is no path to the current sequence that natural selection with mutation could have followed. So basically gpuccio is just making the isolated island argument, and the 500 Bits Rule is just gpuccio’s rule for how big a step there has to be on a path for mutation to be unable to climb it. Mostly this is a matter of needing to step from one sequence over to another which is different at many sites. If there is a continuously uphill path on a fitness surface, with each step being a change at one position in a protein, then whether the fitness steps are small or large is not the issue.

    The case where gpuccio’s argument works is where all sequences outside of a set have zero “function” and where one starts hundreds of amino acid substitutions away from the isolated island. In cases where many sequences have small amounts of function, the argument will not work.

    Note that the 500 Bits Rule was used very differently by Dembski, in his Law of Conservation of Complex Specified Information, which was supposed to guarantee that natural selection could not reach the presently observed sequences, without any consideration of paths. However Dembski’s theorem did not show what he wanted it to show.

  9. Rumraket,
    Versed as I am in cole-puccio calculations, I think I can help you out here.
    Here’s what you do:
    You take two (WTF? Yes, Two!) distant relatives, like bacteria and rabbit, and select what you think are homologs (they have the same name, okay?) and see how many identical amino acids they share. You assume, based on your two-way comparison(!), that every single shared amino acid is absolutely required for function. If somebody (REC @ UD in 2014) points out that, despite having hundreds of 2-way shared aa’s, your favorite ATP synthase subunit carries ZERO universally conserved amino acids, you silently switch to talking about the other subunit. You also assume that function is binary: “either it makes ATP, or it doesn’t”.
    Finally, you assume that amino acids that are not identical in your two-way comparison are completely free to vary: they can be any amino acid. This “let’s ignore conservative changes because I don’t understand them” shortcut has hilarious consequences, particularly if you take the human homolog as the acme, and calculate how “far away” lesser beasts lie. As you move away to less closely related sequences, the increase in conservative changes causes your C-P Functional Ignorance measure to drop precipitously.
    Here’s the calculation.
    LDH in bacteria and rabbit share 23 identical amino acids (6.9% identity), that’s 4.32 x 23 bits of Functional Ignorance = 99.4
    Leading to the inevitable conclusion that there are in fact 10^399 different 330-amino-acid sequences that will perform LDH function, because there are 307 unrestricted amino acids: 20^307 ≈ 10^399
    It’s deranged.
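    The recipe described above reduces to a few lines (a sketch of the described method using the 23-of-330 figures, not anyone’s actual code):

    ```ruby
    # Two-way-comparison "FI": each strictly conserved position is scored as
    # log2(20) bits, and every other position is assumed free to be any residue.
    bits_per_site = Math.log2(20)              # about 4.32 bits
    conserved     = 23                         # identical sites, bacteria vs rabbit
    seq_len       = 330
    fi            = conserved * bits_per_site  # about 99.4 bits
    free_sites    = seq_len - conserved        # 307 "unrestricted" positions
    implied       = 20**free_sites             # a 400-digit number, about 10^399
    ```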

  10. DNA_Jock: It’s deranged.

    There are few other words for it. I’m speechless. What the fuck are they doing?

    It has nothing to do with Hazen & Szostak. It’s vaguely similar in that they take -log2 of something and call it FI, but that’s it, that’s where the similarity stops. Fucking LOL.

  11. Rumraket: There are few other words for it. I’m speechless. What the fuck are they doing?

    It has nothing to do with Hazen & Szostak. It’s vaguely similar in that they take -log2 of something and call it FI, but that’s it, that’s where the similarity stops.

    I bet you don’t even see the irony.

    It’s vaguely similar in that you take -log2 of some things and add them up and call it FI, but that’s it, that’s where the similarity stops.

  12. Rumraket: Then your post was badly worded because you seem to explicitly reject my statement that it is a side-issue whether FI for individual functions can be added:

    Or you took the first sentence of what I wrote out of context, because right after I wrote that sentence I gave an explanation of what I meant in the following sentences. It happens.

  13. Rumraket: The matter is pretty much settled then, we are left with only Gpuccio and Bill Cole thinking that calculating 500 bits of FI establishes that evolution couldn’t do it and so on that basis one can infer design.

    Seems like they’ve provided you with a falsification criterion.

  14. Joe Felsenstein: I must have missed the part of his clarification where he said that his statement in the Sharpshooter post was incorrect and that readers should not pay attention to it.

    Feel free to hop over there and take it up with him. You won’t get banned here for posting at UD.

  15. Corneel: On the one hand you are adamant that we use a “strict interpretation” of Hazen-Szostak functional information, but on the other one you are perfectly fine with gpuccio saying that functional information has been “carried over” from MDH to LDH. Excuse me?

    Please don’t misunderstand me. I am not endorsing the argument.

    I’m trying to get both sides to be talking about the same thing, about the same concepts, about the same requirements, so that they are not talking past each other and just arguing for the sake of arguing. 😉

    Earlier in this very thread I objected to the approach and intend to stick with that unless someone can show I should change my mind. So I am not “perfectly fine” with anything at this point. I remain skeptical of both sides.

  16. Corneel: communicate different messages and therefore have a different function.

    No no. They have the same function. To provide information about hair color, eye color, age, and place of origin. 😉

  17. Joe Felsenstein: In a population with more than one sequence, natural selection can change the frequencies of the sequences until one sequence replaces another.

    The frequency of the sequences seems irrelevant to me. For calculating FI, all sequences that are the same are treated as one single sequence.

  18. Mung: No no. They have the same function. To provide information about hair color, eye color, age, and place of origin. 😉

    I have just decided that I hate analogies

  19. Corneel: I have just decided that I hate analogies

    Given how arbitrary “function” and “degree of function” can be I would urge extreme caution.

  20. Rumraket: It’s -log2(n/a^L), where n is the total number of sequences that meet the minimum threshold for function, a is the alphabet size, and L is the sequence length of the enzyme in question.

    Another question for Rumraket:

    Alphabet consists of two characters, 0 and 1
    Sequence length is 8 characters
    Minimum degree of function is all characters in the sequence are the same

    00000000
    11111111

    256 possible sequences
    2 meet the minimum degree of function
    I calculate 7 bits of FI

    puts -Math.log2( 2.0 / (2**8) )

    Are you with me so far?

    Now let’s decompose that sequence into “parts” and calculate the FI for each part individually and add them up. I’ll divide into 4 sequences, each of length 2, and keep the same requirement. Now we have four sequences, two of which meet our minimum degree of function, 00 and 11. Still with me?

    puts -Math.log2( 2.0 / (2**2) )

    FI = 1 bit

    Now add them up: 1+1+1+1 = 4 bits of FI

    4 bits is not equal to 7 bits.
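    Mung’s two snippets can be combined into one runnable check (same numbers as above, wrapped in a helper whose name is mine):

    ```ruby
    # FI of a set of sequences: -log2(n_functional / alphabet_size**length)
    def fi(n_functional, alphabet_size, length)
      -Math.log2(n_functional.to_f / alphabet_size**length)
    end

    whole = fi(2, 2, 8)  # 00000000 and 11111111 out of 256 -> 7.0 bits
    part  = fi(2, 2, 2)  # 00 and 11 out of 4               -> 1.0 bit
    parts = 4 * part     # four length-2 parts              -> 4.0 bits, not 7.0
    ```

    The discrepancy has a simple source: the four qualifying parts can be combined in 2^4 = 16 ways, but only 2 of those combinations satisfy the whole-sequence condition, and log2(16/2) = 3 bits is exactly the shortfall.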

  21. Mung:
    Question for Rumraket:

    500 bits of FI – undefined = ??? bits of FI

    You can’t subtract something undefined from 500.

  22. Mung: It’s vaguely similar in that you take -log2 of some things and add them up and call it FI, but that’s it, that’s where the similarity stops.

    Yeah like Gpuccio does, but you can’t seem to say what’s wrong with that. Other than merely asserting that it’s wrong. Your whole case against doing this has been a colossal failure.

  23. Mung: Seems like they’ve provided you with a falsification criterion.

    And since you disagree with them I’ll leave you to explain to them where they go wrong.

  24. Rumraket: You can’t subtract something undefined from 500.

    Do you likewise agree that you cannot ADD something undefined to 500 bits of FI in order to arrive at a sum of 500 bits of FI?

    500 bits of FI + undefined = ??? bits of FI

  25. Rumraket: And since you disagree with them I’ll leave you to explain to them where they go wrong.

    I’ve never claimed that you can evolve 500 bits of FI.

  26. Mung,

    I’ve never claimed that you can evolve 500 bits of FI.

    Do you believe you can design 500 bits of FI? 🙂

  27. colewd: Do you believe you can design 500 bits of FI

    I believe it is possible to intentionally (by design) create something to which the Hazen/Szostak measure could be applied and for which the value for FI would be at or above 500 bits.

    But I don’t believe that proves or otherwise demonstrates anything other than that it is possible to intentionally (by design) create something to which the Hazen/Szostak measure could be applied and for which the value for FI would be at or above 500 bits.

    Hardly profound. 🙂

  28. Mung: Do you likewise agree that you cannot ADD something undefined to 500 bits of FI in order to arrive at a sum of 500 bits of FI?

    500 bits of FI + undefined = ??? bits of FI

    Yes I agree with that.

  29. Mung:
    puts -Math.log2( 2.0 / (2**2) )

    FI = 1 bit

    Now add them up: 1+1+1+1 = 4 bits of FI

    4 bits is not equal to 7 bits.

    Thank you, finally you give an actual fucking argument. You have given an example that shows that we can’t necessarily expect the FI for individual sections of the sequence to sum up to the FI for the whole. Trying a host of different examples I see where the discrepancies crop up. When the sequences are very short, or when we divide the total sequence up into small enough parts, even a small discrepancy will sum to a large one. We could still sum FI for individual parts to get a useful ballpark number for the FI for the whole when we’re not dealing with very short sequences.

  30. Mung: I’ve never claimed that you can evolve 500 bits of FI.

    You don’t have to believe that in order to know as you do, that their inference is invalid.

  31. Rumraket: Thank you, finally you give an actual fucking argument.

    You’re welcome. Apparently my intuition functions better than yours.

    But it’s more than that, I addressed a specific claim of yours, which if you had done the simple math exercise I just did you could have avoided and saved the both of us much time and effort.

    You were just making stuff up. Doing the very thing you find so objectionable in others.

    Take for example this post of yours.

    iirc, you had another post where you said if the function of each part was the same then you’d get the same result whether you calculate it for the whole or sum up the parts.

  32. Mung,

    But I don’t believe that proves or otherwise demonstrates anything other than that it is possible to intentionally (by design) create something to which the Hazen/Szostak measure could be applied and for which the value for FI would be at or above 500 bits.

    So if we agree that design is capable of generating 500 bits why could it not be a reasonable inference for the observation of a protein that required 500 bits of FI to generate by the best approximation?

  33. Rumraket:

    DNA_Jock: It’s deranged.

    There are few other words for it. I’m speechless. What the fuck are they doing?

    It has nothing to do with Hazen & Szostak. It’s vaguely similar in that they take -log2 of something and call it FI, but that’s it, that’s where the similarity stops. Fucking LOL.

    It gets worse (as people have noted here). The lack of substitutions at individual sites in the sequence is taken by gpuccio as evidence that all possible combinations of substitutions at those sites have been explored and rejected, adequately enough that we can be pretty sure that there is no other “island” of sequences with that level of function. For which assertion there is no possible justification.

    So in the end it’s just an island-of-function argument, and the conservation-of-sites observation does not add anything meaningful.

  34. Mung: You’re welcome. Apparently my intuition functions better than yours.

    Perhaps, perhaps it just did for this case. Perhaps you happened to pick a particular example to test it out, not having any idea whether you could show it wrong or not, and found that you could, and are now telling a pleasing fiction to account for your previous objections. Who knows? I don’t, but your showing with a concrete example that the rule doesn’t hold is enough for me to agree that we can’t sum FI for individual items and expect to get the exact same FI as we would if we treated it as one long sequence.

    But it’s more than that, I addressed a specific claim of yours, which if you had done the simple math exercise I just did you could have avoided and saved the both of us much time and effort.

    I did do an example (it’s at the bottom of my post you link). It just happened to be of the particular nature where the sum actually does exactly equal the whole. I now realize this is an artifact of the case where only 1 sequence meets the minimum threshold and that happened to be the one case I picked to “test” my intuition.

    If you’d just given a counterexample to begin with, rather than merely declaring it wrong with no argument, you’d also have saved us much time and effort.

    You were just making stuff up. Doing the very thing you find so objectionable in others.

    I was taking something and trying to put it to use in an area it had not been explicitly “designed for”. There’s a difference between that and just making shit up.

    But I can admit when I was wrong about something and I was wrong here.

    Take for example this post of yours.

    I’d change a couple of things about it now. I’d clarify, as I did in a later post, that the claim that more functional items add up to more total FI than fewer functional items does not necessarily hold: a system composed of only a few individual sequences can exhibit more FI than a system composed of many more sequences, provided the few sequences are sufficiently long.

    And I’d use a few more examples to show that the relationship is not exact, but is generally accurate enough to give good approximations for most of the “real world” cases from biology we have discussed.

    iirc, you had another post where you said that if the function of each part was the same, then you’d get the same result whether you calculate FI for the whole or sum it up over the parts.

    You’re going to have to be specific as I don’t remember advancing that particular case.

  35. Joe Felsenstein,

    It gets worse (as people have noted here). The lack of substitutions at individual sites in the sequence is taken by gpuccio as evidence that all possible combinations of substitutions at those sites have been explored and rejected, thoroughly enough that we can be pretty sure there is no other “island” of sequences with that level of function. For that assertion there is no possible justification.

    There are other sequences that can perform similar functions and that is what the protein super families are about. The sequences can be very different but when you compare them on uniprot over 100 million years or more of separation they are also in a preserved condition despite being on very different islands.

  36. colewd:
    Joe Felsenstein,

    There are other sequences that can perform similar functions and that is what the protein super families are about. The sequences can be very different but when you compare them on uniprot over 100 million years or more of separation they are also in a preserved condition despite being on very different islands.

    Exactly as evolution predicts.

  37. colewd: There are other sequences that can perform similar functions and that is what the protein super families are about. The sequences can be very different but when you compare them on uniprot over 100 million years or more of separation they are also in a preserved condition despite being on very different islands.

    Exactly as design predicts.

  38. Mung: Exactly as design predicts.

    Why would design make different ATP synthases instead of just re-using the same one? That doesn’t make sense.

    See, this is you providing an actual, genuine example of somebody just making shit up. You thought you could just say the same thing and pretend you were making just as much sense, without giving it a second’s thought.

    Explain why design would predict weakly similar ATP synthases?

  39. Joe Felsenstein: Can you point out to us something that design doesn’t predict?

    Sure. Chaos. A world without design. That there is no designer. That random genetic drift is a designer substitute. That the genome is nothing but junk. That pigs can fly. That Jesus is coming back in 1987. That Elizabeth will return soon to save us all. etc. etc.

  40. Rumraket: Explain why design would predict weakly similar ATP synthases?

    I don’t even know what “weakly similar” means.

  41. Rumraket: Why would design make different ATP synthases instead of just re-using the same one? That doesn’t make sense.

    Why would evolution?

    Why would design make different lymphocytes instead of just re-using the same one?

  42. Mung: I don’t even know what “weakly similar” means.

    Low levels of sequence similarity between proteins that have the same function, and pretty much exactly the same structure, like the much discussed alpha and beta subunits of the hexameric structure of ATP synthase. Comparing V- and F-type subunits we get very low sequence similarity, but they still fold into the same structure and perform the same function.

    Mung: Why would evolution?

    I take your lack of an actual answer as a concession that design doesn’t actually predict it.

    Evolution predicts it because when two lineages split (and all life uses ATP synthases, and there has been lots of splitting, some ancient, some more recent), the mutations that happen in either are independent of each other. Mutations happening in lineage A are not passed on to lineage B, and mutations happening in lineage B are not passed on to lineage A. So over time they will inexorably diverge from each other. At the same time, because the function is important, there is purifying selection to keep the proteins functional.

    Why would design make different lymphocytes instead of just re-using the same one?

    I suppose you’re asking this because you think what applies to lymphocytes must apply to ATP synthase: there are differences between them because of different functional constraints, or different jobs. Do I have that right?

  43. Mung: Sure. Chaos. A world without design. That there is no designer. That random genetic drift is a designer substitute. That the genome is nothing but junk. That pigs can fly. That Jesus is coming back in 1987. That Elizabeth will return soon to save us all. etc. etc.

    I can come up with a design hypothesis for all of those without exception. It is infinitely malleable.

  44. J-Mac:
    Yeah.. So does nonsense..

    1200 comments and pending…

    Yet still no suggestion from gpuccio or anyone else how “gpuccio’s argument that 500 bits of Functional Information implies Design” works.
