Does gpuccio’s argument that 500 bits of Functional Information implies Design work?

On Uncommon Descent, poster gpuccio has been discussing “functional information”. Most of gpuccio’s argument is a conventional “islands of function” argument. Not being very knowledgeable about biochemistry, I’ll happily leave that argument to others.

But I have been intrigued by gpuccio’s use of Functional Information, in particular gpuccio’s assertion that if we observe 500 bits of it, this is a reliable indicator of Design, as here, at about the 11th sentence of point (a):

… the idea is that if we observe any object that exhibits complex functional information (for example, more than 500 bits of functional information) for an explicitly defined function (whatever it is) we can safely infer design.

I wonder how this general method works. As far as I can see, it doesn’t. There would seem to be three possible ways of arguing for it, and in the end, two don’t work and one is just plain silly. Which of these is the basis for gpuccio’s statement? Let’s investigate …

A quick summary

Let me list the three ways, briefly.

(1) The first is the argument using William Dembski’s (2002) Law of Conservation of Complex Specified Information. I have argued (2007) that this is formulated in such a way as to compare apples to oranges, and thus is not able to reject normal evolutionary processes as explanations for the “complex” functional information.  In any case, I see little sign that gpuccio is using the LCCSI.

(2) The second is the argument that the functional information indicates that only an extremely small fraction of genotypes have the desired function, and the rest are all alike in totally lacking any of this function.  This would prevent natural selection from following any path of increasing fitness to the function, and the rareness of the genotypes that have nonzero function would prevent mutational processes from finding them. This is, as far as I can tell, gpuccio’s islands-of-function argument. If such cases can be found, then explaining them by natural evolutionary processes would indeed be difficult. That is gpuccio’s main argument, and I leave it to others to argue with its application in the cases where gpuccio uses it. I am concerned here, not with the islands-of-function argument itself, but with whether the design inference from 500 bits of functional information is generally valid.

We are asking here whether, in general, observation of more than 500 bits of functional information is “a reliable indicator of design”. And gpuccio’s definition of functional information is not confined to cases of islands of function, but also includes cases where there would be a path along which function increases. In such cases, seeing 500 bits of functional information, we cannot conclude from this that it is extremely unlikely to have arisen by normal evolutionary processes. So the general rule that gpuccio gives fails, as it is not reliable.

(3) The third possibility is an additional condition that is added to the design inference. It does not simply define “complex functional information” as a case where we can define a level of function that makes the probability of the set less than 2^{-500}. It declares, in addition, that unless the set of genotypes is effectively unreachable by normal evolutionary processes, we don’t call the pattern “complex functional information”. That additional condition allows us to safely conclude that normal evolutionary forces can be dismissed — by definition. But it leaves the reader to do the heavy lifting, as the reader has to determine that the set of genotypes has an extremely low probability of being reached. And once they have done that, they will find that the additional step of concluding that the genotypes have “complex functional information” adds nothing to our knowledge: CFI becomes a useless add-on that sounds deep and mysterious but tells you nothing beyond what you already knew. And there seems to be some indication that gpuccio does use this additional condition.

Let us go over these three possibilities in some detail. First, what is the connection of gpuccio’s “functional information” to Jack Szostak’s quantity of the same name?

Is gpuccio’s Functional Information the same as Szostak’s Functional Information?

gpuccio acknowledges that gpuccio’s definition of Functional Information is closely connected to Jack Szostak’s definition of it. gpuccio notes here:

Please, not[e] the definition of functional information as:

“the fraction of all possible configurations of the system that possess a degree of function >=
Ex.”

which is identical to my definition, in particular my definition of functional information as the
upper tail of the observed function, that was so much criticized by DNA_Jock.

(I have corrected gpuccio’s typo of “not” to “note”, JF)

We shall see later that there may be some ways in which gpuccio’s definition
is modified from Szostak’s. Jack Szostak and his co-authors never attempted any use of his definition to infer Design. Nor did Leslie Orgel, whose Specified Information (in his 1973 book The Origins of Life) preceded Szostak’s. So the part about design inference must come from somewhere else.

gpuccio seems to be making one of three possible arguments:

Possibility #1 That there is some mathematical theorem that proves that ordinary evolutionary processes cannot result in an adaptation that has 500 bits of Functional Information.

Use of such a theorem was attempted by William Dembski, in his Law of Conservation of Complex Specified Information, explained in Dembski’s book No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002). But Dembski’s LCCSI theorem did not do what Dembski needed it to do. I have explained why in my own article on Dembski’s arguments (here). Dembski’s LCCSI changed the specification between the “before” and “after” of the evolutionary process, and so he was comparing apples to oranges.

In any case, as far as I can see gpuccio has not attempted to derive gpuccio’s argument from Dembski’s, and gpuccio has not directly invoked the LCCSI, or provided a theorem to replace it.  gpuccio said in a response to a comment of mine at TSZ,

Look, I will not enter the specifics of your criticism to Dembski. I agre[e] with Dembski in most things, but not in all, and my arguments are however more focused on empirical science and in particular biology.

While thus disclaiming that the argument is Dembski’s, on the other hand gpuccio does associate the argument with Dembski here by saying that

Of course, Dembski, Abel, Durston and many others are the absolute references for any discussion about functional information. I think and hope that my ideas are absolutely derived from theirs. My only purpose is to detail some aspects of the problem.

and by saying elsewhere that

No generation of more than 500 bits has ever been observed to arise in a non design system (as you know, this is the fundamental idea in ID).

That figure being Dembski’s, this leaves it unclear whether gpuccio is or is not basing the argument on Dembski’s. But gpuccio does not directly invoke the LCCSI, or try to come up with some mathematical theorem that replaces it.

So possibility #1 can be safely ruled out.

Possibility #2. That the target region in the computation of Functional Information consists of all of the sequences that have nonzero function, while all other sequences have zero function. As there is no function elsewhere, natural selection for this function then cannot favor sequences closer and closer to the target region.

Such cases are possible, and usually gpuccio is talking about cases like this. But gpuccio does not require them in order to have Functional Information. gpuccio does not rule out that the region could be defined by a high level of function, with lower levels of function in sequences outside of the region, so that there could be paths allowing evolution to reach the target region of sequences.

An example in which gpuccio recognizes that lower levels of function can exist outside the target region is found here, where gpuccio is discussing natural and artificial selection:

Then you can ask: why have I spent a lot of time discussing how NS (and AS) can in some cases add some functional information to a sequence (see my posts #284, #285 and #287)

There is a very good reason for that, IMO.

I am arguing that:

1) It is possible for NS to add some functional information to a sequence, in a few very specific cases, but:

2) Those cases are extremely rare exceptions, with very specific features, and:

3) If we understand well what are the feature[s] that allow, in those exceptional cases, those limited “successes” of NS, we can easily demonstrate that:

4) Because of those same features that allow the intervention of NS, those scenarios can never, never be steps to complex functional information.

Jack Szostak defined functional information by having us choose a cutoff level of function, which delimits the set of sequences whose function exceeds that level, without any condition that the other sequences have zero function. Neither did Durston. And as we’ve seen, gpuccio associates his argument with theirs.

So this second possibility could not be the source of gpuccio’s general assertion about 500 bits of functional information being a reliable indicator of design, however much gpuccio concentrates on such cases.

Possibility #3. That there is an additional condition in gpuccio’s Functional Information, one that does not allow us to declare it to be present if there is a way for evolutionary processes to achieve that high a level of function. In short, if we see 500 bits of Szostak’s functional information, and if it can be put into the genome by natural evolutionary processes such as natural selection then for that reason we declare that it is not really Functional Information. If gpuccio is doing this, then gpuccio’s Functional Information is really a very different animal than Szostak’s functional information.

Is gpuccio doing that? gpuccio does associate his argument with William Dembski’s, at least in some of his statements. And William Dembski has defined his Complex Specified Information in this way, adding the condition that it is not really CSI unless it is sufficiently improbable that it could be achieved by natural evolutionary forces (see my discussion of this here, in the section on “Dembski’s revised CSI argument”, which refers to Dembski’s statements here). And Dembski’s added condition renders use of his CSI a useless afterthought to the design inference.

gpuccio does seem to be imposing a similar condition. Dembski’s added condition comes in via the calculation of the “probability” of each genotype. In Szostak’s definition, the probabilities of sequences are simply their frequencies among all possible sequences, with each sequence counted equally. In Dembski’s CSI calculation, we are instead supposed to compute the probability of the sequence arising, given all evolutionary processes, including natural selection.
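To make that contrast concrete, here is a minimal sketch in Python of Szostak’s calculation; the reference sequence and the measure of function below are invented purely for illustration. Note that nothing in it asks how a sequence might have arisen: every genotype is counted equally.

```python
import math
from itertools import product

def functional_information(genotypes, function, threshold):
    """Szostak-style FI: -log2 of the fraction of all genotypes whose
    measured function meets or exceeds the chosen threshold."""
    values = [function(g) for g in genotypes]
    p = sum(v >= threshold for v in values) / len(values)
    return -math.log2(p) if p > 0 else float("inf")

# Toy setting: all DNA 10-mers, with "function" defined as the number
# of positions matching an arbitrary reference sequence.
REFERENCE = "ACGTACGTAC"

def toy_function(seq):
    return sum(a == b for a, b in zip(seq, REFERENCE))

genotypes = ["".join(s) for s in product("ACGT", repeat=10)]
print(functional_information(genotypes, toy_function, threshold=8))
# ~11.23 bits: 436 of the 4^10 = 1,048,576 sequences meet the threshold
```

A Dembski-style calculation would instead weight each sequence by its probability of arising under all evolutionary processes, including natural selection; that is precisely the added condition at issue here.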

gpuccio has a similar condition in the requirements for concluding that complex functional information is present. We can see it at step (6) here:

If our conclusion is yes, we must still do one thing. We observe carefully the object and what we know of the system, and we ask if there is any known and credible algorithmic explanation of the sequence in that system. Usually, that is easily done by excluding regularity, which is easily done for functional specification. However, as in the particular case of functional proteins a special algorithm has been proposed, neo darwininism, which is intended to explain non regular functional sequences by a mix of chance and regularity, for this special case we must show that such an explanation is not credible, and that it is not supported by facts. That is a part which I have not yet discussed in detail here. The necessity part of the algorithm (NS) is not analyzed by dFSCI alone, but by other approaches and considerations. dFSCI is essential to evaluate the random part of the algorithm (RV). However, the short conclusion is that neo darwinism is not a known and credible algorithm which can explain the origin of even one protein superfamily. It is neither known nor credible. And I am not aware of any other algorithm ever proposed to explain (without design) the origin of functional, non regular sequences.

In other words, you, the user of the concept, are on your own. You have to rule out that natural selection (and other evolutionary processes) could reach the target sequences. And once you have ruled it out, you have no real need for the declaration that complex functional information is present.

I have gone on long enough. I conclude that the rule that observation of 500 bits of functional information allows us to conclude in favor of Design (or at any rate, to rule out normal evolutionary processes as the source of the adaptation) is simply nonexistent. Or if it does exist, it exists only as a useless add-on to an argument that draws that conclusion for some other reason, leaving the really hard work to the user.

Let’s end by asking gpuccio some questions:
1. Is your “functional information” the same as Szostak’s?
2. Or does it add the requirement that there be no function in sequences that
are outside of the target set?
3. Does it also require us to compute the probability that the sequence arises as a result of normal evolutionary processes?

1,971 thoughts on “Does gpuccio’s argument that 500 bits of Functional Information implies Design work?”

  1. Even a qubit of “non-functional” information on subatomic level clearly implies design… Since everything is not really real (matter doesn’t exist) unless there is a conscious observer, your speculations are as limited as speculation can be: unverifiable nonsense… because you assume that bits of information just appeared out of nowhere not to mention functional information…

    You should look into quantum mechanics. It might change your life… It did change mine…

  2. From the OP:

    And gpuccio’s definition of functional information is not confined to cases of islands of function, but also includes cases where there would be a path along which function increases.

    Hi Joe. Enjoyed your OP. Is this “function increases” something brought up by gpuccio or is it something you introduced? Saying that a function can increase by 500 bits seems just vague enough to be meaningless. What is it that increases, the function?

    Say you’re working on one of your software projects and you just keep adding more and more lines of code and more and more logic into one of your functions without breaking the function up into smaller functions. Would that be an increasing function?

  3. Mung,

    There are genotypes (say DNA sequences) and we have a biological function that we have defined. If we know, quantitatively, how well a given genotype carries out that function, we can associate that number with the genotype. (Mathematically, it is a function of the genotype — the word “function” is being used in two different senses here).

    Jack Szostak defined “functional information” as -log2(P), where P is the fraction of sequences whose (biological) function exceeds some threshold level that we have defined. You can read Szostak’s paper (for free) here.

    The issue with gpuccio’s use of FI is whether additional conditions have been added, and whether seeing 500 bits of FI allows us to infer Design.

    And as for whether “nonfunctional information” is an oxymoron: if you set the threshold low enough to include genotypes with function zero, then you include all of them, and P = 1, so that the amount of FI is -log2(1) = 0.
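    A trivial numerical sketch of that definition, with made-up function values:

    ```python
    import math

    # Ten hypothetical genotypes with measured (biological) function values.
    values = [0.0, 0.0, 0.0, 0.0, 0.1, 0.2, 0.3, 0.5, 0.8, 0.9]

    def fi(threshold):
        """FI = -log2(P), with P the fraction of genotypes at or above threshold."""
        p = sum(v >= threshold for v in values) / len(values)
        return -math.log2(p) if p < 1 else 0.0

    print(fi(0.0))   # 0.0 bits: the threshold admits every genotype, P = 1
    print(fi(0.5))   # ~1.74 bits: 3 of 10 genotypes qualify
    print(fi(0.9))   # ~3.32 bits: 1 of 10 genotypes qualifies
    ```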

  4. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree.

    here

    Apparently you have to identify a “specific function” to a “specified degree” and then create arbitrary configurations of some system and calculate a probability.

    It seems that “evolution” and “natural selection” have nothing to do with it.

  5. Joe Felsenstein: There are genotypes (say DNA sequences) and we have a biological function that we have defined

    What happens when the specific DNA sequence doesn’t match the same function in a different genotype?

  6. Joe F:

    If we know, quantitatively, how well a given genotype carries out that function, we can associate that number with the genotype.

    Where does that number come from though, and is it a probability? Or are you talking about your own definition of functional information from your 1987 (did I get that right) paper?

  7. Did you really mean to post that link Joe?

    The challenge in determining experimentally the relationship between functional information and activity is the extreme rarity of functional sequences in populations of random sequences

    Seems like Szostak is also a fan of “islands of function.”

  8. As an aside, it’s utterly baffling to me that people still do not understand where the 500 bit number comes from. kairosfocus has written about it repeatedly.

  9. If 500 bits of information reliably indicates design, but 499 bits doesn’t, then it must follow that 1 bit makes all the difference. (This is, roughly, the heap paradox).

  10. Neil, if you can come up with something with 499 bits that is not designed I think it would shut gpuccio up. But I am not going to hold my breath.

  11. Mung:
    As an aside, it’s utterly baffling to me that people still do not understand where the 500 bit number comes from. kairosfocus has written about it repeatedly.

    Where it comes from is straightforward — it is descended from Seth Lloyd’s computation of the number of possible changes of state in the Universe from its beginning. It is used to argue that any random search that simply makes random samples will be unable to find any event whose probability is that small.

    It is used to rule out processes like random mutation because they are unable to find configurations that are that improbable.

    The argument is, however, unable to rule out natural selection as it does not carry out pure random sampling.
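    For anyone who wants the arithmetic, here is a small sketch (taking Lloyd’s 10^120 figure at face value, as the argument does) of why 500 bits puts a target beyond the reach of pure random sampling:

    ```python
    import math

    max_trials = 10.0 ** 120   # Lloyd's bound on elementary operations
    p = 2.0 ** -500            # probability implied by 500 bits of FI

    print(f"2^-500 is about 10^{math.log10(p):.1f}")   # ~10^-150.5

    # Even if every operation in the Universe's history were one blind
    # random sample, the expected number of hits would be negligible:
    print(f"expected hits: {max_trials * p:.1e}")      # ~3.1e-31
    ```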

  12. Mung: Apparently you have to identify a “specific function” to a “specified degree” and then create arbitrary configurations of some system and calculate a probability.

    It seems that “evolution” and “natural selection” have nothing to do with it.

    True, unless the differences in function result in differences in fitness.

  13. Neil,

    If 500 bits of information reliably indicates design, but 499 bits doesn’t…

    That isn’t the IDers’ claim.

  14. Mung: Neil, if you can come up with something with 499 bits that is not designed I think it would shut gpuccio up.

    We need a very clear and precise definition of “designed”.

    Personally, I don’t have any problem with the idea that biological organisms are designed. That’s because I see evolution as a design system (a system of self design). Roughly speaking, populations are engaged in pragmatic trial and error testing to improve their chances of success.

    However, that’s not what most ID proponents mean by “design”.

  15. Mung: Neil, if you can come up with something with 499 bits that is not designed I think it would shut gpuccio up.

    I don’t. Because we already have (in fact it has over 500 bits), and he just invented some completely bullshit reason for why it doesn’t count.

  16. Mung: Seems like Szostak is also a fan of “islands of function.”

    Nobody here has a problem with the concept of islands of functions. It was evolutionists who invented the whole idea to begin with and this is now the 2nd or 3rd time you’re being informed of this.

    The real question isn’t whether some functions sit on “islands” in the fitness landscape, we already know that they do. The questions are:
    How big are those islands?
    Are they connected to other islands by small bridges?
    What are the distances between islands?
    Are fitness valleys between them outright lethal, or just deleterious, and if so how deleterious?
    Is the “landscape” static or does it change? If so, how quickly?

  17. Mung: Apparently you have to identify a “specific function” to a “specified degree” and then create arbitrary configurations of some system and calculate a probability.

    It seems that “evolution” and “natural selection” have nothing to do with it.

    Why would it? The goal is to have some way of calculating how much functional information there is whether the system has been designed or whether it evolved. How the system came to exist is not a factor that affects how much information there is in the system in the method of Hazen et al 2007.

    But the system can change when it exists, and the changes to the system can affect how much information there is. If we’re dealing with an enzyme for example, the enzyme has some function to some degree. It catalyzes chemical reactions, and changes to the enzyme can affect the rate of catalysis. Such changes are captured by the method. A faster enzyme, depending on the types of changes that make it faster, can result in an increase in functional information. There is a relationship between degree of function and the size of the system (like sequence length in the case of a protein like an enzyme). All of that is explained in the paper which can be accessed here:

    Functional Information and RNA Polymers.
    The previous two examples, sequences of letters and Avida machine commands, illustrate the utility of the functional information formalism in characterizing the properties of symbolic systems that can occur in combinatorially large numbers of configurations. Functional information also has applicability to complex biological and biochemical systems; indeed, it was originally developed (15, 34) to analyze aptamers (RNA structures that bind target ligands) and ribozymes (RNA structures that catalyze specific reactions). Thus, the degree of function, Ex, of these linear sequences of RNA letters (A, C, G, and U) can be defined quantitatively as the binding energy to a particular molecule or the catalytic increase in a specific reaction rate. We can easily specify every possible RNA sequence of length n, and we can (at least in principle) synthesize RNA strands and measure the degree of function of any given sequence. The behavior of aptamers and ribozymes thus lends itself to the type of quantitative analysis that we applied previously to letter sequences and Avida populations (34).

    In general, a single RNA nucleotide will display minimal catalytic or binding function, xmin. It follows that a minimum sequence length (nmin nucleotides) will be required to achieve any significant degree of ribozyme or aptamer function, Ex > Emin. Increasing the number of nucleotides (n > nmin) will generally lead to many more functional sequences, some of which will have a greater degree of function. Furthermore, for any given catalytic or binding function there exists an optimal RNA sequence of length nopt that attains the maximum possible degree of function, Emax. That sequence thus possesses the maximum possible functional information:
    (equation here)
    For degrees of function less than the maximum (Ex < Emax), an intermediate functional information obtains [I(Ex) < Imax(Emax)].

    With respect to Islands of Function, they state:

    Islands of Function.
    What is the source of the reproducible discontinuities in Figs. 1 and 2? We suggest that the population of random Avida sequences contains multiple distinct classes of solutions, perhaps with conserved sequences of machine instructions similar to those of words in letter sequences or active RNA motifs (52). Each class has a maximum possible degree of function; therefore, the discontinuities occur at degrees of function below which a major class of sequences is represented and above which it is not represented.

    Fig. 3 demonstrates one possible model for this stepped behavior, based on discrete “islands” of solutions. In Fig. 3, the islands, each of which represents a specific distinct set of solutions to the function [i.e., fitness (z axis)], are conceptually represented as being close to each other in sequence space (projected on the x–y plane). Note, however, that these islands are a visual simplification. For example, in the case of RNA sequences, any given “island” of closely related functional solutions may be more realistically represented by a densely interconnected network that spans all of sequence space (25, 53, 54). Similar consideration of function topologies has been applied to neural network connections (55) and viroid solutions to infecting the same plant host (56). Avida may be similar, because the commands relevant to a given solution do not necessarily need to appear sequentially at a specific location in the string but can occur in different registers and can be spread apart by neutral commands.

    My bold.

  18. J-Mac: What happens when the specific DNA sequence doesn’t match the same function in a different genotype?

    Are you talking about epistasis? Gene-by-gene-interactions?

    That negates a static fitness landscape, so would be a powerful argument against the claim that genotypes easily get trapped on islands of function.

  19. I think we can be more succinct with gpuccio’s idea and incorporate ‘islands of function’ by restating it as:

    ‘When 500 bits of functional information are added to a system in a single step it is overwhelmingly likely that it is due to intelligent intervention’

    But of course the argument is and always has been that the islands of function are accessible to NS and drift. gpuccio’s statement is obvious and irrelevant to where the battle is being fought.

  20. petrushka:
    Why did I see the title as 500 bits of fictional information?

    Because you saw the word “gpuccio” in the title.

  21. Hi Professor Felsenstein,

    I would hazard a guess that gpuccio’s thinking lies along the following lines: even supposing there to be “islands of function” in our universe which make it possible for cells to arise out of simple organic chemicals, and for intelligent beings to evolve from one-celled microbes, given a sufficient period of time, there are still vast numbers of other “possible universes,” whose initial conditions, fundamental constants and/or laws are ever so slightly different from those in our own universe, where abiogenesis never gets off the ground, or where one-celled creatures form but evolution gets stuck in a rut.

    Viewed in this way, if one views these zillions of other possible universes as other ways in which this universe might have been, it makes perfect sense (philosophically speaking, at least) to disregard the existence of a chemical pathway leading to the formation of the first cell in our universe, and to look at the set of all possible universes instead. One could thus reason that even if abiogenesis is chemically inevitable in our universe, that merely invites the question of how the initial setup and laws came to be “just right.”

    Of course, a multiverse theorist would respond that all these other “possible universes” are actually real, and that we just happen to be living in a universe where a pathway to life (and intelligent life) exists. (After all, we could hardly be living in one where there was no pathway leading to life, could we?)

    I’m not too sure how gpuccio would respond to this argument, but one popular response in the ID camp is the “Boltzmann brain” argument: the universe contains a superabundance of functional complexity – much more than is required simply to generate intelligent life. (We live in a world in which there are not just intelligent brains, but intelligent animals [ourselves], and in which there are not only cows, chickens, grass and all the organisms that intelligent beings might need for food, but also innumerable other species – including an “inordinate” number of different kinds of beetles, as J.B.S. Haldane famously observed.) An ID theorist might argue that such a superabundance is exceedingly unlikely on a non-design hypothesis, as evolution would not be expected to yield such a biological cornucopia, but not at all unlikely on an intelligent design hypothesis: it might be a way for the designer to signal His presence, for instance. Thus on Bayesian grounds, so long as the antecedent likelihood of a Designer is not too low, and the support which life in this universe provides for the design hypothesis is high enough, we can rule out the multiverse hypothesis, leaving us with the hypothesis that there is, after all, just one universe which was a put-up job.

    On such a view, configurational complexity then becomes a perfectly legitimate way to compute the inherent probability of any complex structure (as opposed to its nomological probability, given the laws and initial conditions applying in our universe), and if it happens to possess a function whose inherent probability is so low that it would not be expected to arise even once in the history of the universe – i.e. less than 1 in 2^500 or 10^150 – then we can and should infer design.

    I’m just sketching the argument that might be made here. Obviously there are points at which its logic might be criticized, but I think that’s the general line of thought gpuccio is pursuing here. If I am mistaken, then I apologize in advance.

  22. vjtorley,

    I don’t think that the argument involves choosing universes out of sets of possible universes. For example, gpuccio’s ubiquitin example is a protein that arose long after the origin of the universe, and long after the origin of life, as it’s present in all eukaryotes but not in prokaryotes.

    In discussion of the 500-bits rule, we are asking whether it applies in our universe, without regard to where else it might apply.

  23. petrushka: Why did I see the title as 500 bits of fictional information?

    Because you see what you want to see, just like the creationists. 🙂

  24. Neil Rickert: However, that’s not what most ID proponents mean by “design”.

    Sure it is. Organisms are designed with the capacity to evolve. It’s one of their design features.

  25. Rumraket: Because we already have (in fact it has over 500 bits), and he just invented some completely bullshit reason for why it doesn’t count.

    Sorry. What are you referring to here?

  26. Rumraket: The questions are:
    How big are those islands?
    Are they connected to other islands by small bridges?
    What are the distances between islands?
    Are fitness valleys between them outright lethal, or just deleterious, and if so how deleterious?
    Is the “landscape” static or does it change? If so, how quickly?

    1) How big are those islands?
    It doesn’t matter how big or how small the islands are.

    2) Are they connected to other islands by small bridges?
    What does the size of the bridge have to do with it?

    3) What are the distances between islands?
    Again, it doesn’t matter, since you’ve managed to concoct bridges between them out of thin air.

    4) Are fitness valleys between them outright lethal, or just deleterious, and if so how deleterious?
    Irrelevant. We hazz bridgzez.

    5) Is the “landscape” static or does it change? If so, how quickly?
    Do you mean the seascape? Maybe the islands are floating islands that drift around, bumping into each other. No bridges needed.

  27. Dembski (2006?) used Seth Lloyd’s “Computational Capacity of the Universe” to justify increasing the “universal probability bound” from 2^-500 to 2^-400. You’ll see at the end of the abstract of Lloyd’s paper the claim that the Universe cannot have performed more than 10^120 (approx. 2^399) elementary operations on 10^90 bits.

    Prior to 2010, I pointed out the change many times at Uncommon Descent. The unwillingness of people like gpuccio and GEM to assimilate a change that in fact suits their purposes is a sign of what pathetic dogmatists they are. They just keep saying what they’ve said that they’ve always said, no matter how many times they get bitch-slapped by Reality.

  28. Tom English,

    Prior to 2010, I pointed out the change many times at Uncommon Descent. The unwillingness of people like gpuccio and GEM to assimilate a change that in fact suits their purposes is a sign of what pathetic dogmatists they are. They just keep saying what they’ve said that they’ve always said, no matter how many times they get bitch-slapped by Reality.

    You appear to be quibbling here.

  29. Neil, if you can come up with something with 399 bits that is not designed I think it would shut gpuccio up. But I am still not going to hold my breath.

  30. Mung:
    Neil, if you can come up with something with 399 bits that is not designed I think it would shut gpuccio up. But I am still not going to hold my breath.

    How could someone show something was not designed by an unknown Being with unknown skills for unknown reasons no matter how many bits it contained?

  31. newton,

    How could someone show something was not designed by an unknown Being with unknown skills for unknown reasons no matter how many bits it contained?

    By showing a similar structure evolve in a lab by cell division alone.

  32. colewd:
    newton,

    By showing a similar structure evolve in a lab by cell division alone.

    Jeebus can do that in a heartbeat. Quantum interfacing is all it takes

  33. RodW: ‘When 500 bits of functional information are added to a system in a single step it is overwhelmingly likely that it is due to intelligent intervention’

    I don’t even agree to that. I don’t see why that must be so.

  34. The issue I have raised in the OP is whether the 500-Bit Rule is valid. For it to be invalid it only needs there to be some case in which there is an uphill path of function (from which fitness is derived) from outside of the target area to it. If there can be such a case then the 500-Bit Rule is dead.

    (Unless gpuccio is defining the target as having CFI only when it is already known that it can’t be reached by natural evolutionary forces. Which is the third of the three possibilities I mentioned, and the one that leaves CFI as an add-on label of no real use.)

  35. Mung: Sorry. What are you referring to here?

    This. Which I also posted about here.

    A random polypeptide of 139 amino acids could perform the function of assisting in the infectivity of a phage. It could be further improved, to a 17000 times higher level of infectivity, through 20 iterations of selection and mutation.

    That 139 amino acid protein is almost 600 bits of information according to the method of Hazen et al 2007. Gpuccio just invented some bullshit excuse for why it doesn’t count. He basically said: the protein doesn’t work as well as the protein from wild-type phage, so it doesn’t count. Why? Because he just says so.

    So there you have it, Gpuccio just makes up shitty rules to deny that something other than design can create 500 bits of functional information.
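    For what it’s worth, here is a back-of-envelope sketch of where a figure of almost 600 bits can come from; this is my reconstruction, not necessarily the exact calculation, and it takes the most conservative case, in which only the single observed 139-residue sequence is counted as functional:

    ```python
    import math

    # If exactly one sequence out of all 20^139 possible 139-residue
    # proteins counted as functional, the Hazen-style FI would be the
    # maximum possible for that length:
    length, alphabet = 139, 20
    max_fi = length * math.log2(alphabet)
    print(f"{max_fi:.1f} bits")   # ~600.7 bits, the ceiling for 139 residues
    ```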

  36. Tom English:
    Dembski (2006?) used Seth Lloyd’s “Computational Capacity of the Universe” to justify increasing the “universal probability bound” from 2^-500 to 2^-400. You’ll see at the end of the abstract of Lloyd’s paper the claim that the Universe cannot have performed more than 10^120 (approx. 2^399) elementary operations on 10^90 bits.

    Prior to 2010, I pointed out the change many times at Uncommon Descent. The unwillingness of people like gpuccio and GEM to assimilate a change that in fact suits their purposes is a sign of what pathetic dogmatists they are. They just keep saying what they’ve said that they’ve always said, no matter how many times they get bitch-slapped by Reality.

    Surely I can’t be the only one who understands that there’s a difference between doing “elementary operations” and the probability of a particular outcome of those operations?

    If I have one trillion 100-sided dice, and we consider rolling one such die an “elementary operation”, and if we assume I can only roll each die once, then I can perform at most one trillion elementary operations.

    But the probability of any one specific outcome of those one trillion rolls is a different matter entirely: any outcome of those dice rolls will have a probability of 1 in 10^(2000000000000).
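    In code, with the same made-up numbers:

    ```python
    import math

    n_dice = 10 ** 12   # one trillion dice; one roll = one "elementary operation"
    sides = 100

    print(f"operations performed: 10^{math.log10(n_dice):.0f}")   # 10^12

    # Probability of one specific overall outcome: (1/100) per die,
    # multiplied across all one trillion dice.
    log10_p = -n_dice * math.log10(sides)
    print(f"P(specific outcome) = 10^{log10_p:.0f}")   # 10^-2000000000000
    ```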

  37. Rumraket: I don’t even agree to that. I don’t see why that must be so.

    Not necessarily in a single step, but if islands of function were so pervasive, large “jumps” would be required, unless I’m missing something here

  38. dazz: Not necessarily in a single step, but if islands of function were so pervasive, large “jumps” would be required, unless I’m missing something here

    Depends on the average density of such islands. How frequently would de novo genes, like ORFan genes emerging from noncoding DNA, happen to constitute functional DNA sequences? It seems there’s some evidence that this has happened somewhat regularly over deep time. This is bona fide evidence that they just aren’t all that incredibly rare. Can ≥500 such bits arise in a single step? It did in Hayashi et al. (2007).

  39. dazz: Not necessarily in a single step, but if islands of function were so pervasive, large “jumps” would be required, unless I’m missing something here

    Sure, but why would those jumps necessarily require intelligent intervention? It’s not like we don’t know about natural processes that introduce a lot of functional information into the genome at once. Endosymbiosis and lateral gene transfer for example.

  40. Mung: 1) How big are those islands?
    Mung: It doesn’t matter how big or how small the islands are.

    Of course it does. If the islands are so large they take up most of sequence space, that certainly matters for these appeal-to-islands arguments ID proponents make.

    2) Are they connected to other islands by small bridges?
    Mung: What does the size of the bridge have to do with it?

    For the same reason that the size of the islands matters. Even a hypothetical bridge between two islands can be so small and narrow that it becomes difficult to navigate. But that’s an empirical question.

    3) What are the distances between islands?
    Mung: Again, it doesn’t matter, since you’ve managed to concoct bridges between them out of thin air.

    They’re not concocted out of thin air. A phylogenetic analysis of enzyme functions determined that something like 70% of all chemical reactions catalyzed by living organisms (several tens of thousands of chemical reactions) could be catalyzed by a relatively small set of about 280 enzyme superfamilies, and that transitions between types of reactions, even at the highest order of categorization (changes between radically different types of chemistry), have taken place over the history of life.

    Your personal ignorance of work done on this subject is not diagnostic of the state of the field. Just because you personally don’t happen to know what scientists might know about the interconnectedness of functional sequence space doesn’t mean that, when you get informed of this knowledge, it was concocted out of thin air.

    4) Are fitness valleys between them outright lethal, or just deleterious, and if so how deleterious?
    Mung: Irrelevant. We hazz bridgzez.

    Of course it isn’t irrelevant, since even when no connections exist, or the bridges are very small and narrow, it is still possible to traverse a valley under certain circumstances.

    It’s simply all relevant, and IDcreationists have to do the hard work of shedding some light on these questions (instead of just declaring absurd conclusions that the collected data do not support) if, as they seem to believe, the very idea of islands of function constitutes an argument against the possibility or plausibility of evolution.

    5) Is the “landscape” static or does it change? If so, how quickly?
    Mung: Do you mean the seascape? Maybe the islands are floating islands that drift around, bumping into each other. No bridges needed.

    See, now you’re getting it. Until you have good reasons for thinking you know the answers to these questions, merely blathering about “islands of function” can’t constitute an argument against evolution. And environments change, so what was a valley for a particular phenotype could become an island soon. You simply can’t make grand declarations about the absolute structure of functional sequence space from a handful of observations about how conserved some particular protein domain in ATP synthase or Ubiquitin is.

  41. Hi Professor Felsenstein,

    In discussion of the 500-bits rule, we are asking whether it applies in our universe, without regard to where else it might apply.

    If you’re confining yourself to this universe without regard to any ways in which its initial conditions or laws could have been different, then I would agree it’s impossible to defend the 500-bit rule, as it stands.

    You mentioned ubiquitin. An ID proponent who was a “front-loader” (say, someone like Mike Behe) could argue that even though it appeared long after the first living things on Earth, the initial conditions of the cosmos were deliberately set with an eye to guaranteeing its emergence approximately 2.7 billion years ago, about 11 billion years after the Big Bang. I don’t know if that’s gpuccio’s view.

    Hi Tom English,

    Dembski (2006?) used Seth Lloyd’s “Computational Capacity of the Universe” to justify increasing the “universal probability bound” from 2^-500 to 2^-400. You’ll see at the end of the abstract of Lloyd’s paper the claim that the Universe cannot have performed more than 10^120 (approx. 2^399) elementary operations on 10^90 bits.

    Actually, it was in his 2005 paper, Specification: The Pattern That Signifies Intelligence that Dembski first made this proposal (see page 23 and footnote 31). However, in Addendum 1 to his 2005 paper, after pointing out that “instead of a static universal probability bound of 10^-150, we now have a dynamic one of 10^-120/ϕS(T),” Dembski added: “as a rule of thumb, 10^-120 / 10^30 = 10^-150 can still be taken as a reasonable (static) universal probability bound.”

    You are right, however, about the mathematical fudge (2^500 vs. 2^400). The Glossary on Uncommon Descent cites the figure of 10^120, but it also cites Barry Arrington’s rhetorical challenge to Darwinists: “If you came across a table on which was set 500 coins (no tossing involved) and all 500 coins displayed the ‘heads’ side of the coin, would you reject ‘chance’ as a hypothesis to explain this particular configuration of coins on a table?” Notice that the number here is 500, not 400 and certainly not 399! It seems that most ID proponents never noticed this discrepancy.
