Natural selection can put Functional Information into the genome

It is quite common for ID commenters to argue that it is not possible for evolutionary forces such as natural selection to put Functional Information (or Specified Information) into the genome. Whether they know it or not, these commenters are relying on William Dembski’s Law of Conservation of Complex Specified Information, which is supposed to show that Complex Specified Information cannot be put into the genome. Many people have argued that this theorem is incorrect. In my 2007 article I summarized many of these objections and added some of my own.

One of the sections of that article gave a simple computational example of mine showing natural selection putting nearly 2 bits of specified information into the genome, by replacing an equal mixture of A, T, G, and C at one site with 99.9% C.

This post is intended to show a more dramatic example along the same lines.

Suppose that we have a large population of wombats and we are following 100 loci in their genome. We will make the wombats haploid rather than diploid, to make the argument simpler (diploid wombats would give a nearly equivalent result). At each locus there are two possible alleles, which we will call 0 and 1. We start with equal gene frequencies 1/2 and 1/2 of these two alleles at each locus. We also assume no association (no linkage disequilibrium) between alleles at different loci. Initially the haplotypes (haploid genotypes) are all combinations from 00000…000 to 11111…111, all equiprobable.

Let’s assume that the 1 allele is more fit than the 0 allele at each locus. The fitness of 1 is 1.01, and the fitness of 0 is 1. We assume that the fitnesses are multiplicative, so that a haploid genotype with M 1 alleles and 100−M 0 alleles has fitness 1.01 raised to the Mth power. Initially the ratio of 1s to 0s will be nearly 50:50 in all genotypes. The fraction of genotypes that have a ratio of 90:10 or more will be very small, in fact less than 0.0000000000000000154. So very few individuals will have high fitnesses.

What will happen at these multiple loci? Natural selection raises the gene frequency of the 1 allele at each locus. The straightforward equations of theoretical population genetics show that after 214 generations of natural selection, the 1 allele will have gene frequency 0.8937253 at each locus. The fraction of genotypes having 90 or more 1s will then be 0.500711. So the distribution of genotypes has moved far enough toward ones of high fitness that over half of them have 90 or more 1s. If you feel that this is not far enough, consider what happens after 500 generations. The gene frequency at each locus is then 0.99314, and the fraction of the population with 90 or more 1s is then more than 0.999999999.
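
Here is a minimal sketch of the calculation (illustrative Python, not the actual program behind the numbers above; it relies on the fact that with multiplicative fitnesses the loci remain independent, so each locus follows the standard haploid selection recursion and the number of 1 alleles in a genotype is binomially distributed):

```python
# Minimal sketch of the calculation (illustrative, not the original program).
# Each locus evolves independently by the standard haploid selection
# recursion; the count of 1 alleles in a genotype is then binomial.
from math import comb

W1, W0 = 1.01, 1.0          # fitnesses of the 1 and 0 alleles
LOCI = 100

def freq_after(generations, p0=0.5):
    """Frequency of the 1 allele after deterministic selection."""
    p = p0
    for _ in range(generations):
        p = W1 * p / (W1 * p + W0 * (1 - p))   # p' = w1 p / mean fitness
    return p

def frac_90_or_more(p):
    """Fraction of genotypes carrying 90 or more 1 alleles."""
    return sum(comb(LOCI, k) * p**k * (1 - p)**(LOCI - k)
               for k in range(90, LOCI + 1))

print(frac_90_or_more(0.5))              # ~1.53e-17: the initial fraction
print(freq_after(214))                   # ~0.8937253
print(frac_90_or_more(freq_after(214)))  # ~0.500711
print(frac_90_or_more(freq_after(500)))  # > 0.999999999
```

Equivalently, in closed form: the odds p/(1 − p) are multiplied by 1.01 every generation, so starting from 1/2 the frequency after t generations is 1.01^t/(1 + 1.01^t).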

The essence of the notion of Functional Information, or Specified Information, is that it measures how far out on some scale the genotypes have gone. The relevant scale here is fitness. Whether or not my discussion (or Dembski’s) is sound information theory, the key question is whether there is some conservation law which shows that natural selection cannot significantly improve fitness by improving adaptation. My paper argued that there is no such law. This numerical example shows a simple model of natural selection doing exactly what Dembski’s LCCSI law said it cannot do. I should note that Dembski set the threshold for Complex Specified Information far enough out on the fitness scale that we would have needed to use 500 loci in this example. We could do so; I used 100 loci here because the calculations gave less trouble with numerical underflow.
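
To put numbers on that scale: Hazen and colleagues define the Functional Information of a given degree of function as -log2 of the fraction of all configurations achieving at least that degree of function. Applied to the specification “90 or more 1s”, with the fractions computed above (again just an illustrative sketch):

```python
# Hazen et al.'s Functional Information: -log2(fraction of configurations
# meeting or exceeding the chosen degree of function). Fractions from above.
from math import log2

print(-log2(1.53e-17))   # ~55.9 bits: FI of "90 or more 1s" among random genotypes
print(-log2(0.500711))   # ~1.0 bit: how improbable that region is after 214 generations
```

By this yardstick, natural selection has carried over half the population into a region whose Functional Information, measured against the initial equiprobable distribution, is about 56 bits.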

I hope that ID commenters will take examples like this into account and change their tune.

Let me anticipate some objections and quickly answer them:

1. This is an oversimplified model; you are not being realistic. Dembski’s theorems were intended to show that even in simple models, Specified (or Functional) Information could not be put into genomes. It is therefore appropriate to check that in such simplified models, where we can do the calculation. For if natural selection is in trouble in these simple models, it is in trouble more generally.

2. You have not allowed for genetic drift, which would be present in any finite population. For simplicity I left it out and used a completely deterministic model. Adding genetic drift would complicate the presentation enormously, but the result would still be a population consisting entirely of 11111…111 genotypes after only a modest number of additional generations (a simulation sketch with drift included appears after this list).

3. If fitness differences are due to inviability of some genotypes, fitnesses could not exceed 1. Yes, but we could instead give the 0 allele fitness 1/1.01 = 0.9900990099… and the 1 allele fitness 1, and the results would be exactly the same, since all that matters is that the ratio of the fitnesses of 0 and 1 is still 1:1.01.

4. You just followed gene frequencies — what about frequencies of haplotypes? This case was set up with multiplicative fitnesses so that there would never be linkage disequilibrium, so only gene frequencies need to be followed.
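
Here is the sketch promised under point 2: the same selection scheme with genetic drift added, as a haploid Wright-Fisher simulation. This is illustrative only; the population size N is an assumption that the deterministic calculation above did not need, and treating the loci as independent is an approximation once drift is present.

```python
# Objection 2, sketched: haploid Wright-Fisher simulation of the same scheme.
# N = 1000 is an illustrative assumption; with 2Ns = 20 per locus, selection
# dominates drift and each locus is overwhelmingly likely to fix for allele 1.
# Requires Python 3.12+ for random.binomialvariate.
import random

W1, W0 = 1.01, 1.0
N, LOCI = 1000, 100

freqs = [0.5] * LOCI      # one frequency per locus (loci treated as independent)
gen = 0
while any(0.0 < p < 1.0 for p in freqs):
    for i, p in enumerate(freqs):
        if 0.0 < p < 1.0:
            p_sel = W1 * p / (W1 * p + W0 * (1 - p))          # selection
            freqs[i] = random.binomialvariate(n=N, p=p_sel) / N  # drift
    gen += 1
print(gen, "generations until all loci fixed;",
      sum(p == 1.0 for p in freqs), "of 100 fixed for the 1 allele")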

I trust also that people will not raise all sorts of other matters (the origin of life, the bacterial flagellum, the origin of the universe, quantum mechanics, etc.). To do so would be to admit that they have no answer to this example, which shows that natural selection can put functional information into the genome.

254 thoughts on “Natural selection can put Functional Information into the genome”

  1. I hope that ID commenters will take examples like this into account and change their tune.

    Sorry but CSI refers to origins- that is very clear in everything Dembski has written- and that is supported by Meyer.

    Ignoring that isn’t going to make it go away.

  2. Joe G: Sorry but CSI refers to origins- that is very clear in everything Dembski has written- and that is supported by Meyer.

    Ignoring that isn’t going to make it go away.

    Of course Dembski was applying it to evolution after the origin of life. If it applied only to the OOL, it would not help at all in proving that adaptations that arose afterward were really Design; it would be of no use in making a Design Inference for them.

    Would be happy to discuss the after-the-OOL issue of Functional Information with anyone. Not interested in further dispute about this issue.

    So at any rate it sounds as if you do agree that natural selection can bring about adaptation after the OOL?

  3. Wait- Dembski wrote in “No Free Lunch” that existing CSI can give rise to SI- as I have said if living organisms were designed then they were designed to evolve/ evolved by design- also functional information refers to biological function, as in new proteins and especially new protein complexes- useful and functional.

    Can NS bring about adaptation? NS is just a result- if you have differential reproduction due to heritable random variation you have NS.

    With the finches the variation was there- the moths, already there- NS produces a wobbling stability

  4. BTW Joe, Dr Behe puts the limit at two new protein-to-protein binding sites- do you have any examples that deal with NS and protein-to-protein binding sites?

  5. Hello Joe, you say:

    “We start with equal gene frequencies 1/2 and 1/2 of these two alleles at each locus.”

    So we start with a population in a state of equilibrium, 1 bit. Then the fitness shifts, and the population frequency changes. This will cost the population a measure of uncertainty, i.e. Shannon entropy. This population is no longer maximally complex.

    You change the frequency in your example to:

    1 = 1.01
    0 = 1

    Unless I’m not getting this straight, it would seem to follow that the binary population would carry more than 1 bit of information per base(?), unless I’m misunderstanding what’s going on. Rather, if the weight of one element changes in the binary population {1 | 0}, then the frequency of the entire population changes and the bit rate should decrease.

    If by 1 = 1.01, this means +.01

    Such that:
    1 = .51

    H(1) = .51 × (-log2 .51) ≈ .495

    and

    0 = .49

    H(0) = .49 × (-log2 .49) ≈ .504

    Bit rate ≈ .999

    The fitness weight applied to the element {1} decreases the bit rate from 1 to .999 (a quick numeric check appears after this comment). The population is no longer in a state of equilibrium (maximum unpredictability). This population is no longer maximally complex; it is slightly more compressible, or simply describable, and as such the strings will continue swapping complexity for compressibility, to the point where populations will end up exhibiting complexity approaching 0 with compressibility approaching 1, similar to something like crystal formation. This is not a demonstration of CSI, unless I’m misunderstanding what is going on here. Unless NS is selecting at all times from the original population with equiprobable frequency, CSI stalls.

    “The essence of the notion of Functional Information, or Specified Information, is that it measures how far out on some scale the genotypes have gone.”

    CSI is simply the measure of a string output from a population with equiprobable frequencies that is compressible (simply describable) and exhibits function.
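
    (A quick numeric check of the entropy arithmetic above, as a minimal sketch; like the comment, it assumes a two-symbol source with probabilities .51 and .49 and logarithms base 2.)

    ```python
    # Shannon entropy of a two-symbol source with p = .51, as in the comment above.
    from math import log2

    p = 0.51
    h1 = p * -log2(p)            # ~0.495: the ".51 x (-log2 .51)" term
    h0 = (1 - p) * -log2(1 - p)  # ~0.504: the ".49 x (-log2 .49)" term
    print(h1 + h0)               # ~0.9997 bits per symbol, just under 1
    ```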

  6. Prof. Felsenstein: “Whether or not my discussion (or Dembski’s) is sound information theory, the key question is whether there is some conservation law which shows that natural selection cannot significantly improve fitness by improving adaptation.”

    I am not sure what you mean by improving adaptation. The whole point of adaptation is not needing improvements. Finch beaks get bigger, then get smaller again, and then bigger once again. Which is the improvement, the bigger beak or the smaller beak?

    Trait oscillations smack of maintenance routines, rather than home improvement.

  7. junkdnaforlife: CSI is simply the measure of a string output from a population with equiprobable frequencies that is compressible (simply describable) and exhibits function.

    Well, Dembski, at least in Specification: the Pattern that Signifies Intelligence, doesn’t add the rider “exhibits function” (IIRC).

    As you say, however, he does seem to be talking about equiprobable distributions, which is one of the problems. Another of the problems is finding a practical measure of compressibility.

    However, the Hazen paper discussed in this thread would seem to offer a neat alternative that has the bonus of incorporating the concept of function. Would you accept that as a substitute?

  8. (1) You begin with the information (1’s) already in the genome, so NS didn’t “put” any information into the genome. You’re only arguing that it preserves and distributes it. Perhaps you meant “fix” information in the genome.

    (2) 1+1+1+1+1 doesn’t represent combinatorial information (complex specified). It only represents accumulative information. When one takes the entire genome, all 1s are already in the genome. Accumulating them all into a single group doesn’t increase the functional information contained in the genome.

    (3) Your argument assumes that all currently existent 1’s across the genome are compatible, accumulative, and I suppose (though you make no case for it here) combinatorial. You’ve ignored the fact that what might exist across the genome in small groups as 1s might in combination with other 1s in other groups prove deleterious, fatal, or turn both 1s into junk. As the number of 1s increases in any particular group, the chance that other 1s from other groups would prove deleterious, fatal, or junk-ifying to some of the 1s in the more accumulative group grows exponentially, drastically decreasing the chances that any information that is advantageous to some other group will prove advantageous to the more accumulative group.

    This is like arguing that I can take parts that are particularly advantageous to a Humvee and insert them willy-nilly into a Ferrari and expect the Ferrari to run better, and then take parts from a Ford pickup and Chevy van and stick them in the Ferrari and expect it to run even better. The more complex functional machinery is, the fewer and fewer things you can add to it, and the fewer and fewer parts you can modify willy-nilly without turning the whole thing into junk.

    (4) Since all the information was already in the genome to start with (in order for NS to act on it), and since NS merely collects all 1s that are cohabitable into a single group while eliminating (selecting against) 1s that are not cohabitable with the growing accumulation of 1s in our target group, all NS can possibly do is subtract information from the genome by killing off lines of 1s that do not play nice with the larger aggregates of 1s.

  9. This is like arguing that I can take parts that are particularly advantageous to a Humvee and insert them willy-nilly into a Ferrari and expect the Ferrari to run better

    As a general rule we get our genomes from our ancestors, not from our cousins. With rare exceptions.

  10. Thanks to all who gave on-topic responses. I will respond (to the on-topic ones) one at a time. For this comment by “junkdnaforlife” I will divide my response into two replies, as two different issues are raised.

    junkdnaforlife:
    Hello Joe, you say:

    “We start with equal gene frequencies 1/2 and 1/2 of these two alleles at each locus.”

    So we start with a population in a state of equilibrium, 1 bit. Then the fitness shifts, and the population frequency changes. This will cost the population a measure of uncertainty, i.e. Shannon entropy. This population is no longer maximally complex.

    You change the frequency in your example to:

    1 = 1.01
    0 = 1

    The 1.01 and the 1 are not frequencies, they are fitnesses. So we may be off on the wrong foot right away.

    In addition, I am not calculating measures of complexity of the population. You do a certain amount of that, but it is not important to my argument. Rather, the argument is about whether natural selection can result in the population moving into the region of the scale (in this case, the fitness scale) that has high fitness.

    The fact that I happened to start with all gene frequencies 0.5 is not important to my argument. I could as easily have started with gene frequencies of the 1 allele at 0.1, and then we would still see them shift until they reached 0.9 or so, and the effect on the degree of adaptation would be strong.
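
    (A quick illustration of this point, reusing the recursion from the sketch in the post; the 0.1 starting frequency is the one mentioned above.)

    ```python
    # Same haploid selection recursion, started from p0 = 0.1 instead of 0.5.
    # The odds p/(1-p) are multiplied by 1.01 each generation, so the frequency
    # still climbs toward 1; it just takes more generations.
    def freq_after(generations, p0):
        p = p0
        for _ in range(generations):
            p = 1.01 * p / (1.01 * p + (1 - p))
        return p

    print(freq_after(214, 0.1))   # ~0.48: halfway there after the original 214 generations
    print(freq_after(442, 0.1))   # ~0.90: roughly 442 generations to reach 0.9
    ```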

  11. junkdnaforlife:

    [quoting me:]
    “The essence of the notion of Functional Information, or Specified Information, is that it measures how far out on some scale the genotypes have gone.”

    CSI is simply the measure of a string output from a population with equiprobable frequencies that is compressible (simply describable) and exhibits function.

    Dembski actually used compressibility as one possible way of defining a rejection region on a scale. In my 2007 article I set that aside. It has the problem that a lifeless perfect sphere is much more specified than an actual organism. I am using the scale of fitness. Using such a scale (in their case “function”) is also what the Functional Information people did.

  12. Steve Proulx:
    Prof. Felsenstein: “Whether or not my discussion (or Dembski’s) is sound information theory, the key question is whether there is some conservation law which shows that natural selection cannot significantly improve fitness by improving adaptation.”

    I am not sure what you mean by improving adaptation. The whole point of adaptation is not needing improvements. Finch beaks get bigger, then get smaller again, and then bigger once again. Which is the improvement, the bigger beak or the smaller beak?

    Trait oscillations smack of maintenance routines, rather than home improvement.

    (I would say hi, good to hear from you, but I expect you are not this Stephen Proulx).

    Anyway, natural selection can act in all sorts of strange patterns. But Dembski’s conservation law is supposed to apply to all of them. That it does not work is made clear by a tractable counterexample such as the one I used. In that case his LCCSI is supposed to show that we cannot get out into the tail of the fitness distribution by causes other than design. But under natural selection with this fitness pattern the population does go there.

    It is quite common for ID commenters to argue that it is not possible for evolutionary forces such as natural selection to put Functional Information (or Specified Information) into the genome.

    Well. I’m an ID advocate, and I have to agree: if an ID commenter says that, I think they are wrong. On the other hand, I don’t think that is what ID, properly understood, says. At its simplest, ID says that the probability of any mutation producing new information is dependent on the increase in information achieved, and that while small steps are plausible, and observed, very large steps, requiring multiple mutations, are extremely rare or impossible, given the resources available. So, yes, Avida works just fine. It takes nice easy steps. Two bits of information, or 20 bits of information, doesn’t seem too much to me in large bacterial populations. I would, however, declare the inability of natural, mutational processes to add CSI (complex-specified information), that is, 150 (or even 120) bits of information, in a single step.

  14. William J. Murray:
    (1) You begin with the information (1’s) already in the genome, so NS didn’t “put” any information into the genome. You’re only arguing that it preserves and distributes it. Perhaps you meant “fix” information in the genome.

    I would reject the notion that changes in gene frequency are inconsequential. You can say that the information comes from the natural selection, or you can say that the information comes from the mutation process that creates the alleles from other alleles, or if you are Dembski and Marks (in their recent papers on Search for a Search) you can even say the information was out there in the shape of the fitness surface, lying around. But in any case all this is an exercise in semantics.

    The point is that William Dembski has a conservation law (his LCCSI) that is supposed to show that a population cannot end up in the high end of the relevant scale (I am using fitness, which is a natural choice) by mechanisms other than Design. The example shows otherwise. (And in my 2007 paper I pointed to an existing argument by Elsberry and Shallit, and a new one by me, that Dembski’s theorem is not true, and not relevant to the Design Inference argument because it changes the specification in midstream). Thus there is an explanation for why Dembski’s argument does not work in this numerical example.

    Dembski’s criterion is set up so as to make it implausible that a pure mutational process could get you into the region of the scale that defines CSI. If natural selection is around, the population can in fact get there (as my numerical example shows). So even if natural selection isn’t “putting” the information into the genome, its presence has the dramatic effect of invalidating the argument that the presence of CSI proves that natural processes other than Design could not be responsible for the adaptation.

    (2) 1+1+1+1+1 doesn’t represent combinatorial information (complex specified). It only represents accumulative information. When one takes the entire genome, all 1s are already in the genome. Accumulating them all into a single group doesn’t increase the functional information contained in the genome.

    Functional information is defined in terms of the population getting into the extreme region of the scale. Gene frequencies are highly relevant to that.

    (3) Your argument assumes that all currently existent 1’s across the genome are compatible, accumulative, and I suppose (though you make no case for it here) combinatorial. You’ve ignored the fact that what might exist across the genome in small groups as 1s might in combination with other 1s in other groups prove deleterious, fatal, or turn both 1s into junk. As the number of 1s increases in any particular group, the chance that other 1s from other groups would prove deleterious, fatal, or junk-ifying to some of the 1s in the more accumulative group grows exponentially, drastically decreasing the chances that any information that is advantageous to some other group will prove advantageous to the more accumulative group.

    Sure, all sorts of complex interactions can arise. Some of them so complicated that they will frustrate an evolutionary process. (I note, however, that the standard models of quantitative genetics deal mostly with noninteraction like that in my model, and an awful lot of animal and plant breeding uses these noninteractive models for predictions of selection response).

    The point is that William Dembski’s argument is supposed to apply to all of these cases, including the simple one I gave. And I have shown that his argument does not work for that one.

    This is like arguing that I can take parts that are particularly advantageous to a Humvee and insert them willy-nilly into a Ferrari and expect the Ferrari to run better, and then take parts from a Ford pickup and Chevy van and stick them in the Ferrari and expect it to run even better. The more complex functional machinery is, the fewer and fewer things you can add to it, and the fewer and fewer parts you can modify willy-nilly without turning the whole thing into junk.

    Yup, but see above.

    (4) Since all the information was already in the genome to start with (in order for NS to act on it), and since NS merely collects all 1s that are cohabitable into a single group while eliminating (selecting against) 1s that are not cohabitable with the growing accumulation of 1s in our target group, all NS can possibly do is subtract information from the genome by killing off lines of 1s that do not play nice with the larger aggregates of 1s.

    Yes, NS is very dumb, boring, incompetent etc. But (in my example) it does move the population along the scale, into the region of high adaptation which is where Dembski’s theorem says it cannot plausibly be expected to go unless Design is present.

  15. I’m somewhat curious where you have found a biologist who claims that more than a few bits are added at a time. Duplication events present a pathway for accumulating significant new information, but at the time they occur they are not really adding much information.

  16. SCheesman: Well. I’m an ID advocate, and I have to agree: if an ID commenter says that, I think they are wrong. On the other hand, I don’t think that is what ID, properly understood, says. At its simplest, ID says that the probability of any mutation producing new information is dependent on the increase in information achieved, and that while small steps are plausible, and observed, very large steps, requiring multiple mutations, are extremely rare or impossible, given the resources available. So, yes, Avida works just fine. It takes nice easy steps. Two bits of information, or 20 bits of information, doesn’t seem too much to me in large bacterial populations. I would, however, declare the inability of natural, mutational processes to add CSI (complex-specified information), that is, 150 (or even 120) bits of information, in a single step.

    OK, so you agree that William Dembski’s CSI argument is wrong, that his Law of Conservation of Complex Specified Information is wrong? And you instead rely more on Michael Behe’s arguments? Because it is quite clear that my numerical example (well, OK, a version of it with 500 loci instead) violates Dembski’s Law.

    Note also that it does not attain 500 bits of information in one step; it accumulates it by natural selection acting over many generations.

    I would add that ID advocates say all the time that CSI (or Functional Information) cannot arise except by Design. And I have never noticed them being corrected by their fellow ID advocates.
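
    (The arithmetic behind the 500-loci remark, as a one-line check: Dembski’s threshold probability of 10^-150 corresponds to just under 500 bits.)

    ```python
    # Dembski's CSI threshold is probability 1e-150, i.e. about 498.3 bits;
    # a specification over 500 two-allele loci carries 500 bits, which clears it.
    from math import log2
    print(-log2(1e-150))   # ~498.29
    ```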

  17. Joe Felsenstein: OK, so you agree that William Dembski’s CSI argument is wrong, that his Law of Conservation of Complex Specified Information is wrong? And you instead rely more on Michael Behe’s arguments? Because it is quite clear that my numerical example (well, OK, a version of it with 500 loci instead) violates Dembski’s Law. Note also that it does not attain 500 bits of information in one step; it accumulates it by natural selection acting over many generations. I would add that ID advocates say all the time that CSI (or Functional Information) cannot arise except by Design. And I have never noticed them being corrected by their fellow ID advocates.

    Well, I think that if you read William Dembski’s work, the idea of “hitting the bullseye” implicitly contains the “single-step” assumption, as does the law of conservation of specified information. I’m quite ready to be shown it does not; however, I would be surprised to find he believed it in exactly the manner you attribute to him.

    I agree my ideas do closely follow Behe’s. I am also quite willing to entertain the possibility that something with a lot of information might be built step-by-step, so if I am out of step with a lot of other ID commentators, so be it. I do, however, believe that irreducibly complex systems exist (and cannot be built up step-by-step), and one way to measure that property is through the use of the CSI measure.

  18. I would reject the notion that changes in gene frequency are inconsequential.

    I didn’t say it was inconsequential; I said it doesn’t add any information not already present in the genome.

    You can say that the information comes from the natural selection, or you can say that the information comes from the mutation process that creates the alleles from other alleles, or if you are Dembski and Marks (in their recent papers on Search for a Search) you can even say the information was out there in the shape of the fitness surface, lying around. But in any case all this is an exercise in semantics.

    I hardly call being flat wrong about what puts functional information into the genome “semantics”.

    The example shows otherwise.

    Reasserting it doesn’t make it so.

    Functional information is defined in terms of the population getting into the extreme region of the scale. Gene frequencies are highly relevant to that.

    One has nothing to do with the other unless the info is combinatorial into a new function. Accumulating information doesn’t necessarily get you anywhere significant in terms of generating new information.

    The point is that William Dembski’s argument is supposed to apply to all of these cases, including the simple one I gave. And I have shown that his argument does not work for that one.

    No, you haven’t.

    Yes, NS is very dumb, boring, incompetent etc. But (in my example) it does move the population along the scale, into the region of high adaptation which is where Dembski’s theorem says it cannot plausibly be expected to go unless Design is present.

    It doesn’t move it anywhere in terms of inserting additional functional information into the genome, because all of the functional information in your argument was already there to begin with. Accumulating it all into one spot (even if one could without nasty stuff happening) doesn’t change the amount of functional information in the genome one bit. Only if the already-present functional information successfully combines into new functional information (IOW, does something new), and leaves the old information intact and still proliferating through the species, has the functional information been increased. NS doesn’t combine information that way – only mutations of some sort can.

    NS cannot increase functional information in a genome – only mutations can. NS can then work to fix the new functional information into the genome. NS can only remove information from the genome, not add it, which is why, if the job (generation of functional new information) can’t be expected to get done via the infinite-monkey theory (even if not fixed into the population), appealing to NS to help out is absurd. NS can’t act on what doesn’t exist in the first place.

    This is a simple and straightforward logic issue.

  19. SCheesman: Well. I’m an ID advocate, and I have to agree: if an ID commenter says that, I think they are wrong. On the other hand, I don’t think that is what ID, properly understood, says. At its simplest, ID says that the probability of any mutation producing new information is dependent on the increase in information achieved, and that while small steps are plausible, and observed, very large steps, requiring multiple mutations, are extremely rare or impossible, given the resources available. So, yes, Avida works just fine. It takes nice easy steps. Two bits of information, or 20 bits of information, doesn’t seem too much to me in large bacterial populations. I would, however, declare the inability of natural, mutational processes to add CSI (complex-specified information), that is, 150 (or even 120) bits of information, in a single step.

    Welcome to TSZ 🙂

    You may be right (I’m not sure – I guess a duplicated gene might do the trick), so is your position that some features we observe in biological organisms must have been achieved in a single step?

    Because I’d note that AVIDA doesn’t, in fact, take “nice easy steps”. No function can be achieved in a single step from another function, and some of the functions (at least using the default settings) actually seem to require multiple neutral (non-selected) steps, as well as steps that are actually deleterious (reduce fitness).

    What makes you think that some biological features require all the steps to be simultaneous? Why can’t they be achieved by a series of neutral, or even deleterious steps as in AVIDA?

  20. SCheesman: Well, I think that if you read William Dembski’s work, the idea of “hitting the bullseye” implicitly contains the “single-step” assumption, as does the law of conservation of specified information. I’m quite ready to be shown it does not; however, I would be surprised to find he believed it in exactly the manner you attribute to him.

    His theorems are posed in terms of a mapping from one state to another, and he argues that his theorem rules out arriving in the region which is Complex Specified Information (the top 10-to-the-minus-150th of all genotypes) as a result of that transformation. That affects the Design Inference because it is the justification for saying that if we see CSI then we can infer it was design that did this. Although posed in terms of a single step, he certainly wants to use this to rule out gradually getting there by processes such as natural selection. As his model of natural processes is a 1-1 mapping, it can be iterated generation by generation so that the net outcome after (say) 100 generations is itself a 1-1 mapping.

    I agree my ideas do closely follow Behe’s. I am also quite willing to entertain the possibility that something with a lot of information might be built step-by-step, so if I am out of step with a lot of other ID commentators, so be it. I do, however, believe that irreducibly complex systems exist (and cannot be built up step-by-step), and one way to measure that property is through the use of the CSI measure.

    I have given no argument about irreducible complexity. My focus here is on Dembski’s LCCSI and the use of it (endlessly) by ID commenters to argue that if we see (Complex) Specified Information or enough Functional Information, that we can conclude in favor of design. I’m glad to see you agree with me about this.

  21. I would add that ID advocates say all the time that CSI (or Functional Information) cannot arise except by Design.

    When we say that, we mean from scratch: starting with zero SI, blind and undirected processes cannot produce CSI.

  22. Sorry but CSI refers to origins- that is very clear in everything Dembski has written- and that is supported by Meyer.

    Not so, Joe:

    The origin of biological information and the higher taxonomic categories, Stephen C. Meyer, Proceedings of the Biological Society of Washington, 117(2):213-239. 2004

    …One way to estimate the amount of new CSI that appeared with the Cambrian animals is to count the number of new cell types that emerged with them (Valentine 1995:91-93). Studies of modern animals suggest that the sponges that appeared in the late Precambrian, for example, would have required five cell types, whereas the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types. Functionally more complex animals require more cell types to perform their more diverse functions. New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information…

  23. William J. Murray: It doesn’t move it anywhere in terms of inserting additional functional information into the genome, because all of the functional information in your argument was already there to begin with. Accumulating it all into one spot (even if one could without nasty stuff happening) doesn’t change the amount of functional information in the genome one bit. Only if the already-present functional information successfully combines into new functional information (IOW, does something new), and leaves the old information intact and still proliferating through the species, has the functional information been increased. NS doesn’t combine information that way – only mutations of some sort can.

    Could you say exactly what you mean by this, William? Specifically, what you mean by “inserting additional functional information into the genome”. Are you talking about an additional (or new) sequence in a specific genotype, or are you talking about the frequency/probability of an additional (or new) sequence in a population? Because I think this may matter.

    And what about this: “because all of the functional information in your argument was already there to begin with” – what functional information was where, exactly?

    Finally this: “Accumulating it all into one spot…” : what do you mean by “one spot”, specifically?

  24. William J. Murray: I didn’t say it was inconsequential; I said it doesn’t add any information not already present in the genome.

    I hardly call being flat wrong about what puts functional information into the genome “semantics”.

    Reasserting it doesn’t make it so.

    I disagree with you about whether natural selection can put functional information into the genome. Functional information is defined (by Hazen et al.) as a measure of how far into the tail of possible configurations the system is. In the case I gave I showed that in the presence of natural selection functional information increases. In the absence of natural selection, in that numerical example, the distribution of genotypes does not change. I am using Dembski’s SI and Hazen’s FI, not your own definitions.

    [Quoting me:]
    Functional information is defined in terms of the population getting into the extreme region of the scale. Gene frequencies are highly relevant to that.

    One has nothing to do with the other unless the info is combinatorial into a new function. Accumulating information doesn’t necessarily get you anywhere significant in terms of generating new information.

    Neither Dembski’s nor Hazen’s arguments are posed in terms of “new function”. Nor do they deal with “anywhere significant”. Those are constructs of yours.

    [Quoting me:]
    The point is that William Dembski’s argument is supposed to apply to all of these cases, including the simple one I gave. And I have shown that his argument does not work for that one.

    No, you haven’t.

    I have. I have shown where in Dembski’s argument he changes the specification — his Law of Conservation (of CSI) requires that you measure by a different yardstick before and after the natural processes act. Since use of this theorem in the Design Inference requires that we use the same yardstick throughout, this makes his theorem unusable for the Design Inference. In addition I pointed to Elsberry and Shallit (2003), who noted that Dembski uses the process (the mapping which models the natural processes) itself in defining the specification, even though he previously required that the specification be defined independently of the mapping. And finally, I have given numerical examples (one here) that show the population moving farther and farther along the scale which defines CSI or FI. So this is not just a bald assertion.

    It doesn’t move it anywhere in terms of inserting additional functional information into the genome, because all of the functional information in your argument was already there to begin with. Accumulating it all into one spot (even if one could without nasty stuff happening) doesn’t change the amount of functional information in the genome one bit. Only if the already-present functional information successfully combines into new functional information (IOW, does something new), and leaves the old information intact and still proliferating through the species, has the functional information been increased. NS doesn’t combine information that way – only mutations of some sort can.

    I doubt that you will find William Dembski arguing that complex specified information can be created by mutational processes alone!

    NS cannot increase functional information in a genome – only mutations can. NS can then work to fix the new functional information into the genome. NS can only remove information from the genome, not add it, which is why, if the job (generation of functional new information) can’t be expected to get done via the infinite-monkey theory (even if not fixed into the population), appealing to NS to help out is absurd. NS can’t act on what doesn’t exist in the first place.

    See above.

    This is a simple and straightforward logic issue.

    I am glad that there is something that we agree on.

  25. The title gives it away. Also read Meyer’s “Signature in the Cell” and Dembski’s “No Free Lunch”.

  26. Ido:
    Has anybody here read Steve Frank’s paper on the link between natural selection and information? It’s a challenge!

    Thank you for pointing that out.

    Also, there’s my 1978 paper in American Naturalist

  27. Elizabeth: Because I’d note that AVIDA doesn’t, in fact, take “nice easy steps”. No function can be achieved in a single step from another function, and some of the functions (at least using the default settings) actually seem to require multiple neutral (non-selected) steps, as well as steps that are actually deleterious (reduce fitness).
    What makes you think that some biological features require all the steps to be simultaneous? Why can’t they be achieved by a series of neutral, or even deleterious steps as in AVIDA?

    Well, “nice easy” I guess is relative to the number of bits of information changed in each step and the resources available to search the solution space around the current iteration. Avida is quite up to the task.

    The requirement of simultaneity is, I would suggest, a hallmark of irreducible complexity. In biological machines you need not just a single protein, but frequently a dozen or more. It is challenging enough to obtain a single protein that does what you need, but to get an ensemble arranged in just the right fashion, moreover able to construct the machine in the proper order… well that’s a tall order, I say, and if you’re the skeptical sort (I’m more the sceptical sort, being Canadian), that looks like the sort of thing that only intelligent designers can accomplish.

  28. that looks like the sort of thing that only intelligent designers can accomplish.

    I’d be interested in your evidence that an intelligent designer can produce the kind of complex biological structures you refer to.

  29. Joe Felsenstein:
    Elizabeth —

    I am getting emails from WordPress asking me to “moderate” certain comments. I think I should not be the one to do that, unless you hand them over to me. I find that subsequently you have posted the comments, so I don’t need to moderate them.

    Sorry about that, Joe. I’ve just instituted a moderation system, and I didn’t know that happened. I’ll try to stop it.

  30. Joe Felsenstein: Thank you for pointing that out.

    Also, there’s my 1978 paper in American Naturalist

    The “macro-evolution in a model ecosystem” paper? Our library doesn’t subscribe to such ‘ancient’ volumes of amnat. Is there a link to a pdf somewhere?

  31. Ido: macro-evolution in a model ecosystem

    You can read it free online if you register with JSTOR.

    I just did! Very worth while 🙂

  32. Joe G:
    Venter has synthesized a ribosome- NS has never been observed to do such a thing.

    No, Venter didn’t “synthesise a ribosome”. Venter copied the genes to produce ribosomes – genes that had evolved in existing organisms.
    Odd, isn’t it – everything we know for certain to have been designed, was designed under the control of a material, tangible entity operating in the material universe.

  33. SCheesman: Well, “nice easy” I guess is relative to the number of bits of information changed in each step and the resources available to search the solution space around the current iteration. Avida is quite up to the task.

    Could you be more specific? Are you essentially saying that the AVIDA genomes – and genes – are smaller? And what exactly do you mean by “search the solution space around the current iteration”? Each virtual organism in AVIDA has its own genome, and replicates asexually, with mutations (point mutations, insertions and deletions). These mutations affect the efficiency with which the organism copies itself, and the rate at which it acquires the “energy” it needs to perform its functions (including its self-copying function, but also the functions that net it extra “energy”). In other words, the virtual organisms in AVIDA are really quite life-like – they have to exploit their environment for food, which takes various forms and must be “caught” in various ways. Every so often an organism finds itself with a genotype that enables it to exploit a new environmental resource, which improves its capacity to reproduce. Those variants will clearly then become more prevalent. However, the majority of mutations in AVIDA are either deleterious or neutral; only a small proportion turn out to be advantageous. Moreover, some advantageous mutations are only advantageous if they happen to a genotype that contains certain mutations which, on their own, are quite markedly deleterious.

    And so, while I agree that AVIDA is “up to the task”, I don’t see that the task consists of particularly “nice easy” steps. The vast majority of “steps” are either neutral or deleterious, and some beneficial steps are actually dependent on prior deleterious ones.

    The requirement of simultaneity is, I would suggest, a hallmark of irreducible complexity. In biological machines you need not just a single protein, but frequently a dozen or more. It is challenging enough to obtain a single protein that does what you need, but to get an ensemble arranged in just the right fashion, moreover able to construct the machine in the proper order… well that’s a tall order, I say, and if you’re the skeptical sort (I’m more the sceptical sort, being Canadian), that looks like the sort of thing that only intelligent designers can accomplish.

    But what is your evidence that “simultaneity” is necessary? In AVIDA, certain functions (well, all, in fact, but some more than others) required several genetic sequences to be simultaneously present for the function to evolve. But those sequences did not have to appear in a single organism simultaneously de novo. Typically, all but one of the key mutations were present in the parent of the organism that was “born” with the missing piece, even though the other pieces did not, in the absence of the “key” piece, confer any advantage. In other words, every single AVIDA function is Irreducibly Complex, both by Behe’s original definition (remove any part and it doesn’t work) and by his alternate concept of IC “pathways” (evolved by many necessary neutral or deleterious steps unbroken by advantageous steps). A toy sketch of such a valley-crossing path appears after this comment.

    I hope Richard Hoppe might show up in this thread, as he has done a lot of playing around with AVIDA (and was the person who got me interested in it!)

    But I guess the point I’m trying to get across is not: AVIDA works, therefore life evolved, but that the fact that AVIDA works falsifies the idea that Darwinian evolution cannot, in principle, generate Functional Complexity, which has been a strong plank of the ID case – certainly Dembski’s.
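
    (A toy illustration of the valley-crossing point above; this is not AVIDA itself, and the fitness values, population size, and mutation rate below are invented for the sketch. Genotype 11 is the fittest, but both single-mutation intermediates are deleterious, yet the population still crosses the valley.)

    ```python
    # Toy model (not AVIDA): genotype "11" is best, but both one-step
    # intermediates "10" and "01" are deleterious. All parameters are invented.
    import random

    fitness = {"00": 1.0, "10": 0.95, "01": 0.95, "11": 1.5}
    N, MU = 500, 0.005          # population size and per-site mutation rate

    pop = ["00"] * N
    for gen in range(20_000):
        # selection: parents are sampled in proportion to fitness
        pop = random.choices(pop, weights=[fitness[g] for g in pop], k=N)
        # mutation: each site flips with probability MU
        pop = ["".join(c if random.random() > MU else "10"[int(c)] for c in g)
               for g in pop]
        if pop.count("11") > N // 2:
            print("genotype 11 reached majority at generation", gen)
            break
    else:
        print("genotype 11 did not reach majority in 20,000 generations")
    ```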

  34. Elizabeth:

    But I guess the point I’m trying to get across is not: AVIDA works, therefore life evolved, but that the fact that AVIDA works falsifies the idea that Darwinian evolution cannot, in principle, generate Functional Complexity, which has been a strong plank of the ID case – certainly Dembski’s.

    And the example I posted here shows that quite simple population genetics models also falsify the idea that evolution with natural selection cannot, in principle, generate Functional Information.

  35. Using the bacteria E. coli, Church and Research Fellow Michael Jewett extracted the bacteria’s natural ribosomes, broke them down into their constituent parts, removed the key ribosomal RNA and then synthesized the ribosomal RNA anew from molecules.

    He copied and pasted. I can write Shakespeare that way. It doesn’t make me an author or even a writer.

    I asked for evidence that a designer can design. That would mean making a gene from scratch. And since, to my knowledge, human protein designers use evolution in their work, I’d say ID has nothing to offer unless you can show that de novo design is possible without using some form of evolution. Otherwise the designer is just doing what evolution can demonstrably do.

  36. damitall: Citation?

    You made the claim so it is up to you to support it- There isn’t anything on the interwebs about what you said…

  37. petrushka: He copied and pasted. I can write Shakespeare that way. It doesn’t make me an author or even a writer.

    I asked for evidence that a designer can design. That would mean making a gene from scratch. And since, to my knowledge, human protein designers use evolution in their work, I’d say ID has nothing to offer unless you can show that de novo design is possible without using some form of evolution. Otherwise the designer is just doing what evolution can demonstrably do.

    How about NS making a gene from scratch?

    And what has evolution been demonstrated to do?

  38. that looks like the sort of thing that only intelligent designers can accomplish.

    We’ve heard from JoeG, but I’d like to hear from SCheesman or any other ID advocate or evolution skeptic.

    I’d be interested in your evidence that an intelligent designer can produce the kind of complex biological structures you refer to.

    I’d like to know your source for the claim that intelligent designers can design complex structures in living things without using some form of evolution.

    It seems self-evident that if intelligent design appears to be the more likely explanation, you must have some evidence.

  39. Elizabeth — I found a typo in the post. In the last sentence of paragraph 6, the second “then” should be “than”. Hardly noticeable but as I can’t edit the post and you can, I need to request the fix. Thanks.

  40. Joe G: You made the claim so it is up to you to support it- There isn’t anything on the interwebs about what you said…

    In point of fact, it was YOU who made the claim, without evidence, that Venter synthesised a ribosome.

    OTOH I know a little about how these things are done. Ribosomes contain both RNA and protein, as you know. And whilst one can synthesise RNA in vitro from the necessary individual nucleotides, when it comes to proteins, one is pretty limited as to the size of polypeptide that can be synthesised in vitro from the individual amino acids ( <100 aa from commercial peptide synthesising services, IIRC, and at least one of the ribosome proteins has around 400 aa in it)
    So if you want ribosomes, you EITHER extract them from living cells (which can be done) , OR nick the necessary genes for a ribosome, muck about with them if you need to, then bung them in an organism where they will be expressed and the ribosome assembled.

    Here’s a quote from Venter:
    “We talked about the ribosome; we tried to make synthetic ribosomes, starting with the genetic code and building them — the ribosome is such an incredibly beautiful complex entity, you can make synthetic ribosomes, but they don’t function totally yet. Nobody knows how to get ones that can actually do protein synthesis. That is not building life from scratch but relying on billions of years of evolution.”

    He tried, but failed.

    And that quote is reported in no less than Evolution News
    http://www.evolutionnews.org/2008/02/leading_biologists_marvel_at_t004789.html

  41. My point would be that even if we have the technology to construct a ribosome, we do not have the ability to design one from scratch.

    The Intelligent Designer has a tough row to hoe. Without being alive and without ever seeing a living organism, he must conceive of the possibility, navigate through all that impossibly sparse sequence space to those 500 bit islands, and build the stuff out of quarks and such.

    And this is considered more probable than chemical evolution. All I want to know is why design proponents assert that they know it can be done. I’d like to see an example of where it has been done.

  42. Oh, and with respect to George Church, it seems that he didn’t synthesise a complete ribosome, but the RNA portions of ribosomes. Good work, but not what I would call a “synthetic ribosome”.

  43. Anyone know if Prof Church has published his work on “synthetic ribosomes”? I can’t find it anywhere
