445 thoughts on “Evolution Visualized”

  1. JoeCoder:
    Mendel’s Accountant is wrong because it disproves evolution.
    . . . .

    No, it’s apparently wrong because it doesn’t reflect what we actually observe.

    The bigger problem is that the combination of lack of detailed documentation and incredibly difficult to understand code means that no one knows what the program is actually doing. Contrast this to Schneider’s ev that is so well documented that I was able to write an alternative implementation and compare results with the original.

    Has anyone been able to do that with Mendel’s Accountant? Do you even have something like a pseudo-code description of the algorithm that would allow an alternative implementation to be written?

    If not, it looks a lot like smoke and mirrors.

  2. JoeCoder,

    Nice to see you. Sorry I haven’t participated much, but I’ve not used nor studied Mendel’s Accountant. I did help in building a self-installing version of it for WindowsXP, but that really was more of a sys-admin type contribution.

    One thing to maybe settle this is to ask, “If you don’t like Mendel’s Accountant, what population genetic software would you prefer to use as a pedagogical and research tool?” Avida, tierra, and the incarnations of weasel?

    The best tool I heard of was by Jody Hey. Walter ReMine told me that when he turned off the re-normalization parameter, Jody Hey’s simulation worked just like Mendel and populations went into mutational meltdown… What that tells me is something I’ve said a lot here at TSZ, namely, “real populations renormalize their selection coefficients (to whatever extent those coefficients ever existed in the first place)”.

    There was an evolutionary biologist (named Eric) who was laughed off the stage at the International Conference on Creation, 2013. He said he didn’t want to argue science, and everyone laughed at him. Between lectures, I was having lunch with him and Rob Carter, and Eric went on and on about how bad Mendel was. So I just asked him point blank, “so what population genetic software would you recommend instead to model accepted population genetics?” He was dumbfounded and speechless and couldn’t answer. The stunned look on his face and speechless open mouth said it all. For all the evolutionary trumpeting of how good their theory is, Dawkins’ WEASEL remains the gold standard.

    So, if not Mendel, what do Mendel’s critics suggest we use? How about travelling salesman, or Cordova’s remarkable genetic algorithm described here:
    http://www.uncommondescent.com/intelligent-design/dave-thomas-says-cordova%E2%80%99s-algorithm-is-remarkable/

  3. JoeCoder:
    I don’t know why you think it’s “practically impossible to obtain” the number of nucleotides in a gene that are sensitive to mutation?

    Because of how many possible mutations there are.

    I wasn’t looking for a sort of “rough average” effect of substitutions on the amount of functional information, I was looking for a way to calculate a specific amount in some key gene or a larger piece of functional DNA that might contain more than just exons, for example. Which is why I suggested the Lac-Operon.

    You wrote:

    In the Lac-operon, count the number of nucleotides that reduce or disable its function if they are substituted. If you want the information count in bits, multiply this number by two, since each nucleotide is two bits. If one substitution leads to a reduction in function, two bits of information are destroyed. If that mutation later reverts, two bits of information are created. If a mutation destroys the whole thing, all those bits of information are destroyed.

    You can’t just say that a nucleotide results in reduced function if it’s substituted, since there are three possible substitutions. So you have to test all three, obviously. Even if you have a good estimate of the average effect of substitution in exons, there are more things than exons. So now we have to test all three substitutions at each nucleotide at every locus in the Lac-operon to find their phenotypic effects. Otherwise, if we just use these ballpark measures of the average effect of substitution, we are left with the general idea that all functional stretches of DNA of equal length contain the same amount of information.
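    To make concrete how much testing that implies, here is a sketch of the exhaustive scan; the `assay` function is a hypothetical stand-in for the wet-lab experiments that would actually be required.

```python
BASES = "ACGT"

def sensitive_sites(seq, assay):
    """Try all three alternative bases at every position; count the sites
    where at least one substitution reduces function, and the total number
    of function-reducing substitutions. `assay` is a hypothetical stand-in
    for the experiment that scores the sequence's function."""
    baseline = assay(seq)
    sensitive = 0
    bad_subs = 0
    for i, original in enumerate(seq):
        hit = False
        for alt in BASES:
            if alt == original:
                continue
            mutant = seq[:i] + alt + seq[i + 1:]
            if assay(mutant) < baseline:
                bad_subs += 1
                hit = True
        if hit:
            sensitive += 1
    return sensitive, bad_subs

# Toy assay: pretend only an intact "ATG" start matters.
toy = lambda s: 1.0 if s.startswith("ATG") else 0.0
print(sensitive_sites("ATGACGT", toy))  # (3, 9): only the first three positions are sensitive
```

    Even this seven-nucleotide toy means 21 assays; a real operon means thousands.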

    You make some suggestions below that we can just sort of ballpark it with some case-studies on the average effect of mutation. But I think I’ve discovered something silly about the whole thing. Let me elaborate.

    In fly exons: “the average proportion of deleterious amino acid polymorphisms in samples is ≈70%”.

    In human exons: “19% of [amino acid altering] mutations are effectively neutral.”

    Humans again: “at most 30% of non-synonymous mutations are neutral”. In the bacterial beta lactamase gene: “about one in four random single-residue changes are functionally neutral”.

    Yes, this doesn’t seem to get us much closer to what I wanted to calculate. There are many other things than exons. And an amino acid polymorphism is not a nucleotide substitution. The latter may lead to the former.

    These references all talk about deleterious vs. beneficial, rather than reduced vs. increased function. I’m not sure whether they are relevant to what you wrote to begin with; you spoke about function, not about fitness. But okay, we can go with fitness as a substitute.

    Ribosomal proteins in Salmonella: “most mutations (120 out of 126) are weakly deleterious and the remaining ones are potentially neutral.” Substitution mutations are much more common than all the other kinds

    No, actually, gene duplications are the most frequent type of all mutations (by something like up to an order of magnitude, IIRC).

    But okay, if we use the method you suggest, I think it’s obvious that your method for calculating functional information leads to absurd conclusions on its face. First of all, since there are three possible substitutions for each nucleotide, and if 70% of them on average reduce function in some way (are deleterious), the total amount of information in any given average protein-coding gene would come out as a negative value in bits.

    Isn’t that absurd? A negative value. Whether the gene in question was created or it evolved, its presence has reduced the amount of functional information in the genome, according to the method you detailed.

    Think about it.

    “If one substitution leads to a reduction in function, two bits of information are destroyed.”

    But most substitutions are deleterious, remember? Something like 70-ish % of them. So a 150 amino acid gene would require 450 nucleotides to encode. 70% of substitutions (of which there are 3 per nucleotide) are deleterious (so their bit-value is negative: they destroy two bits).

    0.7 × 450 × 3 × −2 = −1890 bits

    Your average 150 amino acid gene contains minus one thousand eight hundred and ninety bits?
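    The arithmetic, spelled out; the 70% figure is the rough estimate from the papers quoted above, and the −2 bits per deleterious substitution is the method as quoted:

```python
nucleotides = 450            # 150 amino acids x 3 nucleotides each
subs_per_site = 3            # three possible substitutions per nucleotide
deleterious_fraction = 0.7   # rough average from the quoted studies
bits_per_deleterious = -2    # "two bits of information are destroyed"

total_bits = deleterious_fraction * nucleotides * subs_per_site * bits_per_deleterious
print(total_bits)  # -1890.0
```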

    The neutral ones, how do they count in terms of bits? How about if they are deleted? Why do we limit ourselves to just substitutions? That also implies nucleotides that can be neutrally substituted don’t have any information content. Yet their presence might be critical, since a deletion could be deleterious, and an insertion could be beneficial.

    “If that mutation later reverts, two bits of information are created.”

    The number of beneficial substitutions can’t make up for the number of deleterious ones. Your measure would basically always give negative values. If we invert the signs, we also run into the absurd result that deleterious mutations create information and beneficial ones delete it.

    I think your method is, well I think we need to come up with a better one 😛

  4. JoeCoder: Mendel’s Accountant is wrong because it disproves evolution. But we KNOW evolution is true, so Mendel’s Accountant has to be wrong! I’m so glad that’s finally settled 😛

    Projection is a wonderful defense mechanism, isn’t it?

    stcordova: He was dumbfounded and speechless and couldn’t answer. The stunned look on his face and speechless open mouth said it all.

    Cool story bro.

    stcordova: Between lectures, I was having lunch with him and Rob Carter, and Eric went on and on about how bad Mendel was.

    Was he right?

    I mean, who cares if there’s a good alternative if Mendel is known to be broken? Wouldn’t it be rather idiotic to draw conclusions from a program that was known to be flawed?

  6. Patrick, what observation contradicts the Mendel results?

    Mendel is complicated because, unlike Ev, it simulates dozens of parameters and attempts to be as accurate a model of evolution as possible. If Mendel were made to be as simple as Ev it would no longer be a realistic simulation.

    The code is archaic, but in my discussion with Zachriel about selection.c, I was able to quickly figure out what was going on by reading the comments that document every step of the algorithm. If you look in mendel.c on line 202 you can see the loop it uses for each generation, and the comments document what’s happening at every step.

    Since you have experience with Ev: What happens in Ev if you have 20 mutations per generation? If you have 2?

  7. This is Jody Hey’s website.
    https://bio.cst.temple.edu/~hey/

    He supposedly has something that works like Mendel’s Accountant, according to one of the Mendel architects, Walter ReMine. However, to get Hey’s software to mimic Mendel, the renormalization flag has to be set to off; the default is on.

    I don’t delve much into pop gen since there are problems like renormalization and variable S-coefficients that invalidate many models except for those special cases where the S-coefficient is relatively constant and tracked.

  8. Patrick: Do you even have something like a pseudo-code description of the algorithm that would allow an alternative implementation to be written?

    You need a rigorous description of the scientific models, not a different kind of code. Then the issue is whether the code implements the models correctly.

    Mendel’s Accountant is not even wrong. The notion that one should infer the model from the code is abominable. You should not give so much as the time of day to anyone who says, “What the program does is to divide fitness by a random number uniformly distributed on [ε, 1], so let’s figure out the biological relevance of it.” That’s utterly backwards.

  9. rumraket, your gripe is not with me but with the entire field of population genetics. I’ve read dozens of papers that calculate functional nucleotides the same way I’ve shared with you, and I’ve never seen anyone do it differently. I have no desire to continue defending what’s standard methodology in the field.

    The number of bits is the number of SITES subject to deleterious mutations. Not the number of mutations, so you can’t get a negative number. This is the same way almost everyone calculates the functional genome size. The whole criticism of ENCODE was that they did not use this definition of function.
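    To illustrate the difference: counting sites (not mutations) can only give zero or more bits. A toy sketch, with made-up per-site flags:

```python
# Hypothetical 6-nucleotide stretch: True where at least one of the three
# possible substitutions at that site is deleterious. These flags are
# illustration only, not data.
site_has_deleterious_sub = [True, True, False, True, False, True]

functional_sites = sum(site_has_deleterious_sub)  # 4 sites
bits = 2 * functional_sites                       # two bits per site
print(bits)  # 8 -- a count of sites can never be negative
```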

    gene duplications are the most frequent type of all mutations, (by something like up to an order of magnitude

    Larry Moran says “Substitutions are far more numerous than most insertions and deletions.” Most models of population genetics don’t even take the other kinds of mutations into account.

  10. JoeCoder: Most of [the function] seems to have evolved from and since sometime around the origin of multicellularity.

    Let’s do a crude summation of how much function could be obtained in 700m years, given our numbers so far. Using my third run of Mendel’s accountant, that gives us 11.7 beneficial mutations every 1000 generations.

    Your Mendel’s accountant runs aren’t using the definition of functional information you supplied, I don’t see how they are relevant at all to what we were discussing.

    If we grant 700 million generations in 700 million years, that comes to 8.2 million nucleotides of functional genome. That’s only 0.27% of the genome!
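    Checking that arithmetic; the ~3 billion nucleotide genome size is my assumption, and the 11.7 figure is the Mendel’s Accountant number quoted above:

```python
beneficial_per_1000_gen = 11.7      # JoeCoder's Mendel's Accountant figure
generations = 700_000_000           # 700m generations in 700m years
genome_size = 3.0e9                 # assumed human genome size, ~3 billion nt

functional_nt = round(beneficial_per_1000_gen * generations / 1000)
percent = round(100 * functional_nt / genome_size, 3)
print(functional_nt)  # 8190000 nucleotides, i.e. ~8.2 million
print(percent)        # 0.273, i.e. ~0.27% of the genome
```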

    Why do you think the entire genome consists of a gradual build-up of beneficial mutations (apparently 11 per 1000 generations), rather than huge excesses in the amount of neutral DNA proliferating over time? After all, that is the evolutionary explanation for genome size: expansions in the number of transposons, repetitive elements, pseudogenes, introns and so on.

    I agree, it would be fully absurd to think the human genome evolved by a gradual increase, nucleotide by nucleotide, and all of them had to be beneficial mutations, 3.2 billion of them. Nobody thinks this, well except creationist straw-men who find it useful to argue the entire genome must be functional for this straw-man to work.

  11. JoeCoder:
    Patrick, what observation contradicts the Mendel results?

    The genetic meltdown it predicts is not observed.

    Mendel is complicated because, unlike Ev, it simulates dozens of parameters and attempts to be as accurate a model of evolution as possible. If Mendel were made to be as simple as Ev it would no longer be a realistic simulation.

    That doesn’t make up for the fact that the algorithm is not clearly documented.

    The code is archaic, but in my discussion with Zachriel about selection.c, I was able to quickly figure out what was going on by reading the comments that document every step of the algorithm. If you look in mendel.c on line 202 you can see the loop it uses for each generation, and the comments document what’s happening at every step.

    Do you understand it well enough to provide a detailed description of the algorithm? That would both allow for alternative implementations and for the biologists here to evaluate how well it models the real world.

    Since you have experience with Ev: What happens in Ev if you have 20 mutations per generation? If you have 2?

    The code is available on Github if you’re curious.

  12. Tom English wrote: “The notion that one should infer the model from the code is abominable.”

    Tom, The original Mendel paper describes probability selection mode:

    In probability selection, the likelihood of reproductive success of an individual is proportional to its fitness

    So does the manual:

    wherein the probability of reproduction is proportional to an individual’s fitness ranking within the population

    And lo and behold, multiplying the fitness by a random value between 0 and 1 makes the probability of reproduction proportional to an individual’s fitness ranking within the population. That’s what I reproduced with my JS code.

    Tom, have you read the paper that describes Mendel’s algorithm? Have you read the manual?

  13. Tom English:

    Do you even have something like a pseudo-code description of the algorithm that would allow an alternative implementation to be written?

    You need a rigorous description of the scientific models, not a different kind of code. Then the issue is whether the code implements the models correctly.

    I agree. I would hope that a description of the design of the program would include those, but you’re correct that those descriptions are essential.

    Mendel’s Accountant is not even wrong. The notion that one should infer the model from the code is abominable. You should not give so much as the time of day to anyone who says, “What the program does is to divide fitness by a random number uniformly distributed on [ε, 1], so let’s figure out the biological relevance of it.” That’s utterly backwards.

    Indeed. That’s why I’d rather have a top-down detailed description instead of spelunking in ugly code.

  14. JoeCoder:
    rumraket, your gripe is not with me but with the entire field of population genetics.

    That’s just demonstrably false.

    I’ve read dozens of papers that calculate functional nucleotides the same way I’ve shared with you

    Bring them.

    The number of bits is the number of SITES subject to deleterious mutations. Not the number of mutations, so you can’t get a negative number.

    That wouldn’t make a difference. More sites are subject to deleterious mutations than are subject to beneficial ones. The numbers you get would just have fewer digits, for fuck’s sake, when the proportions are the same.

    Think man, think!

    This is the same way almost everyone calculates the functional genome size.

    Prove it.

    The whole criticism of ENCODE was because they did not use this definition of function.

    No, it really wasn’t. The criticism of ENCODE was that the definition of function they used was transcription maps, rather than sequence conservation from comparative genomics.

    Now you’re just making shit up.

    Larry Moran says “Substitutions are far more numerous than most insertions and deletions.” Most models of population genetics don’t even take the other kinds of mutations into account.

    Thank you for that complete and utter irrelevancy.

    Your method gives negative numbers. Try it out yourself. The human genome carries many more deleterious than beneficial mutations. That means, according to you, the human genome contains a negative amount of functional information.

    Dude…

  15. Seriously JoeCoder, this is a direct quote from you:

    In the Lac-operon, count the number of nucleotides that reduce or disable its function if they are substituted. If you want the information count in bits, multiply this number by two, since each nucleotide is two bits. If one substitution leads to a reduction in function, two bits of information are destroyed. If that mutation later reverts, two bits of information are created. If a mutation destroys the whole thing, all those bits of information are destroyed.

    There’s no way to use this method and not get negative numbers, whether you count the number of known deleterious and beneficial mutations, or whether you count the total number of nucleotides and generalize the proportion of them that can be deleteriously or beneficially substituted from rough estimates. Both of those methods would give negative numbers. This has nothing to do with me having an issue with population genetics, it’s you giving me a crappy method to calculate functional information. I submit that your method is used in no paper anywhere. Prove me wrong.

    Rumraket, the number of functional nucleotides is the number of nucleotides subject to deleterious mutation. It’s as simple as that; that is an extremely widely used definition of function, and that’s exactly what’s being plugged into Mendel.

    Half of what you write is misunderstanding what I’ve said. It’s as if I told you to “go to the store for some bread, and if they have eggs get a dozen.” You return with a dozen loaves of bread (because they had eggs) and call me an idiot for wanting that much.

    The other half of what you write is disputing methodology widely used in population genetics that nobody else disputes. I have no desire to keep defending widely accepted points (like most mutations being substitutions) when you admit no error and then make me defend another batch of widely accepted points.

    So I’m sorry, but I have no desire to continue this discussion with you.

  17. I might point out that Joe F has said population genetics does not try to model reality.

    MA claims to be biologically realistic. You have said it disproves evolution.

    This is simply bullshit. The claim of biological realism is obliterated by the behavior of actual populations.

  18. Patrick wrote:

    The genetic meltdown it predicts is not observed.

    Yes, because all of you repeating that point makes it right, no matter how many times I address it 😛 We see lots of species going extinct. We don’t have a good way of knowing how much of that is contributed to by deleterious load.

    Do you understand it well enough to provide a detailed description of the algorithm?

    If you want a very simple overview, see algorithm 1 in the original Mendel paper.

    If you want more detail, the manual describes how each of the parameters are applied during those steps.

    If you still want more detail, the main generation loop and the mendel-specific functions it calls are only a few thousand lines, including copious comments. mendel.c has the main loop. It calls relevant functions in init.c, offspring.c and selection.c

    But apparently Mendel can’t be trusted because none of these meet the exact granularity of detail that they should.

  19. petrushka wrote:

    Joe F has said population genetics does not try to model reality. MA claims to be biologically realistic

    It turns out we can’t trust Mendel because it tries to be too realistic! 😛

  20. Rumraket: I submit that your method is used in no paper anywhere. Prove me wrong.

    You… you… MEANIE!

    I call my ass “information,” and you really ought to be interested in my definition of the term — because it’s mine.

  21. You can’t trust MA because it is demonstrably wrong about what to expect in biology.

    No, I’m right!

    See, that’s how you win a debate: by repeating your stance without addressing what’s already been said. 😛

    We see lots of species going extinct. We don’t have a good way of knowing how much of that is contributed to by deleterious load.

  22. JoeCoder:

    Tom, The original Mendel paper describes probability selection mode:

    In probability selection, the likelihood of reproductive success of an individual is proportional to its fitness

    So does the manual:

    wherein the probability of reproduction is proportional to an individual’s fitness ranking within the population

    Those statements are contradictory. “Fitness” is not the same as “fitness ranking within the population”.

  23. JoeCoder:
    Patrick wrote:

    The genetic meltdown it predicts is not observed.

    Yes, because all of you repeating that point makes it right, no matter how many times I address it 😛 We see lots of species going extinct. We don’t have a good way of knowing how much of that is contributed to by deleterious load.

    So it could be none. You have no evidence to support the claim that MA models anything in reality.

    Do you understand it well enough to provide a detailed description of the algorithm?

    If you want a very simple overview, see algorithm 1 in the original Mendel paper.

    If you want more detail, the manual describes how each of the parameters are applied during those steps.

    If you still want more detail, the main generation loop and the functions it calls are only a few thousand lines, including copious comments. mendel.c has the main loop. It calls relevant functions in init.c, offspring.c and selection.c

    As Tom English noted, a full description of the model and an explanation of the implementation is required for a proper analysis.

    But apparently Mendel can’t be trusted because none of these meet the exact granularity of detail that they should.

    It can’t be trusted because the model it is implementing is undocumented and the implementation has some very questionable coding choices.

    It is also the product of creationists. Historically not the kind of people one trusts for scientific accuracy.

  24. keiths: When unrestricted probability selection is used, the members of the population have their fitness multiplied by a random number. They are then sorted (ranked) according to that result, and then those with the lowest values are truncated.
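    That description can be sketched in a few lines; this follows keiths’ account above, not Mendel’s actual source, and the function name is mine:

```python
import random

def probability_selection(fitnesses, survival_fraction=0.5):
    """Multiply each fitness by a uniform random number, rank the
    population by that product, and truncate the lowest-scoring ones."""
    scored = sorted(((f * random.random(), f) for f in fitnesses), reverse=True)
    keep = int(len(fitnesses) * survival_fraction)
    return [f for _, f in scored[:keep]]  # survivors' original fitnesses

random.seed(1)  # seeded only so the run is repeatable
print(probability_selection([3, 9000, 2, 1, 8000], 0.4))  # [9000, 8000] with this seed
```

    Whether fitness times a uniform random variable actually makes reproduction probability proportional to rank is exactly the point under dispute; the sketch just makes the mechanism explicit.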

    keiths, did you pick up my dozen eggs like I asked? Or will we be having only large piles of bread for supper?

  25. So it could be none. You have no evidence to support the claim that MA models anything in reality.

    One of the main utilities of a model is to figure out what will happen in cases we don’t have the ability to observe. Theoretically, what would this evidence look like?

    So here’s our complete list of reasons why Mendel is an awful program that can’t be trusted:

    1. It was written by creationists–Ew!
    2. It uses programming practices from 1994.
    3. None of us have read the documentation but we know it’s not sufficient.
    4. Mendel predicts species go extinct from deleterious load, and we haven’t measured whether this actually happens.
    5. It’s not accurate enough.
    6. It’s too accurate!

  26. JoeCoder: One of the main utilities of a model is to figure out what will happen in cases we don’t have the ability to observe. Theoretically, what would this evidence look like?

    What is it we can’t observe?

    Petrushka, the part that hasn’t been observed is how much deleterious load contributes to extinctions.

    I brought up the previous example of neanderthals. Neanderthals had a higher deleterious load than humans, and neanderthals are extinct. Even if we figure out exactly how they died (e.g. disease/war/starvation), how do you tell if they are extinct due to high deleterious load?

  28. JoeCoder: Rumraket, the number of bits is the number of nucleotides subject to deleterious mutation. It’s as simple as that

    Okay, but that’s not what you said. You’re saying something else now.

    JoeCoder: that is an extremely widely used definition of function

    It’s a method used to determine how much of the genome is functional, yes. In so far as a mutation in some place is deleterious, that’s a good candidate for a functional region.

    It’s NOT a method for estimating the amount of functional information in an arbitrary genetic sequence. It’s not what you gave to begin with and it’s not what I asked for.

    You want more proof than the quote of you I already gave? Here’s more:
    First I write this to you:

    What I mean is, there’s a genetic sequence, for example the Lac-operon. What’s the information density of the Lac-operon? Calculate it for me and show me your work, then we can talk. Until someone does that, information blather will not impress me and in fact I think it’s completely irrelevant.

    JoeCoder: “(1.) Measure how much unique information is in the genomes of various organisms.
    (2.) Use observation and population genetics to estimate the rate at which evolution can create such information. Whether de novo or by modifying existing sequences.
    (3.) Compare the rates of #1 and #2 along with proposed divergence dates to determine if evolution is an adequate explanation of any genomic feature.”

    I think something like this has already been done to death, using simple informational measures such as bit-size of the genome. Every time this has been done it turns out the nucleotide and genome-size differences are well within plausible evolutionary rates.

    You respond to that last part of mine:

    Rumraket: “Every time this has been done it turns out the nucleotide and genome-size differences are well within plausible evolutionary rates.”

    Certainly. But evolutionarily, raw bits are very easy to come by and they’re not what I’m interested in. I think there are multiple ways to define information, but for our purposes a simple definition will suffice: nucleotides that must have a specific letter, or else function is degraded. Even if only slightly.

    Okay, I understand that you’re not interested in raw bits as you say. So I ask you to find a way to calculate functional information content rather than just raw genome-size bits.

    So you come up with this:

    In the Lac-operon, count the number of nucleotides that reduce or disable its function if they are substituted. If you want the information count in bits, multiply this number by two, since each nucleotide is two bits. If one substitution leads to a reduction in function, two bits of information are destroyed. If that mutation later reverts, two bits of information are created. If a mutation destroys the whole thing, all those bits of information are destroyed.

    My emphasis in bold.

    So I read that and complained that I think it would be silly to do it like that, because it would require lots and lots of experiments to find the phenotypic effects of all the substitutions one can make in the Lac-Operon.

    You responded by giving papers that show the average effect of substitutions to be deleterious roughly 70% of the time.

    So I worked with that. I literally, directly did what you asked of me. I took that as a recipe for calculating information content: assume roughly 70% of all substitutions to be deleterious, then use that to calculate information content in an arbitrary sequence.

    Now you’re complaining that I literally used what you told me to use. Instead you want me to go by the number of “nucleotides subject to deleterious mutation”:

    Rumraket, the number of bits is the number of nucleotides subject to deleterious mutation. It’s as simple as that

    .

    Okay, how do we find the number of nucleotides subject to deleterious mutations in the Lac-Operon? After all, I wanted to calculate the amount of functional information in the Lac-Operon, and you started out by giving me a method for doing that, explicitly mentioning the Lac-Operon. When I did that and got negative numbers (do you concede that using that method you gave and which I quote here above, gives negative numbers?), you complained I had an issue with population genetics.

    Do you even know what you are trying to argue at this point?

    JoeCoder: Half your comments are misunderstanding what I’ve said.

    No mate, no. That shit isn’t going to fly.

    The other half of your comment is disputing methodology widely used in population genetics that nobody else disputes.

    You’ve yet to even come close to demonstrating the truth of this claim. Rather, you yourself seem to have argued different things at different points in time and you’re apparently not really sure what you want me to do.

    I have no desire to keep defending widely accepted points (like most mutations being substitutions)

    Among substitutions and indels, yes, substitutions are most frequent. That doesn’t make them the most frequent of all possible mutations. And they aren’t, nor are they widely accepted to be. Not that it matters here; I’m not making any specific point regarding duplications, as if that specifically throws your conclusion into doubt. I mentioned the duplication factoid as a matter of principle, because I happened to know that what you wrote was, strictly speaking, not correct.
    Among all possible mutations, duplications are the most frequent. I’ll go dig up a reference for this. I’m not asking you to alter your conclusions on the basis of this, or that we start debating that.
    There’s no need for you to short-circuit the debate on this.

    when you admit no error

    I have no issue admitting error. It just requires me to actually make an error. If I have misunderstood something you wrote, so be it, I’ll admit it if you can show that.

    Rather, it seems to me as I detail (in detail) above, you’ve not been very clear or consistent in what you wanted me to do.

    Can we return to this you just wrote:

    Rumraket, the number of bits is the number of nucleotides subject to deleterious mutation. It’s as simple as that

    ?

    Is this the method you want me to use to calculate information content in the Lac-Operon? Can you elaborate? Can we try it on a synthetic example, using the references you gave that estimate the average number of deleterious substitutions?

    How about I just give you this information: There’s a gene with 450 nucleotides encoding a 150 amino-acid protein. 70% of all substitutions in the nucleotides for that gene are deleterious. 1% are beneficial. 29% are neutral.

    How much functional information, in bits, is in that gene? Show your work!

    Suppose there’s an insertion of nine nucleotides at one end of that gene (to keep it simple), so it has just added three amino acids to its length (153 in total now). Now calculate its information content again, using the same basic facts as above (70% of all substitutions in the nucleotides for that gene are deleterious, 1% are beneficial, 29% are neutral).

  29. petrushka: What is it we can’t observe?

    Cumulative selection. If we could observe it, we wouldn’t need Weasel programs to demonstrate the power of cumulative selection.
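For reference, a minimal Weasel-style sketch of what those programs demonstrate. The target phrase is Dawkins’s; the offspring count (100) and per-character mutation rate (5%) are hypothetical choices, not the parameters of any particular published version.

```python
import random

# Minimal Weasel-style cumulative selection sketch.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.05):
    """Copy a string, replacing each character with a random one with prob `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def score(s):
    """Fitness: number of characters matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

random.seed(0)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
for generation in range(1, 1001):
    # Cumulative selection: each round, keep the best of 100 mutated copies.
    parent = max((mutate(parent) for _ in range(100)), key=score)
    if parent == TARGET:
        break
print(generation, score(parent), parent)
```

Single-step selection (drawing 28 random characters and hoping for an exact match) would take on the order of 27^28 tries; cumulative selection typically finds the target in well under a thousand generations of 100 copies each.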

  30. JoeCoder, I think you have let yourself be confused about the two parallel discussions here.
    The first is about population genetics and Mendel’s Accountant (which I’m mostly trying to stay out of, because I know I’m not qualified to speak on it).

    The second one a general attempt by me to have you give a method for calculating functional information content in an arbitrary genetic sequence. I think somehow you’ve got those two mixed up.

  31. I’m no longer reading these long comments from rumraket, for the reasons I already stated above. But if anyone else wants to adopt one of his points and debate it with me, then feel free to do so and I will respond.

  32. Petrushka, I agree that humans have a small percentage of neanderthal DNA. But I don’t think that makes a difference for our purposes here.

  33. JoeCoder: Petrushka, I agree that humans have a small percentage of neanderthal DNA. But I don’t think that makes a difference for our purposes here.

    I am having trouble figuring out what your purpose is.

    MA does not predict the behavior of bacteria or viruses or fungi. You seem to think it argues against living lineages having existed for 90,000 generations.

    There is so much rubbish involved and so much that conflicts with established facts.

    What are you arguing?

  34. JoeCoder:

    keiths, I don’t see a contradiction so you’ll have to go into more detail.

    Here’s a simple example. Consider a set of organisms with fitnesses {3, 9000, 2, 1, 8000}.

    If you sort that set to produce a fitness ranking, you get {1, 2, 3, 8000, 9000}, with rankings running from 1 (least fit) to 5 (most fit).

    The third organism in that set has a fitness of 3 and a fitness ranking of 3. The fourth organism has a fitness of 8000 and a fitness ranking of 4. It is impossible for the probability of reproduction to be proportional to both the fitnesses and the rankings in that case.

    The probability of reproduction can’t be proportional to both the fitnesses and the rankings except in special cases where each fitness is proportional to its ranking. That obviously doesn’t hold in general, and my example illustrates the problem.
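A sketch making the arithmetic concrete, using the fitness set from the example; the rank-proportional scheme (probability proportional to position in the sorted order) is one plain reading of “proportional to the rankings”:

```python
# Reproduction probabilities proportional to raw fitness vs. to rank,
# using the fitness set {3, 9000, 2, 1, 8000} from the example above.

fitnesses = [3, 9000, 2, 1, 8000]

# Proportional to fitness: p_i = f_i / sum(f)
total = sum(fitnesses)
p_fitness = [f / total for f in fitnesses]

# Proportional to rank: rank 1 = least fit, rank 5 = most fit
ranks = [sorted(fitnesses).index(f) + 1 for f in fitnesses]
rank_total = sum(ranks)
p_rank = [r / rank_total for r in ranks]

for f, r, pf, pr in zip(fitnesses, ranks, p_fitness, p_rank):
    print(f"fitness={f:5d} rank={r} p_fit={pf:.4f} p_rank={pr:.4f}")
```

The organism with fitness 8000 gets a probability of about 0.47 under fitness-proportional selection but only 4/15 ≈ 0.27 under rank-proportional selection, so the two schemes cannot both hold at once.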

  35. It is impossible for the probability of reproduction to be proportional to both the fitnesses and the rankings in that case

    keiths, I agree. But did you read the part directly after what I quoted? “but the correlation is imperfect, so reproduction is dependent in part on chance.”

    I think we’re still stuck with a dozen loaves of bread 😛

  36. JoeCoder,

    The difference between “proportional to 4” and “proportional to 8000” is not “chance”.

  37. JoeCoder: 1. It was written by creationists–Ew!

    Technically that one alone is good enough reason to never trust anything they say or do. Creationists start with the conclusion and then build everything around it. Universally. Those that don’t stop being creationists.

  38. JoeCoder: Yes, because all of you repeating that point makes it right, no matter how many times I address it. We see lots of species going extinct. We

    But we can watch things for many many generations and see not even the slightest sign of such a meltdown. Sooooo, that means nothing?

  39. Petrushka, my purpose in mentioning neanderthals was to highlight a possible case where deleterious load contributed to their extinction. And also to say that even if we knew exactly how they died, it would still be difficult to say whether load contributed to their extinction.

    My main argument is that evolution can’t work in higher animals because del. mutations arrive faster than selection can remove them and they go extinct. What evolution can’t even preserve it could not have created.

    Mendel’s parameters default to values for humans. I don’t know enough about the population genetics of E. coli to know all of the parameters to give Mendel, but I can pull in some of the more relevant ones and give it a try.

    Here are the values I changed from default:

    Mutation rate of 0.005 per generation, from your source.
    One in 1000 mutations are beneficial.
    Genome size: 5 Mb
    Population size: 1000
    Generations: 2000

    This image shows the result: http://i.imgur.com/V48pcpW.png

    The count of deleterious mutations goes up by 10 (middle graph), but those mutations have very tiny selection coefficients (right graph), so the fitness stays either constant or almost constant (left graph). In my scenario 3 with humans, there were 11,575 del. mutations after 1000 generations.
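For comparison, here is a minimal Wright-Fisher-style sketch of deleterious mutation accumulation using the mutation rate, population size, and generation count from the parameter list above. This is not Mendel’s Accountant’s algorithm (which, as noted upthread, isn’t documented well enough to reimplement); the distribution of selection coefficients (exponential, with a hypothetical mean of 0.001) and the rest of the model are assumptions for illustration only.

```python
import numpy as np

# Minimal Wright-Fisher sketch of deleterious mutation accumulation.
# POP, GENS, and MU come from the parameter list above; MEAN_S (the mean
# selection coefficient) is a hypothetical choice, not a Mendel default.
rng = np.random.default_rng(1)

POP = 1000      # population size
GENS = 2000     # generations
MU = 0.005      # deleterious mutations per genome per generation
MEAN_S = 1e-3   # hypothetical mean selection coefficient

log_w = np.zeros(POP)        # log-fitness per individual (multiplicative effects)
mut_counts = np.zeros(POP)   # deleterious mutations carried per individual

for _ in range(GENS):
    # New mutations this generation: Poisson(MU) per individual
    n_new = rng.poisson(MU, POP)
    for i in np.nonzero(n_new)[0]:
        # Each mutation multiplies fitness by (1 - s), s ~ Exponential(MEAN_S),
        # capped below 1 so log1p stays finite
        s = np.minimum(rng.exponential(MEAN_S, n_new[i]), 0.99)
        log_w[i] += np.log1p(-s).sum()
        mut_counts[i] += n_new[i]
    # Wright-Fisher reproduction: parents drawn in proportion to fitness
    w = np.exp(log_w)
    parents = rng.choice(POP, POP, p=w / w.sum())
    log_w = log_w[parents]
    mut_counts = mut_counts[parents]

print("mean mutations per individual:", mut_counts.mean())
print("mean fitness:", np.exp(log_w).mean())
```

With MU × GENS = 10 expected mutations per lineage, the count climbs by roughly ten while mean fitness barely moves, roughly the qualitative shape of the middle and left graphs described above.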

  40. OMagain, welcome to our discussion! Selection is weakest in organisms with large genomes, long generation times, and low reproductive rates. So in general the more generations we’re able to observe, the less likely the species has an issue with too much deleterious load.

    I explained this a couple pages back but with so many comments it’s easy to miss it.

  41. JoeCoder: My main argument is that evolution can’t work in higher animals because del. mutations arrive faster than selection can remove them and they go extinct. What evolution can’t even preserve it could not have created.

    It’s a mathy version of god of the gaps. Flat earth stupid.

Leave a Reply