The Programmer and N.E.C.R.O.

A computer programmer noticed that he could not type very much in a single day.  But, he mused, if a large number of software bots were working on his code, they might be able to proceed by totally blind trial and error.  So he decided to try an experiment.

In the initial version of his experiment, he established the following process.

1. The software was reproduced by an imperfect method of replication, such that it was possible for random copying errors to sometimes occur.  This was used to create new generations of the software with variations.

2. The new instances of the software were subjected to a rigorous test suite to determine which copies of the software performed the best.  The worst performers were weeded out, and the process was repeated by replicating the best performers.

The initial results were dismal.  The programmer noticed that changes to a working module tended to quickly impair function, since the software lost the existing function long before it gained any new function.  So, the programmer added another aspect to his system — duplication.

3. Rather than have the code’s only copy of a function be jeopardized by the random changes, he made copies of the content from functional modules and added these duplicated copies to other parts of the code.  So that the newly inserted code would not immediately impair function, the programmer decided to try placing the duplicates within comments in the software.  (Perhaps later, the transformed duplicates with changes might be applied to serve new purposes.)

Since the software did not depend on the duplicates for its current functioning, the duplicates were completely free to mutate under the random copying errors without causing the program to fail the selection process.  Changes to the duplicated code could not harm the functionality of the software and thereby cause that version to be eliminated.  Thus, in this revised approach, mutations to the duplicated code were neutral with regard to the selection process.
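
As a minimal sketch of the revised experiment (every detail here is hypothetical: the "test suite" is just closeness to a target string, and the mutation rate, population size, and generation count are made up), the loop might look something like this.  The live code is selected toward passing the tests, while the commented-out duplicate is copied with the same errors but never tested:

```python
import random

random.seed(0)
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "print the report"   # stand-in for a test suite: score = characters matched
MU = 0.02                     # per-character copying-error rate
POP, GENS = 100, 300

def mutate(text):
    return "".join(random.choice(ALPHABET) if random.random() < MU else c
                   for c in text)

def fitness(module):
    # Only the live code is run against the tests; the comment copy is ignored.
    return sum(a == b for a, b in zip(module["code"], TARGET))

def replicate(module):
    # Imperfect replication: both the live code and the comment copy pick up errors.
    return {"code": mutate(module["code"]), "comment": mutate(module["comment"])}

seed = "a" * len(TARGET)
population = [{"code": seed, "comment": seed} for _ in range(POP)]

for _ in range(GENS):
    population = [replicate(random.choice(population)) for _ in range(POP)]
    population.sort(key=fitness, reverse=True)
    population = population[:POP // 2] * 2   # weed out the worst half

best = population[0]
print("live code:   ", best["code"])     # climbs toward TARGET under selection
print("comment copy:", best["comment"])  # neutral to the tests, so it randomizes
```

Nothing preserves the comment copy, so over enough generations it bears less and less resemblance to the functional text it was copied from.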

The programmer dubbed this version of his system N.E.C.R.O. (Neutral Errors in Copying, Randomly Occurring).  He realized that even with these changes, his system would not yet fulfill his hopes.  Nevertheless, he looked upon it as another step of exploration.  In that respect it was worthwhile and more revealing than he had anticipated, leading the programmer to several observations as he reflected on the nature of its behavior.

Under these conditions of freedom to change without being selected out for loss or impairment of current function, what should we expect to happen to the duplicated code sequences over time and over many generations of copying?

And why?

[p.s. Sincere thanks to real computer programmer OMagain for providing the original seed of the idea for this tale, which serves as a context for the questions about Neutral Errors in Copying, Randomly Occurring.]


283 thoughts on “The Programmer and N.E.C.R.O.”

  1. In contrast, natural selection would not preserve the contents of any sequence that is not currently providing reproductive benefit. So long as this remains true, all such sequences would be randomized by the accumulation of random copying errors.

    This is one of those misconceptions I think, if “not providing reproductive benefit” = selectively neutral. Neutrality is an assessment of an allele against all other alleles in the population. They are neutral with respect to each other if they have approximately the same selection coefficient. When a single allele is fixed, it’s neutral against itself. It is not providing ‘reproductive benefit’, as a differential, but will only be randomised by the accumulation of copying errors if it cannot suffer a deleterious mutation. Some neutral sequences can suffer deleterious mutations, others can’t. The accumulation of copying errors is not a universal fate for all sequences currently neutral in the population: Natural Selection can prevent a sequence from degrading, by weeding out deleterious mutants.

    If producing a dipeptide by simple riboZYME synthesis enhances reproductive capacity, and then the entire population inherits it (so it no longer enhances reproductive capacity in the population), we would not then expect the function to degrade unless the dipeptide stops making that difference. The environment has to change, and stop favouring the phenotype (a la cave fish). Otherwise, the same force that eliminated non-dipeptide-producers on the way to fixation will eliminate them again should they reappear in the future through degradative mutation.
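
    A crude toy model of that contrast (all parameters invented; "deleterious" is modeled, crudely, as any change at all to the selected copy):

```python
import random

random.seed(1)
BASES = "ACGT"
L, POP, GENS, MU = 30, 200, 500, 0.001   # toy gene length, population, generations, per-base error rate

original = [random.choice(BASES) for _ in range(L)]

def mutate(seq):
    # A copying error redraws the base at random (it may redraw the same base).
    return [random.choice(BASES) if random.random() < MU else b for b in seq]

def divergence(seq):
    return sum(a != b for a, b in zip(seq, original)) / L

# Each individual carries a selected copy and an unselected duplicate.
pop = [(list(original), list(original)) for _ in range(POP)]

for _ in range(GENS):
    offspring = []
    while len(offspring) < POP:
        selected, duplicate = random.choice(pop)
        selected, duplicate = mutate(selected), mutate(duplicate)
        # Purifying selection: any change to the selected copy is weeded out;
        # the duplicate is invisible to selection.
        if selected == original:
            offspring.append((selected, duplicate))
    pop = offspring

print("selected copy divergence: %.3f" % (sum(divergence(s) for s, _ in pop) / POP))
print("duplicate divergence:     %.3f" % (sum(divergence(d) for _, d in pop) / POP))
```

    The selected copy stays put by construction, while the unselected duplicate drifts. Which of these two situations a given "neutral" sequence is in is the point at issue: a sequence can be preserved so long as changes to it can still be deleterious, whether or not it currently provides a differential benefit.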

  2. #1. For each “step”, the “potential selective advantage” you offer comes after achieving the step and is based on the assumption the step would be achieved.

    Wrong, Eric, or rather right only in a trivial way. We’ve been over this. You say ‘tis, I say ‘tisn’t. If a new sequence, produced by random variation, is reproductively advantageous, it is reproductively advantageous in the first organism that possesses it. That’s kind of ‘after’, but only just. Obviously the mutation must happen; you don’t get selection of things that haven’t happened. But from then on, the sequence is just being copied into descendants more often than any alternative allele. It didn’t have to happen. But we can infer that it did, because it has left a trace in the modern process. How likely it would be to happen in a repeat trial – if we went back to RNA world and ‘did a Lenski’ – depends in part on the frequency of that phenotype in the genetic space. We are trying to explain a modern structure; you seem to be only willing to accept a causal explanation that would render that structure inevitable from my chosen start point.

    Could you set out what you think a satisfactory evolutionary explanation would look like? Or are you loading the criteria such that an evolutionary explanation is impossible?

    This does provide a reason to believe the step of adding system X would be preserved, if it occurred. However, since natural selection is oblivious to future benefit[…]

    How many more times? Yes, yes, I know.

    , it cannot provide any reason to justify the assumption that system X would occur.

    Not if we started back in the RNA world looking forward, no. But we have reason to suppose that System X did occur. We can reverse engineer the modern system and see if it contains traces of a precursor system which can offer a standalone benefit, at least in principle, when uncoupled from the full modern system, in an environment where NO organism had the modern system.

    Looking backwards from the modern process, one can infer selection in the parts that are highly conserved. It could be selection on the entire piece, but selection on the individual components serially fixed is a perfectly respectable alternative hypothesis. We also have data on the relative ages of the different parts of tRNA and rRNA, and the elements of the one that the other interacts with. These support a serial origin. So one has to be mindful of these factors when constructing a plausible sequence. You seem genuinely hung up on the notion that the TRS must be a TARGET, and evolution must explain how the trajectory could reach such a distant target as its inevitable conclusion, rather than something that became available contingent upon the nature of the precedent step.

    At best, your proposed advantages provide support for the survival of hypothetical organisms that have that system. They provide no answers at all for the questions about the plausibility of the arrival of the system despite all obstacles. (The same problem can be described allegorically.)

    Best not. Attempts to analogise are guaranteed to confuse. Aminoacylation can be performed by a ribozyme 5 nt long. A 29-nt ribozyme has been isolated that generates a peptide bond. The search space for these, even if they were the only active member of their space, is tiny. Your probabilistic intuition is not in accord with the available evidence.

    #2. By claiming you have “offered a potential selective advantage for all steps”, it’s clear that you are referring to your large conceptual steps (leaps actually), not to the many small steps of actual change that would be physically required, e.g. to build the first TRS.

    You’re kidding. These early steps are tiny. They really are. I guess you want a change by change genealogy. There is a large grey animal currently checking your pockets for a bun: the absence of exhaustive genetic detail in extinct organisms is sufficient for you to declare Design – a process of whose details you have not the faintest whiff of a clue. Consistent much?

    The distance between your conceptual steps for which you’ve “offered a potential selective advantage” is not a small span that could be traversed by a short random search of the local neighborhood of possibilities. Consider the distance between the specific potential selective advantages you’ve offered that bracket the construction of the first TRS.

    What, consider the entire sequence of amendments from start to finish and argue that there could not be small steps because the whole adds up to a big step? A creative way of avoiding the issue, I must say!

    On one end, you’ve pointed to a subsequent potential selective advantage that could fix the system sometime after it is operational. (Therefore, it does not explain its origin.)

    The origin is always random variation, not by its nature amenable to a very precise ‘explanation’. Fixation means the entire population has X by descent from that variation. So of course there is a lag between the system being operational and its being fixed. But that does NOT mean there is a lag between origin and the availability of the selective advantage (give or take an hour or so). It’s not advantageous after Selection has done its work, but before.

    On the other end, you’ve pointed multiple times now to proposed advantages that would precede the TRS, especially the prospect of creating a dipeptide without needing a TRS. For example (my italics added):
    [snip]
    Since there is “no TRS” needed for this earlier advantage and the TRS has been fully operational for some time to produce the latter advantage you offered, the process of constructing a working system that actually reads triplets falls entirely in between this pair of “step” advantages that you’ve offered.
    Yet you claim to have “offered a potential selective advantage for all steps”? How about the steps whereby the organism actually builds a system for matching a sequence as triplets?

    Just trying to get you to walk before you can run, Eric, if you’ll forgive the condescension. We’ll get to the triplets if we can agree on the manner in which variation and selection interact. If we aren’t talking the same language over these early steps, after all this back-and-forth, there’s no point arguing in the same vein over the rest.

    When I asked you “How does evolution move the organism toward, “Let’s process an mRNA with a sequence of matches of the nucleotides as groups of triplets.”, …?”, your response seems to fall back onto vague generalities and hand waving.

    Splutter! Accusations of handwaving from an IDist! And hardly fair. My response was that I had already covered this in detail elsewhere. I went on to describe the manner in which biology differs from ‘hard’ engineering, something you seem unwilling to take on board.

    What follows is more detail about forming a peptide bond — without using a TRS or needing one.

    Reverse engineering. Extracting the core of the modern complexities and seeing just how ‘irreducible’ they really are. You don’t need a TRS to make a peptide bond. But having pre-existing components that make a peptide bond from aminoacylated ACC-stems, a ‘TRS’ can build up around this peptidyl transferase core. So far, it’s only got peptidyl transferase activity and very short tRNA precursors, but still has the capacity to produce useful product.

  3. DNA_Jock: The only interpretation I can put on this exchange is that you are describing an organism that ” produces a ‘useful’ dipeptide’ using ribozymes.” as having a TRS.

    There appears to be a misunderstanding here. I am not in any sense “describing an organism that ” produces a ‘useful’ dipeptide’ using ribozymes.” as having a TRS.”

    There are two selectable developments, one before the TRS (i.e. note “no TRS”) that ”produces a ‘useful’ dipeptide’ using ribozymes.” and one after the TRS is operational (i.e. “one accidental success to fix the basic system”).

    The earlier of these bracketing developments I refer to as “this earlier advantage” and the latter one I refer to as “the latter advantage you offered”. Note also my earlier statement introducing this part of the evaluation.

    Consider the distance between the specific potential selective advantages you’ve offered that bracket the construction of the first TRS.

    Sorry if my statement contrasting the “earlier” and “latter” advantages was not clear enough. (Perhaps I can add some punctuation or other adjustment to make that a bit more evident.)

    p.s. Thanks for bringing the issue of unclarity to my attention.

    p.p.s. Whether one agrees or not, with a slight adjustment of a comma and an extra “since”, I hope it is more clear. It should now make more sense that the sentence ends by referring not to one step advantage but to “…this pair…”, i.e. to two different cases, the earlier before the TRS and the latter after the TRS.

    … the process of constructing a working system that actually reads triplets falls entirely in between this pair of “step” advantages that you’ve offered.

  4. The Accumulation of Mutations with Weak Signal Swamped by the Noise

    DNA_Jock: Going out on a limb here, I do not consider that ‘neutral’ mutations exist. However, many mutations have such a tiny selective effect that any signal is swamped by the noise. If you duplicate a gene, many more mutations fall into this category…

    I incorporated part of this statement into Observation #7.

    DNA_Jock’s statement has quite a bit of merit to it. There has been research suggesting “that even synonymous mutations and mutations in non-coding regions often have at least a very slightly deleterious effect [35, 36].” On the other hand, most deleterious mutations have so small an effect that selection cannot effectively weed them out and they do accumulate in the genome. They are indeed swamped by the noise.

    For more about this, anyone can check out at least the Abstract and Introduction portion of this conference paper available online. The authors have much to say about nearly-neutral mutations and the accumulation of low-impact deleterious mutations. It includes references to some of the published results on the topic of mutations that are not weeded out by selection.

    Can Purifying Natural Selection Preserve Biological Information?
    Paul Gibson, John R. Baumgardner, Wesley H. Brewer and John C. Sanford

    For the purposes of this thread, the important observation that remains, and is even increased, is that there is indeed a set of mutations that are not beneficial (i.e. do not enhance reproductive success) and yet that do accumulate randomly. They are not weeded out by natural selection. Even if they are slightly deleterious, they are lost in the noise, just as DNA_Jock said. (If anything, the bad news is that the indications are that more of this may accumulate than was formerly supposed.)

    The important point for this thread is the influence of the mutations that are not weeded out to randomize sequence content over time. This is especially so for any sequence that is not currently providing reproductive benefit, since unselected change could happen to any position in such a sequence without loss of the non-existent reproductive benefit.

    Thus, “Neutral” within this thread, in keeping with the definitions and distinctions given early on in Observation #1, is defined to include any mutations that are sufficiently near-neutral that they are not weeded out by natural selection. The key property is the accumulation of the random errors in copying due to the lack of preserving selection.
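
    To put a rough number on "sufficiently near-neutral", the sketch below evaluates one textbook diffusion-approximation result (Kimura's) for the fixation probability of a new mutation; it assumes genic selection, an effective population size equal to the census size, and purely illustrative values. When 4N|s| is much smaller than 1, the answer is essentially the neutral 1/(2N), meaning the mutation's fate is set by drift rather than selection (the "swamped by the noise" regime), whereas strongly deleterious mutations are efficiently weeded out:

```python
import math

def fixation_probability(s, N):
    # Kimura's diffusion approximation for a new mutation with selection
    # coefficient s in a diploid population of effective size N,
    # starting from a single copy at frequency 1/(2N).
    if abs(4 * N * s) < 1e-9:            # neutral limit
        return 1.0 / (2 * N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 10_000                               # hypothetical effective population size
for s in (0.0, -1e-6, -1e-4, -1e-2):     # neutral, then increasingly deleterious
    print("s = %+.0e  fixation probability %.2e  (neutral baseline %.2e)"
          % (s, fixation_probability(s, N), 1 / (2 * N)))
```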

    One of their observations is that accumulation depends on the rate of mutation (among other factors). For the purposes of this thread, we can reasonably expect that the rates of errors in copying for any earliest pre-protein organisms would likely be higher than they are for mammalian organisms that have protein-based proofreading systems to catch and correct errors.

  5. No Trait X (nucleotide sequence) is intrinsically beneficial or deleterious, in and of itself; instead, Trait X (nucleotide sequence) may be beneficial or deleterious depending on what sort of environment the Trait-X-possessing critter inhabits.
    Suppose Trait X has consequences which would result in Trait X’s being beneficial in one particular Environment Y, and only in that one particular Environment Y. In this case, natural selection will only, er, select critters with Trait X when those critters inhabit Environment Y.
    Suppose Trait X has consequences which would result in Trait X’s being beneficial in a set of Environments Y1. In this case, natural selection will select critters with Trait X when those critters inhabit any environment within Set-Of-Environments Y1.
    Suppose Trait X has Consequences C1 which would result in Trait X’s being beneficial in Environment Y1; and Consequences C2 which would result in Trait X’s being beneficial in Environment Y2. In this case, natural selection will select critters with Trait X when those critters inhabit either Environment Y1 or Environment Y2—but the reason why Trait-X-bearing critters are selected for in Environment Y1 is different from the reason why Trait-X-bearing critters are selected for in Environment Y2.

    One gets the impression that ericB thinks that if a present-day Trait X is selected for whatever present-day reason, that Trait X can only ever have been selected for whatever reason it’s selected for in the present day. The notion that there might be more than one reason for Trait X’s being selected-for, is apparently foreign to ericB’s mind.

  6. ericB,

    For the purposes of this thread, we can reasonably expect that the rates of errors in copying for any earliest pre-protein organisms would likely be higher than they are for mammalian organisms that have protein-based proofreading systems to catch and correct errors.

    There is absolutely no basis whatsoever for such an “expectation” – or for any other “for-purposes-of-this-thread” claim made by ericB.

    EricB’s chemistry and physics are dead wrong from the outset and don’t account for hundreds of other factors that go into the evolution of complex molecules.

    EricB is now simply making up crap faster than anyone can answer it. This is the infamous Gish Gallop. “Pastor” Bob Enyart is likely one of ericB’s models for this tactic.

    All of ericB’s misconceptions and misrepresentations of fundamental science remain on the table unanswered by him.

  7. thorton: … Mutations to the eyes that shut down vision were therefore beneficial (not neutral) to the fish because those fish who lost didn’t have to expend energy building / maintaining vision. ….

    It is one thing to say that the mutations that led to blindness are beneficial rather than neutral. But just to be clear, is that all you are pointing out? Or are you also trying to suggest that there are no cases of neutral (or near-neutral) mutations that accumulate randomly as I have described?

    If it is the latter, please consider my post on The Accumulation of Mutations with Weak Signal Swamped by the Noise, especially the research reviewed in the Introduction of the paper I referred to.

    In any case, considering the cost of energy when it is not sufficiently compensated for by reproductive benefit would actually make my case stronger, not weaker.

    The point of the thread is to consider the plausibility of construction of complex molecular machines and systems, such as the Triplet-Reading System. Allan pointed to “one accidental success to fix the basic system” — an accidental success that happens sometime well after the system is operational. Just as with a vision system, operation costs energy. (Even before operation, construction costs energy.)

    Thus, the situation of an organism expending energy on a TRS prior to the “one accidental success to fix the basic system” is exactly equivalent to the vision system example as you’ve described it for cave fish. One of the easiest of accomplishments for random mutations is to break something, i.e. to stop something from working or operating. As you describe with the fish, it would be beneficial to turn off the TRS before it ever reaches that “one accidental success to fix the basic system”. Being of benefit, the mutation to shut it off (or to not spend energy on it in the first place) would be promoted through the population, increasing in frequency.

    In fact, before I made this a separate thread, in the previous thread I made the following point generally about eliminating disadvantages (which would include eliminating unrewarding energy costs).

    ericB: Whatever the nature of the disadvantage to the survival and reproduction of an organism, natural selection could work to weed out those organisms in favor of others that don’t have that disadvantage — either because they don’t have the defective duplicate or because they have hit upon a remedy that avoids the disadvantage. One of the simplest remedies that would be easily accessible to undirected change is to just not transcribe the rogue copy any more. As a software analogy, that would be something like commenting out a section of code.

    Turning something off is the easiest solution of all, one that an evolutionary process would almost surely be able to find. It could just become junk that is copied during replication (with more and more errors), but never transcribed, never put into action.

    Just as with comments in software, degrading changes to the junk would be completely free to accumulate, thereby further randomizing its contents over time.

    So far, this does not yet sound like a plausible recipe for explaining the origin of a Triplet-Reading System.

    My argument has never depended on the idea that eliminating spending energy on the TRS would be beneficial prior to its selection (since any lack of selection would allow mutation accumulation in any case), but that certainly adds to the argument and makes it stronger.

    thorton: Once again you flat out refuse to understand that any genetic mutation can only be judged deleterious, neutral, or beneficial with respect to its effects in the current environment.

    This seems to me the strangest part of your post. You have just quoted a passage of mine in which my explicit point is to indicate that selection is dependent on the current conditions and that features that might have been beneficial under other conditions (either past or future) may not be judged beneficial in the current environment. And it is in response to this passage that you say that I “refuse to understand that any genetic mutation can only be judged deleterious, neutral, or beneficial with respect to its effects in the current environment.” Really? X therefore, not X?

  8. ericB: For more about this, anyone can check out at least the Abstract and Introduction portion of this conference paper available online. The authors have much to say about nearly-neutral mutations and the accumulation of low-impact deleterious mutations. It includes references to some of the published results on the topic of mutations that are not weeded out by selection.

    Can Purifying Natural Selection Preserve Biological Information?
    Paul Gibson, John R. Baumgardner, Wesley H. Brewer and John C. Sanford

    Oh my. Sanford’s a Born Again YEC whose popular-press book on "genetic entropy" was shredded then laughed out of the room in the scientific community. I especially liked the part where he used the ages of Noah and the Biblical patriarchs as "evidence" the human life span was getting shorter due to his claimed genetic degradation.

    Now as part of your Gish gallop you’re going to cite a “scientific” paper from a ID-Creationist conference, co-authored by another YEC – John Baumgardner – making the same ridiculous claims. Genomes were created “perfect” and can only degrade over time. Why don’t you just cut to the chase and offer up Woodmorappe’s Noah’s Ark Feasibility Study?

    ETA: You should also know that the Mendel’s Accountant program that was the basis for the latest bit of ID-Creationist “research” you cite is a scientifically worthless program written by Sanford specifically to support his YEC beliefs. Just like Genetic Entropy, his Mendel’s Accountant is hopelessly flawed. Among other things it limits populations to an unrealistic maximum of 1000, doesn’t account for sexual recombination, and has parameters that guarantee a population crash no matter what initial values you put into it. And just like Genetic Entropy, his Mendel’s Accountant was shredded and laughed out of town by actual population geneticists.

  9. ericB

    (snip more of ericB’s version of the tired old Creationist "what good is half an eye" argument)

    You aren’t going to lift even one little finger to read the scientific literature and try to learn about reality, are you?

  10. thorton:

    You aren’t going to lift even one little finger to read the scientific literature and try to learn about reality, are you?

    This is pointless. EricB was nailed way back when he couldn’t answer direct questions about Sewell’s and other ID/creationists’ knowledge of thermodynamics. He was nailed when he deliberately avoided answering questions about ID/creationist probability calculations for molecular assemblies.

    It was evident at the very beginning of his “challenge” that he was drawing from the “knowledge” base of ID/creationism; and we now see it is directly from Sanford’s “genetic entropy,” as we stated it was many comments ago. He was nailed the second it became evident he was reciting Sanford’s “genetic entropy” “argument” without actually saying the words.

    The shibboleths of ID/creationism are many; and that is because all their “arguments” are built on common misconceptions and misrepresentations of science that high school students can do.

    This tells us something about the state of ID/creationism and its followers. All of them, ericB included, presume to “argue” about science even though none of them have an adequate high school understanding of science.

    The last several posts by ericB are a clear indication of where this is going – actually, not going. It will be just more of the old Gish Gallop. All he is doing is practicing ID/creationist “debating” tactics, as we already noted before.

    I’ve seen enough. We will get no answers from ericB; and he will never answer for his misconceptions and misrepresentations because he can’t. He doesn’t even understand the questions; such is the depth of his ignorance.

  11. Nah, Elzinga. You’ll be back, just like Thorton. Two peas in a pod.

    You can’t resist spitballing the opposition.

    Funny thing is, I just can’t figure out why you and Thorton haven’t just recorded your rants so you can copy and paste them on every thread.

    But then your fingers would go on unpaid leave. I get it.

    But what’s even funnier is how you didn’t even bother to do like Allan Miller and actually rebut EricB, point by point.

    Now this is a taunt. I repeat. This is a taunt: You haven’t gotten it in you to rebut EricB. Come on Elzinga, I dare you to reply to him point by point.

  12. Lizzie, go ahead and put my last comment along with Mike Elzinga’s and Thorton’s numerous off topic comments into the sandbox or guano.

    It is where they ALL belong!

  13. Here is a fundamental misconception and misrepresentation coming from Mike Elzinga.

    Eric’s observation has nothing to do with physics and chemistry. It has to do with logic. Modern organisms have error-control mechanisms in place. But from a Darwinian POV, early life did not have these mechanisms in place. So it logically follows that early life had more copy errors. Or else, why the need for error-control mechanisms?

    But then programmed design does not have this issue since error-control mechanisms would have been designed from the outset.

    Physics and chemistry tells you the nuts and bolts of how the systems operate. It does not tell you how they came to be there.

    Mike, stop misrepresenting and misconceiving what physics and chemistry can tell us!!!

    Even a high school student understands this, for crying out loud.

    Elzinga: “EricB’s chemistry and physics are dead wrong from the outset and don’t account for hundreds of other factors that go into the evolution of complex molecules.”

  14. Steve: Physics and chemistry tells you the nuts and bolts of how the systems operate. It does not tell you how they came to be there.
    Mike, stop misrepresenting and misconceiving what physics and chemistry can tell us!!!

    Your anger and hatred are misdirected. Stop stalking, take a cab home, and sober up. Learn some science for a change.

    Even high school students know that physicists and chemists can model pretty accurately the properties of molecules. For example, it is done routinely in coming up with new molecules used in medicine to target specific molecular sites in biological systems.

    The recipe(s) leading to the molecules of life are unknown because the pathways are buried in history. But no one working in these areas doubts that there is at least one recipe out there. People who actually work with molecules know these things. That is why the field is extremely interesting to people who don’t have dogmas that are threatened by the implications of molecular and organism evolution.

  15. Steve: Now this is a taunt. I repeat. This is a taunt: You haven’t gotten it in you to rebut EricB. Come on Elzinga, I dare you to reply to him point by point.

    It has already been done. You and he just ignore the specific questions that pinpoint the crucial misconceptions and misrepresentations ID/creationists have about fundamental scientific concepts. That has been thoroughly demonstrated on just this thread alone.

    The reason you and he ignore these questions is because you both already know that you don’t get the fundamentals of science at even the high school level.

    The repetition of such questions occurs only because the ID/creationist’s misconceptions and misrepresentations of science haven’t changed in something like fifty years. One doesn’t need to get into any science beyond the high school level because that is precisely where ID/creationism goes off the rails.

    ID/creationists, such as yourself, simply never take the time to think about any of it. Reminders don’t seem to work with any of you.

    Instead you just sneer and taunt. That’s all you know; and that is pretty lame.

  16. Steve: But then programmed design does not have this issue since error-control mechanisms would have been designed from the outset.

    Then explain cancer from an ID perspective?

  17. Steve: Physics and chemistry tells you the nuts and bolts of how the systems operate. It does not tell you how they came to be there.

    And neither does ID! Or perhaps you know better?

    Go on Steve, explain the origin of a single thing from an ID perspective? Just one!

    Does the fact that you cannot ever actually do that but make the claim over and over give you pause? I guess not…

  18. Mike Elzinga,

    Eric: we can reasonably expect that the rates of errors in copying for any earliest pre-protein organisms would likely be higher than they are for mammalian organisms that have protein-based proofreading systems to catch and correct errors.

    Mike Elzinga: There is absolutely no basis whatsoever for such an “expectation”

    Actually, I’m going to disagree with Mike here. Which does not make Eric or Steve’s case, but proofreading is a complex function that one would not expect a VERY early system to possess. On the evolutionary paradigm, it’s no problem, and requires no programming. Intact preservation of successful sequences is reasonably expected to have a selective advantage over slipshod replication, and so the evolution of proofreading is favoured; the proofreading machinery survives along with that genome. A genome that has been successful is more likely to be successful again than the same genome copied with errors, so proofreading genes can back winners.

    There is no particular conflict between poor replication fidelity and lineage persistence or the evolution of beneficial change through mutation, however, provided it attains a reasonable level. If it makes an error every base, it’s hardly a replication mechanism anyway. If a copying mechanism is slipshod, it still needs only to turn out replicator copies with an average exponent of increase >1. Even if many copies are duds, the lineage remains viable through production of the ones that aren’t, including a fraction that may be selectively ‘better’. Fidelity cannot attain perfection for two reasons: it is probably impossible as a practical matter due to the wobbly environment down at that scale, and the closer a sequence gets to achieving it, the harder further improvement becomes, since mutation itself is the source of such improvement.

    One probable effect of poorer fidelity is a restraint on genome size, so conversely an effect of increasing fidelity is to lift the lid and allow genomes and everything in them to become larger and hence more complex.
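
    A back-of-the-envelope version of that size restraint (numbers hypothetical, and treating the parent's fecundity as a crude stand-in for its selective superiority, in the spirit of Eigen's error threshold): if copying errors are independent with per-base rate u, the chance of an error-free copy of a length-L genome is about exp(-uL), so a lineage stays above replacement only while fecundity × exp(-uL) exceeds 1, that is, roughly while L < ln(fecundity)/u. Raising fidelity lifts that cap:

```python
import math

def error_free_fraction(u, L):
    # Chance that a length-L genome is copied with zero errors,
    # assuming independent per-base errors at rate u.
    return math.exp(-u * L)

def rough_max_length(u, fecundity):
    # Largest L for which fecundity * error_free_fraction(u, L) still exceeds 1.
    return math.log(fecundity) / u

for u in (1e-2, 1e-4, 1e-6):             # hypothetical per-base error rates
    print("u = %.0e   error-free fraction at L=1000: %.2e   rough max L (fecundity 10): %.0f"
          % (u, error_free_fraction(u, 1000), rough_max_length(u, 10)))
```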

  19. ericB: DNA_Jock’s statement has quite a bit of merit to it. There has been research suggesting “that even synonymous mutations and mutations in non-coding regions often have at least a very slightly deleterious effect [35, 36].” On the other hand, most deleterious mutations have so small an effect that selection cannot effectively weed them out and they do accumulate in the genome. They are indeed swamped by the noise.

    Close, but no cigar.
    You are failing to consider the fate of mutations with a slightly advantageous effect. As noted in your ref 35 (which btw is “Akashi, 1995”):

    “In the absence of positive selection for preferred changes at silent sites, a gradual accumulation of unpreferred fixations will drive synonymous codon usage toward mutational equilibrium. The maintenance of major codon preference for nearly neutral synonymous DNA changes requires a slow, but constant, rate of “compensatory” adaptive fixation.”

    And before you start quote-mining the first sentence here, it is presented as a counter-factual hypothetical.
    To put it in your terms: “very slightly advantageous mutations are more likely to fix than very slightly detrimental mutations” (duh!). This produces the heavy codon bias observed in D. simulans. A point that was apparently beyond Sanford et al.
    And Eric.
    Also, since Allan’s phrase “one accidental success to fix the basic system” refers to the first mRNA, your repeating it endlessly is another GSW to the foot.
    Which you may realize after you have answered Mike’s probability question and my “minimum number of different codons” question…
    Or not.
    But I am touched to see that you do read my posts, I guess.🙂

  20. DNA_Jock,

    I’d forgotten the provenance of that ‘one accidental success’ bit myself, even though I know it originated with me! A net One Product With Selective Advantage is all that is needed to stop the mechanism that produces it being degraded. And the more, the merrier.

    I wish I hadn’t said ‘fix’. I meant neither ‘repair’ nor ‘get to 100% frequency in the population’.

  21. ericB: If it is the latter, please consider my post on The Accumulation of Mutations with Weak Signal Swamped by the Noise, especially the research reviewed in the Introduction of the paper I referred to.

    NOTE to those who may have missed the point:

    What I pointed toward is “especially the research reviewed in the Introduction of the paper I referred to.” That includes some historical review.

    The only point of interest to me for this thread is the recognition that some mutations that are even potentially slightly deleterious are nevertheless sufficiently near neutral that selection cannot effectively weed them out. Instead, they are lost in the noise — just as DNA_Jock said. Therefore, they accumulate. The random accumulation is the point of interest.

    I’m not making any claims about the conclusions of the paper itself or the program they used. And for the record, I’m not a YEC.

  22. DNA_Jock: But I am touched to see that you do read my posts, I guess.

    I don’t have enough time for this, certainly not as much as everyone else combined (including some who are retired) and so I do prioritize. Some authors I skim or just skip. Even then, there remain comments I’d like to get back to and respond to, but that I haven’t (at least not yet). Sometimes I can catch up a bit, but at other times life has other priorities.

    So, yes, it does mean something that your posts are worth reading.

    DNA_Jock: You are failing to consider the fate of mutations with a slightly advantageous effect. As noted in your ref 35 (which btw is "Akashi, 1995"):

    “In the absence of positive selection for preferred changes at silent sites, a gradual accumulation of unpreferred fixations will drive synonymous codon usage toward mutational equilibrium. The maintenance of major codon preference for nearly neutral synonymous DNA changes requires a slow, but constant, rate of “compensatory” adaptive fixation.”

    And before you start quote-mining the first sentence here, it is presented as a counter-factual hypothetical.
    To put it in your terms: “very slightly advantageous mutations are more likely to fix than very slightly detrimental mutations” (duh!). This produces the heavy codon bias observed in D. simulans.

    You seem to say that as if I wouldn’t agree that beneficial mutations are more likely to fix. I’m not sure why. See Observation #1. Throughout the Observations I repeatedly qualify my claims by distinguishing them from cases where there is a current reproductive benefit.

    DNA_Jock: …Allan’s phrase “one accidental success to fix the basic system” refers to the first mRNA, …

    It quite explicitly does not refer to the first RNA sequence that the system attempts to read. Here is a longer quote with my emphasis added.

    Allan Miller: It is true that the first sequences passed through any primitive coding ribosome were not designed to be coded from. But it only takes one accidental success to fix the basic system, which may then uncover further novelties, further cementing its role and selecting for its refinement.

    Quite reasonably, there is no assumption that the first sequence through is the accidental success. There is also a reasonable explicit acknowledgement that “the first sequences passed through any primitive coding ribosome were not designed to be coded from.” Allan gives no indication here of making the question begging teleological assumption that the starting sequences are already suited to produce results out of the gate.

    Given that Allan has chosen to posit a system that starts with only one tRNA and that the sequences passed in were not designed as specifically coded for that tRNA, the vast majority of sequences that could be tried can be expected to produce literally nothing at all, since they will be overwhelmingly likely to encounter one of the many STOP codes (i.e. any codon at all other than the one that matches the single tRNA). I mentioned more on that here.

    To reach the “one accidental success” requires not only producing something, but also that the something is reproductively beneficial.

    BTW, Allan adds this qualification:

    Allan Miller: I wish I hadn’t said ‘fix’. I meant neither ‘repair’ nor ‘get to 100% frequency in the population’.

    That is fine. I didn’t take “fix” to be intended to mean either of the other ideas. I think it was clear enough all along that Allan was pointing to an eventual reproductively beneficial product of a system that had not been designed to produce such a benefit.

  23. ericB

    The only point of interest to me for this thread is the recognition that some mutations that are even potentially slightly deleterious are nevertheless sufficiently near neutral that selection cannot effectively weed them out. Instead, they are lost in the noise — just as DNA_Jock said. Therefore, they accumulate. The random accumulation is the point of interest.

    In your scenario if such slightly deleterious mutations do accumulate enough to have a negative impact on reproductive success then they *will* be subject to selection pressures and weeded out. If they don’t have a negative impact on reproductive success then who cares?

    There’s zero evidence of any issues with sexually reproducing species going extinct from ‘genetic degradation’ anywhere in the 3+ billion year history of life on the planet.

    So what’s the problem?

  24. DNA_Jock: Eric, what is the minimum number of different codons that your “functional(tm), operating, reproductively beneficial TRS” must recognize?

    I think there is a bit of confusion about roles in this picture. I am not the one making the proposal. Allan is. It is for Allan to propose whatever he wants and then defend the plausibility of his proposal as being able to become operational and reproductively beneficial rather than being eliminated.

    Similarly, if Allan had proposed “that between the Earth and Mars is a china teapot revolving about the sun in an elliptical orbit”, it would likewise be his responsibility to make the case for its plausibility. More on that here.

    Allan has proposed the system starts with effectively only one tRNA. In the earlier thread, I did indicate that I thought the TRS would need a full set to avoid certain severe problems, but it was wrong for me to not be more careful to distinguish between that opinion of mine and his actual proposal, which included only one to start with. So I did apologize and went on to mention some of the difficulties that that starting assumption entails. See here and the p.s. here.

    Unfortunately, I’ve just learned from Allan that he wasn’t around at that time, which explains why he didn’t respond.

    Allan Miller: From 12th Sep to 30th I was yomping my way round the mountains of Norway, so I missed this and other comments relating to my arguments.

    Allan, I hope you had a great time in Norway!

  25. thorton: In your scenario if such slightly deleterious mutations do accumulate enough to have a negative impact on reproductive success then they *will* be subject to selection pressures and weeded out. If they don’t have a negative impact on reproductive success then who cares?

    I haven’t been making the case that the accumulation itself makes “a negative impact on reproductive success”. On the contrary, the whole thread emphasizes the accumulation of Neutral Errors in Copying, Randomly Occurring (which to be more specific includes those that are sufficiently near neutral as to not be weeded out).

    The reason it matters is that accumulating random errors in copying effectively randomize the contents of the sequences where the lack of selection allows that to occur. The organism goes on reproducing, but those sequences cannot be prevented from becoming effectively random sequences that change randomly. That is a blind random walk.

    The only hope for positive exit from that trajectory is for those changes to hit on some sequence that is selectable as reproductively beneficial in the current environment. That may be plausible if it has been a short random walk starting from an existing reproductively beneficial sequence. It may be discovering a beneficial variation in the near neighborhood of the original sequence.

    But if it started from a random sequence or if it has been a long walk from a formerly beneficial sequence (such that the former sequence information has now been significantly altered by accumulating random changes), this makes the prospect of hitting on a reproductively beneficial sequence far more bleak.

  26. But if it started from a random sequence or if it has been a long walk from a formerly beneficial sequence (such that the former sequence information has now been significantly altered by accumulating random changes), this makes the prospect of hitting on a reproductively beneficial sequence far more bleak.

    Two things:
    1. Function isn’t that rare, by actual experiment.
    2. Duplications.

    Okay, three things:
    3. If genetic meltdown happens it would happen first in organisms that reproduce rapidly, especially those that have relatively little non-coding DNA. It doesn’t happen. Bacteria are essentially immortal. They copy themselves without dying. Every living bacterium is the mutated version of its 4-billion-year-old self.

  27. ericB
    To support your contention that “very slightly deleterious mutations will not be selected out by NS”, that they are “lost in the noise”, you cited Akashi, 1995, thus:

    DNA_Jock’s statement has quite a bit of merit to it. There has been research suggesting “that even synonymous mutations and mutations in non-coding regions often have at least a very slightly deleterious effect [35 {= Akashi 1995}, 36].” On the other hand, most deleterious mutations have so small an effect that selection cannot effectively weed them out and they do accumulate in the genome. They are indeed swamped by the noise.

    In response, I quote from Akashi, revealing that they demonstrate that such VSDMs are in fact fixed less often than "very slightly beneficial mutations" (the backmutations) are fixed. Thus, your own source for the degrading effect of VSDMs completely and utterly refutes your point.
    You reply

    You seem to say that as if I wouldn’t agree that beneficial mutations are more likely to fix. I’m not sure why.

    So you agree that very slightly beneficial mutations fix more often than very slightly detrimental mutations? In which case your NECRO argument vanishes. Remember, YOUR reference demonstrates that they fix more often, not merely that they are more likely to fix.
    As to what the “one accidental success to fix the basic system” refers to, you state:

    It quite explicitly does not refer to the first RNA sequence that is attempted to be read by the system.

    True. It refers to the first RNA sequence that is successfully read by the system, that is the first “mRNA”.
    This “one accidental success to fix the basic system” occurs moments after a triplet reading system is created. It is the fortuitous discovery of an RNA sequence that ‘codes for’ a useful peptide, given the TRS. That is the first mRNA. By repeating this phrase endlessly, you are hanging your hat on the idea that, even if a TRS were created, it would not find an mRNA to read. This is a GSW to the foot.

    Given that Allan has chosen to posit a system that starts with only one tRNA and that the sequences passed in were not designed as specifically coded for that tRNA, the vast majority of sequences that could be tried can be expected to produce literally nothing at all, since they will be overwhelmingly likely to encounter one of the many STOP codes (i.e. any codon at all other than the one that matches the single tRNA).

    Excellent. So you appear to recognize that the minimum number of different codons that your “functional(tm), operating, reproductively beneficial TRS” must recognize is ONE. Could you confirm this? The follow-up question would be “how long does a peptide have to be to confer a selectable advantage?”
    Once that is out of the way, we can move on to your “vast majority of sequences that could be tried”, which is wrong. If you answer Mike’s longstanding question about probabilities, you may see why. Perhaps.
    ETA: I see we have cross-posted. You cite your previous post, which highlights your problem with the math.

  28. One issue I think it important to recognise is that, for a peptide, ‘function’ is not synonymous with catalytic activity. A monotonous polypeptide can have function without actually doing anything chemical. Conversely, catalysis is not solely the preserve of folded globular polypeptides with many different subunits. You do appear to need a few different amino acids before you consistently get that kind of fold, due to the importance of a mixture of hydrophobic and hydrophilic residues, which is why I tend away from regarding the early function of what we appear to have come to call ‘the TRS’ as the production of protein enzymes.

    There is also the possibility of coevolution by complexing peptides with ribozymes, allowing more subtle folds of the latter without providing direct catalysis (as we see in the protein components of the ribosome).

  29. Allan Miller: Actually, I’m going to disagree with Mike here. Which does not make Eric or Steve’s case, but proof reading is a complex function that one would not expect a VERY early system to possess.

    Ah; but that makes many unwarranted assumptions about the first replicating molecules and the temperature of the environment in which they formed. The probabilities of copying errors are proportional to exp(- φ/kT), where φ is the binding energy.

    Extremophiles are another possible example of organisms that exist in environments at higher or lower temperature ranges. The binding energies of molecules making up early replicating systems could very well have been higher, having been formed at higher temperatures. And as these systems get shuttled into less energetic environments, i.e., lower temperatures, they would be quite robust compared to systems we find today. Such systems would have been templates for later systems that now exist in the temperature ranges we normally find today.
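
    For a feel of how sharply that factor moves, here is a small numeric illustration; the binding energies and temperatures below are placeholders, not measured values for any real molecule:

```python
import math

K_B = 8.617e-5                     # Boltzmann constant, eV per kelvin

def boltzmann_factor(phi_eV, temp_K):
    # Relative likelihood of disrupting a bond of energy phi at temperature T.
    return math.exp(-phi_eV / (K_B * temp_K))

for phi in (0.2, 0.4):             # hypothetical binding energies, eV
    for T in (300.0, 350.0):       # roughly room temperature vs. a hotter setting, K
        print("phi = %.1f eV  T = %.0f K  exp(-phi/kT) = %.1e"
              % (phi, T, boltzmann_factor(phi, T)))
```

    Doubling the binding energy, or lowering the temperature, suppresses the factor by several orders of magnitude, which is the sense in which more strongly bound early replicators would be more robust against copying errors.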

  30. In case it isn’t already evident, let me expand on the significance of exp(- φ/kT).

    Replication and copying errors are a part of many systems in chemistry and physics; not just living systems. Organic molecules and the molecules of life are molecular systems that exist close to the edge of melting; i.e., coming apart. We know a great deal about such systems; both inorganic and organic.

    The reason that the ID/creationists are so wrong about their mathematical calculations of probabilities is because they don’t understand the basics of chemistry and physics. They are wrong because they believe that life violates the second law of thermodynamics. They know they are not supposed to say that, but every misconception they have about molecular assemblies is a misconception about why the second law is required for life and molecular assemblies.

    All replicating systems – this includes crystals among the simpler and easiest to understand – require a set of conditions in their environments in order to replicate faithfully. Either the systems replicate atom-by-atom with enough time for atoms to find their lowest energy positions before being knocked askew by other atoms, or they condense and are annealed by fluctuations in the temperature of the environment in which they are forming.

    Annealing repairs copying errors and leaves the system intact. The notion of the warm little pond is not consistent with the early formation of systems that replicate. Those early environments have to have been very extreme and fluctuating environments, in which systems formed, were annealed, and then shuttled into less energetic environments. Those temperatures fluctuated regularly and over a much larger range than the environments into which they were ultimately shuttled.

    The temperature ranges of such environments can be estimated from the sizes of the binding energies of replicating systems.

    The second law of thermodynamics is necessary for the formation of condensed matter systems because energy has to be released and spread around in order for atoms to find their lowest energy configurations relative to the environmental conditions in which they exist.

    So NONE of the assertions about molecular assemblies and replication made by the ID/creationists have anything whatsoever to do with reality; they are not even close.

  31. ericB: I don’t have enough time for this, certainly not as much as everyone else combined (including some who are retired) and so I do prioritize. Some authors I skim or just skip. Even then, there remain comments I’d like to get back to and respond to, but that I haven’t (at least not yet). Sometimes I can catch up a bit, but at other times life has other priorities.

    If you presume to keep Gish Galloping and recycling ID/creationist “debating” points, you give up any credibility about your “lack of time.”

    You have been confronted repeatedly with your complete lack of understanding of the most basic concepts in biology, chemistry, and physics; yet you find time to type long screeds that keep the Gish Gallop going while at the same time ignoring any questions about your knowledge of basic science.

    If time is really so valuable to you, shouldn’t you at least consider learning some real science instead of constantly reciting your ID/creationist caricatures of science?

  32. I guess we have to be more specific about where we place ‘early’. I’m out of my depth somewhat, but the extremophile DNA polymerase usually used in PCR is optimal at 72 °C. I don’t know if there are temperature dependence curves for polymerisation fidelity at different temperatures or not; I’d be interested to know as it would support your viewpoint.

    But the kind of organism I was thinking of would be one that could use a little fidelity. There must be a reason for proofreading, and that reason seems logically to be that there was a stage when fidelity was less close to optimal.

    Your exp() formula caught my eye as, prompted by wondering about the quantitative relationship between genome size and mutation rate, I’d just been fiddling round with a formula for the proportion of unmutated genome copies of different lengths given different mutation rates. It appears to be 1/e^(rate * size). Even with a rate that gives 2 mutations per genome on average, 13.5% of genomes are unmutated, which is slightly counterintuitive (to a nonmathematician such as myself).

  33. Allan Miller,

    I don’t know if there are temperature dependence curves for polymerisation fidelity at different temperatures or not; I’d be interested to know as it would support your viewpoint.

    I am quite sure these are easy to find, at least the ones that aren’t proprietary in some company database.

    Recall that polymers are a big part of applied chemistry; and the manufacture of them depends very strongly on temperature and temperature variation.

    When making samples for laboratory study, there are oodles of “recipes” that involve temperature cycling.

    You can even find online the calculated molar entropies, enthalpies, and heat capacities of organic compounds. These tell us about the distribution of energy states in a molecule. There are entire books of these things.

    But the kind of organism I was thinking of would be one that could use a little fidelity. There must be a reason for proofreading, and that reason seems logically to be that there was a stage when fidelity was less close to optimal.

    RNA-like molecules may have been among the first replicators. There is no reason to be wedded to the notion of mRNA for replication. One of the earlier replicators could simply have been a long polymer that doubled back on itself and found complementary sites that formed an early "double helix" or something like it. Cycling it through thermal gradients could lead to unzipping it and having it replicate. You don’t need long strands of complementary matches.

    Your exp() formula caught my eye as, prompted by wondering about the quantitative relationship between genome size and mutation rate, …

    In any complex molecule there are literally hundreds of φs for the various binding sites and barriers to different molecular configurations. This is another reason that ID/creationist calculations of molecular assemblies are botched. Not all bonds and barrier heights are the same; there is a distribution.

    Also, the mechanical bending of molecules can change the barrier heights between different molecular configurations, thereby changing the probability of a transition to another state. That is part of what is behind catalysis. Thermal bombardments by other molecules can do the same thing.

  34. Allan Miller: It appears to be 1/e^(rate × size).

    I suspect that your formula involving size may be related to the average energy state of a molecule, and that has something to do with size.

    If there is a distribution of φ(i), the probability of the ith energy state is exp(- φ(i)/kT)/Z, where Z = ∑exp(-φ(i)/kT); often called the “partition function.”

    The average binding energy then is ∑ φ(i) exp(-φ(i)/kT)/Z, where the sum over i goes from the ground state up to the total number of energy states.

    That number of energy states depends, of course, on the number of binding sites as well as the number of barriers between various configurations of the molecule. I suspect this is where size comes into the picture.
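
    To make the weighting concrete, here is a minimal Python sketch; the φ(i) values are made up purely for illustration and are expressed in units of kT:

        import math

        kT = 1.0                            # thermal energy; phi values below are in units of kT
        phi = [0.0, 0.5, 1.2, 2.0, 3.5]     # hypothetical energy states phi(i), ground state first

        # Partition function Z = sum_i exp(-phi(i)/kT)
        weights = [math.exp(-e / kT) for e in phi]
        Z = sum(weights)

        # Probability of the i-th state, and the average energy sum_i phi(i)*exp(-phi(i)/kT)/Z
        probs = [w / Z for w in weights]
        avg_energy = sum(e * p for e, p in zip(phi, probs))

        print("Z =", round(Z, 3))
        print("average energy =", round(avg_energy, 3), "kT")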

    I don’t understand the (rate x size), however. That product should come out to be dimensionless since it is an exponent.

  35. If mutations are independent, then the number of mutations in each genome is Poisson distributed with mean (n × p), where n is the size and p is the per-nucleotide rate.
    Poisson P(X = 0) = exp(−n × p),
    hence your 1/exp(n × p).

    Prussian cavalry officers and all that.
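
    A small simulation bears this out; a minimal Python sketch (the genome size and per-nucleotide probability are arbitrary, chosen so that n × p = 2):

        import numpy as np

        n = 10_000          # genome size in nucleotides (arbitrary, for illustration)
        p = 2e-4            # per-nucleotide mutation probability, so n * p = 2
        trials = 200_000    # number of simulated genome copies

        rng = np.random.default_rng(0)
        # Each of the n sites mutates independently with probability p,
        # so the mutation count per copy is Binomial(n, p) ~ Poisson(n * p).
        mutations = rng.binomial(n, p, size=trials)

        print("simulated P(zero mutations):", (mutations == 0).mean())
        print("exp(-n * p)                :", np.exp(-n * p))   # ~0.135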

  36. DNA_Jock: If mutations are independent, then the number of mutations in each genome is Poisson distributed with mean (n × p), where n is the size and p is the per-nucleotide rate. Poisson P(X = 0) = exp(−n × p), hence your 1/exp(n × p). Prussian cavalry officers and all that.

    Something is missing here, because n × p would have the units of reciprocal time if p is a rate per nucleotide and n is the number of nucleotides. That doesn’t make sense. It should be dimensionless.

    Where does this formula come from?

    When you say rate, is that mutations per unit time or something like average number of mutations per nucleotide?

  37. DNA_Jock:

    Mike Elzinga,

    “Rate” as in mutation probability per nucleotide.

    N x p is thus the mean number of mutations per genome…

    Good! That makes sense.

    I’m used to thinking of rates in terms of time.

  38. ericB:

    The reason it matters is that accumulating random errors in copying effectively randomize the contents of the sequences where the lack of selection allows that to occur. The organism goes on reproducing, but those sequences cannot be prevented from becoming effectively random sequences that change randomly. That is a blind random walk.

    For the dozenth time, so? If it doesn’t harm reproductive fitness it doesn’t matter.

    The only hope for positive exit from that trajectory is for those changes to hit on some sequence that is selectable as reproductively beneficial in the current environment. That may be plausible if it has been a short random walk starting from an existing reproductively beneficial sequence. It may be discovering a beneficial variation in the near neighborhood of the original sequence.

    Amazing! After all this time a Creationist finally figures out how evolution actually works.

    But if it started from a random sequence or if it has been a long walk from a formerly beneficial sequence (such that the former sequence information has now been significantly altered by accumulating random changes), this makes the prospect of hitting on a reproductively beneficial sequence far more bleak

    You make less sense every time you post. If the sequence was formerly beneficial then changes that reduce its effectiveness aren’t neutral, they’re deleterious and subject to selection. If it was completely neutral before then the probabilities of a new mutation being beneficial don’t change one iota. Do you ever think about what you write before you offer it here?

  39. DNA_Jock,

    Yep – I got there ‘backwards’ – used Excel and multiplicative powers of per-base probabilities, then noticed that, when the mutation rate times the number of base pairs gave an integer, the probabilities were converging on the familiar percentages from 1/e^1, 1/e^2, etc.
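
    The same convergence can be reproduced outside Excel; a minimal Python sketch, holding rate × length fixed at 1 while the length grows:

        import math

        target = 1.0   # keep (per-base rate) * (number of bases) fixed at 1

        for n in (10, 100, 10_000, 1_000_000):
            p = target / n                  # per-base mutation probability
            exact = (1 - p) ** n            # product of per-base "no mutation" probabilities
            print(f"n = {n:>9}: (1 - p)^n = {exact:.6f}   1/e = {math.exp(-target):.6f}")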

  40. You make less sense every time you post. If the sequence was formerly beneficial then changes that reduce its effectiveness aren’t neutral, they’re deleterious and subject to selection. If it was completely neutral before then the probabilities of a new mutation being beneficial don’t change one iota.

    We can trace the history of many sequences over time by comparing related genomes, and in some cases, reconstruct intermediates. Perhaps Eric could make a more compelling case by citing some specific examples.

    ETA:

    I guess my point is that Eric is in the position of someone who has categorically proved that the moon must fall down. Or at least its orbit must decay over time, resulting in an eventual collision.

    And yet we observe the moon falling up. Its orbital distance gradually increases.

    A sane person, noticing that his prediction from first principles fails, would be interested in why it fails.

  41. petrushka: A sane person, noticing that his prediction from first principles fails, would be interested in why it fails.

    First rule of an ideology: Never, EVER, acknowledge the existence of counter examples in nature.

    Second rule of an ideology: Always accuse your enemies of refusing to look at counter examples that you fabricated from your ideology.

    Third rule of an ideology: Always accuse your enemies of ideological bias.

  42. petrushka: 1. Function isn’t that rare, by actual experiment.

    I expect you are thinking of experiments to look at the near neighborhood of cases where function is known to exist. I explicitly acknowledged that a short walk from a functional case could lead to discovery of functional variations.

    But if you really are trying to say that function is not rare in sequence space and configuration space in general, that is quite implausible. As even Dawkins admitted, there are far more ways for arrangements to not work than to work. That was also a main theme in the article, The Levinthal paradox of the interactome. For both, see The Triplet-Reading System and the Exponential Space of Interacting Parts.

    petrushka: 2. Duplications.

    Yes, and …? I’ve already discussed duplications before, including in Observations #6 and #7.

    petrushka: 3. If genetic meltdown happens it would happen first in organisms that reproduce rapidly, especially those that have relatively little non-coding DNA. It doesn’t happen. Bacteria are essentially immortal. They copy themselves without dying. Every living bacterium is the mutated version of its 4-billion-year-old self.

    Why are you talking about genetic meltdown? That isn’t any part of what I’ve been discussing.

    Even the article I pointed to with regard to near-neutral mutations was to point out the correctness of DNA_Jock’s earlier statement about very slightly deleterious mutations whose signal is lost in the noise. In that case, they cannot be weeded out by selection. As I mentioned in response to thorton, the key point is simply that they accumulate randomly and so work to randomize the content of those sequences. (But I am going to guess you hadn’t yet seen that response before you posted.) For the purposes of these considerations, they are effectively neutral. Nothing in what I’ve been saying is making a case for genetic meltdown.

  43. Since evolution is always working with nearby space, your objection is moot.

    You have avoided about twenty requests to answer the “so what” question. If non-selectable sequences drift, so what?

    Duplicate genes can drift, and by actual experiment, quickly “find” new functionality.

  44. thorton: For the dozenth time, so? If it doesn’t harm reproductive fitness it doesn’t matter.

    It doesn’t matter to that organism’s ability to reproduce.

    The issue at hand is explaining the origin of complex molecular machines and systems, such as the TRS. The plausibility of that depends upon the capabilities, limitations, and tendencies of the means available to produce such systems. For that purpose, it does indeed matter that the trajectory of random changes is to randomize the content of any sequences that are not protected by natural selection.

    thorton: Amazing! After all this time a Creationist finally figures out how evolution actually works.

    I’ve understood this all along. What’s interesting is your apparent assumption that this is Amazing.

    What matters is how much can be plausibly attributed to what this can and cannot reasonably accomplish. To assume uncritically that “evolution can do anything” would be to take an attitude of blind faith. I don’t think you want to appeal to blind faith in evolution. Therefore, it is necessary to consider in a critical manner what are the limitations and what is the reasonable reach for such processes.

    thorton: You make less sense every time you post. If the sequence was formerly beneficial then changes that reduce its effectiveness aren’t neutral, they’re deleterious and subject to selection. … Do you ever think about what you write before you offer it here?

    Your assumption is incorrect and you should know why. If you had thought about it a bit first, I think you would have realized your mistake. Wasn’t it you who wrote:

    thorton: Once again you flat out refuse to understand that any genetic mutation can only be judged deleterious, neutral, or beneficial with respect to its effects in the current environment. You keep making the same beginner’s mistake over and over and over and over.

    As I recall, you made that statement in response to a specific quote of mine in which I was explicitly pointing out how a sequence might not be beneficial in its current environment, even if it had been beneficial in the past in a different environment or could be beneficial in the future in a different environment. You quoted my statement where I was explicitly saying X, and from this portrayed me as taking the position “not X” — the exact opposite.

    In any case, when you apply your own earlier statement, you can see that it does not follow that “If the sequence was formerly beneficial then changes that reduce its effectiveness aren’t neutral, they’re deleterious and subject to selection.”

    Eric, in the Lenski experiment, two nearly neutral mutations accumulated before enabling a third useful mutation.

    Now I realize this is not the same level of invention as the interpreter of the genetic code, but it does illustrate that neutral mutations can enable IC functions.

    ETA:

    The scenario that you think is unlikely has in fact happened and has been observed. A small example, but it happened in a tiny population in a geologic microsecond.

    Now give us your scenario, so we can do some probability comparison.

  46. ericB:

    The issue at hand is explaining the origin of complex molecular machines and systems, such as the TRS. The plausibility of that depends upon the capabilities, limitations, and tendencies of the means available to produce such systems. For that purpose, it does indeed matter that the trajectory of random changes is to randomize the content of any sequences that are not protected by natural selection.

    I thought you understood that complex biological systems can evolve from simpler precursors, and that the precursors don’t need to be as effective or even have the same function as the extant system. Looks like I was wrong – you’re still clueless on the whole topic.

    What matters is how much can be plausibly attributed to what this can and cannot reasonably accomplish. To assume uncritically that “evolution can do anything” would be to take an attitude of blind faith.

    No one in science says or thinks “evolution can do anything”. What science does say is given all the evidence we have of evolutionary processes and how they work we’ve yet to find any barriers that would prevent evolution from producing the functions we see.

    I don’t think you want to appeal to blind faith in evolution. Therefore, it is necessary to consider in a critical manner what are the limitations and what is the reasonable reach for such processes.

    If you present anything reasonable we’ll consider it. All you’ve managed so far are the same old Creationist misunderstandings and arguments from ignorance based personal incredulity.

    (snip the rest of the repetitive blithering)

    You can lead a Creationist to the evidence but you can’t make him think.

  47. DNA_Jock: This “one accidental success to fix the basic system” occurs moments after a triplet reading system is created. It is the fortuitous discovery of an RNA sequence that ‘codes for’ a useful peptide, given the TRS.

    Do you have some actual basis for your apparent suggestion that a reproductively useful peptide would be produced “moments after a triplet reading system is created”? If you’ve got something, go ahead and set it out.

    DNA_Jock: Thus, your own source for the degrading effect of VSDMs completely and utterly refutes your point.

    About slightly beneficial mutations vs. slightly deleterious mutations, etc., you seem to think this is somehow significant to the points I’ve been making. Some others have made an incorrect assumption and have assumed I was trying to argue for a cumulative genetic collapse or the failure of the organism or something like that. That isn’t what you are supposing, is it?

    My point concerning “the degrading effect” of accumulating random changes in unprotected sequences is that they continually work toward randomizing the content of those sequences, just as I’ve pointed out in the Observations, and elsewhere. I don’t see that you’ve pointed out anything that refutes that. If you think one of the Observations is false, please indicate which one and why.

  48. My point concerning “the degrading effect” of accumulating random changes in unprotected sequences is that they continually work toward randomizing the content of those sequences…

    That might be true in some sequences, but nearby sequences can have completely new functionality.
