I thought I would give a comment by a poster with the handle “ericB” a little more publicity as it was buried deep in an old thread where it was unlikely to be seen by passing “materialists / evolutionists”.
§§§§§§§§§§§§§§§§§§§§
Calling all evolutionists / materialists! Your help is needed! Alan Fox has not been able to answer a particular challenge, but perhaps you know an answer.
The issue is simple and the bar is purposely set low. The question is whether there exists one or more coherent scenarios for the creation of a translation system by unguided chemicals.
The translation system in cells indicates intelligent design. I would submit that, regardless of how many billions of years one waited, it is not reasonable to expect that unguided chemicals would ever construct a system for translating symbolic information into functional proteins based on stored recipes and a coding convention.
[I realize people have thoughts about what happened earlier (e.g. that might not need proteins, for example) and what happened later (e.g. when a functioning cell provides the full benefits of true Darwinian evolution). For the purposes here, attention is focused specifically on the transition from a universe without symbolic translation to construct proteins to the origin of such a system. Whatever happened earlier or later, sooner or later this bridge would have to be crossed on any path proposed to lead to the cells we see now.]
One of the key considerations leading to this conclusion is that a translation system depends upon multiple components, all of which are needed in order to function.
+ Decoding
At the end, one needs the machinery to implement and apply the code to decode encoded symbolic information into its functional form. (In the cell, this is now the ribosome and supporting machinery and processes, but the first instance need not be identical to the current version.) Without this component, there is no expression of the functional form of what the symbolic information represents. The system as a whole would be useless as a translation system without this. Natural selection could not select for the advantages of beneficial expressed proteins, if the system cannot yet produce any. A DVD without any player might make a spiffy shiny disk, but it would be useless as a carrier of information.
+ Translatable Information Bearing Medium
There must be a medium that is both suitable for holding encoded information and compatible with the mechanism for decoding. Every decoding device imposes limitations and requirements. A video would be useless to a DVD player if it were stored on a USB thumb drive the player could not accept rather than on a suitable disc. In the cells we see, this role is filled by DNA and ultimately mRNA.
+ Meaningful Information Encoded According to the Same Coding Convention
One obviously needs to have encoded information to decode. Without that, a decoding mechanism is useless for its translation system purpose. If you had blank DVDs or DVDs with randomly encoded gibberish or even DVDs with great high definition movies in the wrong format, the DVD player would not be able to produce meaningful results, and so would have no evolutionary benefit tied to its hypothetical but non-functioning translation abilities. In the cell, this information holds the recipes for functional proteins following the same encoding convention implemented by the ribosome and associated machinery.
+ Encoding Mechanisms
This is perhaps the least obvious component, since the cell does not contain any ability to create a new store of encoded protein recipes from scratch. Indeed, this absence is part of the motivating reasons for the central dogma of molecular biology. Nevertheless, even if this capability has disappeared from view, there would have to be an origin and a source for the meaningful information encoded according to the same coding convention as is used by the decoding component.
(For the moment, I will just note in passing that the idea of starting out with random gibberish and running the system until meaningful recipes are stumbled upon by accident is not a viable proposal.)
So there has to be some source capable of encoding, and this source must use the same coding convention as the decoding component. To have a working, beneficial DVD player, there must also be a way to make a usable DVD.
+ Meaningful Functional Source Material to Represent
It would do absolutely no good to have the entire system in place, if there did not also exist in some form or other a beneficial “something” to represent with all this symbolic capability. If you want to see a movie as output, there needs to be a movie that can be encoded as input. If you want functional proteins as output, there needs to be access to information about proper amino acid sequences for functional proteins that can serve as input. Otherwise, GIGO. Garbage In, Garbage Out. If there is no knowledge of what constitutes a sequence for a functional protein, then the result produced at the end of the line would not be a functional protein.
+ Some Other Way To Make What You Want The System To Produce
If we supposed that the first movie to be encoded onto a DVD came from being played on a DVD player, we would clearly be lost in circular thinking, which does not work as an explanation for origins. Likewise, if the only way to produce functional proteins is to get them by translating encoded protein recipes, that reveals an obvious problem for explaining the origin of that encoded information about functional proteins. How can blind Nature make a system for producing proteins, if there have never yet been any functional proteins in the universe? On the other hand, how does blind Nature discover and use functional proteins without having such a system to make them?
The core problem is that no single part of this system is useful as a translation system component if you don’t have the other parts of the system. There is nowhere for a blind process to start by accident that would be selectable toward building a translation system.
The final killer blow is that chemicals don’t care about this “problem” at all. Chemicals can fully fulfill all the laws of chemistry and physics using lifeless arrangements of matter and energy. Chemicals are not dissatisfied and have no unmet goals. A rock is “content” to be a rock. Likewise for lifeless tars.
The biology of cells needs chemistry, encoded information, and translation, but chemicals do not need encoded information or biology. They aren’t trying to become alive and literally could not care less about building an encoded information translation system.
§§§§§§§§§§§§§§§§§§§§§
I’m hoping ericB will find time to respond to any comments his challenge might elicit.
Isn’t this…
…exactly what Dawkins described? And what I’ve described as a real position? And also the position you seemed to be excluding when you said, “But what we did NOT have is a Natural Designer.”
I’m sorry, but I don’t understand how to reconcile these statements. You seemed to disagree, and then you seemed to say exactly what I quoted Dawkins to describe.
I do claim that the cell’s translation system is Artificial and therefore teleological. However, if it did originate from natural processes of law+chance, then my claim is false and the cell’s translation system is not teleological.
I agree with you that the term “guidance” suggests teleology, which is why when I say “unguided” I mean to include any combination of natural processes of law + chance (including natural systems resulting from law+chance), while excluding choice/teleology.
I would consider the tornado to be an example of a natural process, not one requiring intelligent agency or choice. Molecules don’t act in isolation. They function in systems of interaction, just as you allude to.
The core distinction I am making is between what such natural systems can be expected to do and the nonempty set of those things that are detectably Artificial such that it is evident that someone has intervened — someone capable of intention and choice, such as working toward a goal for a future benefit.
What you claim and $4.50 will get you a venti latte at Starbucks.
Come back when you can demonstrate what you claim.
As an example of guidance, consider a river. It is a guided flow of water. The river banks provide the guidance. The river banks themselves were carved out by the river or perhaps a creek that later grew into the river. So, in some sense, the river (the guided water flow) has created its own guidance system.
My method of finding who provided the guidance is “follow the money”. Or, investigate who benefits. In the case of biological organisms, the main beneficiaries are those organisms themselves. So I see this as pointing to the organism, or their predecessors, providing the guidance needed. So I see it as something akin to how the river created its own guidance system. In short, biological organisms look evolved; they do not look like artifacts.
We already know he can’t explain the temperature dependence of living systems; for example hypothermia and hyperthermia.
Temperature dependence rules out vitalism of any sort; but there is no way he can know that because, despite the flurry of words, he doesn’t understand atomic and molecular interactions in soft-matter systems maintained at a constant temperature.
I think we have demonstrated pretty conclusively that he can’t even grasp that little high school level calculation. Fifty years of conscious ID/creationist avoidance of science pretty much tells the whole story behind ID/creationism.
Alan Fox,
Tell me Alan, why is every polypeptide not in the protein database?
Thanks for the response. I will have more to add later, hopefully tomorrow. For now, I notice that some of your exasperation is coming simply from miscommunications between us, such as just using different meanings for a word. Here is one example that popped up repeatedly — the word “enzyme”.
One can find multiple definitions for “enzyme” that assume that the only kind of an enzyme there can possibly be is a protein, and so the word “protein” occurs explicitly in the definition. Of course, at one time that was the state of knowledge. We did not know about enzymes made of RNA, i.e. ribozymes.
On the other hand, other sources define the term “enzyme” by what it does, rather than building in assumptions about whether it is made of protein or RNA.
Some of these will explicitly indicate that most but not all enzymes are proteins.
or
You’ll notice I had said
When I talked about enzymes, my statements need to be understood in terms of this broader, inclusive meaning of the term. I wasn’t suggesting, for example, that you needed to have proteins to build the ribosome and related translation machinery. I was saying that appropriate enzymes (i.e. ribozymes in this case) are needed (and need to be justified).
I may have time for one other quick post tonight. If so, my next will clarify what I think may have been another misunderstanding/miscommunication.
p.s. I perceive that you were using the other meaning of enzyme that was specific to proteins, which of course would lead to exasperating misunderstandings when you thought that was the meaning I was using.
Friday night, while looking for something else entirely, I stumbled upon an older thread that was surprisingly relevant.
Andre asks an excellent question regarding DNA as a part of an in-cell irreducibly complex communication system
here at UD
Even if one does not read through the thread, there is a diagram showing the logical structure of a communication system that is worth examining. BTW, the caption reads:
I had not seen that thread at all until just two days ago, but that diagram may clear up a misunderstanding.
When you mention you “… have placed the origin prior to…”, I get the sense you were thinking I meant “upstream” in a sense of ordering of historical origin. That’s not what I meant.
When I referred to “upstream”, I was talking about the flow of information within the translation system. It is also expressed in the idea of the central dogma of molecular biology.
If there is a “Source” and an “Encoder”, the store of recipes (or “Channel” in the diagram in terms of communication) can be downstream from those entities in terms of information flow. But if they are considered not to exist (as the central dogma describes, or as I believe Allan is stating), then the store (or “Channel”) is strictly upstream from the rest of the system.
That raises questions about how consequences downstream do or do not influence the state upstream.
I wonder if we’ll ever see an Intelligent Design Creationist with enough basic scientific understanding to grasp that argument by analogy just doesn’t cut it as evidence in the scientific world?
Looks like today’s not the day.
As best I can tell, the only truly difficult bit of a wholly naturalistic, wholly unguided Origin Of Life is the bit about imperfect self-reproduction. Once you’ve got that up & running, it’s all over; from that point on, there simply isn’t any need for intelligent intervention, end of discussion.
The key insight: For any given self-reproducing whatzit, the total number of whatzit-variations which can be achieved in one generation’s worth of change is finite.
And since we’re talking about a self-reproducing whatzit, the total population of whatzits will, by definition, rise along an exponential curve. Which means that the population of whatzits will exhaust all possible single-generation-worth-of-change variants of the ur-whatzit—and it will exhaust all those variations—all those mutations—in a hell of a lot less time than you might expect.
And apart from the ‘exhaustive search’ quality of exponential self-reproduction, there’s also the ‘massively parallel’ thing, too. If the probability that one individual Whatzit W carries a particular Mutation X is P, the probability that Whatzit W doesn’t carry Mutation X is (1-P). Given a population of N whatzits, the probability that none of those N whatzits will carry that Mutation X is (1-P)^N… and the probability that at least one of those N whatzits will carry Mutation X is, therefore, 1-((1-P)^N).
So you can calculate how large of a population of whatzits must exist before there’s a 50% chance that Mutation X occurs in at least one of the whatzits in that population.
If Mutation X is a one-in-a-million longshot, the whatzit-population has to be just about 693,000 in order for there to be a 50% chance of Mutation X occurring in the population. For comparison, 2^19 is 524,288; whatever the ‘doubling time’ for the whatzit’s population happens to be, it only takes 20 ‘doubling times’ for the whatzit’s population to rise to the point where there’s more than a 50-50 chance of a one-in-a-million mutation showing up in that population.
If Mutation X is a one-in-a-billion longshot, the whatzit-population has to be just about 693 million in order for there to be a 50% chance of Mutation X occurring in the population. For comparison, 2^29 is 536,870,912; whatever the ‘doubling time’ for the whatzit’s population happens to be, it only takes 30 ‘doubling times’ for the whatzit’s population to rise to the point where there’s more than a 50-50 chance of a one-in-a-billion mutation to show up in that population.
The corresponding calculations for more-improbable mutations (trillion-to-one, quadrillion-to-one, etc) are left as an exercise for the reader…
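The calculation sketched above is easy to reproduce. Here is a minimal script (the function names are mine, not anything from the thread) that solves 1 − (1−P)^N ≥ 0.5 for N and then asks how many population doublings are needed to reach that size:

```python
import math

def population_for_half_chance(p):
    """Smallest population N such that the chance of at least one
    individual carrying Mutation X (per-individual probability p)
    reaches 50%.  From 1 - (1 - p)**N >= 0.5 we get
    N >= ln(0.5) / ln(1 - p)."""
    return math.ceil(math.log(0.5) / math.log(1.0 - p))

def doublings_to_reach(n):
    """Number of population doublings needed to grow from 1 to at least n."""
    return math.ceil(math.log2(n))

for p in (1e-6, 1e-9):
    n = population_for_half_chance(p)
    print(f"p = {p}: population ~{n}, doublings needed: {doublings_to_reach(n)}")
```

Running it confirms the figures in the comment: roughly 693 thousand whatzits (20 doublings) for a one-in-a-million mutation, and roughly 693 million (30 doublings) for a one-in-a-billion one.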
And that is why I say that once you’ve got imperfect self-reproduction up & running, intelligent intervention is superfluous.
Yes, it is just that: a self-replicator. Unfortunately, a self-replicator with those characteristics does not exist. The best candidate OOL researchers have is RNA. But RNA has a couple of difficulties. The first is its formation in the required environment: any natural synthetic route you can imagine will yield such a mixture of isomers of the components that the formation of a functional RNA molecule of the kind tested to be capable of self-replication would be chemically impossible.
The second is that to keep RNA self-replicating at a rate at least equal to its rate of degradation, you need a flow of nucleotides that is impossible to imagine.
Then you have the problem of relating the self-replicating RNA to proteins, because actual ribozymes do not use peptides as cofactors.
You can imagine a natural appearance of the code; the point is whether that is realistic.
Let’s recap, mung.
I said;
You posted:
and then quoted a perfectly acceptable definition of a polypeptide thus:
“A polypeptide is a single linear polymer chain of amino acids bonded together by peptide bonds between the carboxyl and amino groups of adjacent amino acid residues.”
Absolutely fine except that
A protein is a single linear polymer chain of amino acids bonded together by peptide bonds between the carboxyl and amino groups of adjacent amino acid residues
is equally correct for a protein. Protein and polypeptide are synonyms. A polypeptide is a protein; a protein is a polypeptide. The apparent confusion appears to have been yours, not mine.
And, having not answered my previous question, you now ask:
Presumably you are referring to the Protein Data Bank. I wonder if my previous answer now makes your question redundant. Remember “protein” and “polypeptide” are synonyms.
Otherwise it seems rather a daft question. Maybe you’ll clarify as I am disinclined to try and guess what your intended meaning is.
Enzymes are catalysts. Catalysts speed up the rate of chemical reactions without themselves being changed. The process often involves a mechanism that brings the molecules of the reaction into proximity. Typically, enzymes do this by binding the substrate, where binding may typically involve a chemical attraction such as hydrogen bonding.
If you want to discuss specific scientific concepts, you must either use a term as it is normally accepted and understood (in the particular field of study) or define how you are using it. The best thing is to create and define another name for whatever concept you wish to discuss. Communication requires effort from both parties but if you want to step outside well-understood and accepted conventions, the onus is on you to make yourself clear.
It seems to me that I have not been sufficiently clear in communicating the nature of my objections to your proposal. I think I’m beginning to see where you are understanding my statements differently than I intended, and also why that may be. So I’m sorry that I was not sufficiently clear.
A big portion of the disconnect concerns the question of whether you have or have not provided a justification for each step in your proposal. You can see that I am saying you are missing this, whereas you are equally sincere in your belief that you have provided it (which can feel exasperating), particularly through your appeals to natural selection and the competitive advantages of, for example, the polypeptides produced by the early ribosome.
Since I am evidently not persuaded by this, you infer that I either must not understand natural selection or perhaps am not allowing for the selective advantage of the generated polypeptides. Here are a couple of statements in your own words to that effect.
Nevertheless, I do understand natural selection, and the problem or disconnect between what you and I are saying does not come primarily from whether the polypeptides generated by an early ribosome could confer selective advantage.
As I have considered your post, I would suggest that the primary reason we have been talking past one another to some extent is because we have been operating with two different meanings to the idea of a “step.”
I would suggest that you have been operating largely in terms of important conceptual steps or stages in development. Toward the end of your post, you provide 8 steps in a progressive sequence. I have no doubt that you consider it reasonable that each of these steps provides selective advantages over its predecessor. In your perspective, that provides the justification for each of these conceptual steps. Here again is your own wording.
Obviously, since you are providing all the “steps” and the immediate selective advantage that comes from achieving each one in the list, it is clear you are operating in terms of the conceptual steps, such as in your list, which is quite short (only about 8 such steps). These are the major, distinctive innovations or breakthroughs or advancements in your proposed scenario.
When I have written about how you are not justifying the necessary steps, I am not referring to large conceptual steps. Even if Superman can leap tall buildings in a single bound, I am confident you and everyone else realize that evolution cannot take each of these conceptual steps in a single leap.
To go from one conceptual stage to another would require a great many individual steps of change over many generations. No one is expected to document these individually. The nature of the challenge is to justify a proposal that blind chemical processes would successfully navigate through to the next stage. Relevant considerations include but are not limited to the size of the search space and whether or not natural selection would guide the way through or if the variations are left adrift to chance and a blind search.
So, for example, it does no good at all to merely offer that there could be advantages to generating polypeptides from random RNA sequences (once one has reached the conceptual step of translating a noncoded RNA sequence into some polypeptide). Whether that is true or not is irrelevant if it is unreasonable to expect to reach that stage within many billions of years of blind search in the “hope” of building the necessary mechanisms to implement such a facility.
If the selective advantage offered comes from the polypeptides that would be produced, then
Suppose that among the RNA entities that have no translation ability of any kind (no early ribosome yet), one candidate entity has 2% of the new structures needed to build its first polypeptide sequence from a random RNA sequence, while another has 4%. That would not mean that the one with 4% of what it would need has double the selective advantage of the one that has only 2%. For both, the current output of translated polypeptides is zero — no advantage. Ditto for 48% ready or 96% ready.
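The 2%-versus-4% point can be put in code. This is a toy, all-or-nothing model that I am supplying purely to illustrate the claim being made here; it is not from the thread, and whether real precursor systems are in fact all-or-nothing is precisely what is in dispute between the two sides:

```python
def translation_output(fraction_built):
    """Toy model of the claim: a translation system produces nothing
    until every required component is in place, so any fraction of the
    machinery short of 100% yields zero selectable output.
    fraction_built ranges from 0.0 (nothing) to 1.0 (complete)."""
    return 1.0 if fraction_built >= 1.0 else 0.0  # no partial credit

# Under this assumed model, 2%, 4%, 48% and 96% completeness
# all yield the same selectable output: zero.
for f in (0.02, 0.04, 0.48, 0.96, 1.0):
    print(f, translation_output(f))
```

The rival position in this thread is, in effect, that `translation_output` is not a step function: each intermediate stage is claimed to have some selectable benefit of its own.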
This is the point I was trying to make about a key difference between natural processes and the choices of intelligent agents. The natural processes (including natural selection) cannot choose according to future benefit, whereas an intelligent agent can.
I need to stop for tonight, so I don’t have the time to grab the many individual quotes. But if one looks through your proposal, all the advantages are derived after one has achieved the next level of functionality, such as the ability to make the polypeptide sequences, given a random RNA sequence. Even then the advantage may not be immediate. For example:
Final thought for tonight is that one needs to consider not just how one stage (i.e. conceptual step) compares to the previous one, but how natural selection and other considerations will weigh upon the process of change within the intervals.
For example, all other things being equal, why should we think that an RNA entity that is larger and bulkier (but with no advantage yet from its not yet functional future-ribosome) would compete effectively against smaller, more efficient models with all the same current capability, but without the excess bulk to impede replication? The future benefit of what could be made out of that extra bulk counts for nothing. If natural selection is operating, the present handicap of having the extra bulk is something that natural selection could use to select against the bulky, less efficient candidates.
Take a look at Life’s Ratchet, as well as this video.
I assume you think the definition you just gave for enzyme as a catalyst is acceptable. Notice that it is the same as the concept I quoted and that I was using. Compare your definition, for example, with the excerpt I gave from Concise Encyclopedia (provided by Merriam-Webster), this time with my emphasis added:
Your own definition did not specify that the catalyst had to be a protein and could not be made of RNA. Rather, it specified the nature of what it does and how it does it — which is exactly my point and exactly the way I was using the term.
So, is there a problem? Or did you just want to restate my point in your own words as a way of agreeing with me that enzymes are catalysts, etc. (whatever they are made of)?
Since I was operating within well-understood and accepted conventions (which you then restated), I take it that my use was just fine.
That said, I recognize and acknowledged that many older definitions of enzyme show only awareness of protein catalysts and no awareness of RNA catalysts. It is one of those areas where new discoveries (e.g. ribozymes) enlarge the set of entities that inhabit a concept (e.g. “Enzymes are catalysts. Catalysts speed up the rate of chemical reactions without themselves being changed. …”).
When Allan assumed that by “enzyme” I meant “protein”, that was not particularly surprising. Now that we’ve spotted the different usages, Allan can be relieved that I was not suggesting what he thought I was suggesting.
All along I’ve been operating with the same standard definition that Alan Fox just gave, and will likely continue to do so.
Speaking of definitions (emphasis added),
Thus, according to Alan Fox’s desire to use terms as they are “normally accepted and understood (in the particular field of study)”, polypeptide and protein are not synonyms. At best (using the more inclusive meaning of polypeptide), proteins are a subset of polypeptides.
[As the passage above indicates, however, it is common to use “polypeptide” for those cases that lack a defined conformation, i.e. cases where one would not use “protein”.]
Proteins can be considered a subset of polypeptides, which is why Alan was thrown off: the definition of a polypeptide also applies to every member of that subset (e.g. to proteins).
The quoted definitions only show that proteins are a subset of polypeptides, not that the terms have ever been synonyms according to any “normally accepted and understood” definitions of the terms. In particular, the suggestion that any “polypeptide is a protein” is contrary to “normally accepted and understood” definitions of the terms.
While there are many aspects to Allan Miller’s proposal that could be questioned, I would request that we focus attention for the present on his “starting point” of a “simple peptide bonding ribozyme.”
I would also grant that in baseball it would be advantageous and “perfectly plausibly beneficial to a” team if they could choose third base as the starting point for all their runners. Just skip over that messy business about hitting the ball and getting safely through first and second base.
Showing that a destination would be “beneficial” — if it is reached — is not the same thing as showing it is reasonable to expect to reach the destination, given the known obstacles.
So let’s look at some of the obstacles that I presented and described as part of the challenge. I was quite explicit and clear about the need for any proposal to address the obstacles. Choosing a “starting point” that bypasses the obstacles is not a legitimate way of circumventing them.
For example: Obstacle #3: The Permanent Inability to Consider Future Benefit
At every point where Allan appeals to the beneficial effect of natural selection as a justification for the mechanisms that are needed, it is in light of the assumption that one has already arrived at the completed, working mechanism needed for that conceptual step. It might be working imperfectly, but it is built and working. During the preceding process through which that mechanism might be built, all of that is obviously a future benefit. It has no ability to guide or encourage any process to actually build the mechanism where no such mechanism existed before.
What is worse, if one considers the actual effect of natural selection prior to achieving a functional new mechanism, the tendency would be to eliminate any excess that detracts from efficiency of replication and instead to promote those entities that are at that time more efficient at replication.
So at present, it looks like achieving the “starting point” (i.e. a simple ribosome capable of receiving a random non-coded RNA sequence as guidance, in any sense, for producing a random polypeptide) is at best the result of an unbelievably lucky random-walk blind search through innumerable configurations of RNA. The more realistic picture is that natural selection would actively discourage any progress toward building any such mechanism, since only present consequences matter, not future benefits.
Or consider Obstacle #2 and the need for ribozymes (as well as structural considerations) that would be required for building such a mechanism and making it function. Allan has acknowledged that ribozymes don’t work as well as proteins. Where one protein might suffice for a given job (e.g. a supporting role in the current translation mechanisms), it may be necessary to have multiple ribozymes working in concert.
In order to function, every ribozyme depends on having a suitable sequence. Again, is this simply a matter of an unbelievably lucky random walk, blind search through sequence space to find the ribozymes that are needed? If one ribozyme needed for the working system happened to be found, but others that it must collaborate with were not yet found, how does nature know to preserve it for its future use?
The overarching obstacle for production of suitably sequenced ribozymes comes from Obstacle #1. The chemistry is neutral with regard to the sequencing of the nucleotides, and it doesn’t care at all about making any progress toward any goal. The search is blind and sequence space is huge for ribozyme-sized sequences. Even a “simple” ribosome cannot merely be assumed to be either small or simple in order to bypass this obstacle by mere assumption without warrant.
(There is also another issue regarding Obstacle #4. While I would prefer to focus mostly on the other issues, I will note in passing for now that there is the matter of the unit of replication. The mechanism of duplication for RNA (or DNA) only works on individual strands that are not bound to anything else. So this raises issues for any assumption of the complete reproduction of a cell-like organism as a whole and for how natural selection would operate. The cell can depend upon translation, which by definition does not exist yet for this challenge. Also, because it has translation, it can use proteins for reproducible structural components, which are not available yet. That said, the other issues above are the ones I would most like to hear responses for with regard to the first “simple” ribosome.)
ericB,
You have conflated Allan’s
“simple peptide bonding ribozyme”
with a later step, the
“a simple ribosome capable of receiving a random non-coded RNA sequence as guidance”
No wonder you are having difficulty understanding the pathway he has laid out. Please re-read his description more carefully.
They are all made the same way. The product of the translation system is a chain of peptide bonded amino acids. Call it what you like. The origin of the translation system is tied to a peptide bonded chain of some length which had utility for the cell. Whether we would call it a protein, peptide or polypeptide is unimportant.
ericB,
As I have noted, some very small ribozymes have been identified, such as a 5-base ribozyme that will aminoacylate ATP. The search space for that ribozyme contains 1024 members. It suits your purposes to make ribozymes (and proteins) as big as possible, but you have no ‘warrant’ for that, beyond modern exemplars. It’s a tradition going back to Fred Hoyle and beyond to take the modern large size of these macromolecules and insist that anyone believing they can be smaller has no warrant to do so. It is akin to saying that computers can only work if they have more than 4K of RAM.
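The search-space figure here is simple arithmetic: a sequence of length L over four bases has 4^L variants, so a 5-base ribozyme sits in a space of 4^5 = 1024 sequences. A quick check (the longer lengths are my own illustrative additions, showing how fast the space grows with the size one assumes):

```python
def rna_sequence_space(length):
    """Number of distinct RNA sequences of the given length
    (four possible bases at each position)."""
    return 4 ** length

# The 5-base aminoacylating ribozyme mentioned above: 1024 sequences.
print(rna_sequence_space(5))

# Illustrative longer lengths, to show how the assumed size drives the argument:
for L in (20, 50, 100):
    print(L, rna_sequence_space(L))
```

This is why the assumed minimum size of the first functional ribozymes does so much work in both directions of the argument: at L = 5 the space is exhaustively searchable, while at modern-protein lengths it is astronomically large.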
Further, your objections to the role of natural selection are every bit as ‘hand-wavy’ as you accuse mine of being in promoting its role. Your initial proposal was looking for a ‘chemical’ rationale for translation. I insisted the rationale had to lie within Natural Selection. Now you claim to know all about it. It is only ever eliminative. You reckon. I propose a selective advantage to peptide bonding, and you invent some cost to ‘excess’ that will (you are sure) eliminate the peptide-bonders in short order. Yet, if these peptide bonds are providing some function that their rivals lack, all the benefit has to do is outweigh any cost. Since I don’t know anything about the organisms themselves, inferring only from modern traces, I know neither the first peptide product nor the selective milieu. But you are only guessing that it was insufficient for the competition (and therefore falls within the capacity of aliens and Gods, but not chemistry).
And, again to remind you, you conceded the prior existence of a functional replicating nucleic acid organism. This necessarily implies a set of functional ribozymes – the full set required to gather energy and raw materials and actually perform the task of replicating, without translation. It appears that you are now reneging on that concession – when you allowed the pre-existence of DNA and RNA, they weren’t allowed to actually do anything! Nonbiological DNA/RNA? My initial comment that it was ‘refreshing’ that such sidetracks were to be avoided now appears premature. The origin of DNA/RNA is a different question. It is not the origin of translation. It is the origin of replication (in my view). My argument is that translation can be discovered by a replicating nucleic acid organism, and become a complex code without having to start off a complex code. I should not have to explain the origin of nucleic acids or replicating nucleic acid organisms to do so (though I have some ideas…).
ericB,
I’ve been careful to offer an immediate benefit to each step, and repeatedly find myself denying appeals to future benefit. So this idea that I propose an organism “with no advantage yet from its not yet functional future-ribosome” does not come directly from my words. If an organism has a small suite of ribozymes which generate peptide bonds, it has the basic components of the system: aminoacylation, transacylation and peptidyl transferase (the latter alone is the functional ‘future-ribosome’; the first 2 generate its substrates). Of course, I don’t know for sure what any of those steps might have accomplished alone, but we’d have to have somewhat more substantial grounds for insisting that any one step cannot possibly be beneficial without the second and third. For example, attachment of amino acid to tRNA could be a precursor to transportation, and initially nothing to do with making protein. But – as I have said – it only takes sufficient peptide-bonded product to offset the costs, and selection will favour it. One may do. Two, three, four, etc, and RNA-only organisms start to look distinctly pasty.
When I did biochemistry, enzymes and ribozymes were regarded as two different things, one protein and one RNA. I still prefer that distinction, since there appears to be no simple word for ‘protein enzymes’ otherwise. But I see that more recent usage may be going against me.
Allan Miller:
Not even all proteins are made the same way, much less all polypeptides.
But even if this were true (which it isn’t), so what?
The last refuge of someone who can’t admit that a fellow “skeptic” has made a simple yet avoidable error.
May as well call all chemists biochemists, since it’s all chemistry.
To Alan Fox:
I’m still waiting, Alan.
Here is a paper espousing similar ideas to those I have outlined – especially, that the initial peptide products were not enzymatic but structural, and that the early role of amino acids (and pre-tRNA complexes such as aminoacyl-AMP) was as cofactors in ribozymal activity.
obviously wrong
Evidently it is obviously correct. Sorry, mung.
Glutathione is a polypeptide and is not a protein nor is it encoded via DNA.
Tangentially related: http://phys.org/news/2013-08-insight-genetic-code.html
Here ya go:
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0072225
Mung,
Not sure what you mean. That a ‘finished’ protein may involve combination of subunits, cleavage, methylation, etc, does not change the fact that the chains are made in a ribosome. I’m unaware of anything earning the label ‘protein’ whose peptide chain(s) originates in any other manner in living organisms. By ‘made the same way’, I was referring to the ribosomal origin of the peptide chains, not any post-translational modification. And I am aware of non-ribosomal peptides, but tend to think of them as ‘oligopeptides’ rather than ‘polypeptides’ (oligos – few, rather than polus, many), as they rarely contain more than a handful of subunits, and are rarely repetitively linked (a mix of alpha and other linkages, rather than all alpha). Calling them ‘polypeptide’ isn’t wrong, but neither is choosing not to do so, since there is a fair bit of wiggle room as to how many subunits make a ‘poly’. If you think ‘many’ means two or more, then yes, anything with two or more peptide-bonded subunits is a polypeptide. Those were not what I was referring to, however; my point was that ribosomal products do not organise themselves into neat categories, it is us who choose to label them (and there are, as you note, other things made in other ways that may fall into those categories. But so – as you might say – what?).
Ribosomal products are simply peptide alpha-chains of whatever length sits between Start and STOP. The benefit accruing to an early peptide-coder does not depend upon producing folded mature proteins, but upon the production of functional peptide-bonded products of some length. There are many possible functions, and not all require either folding or significant length or great sequence specificity.
So if one is looking for an advantage for the generation of repetitive peptide bonding, one should not restrict oneself by thinking too rigidly in terms of categories such as ‘protein’, ‘polypeptide’, ‘peptide’, ‘oligopeptide’ etc. They are categories for our convenience. If you think the only functional product is a folded protein, and they are always big & rare in their search space, you are missing numerous other possibilities.
No, the first refuge of someone who is more interested in the fundamentals of the process than in some point-scoring exercise you may have carried over from other discussions on other blogs.
You seem unduly hung up on labels. “It’s a code, I tells ya”. “They’re symbols, they are”. Case in point:
You don’t need to explain the origin of nucleic acids. What is implied by and included under “replicating nucleic acid organisms” may be another matter entirely.
I am not reneging on the concession I actually made. Here is the quote from my first post.
That was the concession (which is unchanged) that preceded your statement:
When you tried to suggest that I might be “departing from the terms of the challenge”, I reiterated and elaborated on the actual concessions.
Finally, there is the matter of the distinction of replication of DNA or RNA strands (which I am hand waving and allowing as an assumed freebie) vs. the reproduction of a supposed organism as a whole after the manner of reproduction of cellular organism. The latter I have never granted as a free assumption. I addressed this at length explicitly in the fourth obstacle.
More recently, I alluded to this same manner again, though with mixed feelings. I’m not wanting to distract from the discussion that is more focused on translation, but I’m a bit concerned at an apparent willingness to assume for granted that one has not only replication of individual strands of DNA or RNA but rather “a functional replicating nucleic acid organism” (your words).
I’m quite willing to grant the assumption of RNA World replication of RNA strands as the unit of replication and have assumed as much all along. I’ve never granted as a mere assumption the reproduction of a cellular organism as a whole along with anything one might want to freely pack into such a wide open assumption.
One of the explicit obstacles to address is the limited nature of replication of individual RNA strands and the influence natural selection would actually have in such an environment, i.e. tending away from complex structures, not toward them. Only free strands are available for replication; strands bound up in larger structures cannot be copied and would therefore be selected against.
ericB,
This is rather preposterous. You are effectively saying that as part of an ‘in-principle’, hypothetical scenario in which a translated genome arises from an untranslated one, I cannot assume that the precursor state was viable. What are you expecting to be supplied? The actual sequence of nucleotides? A working RNA organism? And if I can’t, that means that Nature was incapable of the task, and we must resort to aliens or Gods?
To repeat my response to that, modern examples are large and specific. They have been through a process of evolution. And quite likely grew large early on as well – larger structures appear to be more effective than smaller ones.
But your challenge was clearly the origin of translation. I don’t need to start with any translational capacity. I start with a transcribed and replicated genome, and take nothing for granted in that genome other than the ribozymal capacity to transcribe and replicate it, and break free of any encapsulating secretions in the process. If you want me to explain the origin of transcription, replication and division, that is a different challenge. Your focus was the code-like qualities of the translation system, which are not present in the base-pairing essence of transcription, replication and ribozyme folding.
Would it hurt? If DNA/RNA is replicating, it is an ‘organism’ under the terms by which I defined, minimally, life. The fact that DNA/RNA is typically encapsulated within a cell does not depend fundamentally on translation, but on the ability to synthesise lipid and transport through it. There are clear advantages to separating the replicating material from the environment. That separation is provided by material manufactured from raw materials imported from the environment by the xNA. Clearly, if an organism encapsulated itself and then was unable to replicate its way out of its own secretions, it would not leave any descendants. Encapsulation therefore must be co-ordinated with replication. But with or without a cellular nature, replicating xNA will encounter Natural Selection, which few other molecules can. It makes a world of difference.
If by ‘limited nature of replication of individual RNA strands’ you mean that RNA is typically single stranded, I’d argue that is only its commonest form in modern cells. Double stranded RNA is entirely possible, and forms the basis of many lab techniques eg in situ hybridisation. This technique only works because complementary RNA strands have high affinity, and can colocate each other from a mass of apparent noise. DNA forms a tighter coil, because of the absence of a bulky oxygen and the hydrophobic nature of methyluridine (thymidine) that occurs every 2 base pairs on average. But an RNA organism would almost certainly have a double stranded genome, transcribed to single strands for ribozyme formation.
It’s not a given that natural selection would always tend away from complex structures. Complexity and reproductive capacity are not universally correlated. If the more complex feature has more benefit than cost, measured in the currency of offspring, then it will increase at the expense of the simpler. We can see this in the tendency of enzymes to become (and remain) larger, despite assumedly greater cost and potential fragility.
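(Allan’s benefit-versus-cost point can be made quantitative with the standard haploid selection recursion — a textbook model, not anything specific to this scenario: a variant with relative fitness 1 + s at frequency p changes each generation as p' = p(1+s) / (1 + p·s). Any net s > 0, however small, drives the variant toward fixation.)

```python
# Standard haploid selection recursion (illustrative textbook model).
# s = net selective advantage (benefit minus cost), p = variant frequency.
def select(p, s, generations):
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)  # p' = p(1+s) / (1 + p*s)
    return p

# Even a 1% net advantage carries a rare variant (initial frequency 0.1%)
# to near-fixation within a couple of thousand generations.
p = select(0.001, 0.01, 2000)
print(p)  # close to 1.0

# With zero net advantage, the frequency simply does not move.
print(select(0.001, 0.0, 2000))
```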
Do you really consider it preposterous that I don’t let you “merely assume that the DNA or RNA has whatever sequence of nucleotides one may need”?
(And BTW, thank you for not trying to claim that I’ve changed what I’ve allowed.)
I’ve said that you could merely assume as viable the precursor state that you have RNA and/or DNA molecules with everything needed for replication of individual strands (which includes the existence of ribozymes to enable that replication process). You can also assume access to a random variability in the sequencing in the strands and different rates of replication due to variations in that sequencing.
That assumed world of RNA (and/or DNA, if you want it) allows a type of natural selection to operate, though I submit it won’t select in the direction you need it to go.
What I expect is a persuasive evaluation of the obstacles a proposal needs to overcome in order to reach the proposed destinations, i.e. your conceptual steps. (Providing and depending on the advantages that would accrue if they were reached would not be legitimate for this purpose, since it makes the teleological assumption of an ability to be led toward a future benefit.)
If you want to depend merely on blind search and fantastic luck, your evaluation would need to consider the size of the relevant search space, such as for the ribozyme you hope to discover (which is not the same as the size of the search space for the smallest possible RNA sequence with any catalytic effect).
You also need to address the obstacles of the way the playing field is tilted against your process, such as the thermodynamic tendency to prefer to roll downhill and the expected relevant influence of natural selection as it works through the generations of variation (in contrast to merely invoking natural selection to compare your conceptual destinations).
If you want to call some long RNA sequence a kind of “genome”, I’ll even grant the assumption of random transcription of random parts of any strand. (In one of your earlier posts, IIRC, you alluded to the issue of how to know what to transcribe. That indeed is an issue when you don’t have a coordinated, complex organism with translation and all it enables.)
But the method of replication or of transcription — by its very nature — always produces a reversed and complementary strand of more RNA (or DNA), and always requires that the template strand bases are free and available.
By the inherent and unchangeable limitations of that process, that unit of replication doesn’t automatically give you “reproduction” of anything more than an RNA or DNA strand at a time (e.g. not of a larger organism as a whole, whose reproduction process would reliably create other structural components not made of RNA or DNA or would directly create complex structures as such, rather than individual strands of nucleotides).
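(The claim that template copying “always produces a reversed and complementary strand” is just the base-pairing rule made explicit; a minimal sketch, assuming the RNA alphabet:)

```python
# Copying a template strand by base pairing yields its reverse complement:
# each base pairs with its partner (A-U, G-C), and the new strand runs
# antiparallel to the template.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def replicate(template):
    # Pair each base, reading the template in reverse to reflect
    # the antiparallel orientation of the copy.
    return "".join(PAIR[base] for base in reversed(template))

copy = replicate("AUGGCU")
print(copy)                 # -> AGCCAU (reversed, complementary)
print(replicate(copy))      # -> AUGGCU (copying the copy regenerates the template)
```

Note what the sketch also makes plain: the process yields one new strand per free template, nothing more — which is ericB’s point about the unit of replication.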
No, I’m glad to clarify any misunderstanding about that. You can have single or double stranded in any combination. I am referring to the essential requirements for what it takes to create a complementary reversed sequence from a template.
I repeat for emphasis, this limitation comes from the inherent and unchangeable limitations of that process of strand replication via complementary binding of nucleotides. If you want greater abilities (e.g. the ability to construct something else reliably and repeatedly during “reproduction”), you would need to find some other process of “reproduction” that has greater abilities and that would be available to you before having translation.
This is necessarily part of the challenge of the creation of the translation system because it would be illegitimate to merely assume you could use the benefits of having a translation system in order to create the first translation system. The circular nature of such an illegitimate move is obvious. What tends to slip by unnoticed is how much of the cell’s ability to exist and function as an organism depends directly upon translation — which is far more capable than RNA World replication of individual strands.
Now if you think you can justify something better even than strand replication without having translation, that path is open to you. You just cannot use an assumption of the benefits that come from what you are trying to create. That would be an illogical, illegitimate move.
To do so successfully, reliably, and repeatably with appropriate controls and coordination that make it non-“lethal” may very well depend on the greater capabilities of an organism that can resort to stored programming that can be accessed with the help of translation. But if you can build the required coordination and controls without translation, have at it.
Keep in mind, however, that you cannot assume you have a coordinated “cell” without translation. The RNA/DNA only mechanisms provide strands of RNA and DNA, i.e. a colony of strands that can interact. You don’t yet have an “it” that is a coordinated and controlled organism with the built in programming to act as one.
In other words, you cannot assume a change of replication unit. You cannot assume a reproduced translation-based organism when the pre-translation universe gives you replication of strands.
This is clearly true. (In passing, notice the use of “organism”, “itself”, “its”, when nucleotide strand replication gives you “them”.)
I hope you are not sliding into a backward looking logic, e.g.
“Since X outcome would not leave descendants, Y outcome that would leave descendants is advantageous (if it occurs), and therefore Y (including all the coordination it requires) must have occurred(?).”
Or, if reaching my goal needs Y (which is better than X), then clearly it can and will create Y.
There would be clear advantages, if it could be made to work via a stepwise path without creating a “lethal” (no replication) scenario, which requires controls and coordination and programming — something that doesn’t come from a bunch of individual strands blindly following strand replication, strand natural selection, and entropic gradients.
A working cell based on a translation system can have what it needs built in and preprogrammed.
Here again, your analysis seems to only consider the benefits of the working / functional feature, as if natural selection only turned on after the creation process had reached a conceptual goal. It operates at every generation.
Not yet functional =
Not functional =
Not providing the benefits that come with reaching functionality =
Those future benefits cannot be considered by natural selection =
They cannot serve to outweigh the present real disadvantages in the nonfunctional transition.*
You seem to want to repeatedly invoke the benefits of future states as an implicit justification for the assumption they will be reached. That is an improper interpretation of natural selection, which is blind to future benefit but does consider present disadvantages — at every generation.
*p.s. I am not claiming there could not be something that would outweigh or justify a change. I am saying that future benefits cannot provide that justification to compensate present disadvantages.
While looking for something else unrelated this morning, I came upon this bit of advice that provides some food for thought with regard to the wanderings of prebiotic chemicals left to themselves.
You’ll get somewhere! And if you look back from where you are, you might convince yourself that was where you were headed all along! 🙂
Prebiotic chemicals, of course, cannot discover translation directly; on that we are agreed.
Regrettably, ericB’s post was stuck in moderation for a while. I should remember to check for that more often.
ericB,
Give that quote a read once more, because you missed something important:
ericB,
I don’t need a specific sequence of nucleotides, though clearly I propose (with some evidential support) a viable replicator lacking a translation system, whose nucleotide composition could be anything in the set of viable translation-free replicators. You believe that set has no members (I’d have to ask: why? My inability to create or provide one, or give much genetic detail, is hardly conclusive). If you don’t allow that as a plausible organism, then I must stare at my shoes and mumble “I got nuthin'”. Because Natural Selection is the only process that can cause chemistry to elaborate in this way, and it does not happen without competitive replication. As I say, it’s your game and your rules. You can (and, I’m sure, will) deny anything I adduce.
But I do think it preposterous that you demand detailed operational workings of an organism that, if it existed, did so 3.8 billion years ago. Specifically to truncate the pathway of argumentation that addresses your challenge using a mechanism you omitted to consider: Natural Selection operating on such organisms has the capacity to build complexities that ‘unguided chemicals’ do not.
Absent which, you invoke aliens/gods with the capacity to cross interstellar distances and perform complex atom-level arrangements that somehow avoid following entropic gradients until the full configuration is in place. And can pluck and combine complex sequences out of vast search spaces using only ‘intelligence’. You simply assume that they have just the capacities they need in order to do what ‘Nature’ cannot. Care to supply a bit more detail on that proposal? Inconsistency, thy name is ericB.
Eric,
It’s late; I may reply more fully on your other points another time. You seem persistently to misunderstand the mechanism of NS, and you keep banging on about translation as if it is the only means by which anything can happen in an ‘organism’ – any organism, not just a modern one. I don’t know why you think genetic control and co-ordination is only available with translated genomes, not purely transcribed ones.
Allan
Allan Miller:
Unfortunately your “case in point” isn’t. You know it, I know it, Alan knows it.
Direct me to a Protein Database that contains in it every conceivable polypeptide.
If all polypeptides are proteins, why do they not all appear in the Protein Databases?
Darwinism in a nutshell.
Mike Elzinga:
I took a look. I’m guessing that I took a look before you did. You found something in it of interest? Do tell. If you look in the index, “Genetic code” turns up one reference (p. 202).
Not required if there is no code. Right Mike?
[Yes, these are all quotes from the book Mike E. cited. I’m guessing he never actually read it.]
Mike Elzinga:
WHY?
http://en.wikipedia.org/wiki/Extinction_event
“Over 98% of documented species are now extinct…”
So?
My case in point was you being unduly hung up on labels. Which you confirm by another post in which you are unduly hung up on labels. It doesn’t matter, Mung. Things don’t happen contingent on the way we define them. We’re human beings. We like to classify and categorise, but nature – especially biology – does not always permit simple binning into discrete categories. Words, servants, masters, etc.
You’re surely not saying that the difference is that proteins must be ‘real’ whereas ‘polypeptide’ includes sequences that have never actually existed?
A single example of each ‘conceivable polypeptide’, actual size, would fill many, many universes the size of ours to a mind-boggling degree. So the curators of Protein Databases probably have to be a little selective. And their purpose is likely to catalogue Nature, not sequence space.
Take one polypeptide. Allow it to fold. You have a protein, and a polypeptide. Cleave its backbone in one place. You now have two polypeptides, but still only one protein. “A” protein can be a monomer, dimer, trimer etc. “A” polypeptide can only be a monomer. You can separate the subunits of a multimeric protein and you would have as many proteins as you had polypeptides. If you unravelled the tertiary structure, you might drop the appellation ‘protein’ – even though the ‘Protein Database’ may contain primary structure only. Hey ho. To what extent does all this matter when it comes to peptide synthesis?
EricB says:
I rather think it suggests that polypeptides are a subset of proteins, in that, if one wishes to make a distinction, one may limit the use of “polypeptide” to below a certain threshold of number of residues in a protein chain, as your quote goes on to say. But I think Allan Miller makes a reasonable point in suggesting “oligopeptide” is a better description for the subset. I still think the simple point to bear in mind is that all proteins (and polypeptides, if you insist on needing a separate category) have the same linear structure of amino acids linked by peptide bonds; i.e. from the amino group of one to the carboxyl group of its neighbour. All nuances can be covered by adding adjectives to “protein” (big, small, functional, unknown, etc etc).
I responded upthread.
Agreeing with Allan, just to add that the set of all conceivable polypeptides is, in principle, unbounded. For chains of length n drawn from the 20 standard amino acids there are 20 raised to the nth power possible sequences; start at n = 2 and sum over every greater length, and the total grows without limit. But of course there are vastly more unknown proteins than those that are so far known in nature or have been synthesized. I can’t imagine how or why you could include anything about (so far) unknown protein sequences.
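(For the record, the count for chains of length n over a 20-letter amino acid alphabet is 20^n; a minimal sketch of the combinatorics:)

```python
# Number of distinct polypeptide sequences of length n over the
# 20 standard amino acids: 20**n, which grows without bound in n.
def sequence_count(n):
    return 20 ** n

print(sequence_count(2))   # -> 400 possible dipeptides

# Summing over all lengths from 2 up to (say) 100 already gives a number
# dwarfing any conceivable physical database -- the curators' problem.
total = sum(sequence_count(n) for n in range(2, 101))
print(total)
```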
If the answer is not now obvious to you, I suggest rephrasing your query.
So this world contains replicating double stranded RNA/DNA, with transcription into functional RNA (ribozymes, riboswitches, etc)? Good. What leads you to think such a system won’t select in the direction I ‘need’ it to go? Selection boils down to consistent reproductive advantage. You think that nothing in the steps I propose could possibly have such an advantage? You think that such an organism has something additional to ‘solve’ before primitive peptide synthesis can occur, given that is assumed replicatively competent, with all that implies?
You repeatedly assert, and I repeatedly deny, that I am invoking ‘future function’ as the rationale for any step (or sub-step). It’s pretty clear that I know, from my vantage point on the timeline, what my explanandum is: modern protein synthesis. So naturally I propose steps that lead towards that. What would be the point of proposing steps that lead anywhere else? But this does not mean that getting ‘here’ is the reason my earlier steps were taken, at the time they arose. I simply know where ‘here’ is, and what clues there are stuck to the wheels and windshield.
Hah! Why should I restrict myself to the ‘relevant search space’, when Hoyle-style hogwash based on the combinatorics of modern long sequences of 20 acids emanates repeatedly from your side! A 5-base ribozyme may indeed not be an actual actor in the history of life. But the existence of such structures is a useful corrective against those inflationary games that your colleagues play with gay abandon. You don’t need ‘fantastic luck’ to pull out a plum from a small space that contains functional catalysts. On the other hand, if your space really is huge and your target small, how does ‘intelligence’ help you find it? I have buried a gold doubloon on the continent of Australia. Using intelligence alone, in place of ‘blind search and fantastic luck’, locate it.
If I have a viable transcribing replicator, it has clearly already solved the problem of energy (and raw material) cycling. And you may give further thought to an issue I have noted before – if a viable replicator must be assembled, as a large and complex whole, how is this atomic structure brought into the desired configuration without entropy going off all around you? You may be an engineer (I don’t know, but IDers frequently are), but your engineering intuition loses traction at the atomic scale, as Mike Elzinga regularly points out.
I don’t know where you get that from. As noted above, Natural Selection says nothing about ‘destination’. It is an immediate effect based upon the differential in reproduction caused by possession or lack of a particular phenotype in a population where there are others.
Why this restriction on the extent of transcriptional control? ‘Random’ transcription of ‘random parts of the strand’? How could such an organism fulfil my criteria of being a viable transcribing replicator? If it makes ribozymes, they must be delimited by transcriptional Start and End marks. And there must be some element of transcriptional control. All of this is well within the capacity of non-ribozymal RNAs. This continues to the present. A pervasive mechanism of gene control is siRNA (small interfering). Of course, as with catalysis, many molecules in transcriptional control today are translated proteins. They bind RNA. Can RNA bind RNA? Of course it can!
Indeed, replication and transcription both require separation of the relevant portion of double stranded xNA. Is that chemically beyond RNA to achieve? Not as far as I know.
Suppose you have such a process that could only replicate a single xNA strand. It must read the template 3’ to 5’ (synthesising the new strand 5’ to 3’), so that the energy of each incoming xTP monomer can be applied in making the bond. All you need to replicate double stranded xNA is to separate the strands and set two versions of this process off in opposite directions. Replicating both strands is the essence of reproduction. Modern cells do it slightly differently, replicating both strands in the same physical direction simultaneously, with much fiddling on the lagging strand (although bacteria continue to send two replicating complexes off in opposite directions round their circular chromosome, taking half each). The number of strands you can replicate with one ‘replisome’ is a trivial problem – just use two!
Which is why I don’t make it, and I’m surprised you persist in perceiving me as doing so. The benefits of having uncoded peptide synthesis, for example, are nothing to do with the benefits of translation. Short uncoded peptides can function as antioxidants, can assist structural integrity, can act as cofactors, can bind RNA for all manner of reasons many of which can be beneficial (increase reproductive capacity).
I think I’m fairly well up on the extent of integration of translated peptides into the modern cell. What is illegitimate on your part is the assumption that this clear molecular superiority (in many respects) equates to essentiality. And the insistence that, though you acknowledge the clear and wide-ranging benefit that even uncoded peptides provide today, an organism that does not have them cannot evolve them – at all, even down to simple condensation – because selection would disfavour it! How’s that work? Beneficial molecules that decrease reproduction?
Most of the ‘co-ordination and controls’ take place at the transcriptional level even now, and many involve RNAs. It is hardly a wild fantasy to envisage the whole system of switching and transcript delineation to be entirely under RNA control. Are you aware of a transcriptional control molecule that has to be protein?
Oh, you absolutely have an ‘it’. Each covalently linked xNA ‘genome’ is an ‘it’, for whose ‘benefit’ all phenotype serves. The xNA genome is transcribed in the service of this ‘it’. Those transcripts that enhance replicative capacity/survival will find their xNA replicated as part of the whole more frequently than those that hinder it.
I don’t assume a change of replication unit. The replication unit is, was, and always will be, the genome. It is copied into transcription units, which may or may not be further processed as translation units. I’m not invoking anything more than a new means of generating phenotype via change in post-transcriptional processes.
Neither of those caricatures represents my position. If a specific change X causes its bearers to leave no descendants, the population will only contain not-X. If Y occurs and is better than (a different, resident) X (irrespective of ‘my’ goals) the population will tend to become enriched in Y and impoverished in X, to the point of extinction of X.
You obviously read me that way, and I wish you’d stop! You were the one who said “natural selection would tend away from complex structures”, which implies a generational and universal tendency to eliminate even incremental departures from the simple. I’m saying that universal generational assumption is invalid, which is NOT an argument promoting the future benefit of the complex feature, but simply looking at simplicity/complexity on the same scale as you were. Complexity can be acquired (and lost) serially. A longer sequence is more complex than a shorter one, and fractionally more costly. Be that as it may, if that sequence is neutral or better, it suffers no inherent tendency to be lost from the population. The new sequence can become longer still. If that sequence … etc etc. At no point do I regard future benefit as a driver of Natural Selection. Never. Not once. If you read me that way again, read it more carefully, because you have almost certainly misread me. 🙂