The Nirenberg and Matthaei experiment was a scientific experiment performed on May 15, 1961, by Marshall W. Nirenberg and his postdoctoral fellow, Heinrich J. Matthaei. The experiment cracked the genetic code by using nucleic acid homopolymers to direct the translation of specific amino acids.
The Nirenberg and Leder experiment was a scientific experiment performed in 1964 by Marshall W. Nirenberg and Philip Leder. The experiment elucidated the triplet nature of the genetic code and allowed the remaining ambiguous codons in the genetic code to be deciphered.
The Marshall W. Nirenberg Papers: Public Reactions to the Genetic Code, 1961–1968
Nevertheless, the problem of the genetic code, at least in the restricted one-dimensional sense (the linear correlation of the nucleotide sequence of polynucleotides with the amino acid sequence of polypeptides), would appear to have been solved.
In the years after 1953, scientists scrambled to be the first to decipher the genetic code. In an attempt to make the race interesting, theoretical physicist and astronomer George Gamow came up with a plan. He organized an exclusive club, the “RNA Tie Club,” in which each member would put forward ideas as to how the nucleotide bases were translated into proteins in the body’s cells. His club had twenty hand-picked members, one for each amino acid, and each wore a tie marked with the symbol of that amino acid. The group—which did not include Marshall Nirenberg—met several times during the 1950s but did not manage to be the first to break the code.
Genetic memory resides in specific molecules of nucleic acid. The information is encoded in the form of a linear sequence of bases of 4 varieties that corresponds to sequences of 20 varieties of amino acids in protein. The translation from nucleic acid to protein proceeds in a sequential fashion according to a systematic code with relatively simple rules. Each unit of nucleic acid defines the species of molecule to be selected, its position relative to the previous molecule selected, and the time of the event relative to the previous event. The nucleic acid therefore functions both as a template for other molecules and as a biological clock. The information is encoded and decoded in the form of a one-dimensional string. The polypeptide translation product then folds upon itself in a specific manner predetermined by the amino acid sequence, forming a complex, three-dimensional protein.
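To make that one-dimensional decoding concrete, here is a minimal Python sketch of sequential triplet translation. It is purely illustrative: only a handful of the 64 standard codon assignments are included, and the names (CODON_TABLE, translate) are ours, not anything from the quoted text.

```python
# Minimal sketch of sequential triplet decoding (illustrative only).
# Only a few of the 64 standard codon assignments are included.
CODON_TABLE = {
    "AUG": "Met",  # also the usual start codon
    "UUU": "Phe", "UUC": "Phe",
    "AAA": "Lys", "AAG": "Lys",
    "GGU": "Gly", "GGC": "Gly",
    "UAA": None, "UAG": None, "UGA": None,  # stop codons
}

def translate(mrna):
    """Read an mRNA string three bases at a time, in order,
    mapping each codon to an amino acid until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3])
        if residue is None:  # stop codon (or a triplet not in this toy table)
            break
        peptide.append(residue)
    return "-".join(peptide)

print(translate("AUGUUUAAAGGUUAA"))  # Met-Phe-Lys-Gly
```

The one-dimensional character of the code is visible in the loop: each triplet fixes which residue is selected and its position relative to the previous one.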
Scads of scientists. Two Nobel Prizes. Isn’t consensus science grand?
“…the fact is that present life requires semiotic control by coded gene strings.”
– Howard H. Pattee
Yes.
Those are entirely epistemic claims. As indicated above, true or false, they have no bearing whatever on any of the Design arguments as I understand them.
I don’t think I actually said they were arguments against ID. They were just musings.
One problem is I don’t recognize myself as saying what you quote me as saying. I suspect a glitch.
BTW, Allan, I didn’t mean to suggest by my use of “bio-babble” that your remarks on the biology here are not true, cogent, important, etc. Only that, to the extent they don’t distinguish between our ignorance and intrinsic randomness, it’s impossible to tell what, if any, effect they have on any argument for design. Once you clearly indicated that you agree with keiths’ statement of hard determinism, it was clear that the biological details of DNA progressions have (in your view) no bearing on any design argument. If the DNA processes entailed actual (intrinsic) arbitrariness, the Frankieites would at least TRY to use them to support their claims that those processes are uniquely tied to the only other mechanisms they claim to have that feature of arbitrariness, viz., stuff involving mentality or cogitation.
Obviously one could still criticize that analogy in various ways. But if determinism is true, it’s not necessary to do that. Instead, the appropriate response is just that (2) confuses randomness with human ignorance.
There is a glitch. Take another look. I was talking about Allan there, not you.
I didn’t say that you actually said they were arguments against ID, did I? I just pointed out that they aren’t.
Looks like all right-thinking people have ended up in a similar place.
Now, there are claims that Newtonian physics or GR are not deterministic. But these are controversial. And I rather doubt they are what ID proponents have in mind.
I think you are correct here.
Once the idea of information enters the picture, we are introducing the concept of choice, which, it seems to me, means metaphysical arbitrariness.
I detect the possibility of an infinite regress.
To sum up, Upright is attempting an argument by elimination. If physical law alone doesn’t determine the codon-to-AA mapping, then intelligence must have been involved, according to UB.
This is quite stupid, because arguments by elimination work only if you eliminate all but one of the competing alternatives. Upright hasn’t come close to doing so, because he hasn’t considered the possible roles of boundary conditions and quantum randomness.
That’s not how I’d sum things up, but OK.
You’re welcome to present your own summation.
Allan,
Some mutations are caused by cosmic rays whose emission is subject to quantum indeterminacy. Wouldn’t you agree?
I’ve already done so a couple of times above. But, to put it in a nutshell, I’d say that Design arguments from “arbitrariness” are incompatible with determinism, but DNA sequencing (at least according to both you and Allan, who know a good deal more about it than I do) is not incompatible with determinism. Therefore, one can’t infer Design from the existence of DNA sequencing via any argument from arbitrariness.
I don’t think we need to reach that far to establish that the shaper of living things has been stochastic processes. The most important thing about evolution is that so much of it is neutral.
Neutral changes can become frozen when subsequent changes render them essential.
petrushka,
We do have to reach that far if we are trying to decide whether mutations are ever metaphysically random, versus merely epistemically random.
I see no way to decide that and consider it a waste of time. Unless you are researching physics at the bleeding edge.
walto,
But then your argument relies on the possibility that the universe is deterministic.
My point is that Upright’s argument fails even if the universe isn’t deterministic, because he has failed to eliminate all but one of the possible alternatives.
petrushka,
Physicists are working on it and making very good progress.
I’ve noticed that you’re relatively incurious and quick to dismiss questions that you don’t see as having an immediate practical relevance.
The alternative to design is no design, i.e., determinism and indeterminism. The ‘boundary conditions’ biz is irrelevant.
“…because arguments by elimination only work if you eliminate all but one of the competing alternatives.”
Argument by elimination? I hope that’s not what it sounds like.
http://rationalwiki.org/wiki/Holmesian_fallacy
walto,
Keep in mind that design is possible even in a deterministic universe. It’s just that the outcome of the design process is inevitable, just like every other outcome.
The argument does stink, but it’s all ID has.
Yes, I agree with that. That fact makes it harder to express the alternatives to Design as ‘positives’, though, because you could have a designed world that contained at least some indeterminism too! So, again, to put it tautologously, the alternative to design is simply no design.
And you could add that arguments from ignorance are not only fallacious, but particularly pernicious in this area.
walto,
Not sure I agree with that ‘any’. If someone is invoking some kind of argument from appearances (which covers a lot of it), the biological process of DNA copying and sequence-correlated elimination/preservation is a thing that can give the appearance of design, and so has a bearing on the argument.
Even neutral sequences, with no correlation to survival/reproduction, can get swept into this ‘designed’ view, once one has opened that door. ‘We wouldn’t have these sequences if we didn’t need them’.
I agree that many of the arguments for ID amount to arguments from ignorance or God-of-the-gaps.
But I think the best way to understand the core issue is to consider it as science versus non-science.
The best way to think about the issue is not determinism versus randomness. For as you and Keith have agreed, a scientific explanation for the code can involve QM randomness, which could be ontological.
I also think designed versus not-designed is not the best approach, since ID could claim a design argument is scientific and superior to a no-design scientific argument. For it is possible to explain the code as the design of a non-supernatural agent: perhaps we are all participants in some experiment being conducted by a super-intelligence. (BTW, this also covers arguments for the code as based on the original intentionality of an agent.)
But all that does is shift the question to providing a scientific explanation for the origins and actions of that intelligence.
If you say that only a supernatural agent would work, then your explanation is not scientific.
Or if you say that there is nothing we can know that would let us explain that agent scientifically, then again that approach is not scientific. Or, if you grant for the sake of argument that it is, then you can argue that it is not a good scientific explanation compared to what, e.g., Allan M has suggested.
So I think the key philosophical* issues in confronting ID are science versus non-science, what constitutes a scientific explanation, and how we make inferences to the best scientific explanation.
This is a point KN has made about ID several times, BTW.
(In another post, you say boundary conditions don’t matter. Here is why they do. If we say science can provide an explanation of the genetic code, in the end this will involve the initial state of the universe, which is a boundary condition. Only certain initial states will work for the scientific explanation of the code: e.g., low entropy, certain values of constants. So an ID proponent could push the design argument to that point.)
————————-
*I added the adjective “philosophical” because, in the US at least, the important issues are political and legal, not scientific or philosophical.
Yes, you’re right. I overstated that.
I just meant that in keiths’ alternatives list, they’re just placeholders for some stuff we may not know. I.e., we might know all the physical laws, including statistical probabilities right down to the zillionth place, but still not know all the prior conditions, so our inability to predict outcomes doesn’t require that there’s been a designer. IOW, they’re just another way of saying that the Frankie argument confuses–or may be confusing–intrinsic with epistemic arbitrariness. (And if there’s no commission of that error, they have no evidence for their claims at all, and then, as you say, it’s just a “God of the [epistemic] gaps” claim.)
Fair enough. I had the common scientific usage for “boundary conditions” in mind.
By the way, you mention prediction in the above. That’s separate from determinism and randomness, of course, and brings in issues of what can be computed: chaos (unpredictable effects of small differences in initial conditions) and the physical limitations of computation in the real world.
I’m not sure whether these two are epistemic or not.
If Laplace’s demon can overcome them by magic powers, does that make them epistemic?
Magic powers are needed because infinite resources seem to be required to overcome these two issues.
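The chaos point is easy to demonstrate with a standard toy example (ours, not anything from the thread): in the logistic map at r = 4, two trajectories that start a trillionth apart become completely unrelated within a few dozen perfectly deterministic steps, so each extra step of prediction demands more precision in the initial condition.

```python
# Sensitive dependence on initial conditions in the logistic map x -> r*x*(1-x).
def logistic_orbit(x, r=4.0, steps=50):
    orbit = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-12)  # differs by one part in a trillion

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: |difference| = {abs(a[step] - b[step]):.3e}")
# The gap grows roughly exponentially until the two orbits are unrelated,
# even though every step is perfectly deterministic.
```

That is why overcoming chaos in general would take something like infinite precision, i.e., the demon’s magic.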
I probably should think about this more, but my initial impression is that it’s question-begging against the theist who claims that science is essentially constructed in such a way that it can’t answer these questions (and only a God theory can).
It might be worth pointing out the possibility of equivocation on ‘prediction’. It’s not simply a question of predicting the future, but of predicting a consequence of a hypothesis. For example, my model of codon assignments being added stepwise predicts the fault-tolerance of the table, since codon group subdivision is a constraint (forced by existing protein codon usage) that biases substitutions towards being chemically conservative. This gives fault tolerance on single misreads for free. Likewise, the constraint on substitution vs assignment of STOPs predicts that most codon variation in a ‘settled’ code will be around a STOP, and in one direction only.
I obviously knew these facts first before looking for a model that fit with them. But the model still ‘predicts’ these features.
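For what it’s worth, the fault-tolerance of the table can be checked numerically. Here is a rough Python sketch (ours, not Allan’s actual model) that counts how many single-base misreads of the standard codon table are synonymous; a finer version would also score chemically conservative swaps, e.g. by hydrophobicity, which is where the group-subdivision constraint shows up most clearly.

```python
# Sketch: how fault-tolerant is the standard codon table to single misreads?
# Counts, over all 64 codons and their 9 possible single-base changes, how
# often the change is synonymous. '*' marks stop, treated as its own class.
from itertools import product

BASES = "TCAG"
# Standard genetic code, codons enumerated in TCAG order.
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): aa for c, aa in zip(product(BASES, repeat=3), AA)}

same = total = 0
for codon, aa in CODE.items():
    for pos in range(3):
        for base in BASES:
            if base == codon[pos]:
                continue
            mutant = codon[:pos] + base + codon[pos + 1:]
            total += 1
            same += CODE[mutant] == aa

print(f"{same}/{total} single-base changes are synonymous ({same/total:.0%})")
# Expect roughly a quarter, dominated by third-position changes.
```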
I’d say so, yeah. In fact, even if Laplace’s demon can’t overcome them, they’re epistemic barriers to knowledge.