# Natural selection can put Functional Information into the genome

It is quite common for ID commenters to argue that it is not possible for evolutionary forces such as natural selection to put Functional Information (or Specified Information) into the genome. Whether they know it or not, these commenters are relying on William Dembski’s Law of Conservation of Complex Specified Information, which is supposed to show that Complex Specified Information cannot be put into the genome. Many people have argued that this theorem is incorrect. In my 2007 article I summarized many of these objections and added some of my own.

One of the sections of that article gave a simple computational example of mine showing natural selection putting nearly 2 bits of specified information into the genome, by replacing an equal mixture of A, T, G, and C at one site with 99.9% C.

This post is intended to show a more dramatic example along the same lines.

Suppose that we have a large population of wombats and we are following 100 loci in their genome. We will make the wombats haploid rather than diploid, to make the argument simpler (diploid wombats would give a nearly equivalent result). At each locus there are two possible alleles, which we will call 0 and 1. We start with equal gene frequencies 1/2 and 1/2 of these two alleles at each locus. We also assume no association (no linkage disequilibrium) between alleles at different loci. Initially the haplotypes (haploid genotypes) are all combinations from 00000…000 to 11111…111, all equiprobable.

Let’s assume that the 1 allele is more fit than the 0 allele at each locus. The fitness of 1 is 1.01, and the fitness of 0 is 1. We assume that the fitnesses are multiplicative, so that a haploid genotype with M alleles 1 and 100-M alleles 0 has fitness 1.01 raised to the Mth power. Initially the number of 1s and 0s will be nearly 50:50 in all genotypes. The fraction of genotypes that have 90:10 or more will be very small, in fact less than 0.0000000000000000154. So very few individuals will have high fitnesses.
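That initial tail fraction is easy to check directly; here is a quick sketch in Python (100 loci and 50:50 frequencies, as in the text):

```python
from math import comb

# Fraction of haploid genotypes carrying the 1 allele at 90 or more of
# the 100 loci, when each locus is 50:50 and loci are independent:
# the upper tail of a Binomial(100, 1/2) distribution.
tail = sum(comb(100, k) for k in range(90, 101)) / 2**100
print(tail)  # about 1.53e-17, below the bound quoted above
```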

What will happen to these multiple loci? This case results in the gene frequency of the 1 allele rising at each locus. The straightforward equations of theoretical population genetics show that after 214 generations of natural selection, the 1 allele will have gene frequency 0.8937253 at each locus. The fraction of genotypes having 90:10 or more will then be 0.500711. So the distribution of genotypes has moved far enough toward ones of high fitness that over half of them have 90 or more 1s. If you feel that this is not far enough, consider what happens after 500 generations. The gene frequency at each locus is then 0.99314, and the fraction of the population with 90 or more 1s is then more than 0.999999999.
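For readers who want to verify these numbers, here is a minimal sketch of the deterministic calculation (haploid selection with multiplicative fitnesses, as in the text):

```python
from math import comb

def freq_after(t, s=0.01, p0=0.5):
    """Deterministic haploid selection at one locus:
    p' = (1+s)p / ((1+s)p + (1-p)), iterated for t generations."""
    p = p0
    for _ in range(t):
        p = (1 + s) * p / ((1 + s) * p + (1 - p))
    return p

def tail_90(p):
    """Fraction of 100-locus genotypes with 90 or more 1 alleles,
    when each locus independently has 1-allele frequency p."""
    return sum(comb(100, k) * p**k * (1 - p)**(100 - k) for k in range(90, 101))

p214 = freq_after(214)
p500 = freq_after(500)
print(p214, tail_90(p214))  # about 0.8937 and about 0.50
print(p500, tail_90(p500))  # about 0.99314 and more than 0.999999999
```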

The essence of the notion of Functional Information, or Specified Information, is that it measures how far out on some scale the genotypes have gone. The relevant measure is fitness. Whether or not my discussion (or Dembski’s) is sound information theory, the key question is whether there is some conservation law which shows that natural selection cannot significantly improve fitness by improving adaptation. My paper argued that there is no such law. This numerical example shows a simple model of natural selection doing exactly what Dembski’s LCCSI law said it cannot do. I should note that Dembski set the threshold for Complex Specified Information far enough out on the fitness scale that we would have needed to use 500 loci in this example. We could do so — I used 100 loci here because the calculations gave less trouble with underflows.
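On that reading, the specification “90 or more 1 alleles” can be assigned a figure in bits: minus the log, base 2, of the fraction of all 2^100 equiprobable starting genotypes that meet it. A quick check (same setup as the text):

```python
from math import comb, log2

# Fraction of all 2**100 equiprobable genotypes with 90 or more 1s,
# and the corresponding specified information in bits.
tail0 = sum(comb(100, k) for k in range(90, 101)) / 2**100
fi = -log2(tail0)
print(fi)  # about 55.9 bits -- which is why 500 loci would be needed
           # to clear Dembski's much higher threshold
```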

I hope that ID commenters will take examples like this into account and change their tune.

Let me anticipate some objections and quickly answer them:

1. This is an oversimplified model; you were not realistic. Dembski’s theorems were intended to show that even in simple models, Specified (or Functional) Information could not be put into genomes. It is therefore appropriate to check that in such simplified models, where we can do the calculation. For if natural selection is in trouble in these simple models, it is in trouble more generally.

2. You have not allowed for genetic drift, which would be present in any finite population. For simplicity I left it out and did a completely deterministic model. Adding in genetic drift would complicate the presentation enormously, but would still result in a population with all 11111…1111 genotypes after only a modest number of additional generations.

3. If fitness differences are due to inviability of some genotypes, fitnesses could not exceed 1. Yes, but making the 0 allele have fitness 1/1.01 = 0.9900990099… and the 1 allele have fitness 1 could then be used, and the results would be exactly the same, as long as the ratio of fitnesses of 0 and 1 is still 1:1.01.

4. You just followed gene frequencies — what about frequencies of haplotypes? This case was set up with multiplicative fitnesses so that there would never be linkage disequilibrium, so only gene frequencies need to be followed.
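On objection 2, a toy Wright–Fisher simulation illustrates the point for a single locus; the population size, generation count, and seeds below are my own illustrative choices, not numbers from the post:

```python
import random

# Toy Wright-Fisher model: deterministic selection at one locus followed
# by binomial resampling of N haploid offspring, so genetic drift is
# included. N = 1000 and s = 0.01 are illustrative parameters.
def wright_fisher(N=1000, s=0.01, p0=0.5, gens=1000, seed=1):
    rng = random.Random(seed)
    p = p0
    for _ in range(gens):
        p_sel = (1 + s) * p / ((1 + s) * p + (1 - p))   # selection
        p = sum(rng.random() < p_sel for _ in range(N)) / N  # drift
        if p in (0.0, 1.0):
            break
    return p

print(wright_fisher())  # typically 1.0: the 1 allele fixes despite drift
```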

I trust also that people will not raise all sorts of other matters (the origin of life, the bacterial flagellum, the origin of the universe, quantum mechanics, etc.) To do so would be to admit that they have no answer to this example, which shows that natural selection can put functional information into the genome.

This entry was posted in Uncategorized by Joe Felsenstein.

Been messing about with phylogenies, coalescents, theoretical population genetics, and stomping bad mathematical arguments by creationists for some years.

## 228 Replies to “Natural selection can put Functional Information into the genome”

1. Allan Miller says:

SCheesman:

“What is the global increase in CSI in the universe when a chicken lays an egg?” The answer to that question, pretty well without exception, is zero, or negative. Of course “evolution” requires it to be, on average, greater than or equal to zero, or you have stasis or “devolution”.

Not really. Evolution does not require it at all. If it is zero (and you mean by that that any change is neutral wrt the parental version), no harm done, but evolution still happens if there has been a change. If it is less than zero then natural selection will tend towards elimination of the negatives – the population is not necessarily dragged down by the negatives (in what I suspect is a version of Sanford’s ‘genetic entropy’).

As far as adaptation is concerned, evolution does not require it to be greater than zero on average, but only on occasion. Natural selection promotes the fraction that is beneficial, even if the detrimental fraction is far greater – and it tends towards elimination of those too.

2. SCheesman says:

…And do note I said “pretty well without exception”, not “never”. I do accept that under special conditions small amounts of information could be added; just that it is not the normal result. The amount added would make sense given the reproductive rate and available “probabilistic resources” to search the space of possibilities – and of course the new “solution” has to be there for the taking.

3. SCheesman says:

Thank-you, Allan. I really am pretty well in agreement with you on all this. When I said it is required to be slightly above average, I meant averaged over all time from the origin of life to the present, seeing as how the current level of genetic information is non-zero, and we began with zero.

4. SCheesman says:

Allan Miller: If it is less than zero then natural selection will tend towards elimination of the negatives – the population is not necessarily dragged down by the negatives (in what I suspect is a version of Sanford’s ‘genetic entropy’).

This is certainly possible, but in proffered examples of evolution like antibiotic resistance what we actually observe is loss of function conferring a local survival benefit. Burning the drawbridge, so to speak, keeps the invaders out. Remove the environmental challenge and the evolved offspring that survived don’t normally outperform the original population. In the business world they say you can’t expect to grow in the long term when all you do is cut staff to save money.

5. SCheesman says:

Other interesting examples of course are things like the loss of eyes in cave-dwelling amphibians — Use it or lose it.

6. Allan Miller says:

SCheesman – There are bound to be many examples of lineages that have painted themselves into evolutionary corners. Although antibiotic resistance proves a ‘short-termist’ strategy, this is largely because the stress is short-term. The ancestral population has survived, and is ready to re-establish supremacy in the absence of the stress. This is also evolution! But to extrapolate that to a general rule that lineages typically paint themselves into such corners is not, I think, justified. The vast majority of species that have ever existed are extinct. The ones that are not extinct are fortunate as much as adapted.

Modern survivors are fortunate as individuals – any one event in their four billion years history could be removed and – ping! – they never were. And they are fortunate as species. Despite demands as varied as survival as a bacterium in a world without eukaryotes, through to single-celled eukaryote, protochordate, fish, amphibian, dinosaurian, bird and finally chicken, the lineage has had the fortune to twist and turn, to swerve past the 5 major mass extinctions and many smaller ones and come out, so far, unscathed. But chickens could just as easily be the end of the line for that particular twig.

This is the problem with drawing inferences from ‘forward-looking’ experiments on modern lineages – particularly lab-bred lines – and extrapolating back to infer past constraints or general rules. Their ancestors evolved; their descendants may or may not.

There is simply a great deal of chance involved in evolutionary history, and the next great clade (cf birds) may presently be an obscure little flatworm. By the time it reaches current levels of diversity, the Age of Birds could be over.

7. SCheesman says:

Thank-you, Allan. I enjoy your posts.

8. Creodont says:

SCheesman: What is telling is your misunderstanding of the issues. Forget CSI for a moment. How much information is in a chicken egg? The same as in a chicken. What is that information except the instructions and machinery necessary to create a chicken? This is hardly different than asking to quantify the information required to create life. Give me the instructions to create life from non-life, and I’ll be well along the way telling you the quantity of “Chicken-Specifying-Information”.

CSI is about origins, as others on this and other threads have pointed out. It is virtually incoherent to ask how much CSI is in a chicken egg. What you can answer are questions like “What is the global increase in CSI in the universe when a chicken lays an egg?” The answer to that question, pretty well without exception, is zero, or negative. Of course “evolution” requires it to be, on average, greater than or equal to zero, or you have stasis or “devolution”.

Information is one of the words in CSI (complex specified information). Adding the word origins to a discussion about CSI (or functional information) doesn’t change the fact that ID proponents claim that CSI is measurable in biological things and in many or all non-biological things. Since ID proponents are the ones claiming that CSI or functional information or information or whatever term is in fashion at the time is measurable, it should be measurable.

Frankly, I feel that you’re doing what IDists always do and that is trying to confuse the issue and bringing diversionary stuff into the discussion so as to get out of supporting their claims. Either CSI is measurable or it isn’t. If it is, I’d like to see it done with a variety of biological and non-biological things. A common chicken egg should be easy, and if CSI is a measurable, useful metric, and is about origins, you or another IDist should be able to give me the instructions to create life from non-life, and be able to state the quantity of “Chicken-Specifying-Information” in a chicken egg.

9. SCheesman says:

Creodont: Information is one of the words in CSI (complex specified information). Adding the word origins to a discussion about CSI (or functional information) doesn’t change the fact that ID proponents claim that CSI is measurable in biological things and in many or all non-biological things. Since ID proponents are the ones claiming that CSI or functional information or information or whatever term is in fashion at the time is measurable, it should be measurable.

In fact, I am trying to clarify things by explaining what CSI is and is not. Not all information is CSI. CSI in an ID context can only be understood with respect to irreducible complexity. IC is a series of instructions that cannot be reduced if a given function is to be obtained; and IC is CSI if the number of such instructions is above a threshold which prevents its attainment through mere chance, typically 120-150 bits of information. It is the bulls-eye that must be achieved in order for something to exist. If you ask how much CSI is in a chicken, you must divide the chicken into all its constituent functions, processes, molecular machines and examine the origins of each individually to see which are IC. Like a bacterial flagellum (an example, not a part of a chicken!). You are free to deny that any part is in fact irreducibly complex (and if you didn’t, then you’d be an ID’ist). This description is hardly original. It is essentially what Richard Dawkins wrote about in Climbing Mount Improbable. CSI is the height of the cliffs that cannot be scaled on the way to the top.

In any process where there are M paths to achieve a required result and a total of N equally probable paths that can be taken, then the number of bits of information required to describe a successful path is – log_2 (N / M). One way to describe such a path is that M decisions must be correctly chosen, where each has a 50% chance of being the correct one. No it’s not easy to apply that to biological systems, but with the genetic code there is glimmer that you could begin to make such a measurement.

10. olegt says:

SCheesman: In any process where there are M paths to achieve a required result and a total of N equally probable paths that can be taken, then the number of bits of information required to describe a successful path is – log_2 (N / M). One way to describe such a path is that M decisions must be correctly chosen, where each has a 50% chance of being the correct one. No it’s not easy to apply that to biological systems, but with the genetic code there is glimmer that you could begin to make such a measurement.

This definition does not suffer from being too specific to biological objects. Can we apply it to systems in statistical mechanics? For example, take the Ising model of a magnet, where magnetic dipoles are represented by numbers +1 (up) and −1 (down). A state of lowest energy is achieved when all moments are parallel, either all up or all down.

In a system with N dipoles there are 2^N possible states. Only two of them are ground states. In order to go from an arbitrary initial state to one of the ground states, one has to choose the state of each of the N dipole and there are 2 valid answers. So it looks to me that there are M = 2 out of 2^N paths, so we are dealing with N−1 bits of specified information.

Does that make sense?
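As a quick numerical check of that count (N = 20 chosen just for illustration):

```python
from math import log2

# olegt's count: 2 ground states out of 2**N equiprobable dipole
# configurations specify -log2(2 / 2**N) = N - 1 bits.
N = 20
bits = -log2(2 / 2**N)
print(bits)  # 19.0, i.e. N - 1
```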

11. petrushka says:

I’m still a bit confused about how one determines that a biological state is a “result” rather than a state. Do we have some independent way of knowing that a configuration was a goal?

My other source of confusion is in figuring out how one knows how many paths were available and how many alternate paths would have maintained viability.

It would seem that without these you have a kind of Drake equation where all the important variables are unknown.

12. Norm Olsen says:

CSI is a useless tool that no one was looking for, designed to answer a question that no one was asking.

Consider a boulder lying at the base of a large cliff. Is the interesting question about the weathering processes that loosened the boulder from its perch at the top? Apparently, no, it’s calculating the exceedingly small probability that the boulder happened to come to rest at that exact spot and determining exactly how many times, and where exactly, the boulder struck on its way down.

We should be focusing on processes that can be understood and not probabilities that can’t be calculated.

13. SCheesman says:

olegt: In a system with N dipoles there are 2^N possible states. Only two of them are ground states. In order to go from an arbitrary initial state to one of the ground states, one has to choose the state of each of the N dipole and there are 2 valid answers. So it looks to me that there are M = 2 out of 2^N paths, so we are dealing with N−1 bits of specified information.

Before I proceed, two quick corrections to what I’ve written above: I got the sign wrong in the “number of bits” calculation, it should really be -log_2(P), where P is the probability, and P = M/N, not N/M. Secondly, the “UPB” upper probability bound is generally given as 10^120 to 10^150, where I gave that value in bits; substitute 150 * log_2 (10) bits, or roughly 500.
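As a quick check of those corrected figures (the function name is mine, for illustration):

```python
from math import log2

# The corrected formula: bits = -log2(P), with P = M / N,
# plus the upper probability bound restated in bits.
def bits_of_info(M, N):
    return -log2(M / N)

print(bits_of_info(2, 2**20))  # 19.0, matching the Ising example quoted above
print(150 * log2(10))          # about 498.3, the "roughly 500" bits figure
```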

As for the magnetic domain query, this really doesn’t fit the definition of CSI, because the final, lowest energy state is not complex. This is really the same as crystallization, or water in puddles. The process which leads to it is bound to progress to the expected result as it seeks to minimize the total energy. A better “magnetic” example would be a magnetic key card, where a code is created of 500 bits in length, with the bits defined as domains of opposite polarity. If even one bit is the wrong way, the key card doesn’t open the door. In your example above, the specification is really a single command (make all bits align, or minimize the energy), not 500 individual commands which must be fulfilled in the proper sequence, and where, in every individual bit there is no “natural preference”. There is no way to approach the final result slowly, because even having just a single bit wrong gives the same result (failure) as having the bits completely random.

There are lots of cases which are complex, but not specified, or specified, but not complex.

14. SCheesman says:

Norm Olsen: Consider a boulder lying at the base of a large cliff. Is the interesting question about the weathering processes that loosened the boulder from its perch at the top? Apparently, no, it’s calculating the exceedingly small probability that the boulder happened to come to rest at that exact spot and determining exactly how many times, and where exactly, the boulder struck on its way down.

How do you determine if a rock wall is artificial, or man-made?

15. Geoxus says:

petrushka: I’m still a bit confused about how one determines that a biological state is a “result” rather than a state. Do we have some independent way of knowing that a configuration was a goal?

I think that is the most fundamental misconception behind ID. Until they do adopt concrete assumptions about the designer’s intent*, they are painting the target around the arrow. In this sense, plain and explicit creationism is better science than ID. Creationism commits to specific design models, sometimes even falsifiable (and falsified) models.

* Most likely, such assumptions would be quite difficult to justify, and, more importantly, prohibited by the legal strategy from which ID was born.

16. Geoxus says:

I’m quite late to this, but happy birthday Elizabeth!

17. SCheesman says:

petrushka: I’m still a bit confused about how one determines that a biological state is a “result” rather than a state. Do we have some independent way of knowing that a configuration was a goal?

This is a great question, but such a can of worms that I beg leave to attempt to address it at another time, where we can hash it out better — perhaps on its own thread, in a few weeks after this thread is truly put to rest.

18. Thorton says:

SCheesman: How do you determine if a rock wall is artificial, or man-made?

You start by identifying the attributes of the hypothesized designer, in this case humans. You compare the formation under examination with other known-human-designed ones. You search for signs of human workmanship like tool marks. You also factor into your decision whether humans were present at the time/place of the formation’s origin, and whether those humans had the technology to perform the wall building.

You have to consider the designer as well as the object.

19. petrushka says:

Take your time, but unless you can actually produce the list of possible histories (not just the possible histories of the current state, but also the possible histories that would be viable) there is nothing to calculate and no reason even to discuss calculations.

The Lenski experiment indicates that even in a tiny laboratory population, all possible histories can be tested in a fraction of a human lifetime, so I have trouble figuring out what the ID hypothesis is about.

If there is even one possible history, it can be found by known processes. But there is no reason to assume that the path taken was intended. From the fossil record it appears that most paths lead eventually to extinction. The whole process resembles water finding a path downhill.

20. Geoxus says:

SCheesman: How do you determine if a rock wall is artificial, or man-made?

Well, that is quite easy. Compare it to known man-made rock walls and look for marks of known human tools on it. Of course, that is not to say the determination would be infallible. One might not realise the human origin of a badly eroded rock wall, or a rock wall could be designed to mimic “natural” rock walls.

21. Mike Elzinga says:

When Joe Felsenstein maps a relationship between fitness and gene frequency, he is making a connection between two objectively measurable quantities in an evolving species and showing how natural selection does in fact shift the distribution of the frequencies of certain genes that are related to the distribution of physical characteristics of the members of the population.

One could make a number of mathematical mappings from fitness to these specific gene frequencies that may or may not highlight some patterns; but calling the mapping “information” is problematic even if it has become a convention within some of the subdivisions of population studies.

Where the problem arises is in the misconceptions that ID/creationists have regarding the relationships between things like genes or molecular bonding arrangements and specific phenotypes. The hidden assumption is that the phenotype was a specific target and that there are only a few arrangements of molecules in a DNA sequence out of an essentially infinite number of possible arrangements that lead to that specific phenotype.

This notion again goes back to at least two major misconceptions: (1) it is all “spontaneous molecular chaos” at the atomic and molecular level, and (2) the specific phenotype in question is the necessary outcome of evolution from a given starting arrangement of molecules.

Note that the misconceptions never consider “first cousins,” “second cousins,” or other branches of an evolutionary bush on which different paths to other phenotypes were taken. The arrangements of molecules are constrained by the rules of chemistry and physics. And of the many possible arrangements of these molecules consistent with physical constraints, a great many of these lead to something that finds a niche in the environment that allows them to survive also.

This allows one in retrospect to try to find the mappings from the fitness of each of these organisms in each of their environments to some frequency of certain alleles in their particular phenotype.

ID/creationist misconceptions always turn a blind eye to the billions of species of living organisms that exist and have existed on this planet. There are literally billions of directions evolution can take given the billions of other contingencies that nudge an evolving system in those various directions.

The demand that ID/creationists make to “demonstrate the exact evolutionary path from a given arrangement of molecules to an exact phenotype” is betraying fundamental ID/creationist misconceptions and misrepresentations of evolutionary processes.

There aren’t many paths to a specified phenotype; there are many paths to phenotypes that are similar enough to be in a common gene pool. That is why there are such things as distributions of given characteristics.

It is no different from any measurement of most quantities in the universe. There are many paths that lead to a distribution around a given mean, all of which constitute the same measurement which we specify by a mean and a standard deviation. If natural selection trims a tail of a distribution, the mean is shifted.

22. olegt says:

SCheesman: As for the magnetic domain query, this really doesn’t fit the definition of CSI, because the final, lowest energy state is not complex. This is really the same as crystalization, or water in puddles.

That’s too bad. The Ising-model example otherwise fit your previous definition perfectly. Oh well, I must say that your answer wasn’t entirely unexpected. 🙂

The process which leads to it is bound to progress to the expected result as it seeks to minimize the total energy.

Exactly. Energy of the dipoles would be the equivalent of fitness. We could then run a Monte Carlo simulation, which mimics Darwinian evolution. It combines random mutations (attempts to flip a dipole at random) with natural selection (bias toward accepting those moves which lower the energy). In this way, the ground states would be easily reached in polynomial time. In this example, a Monte Carlo process would create N−1 bits of specified information.

In fact, the model need not have simple ground states as in the case of a ferromagnet. Magnets where interactions have randomized signs (spin glasses) would have complex ground states that are not easy to determine, or even describe. Nonetheless, a Monte Carlo process would lead the system toward states of lower energy. As long as the fitness landscape is not too rugged, this approach will work. The system can be caught in metastable minima at times, but as long as the temperature is finite, it will be making progress. We can discuss that, too, if there is interest.
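A minimal sketch of such a Monte Carlo run, on a small 1D Ising ring (the size, temperature, and step count are illustrative choices, not from the comment):

```python
import math
import random

# Metropolis Monte Carlo on a 1D Ising ring: random single-dipole flips
# ("mutation") plus a bias toward lower energy ("selection") drive the
# system toward its all-up/all-down ground states with energy -N.
def energy(s):
    return -sum(s[i] * s[(i + 1) % len(s)] for i in range(len(s)))

def metropolis(N=20, T=0.1, steps=50_000, seed=2):
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(N)]
    e = energy(s)
    for _ in range(steps):
        i = rng.randrange(N)
        # energy change from flipping dipole i (its two ring neighbours)
        dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % N])
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i] = -s[i]
            e += dE
    return s, e

s, e = metropolis()
print(e)  # typically close to the ground-state energy -20
```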

A better “magnetic” example would be a magnetic key card, where a code is created of 500 bits in length, with the bits defined as domains of opposite polarity. If even one bit is the wrong way, the key card doesn’t open the door. In your example above, the specification is really a single command (make all bits align, or minimize the energy), not 500 individual commands which must be fulfilled in the proper sequence, and where, in every individual bit there is no “natural preference”. There is no way to approach the final result slowly, because even having just a single bit wrong gives the same result (failure) as having the bits completely random.

The key example has nothing to do with complexity. It merely has an extremely rugged landscape, in which any optimization algorithm works as well as a random search. This straw man has nothing to do with evolution.

23. SCheesman says:

On the contrary, it has much to do with evolution. You must explain how functional proteins can be created due to the random mutation of a genetic code, when from all appearances the number of combinations that produce even one that folds is vanishingly small. Then you have to get an ensemble of the right ones performing the right tasks to get even the simplest of the molecular machines we observe. Your point about any optimization algorithm working as well as a random search in a rugged landscape is exactly on point with what I am saying. That is where the talk of an upper probability bound, irreducible complexity, and probabilistic resources comes in. It is the contention of ID that when it comes to a molecular machine the ruggedness of that landscape makes design the only alternative, and talk of evolving such an outcome has as much hopefulness as alchemy.

The great thing is that this contention is quite falsifiable, if you can come up with even one (not even necessarily the actual) smooth pathway to evolve it from a substantially simpler state. The fact that no one has come close yet is not fatal to the unguided evolutionary hypothesis, so in the fullness of time you must expect that these pathways will be discovered. Forgive my scepticism while I wait.

24. olegt says:

SCheesman: It is the contention of ID that when it comes to a molecular machine the ruggedness of that landscape makes design the only alternative, and talk of evolving such an outcome has as much hopefulness as alchemy.

Great, so at least we now understand what your objection boils down to. We all agree here that an evolutionary algorithm cannot work in an entirely random fitness landscape exemplified by a password (fitness is zero unless all bits are correct).

25. Mike Elzinga says:

SCheesman: It is the contention of ID that when it comes to a molecular machine the ruggedness of that landscape makes design the only alternative, and talk of evolving such an outcome has as much hopefulness as alchemy.

Do you know what the ratios of electrical forces to gravitational forces among atoms and molecules are?

26. SCheesman says:

Mike Elzinga: The demand that ID/creationists make to “demonstrate the exact evolutionary path from a given arrangement of molecules to an exact phenotype” is betraying fundamental ID/creationist misconceptions and misrepresentations of evolutionary processes.

I’d be happy with ANY path. I say none exists.

Here’s an easier question: What was the first protein? What did it do? What is the function of a single protein all by itself?

27. SCheesman says:

In fact, yes, that is what it all boils down to. I’ve tried to make this point a hundred different ways. Elizabeth understands this. CSI, irreducible complexity, upper probability bounds, probabilistic resources are all about the landscape. When it comes to molecular machines, multiple proteins, each performing a specific function in a greater whole, where the lack of any renders the machine functionless, and that have the added property that they allow the machine to virtually build itself from scratch… I’m saying there is no fitness landscape to climb — it’s a vertical wall all the way up Mount Improbable on all sides – more like a bin standing on its end.

28. SCheesman says:

“pin”, not “bin” — but I guess they both have smooth, steep sides.

29. SCheesman says:

Well, it’s very large. Why do you ask?

30. SCheesman says:

Mike Elzinga: Do you know what the ratios of electrical forces to gravitational forces among atoms and molecules are?

(Sorry, missed the quote for others)

Yes, it is very large; why do you ask?

31. olegt says:

SCheesman: In fact, yes, that is what it all boils down to. I’ve tried to make this point a hundred different ways. Elizabeth understands this. CSI, irreducible complexity, upper probability bounds, probablilstic resources are all about the landscape.

OK then. Your objection has nothing to do with the topic discussed in this thread. Joe’s post is about natural selection. Natural selection does not work in random landscapes. We are not dealing with this here. A separate thread can be opened for your topic.

32. petrushka
says:

SCheesman: I’m saying there is no fitness landscape to climb — it’s a vertical wall all the way up Mount Improbable on all sides – more like a bin standing on its end.

This is, of course, totally contradicted by the evidence. Look at the Evolution on a Chip experiments, which show that function is not at all isolated.

33. SCheesman
says:

Mike Elzinga: Do you know what the ratios of electrical forces to gravitational forces among atoms and molecules are?

Actually, the comparison is apt. In alchemy, it was the disparity between the nuclear and electromagnetic forces that prevented the alchemists from transmuting base metals into gold. Today, it is the vast disparity between the probabilistic resources of simple mutational changes and the enormous task of creating self-assembling molecular machines from an ensemble of proteins which is the insuperable barrier. But believe if you must… biochemists are no doubt on the verge of proving me wrong.

34. SCheesman
says:

petrushka: The Lenski experiment indicates that even in a tiny laboratory population, all possible histories can be tested in a fraction of a human lifetime, so I have trouble figuring out what the ID hypothesis is about.

We ID/Creationists love Lenski. So far everything he has found confirms what we believe about the evolvability of the genome. The apple does not fall far from the tree, and the way it falls really is a function of simple probabilities – no “magic” leaps across functional space.

35. SCheesman
says:

Well that should stir the pot for a while… I shall check in again in a few days…thanks all for contributing.

36. SCheesman
says:

petrushka: This is, of course, totally contradicted by evidence. Try looking at the Evolution on a Chip evidence, that function is not at all isolated.

They are not trying to create molecular machines on a chip. It is the nature of the problem that is important. I have never said that evolution is impossible for the simple problems posed in ev-on-a-chip experiments. These are little bumps on the ground in comparison — Lenski-like changes.

37. SCheesman
says:

olegt: OK then. Your objection has nothing to do with the topic discussed in this thread. Joe’s post is about natural selection. Natural selection does not work in random landscapes. We are not dealing with this here. A separate thread can be opened for your topic.

OK, that’s fine. I’m happy to oblige, but cannot open a thread on my own. I did acknowledge the power of natural selection to add strictly limited amounts of information right from the start (perhaps more than some of my colleagues might), but since then have simply tried to respond to others’ comments.

38. petrushka
says:

The issue is the connectability of function. Address the issue. Same with Lenski. Address the issue posed by the facts versus your vertical wall isolating function.

39. Mike Elzinga
says:

SCheesman: it is the vast disparity between the probabilistic resources of simple mutational changes and the enormous task of creating self-assembling molecular machines from an ensemble of proteins which is the insuperable barrier. But believe if you must… biochemists are no doubt on the verge of proving me wrong.

I was asking to find out if you know anything about chemistry and physics. As suspected, it appears that you don’t. We just had a thread that made the comparison.

Do you know that it is not all “spontaneous molecular chaos” at the atomic and molecular level? Do you also know that assemblies build up from other assemblies? They don’t go jumping all over the map and reassemble every time they change a little bit. High school students know this.

Your description of a “landscape” has nothing to do with reality, or with this thread.

Since you came into the discussion just to “stir the pot” and run away laughing, pick up some chemistry and physics textbooks and study them while you are gone.

40. llanitedave
says:

SCheesman: I’m saying there is no fitness landscape to climb — it’s a vertical wall all the way up Mount Improbable on all sides – more like a bin standing on its end.

You say this, but how does it relate to reality? It’s just a pulled-out-of-thin-air assertion.

41. SCheesman
says:

Mike Elzinga: Do you know that it is not all “spontaneous molecular chaos” at the atomic and molecular level? Do you also know that assemblies build up from other assemblies? They don’t go jumping all over the map and reassemble every time they change a little bit. High school students know this.

This is sheer obfuscation. Any evolutionary change must occur through mutation of the genetic code. And I do have a Ph.D. in physics.

42. Mike Elzinga
says:

SCheesman: This is sheer obfuscation. Any evolutionary change must occur through mutation of the genetic code. And I do have a Ph.D. in physics.

Sorry, but it doesn’t show in the least.

43. SCheesman
says:

llanitedave: You say this, but how does does it relate to reality? It’s just a pulled-out-of-thin-air assertion.

At this point I am merely restating the position of both sides of the debate. As for evidence, ID takes what we readily observe, that machines are universally the result of intelligent design and construction, as is coded and transcribed information, and extends it to the biological world. We assert that life cannot spontaneously arise from non-life. The best attempts of many brilliant biochemists have only demonstrated the truth of this assertion. If and until they make a great deal more progress on OOL, and indeed the origin of any observed molecular machine, as the result of subsequent stochastic mutational processes in the genetic code, it is not me that is pulling things out of thin air.

44. olegt
says:

SCheesman: And I do have a Ph.D. in physics.

Oh. I am so impressed.

45. olegt
says:

SCheesman: If and until they make a great deal more progress on OOL

Isn’t origin of life a separate problem from biological evolution? You know, in physics (that field in which you have a Ph.D.), electromagnetic theory postulates the existence of electric charges without shedding light (so to speak) on their origin. No one complains.

46. Thorton
says:

SCheesman: As for evidence, ID takes what we readily observe, that machines are universally the result of intelligent design and construction,

Equivocation over the definition of ‘machine’ noted.

as is coded and transcribed information,

Equivocation over the definitions of ‘coded’ and ‘information’ noted.

We assert that life cannot spontaneously arise from non-life.

Vague, lack of precise definitions of ‘life’ and ‘non-life’ noted.

If and until they make a great deal more progress on OOL, and indeed the origin of any observed molecular machine, as the result of subsequent stochastic mutational processes in the genetic code, it is not me that is pulling things out of thin air.

The “Designer of the Gaps” argument was dead and buried twenty years ago. Didn’t you get the memo?

47. Mike Elzinga
says:

SCheesman: The best attempts of many brilliant biochemists have only demonstrated the truth of this assertion. If and until they make a great deal more progress on OOL, and indeed the origin of any observed molecular machine, as the result of subsequent stochastic mutational processes in the genetic code, it is not me that is pulling things out of thin air.

It is you pulling things out of the air.

Given all the examples of complexity and condensed matter in the universe, what laws of physics stand in the way of abiogenesis? Do you have some kind of insurmountable barrier in mind?

We hear all sorts of things about various kinds of “barriers” from ID/creationists, but no ID/creationist seems to be able to articulate what these are and at what point they kick in. Nor can they tell us where along the chain of complexity the laws of chemistry and physics stop operating.

Why are you intimidated by the fact that the origin of life is such an interesting research problem?

48. llanitedave
says:

SCheesman: We assert …

And that’s really all you have.

49. SCheesman
says:

Mike Elzinga: Why are you intimidated by the fact that the origin of life is such an interesting research problem?

I never said it wasn’t interesting. I find the current results rather comforting, given my position. The gap between what we know “nature” can accomplish and what we know is required only continues to widen, and OOL research keeps finding new avenues that turn out to be unpromising. Uninteresting? Hardly!
