The blogs of creationists and ID advocates have been buzzing with the news that a new paper by William Basener and John Sanford, in Journal of Mathematical Biology, shows that natural selection will not lead to the increase of fitness. Some of the blog reports will be found here, here, here, here, here, and here. Sal Cordova has been quoting the paper at length in a comment here.
Basener and Sanford argue that the Fundamental Theorem of Natural Selection, put forward by R.A. Fisher in his book The Genetical Theory of Natural Selection in 1930, was the main foundation of the Modern Evolutionary Synthesis of the 1930s and 1940s, and that when mutation is added to the evolutionary forces modeled by that theorem, it can be shown that fitnesses typically decline rather than increase. They argue that Fisher expected increase of fitness to be typical (they call this “Fisher’s Theorem”).
I’m going to argue here that this is a wrong reading of the history of theoretical population genetics and of the history of the Modern Synthesis. In a separate post, in a few days at Panda’s Thumb, I will argue that Basener and Sanford’s computer simulation has a fatal flaw that makes its behavior quite atypical of evolutionary processes.
Was the mathematics of natural selection, and the mathematics of mutation, ignored in theoretical population genetics until Fisher’s 1930 book? Well, actually, no. Here is the major work on this before 1930:
1. In 1903, three years after the rediscovery of Mendel’s work, the mammalian geneticist William Ernest Castle showed in Proceedings of the American Academy of Arts and Sciences a numerical calculation of the elimination of a lethal recessive allele from a population.
2. In 1915, in an Appendix to a book Mimicry in Butterflies by the well-known geneticist R. C. Punnett, H. T. J. Norton showed numerical calculations for a case of natural selection, showing that selection was effective in favoring an advantageous allele. Norton’s mathematical equations were not given until later, in 1928. Jennings (1916) and Wentworth and Remick (1917), in papers in Genetics, did further work on the elimination of recessive lethal alleles.
3. In 1922, R. A. Fisher published a major paper in the Proceedings of the Royal Society of Edinburgh, showing the algebra of natural selection for dominant alleles and for alleles of intermediate dominance, as well as the effects of mutation and of genetic drift (which he called the “Hagedoorn effect”). His treatment of genetic drift was pioneering, but made a technical mistake later corrected by Sewall Wright in 1929.
4. J. B. S. Haldane, starting in 1924, published a numbered series of papers under the general title “A mathematical theory of natural and artificial selection”, the first in Transactions of the Cambridge Philosophical Society and all the rest except the 10th in Proceedings of the Cambridge Philosophical Society. These treated many cases of natural selection and different mating systems.
5. In his 1927 paper in that series, whose subtitle is “Selection and mutation”, Haldane gives the probability of fixation of a new favored mutant when it is present in just a single copy in the presence of genetic drift. For infinite populations where genetic drift is absent, he derived the equilibrium frequency of a mutant allele when its increase is countered by natural selection.
6. In a paper in 1928 in American Naturalist, R. A. Fisher put forth an argument that natural selection would alter the degree of dominance of a deleterious allele that was recurring by mutation. Sewall Wright and he then debated this back and forth in that journal in 1929, with Wright arguing that the strength of selection on modifiers of dominance would be too weak to be effective, and that the recessiveness of many mutants was inherent in the biochemical kinetics of the genes. (Wright was backed up in this later by Haldane and by H. J. Muller).
7. Wright was already at work on the distributions of gene frequencies under natural selection, mutation, migration, and genetic drift. This work, which was the foundation of modern work using diffusion equations, was not published in full until 1931. An abstract Wright published in 1929 shows that Wright had many of the results by then.
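Two of the classical results mentioned above (from Haldane’s 1927 paper, item 5) can be stated compactly. These are the standard textbook forms, not quotations from the original papers:

```latex
% Haldane (1927): probability of ultimate fixation of a new mutant
% with small selective advantage s, present initially in one copy,
% in a large population:
P_{\text{fix}} \approx 2s

% Mutation--selection balance: equilibrium frequency \hat{q} of a
% deleterious allele produced by mutation at rate u per generation
% and removed by selection of strength s:
\hat{q} \approx u/s             % (partially) dominant allele
\hat{q} \approx \sqrt{u/s}      % fully recessive allele
```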
Conclusion: the mathematics of mutation and natural selection had been well worked-out before R. A. Fisher published his 1930 book. That book puts forward many important and original arguments in addition to summarizing in verbal form the mathematics of natural selection and mutation. The Fundamental Theorem of Natural Selection is one of the least consequential things in the book — Fisher did not give a precise derivation, and what the terms mean has been the subject of a recent literature, with papers by the late George Price, by Anthony Edwards, and by Warren Ewens. The conclusions leave considerable doubt as to the fundamentalness of the theorem.
Thus the literature on the theory of natural selection, of mutation, and of their joint action, did not wait until 1930, and in its 1920s development did not rely at all on the Fundamental Theorem of Natural Selection. In addition, “Fisher’s Theorem”, so-called by Basener and Sanford, will not be found in Fisher’s work — he was in fact quite critical of Sewall Wright’s 1932 arguments that highlighted maximization of mean fitness as a major principle in evolutionary genetics.
I hope to follow this post up with one at Panda’s Thumb in the next few days, showing that the ineffectiveness of natural selection in Basener and Sanford’s simulations comes from an unfortunate choice of the parameters in their simulation.
In the Lenski study, the evolution of the ability to metabolize citrate aerobically involved the destruction of no functions. It involved a gene duplication as an early step, allowing the ancestral gene to continue performing its original function.
It seems your information about the study is only gleaned from creationist propaganda.
Nobody claims it shows an accumulation of functions. It does show new functions evolving, though there is a net loss of functions in the experiment. But the reasons are well understood: the environment is very simple, so many of the genes that E. coli depends on in its natural habitat are not necessary in the flask environment, and are gradually lost to mutations due to the absence of purifying selection on those genes. Despite this, fitness compared to the ancestor is continuously increasing. It is becoming better and better adapted to that environment.
A more realistic and complex environment with many novel potential resources and natural cycles would have the opposite effect, as those would be the substrate for adaptive innovation.
Here’s a quote from Will Provine, from his book The “Random Genetic Drift” Fallacy, which is, in my view, more of a biography of Sewall Wright.
The Hagedoorns, inbreeding, and “random genetic drift”.
The mammalian geneticists A. L. and A. C. Hagedoorn, whose favorite experimental organism was the rat, pointed out the effects of inbreeding in natural populations. Castle, his students Sewall Wright and C.C. Little, and geneticist/embryologist Helen Dean King of the Wistar Institute in Philadelphia, thought the genetic work of the Hagedoorns lacked precision (Wright, personal interview). Like Wright, the Hagedoorns had a deep interest in evolution in nature, and used their expertise in mammalian genetics and breeding to infer evolutionary processes. The views of the Hagedoorns on evolution in nature bore a basic similarity to Wright’s views in 1931 and 1932, but one would never guess this from Wright’s dismissal of their work. The Hagedoorns’s book did not cite the work of East, Jones, or Wright.
Fisher read the views of the Hagedoorns in their 1921 book, The Relative Value of the Processes Causing Evolution. Their fundamental argument was that Darwinian or artificial selection and Mendelian breeding both caused a reduction of chromosomal variation in isolated populations. An isolated population just had many fewer numbers; this meant they had many fewer of the chromosomes*. Selection eliminated inferior hereditary factors stemming from inbreeding, but could not explain most of the observable differences between closely related geographical variations of a natural population in nature. Natural selection caused an inevitable reduction of chromosome variability with only a small proportion of the population produced the next generation, or spawned founder populations in nearby environments.
Chromosomes* were eliminated because of inbreeding combined with Mendelian inheritance. In the chapter, “Reduction of variability,” the Hagedoorns gave many examples of census size being far larger than actual breeding size, citing evidence from pigs, rats, field-mice, flies, common flowering plants, wheat, and many other organisms. Suppose, they theorized, two very small populations, derived from the same large population, were established on two islands. Soon they would differ consistently from each other. The fact that islands are frequently found to have species of plants or animals which exist nowhere else, need not be taken as proof for the adaptation of these species to the conditions on those islands. To explain how all the individuals on one island have come to be pure for one set of characters, we need not ascribe any selection value to those characters. . . .
A group of organisms may become pure for a genotype which causes them to possess some organ or peculiarity, which in their present mode of life is absolutely useless. (A.L. and A.C. Hagedoorn 1921: 123-124)
The Hagedoorns used the equivalent of the production of inbred lines of maize or guinea pigs; inbreeding was less, but had the same basic result.
The Hagedoorns use the term “random sampling” only once in the book: “Even in those cases where colonization is random sampling, the sample will seldom be wholly representative” (Hagedoorns 111). They meant a “random sampling” of whole organisms as a founder population. For the Hagedoorns, inbreeding was the clue leading them to understanding evolution in nature.
What drew Fisher to the Hagedoorns’ book was their argument that in small populations of rats on different islands, their favorite example, the differentiation had nothing to do with natural selection. They just differed by their fixed chromosomes* as did inbred strains of maize or guinea pigs. Fisher thought that the Hagedoorns underestimated the importance of natural selection in these populations. Fisher disagreed with the Hagedoorns about the role of selection in small inbred populations of rats. Fisher wanted to reject the Hagedoorns’ thesis that differences between inbred populations were in any sense random. The Hagedoorns’s view of speciation pointed to Moritz Wagner as their intellectual predecessor (Wagner, 1868), but the work of John T. Gulick on Hawaiian snails (Gulick 1888), or many other taxonomists, who clearly believed as the Hagedoorns, suggested most characters of closely related species had little to do with natural selection and were basically random.
Provine, William. The “Random Genetic Drift” Fallacy (p. 20). Kindle Edition.
*I think Provine means alleles or genes, rather than chromosomes. This book was written shortly before his untimely death from brain cancer and it has some rough edges.
ETA I see The Relative Value of the Processes Causing Evolution is still in print.
Thanks. I do think that the Hagedoorns did not develop any population genetic theory to describe genetic drift. Fisher developed distributions of gene frequencies under mutation and selection (1922), and Wright worked out distributions of gene frequencies under genetic drift in his great 1931 paper, which corrected an error by Fisher. Wright also developed his 1921 work on inbreeding coefficients in the 1930s into a highly usable method for describing the time course of genetic drift, and he defined effective population size and worked out formulas for it.
By contrast, the Hagedoorns’ contributions were nonmathematical verbal descriptions, albeit correct ones.
There are quite a few other references to the Hagedoorns, which Provine uses to support his contention that genetic drift is a fallacy.
Cross-posted!
I wish people would be consistent in the use of the words alleles and genes.
petrushka,
Who’s that aimed at?
First speaking of the OP and Fisher’s theorem:
http://www.nbi.dk/~natphil/salthe/Critique_of_Natural_Select_.pdf
So this relates somewhat to my following responses to Corneel.
No, but I don’t have to, because in principle if everyone has a different heritable disease, but more of them over time, that is genetic deterioration. Someone has a bad liver, another a bad kidney, another a bad pancreas, etc. If that sort of thing increases over time, that is genetic deterioration, and the sort that will negate the fixation of even ONE beneficial mutation. That is the implication of Muller’s limit and the rather simple math derived above. I don’t know of any major population geneticist who thinks the human genome is improving, nor any multicellular eukaryotic genome. Lots of them think the human genome is accumulating damage.
As far as examples of mutational meltdown, we have some experiments which I mentioned to Dr. Sanford, but he didn’t seem impressed with them for some reason. Here is one:
https://www.ncbi.nlm.nih.gov/pubmed/11430651
As an aside, I thought the mouse utopia experiments were equally interesting:
I also passed the following papers, which deal with plants, on to him. We haven’t talked about them that much. John said it is hard to establish pure meltdown as the mode of extinction, because when the population gets small, it’s hard to tell what actually finishes it off.
But that said, here are some papers to consider which I mentioned to John in process of my news gathering.
http://www.nature.com/articles/338065a0
and then another one I passed on to John recently but haven’t had the chance to discuss with him. He gave me a call yesterday to get the latest on my reporting activities and news gathering, and we really didn’t talk about the paper by Wiens nor the paper he co-authored with Bill Basener. I informed him of discoveries on tandem repeats, centromeres, lots of the stuff that I discuss at TSZ like the phosphoproteome, the many introns humans share with plants rather than animals (A. thaliana vs. fruit flies), the vindication of the 80-megabyte RAM figure I published in 2015/16 by an article in PLOS, the stuff on nylonases, etc.
But Wiens wrote this paper which suggests quasi meltdown as a contributing mechanism to today’s mass extinction. It parallels the problem I pointed out with Lenski’s experiments which Stanley Salthe, interpreting Fisher’s theorem (above), saw years ago. The creatures under selection get so specialized that when an environmental perturbation happens, they get wiped out. Scott Minnich pointed out all those bacteria Lenski evolves and keeps boasting of fitness improvement won’t survive in real environments. Salthe connected this to Fisher’s theorem.
But anyway, here is Wiens’s paper that echoes Salthe on Fisher’s theorem, and thus indirectly connects Fisher’s theorem to a mechanism of extinction:
http://onlinelibrary.wiley.com/doi/10.1111/j.1095-8312.2011.01819.x/references
But one does not need mutational meltdown (extinction) to have substantial reductive evolution. Both theoretically and empirically this has been confirmed on many levels:
http://onlinelibrary.wiley.com/doi/10.1002/bies.201300037/full
That agrees with the formulas I posted above and with Basener and Sanford’s paper for normal modes of evolution. Whatever those “punctuated episodes of complexification” are, it is clearly something that no mechanistic model I’m aware of predicts and no direct experiment has demonstrated. However, as Behe pointed out (which even Coyne agreed was accurate), lots of reductive (destructive) evolution has been observed, more on average than constructive evolution.
If one doesn’t like the criticism of Fisher’s theorem because supposedly Fisher’s work was shown to be not so fundamental, one might try to criticize the other theorems listed in Joe Felsenstein’s book, Theoretical Evolutionary Genetics, such as the one we discussed here at TSZ a while back:
Joe was very kind to respond:
Ok, the reason I focused on this was this related to what Joe said in the book page 90:
As far as I can tell, that is true if no bad mutations are added continuously, as in the Muller/Graur meltdown scenarios related to the equations above, like the one from Kimura which I derived above:
It’s a bit problematic to merge these conflicting notions into one equation! Maybe Bill Basener can re-work the Sewall Wright equation like he did for the Fisher equation. Bwahaha!
Bottom line, some equation should be able to model the mutational meltdown of the yeast experiment. Clearly that is a scenario that can happen since it is experimentally demonstrated.
I never had the chance to pursue studying Joe’s book in more detail, but reading his other works, I regret not spending more time admiring and pondering the beautiful math and clear and lucid writing it contains. I also regret having so many sharp disagreements with Joe, but I (and many other creationists) have always admired his math.
And why wouldn’t adaptive evolution proceed in a similar manner? Many traits are quantitative and polygenic, so these would evolve by subtle allele frequency changes at many loci and it would be hard to single out an isolated beneficial mutation.
Haha, you cited one of Arjan’s (de Visser) papers. Did you know he became very impressed with the power of purging selection during those experiments? 😀
Also you seemed to have missed this:
Reductive evolution is not the same as destructive, Sal. The very abstract you quote mentions that reductive evolution is often neutral or adaptive.
Kind of like how cars no longer have cranks on the front that you have to wind up before they start, right? This does not mean cars have been destroyed as a result.
stcordova,
Great work Sal! As usual…:-)
I have been admiring your keen insight into this matter, but I’m confused…
Are you saying that if I read Joe’s book, all those issues with Fisher’s shortcomings will be resolved? Will the fact that the human genome is deteriorating, rather than increasing in fitness, be resolved by Joe’s speculations?
BTW: You should definitely look into quantum mechanics; i.e. quantum coherence controlled mitosis and mutations… you will be blown away…
😀
Are you?
Thanks for the replies to my comment, especially to Gordon Davisson. Synonymous substitutions are a nice example.
Consider a simplified model of a genome. At each site there are two possible alleles, ‘good’ and ‘bad’. I will use y to mean the absolute difference in fitness between the two alleles. (I should talk about a random variable Y and a value y, but I’m just going to munge them together.) Suppose g(y) is the probability density for the distribution over sites. If the site is good, the only possible mutation is a deleterious one with selection coefficient -y. If the site is bad, the only possible mutation is a beneficial one with selection coefficient y. Assume all mutations, good to bad, bad to good, at all sites, occur at the same rate u. (I’m also ignoring dominant/recessive issues and other things which I think are inessential complications.)
What we want is the density f(s) for the selection coefficient s for a new mutation. The density for |s| is g(|s|).
I’ll denote the Basener and Sanford choices for g and f with BS. So g_BS(y) = Gamma(y; 500, 0.5), and f_BS(s) = .999 g(s) for s < 0, f_BS(s) = .001 g_BS(s) for s > 0. I think there are problems with both their choice of g(y) and the relationship they chose between g(y) and f(s).
If we assume an effective population N=10000 for humans, then g_BS implies that Pr(y > 10/4N) = Pr(y > 0.00025) = .62. That would mean 62% of our genome is subject to strong purifying selection. (Selection starts to be effective around 1/4N. By 10/4N it is pretty powerful.) We don’t see anything like that much conservation. There’s also a problem at the other end of the distribution. g_BS implies that Pr(y > .1 ) = 1.5e-23, so that mutations with selection coefficient less than -.1 would hardly ever occur.
I don’t think that’s the real problem though. The Pr(y > 0.00025) = .62 issue could be avoided by assuming that 90% or more of our genome is completely useless junk, with y=0, and that g_BS only applies to the remainder. The Pr(y > .1 ) = 1.5e-23 issue is not why their simulations gave the results they did.
I still reckon the real problem is what I said earlier, but now I can make it more precise. Given my simple genome model, and Nu << 1, we expect that a site will spend about a fraction 1/(1+exp(4Ny)) of its time with the bad allele, 1/(1+exp(-4Ny)) of its time with the good allele, and a small fraction of time transiting between. A more sensible relationship between g(y) and f(s) is f(s) = g(|s|)/(1+exp(4Ns)).
Graham Jones, http://www.indriid.com
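Graham’s numbers can be checked with a short script. Reading Gamma(y; 500, 0.5) as rate 500 and shape 0.5 (which reproduces the tail probabilities he quotes), a Gamma(0.5, rate λ) variable is a scaled chi-square(1), so the tail probability reduces to a complementary error function. This is a sketch under those parameter assumptions, using only the standard library:

```python
import math

# Tail probability of Gamma(shape=0.5, rate=500):
# if Y ~ Gamma(shape 0.5, rate L), then 2*L*Y ~ chi-square(1),
# hence Pr(Y > y) = erfc(sqrt(L*y)).
def gamma_sf(y, rate=500.0):
    return math.erfc(math.sqrt(rate * y))

N = 10000  # assumed human effective population size, as in the comment

print(gamma_sf(10 / (4 * N)))  # Pr(y > 0.00025), approx 0.62
print(gamma_sf(0.1))           # approx 1.5e-23

# Density of |s| and the proposed reweighting f(s) = g(|s|)/(1+exp(4Ns)):
# beneficial mutations (s > 0) are strongly down-weighted,
# deleterious ones (s < 0) are up-weighted. (Defined for s != 0.)
def g(y, rate=500.0, shape=0.5):
    return rate**shape * y**(shape - 1) * math.exp(-rate * y) / math.gamma(shape)

def f(s, N=10000):
    x = 4 * N * s
    if x > 700:   # avoid overflow in exp(); the weight is effectively zero
        return 0.0
    return g(abs(s)) / (1.0 + math.exp(x))
```

With these definitions, f(-0.001) exceeds f(0.001) by many orders of magnitude, which is the point of Graham’s proposed relationship.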
Citation please.
Given the rate of deterioration J-Mac, Sal, can you work backwards from that rate to determine a creation date for when the genome was perfect?
I’m sure you know how to make a graph. Can you? Will you?
I admire the way that telic language oozes from every utterance.
Control: the place from which a system or activity is directed or where a particular item is verified
is one definition.
It’s plain that your use of “control” is not in the sense of an entailment, but one where what is controlled is being consciously directed.
There’s utterly no evidence of this, but you blithely carry on without even realizing the depths to which you are unable to think objectively about things. Why should Sal look into that particularly? Is that how you think “god does it”? Does god change hidden quantum variables in order to get the results he wants? Is that what you think?
Then the question I have for you is how is it then that such interference results in a setup where the mathematics of mutation and natural selection can be determined? If the results of mutation are simply down to a capricious god then how can it be modeled mathematically?
Unless, of course, you demote your deity to an entity that replaces physics?
Yes, sickle cell anemia and blindness in cave fish are adaptive, but one can say destructive evolution can be adaptive the way blowing up a bridge in the process of retreating from the battlefield is beneficial for the retreating army. “Adaptive” in real-world evolution is destructive of function: Behe’s first law of adaptive evolution, “most adaptive evolution involving mutation is loss of function” (my formulation of Behe’s law).
I like Behe’s law of adaptation much better than the concept of Irreducible Complexity.
I guess I missed it. 🙂 I’ll have to ask John why he didn’t like that paper even though it demonstrates genetic entropy.
A mutation does not have to become fixed in order to classify it as deleterious or beneficial, as far as I know. That was the problem Kondrashov was agonizing over, the fixation of slightly deleterious alleles.
I based that on Sal’s claim that the majority of population geneticists see it.
I definitely see it in my area of interest: the increase of cancer cases due to mutations… How could we be evolving if cancer rates are drastically increasing? 60 years ago Japan had 18 cases of prostate cancer in the entire country, per year. Today 13% of their men are diagnosed with it every year…
OMagain,
You have no idea what you’re talking about… I would start with QM for dummies, if I were you… but then again… you need to know the basic difference between classical physics and QM…
Anyone is welcome to answer the following question.
Stepping back a bit, and setting aside the question of the model in Fisher’s Theorem in favor of experimental observation: if we subject something to a continuous flow of mutation, which Muller was obviously concerned about and which is related to his Nobel Prize-winning research on the effects of radiation, what is the correct way to mathematically model the fitness, and change of fitness, under increased mutagens? If there is a general equation to model the effect of mutations on fitness, can’t we just plug in the mutation rate and/or deleterious-to-beneficial ratio and crank out the effect on fitness under a variety of values for the mutation rate (mu) and/or deleterious-to-beneficial ratio?
Would such an equation be kind of intractable or is it tractable? Does such an equation exist in the literature?
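For what it’s worth, the deterministic (infinite-population) version of this question does have a classical answer in the literature: under recurrent deleterious mutation at genomic rate U with multiplicative selection, the mutation-class recursion of the Haldane/Haigh mutation-load theory equilibrates at mean fitness exp(-U), independent of s, with a mean of U/s mutations per genome. A toy sketch (the parameter values U = 0.2 and s = 0.1 are illustrative choices, not taken from any particular study; drift and meltdown are exactly what this deterministic model leaves out):

```python
import math

def mutation_selection_equilibrium(U, s, K=60, gens=500):
    """Deterministic mutation-class recursion: p[k] is the frequency of
    genomes carrying k deleterious mutations, fitness (1-s)**k, and new
    mutations arrive Poisson(U) per genome per generation."""
    p = [1.0] + [0.0] * K
    # probability of j new mutations in one generation
    pois = [math.exp(-U) * U**j / math.factorial(j) for j in range(K + 1)]
    for _ in range(gens):
        # selection: reweight classes by fitness, then renormalize
        w = [p[k] * (1 - s)**k for k in range(K + 1)]
        tot = sum(w)
        w = [x / tot for x in w]
        # mutation: convolve with the Poisson(U) distribution (truncated at K)
        p = [0.0] * (K + 1)
        for k in range(K + 1):
            for j in range(K + 1 - k):
                p[k + j] += w[k] * pois[j]
    mean_fitness = sum(p[k] * (1 - s)**k for k in range(K + 1))
    mean_load = sum(k * p[k] for k in range(K + 1))
    return mean_fitness, mean_load

wbar, kbar = mutation_selection_equilibrium(U=0.2, s=0.1)
# classical prediction: wbar -> exp(-U), kbar -> U/s
```

The equilibrium mean fitness exp(-U) (Haldane’s mutation-load result) holds only while selection can keep up; the meltdown regime being debated here is precisely where finite population size and drift break this deterministic balance.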
Sure. What could someone who thinks the earth is 6000 years old and the cosmos young possibly get wrong regarding the opinions of population geneticists?
http://www.sciencemag.org/news/2016/08/tasmanian-devils-are-rapidly-evolving-resistance-contagious-cancer
https://skeptics.stackexchange.com/questions/31887/did-just-18-people-die-from-prostate-cancer-in-japan-in-1958
https://link.springer.com/chapter/10.1007/978-1-4615-3398-6_3
Pot paging kettle, we’ve a caller on line one.
Could you, would you, on a train? Could you, would you, on a plane?
Anyone here is welcome to cite a well-known geneticist who argues the human genome is improving.
I would *love* to see some supporting evidence of that.
Anyway, adaptive evolution cannot be an instance of genetic deterioration, mutational meltdown, or whatever term you conjure up, by definition. Adaptive evolution tends to reduce the chance of extinction, instead of increasing it.
No you didn’t. That was a personal communication 🙂
Compared to what?
The field is called population genetics. Look mainly for Susumu Ohno and for Motoo Kimura back in the 60s or 70s.
stcordova,
Worth mentioning that the human genome – and any other one might think of special significance – could easily be on the high road to Hell; this does not invalidate historic evolution to get to this point, which is really the problem people have.
I do find it odd when people talk of genetic entropy in our huge population. How do these near-neutral deleterious genetic mutations get successively fixed? Advantageous mutations suffer an insurmountable barrier, but deleterious ones seem to get a free pass.
Aren’t at least some flightless birds, and blind animals living in caves without access to light, examples of adaptive evolution due to the loss of gene function?
“Between 1976 and 1994, prostate cancer rates doubled and mortality increased by 20%2 (Table 1). The reasons for the increase are not known.”
“…Prostate cancer is also increasing in significance worldwide (Table 2). Clinical incidence is low in Asian men and highest in African-Americans and Scandinavians.1 However, even in Japan, where the age-adjusted death rate per 100,000 population is 4.0 in comparison with 17.5 for men in the United States, the number of newly identified cases is expected to double by the year 2000 and to quadruple by 2010. When Japanese men move to the United States, their incidence and mortality rates increase and approximate those of American men.8 The reasons behind these marked differences must be illuminated so that we may learn what causes the disease process and, consequently, devise rational strategies for prevention, early diagnosis, and treatment.”
Alan Fox,
I don’t see how drift could be a fallacy myself. There must be some fraction of alleles that fix despite not having positive selection coefficients.
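That fraction can be illustrated with a toy Wright-Fisher simulation: a strictly neutral allele starting at frequency p fixes in roughly a fraction p of replicate populations, purely by drift. (The population size, replicate count, and seed below are arbitrary choices for speed, not drawn from anything in the thread.)

```python
import random

def neutral_fixation_fraction(p0=0.2, genes=40, reps=500, seed=1):
    """Fraction of replicate Wright-Fisher populations (haploid pool of
    `genes` gene copies) in which a neutral allele starting at frequency
    p0 drifts all the way to fixation."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(reps):
        count = int(p0 * genes)
        while 0 < count < genes:
            p = count / genes
            # binomial resampling of the next generation's gene pool
            count = sum(1 for _ in range(genes) if rng.random() < p)
        fixed += (count == genes)
    return fixed / reps

frac = neutral_fixation_fraction()
# theory: a neutral allele fixes with probability equal to its
# current frequency, here p0 = 0.2, so frac should be near 0.2
```

The simulated fraction wobbles around 0.2 with sampling error of a couple of percentage points at 500 replicates.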
J-Mac,
You think population genetics is bollocks, so you aren’t really entitled to any conclusions drawn from it.
And this supports “genetic entropy”…. How, exactly? Do the genomes of Japanese men undergo sudden post-natal mutations when they move to the US?
Unlike Sal, I don’t believe the earth is 6000 years old. But we can’t prove that it’s older either, because our beliefs are based on many assumptions that are hard to verify…
Our model rests on the beliefs that the big bang had a beginning and that nothing travels faster than light…
Here is the thing: if the expansion of the universe is accelerating under the cosmological constant, as it seems to be, then the most distant parts of the universe will be, or already are, receding faster than the speed of light…
Can you see the problems these present us with?
J-Mac,
The prime evidence is in radioisotopes.
But it depends what you mean by ‘prove’. If one discounts over-zealous interpretation of a few passages in a book, literally nothing else supports 6000 years, Sal and coal notwithstanding.
What are the other?
Who said those are my conclusions?
There are different interpretations with unspecified time length before creative days…
Behe’s paper which was actually praised by Jerry Coyne:
https://www.ncbi.nlm.nih.gov/pubmed/21243963
It was a gruelingly boring paper, but I suppose that’s what it took to deliver Behe’s point. As I said, it was such a well-done paper that even Jerry Coyne praised it!
As an aside, beyond plasmid exchange, most antibiotic resistance is due to compromised or broken function:
https://www.trueorigin.org/bacteria01.php
Deleterious mutations don’t have to be fixed to cause damage to the entire population. If Billy has a bad mutation causing kidney problems and Johnny a mutation causing a liver problem, etc., then genetic entropy is happening. That’s what Kimura’s formula (which I derived from first principles above) and what Muller (Nobel Prize winner) convey.
Also problematic is the fact that, even supposing a good mutation is fixed, if for every one good mutation fixed (and that is practically impossible today for the human genome, given that time to fixation is proportional to population size), thousands of bad mutations per individual are added, that’s not a good situation. What do I mean by thousands added? Even though most mutations will drift out of the population due to random gametic sampling (à la Kimura), the steady influx of new bad mutations keeps increasing the quantity of bad mutations in the genome. For every bad one that drifts out of the population, more are added.
Crow suggested there will be an equilibrium point where drift will equalize against the new influx of bad mutations, but I don’t know what that equilibrium point is.
To emphasize, that isn’t just a creationist argument; look at the “bonkers” citation I provided, where Dan Graur independently confirmed the derivation I made above. And Graur is about as die-hard an evolutionist as they get.
PS
Total aside, Allan. Total aside, and nothing to do with the OP. I notice the Rugby matches in the UK (I think) have the Christian Hymn “Abide with me” sung. I stumbled on this video of one such event. It was very moving. I thought of you since you’re in the UK. It’s an amazing tradition and almost made me wish I lived where you live!
It is a definitional thing. Mutational meltdown is usually reserved for situations where mutation pressure and genetic drift overwhelm purifying selection. This is expected to result in a decrease in population size and eventual extinction.
It is confusing to use this term for instances of regressive evolution, because flightless birds and blind cave critters are doing just fine.
Thanks for that. Looks interesting. But I do note that you left out the “modification of a pre-existing molecular function” part and just focused on the “loss of function”. I also believe Behe reviews mostly results from short-term microbial evolution. That may not generalize well to all instances of adaptive evolution. Agree?
Sal:
As you might guess, Coyne’s appraisal of the paper was less effusive and more nuanced than Sal lets on:
https://whyevolutionistrue.wordpress.com/2010/12/12/behes-new-paper/
Some purifying selection may be involved as well, I think. The equilibrium is called “mutation-selection balance” and there is an extensive literature on it.
My understanding of genetic drift has evolved. 😉 I’m quoting Provine regarding the Hagedoorns’ contributions, not to argue that drift doesn’t happen.
As Joe Felsenstein says, they didn’t quantify drift but they were early observers of the phenomenon, looking at small isolated rodent populations on neighbouring islands, concluding that variation between the populations was due to random loss of alleles rather than adaptation.