About Joe Felsenstein

Been messing about with phylogenies, coalescents, theoretical population genetics, and stomping bad mathematical arguments by creationists for some years.

Does gpuccio’s 150-safe ‘thief’ example validate the 500-bits rule?

At Uncommon Descent, poster gpuccio has expressed interest in what I think of his example of a safecracker trying to open a safe with a 150-digit combination, or to open 150 safes, each with its own 1-digit combination. It’s actually a cute teaching example, which helps explain why natural selection cannot find a region of “function” in a sequence space in such a case. The implication is that there is some point of contention that I failed to address in my post, which led to the nearly 2,000-comment-long thread on his argument here at TSZ. He asks:

By the way, has Joe Felsestein answered my argument about the thief?  Has he shown how complex functional information can increase gradually in a genome?

Gpuccio has repeated his call for me to comment on his ‘thief’ scenario a number of times, including here, and UD reader “jawa” has taken up the torch (here and here), asking whether I have yet answered the thief argument, at first dramatically asking

Does anybody else wonder why these professors ran away when the discussions got deep into real evidence territory?

Any thoughts?

and then supplying the “thoughts” definitively (here):

we all know why those distinguished professors ran away from the heat of a serious discussion with gpuccio, it’s obvious: lack of solid arguments.

I’ll re-explain gpuccio’s example below the fold, and then point out that I never contested gpuccio’s safe example, but that I certainly do contest gpuccio’s method of showing that “Complex Functional Information” cannot be achieved by natural selection. gpuccio manages to do that by defining “complex functional information” differently from Szostak and Hazen’s definition of functional information, in a way that makes his rule true. But gpuccio never manages to show that, when Functional Information is defined as Hazen and Szostak defined it, 500 bits of it cannot be accumulated by natural selection.
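To make the contrast in the safe example concrete, here is a minimal simulation sketch in Python. The guess budget for the single big safe is an arbitrary choice of mine, and the whole thing is an illustration of the teaching example, not gpuccio’s own calculation:

```python
# Contrast: one 150-digit safe (all-or-nothing) versus 150 one-digit
# safes (each opens on its own, so partial successes accumulate).
import random

DIGITS, LENGTH = 10, 150

def crack_one_big_safe(max_tries):
    # Blind guessing at a single 150-digit combination: there is no
    # partial feedback, so only an exact match of all 150 digits counts.
    target = [random.randrange(DIGITS) for _ in range(LENGTH)]
    for tries in range(1, max_tries + 1):
        if [random.randrange(DIGITS) for _ in range(LENGTH)] == target:
            return tries
    return None   # expected waiting time is 10**150 guesses, so: never

def crack_150_small_safes():
    # 150 independent 1-digit safes: each safe clicks open separately,
    # as with cumulative selection on separately useful sites.
    tries = 0
    for _ in range(LENGTH):
        target = random.randrange(DIGITS)
        while True:
            tries += 1
            if random.randrange(DIGITS) == target:
                break
    return tries   # about 150 * 10 = 1,500 guesses on average

print(crack_one_big_safe(10**6))   # None: a million guesses is hopeless
print(crack_150_small_safes())     # typically around 1,500
```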

Continue reading

Eric Holloway needs our help (new post at Panda’s Thumb)

Just a note that I have put up a new post at Panda’s Thumb in response to a post by Eric Holloway at the Discovery Institute’s new blog Mind Matters. Holloway declares that critics have totally failed to refute William Dembski’s use of Complex Specified Information to diagnose Design. At PT, I argue in detail that this is an exactly backwards reading of the outcome of the argument.

Commenters can post there, or here — I will try to keep track of both.

There has been a discussion of Holloway’s argument by Holloway and others at Uncommon Descent as well (links in the PT post). gpuccio also comments there, trying to get someone to call my attention to an argument about Complex Functional Information that gpuccio made in the discussion of an earlier post. I will try to post a response on that here soon, separate from this thread.

Does gpuccio’s argument that 500 bits of Functional Information implies Design work?

On Uncommon Descent, poster gpuccio has been discussing “functional information”. Most of gpuccio’s argument is a conventional “islands of function” argument. Not being very knowledgeable about biochemistry, I’ll happily leave that argument to others.

But I have been intrigued by gpuccio’s use of Functional Information, in particular gpuccio’s assertion that if we observe 500 bits of it, this is a reliable indicator of Design, as here, at about the 11th sentence of point (a):

… the idea is that if we observe any object that exhibits complex functional information (for example, more than 500 bits of functional information) for an explicitly defined function (whatever it is) we can safely infer design.

I wonder how this general method works. As far as I can see, it doesn’t work. There would seem to be three possible ways of arguing for it; in the end, two don’t work and one is just plain silly. Which of these is the basis for gpuccio’s statement? Let’s investigate …

Continue reading

Rejoinder to Basener and Sanford’s reply, part I

William Basener and John Sanford have responded here to my post concerning whether R.A. Fisher’s Fundamental Theorem of Natural Selection is critical to work on the theoretical population genetics of the interaction between mutation and natural selection. (This reply by Basener and Sanford is also reposted here.) Continue reading

Does Basener and Sanford’s model of mutation versus selection show that deleterious mutations are unstoppable?

by Joe Felsenstein and Michael Lynch

The blogs of creationists and advocates of ID have been abuzz lately about exciting new work by William Basener and John Sanford. In a peer-reviewed paper at Journal of Mathematical Biology, they have presented a mathematical model of mutation and natural selection in a haploid population, and they find in one realistic case that natural selection is unable to prevent the continual decline of fitness. This is presented as correcting R.A. Fisher’s 1930 “Fundamental Theorem of Natural Selection”, which they argue is the basis for all subsequent theory in population genetics. The blog postings on that will be found here, here, here, here, here, here, and here.

One of us (JF) has argued at The Skeptical Zone that they have misread the literature on population genetics. The theory of mutation and natural selection, developed during the 1920s, was relatively complete before Fisher’s 1930 book. Fisher’s FTNS has been difficult to understand, and subsequent work has not depended on it. But that still leaves us with the issue of whether the B and S simulations show some startling behavior, with natural selection seemingly unable to prevent deleterious mutations from continually rising in frequency. Let’s take a closer look at their simulations.

Continue reading

Does all of evolutionary theory rest on Fisher’s Fundamental Theorem?

The blogs of creationists and ID advocates have been buzzing with the news that a new paper by William Basener and John Sanford, in Journal of Mathematical Biology, shows that natural selection will not lead to the increase of fitness. Some of the blog reports will be found here, here, here, here, here, and here. Sal Cordova has been quoting the paper at length in a comment here.

Basener and Sanford argue that the Fundamental Theorem of Natural Selection, put forward by R.A. Fisher in his book The Genetical Theory of Natural Selection in 1930, was the main foundation of the Modern Evolutionary Synthesis of the 1930s and 1940s, and that when mutation is added to the evolutionary forces modeled by that theorem, it can be shown that fitnesses typically decline rather than increase. They argue that Fisher expected increase of fitness to be typical (they call this “Fisher’s Theorem”).

I’m going to argue here that this is a wrong reading of the history of theoretical population genetics and of the history of the Modern Synthesis. In a separate post, in a few days at Panda’s Thumb, I will argue that Basener and Sanford’s computer simulation has a fatal flaw that makes its behavior quite atypical of evolutionary processes.

Continue reading

Betting on the Weasel

… with Mung.   In a recent comment Mung asserted that

If Darwinists had to put up their hard earned money they would soon go broke and Darwinism would be long dead. I have a standing $10,000 challenge here at TSZ that no one has ever taken me up on.

Now, I don’t have $10,000 to bet on anything, but it is worth exploring what bet Mung was making. Perhaps a bet of a lower amount could be negotiated, so it is worth trying to figure out what the issue was.

Mung’s original challenge will be found here.  It was in a thread in which I had proposed a bet of $100 that a Weasel program would do much better than random sampling.  When people there started talking about whether enough money could be found to take Mung up on the bet, they assumed that it was a simple raising of the stake for my bet.  But Mung said here:

You want to wager over something that was never in dispute?

Why not offer a meaningful wager?

So apparently Mung was offering a bet on something else.

I think I have a little insight into what the “meaningful wager” was, or at least what issue it concerned. It would lead us to a rather extraordinary bet. Let me explain below the fold …

Continue reading

Jonathan McLatchie still doesn’t understand Dembski’s argument

Over at Uncommon Descent, Jonathan McLatchie calls attention to an interview that Scottish Christian apologist David Robertson did with him.  The 15-minute video is available there.

The issue is scientific evidence for intelligent design.  As so often occurs, they very quickly ran off to the origin of life, and from there to the origin of the Universe.  I was amused that from there they tried to answer the question of where God came from, by saying that it was unreasonable to push the origin issue quite that far back.  There was also a lot of time spent being unhappy with the idea of a multiverse.

But for me the interesting bit was toward the beginning, where McLatchie argues that the evidence for ID is the observation of Specified Complexity, which he defines as complex patterns that conform to a prespecified pattern.  He’s made that argument before, in a 2-minute-long video in a series on 1-minute apologetics.  And I’ve complained about it before here.  Perhaps he was just constrained by the time limit, and would have done a better job if he had more than 2 minutes.

Nope.  It’s the same argument.

Continue reading

Boltzmann Brains and evolution

In the “Elon Musk” discussion, in the midst of a whole lotta epistemology goin’ on, commenter BruceS referred to the concept of a “Boltzmann Brain” and suggested that Boltzmann didn’t know about evolution. (In fact Boltzmann did know about evolution and thought Darwin’s work was hugely important). The Boltzmann Brain is a thought experiment about a conscious brain arising in a thermodynamic system which is at equilibrium. Such a thing is interesting but vastly improbable.

BruceS explained that he was thinking of a reddit post where the commenter invoked evolution to explain why we don’t need extremely improbable events to explain the existence of our brains (the comment will be found here).

What needs to be added is that none of that happens in an isolated system at thermodynamic equilibrium, or at least that it has a fantastically low probability of happening there. The earth-sun system is not at thermodynamic equilibrium. Energy is flowing outwards from the sun, at high temperature, some is hitting the earth, and some is taken up by plants and then some by animals, at lower temperatures. Continue reading

Wright, Fisher, and the Weasel

Richard Dawkins’s computer simulation algorithm explores how long it takes a 28-letter-long phrase to evolve to become the phrase “Methinks it is like a weasel”. The Weasel program keeps a single copy of the phrase, which produces a number of offspring, with each letter subject to mutation; there are 27 possible letters, the 26 letters A-Z and a space. The offspring that is closest to that target replaces the single parent. The purpose of the program is to show that creationist orators who argue that evolutionary biology explains adaptations by “chance” are misleading their audiences. Pure random mutation without any selection would lead to a random sequence of 28-letter phrases. There are 27^{28} possible 28-letter phrases, so it should take about 10^{40} different phrases before we find the target. That is without arranging that the phrase that replaces the parent is the one closest to the target. Once that highly nonrandom condition is imposed, the number of generations to success drops dramatically, from 10^{40} to mere thousands.
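For readers who want to experiment, here is a minimal sketch of such a Weasel program in Python; the offspring number and mutation rate are illustrative choices of mine, not Dawkins’s original settings:

```python
# A minimal Weasel sketch: one parent, many mutated offspring,
# and the offspring closest to the target replaces the parent.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # the 27 possible letters
MUTATION_RATE, OFFSPRING = 0.05, 100

def mutate(phrase):
    # each letter independently has a small chance of being redrawn
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE
                   else letter for letter in phrase)

def score(phrase):
    # number of positions that already match the target
    return sum(a == b for a, b in zip(phrase, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET:
    generation += 1
    # the offspring closest to the target replaces the single parent
    parent = max((mutate(parent) for _ in range(OFFSPRING)), key=score)

print(f"reached the target in {generation} generations")
```

With these settings it typically reaches the target in a few hundred generations, examining tens of thousands of phrases rather than 10^{40}.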

Although Dawkins’s Weasel algorithm is a dramatic success at making clear the difference between pure “chance” and selection, it differs from standard evolutionary models. It has only one haploid adult in each generation, and since the offspring that is most fit is always chosen, the strength of selection is in effect infinite. How does this compare to the standard Wright-Fisher model of theoretical population genetics? Continue reading

Jonathan McLatchie fails to define Specified Complexity

At Uncommon Descent, a News posting by Denyse O’Leary shows us a video by Jonathan McLatchie. News then expects “Darwin faithful” to “create a distraction below”.

McLatchie defines Specified Complexity as information that matches a predefined pattern, such as specific protein folds needed to have a particular function. His video is in a series entitled “One Minute Apologist” (he takes 2 minutes).

He never says anything to clarify whether natural selection can put this information into the genome. We’ve discussed these points many times before, but let me briefly mention the dilemma that he doesn’t resolve for us:

1. Complex Specified Information was defined by William Dembski in No Free Lunch. The high level of improbability that he required was supposed to show that random mutation could not produce CSI. And a Law of Conservation of Complex Specified Information was supposed to show that natural selection could not achieve CSI. Unfortunately the LCCSI is not formulated so as to be able to do that, because it changes the specification in the before and after states.

2. So in 2005-2006 Dembski instead defined Specified Complexity. Now it is a measure of how improbably far out we are on the scale of specification, with the improbability this time computed taking not only mutation into account, but also natural selection. Dembski does not say how to compute that probability. Now SC really does rule out natural selection — simply by being defined so as to do so. It thereby becomes a useless add-on quantity, computable only once one has already found some other way to show that the information cannot be put into the genome by natural selection.
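For reference, Dembski’s 2005 formula for Specified Complexity, as best I recall it, was

$$\chi \;=\; -\log_2\!\left[\,10^{120}\,\varphi_S(T)\,P(T\mid H)\,\right],$$

where P(T|H) is the probability of the observed pattern T under the relevant chance hypothesis H (which is supposed to include natural selection), \varphi_S(T) counts the patterns as simple to specify as T, and 10^{120} is his bound on the number of bit operations possible in the observable universe. The difficulty noted above lives entirely inside P(T|H).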

McLatchie presumably wants to clear us all up on this, but he seems to be using the definition of 1 with the name of 2. So we end up confused as to whether his quantity can be put into the genome by natural selection, or whether it is a useless after-the-fact add-on to some other argument which establishes that it can’t. And he’s had a whole extra minute.

Was denial of the Laws of Thought a myth?

Discussion of A = A seems to have died down some here. As much as people find the topic a fun exercise in logic and philosophy, it might be worth reminding everyone how all this got started here, on a site largely devoted to critiquing creationist and ID arguments.

It started when the owner of the site Uncommon Descent declared that some basic Laws of Thought were being regularly violated by anti-ID commenters on that site.

In a post on February 16, 2012, Barry Arrington wrote, in justification of his policy, that:

The issue, then, is not whether persons who disagree with us on the facts and logic will be allowed to debate on this site. Anyone who disagrees about the facts and logic is free to come here at any time. But if you come on here and say, essentially, that facts and logic do not matter, then we have no use for you.

The formal announcement of Barry’s policy was four days earlier, in this UD post where Barry invoked the Law of Non-Contradiction and declared that

Arguing with a person who denies the basis for argument is self-defeating and can lead only to confusion. Only a fool or a charlatan denies the LNC, and this site will not be a platform from which fools and charlatans will be allowed to spew their noxious inanities.

For that reason, I am today announcing a new moderation policy at UD. At any time the moderator reserves the right to ask the following question to any person who would comment or continue to comment on this site: “Can the moon exist and not exist at the same time and in the same formal relation?” The answer to this question is either “yes” or “no.” If the person gives any answer other than the single word “no,” he or she will immediately be deemed not worth arguing with and therefore banned from this site.

Continue reading

What is obvious to Granville Sewell

Granville Sewell, who needs no introduction here, is at it again. In a post at Uncommon Descent he imagines a case where a mathematician finds that looking at his problem from a different angle shows that his theorem must be wrong. Then he imagines talking to a biologist who thinks that an Intelligent Design argument is wrong. He then says to the biologist:

“So you believe that four fundamental, unintelligent forces of physics alone can rearrange the fundamental particles of physics into Apple iPhones and nuclear power plants?” I asked. “Well, I guess so, what’s your point?” he replied. “When you look at things from that point of view, it’s pretty obvious there must be an error somewhere in your theory, don’t you think?” I said.

As he usually does, Sewell seems to have forgotten to turn comments on for his post at UD. Is it “obvious” that life cannot originate? That it cannot evolve descendants, some of which are intelligent? That these descendants cannot then build Apple iPhones and nuclear power plants?

As long as we’re talking about whether some things are self-evident, we can also discuss whether this is “pretty obvious”. Discuss it here, if not at UD. Sewell is of course welcome to join in.

At Panda’s Thumb: An evaluation of Dembski, Ewert, and Marks’s Search For a Search argument

Tom English and I have posted at Panda’s Thumb a careful evaluation of William Dembski, Winston Ewert, and Robert Marks’s papers on their Active Information argument. We find that it does not show that we require a Designer in order to have an evolutionary system that finds genotypes with higher fitness. Basically, their space of “searches” is not limited to processes that have genotypes with different fitnesses — many of their “searches” can ignore fitness or even actively look for genotypes of worse fitness. Once one focuses on evolutionary searches with genotypes whose reproduction is affected by their fitnesses, one gets searches with a much greater chance of finding genotypes with higher fitnesses.

I suspect that most discussion of our argument will occur at PT — I have posted here to point to that post. If people want to discuss the matter here, I will try to comment here as well. But you can also comment at PT.

Circularity of using CSI to conclude Design?

At Uncommon Descent, William Dembski’s and Robert Marks’s coauthor Winston Ewert has made a post conceding that using Complex Specified Information to conclude that evolution of an adaptation is improbable is in fact circular. This was argued at UD by “Keith S.” (our own “keiths”) in recent weeks. It was long asserted by various people here, and was argued in posts here by Elizabeth Liddle in her “Belling the Cat” and “EleP(T|H)ant in the room” series of posts (here, here, and here). I had posted at Panda’s Thumb on the same issue.

Here is a bit of what Ewert posted at UD:

CSI and Specified complexity do not help in any way to establish that the evolution of the bacterial flagellum is improbable. Rather, the only way to establish that the bacterial flagellum exhibits CSI is to first show that it was improbable. Any attempt to use CSI to establish the improbability of evolution is deeply fallacious.

I have put up this post so that keiths and others can discuss what Ewert conceded. I urge people to read his post carefully. There are still aspects of it that I am not sure I understand. What for example is the practical distinction between showing that evolution is very improbable and showing that it is impossible? Ewert seems to think that CSI has a role to play there.

This concession from Ewert may surprise Denyse O’Leary (“News” at UD) and UD’s head honcho Barry Arrington. Both of them have declared that a big problem for evolution is the observation of CSI. Here is Barry in 2011 (here):

All it would take is even one instance of CSI or IC being observed to arise through chance or mechanical necessity or a combination of the two. Such an observation would blow the ID project out of the water.

Ewert is conceding that one does not first find CSI and then conclude from this that evolution is improbable. Barry and Denyse O’Leary said the opposite — that having observed CSI, one could conclude that evolution was improbable.

The discussion of Ewert’s post at UD is interesting, but maybe we can have some useful discussion here too.

Critique of Dembski’s not-so-new argument, at PT

We interrupt all this philosophy for a brief announcement: I have written a critique of the arguments William Dembski used in his talk on 14 August at the Computations in Science Seminar at the University of Chicago, which you can watch on this YouTube video. These were based primarily on the Conservation of Information (CoI) argument of William Dembski and Robert Marks, and those were in turn based on their earlier Search For a Search (SFS) argument. Neither those arguments nor my response are new, but I hope that the new post will explain the issues clearly.

The critique will be found here, at Panda’s Thumb.

I suspect that most of the discussion will occur at PT but I will try to respond here as well.

Adam and Eve and Jerry and Bryan and Vincent

Bryan College in Dayton, Tennessee has recently added to its statement of faith, to which faculty members must subscribe, a “clarification” that

We believe that all humanity is descended from Adam and Eve. They are historical persons created by God in a special formative act, and not from previously existing life forms.

Jerry Coyne at his Why Evolution Is True blog has pointed at this with alarm here, and he linked back to the Chattanooga Times Free Press story here. Jerry cites studies showing, from the amount of variability in human populations, that the effective population size of the individuals leaving Africa in the Out-of-Africa event cannot have been much less than 2,250, and the effective population size in Africa cannot have been much less than 10,000.

VJTorley at Uncommon Descent has published a firm response, saying Jerry was “In a pickle about Adam and Eve”. Responding to Jerry’s point that 2,250 is greater than two, Torley wrote:

Evidently math is not Professor Coyne’s forte.

Note: 2,500 isn’t the same as 2,250.

Note: 2,250 + 10,000 = 12,250.

The math lesson is over.

He also quotes a paper by Luke Harmon and Stanton Braude, which notes that effective population sizes can be larger than actual population sizes, and says

It’s rather embarrassing when a biology professor makes mistakes in his own field, isn’t it?

Has Jerry gotten himself into a pickle? I have some background in this area — I have worked on coalescent trees of ancestry of genes within a species, I wrote one of the two basic papers on effective population size of populations with overlapping generations, and I even shared a grant with Luke Harmon two years ago.

A few simple points:

1. 10,000 + 2,250 = 12,250 all right, but in fact that number is even greater than 2.

2. Effective population size can be greater than the census population size, but only by as much as a factor of about 2. Even then, an effective size of 2,250 would require a census size of at least 1,125, which still leaves us a long way from 2.

3. The Bryan College administration does not know how to write a Clarification. Their statement says that all humanity is descended from Adam and Eve, but does not make it clear whether there could have been other ancestors too. I suspect they meant that there weren’t any.

4. According to UD’s own statements, Intelligent Design arguments are supposedly not statements about religion, so that ID arguments do not predict anything about Adam and Eve. ID proponents are being slandered when they are called creationists, we are told repeatedly. So why the concern about Adam and Eve at UD?

So was Jerry wrong? About Adam and Eve, no. Though he is wrong when he says that his “website” is not a blog.

Evolution disproven — by Hardy and Weinberg?

Over at Uncommon Descent, “niwrad” has argued that the equations of theoretical population genetics show that evolution is unlikely.  niwrad says that these equations

consist basically in two main equations: the Hardy-Weinberg law and the Price equation.

Furthermore niwrad says that

The Hardy-Weinberg law mathematically describes how a population is in equilibrium both for the frequency of alleles and for the frequency of genotypes. Indeed because this law is a fundamental principle of genetic equilibrium, it doesn’t support Darwinism, which means exactly the contrary, the breaking of equilibrium toward the increase of organization and the creation of entirely new organisms.

I just finished teaching my course in theoretical population genetics (with lots of equations, but actually not the Price Equation, as it happens). And I can say that the statement about the Hardy-Weinberg law shows niwrad to be mixed up about the import of Hardy-Weinberg proportions. Let me explain …
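As a quick reminder before the explanation: the Hardy-Weinberg law says only that one generation of random mating puts genotypes into fixed proportions of the allele frequencies,

$$\mathrm{freq}(AA) : \mathrm{freq}(Aa) : \mathrm{freq}(aa) \;=\; p^2 : 2pq : q^2, \qquad q = 1 - p,$$

and it is derived assuming no selection, mutation, migration, or drift. An equilibrium that holds only in the absence of evolutionary forces cannot tell us what happens when those forces act.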
Continue reading

Is evolution of proteins impossible?

At Uncommon Descent, “niwrad” has posted a link to a Sequences Probability Calculator. This webserver allows you to set a number of trials (“chemical reactions”) per second, the number of letters per position (20 for amino acids) and a sequence length, and then it calculates how long it will take for you to get exactly that sequence. Each trial assumes that you draw a sequence at random, and success is only when you exactly match the target sequence. This of course takes nearly forever.

So in effect the process is one of random mutation without natural selection present, or random mutation with natural selection that shows no increase in fitness when a sequence partially matches the target. This leads to many thoughts about evolution, such as:

  • Do different species show different sequences for a given protein? Typically they do, so the above scheme implies that they can’t have evolved from common ancestors that had a different protein sequence. They each must have been the result of a separate special creation event.
  • If an experimenter takes a gene from one species and puts it into another, so that the protein sequence is now that of the source species, does it still function? If not, why are people so concerned about making transgenic organisms (they’d all be dead anyway)?
  • If we make a protein sequence by combining part of a sequence from one species and the rest of that protein sequence from another, will that show function in either of the parent species? (Typically yes, it will).

Does a consideration of the experimental evidence show that the SPC fails to take account of the function of nearby sequences?

The author of the Sequences Probability Calculator views evolution as basically impossible. The SPC assumes that any change in a protein makes it unable to function. Each species sits on a high fitness peak with no shoulders. In fact, experimental studies of protein function are usually frustrating, because it is hard to find noticeable differences of function, at least ones big enough to measure in the laboratory.
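To make the arithmetic concrete, here is a minimal Python sketch of the calculation the SPC appears to perform; the sequence length and trial rate below are illustrative values of mine, not ones taken from the site:

```python
# Expected waiting time when every trial is an independent random draw
# and only an exact match of the whole target sequence counts as success.

def expected_wait_years(length, alphabet=20, trials_per_second=1e9):
    p_success = (1.0 / alphabet) ** length    # chance one random draw matches exactly
    expected_trials = 1.0 / p_success         # mean of a geometric distribution
    seconds = expected_trials / trials_per_second
    return seconds / (3600 * 24 * 365.25)

# even at a billion "chemical reactions" per second, an exact match of a
# 100-residue protein takes about 4e113 years on average
print(f"{expected_wait_years(100):.2e} years")
```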

Natural selection can put Functional Information into the genome

It is quite common for ID commenters to argue that it is not possible for evolutionary forces such as natural selection to put Functional Information (or Specified Information) into the genome. Whether they know it or not, these commenters are relying on William Dembski’s Law of Conservation of Complex Specified Information. It is supposed to show that Complex Specified Information cannot be put into the genome. Many people have argued that this theorem is incorrect. In my 2007 article I summarized many of these objections and added some of my own.

One of the sections of that article gave a simple computational example of mine showing natural selection putting nearly 2 bits of specified information into the genome, by replacing an equal mixture of A, T, G, and C at one site with 99.9% C.
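Here, for concreteness, is a minimal Python sketch of a calculation of that kind; the fitness values and generation count are illustrative assumptions of mine, not the ones from the 2007 article:

```python
# Deterministic haploid selection at a single site, starting from an
# equal mixture of the four bases, with C assumed to have a 10% advantage.
import math

freqs = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}
fitness = {"A": 0.9, "C": 1.0, "G": 0.9, "T": 0.9}   # C is favored

for _ in range(80):
    mean_w = sum(freqs[b] * fitness[b] for b in freqs)
    # standard haploid selection: weight each base by its relative fitness
    freqs = {b: freqs[b] * fitness[b] / mean_w for b in freqs}

# one way to put a number on the gain: the favored base C has risen from
# probability 1/4 to about 0.999, a gain of log2(0.999/0.25), nearly 2 bits
print(f"freq(C) = {freqs['C']:.4f}")
print(f"bits gained = {math.log2(freqs['C'] / 0.25):.3f}")
```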

This post is intended to show a more dramatic example along the same lines.

Continue reading