Probabilistic thinking is pervasive in evolutionary theory. It’s not a bad thing, just something that needs to be acknowledged and appropriately handled.
At Uncommon Descent, poster gpuccio has expressed interest in what I think of his example of a safecracker trying to open a safe with a 150-digit combination, or to open 150 safes, each with its own 1-digit combination. It’s actually a cute teaching example, which helps explain why natural selection cannot find a region of “function” in a sequence space in such a case. The implication is that there is some point of contention that I failed to address in my post, the one which led to the nearly 2,000-comment-long thread on his argument here at TSZ. He asks:
By the way, has Joe Felsenstein answered my argument about the thief? Has he shown how complex functional information can increase gradually in a genome?
Gpuccio has repeated his call for me to comment on his ‘thief’ scenario a number of times, including here, and UD reader “jawa” has taken up the torch (here and here), asking whether I have yet answered the thief argument, at first dramatically asking
Does anybody else wonder why these professors ran away when the discussions got deep into real evidence territory?
and then answering his own question definitively (here):
we all know why those distinguished professors ran away from the heat of a serious discussion with gpuccio, it’s obvious: lack of solid arguments.
I’ll re-explain gpuccio’s example below the fold, and then point out that I never contested gpuccio’s safe example. What I certainly do contest is gpuccio’s method of showing that “Complex Functional Information” cannot be achieved by natural selection. gpuccio manages that by defining “complex functional information” differently from Szostak and Hazen’s definition of functional information, in a way that makes his rule true. But gpuccio never manages to show that, when functional information is defined as Hazen and Szostak defined it, 500 bits of it cannot be accumulated by natural selection.
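For readers who want the definition at issue in front of them: Hazen and Szostak define the functional information associated with a degree of function $E_x$ as

$$I(E_x) \;=\; -\log_2\!\big[F(E_x)\big],$$

where $F(E_x)$ is the fraction of all possible sequences whose activity meets or exceeds $E_x$. Nothing in this definition restricts how the sequences arose, which is exactly why it does not, by itself, rule out accumulation by natural selection.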
Just a note that I have put up a new post at Panda’s Thumb in response to a post by Eric Holloway at the Discovery Institute’s new blog Mind Matters. Holloway declares that critics have totally failed to refute William Dembski’s use of Complex Specified Information to diagnose Design. At PT, I argue in detail that this is an exactly backwards reading of the outcome of the argument.
Commenters can post there, or here — I will try to keep track of both.
There has been a discussion of Holloway’s argument by Holloway and others at Uncommon Descent as well (links in the PT post). gpuccio also comments there, trying to get someone to call my attention to an argument about Complex Functional Information that gpuccio made earlier in that discussion. I will try to post a response on that here soon, separate from this thread.
I am hoping that some members here are familiar with Bayes’ Theorem and willing to share their knowledge or at the very least interested enough in the topic to do some research and share their opinions.
– What is Bayes’ Theorem?
– What can it tell us?
– How does it work?
– Can Bayes’ Theorem be abused, and if so, how?
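To seed the discussion, here is a minimal sketch of the theorem in code. The diagnostic-test numbers (1% prevalence, 99% sensitivity, 5% false-positive rate) are purely illustrative, chosen only because they make the counterintuitive result vivid:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
def posterior(prior, likelihood, false_positive_rate):
    """Probability of hypothesis H given positive evidence E."""
    # Total probability of E: true positives plus false positives.
    p_e = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_e

# Illustrative numbers: a condition with 1% prevalence, a test with
# 99% sensitivity and a 5% false-positive rate.
print(posterior(prior=0.01, likelihood=0.99, false_positive_rate=0.05))
# ~0.167: even a positive result from a "99% accurate" test leaves
# only about a 17% probability of having the condition.
```

That last point hints at one answer to the abuse question: the theorem is only as good as the prior and likelihood fed into it.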
Defending the validity and significance of the new theorem “Fundamental Theorem of Natural Selection With Mutations, Part II: Our Mutation-Selection Model”
– Bill Basener and John Sanford
Joe Felsenstein and Michael Lynch (JF and ML) wrote a blog post, “Does Basener and Sanford’s model of mutation vs selection show that deleterious mutations are unstoppable?” Their post is thoughtful and we are glad to continue the dialogue. We previously wrote a first part of a response to their post, focusing on the impact of R. A. Fisher’s work. This is the second part of our response, focusing on the modelling and mathematics. Our paper can be found at: https://link.springer.com/article/10.1007/s00285-017-1190-x
– Bill Basener and John Sanford
Joe Felsenstein and Michael Lynch (JF and ML) wrote a blog post, “Does Basener and Sanford’s model of mutation vs selection show that deleterious mutations are unstoppable?” Their post is thoughtful and we are glad to continue the dialogue. This is the first part of a response to their post, focusing on the impact of R. A. Fisher’s work. Our paper can be found at: https://link.springer.com/article/10.1007/s00285-017-1190-x
First, a short background on our paper:
The primary thesis of our paper is that Fisher was wrong, in a fundamental way, in his belief that his theorem (“The Fundamental Theorem of Natural Selection”) implied the certainty of ongoing fitness increase. His claim was that mutations continually provide variance, and selection turns the variance into fitness increase. Central to his logic was that, collectively, mutations have a net zero effect on fitness. While Fisher assumed mutations are collectively fitness-neutral, it is now known that the vast majority of mutations are deleterious. So mutations can potentially push fitness down, even in the presence of selection.
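For readers who want the formal statement behind this dispute: in one common textbook form, Fisher’s theorem says that the change in mean fitness $\bar{w}$ attributable to selection equals the additive genetic variance in fitness divided by the mean fitness,

$$\Delta \bar{w} \;=\; \frac{V_A(w)}{\bar{w}} \;\ge\; 0.$$

Since a variance cannot be negative, this appears to guarantee that selection never decreases mean fitness; the argument above is over what happens once a mutation term, which can be strongly negative, is added to the right-hand side. (This is the standard textbook form, not the exact notation of the Basener–Sanford paper.)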
Darwin’s conforming of his theory to the old vera causa ideal shows that the theory of natural selection is probabilistic not because it introduces a probabilistic law or principle, but because it invokes a probabilistic cause, natural selection, definable as nonfortuitous differential reproduction of hereditary variants.
Evolution is often presented as problem-solving. Genetic algorithms are often offered as proofs of evolution’s ability to solve problems. Genetic algorithms are search algorithms.
As one book says:
Fundamentally, all evolutionary algorithms can be viewed as search algorithms which search through a set of possible solutions looking for the best – or “fittest” – solution.
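To make the “search” framing concrete, here is a minimal sketch of a genetic algorithm over bit strings, maximizing a toy fitness function (the number of 1-bits). The parameter values and the truncation-selection scheme are arbitrary choices for this sketch, not anyone’s canonical settings:

```python
import random

def fitness(genome):
    """Toy objective ("ones-max"): count the 1-bits."""
    return sum(genome)

def mutate(genome, rate=0.01):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ (random.random() < rate) for bit in genome]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(length=50, pop_size=100, generations=200):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection: keep the fitter half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

print(fitness(evolve()))  # typically at or near the maximum of 50
```

Everything here (the fitness function, the selection rule, the population) is specified up front by the programmer, which is exactly where the question of what counts as the “problem” comes in.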
Tom has asked me to specify a problem independently from the evolutionary process. Now I have to admit that I don’t really understand what that means. But I like Tom and I have a lot of respect for him, so I want to give it my best shot and see where it takes us. I’m also hoping this will shed some light on claims about how problem-solving genetic algorithms are designed to solve a particular problem.
There’s been some debate here at TSZ recently about probability and the interpretation of probability.
I took some flak (my personal subjective opinion) for attempting to distinguish between calculating probabilities and estimating probabilities.
Yet in recent reading I came across this bit of text:
How do you determine the probability that a given event will occur? There are two ways: You can calculate it theoretically, or you can estimate it experimentally by performing a large number of trials.
– Probability: For the Enthusiastic Beginner, p. 335
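The distinction in that passage is easy to make concrete. A minimal sketch, using the probability of rolling a total of 7 with two fair dice (chosen only because the theoretical answer, 6/36, is easy to verify by hand):

```python
import random

# Theoretical calculation: 6 of the 36 equally likely ordered outcomes sum to 7.
theoretical = 6 / 36

# Experimental estimate: simulate a large number of trials and count successes.
trials = 1_000_000
hits = sum(random.randint(1, 6) + random.randint(1, 6) == 7 for _ in range(trials))
estimated = hits / trials

print(theoretical, estimated)
# Both ~0.1667, but the estimate carries sampling error that shrinks
# roughly as 1/sqrt(trials); the calculation is exact given the model.
```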
Yes, Tom English was right to warn us not to buy the book until the authors establish that their mathematical analysis of search applies to models of evolution.
But some of us have bought (or borrowed) the book nevertheless. As Denyse O’Leary said: “It is surprisingly easy to read.” I suppose she is right, as long as you do not try to follow their conclusions, but accept them as Gospel truth.
In the thread Who thinks Introduction to Evolutionary Informatics should be on your summer reading list? at Uncommon Descent, there is a list of endorsements – and I have to wonder if everyone who endorsed the book actually read it. “Rigorous and humorous”? Really?
Dembski, Marks, and Ewert will never explain how their work applies to models of evolution. But why not create a list of things which are problematic (or at least strange) with the book itself? Here is a start (partly copied from UD):
This Discoveroid article is amazing: “Could Atheism Survive the Discovery of Extraterrestrial Life?” I wish I could make a new post about it. They say that if life is found elsewhere, that too is a miracle, so then you gotta believe in the intelligent designer. They say:
“The probability of life spontaneously self-assembling anywhere in this universe is mind-staggeringly unlikely; essentially zero. If you are so unquestioningly naïve as to believe we just got incredibly lucky, then bless your soul.”
Actually, “they” who posted at Evolution News and Views is someone we all love dearly, and see occasionally in the Zone — that master of arguments from improbability, Kirk Durston.
Here, one of my brilliant MD PhD students and I study one of the “information” arguments against evolution. What do you think of our study?
I recently put this preprint on bioRxiv. To be clear, this study is not yet peer-reviewed, and I do not want anyone to miss this point. This is an “experiment” too: I’m curious to see if these types of studies are publishable. If they are, you might see more from me. Currently it is under review at a very good journal, so it might actually turn the corner and get out there. And a parallel question: do you think this type of work should be published?
I’m curious what the community thinks. I hope it is clear enough for non-experts to follow too. We went to great lengths to make the source code for the simulations available in an easy-to-read and annotated format. My hope is that a college-level student could follow the details. And even if you can’t, you can weigh in on whether the scientific community should publish this type of work.
“Functional Information” (estimated from the mutual information of protein sequence alignments) has been proposed as a reliable way of estimating the number of proteins with a specified function and the consequent difficulty of evolving a new function. The fantastic rarity of functional proteins computed by this approach emboldens some to argue that evolution is impossible. Random searches, it seems, would have no hope of finding new functions. Here, we use simulations to demonstrate that sequence alignments are a poor estimate of functional information. The mutual information of sequence alignments fantastically underestimates the true number of functional proteins. In addition to functional constraints, mutual information is also strongly influenced by a family’s history, mutational bias, and selection. Regardless, even if functional information could be reliably calculated, it tells us nothing about the difficulty of evolving new functions, because it does not estimate the distance between a new function and existing functions. Moreover, the pervasive observation of multifunctional proteins suggests that functions are actually very close to one another and abundant. Multifunctional proteins would be impossible if the FI argument against evolution were true.
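To make the quantity concrete for readers, here is a toy version of the kind of alignment-based estimate the abstract critiques: summing each column’s entropy reduction against a uniform null over the 20 amino acids. This is my own simplification for illustration, not the preprint’s actual code or the exact published method:

```python
import math
from collections import Counter

AMINO_ACIDS = 20

def column_entropy(column):
    """Shannon entropy (bits) of one alignment column."""
    total = len(column)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(column).values())

def fi_estimate(alignment):
    """Toy FI estimate: summed per-column entropy reduction relative
    to a uniform null of log2(20) bits per site."""
    max_entropy = math.log2(AMINO_ACIDS)
    return sum(max_entropy - column_entropy(col) for col in zip(*alignment))

# Tiny fake alignment: site 1 fully conserved, site 2 variable.
alignment = ["MK", "MR", "MK", "ML"]
print(fi_estimate(alignment))  # ~4.32 + ~2.82 = ~7.14 bits
```

The abstract’s point is that numbers computed this way reflect a family’s history and sampling, not just functional constraint, so they badly overstate how rare function really is.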
Given the importance of information theory to some intelligent design arguments, I thought it might be nice to have a toolkit of some basic functions related to the sorts of calculations associated with information theory, regardless of which side of the debate one is on.
What would those functions consist of?
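To get the ball rolling, here is a minimal sketch of what such a toolkit might contain: Shannon entropy, joint entropy, and mutual information over observed sequences of discrete symbols. The function names and the plug-in (frequency-count) estimator are just my own conventions for the sketch:

```python
import math
from collections import Counter

def entropy(seq):
    """Shannon entropy H(X), in bits, from the empirical frequencies of seq."""
    total = len(seq)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(seq).values())

def joint_entropy(xs, ys):
    """Joint entropy H(X, Y) of two paired sequences."""
    return entropy(list(zip(xs, ys)))

def mutual_information(xs, ys):
    """Mutual information I(X; Y) = H(X) + H(Y) - H(X, Y)."""
    return entropy(xs) + entropy(ys) - joint_entropy(xs, ys)

xs = "AABBABAB"
ys = "XXYYXYXY"
print(entropy(xs))                 # 1.0 bit: A and B are equally frequent
print(mutual_information(xs, ys))  # 1.0 bit: ys is fully determined by xs
```

Obvious next additions would be conditional entropy, Kullback–Leibler divergence, and perhaps Dembski-style specified-complexity quantities, so both sides can check each other’s arithmetic.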
The writings and life work of Ed Thorp, professor at MIT, influenced many of my notions of ID (though Thorp and Shannon are not ID proponents). I happened upon a forgotten mathematical paper by Ed Thorp in 1961 in the Proceedings of the National Academy of Sciences that launched his stellar career into Wall Street. If the TSZ regulars are tired of talking and arguing ID, then I offer a link to Thorp’s landmark paper. That 1961 PNAS article consists of a mere three pages. It is terse, and almost shocking in its economy of words and straightforward English. The paper can be downloaded from:
Thorp was a colleague of Claude Shannon (founder of information theory, and inventor of the notion of “bit”) at MIT. Thorp managed to publish his theory about blackjack through the sponsorship of Shannon. He was able to scientifically prove his theories in the casinos and Wall Street and went on to make hundreds of millions of dollars through his scientific approach to estimating and profiting from expected value. Thorp was the central figure in the real life stories featured in the book
Fortune’s Formula: The Untold Story of the Scientific Betting System That Beat the Casinos and Wall Street by William Poundstone.
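Since bet sizing is the heart of the story, a minimal sketch of the Kelly criterion, the “fortune’s formula” of Poundstone’s title, may be of interest. The example numbers are illustrative, not Thorp’s actual blackjack edge:

```python
def kelly_fraction(p, b):
    """Kelly criterion: fraction of bankroll to wager on a bet won with
    probability p that pays b-to-1 on a win.  f* = (b*p - q) / b, q = 1 - p."""
    q = 1 - p
    return (b * p - q) / b

# Illustrative: a 51% chance of winning an even-money (1-to-1) bet.
print(kelly_fraction(p=0.51, b=1))  # 0.02: wager 2% of the bankroll
```

Betting this fraction maximizes the long-run exponential growth rate of the bankroll; consistently betting more than twice the Kelly fraction actually makes that growth rate negative, even with a genuine edge.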
TSZ has made much ado about P(T|H), a conditional probability based on a materialistic hypothesis. They don’t seem to realize that H pertains to their position, and the fact that no H can be had means their position is untestable. The only reason the conditional probability exists in the first place is that the claims of evolutionists cannot be directly tested in a lab. If their claims could be directly tested then there wouldn’t be any need for a conditional probability.
If P(T|H) cannot be calculated it is due to the failure of evolutionists to provide H and their failure to find experimental evidence to support their claims.
I know what the complaints are going to be (“It is Dembski’s metric”), but it is defined in relation to your position, and it wouldn’t exist if you actually had something that could be scientifically tested.
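For readers without the notation at hand: in Dembski’s 2005 paper “Specification: The Pattern That Signifies Intelligence,” $P(T|H)$ is the probability of hitting the target pattern $T$ under the relevant chance hypothesis $H$, and it appears inside his specified-complexity measure

$$\chi \;=\; -\log_2\!\left[\,10^{120}\cdot \varphi_S(T)\cdot P(T \mid H)\,\right],$$

with design inferred when $\chi > 1$ (notation varies slightly across Dembski’s writings). The dispute above is precisely over who is obliged to supply $H$.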
Richard Dawkins’s computer simulation algorithm explores how long it takes a 28-letter-long phrase to evolve to become the phrase “Methinks it is like a weasel”. The Weasel program has a single example of the phrase which produces a number of offspring, with each letter subject to mutation, where there are 27 possible letters, the 26 letters A-Z and a space. The offspring that is closest to that target replaces the single parent. The purpose of the program is to show that creationist orators who argue that evolutionary biology explains adaptations by “chance” are misleading their audiences. Pure random mutation without any selection would lead to a random sequence of 28-letter phrases. There are $27^{28} \approx 1.2 \times 10^{40}$ possible 28-letter phrases, so it should take about $10^{40}$ different phrases before we found the target. That is without arranging that the phrase that replaces the parent is the one closest to the target. Once that highly nonrandom condition is imposed, the number of generations to success drops dramatically, from roughly $10^{40}$ to mere thousands.
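For concreteness, here is a minimal sketch of a Weasel-style program. Dawkins never published his original source code, so the population size, mutation rate, and scoring here are my own choices, not his:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "  # the 27 possible characters

def score(phrase):
    """Number of positions that match the target phrase."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.05):
    """Each letter independently mutates to a random character with probability rate."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET:
    generation += 1
    offspring = [mutate(parent) for _ in range(100)]  # 100 offspring per generation
    parent = max(offspring, key=score)                # the closest replaces the parent

print(generation)  # typically a few hundred generations with these settings
```

Note that, as in Dawkins’s description, the parent is always replaced by its best offspring; no single generation is guaranteed to improve, but selection makes success take tens of thousands of phrase evaluations rather than $10^{40}$.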
Although Dawkins’s Weasel algorithm is a dramatic success at making clear the difference between pure “chance” and selection, it differs from standard evolutionary models. It has only one haploid adult in each generation, and since the offspring that is most fit is always chosen, the strength of selection is in effect infinite. How does this compare to the standard Wright-Fisher model of theoretical population genetics?
Michael Behe is best known for coining the phrase Irreducible Complexity, but I think his likening of biological systems to Rube Goldberg machines is a better way to frame the problem of evolving the black boxes and the other extravagances of the biological world.
A century later we know that the overwhelming obstacle facing spontaneous generation is probability, or rather improbability, resulting from life’s enormously complex phenotypes. If even a single protein, a single specific sequence of amino acids, could not have emerged spontaneously, how much less so could a bacterium like E. coli with millions of proteins and other complex molecules? Modern biochemistry allows us to estimate the odds, and they demolish the spontaneous creation of complex organisms.
Looks like IDists aren’t the only ones to appeal to probability arguments. How does Wagner know what the probabilities are, or that spontaneous generation is even within the realm of what is possible?
(Edited Feb 2, 2016 to add eight figures)
Since 2005, Uncommon Descent (UD) – founded by William Dembski – has been the place to discuss intelligent design. Unfortunately, the moderation policy has always been one-sided (and quite arbitrary at the same time!). Since 2011, the statement “You don’t have to participate in UD” is no longer answered with gritted teeth only, but with a real alternative: Elizabeth Liddle’s The Skeptical Zone (TSZ). So, how were these two sites doing in 2015?
Number of Comments 2005 – 2015
In 2015, there were still 17% more comments at UD than at TSZ – 53,100 to 45,200.
Though UD is still going strong, there is a slight downward trend (yellow line) in the daily number of comments.