Reality consists of three spatial dimensions, with time adding a fourth. But what reason could we possibly have for putting such limits on reality? Do higher dimensions have any reality apart from their construction within a mathematical framework?
This past Friday I bumped into Dr. Michael Behe, and again on Saturday, along with Dr. Brian Miller, Research Coordinator of the Discovery Institute’s Center for Science & Culture (CSC), and Dr. Robert Larmer (UNB), currently President of the Canadian Society of (Evangelical) Christian Philosophers. The venue was a local apologetics conference (https://www.diganddelve.ca/). The event’s topic, “Science vs. Atheism: Is Modern Science Making Atheism Improbable?”, makes it relevant here at TSZ, where atheists & agnostics are more heavily represented among ‘skeptics’ than average.
On the positive side, I would encourage folks who visit this site to attend such events for learning/teaching purposes, whether for the ID speakers or not; good conversations are available among people honestly wrestling with and questioning the relationship between science, philosophy and theology/worldview, including on issues related to evolution, creation, and intelligence in the universe or on Earth. Don’t go to such events expecting miracles for your personal worldview in conversation with others, or expecting credibility in scientific publications or in the classroom, if you are using ‘science’ as a worldview weapon against ‘religion’ or ‘theology’. That argument just won’t fly anymore, and the Discovery Institute, to its credit, has played a role (how large a role is still difficult to tell) in making this shift happen.
A question arises: what would be the first question you would ask, or the first thing you would say, to Michael Behe if you bumped into him on the street?
Back in 2016, William Dembski officially ‘retired’ from ‘Intelligent Design’ theory & the IDM. He wrote that “the camaraderie I once experienced with colleagues and friends in the movement has largely dwindled.” https://billdembski.com/personal/official-retirement-from-intelligent-design/ This might have come rather late, after Dembski’s star had already started to fade. Indeed, it was more than 10 years after the Dover trial debacle, and already long after I personally heard another of the leaders of the IDM at the DI say, in 2003, that he no longer reads Dembski’s books. Yet no doubt Dr. Dembski was one of the leading voices of the IDM, if not the leading voice, for almost two decades. Here’s one UK IDist lamenting Dembski’s statement: https://designdisquisitions.wordpress.com/2017/02/19/william-dembski-moves-on-from-id-some-reflections/ Yet when a new paycheck from the Discovery Institute was offered in the Bradley Center, Dembski seems to have gotten right back on the ideological bandwagon in Seattle, reversing the dwindling of IDist camaraderie.
Paul Davies, cosmologist, physicist, and agnostic, together with Sara Imari Walker, proposed a theory that information, and not chemicals, is at the very foundation of life… Here
Why?
The greatest story ever told by activists in the intelligent design (ID) socio-political movement was that William Dembski had proved the Law of Conservation of Information, where the information was of a kind called specified complexity. The fact of the matter is that Dembski did not supply a proof, but instead sketched an ostensible proof, in No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002). He did not go on to publish the proof elsewhere, and the reason is obvious in hindsight: he never had a proof. In “Specification: The Pattern that Signifies Intelligence” (2005), Dembski instead radically altered his definition of specified complexity, and said nothing about conservation. In “Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information” (2010; preprint 2008), Dembski and Marks attached the term Law of Conservation of Information to claims about a newly defined quantity, active information, and gave no indication that Dembski had used the term previously. In Introduction to Evolutionary Informatics, Marks, Dembski, and Ewert address specified complexity only in an isolated chapter, “Measuring Meaning: Algorithmic Specified Complexity,” and do not claim that it is conserved. From the vantage of 2018, it is plain to see that Dembski erred in his claims about conservation of specified complexity, and later neglected to explain that he had abandoned them.
I am hoping that some members here are familiar with Bayes’ Theorem and willing to share their knowledge or at the very least interested enough in the topic to do some research and share their opinions.
– What is Bayes’ Theorem?
– What can it tell us?
– How does it work?
– Can Bayes’ Theorem be abused, and if so, how?
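To anchor the discussion, here is a minimal sketch of the theorem in code, using the standard medical-testing illustration with made-up numbers (the base rate, sensitivity, and false-positive rate below are assumptions chosen for illustration):

```python
def bayes_posterior(prior, likelihood, false_positive_rate):
    """P(H | E) via Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E),
    where P(E) is expanded by the law of total probability."""
    evidence = likelihood * prior + false_positive_rate * (1.0 - prior)
    return likelihood * prior / evidence

# Hypothetical test: 1% base rate, 99% sensitivity, 5% false positives.
posterior = bayes_posterior(prior=0.01, likelihood=0.99, false_positive_rate=0.05)
# A positive result still leaves only about a 17% chance of having the condition.
```

The counterintuitive smallness of the posterior, driven by the low prior, is exactly the sort of thing the theorem "can tell us," and misstating the prior is one of the classic ways it gets abused.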
As Tom English and others have discussed previously, there was a book published last year called Introduction to Evolutionary Informatics, the authors of which are Marks, Dembski, and Ewert.
The main point of the book is stated as:
Indeed, all current models of evolution require information from an external designer to work.
(The “external designer” they are talking about is the modeler who created the model.)
Another way they state their position:
We show repeatedly that the proposed models all require inclusion of significant knowledge about the problem being solved.
Somehow, they think it needs to be shown that modelers put information and knowledge into their models. This displays a fundamental misunderstanding of models and modeling.
It is a simple fact that a model of any kind, in its entirety, comes from a modeler. Any information in the model, however one defines information, is put in the model by the modeler. All structures and behaviors of any model are results of modeling decisions made by the modeler. Models are the modelers’ conceptions of reality. It is expected that modelers will add the best information they think they have in order to make their models realistic. Why wouldn’t they? For people who actually build and use models, like engineers and scientists, the main issue is realism.
To see a good presentation on the fundamentals of modeling, I recommend the videos and handbooks available free online from the Society for Industrial and Applied Mathematics (SIAM): “[Link]”.
For a good discussion on what it really means for a model to “work,” I recommend a paper called “Concepts of Model Verification and Validation”, which was put out by Los Alamos National Laboratory.
Defending the validity and significance of the new theorem “Fundamental Theorem of Natural Selection With Mutations”, Part II: Our Mutation-Selection Model
– Bill Basener and John Sanford
Joe Felsenstein and Michael Lynch (JF and ML) wrote a blog post, “Does Basener and Sanford’s model of mutation vs selection show that deleterious mutations are unstoppable?” Their post is thoughtful and we are glad to continue the dialogue. We previously wrote a first part of a response to their post, focusing on the impact of R. A. Fisher’s work. This is the second part of our response, focusing on the modelling and mathematics. Our paper can be found at: https://link.springer.com/article/10.1007/s00285-017-1190-x
– Bill Basener and John Sanford
Joe Felsenstein and Michael Lynch (JF and ML) wrote a blog post, “Does Basener and Sanford’s model of mutation vs selection show that deleterious mutations are unstoppable?” Their post is thoughtful and we are glad to continue the dialogue. This is the first part of a response to their post, focusing on the impact of R. A. Fisher’s work. Our paper can be found at: https://link.springer.com/article/10.1007/s00285-017-1190-x
First, a short background on our paper:
The primary thesis of our paper is that Fisher was wrong, in a fundamental way, in his belief that his theorem (“The Fundamental Theorem of Natural Selection”) implied the certainty of ongoing fitness increase. His claim was that mutations continually provide variance, and selection turns the variance into fitness increase. Central to his logic was that, collectively, mutations have a net zero effect on fitness. While Fisher assumed mutations are collectively fitness-neutral, it is now known that the vast majority of mutations are deleterious. So mutations can potentially push fitness down – even in the presence of selection.
by Joe Felsenstein and Michael Lynch
The blogs of creationists and advocates of ID have been abuzz lately about exciting new work by William Basener and John Sanford. In a peer-reviewed paper at Journal of Mathematical Biology, they have presented a mathematical model of mutation and natural selection in a haploid population, and they find in one realistic case that natural selection is unable to prevent the continual decline of fitness. This is presented as correcting R.A. Fisher’s 1930 “Fundamental Theorem of Natural Selection”, which they argue is the basis for all subsequent theory in population genetics. The blog postings on that will be found here, here, here, here, here, here, and here.
One of us (JF) has argued at The Skeptical Zone that they have misread the literature on population genetics. The theory of mutation and natural selection was developed during the 1920s and was relatively complete before Fisher’s 1930 book. Fisher’s FTNS has been difficult to understand, and subsequent work has not depended on it. But that still leaves us with the issue of whether the B and S simulations show some startling behavior, with deleterious mutations seemingly impossible to prevent from continually rising in frequency. Let’s take a closer look at their simulations.
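As a baseline for that closer look, here is a deliberately simplified haploid mutation–selection sketch (my own toy model for illustration, not Basener and Sanford’s simulation, and the parameter values are arbitrary): individuals carry k deleterious mutations, fitness is (1 − s)^k, and each generation a fraction u of every class gains one more mutation.

```python
def step(freqs, s, u):
    """One generation: selection (reweight by fitness) then mutation."""
    # Selection: weight each mutation class by its fitness, renormalize.
    weighted = [f * (1.0 - s) ** k for k, f in enumerate(freqs)]
    total = sum(weighted)
    selected = [w / total for w in weighted]
    # Mutation: with probability u, move to the next (worse) class.
    out = [0.0] * (len(freqs) + 1)
    for k, f in enumerate(selected):
        out[k] += f * (1.0 - u)
        out[k + 1] += f * u
    return out

def mean_fitness(freqs, s):
    return sum(f * (1.0 - s) ** k for k, f in enumerate(freqs))

freqs = [1.0]  # start with a mutation-free population
for _ in range(2000):
    freqs = step(freqs, s=0.02, u=0.002)
```

In this toy model the decline halts: mean fitness settles near 1 − u, the classical Haldane–Muller result that mutation load depends on the mutation rate rather than the selection coefficient. The question at issue is which ingredients of Basener and Sanford’s fuller model, if any, change that outcome.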
The blogs of creationists and ID advocates have been buzzing with the news that a new paper by William Basener and John Sanford, in Journal of Mathematical Biology, shows that natural selection will not lead to the increase of fitness. Some of the blog reports will be found here, here, here, here, here, and here. Sal Cordova has been quoting the paper at length in a comment here.
Basener and Sanford argue that the Fundamental Theorem of Natural Selection, put forward by R.A. Fisher in his 1930 book The Genetical Theory of Natural Selection, was the main foundation of the Modern Evolutionary Synthesis of the 1930s and 1940s, and that when mutation is added to the evolutionary forces modeled by that theorem, it can be shown that fitnesses typically decline rather than increase. They argue that Fisher expected increase of fitness to be typical (they call this “Fisher’s Theorem”).
I’m going to argue here that this is a wrong reading of the history of theoretical population genetics and of the history of the Modern Synthesis. In a separate post, in a few days at Panda’s Thumb, I will argue that Basener and Sanford’s computer simulation has a fatal flaw that makes its behavior quite atypical of evolutionary processes.
There is a pretty interesting discussion going on in Noyau regarding the many definitions of “fitness” in evolutionary biology. It would be a shame for it to be lost in that particular venue here at TSZ. At the risk of being censored by the admins for posting too many OPs in one month I thought I’d start this thread.
Here’s my take so far:
Allan Miller was charged by phoodoo with resorting to different definitions of fitness. Allan denied the charge and, when asked for a definition of fitness, provided one. Allan later stated that his definition properly applied only to asexual species.
Others chimed in to say that the definition of fitness depends on the context, which hardly seems to contradict what phoodoo was saying.
My own position is that fitness has its definition within a particular mathematical framework. My position is also that fitness can be defined generically but that such a definition is tautological. Special definitions of fitness are required to make the concept testable.
Here’s hoping we can move the discussion about fitness out of Noyau.
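For concreteness, here is one special definition of the generic concept, sketched in code (my own illustration, not a definition anyone in the thread committed to): the relative fitness of two competing asexual lineages as the ratio of their per-generation growth factors. The counts below are invented.

```python
def relative_fitness(a_before, a_after, b_before, b_after):
    """Relative fitness of type A vs type B over one generation:
    the ratio of their per-generation growth factors."""
    growth_a = a_after / a_before
    growth_b = b_after / b_before
    return growth_a / growth_b

# Hypothetical counts: A grows 100 -> 150, B grows 100 -> 120.
w = relative_fitness(a_before=100, a_after=150, b_before=100, b_after=120)
```

Defined this way, fitness is an empirically measurable quantity rather than a tautology: it can be estimated from counts before any claim about which type will spread.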
Yes, Tom English was right to warn us not to buy the book until the authors establish that their mathematical analysis of search applies to models of evolution.
But some of us have bought (or borrowed) the book nevertheless. As Denyse O’Leary said: It is surprisingly easy to read. I suppose she is right, as long as you do not try to follow their conclusions, but accept it as Gospel truth.
In the thread Who thinks Introduction to Evolutionary Informatics should be on your summer reading list? at Uncommon Descent, there is a list of endorsements – and I have to wonder if everyone who endorsed the book actually read it. “Rigorous and humorous”? Really?
Dembski, Marks, and Ewert will never explain how their work applies to models of evolution. But why not create a list of things which are problematic (or at least strange) about the book itself? Here is a start (partly copied from UD):
… the authors establish that their mathematical analysis of search applies to models of evolution.
I have all sorts of fancy stuff to say about the new book by Marks, Dembski, and Ewert. But I wonder whether I should say anything fancy at all. There is a ginormous flaw in evolutionary informatics, quite easy to see when it’s pointed out to you. The authors develop mathematical analysis of apples, and then apply it to oranges. You need not know what apples and oranges are to see that the authors have got some explaining to do. When applying the analysis to an orange, they must identify their assumptions about apples, and show that the assumptions hold also for the orange. Otherwise the results are meaningless.
The authors have proved that there is “conservation of information” in search for a solution to a problem. I have simplified, generalized, and trivialized their results. I have also explained that their measure of “information” is actually a measure of performance. But I see now that the technical points really do not matter. What matters is that the authors have never identified, let alone justified, the assumptions of the math in their studies of evolutionary models. They have measured “information” in models, and made a big deal of it because “information” is conserved in search for a solution to a problem. What does search for a solution to a problem have to do with modeling of evolution? Search me. In the absence of a demonstration that their “conservation of information” math applies to a model of evolution, their measurement of “information” means nothing. It especially does not mean that the evolutionary process in the model is intelligently designed by the modeler.
Denyse O’Leary, an advocacy journalist employed by one of the principals of the Center for Evolutionary Informatics, reports that I have essentially retracted the first of my papers on the “no free lunch” theorems for search (1996). What I actually have done in my online copy of the paper, marked “emended and amplified,” is to correct an expository error that Dembski and Marks elevated to “English’s Principle of Conservation of Information” in the first of their publications, “Conservation of Information in Search: Measuring the Cost of Success.” Marks, Dembski, and Ewert have responded, in their new book, by deleting me from the history of “no free lunch.” And the consequence is rather amusing. For now, when explaining conservation of information in terms of no free lunch, they refer over and over to performance. It doesn’t take a computer scientist, or even a rocket scientist, to see that they are describing conservation of performance, and calling it conservation of information.
The mathematical results of my paper are correct, though poorly argued. In fact, the theorem I provide is more general than the main theorem of Wolpert and Macready, which was published the following year. If you’re going to refer to one of the two theorems as the No Free Lunch Theorem, then it really should be mine. Where I go awry is in the exposition of my results. I mistake a lemma as indicating that conservation of performance in search is due ultimately to conservation of information in search.
Here, one of my brilliant MD PhD students and I study one of the “information” arguments against evolution. What do you think of our study?
I recently put this preprint on bioRxiv. To be clear, this study is not yet peer-reviewed, and I do not want anyone to miss that point. This is an “experiment” too: I’m curious to see whether these types of studies are publishable. If they are, you might see more from me. Currently it is under review at a very good journal, so it might actually turn the corner and get out there. And a parallel question: do you think this type of work should be published?
I’m curious what the community thinks. I hope it is clear enough for non-experts to follow too. We went to great lengths to make the source code for the simulations available in an easy-to-read and annotated format. My hope is that a college-level student could follow the details. And even if you can’t, you can still weigh in on whether the scientific community should publish this type of work.
“Functional Information”—estimated from the mutual information of protein sequence alignments—has been proposed as a reliable way of estimating the number of proteins with a specified function and the consequent difficulty of evolving a new function. The fantastic rarity of functional proteins computed by this approach emboldens some to argue that evolution is impossible. Random searches, it seems, would have no hope of finding new functions. Here, we use simulations to demonstrate that sequence alignments are a poor estimate of functional information. The mutual information of sequence alignments fantastically underestimates the true number of functional proteins. In addition to functional constraints, mutual information is also strongly influenced by a family’s history, mutational bias, and selection. Regardless, even if functional information could be reliably calculated, it tells us nothing about the difficulty of evolving new functions, because it does not estimate the distance between a new function and existing functions. Moreover, the pervasive observation of multifunctional proteins suggests that functions are actually very close to one another and abundant. Multifunctional proteins would be impossible if the FI argument against evolution were true.
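To make the estimator concrete, here is a sketch of one common version of the alignment-based calculation being critiqued (a Durston-style column sum over a toy four-sequence alignment I invented, not real protein data): each column contributes log2(20) minus its observed amino-acid entropy.

```python
import math
from collections import Counter

def column_entropy(column):
    """Shannon entropy (bits) of the amino acids observed in one column."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def functional_information(alignment):
    """Column-wise FI estimate: sum of (log2(20) - H) over columns."""
    columns = zip(*alignment)
    return sum(math.log2(20) - column_entropy(col) for col in columns)

# Toy alignment of four sequences; columns 1, 2, and 4 are fully conserved.
toy_alignment = ["ACDA", "ACEA", "ACDA", "ACFA"]
fi = functional_information(toy_alignment)
```

The sketch makes the paper’s sampling point visible: with only four sequences, a column can look perfectly conserved by chance, inflating the apparent “functional information” well beyond what the function itself demands.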
I am working on a series of tutorials to cover the basics of Intelligent Design, especially the mathematics of it. This is my tutorial on Specified Complexity, and I would appreciate any thoughtful criticism of it.
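As one concrete point of reference for that criticism, here is the simplest calculation usually presented in such tutorials (my own sketch, not taken from the tutorial under discussion): the improbability of a specified outcome under a chance hypothesis, expressed in bits and compared against Dembski’s roughly 500-bit universal probability bound.

```python
import math

def surprisal_bits(probability):
    """Improbability of an outcome under a chance hypothesis, in bits."""
    return -math.log2(probability)

# Dembski's universal probability bound, ~1 in 10**150, i.e. ~500 bits.
UNIVERSAL_BOUND_BITS = 500

# A specified sequence of 100 fair coin flips: 100 bits, far below the bound,
# so this simple version of the argument would not infer design here.
bits = surprisal_bits(0.5 ** 100)
exceeds_bound = bits > UNIVERSAL_BOUND_BITS
```

The contested steps in the full argument are, of course, the choice of chance hypothesis and what counts as a “specification,” not this arithmetic.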
Given the importance of information theory to some intelligent design arguments I thought it might be nice to have a toolkit of some basic functions related to the sorts of calculations associated with information theory, regardless of which side of the debate one is on.
What would those functions consist of?
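As a starting point, here is one possible minimal toolkit (a sketch; the function names and conventions are my own, and distributions are plain lists of probabilities summing to 1):

```python
import math

def surprisal(p):
    """Self-information of an outcome with probability p, in bits."""
    return -math.log2(p)

def entropy(dist):
    """Shannon entropy H(X) in bits of a discrete distribution."""
    return sum(-p * math.log2(p) for p in dist if p > 0)

def mutual_information(joint):
    """I(X;Y) in bits from a joint distribution given as a 2-D list."""
    px = [sum(row) for row in joint]                 # marginal of X
    py = [sum(col) for col in zip(*joint)]           # marginal of Y
    return sum(
        p * math.log2(p / (px[i] * py[j]))
        for i, row in enumerate(joint)
        for j, p in enumerate(row)
        if p > 0
    )

fair_coin = entropy([0.5, 0.5])  # 1 bit
```

Obvious extensions would be conditional entropy, Kullback–Leibler divergence, and empirical estimators from samples, which cover most of the calculations that come up in these debates.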
The writings and life work of Ed Thorp, professor at MIT, influenced many of my notions of ID (though Thorp and Shannon are not ID proponents). I happened upon a forgotten 1961 mathematical paper by Ed Thorp in the Proceedings of the National Academy of Sciences that launched his stellar career on Wall Street. If the TSZ regulars are tired of talking and arguing about ID, then I offer a link to Thorp’s landmark paper. That 1961 PNAS article consists of a mere three pages. It is terse, and almost shocking in its economy of words and straightforward English. The paper can be downloaded from:
Thorp was a colleague of Claude Shannon (founder of information theory, and inventor of the notion of the “bit”) at MIT. Thorp managed to publish his theory about blackjack through the sponsorship of Shannon. He went on to prove his theories scientifically in casinos and on Wall Street, making hundreds of millions of dollars through his scientific approach to estimating and profiting from expected value. Thorp was the central figure in the real-life stories featured in the book
Fortune’s Formula: The Untold Story of the Scientific Betting System That Beat the Casinos and Wall Street, by William Poundstone.