Evo-Info review: Do not buy the book until…

Introduction to Evolutionary Informatics, by Robert J. Marks II, the “Charles Darwin of Intelligent Design”; William A. Dembski, the “Isaac Newton of Information Theory”; and Winston Ewert, the “Charles Ingram of Active Information.” World Scientific, 332 pages.
Classification: Engineering mathematics. Engineering analysis. (TA347)
Subjects: Evolutionary computation. Information technology–Mathematics.

… the authors establish that their mathematical analysis of search applies to models of evolution.

I have all sorts of fancy stuff to say about the new book by Marks, Dembski, and Ewert. But I wonder whether I should say anything fancy at all. There is a ginormous flaw in evolutionary informatics, quite easy to see when it’s pointed out to you. The authors develop a mathematical analysis of apples, and then apply it to oranges. You need not know what apples and oranges are to see that the authors have got some explaining to do. When applying the analysis to an orange, they must identify their assumptions about apples, and show that the assumptions hold also for the orange. Otherwise the results are meaningless.

The authors have proved that there is “conservation of information” in search for a solution to a problem. I have simplified, generalized, and trivialized their results. I have also explained that their measure of “information” is actually a measure of performance. But I see now that the technical points really do not matter. What matters is that the authors have never identified, let alone justified, the assumptions of the math in their studies of evolutionary models.a They have measured “information” in models, and made a big deal of it because “information” is conserved in search for a solution to a problem. What does search for a solution to a problem have to do with modeling of evolution? Search me. In the absence of a demonstration that their “conservation of information” math applies to a model of evolution, their measurement of “information” means nothing. It especially does not mean that the evolutionary process in the model is intelligently designed by the modeler.1

Continue reading

Evo-Info sidebar: Conservation of performance in search

Introduction to Evolutionary Informatics, by Robert J. Marks II, the “Charles Darwin of Intelligent Design”; William A. Dembski, the “Isaac Newton of Information Theory”; and Winston Ewert, the “Charles Ingram of Active Information.” World Scientific, 332 pages.
Classification: Engineering mathematics. Engineering analysis. (TA347)
Subjects: Evolutionary computation. Information technology–Mathematics.

Denyse O’Leary, an advocacy journalist employed by one of the principals of the Center for Evolutionary Informatics, reports that I have essentially retracted the first of my papers on the “no free lunch” theorems for search (1996). What I actually have done in my online copy of the paper, marked “emended and amplified,” is to correct an expository error that Dembski and Marks elevated to “English’s Principle of Conservation of Information” in the first of their publications, “Conservation of Information in Search: Measuring the Cost of Success.” Marks, Dembski, and Ewert have responded, in their new book, by deleting me from the history of “no free lunch.” And the consequence is rather amusing. For now, when explaining conservation of information in terms of no free lunch, they refer over and over to performance.1 It doesn’t take a computer scientist, or even a rocket scientist, to see that they are describing conservation of performance, and calling it conservation of information.

The mathematical results of my paper are correct, though poorly argued. In fact, the theorem I provide is more general than the main theorem of Wolpert and Macready, which was published the following year.2 If you’re going to refer to one of the two theorems as the No Free Lunch Theorem, then it really should be mine. Where I go awry is in the exposition of my results. I misread a lemma as indicating that conservation of performance in search is due ultimately to conservation of information in search.
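For readers who have not seen it stated, the main theorem of Wolpert and Macready (their 1997 formulation; the notation below is theirs, lightly abridged) says that, summed uniformly over all cost functions, every search algorithm produces the same distribution of observed cost values:

```latex
% No Free Lunch (Wolpert & Macready 1997, Theorem 1): for any pair of
% algorithms a_1 and a_2, any number m of distinct points sampled, and
% any sequence d_m^y of observed cost values,
\[
  \sum_{f} P\bigl(d_m^{y} \mid f, m, a_1\bigr)
  = \sum_{f} P\bigl(d_m^{y} \mid f, m, a_2\bigr) ,
\]
% where the sum runs over all cost functions f : X -> Y. Consequently any
% performance measure that depends only on the observed cost values has the
% same average over all cost functions for every algorithm, which is what
% "conservation of performance" refers to.
```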
Continue reading

Evolution and Functional Information

Here, one of my brilliant MD PhD students and I study one of the “information” arguments against evolution. What do you think of our study?

I recently posted this preprint on bioRxiv. To be clear, this study is not yet peer-reviewed, and I do not want anyone to miss this point. This is an “experiment” too: I’m curious to see if these types of studies are publishable. If they are, you might see more from me. Currently it is under review at a very good journal, so it might actually turn the corner and get out there. And a parallel question: do you think this type of work should be published?

 

I’m curious what the community thinks. I hope it is clear enough for non-experts to follow too. We went to great lengths to make the source code for the simulations available in an easy-to-read, annotated format. My hope is that a college-level student could follow the details. And even if you can’t, you can weigh in on whether the scientific community should publish this type of work.

Functional Information and Evolution

http://www.biorxiv.org/content/early/2017/03/06/114132

“Functional Information” (FI), estimated from the mutual information of protein sequence alignments, has been proposed as a reliable way of estimating the number of proteins with a specified function and the consequent difficulty of evolving a new function. The fantastic rarity of functional proteins computed by this approach emboldens some to argue that evolution is impossible. Random searches, it seems, would have no hope of finding new functions. Here, we use simulations to demonstrate that sequence alignments are a poor estimate of functional information. The mutual information of sequence alignments fantastically underestimates the true number of functional proteins. In addition to functional constraints, mutual information is also strongly influenced by a family’s history, mutational bias, and selection. Regardless, even if functional information could be reliably calculated, it tells us nothing about the difficulty of evolving new functions, because it does not estimate the distance between a new function and existing functions. Moreover, the pervasive observation of multifunctional proteins suggests that functions are actually very close to one another and abundant. Multifunctional proteins would be impossible if the FI argument against evolution were true.
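To make the alignment-based estimate concrete, here is a rough, Durston-style sketch in Python: for each alignment column, take the drop from the maximum per-site entropy to the observed column entropy, and sum over columns. This is only an illustration of the general idea, with gaps and many details ignored; it is not the code used in the preprint.

```python
import math
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def column_entropy(column):
    """Shannon entropy (bits) of the residue frequencies in one alignment column."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def functional_information(alignment):
    """Alignment-based FI estimate: sum over columns of (max entropy - observed entropy).

    alignment: list of equal-length protein sequences (no gaps, for simplicity).
    """
    h_max = math.log2(len(AMINO_ACIDS))   # 20 equiprobable residues, ~4.32 bits
    return sum(h_max - column_entropy(col) for col in zip(*alignment))

# Toy alignment: a perfectly conserved column contributes ~4.32 bits,
# a variable column contributes less.
toy = ["MKVLA", "MKILA", "MRVLG", "MKVLA"]
print(functional_information(toy))
```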

Dice Entropy – A Programming Challenge

Given the importance of information theory to some intelligent design arguments, I thought it might be nice to have a toolkit of basic functions for the sorts of calculations associated with information theory, regardless of which side of the debate one is on.

What would those functions consist of?
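One possible starting point, sketched in Python; the function names and interfaces here are only suggestions, not a specification:

```python
import math
from collections import Counter

def shannon_entropy(probs, base=2):
    """Shannon entropy -sum p*log(p) of a probability distribution (zero terms skipped)."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

def empirical_entropy(samples, base=2):
    """Entropy of the empirical distribution of a sample, e.g. a sequence of die rolls."""
    counts = Counter(samples)
    n = len(samples)
    return shannon_entropy([c / n for c in counts.values()], base)

# A fair six-sided die: log2(6) = about 2.585 bits per roll.
print(shannon_entropy([1/6] * 6))
# A loaded die carries less entropy per roll.
print(shannon_entropy([0.5, 0.1, 0.1, 0.1, 0.1, 0.1]))
# Entropy of the empirical distribution of a short run of rolls.
print(empirical_entropy([1, 1, 2, 3, 4, 5, 6, 6]))
```

Obvious extensions would be joint and conditional entropy, mutual information, and Kullback-Leibler divergence.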

Continue reading

Thorp, Shannon: Inspiration for Alternative Perspectives on the ID vs. Naturalism Debate

The writings and life work of Ed Thorp, a professor at MIT, influenced many of my notions of ID (though Thorp and Shannon are not ID proponents). I happened upon a forgotten 1961 mathematical paper by Ed Thorp in the Proceedings of the National Academy of Sciences, the paper that launched his stellar career on Wall Street. If the TSZ regulars are tired of talking and arguing about ID, then I offer a link to Thorp’s landmark paper. That 1961 PNAS article consists of a mere three pages. It is terse, almost shocking in its economy of words, and written in straightforward English. The paper can be downloaded from:

A Favorable Strategy for Twenty-One, Proceedings of the National Academy of Sciences.

Thorp was a colleague of Claude Shannon (founder of information theory, and inventor of the notion of the “bit”) at MIT. Thorp managed to publish his theory about blackjack through the sponsorship of Shannon. He was able to scientifically prove his theories in the casinos and on Wall Street, and went on to make hundreds of millions of dollars through his scientific approach to estimating and profiting from expected value. Thorp was the central figure in the real-life stories featured in the book
Fortune’s Formula: The Untold Story of the Scientific Betting System That Beat the Casinos and Wall Street by William Poundstone.
Continue reading

Wistar Day

Koprowski and I, the only biologists present, were confronted by a rather weird discussion between four mathematicians – Eden, Schutzenberger, Weisskopf, and Ulam – on mathematical doubts concerning the Darwinian theory of evolution. At the end of several hours of heated debate, the biological contingent proposed that a symposium be arranged to consider the points of dispute more systematically, and with a more powerful array of biologists who could function adequately in the universe of discourse inhabited by mathematicians.

– Martin Kaplan

Continue reading

A Beautiful Question

I’ve just completed the book A Beautiful Question: Finding Nature’s Deep Design by Nobel Prize-winning physicist Frank Wilczek.

This book is a long meditation on a single question:

Does the world embody beautiful ideas?

Our Question may seem like a strange thing to ask. Ideas are one thing, physical bodies are quite another. What does it mean to “embody” an “idea”?

Embodying ideas is what artists do. Starting from visionary conceptions, artists produce physical objects (or quasi-physical products, like musical scores that unfold into sound). Our Beautiful Question then is close to this one:

Is the world a work of art?

Continue reading

Wright, Fisher, and the Weasel

Richard Dawkins’s computer simulation algorithm explores how long it takes a 28-character phrase to evolve into the phrase “Methinks it is like a weasel”. The Weasel program has a single copy of the phrase, which produces a number of offspring, with each letter subject to mutation among 27 possible characters: the 26 letters A-Z and a space. The offspring that is closest to the target replaces the single parent. The purpose of the program is to show that creationist orators who argue that evolutionary biology explains adaptations by “chance” are misleading their audiences. Pure random mutation without any selection would lead to a random sequence of 28-character phrases. There are 27^{28} possible 28-character phrases, so it should take about 10^{40} different phrases before we find the target. That is without arranging that the phrase that replaces the parent is the one closest to the target. Once that highly nonrandom condition is imposed, the number of generations to success drops dramatically, from 10^{40} to mere thousands.
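For readers who want to experiment, here is a minimal sketch of the algorithm as just described, in Python. The mutation rate and brood size are arbitrary illustrative choices, not Dawkins’s original settings:

```python
import random
import string

ALPHABET = string.ascii_uppercase + " "     # 27 possible characters
TARGET = "METHINKS IT IS LIKE A WEASEL"      # 28 characters

def mutate(phrase, mu):
    """Copy the phrase, replacing each character with a random one with probability mu."""
    return "".join(random.choice(ALPHABET) if random.random() < mu else c
                   for c in phrase)

def matches(phrase):
    """Number of positions at which the phrase agrees with the target."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def weasel(brood_size=100, mu=0.05):
    """Run the single-parent Weasel until the target is reached; return the generation count."""
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        generation += 1
        offspring = [mutate(parent, mu) for _ in range(brood_size)]
        parent = max(offspring, key=matches)  # the closest offspring replaces the parent
    return generation

print(weasel())   # typically on the order of a hundred generations, not 10^40 phrases
```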

Although Dawkins’s Weasel algorithm is a dramatic success at making clear the difference between pure “chance” and selection, it differs from standard evolutionary models. It has only one haploid adult in each generation, and since the offspring that is most fit is always chosen, the strength of selection is in effect infinite. How does this compare to the standard Wright-Fisher model of theoretical population genetics?

Continue reading