A group of scientists and economists has devised a simple, tunable strategy for achieving exponential decrease in the number of new cases of Covid-19 while partially reopening the economy — or so it seems to me. The simplest form of the strategy is to alternate between consecutive days of work and consecutive days of lockdown. Although I am reluctant to add to the cacophony of inexpert opinions on how to deal with the pandemic, I will say that the strategy obviously works in an epidemiologic sense if the number of workdays per two-week cycle is sufficiently small. Furthermore, it is obvious that the number of workdays can be adjusted in response to the number of new cases. However, it is not obvious that the number of workdays can be set sufficiently high for the strategy to work in an economic sense. Modeling reported in the preprint “Cyclic Exit Strategies to Suppress COVID-19 and Allow Economic Activity” indicates that the epidemiologically required number of lockdown days is likely to be small enough to leave room for substantial economic activity. In other words, it seems that people might work half-time while driving the number of new cases toward zero. I am not qualified to judge epidemiological models, but will note that the results make sense if it is indeed the case that there is a “three-day delay on average between the time a person is infected and the time he or she can infect others.”
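The arithmetic behind such a cyclic strategy is easy to check with a toy calculation. The sketch below is my own, not the preprint's model; it ignores the infection delay entirely, and the daily growth factors r_work and r_lock are illustrative assumptions. It merely shows that whether cases decline depends on how many workdays the cycle contains.

```python
# Toy discrete-time model of a cyclic work/lockdown strategy.
# Parameters are illustrative assumptions, not taken from the preprint.

def simulate(workdays, cycle=14, days=140, r_work=1.3, r_lock=0.7, new0=100.0):
    """Daily new cases under a repeating cycle of `workdays` workdays
    followed by (cycle - workdays) lockdown days. r_work and r_lock are
    assumed daily growth factors on work and lockdown days."""
    new = new0
    history = [new]
    for day in range(days):
        r = r_work if (day % cycle) < workdays else r_lock
        new *= r
        history.append(new)
    return history

if __name__ == "__main__":
    for w in (4, 7, 10):
        h = simulate(w)
        trend = "decline" if h[-1] < h[0] else "growth"
        print(f"{w} workdays per 14-day cycle: {trend} "
              f"(final/initial = {h[-1] / h[0]:.3g})")
```

Under these assumed growth factors, both the 4-day and the 7-day (half-time) cycles drive new cases toward zero, while 10 workdays per cycle yields exponential growth.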
To be perfectly clear, I have not become a true believer in a strategy addressed in a preprint. I am saying that we should reject the notion that the pandemic will end only with herd immunity. It is not irrational to say that there may be, in the absence of an effective vaccination program, practicable methods of preventing most people from being infected, and that we should keep looking for them.
In Section 4 of their article, Nemati and Holloway claim to have identified an error in a post of mine. They do not cite the post, but instead name me and link to the homepage of The Skeptical Zone. Thus there can be no question that the authors regard technical material that I post here as worthy of a response in Bio-Complexity. (A year earlier, George Montañez modified a Bio-Complexity article, adding information that I supplied in precisely the post that Nemati and Holloway address.) Interacting with me at TSZ a month ago, Eric Holloway acknowledged an error in an equation that I had told him was wrong, expressed interest in seeing the next part of my review, and said, “If there is a fundamental flaw in the second half, as you claim, then I’ll retract it if it is unfixable.” I subsequently put a great deal of work into “The Old Switcheroo,” trying to anticipate all of the ways in which Holloway might wiggle out of acknowledging his errors. Evidently I left him no avenue of escape, given that he now refuses to engage at all, and insists that I submit my criticisms to Bio-Complexity.
David Nemati and Eric Holloway, “Expected Algorithmic Specified Complexity.” Bio-Complexity 2019 (2):1-10. doi:10.5048/BIO-C.2019.2. Editor: William Basener. Editor-in-Chief: Robert J. Marks II.
Eric Holloway has littered cyberspace with claims, arrived at by faulty reasoning, that the “laws of information conservation (nongrowth)” in data processing hold for algorithmic specified complexity as they do for algorithmic mutual information. It is essential to understand that there are infinitely many measures of algorithmic specified complexity. Nemati and Holloway would have us believe that each of them is a quantification of the meaningful information in binary strings, i.e., finite sequences of 0s and 1s. If Holloway’s claims were correct, then there would be a limit on the increase in algorithmic specified complexity resulting from execution of a computer program (itself a string). Whichever one of the measures were applied, the difference in algorithmic specified complexity of the string output by the process and the string input to the process would be at most the program length plus a constant. It would follow, more generally, that an approximate upper bound on the difference in algorithmic specified complexity of strings x and y is the length of the shortest program that outputs y on input of x. Of course, the measure must be the same for both strings. Otherwise it would be absurd to speak of (non-)conservation of a quantity of algorithmic specified complexity.
I was short with Joe Felsenstein in the comments section of “Stark Incompetence,” a post in which I address, well, um, the stark incompetence on display in a recent publication of Eric Holloway. I have apologized to Joe, and promised to make amends with a brief post on the topic that he wants to address. Now, the topic is a putative model that Eric introduced in “Mutual Algorithmic Information, Information Non-growth, and Allele Frequency” (or perhaps an improved version of the model). Here is a remark that I addressed to Joe:
Tom English: As you know, if a putative model is logically inconsistent, then it is not a model of anything. I claim that EricMH’s putative model is logically inconsistent. You had better prove that it is consistent, or turn it into something that you can prove is consistent, before going on to discuss its biological relevance.
I will not have to go far into Eric’s post to identify inconsistencies. After explaining the inconsistencies, which I doubt can be eliminated, I will remark on why the “model” is not worth salvaging. The gist is that Eric’s attempted analysis puts a halting, output-generating simulator of a non-halting, non-output-generating evolutionary process in place of the process itself. An analysis of the simulator would not, in any case, be an analysis of the simuland.
David Nemati and Eric Holloway, “Expected Algorithmic Specified Complexity.” Bio-Complexity 2019 (2):1-10. doi:10.5048/BIO-C.2019.2. Editor: William Basener. Editor-in-Chief: Robert J. Marks II.
Let us start by examining a part of the article that everyone can see is horrendous. When I supply proofs, in a future post, that other parts of the article are wrong, few of you will follow the details. But even the mathematically uninclined should understand, after reading what follows, that
the authors of a grotesque mangling of lower-level mathematics are unlikely to get higher-level mathematics correct, and
the reviewers and editors who approved the mangling are unlikely to have given the rest of the article adequate scrutiny.
… proved without reference to infinity and the empty string.
Some readers have objected to my simple proof that computable transformation of a binary string can result in an infinite increase of algorithmic specified complexity (ASC). Here I give a less-simple proof that there is no upper bound on the difference in ASC of a string x and its computable transform f(x). To put it more correctly, I show that the difference can be any positive real number.
Updated 12/8/2019: The assumptions of my theorem were unnecessarily restrictive. I have relaxed the assumptions, without changing the proof.
Marks et al. claim that algorithmic specified complexity is a measure of meaning. If this is so, then algorithmic mutual information is also a measure of meaning. Yet no one working in the field of information theory has ever regarded it as such. Thus Marks et al. bear the burden of explaining how they have gotten the interpretation of algorithmic mutual information right, and how everyone else has gotten it wrong.
It should not come as a shock that the “law of information conservation (nongrowth)” for algorithmic mutual information, a special case of algorithmic specified complexity, does not hold for algorithmic specified complexity in general.
My formal demonstration of unbounded growth of algorithmic specified complexity (ASC) in data processing also serves to counter the notion that ASC is a measure of meaning. I did not explain this in Evo-Info 4, and will do so here, suppressing as much mathematical detail as I can. You need to know that a binary string is a finite sequence of 0s and 1s, and that the empty (length-zero) string is denoted λ. The particular data processing that I considered was erasure: on input of any binary string, the output is the empty string. I chose erasure because it rather obviously does not make data more meaningful. However, part of the definition of ASC is an assignment of probabilities to all binary strings. The ASC of a binary string is infinite if and only if its probability is zero. If the empty string is assigned probability zero, and all other binary strings are assigned probabilities greater than zero, then the erasure of a nonempty binary string results in an infinite increase in ASC. In simplified notation, the growth in ASC is
ASC(λ) − ASC(x) = ∞ for all nonempty binary strings x. Thus Marks et al. are telling us that erasure of data can produce an infinite increase in meaning.
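The argument is easy to make concrete. The sketch below is mine, not anything from Marks et al.: it computes a toy version of ASC in which the uncomputable Kolmogorov-complexity term is replaced by a hypothetical stand-in, K_proxy. The infinity comes entirely from assigning probability zero to the empty string, so the choice of stand-in does not matter.

```python
import math

# Toy illustration of the erasure argument. Take ASC(x) = -log2 P(x) - K(x),
# with the uncomputable K replaced by a crude, hypothetical stand-in
# (string length plus a constant). The point survives any choice of stand-in.

def K_proxy(x: str) -> float:
    return len(x) + 1  # hypothetical stand-in for Kolmogorov complexity

def asc(x: str, P) -> float:
    p = P(x)
    if p == 0:
        return math.inf  # probability zero makes ASC infinite
    return -math.log2(p) - K_proxy(x)

# Assign probability zero to the empty string, positive to all others.
def P(x: str) -> float:
    return 0.0 if x == "" else 2.0 ** (-2 * len(x) - 1)

x = "110101"
print(asc(x, P))    # finite ASC for the nonempty input string
print(asc("", P))   # infinite ASC for the erased (empty) output
```

Erasing x replaces a string of finite ASC with a string of infinite ASC, so the "growth" in ASC is infinite, exactly as in the simplified notation above.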
The online intelligent-design journal, BIO-Complexity (Robert J. Marks II, editor-in-chief; Douglas Axe, managing editor), has revised at least one of its published articles without giving any indication of change. “A Unified Model of Complex Specified Information,” by George D. Montañez, states that it was published on December 14, 2018, and makes no note of having been revised since. However, the article presently has two more entries in the reference list than it did on December 17, 2018, when I downloaded it. The announcements page of the journal says nothing about the change.
BIO-Complexity claims to be an archival publication. Thus the content should not change at all once it is released. The editors have given us reason to wonder how much of the journal has silently morphed over the years. They should have required the author to submit an erratum or an addendum, no matter how benign the changes he wanted to make to the article.
I suspect, but cannot be sure, that Montañez changed the article merely to give credit to A. Milosavljević for a theorem, after learning of it from my post “Evo-Info 4: Non-Conservation of Algorithmic Specified Complexity.” If that is the case, then Montañez should have submitted an addendum explaining that he had learned of the theorem from me after his article was published. Changes to supposedly archival material are wrong even when announced, and are doubly wrong when unannounced.
It now behooves the editors of BIO-Complexity to make an announcement detailing the changes to Montañez’s article, and indicating whether any other articles have been modified since publication. If they have any sense at all, then they will announce also that they will never again change material that they represent as archival.
The greatest story ever told by activists in the intelligent design (ID) socio-political movement was that William Dembski had proved the Law of Conservation of Information, where the information was of a kind called specified complexity. The fact of the matter is that Dembski did not supply a proof, but instead sketched an ostensible proof, in No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002). He did not go on to publish the proof elsewhere, and the reason is obvious in hindsight: he never had a proof. In “Specification: The Pattern that Signifies Intelligence” (2005), Dembski instead radically altered his definition of specified complexity, and said nothing about conservation. In “Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information” (2010; preprint 2008), Dembski and Marks attached the term Law of Conservation of Information to claims about a newly defined quantity, active information, and gave no indication that Dembski had used the term previously. In Introduction to Evolutionary Informatics, Marks, Dembski, and Ewert address specified complexity only in an isolated chapter, “Measuring Meaning: Algorithmic Specified Complexity,” and do not claim that it is conserved. From the vantage of 2018, it is plain to see that Dembski erred in his claims about conservation of specified complexity, and later neglected to explain that he had abandoned them.
Designer was riding Her submarine through the depths of the ocean one day, taking stock of Her work, and decided, “I’ve learned just about everything I’m ever going to learn from these prototypes. It’s high time to take the next big step toward the ultimate goal, a species of animal in which to ripen souls for harvest.” (Of course, souls that turn out goatlike go to Hell, to suffer eternal torment at the hands of Satan, and souls that turn out sheeplike go to Heaven, to kowtow forever at the feet of God. But Designer had to come up with something considerably more sophisticated than sheep and goats, to satisfy God’s requirement that the Fate of Souls be contingent instead of determined.)
Now, if Designer had done a complete redesign, when advancing from aquatic to terrestrial organisms, Hell might well have frozen over before there were any goatlike souls to fuel the flames. So Designer said, “I know that the optics are different in air than in water, but fish eyes are gonna have to do.”

Lacrimal system

After observing that Her transitional prototype frequently took dips in the marsh to wash its eyes, She invented an organ to wet the eyes with saltwater. Compared to the eyes themselves, the lacrimal glands were a cinch to get right. As for eyelids, Designer had already tested them on some sharks. She did not anticipate that drainage would be a problem, but found that mammals with drops of water running down their faces looked very sad. In a flash of brilliance, Designer realized that eyewash could be reused to moisten the nostrils. And that was when She invented the lacrimal and naso-lacrimal ducts. What initially was supposed to be an aesthetic feature turned out to serve a useful function. God was highly impressed, and gave Designer, whom He called Asherah, a generous bonus at Christmas.
“Yahweh [front, flaunting large penis] and His Asherah [rear, working at computer]”
Notice. Masterpiece Cookies sells baked goods, not the services of specific artists. There is no guarantee that any particular artist will be inspired to produce a masterpiece that meets your needs and desires. Masterpiece Cookies sometimes enters into contracts with other businesses to fulfill special orders.
Masterpiece Cookies does not mention that it takes no profit on orders that its owner, Jack Philips, finds morally objectionable. In other words, Jack walks the extra mile, and stores up riches in heaven. He does not regard marginalization of sinners by society as an effective means of winning them over to Jesus.
Joe Felsenstein, who posts and comments in The Skeptical Zone, presented the 37th Fisher Memorial Lecture on January 4, 2018. The video recording of his lecture is now available. I’d say that the cover frame, at the very least, was well worth the wait.
Rooting out confusion is much harder than sowing it
Excuse me for attaching to this post a brief rejoinder to a pathetic response to the lecture. Andrew Jones’s “The Law of Zero Magic” appeared in the flagship publication of the intelligent design (ID) movement, Evolution News & Science Today. The title is hugely ironic, inasmuch as the movement conceives of intelligent design as violation of a law of nature, and struggles to devise the law that is violated.
What is The Skeptical Zone, William Basener and John Sanford? Why should you care?
The Skeptical Zone is where a couple of distinguished biologists, Joe Felsenstein and Michael Lynch, have dignified a recently published article of yours with a response. That is all you need to know. If they had responded on the back of a cereal box instead, providing you with a form to clip, then it would have behooved you to clip the form, fill it out, and send it, along with a self-addressed, stamped envelope, to their post office box in Battle Creek, Michigan.
Of course, I am dating myself — and also you. That is just the point. You ought to know that, even as the computer enables studies that were impossible when Ronald Fisher dubbed a not-so-fundamental result of his the Fundamental Theorem of Natural Selection, it enables interaction with domain experts in ways that were impossible in Fisher’s time. We are well into the 21st Century, and no one under the age of 50 will find credible any reason you might offer for declining to engage Joe in this forum. You can ignore all of the riff-raff, myself included, and interact with the scientist who happened, about the time that your paper addressing Fisher’s theorem was published, to address the theorem in the 37th Fisher Memorial Lecture (via video link, I might add).
The prospects for resolving some points, and arriving at a degree of agreement, are much better in a modern exchange of comments than in an old-fashioned exchange of essays. One aspect of The Skeptical Zone makes it particularly appealing in discussion of mathematical models: you can enter stuff like \LaTeX between two dollar signs, and cause readers to see properly typeset mathematics. It’s a miracle!
Congratulations to our resident theoretical biologist of high renown, Joe Felsenstein, on his presentation, yesterday, of the 37th Fisher Memorial Lecture. [ETA: I’ll post a separate announcement of the video, when it is released.] Following are the details provided by the Fisher Memorial Trust (with a link added by me).
Title: Is there a more fundamental theorem of natural selection?
Abstract. R.A. Fisher’s Fundamental Theorem of Natural Selection has intrigued evolutionary biologists, who wondered whether it could be the basis of a general maximum principle for mean fitness of the population. Subsequent work by Warren Ewens, Anthony Edwards, and George Price showed that a reasonable version of the FTNS is true, but only if the quantity being increased by natural selection is not the mean fitness of the population but a more indirectly defined quantity. That leaves us in an unsatisfactory state. In spite of Fisher’s assertion that the theorem “hold[s] the supreme position among the biological sciences”, the Fundamental Theorem is, alas, not-so-fundamental. There is also the problem that the additive genetic variances involved do not change in an easily predictable way. Nevertheless, the FTNS is an early, and imaginative, attempt at formulating macro-scale laws from population-genetic principles. I will not attempt to revive the FTNS, but instead am trying to extend a 1978 model of mine, put forth in what may be my least-cited paper. This attempts to make a “toy” model of an evolving population in which we can bookkeep energy flows through an evolving population, and derive a long-term prediction for change of the energy content of the system. It may be possible to connect these predictions to the rate of increase of the adaptive information (the “specified information”) embodied in the genetic information in the organisms. The models are somewhat absurdly oversimple, but I argue that models like this at least can give us some results, which decades of more handwavy papers on the general connection between evolution, entropy, and information have not.
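In the simplest setting, a one-locus haploid model with constant fitnesses, a version of the Fundamental Theorem is exact and easy to check numerically. The following sketch is mine, not Joe Felsenstein's: it verifies that the per-generation change in mean fitness equals the variance in fitness divided by mean fitness, which is where the complications discussed in the abstract begin once diploidy and multiple loci enter.

```python
# Numerical check of Fisher's theorem in its simplest exact setting:
# one haploid locus, two alleles with fitnesses 1+s and 1. Here the
# per-generation change in mean fitness exactly equals the variance in
# fitness divided by mean fitness.

def step(p, s):
    """One generation of selection. Returns (new frequency of the fitter
    allele, mean fitness of the parental population)."""
    w_bar = p * (1 + s) + (1 - p) * 1.0
    return p * (1 + s) / w_bar, w_bar

p, s = 0.1, 0.05
for _ in range(5):
    p_next, w_bar = step(p, s)
    var = p * (1 - p) * s ** 2                     # variance in fitness
    delta_w = (p_next * (1 + s) + (1 - p_next)) - w_bar
    print(f"p={p:.4f}  dW={delta_w:.6f}  Var/Wbar={var / w_bar:.6f}")
    p = p_next
```

The two printed columns agree in every generation; the algebra behind the agreement is a few lines of cancellation in this haploid case, and it is precisely what fails to stay this clean in general.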
We condemn in the strongest possible terms this egregious display of hatred, bigotry and violence on many sides, on many sides.
— Donald J. Trump
He who passively accepts evil is as much involved in it as he who helps to perpetrate it. He who accepts evil without protesting against it is really cooperating with it.
— Martin Luther King, Jr.
I condemn, in the strongest possible terms, the involvement of the President of the United States in the evil of racism. The counter-protesters in Charlottesville lapsed into evil, to be sure. Meeting violence with violence, they handed their adversaries a huge victory. But their error does not make them the moral equivalent of white nationalists, neo-Nazis, and Klansmen. Seizing on their error to construct such an equivalence, as Donald Trump has done, is positively obscene. “Grab them by the pussy” pales in comparison.
Marks, Dembski, and Ewert open Chapter 3 by stating the central fallacy of evolutionary informatics: “Evolution is often modeled by as [sic] a search process.” The long and the short of it is that they do not understand the models, and consequently mistake what a modeler does for what an engineer might do when searching for a solution to a given problem. What I hope to convey in this post, primarily by means of graphics, is that fine-tuning a model of evolution, and thereby obtaining an evolutionary process in which a maximally fit individual emerges rapidly, is nothing like informing evolution to search for the best solution to a problem. We consider, specifically, a simulation model presented by Christian apologist David Glass in a paper challenging evolutionary gradualism à la Dawkins. The behavior on exhibit below is qualitatively similar to that of various biological models of evolution.
Animation 1. Parental populations in the first 2000 generations of a run of the Glass model, with parameters (mutation rate .005, population size 500) tuned to speed the first occurrence of maximum fitness (1857 generations, on average), are shown in orange. Offspring are generated in pairs by recombination and mutation of heritable traits of randomly mated parents. The fitness of an individual in the parental population is, loosely, the number of pairs of offspring it is expected to leave. In each generation, the parental population is replaced by surviving offspring. Which of the offspring die is arbitrary. When the model is modified to begin with a maximally fit population, the long-term regime of the resulting process (blue) is the same as for the original process. Rather than seek out maximum fitness, the two evolutionary processes settle into statistical equilibrium.
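A stripped-down simulation in the same spirit shows the qualitative behavior described in the caption. This is my sketch, not the Glass model itself: there is no recombination, and the parameters are shrunk from those in the caption so the run finishes quickly. Started at maximum fitness, the population relaxes to a mutation-selection equilibrium rather than staying at (or seeking) the maximum.

```python
import random

# Selection plus mutation on length-L bit strings, with fitness equal to
# the number of 1s. Illustrative sketch only; parameters are reduced
# relative to the animation's for speed.

random.seed(0)
L, N, MU, GENS = 50, 100, 0.005, 500

def mutate(genome):
    # flip each bit independently with probability MU
    return [b ^ (random.random() < MU) for b in genome]

pop = [[1] * L for _ in range(N)]        # maximally fit starting population
for gen in range(GENS):
    # fitness-proportional choice of parents, then mutation of offspring
    weights = [sum(g) + 1e-9 for g in pop]
    parents = random.choices(pop, weights=weights, k=N)
    pop = [mutate(g) for g in parents]

mean_fit = sum(sum(g) for g in pop) / N
print(f"mean fitness after {GENS} generations: {mean_fit:.1f} of a possible {L}")
```

Mean fitness settles well below the maximum of L: the process reaches a statistical equilibrium where selection's upward pressure balances mutation's downward pressure, which is the point of the blue run in the animation.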
“The probability of life spontaneously self-assembling anywhere in this universe is mind-staggeringly unlikely; essentially zero. If you are so unquestioningly naïve as to believe we just got incredibly lucky, then bless your soul.”
Actually, “they” who posted at Evolution News and Views is someone we all love dearly, and see occasionally in the Zone — that master of arguments from improbability, Kirk Durston.
… the authors establish that their mathematical analysis of search applies to models of evolution.
I have all sorts of fancy stuff to say about the new book by Marks, Dembski, and Ewert. But I wonder whether I should say anything fancy at all. There is a ginormous flaw in evolutionary informatics, quite easy to see when it’s pointed out to you. The authors develop mathematical analysis of apples, and then apply it to oranges. You need not know what apples and oranges are to see that the authors have got some explaining to do. When applying the analysis to an orange, they must identify their assumptions about apples, and show that the assumptions hold also for the orange. Otherwise the results are meaningless.
The authors have proved that there is “conservation of information” in search for a solution to a problem. I have simplified, generalized, and trivialized their results. I have also explained that their measure of “information” is actually a measure of performance. But I see now that the technical points really do not matter. What matters is that the authors have never identified, let alone justified, the assumptions of the math in their studies of evolutionary models. They have measured “information” in models, and made a big deal of it because “information” is conserved in search for a solution to a problem. What does search for a solution to a problem have to do with modeling of evolution? Search me. In the absence of a demonstration that their “conservation of information” math applies to a model of evolution, their measurement of “information” means nothing. It especially does not mean that the evolutionary process in the model is intelligently designed by the modeler.
Denyse O’Leary, an advocacy journalist employed by one of the principals of the Center for Evolutionary Informatics, reports that I have essentially retracted the first of my papers on the “no free lunch” theorems for search (1996). What I actually have done in my online copy of the paper, marked “emended and amplified,” is to correct an expository error that Dembski and Marks elevated to “English’s Principle of Conservation of Information” in the first of their publications, “Conservation of Information in Search: Measuring the Cost of Success.” Marks, Dembski, and Ewert have responded, in their new book, by deleting me from the history of “no free lunch.” And the consequence is rather amusing. For now, when explaining conservation of information in terms of no free lunch, they refer over and over to performance. It doesn’t take a computer scientist, or even a rocket scientist, to see that they are describing conservation of performance, and calling it conservation of information.
The mathematical results of my paper are correct, though poorly argued. In fact, the theorem I provide is more general than the main theorem of Wolpert and Macready, which was published the following year. If you’re going to refer to one of the two theorems as the No Free Lunch Theorem, then it really should be mine. Where I go awry is in the exposition of my results. I mistake a lemma as indicating that conservation of performance in search is due ultimately to conservation of information in search.
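The core of "no free lunch" can be exhibited exhaustively on a tiny example. The sketch below is mine, not a statement from either paper: averaging over all 16 functions from a 4-point domain to {0,1}, any two non-repeating search orders have identical average performance at every evaluation budget, which is conservation of performance in miniature.

```python
from itertools import product

# Finite check of the "no free lunch" idea: averaged over ALL functions
# from a finite domain to a finite codomain, any two non-repeating search
# orders have identical performance profiles.

domain = range(4)
orders = [(0, 1, 2, 3), (3, 1, 0, 2)]   # two arbitrary search orders

def profile(order):
    """For each budget k, the average (over all 2^4 functions f from the
    domain to {0, 1}) of the best value seen in the first k evaluations."""
    funcs = list(product((0, 1), repeat=len(domain)))
    out = []
    for k in range(1, len(domain) + 1):
        total = sum(max(f[x] for x in order[:k]) for f in funcs)
        out.append(total / len(funcs))
    return out

print(profile(orders[0]))
print(profile(orders[1]))
assert profile(orders[0]) == profile(orders[1])
```

Both orders print the profile [0.5, 0.75, 0.875, 0.9375]: after k distinct evaluations, the chance of having seen a 1 is 1 − 2⁻ᵏ regardless of where the searcher looks. Nothing about this says anything about "information", which is the complaint above.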