UPDATED: American Heretics on YouTube

ADDED link to alternative YouTube posting without regional restrictions.

The 2019 documentary American Heretics: The Politics of the Gospel focuses on unorthodox Christians in Tulsa, Oklahoma, where TSZ contributor Jonathan Bartlett lives, and in Oklahoma City, where I live. The most prominent of the subjects are two ministers of the United Church of Christ: Robin Meyers (author of Why the Christian Right Is Wrong and Saving Jesus from the Church) and Carlton Pearson (a black protégé of Oral Roberts who went on to great success as a televangelist). Both accepted, earlier in life, the orthodox doctrine that the unsaved will suffer eternal damnation, but ultimately repudiated it.

The documentary has gotten favorable reviews in major newspapers, including The New York Times, but I have to say that it is somewhat slow and diffuse. The filmmakers clearly did not capture as much good footage as they needed for a feature-length release. However, the good footage is quite good, and many of you will be interested in what it reveals of religion and politics in the U.S. today. Note that YouTube provides options for increased playback speed and for captioning.

ETA: Thanks to Dazz for locating the YouTube posting above. Note that the documentary is followed by discussion that I have not reviewed. (The posting I originally linked to is not accessible from all regions of the world.)

Recap redux

David Nemati and Eric Holloway, “Expected Algorithmic Specified Complexity.” Bio-Complexity 2019 (2):1-10. doi:10.5048/BIO-C.2019.2.

Eric Holloway has returned to The Skeptical Zone, following a long absence. He expects to get responses to his potshot at phylogenetic inference, though he has never answered three questions of mine about his own work on algorithmic specified complexity. Here I abbreviate and clarify the recap I previously posted, and introduce remarks on the questions.

If there is a fundamental flaw in the second half, as you claim, then I’ll retract it if it is unfixable.
— Eric Holloway, January 2, 2020

Continue reading

Cyclic work-lockdown strategies

A group of scientists and economists has devised a simple, tunable strategy for achieving exponential decrease in the number of new cases of Covid-19 while partially reopening the economy — or so it seems to me. The simplest form of the strategy is to alternate between k consecutive days of work and 14 - k consecutive days of lockdown. Although I am reluctant to add to the cacophony of inexpert opinions on how to deal with the pandemic, I will say that the strategy obviously works in an epidemiologic sense if the number of workdays per two-week cycle is sufficiently small. Furthermore, it is obvious that the number of workdays can be adjusted in response to the number of new cases. However, it is not obvious that k can be set sufficiently high for the strategy to work in an economic sense. Modeling reported in the medRxiv preprint “Cyclic Exit Strategies to Suppress COVID-19 and Allow Economic Activity” indicates that k = 4 is likely to be sufficiently small. In other words, it seems that people might work half-time (4 × 10 = 40 hours per two-week period) while driving the number of new cases toward zero. I am not qualified to judge epidemiological models, but will note that the results make sense if it is indeed the case that there is a “three-day delay on average between the time a person is infected and the time he or she can infect others.”
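
To see why the epidemiologic claim is obvious for small k, it helps to do the bookkeeping explicitly. Here is a toy calculation, not the preprint's model: suppose that the number of new cases grows by a factor g_work on each workday and shrinks by a factor g_lock on each lockdown day (the particular factors below are my illustrative assumptions, and the three-day latency is ignored). The strategy drives new cases toward zero whenever the net factor per 14-day cycle is below 1.

    # Toy bookkeeping for the cyclic strategy, not the preprint's model.
    # Assumption (mine): new cases grow by a factor g_work on each
    # workday and shrink by a factor g_lock on each lockdown day.

    def cycle_growth(k, g_work=1.3, g_lock=0.85):
        """Net growth factor in new cases over one 14-day cycle."""
        return g_work ** k * g_lock ** (14 - k)

    for k in range(8):
        factor = cycle_growth(k)
        trend = "decline" if factor < 1.0 else "growth"
        print(f"k = {k} workdays: x{factor:.2f} per cycle ({trend})")

With these made-up factors, k = 4 yields a net factor of roughly 0.56 per cycle (exponential decrease), while k = 7 yields roughly 2.0 (exponential growth). The point is only to exhibit the tunability; the preprint's case for k = 4 rests on an epidemiological model, not on arithmetic this crude.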

To be perfectly clear, I have not become a true believer in a strategy addressed in a preprint. I am saying that we should reject the notion that the pandemic will end only with herd immunity. It is not irrational to say that there may be, in the absence of an effective vaccination program, practicable methods of preventing most people from being infected, and that we should keep looking for them.

A nontechnical recap

David Nemati and Eric Holloway, “Expected Algorithmic Specified Complexity.” Bio-Complexity 2019 (2):1-10. doi:10.5048/BIO-C.2019.2. Editor: William Basener. Editor-in-Chief: Robert J. Marks II.

I recommend that you read “Recap Redux” instead of this post.

In Section 4 of their article, Nemati and Holloway claim to have identified an error in a post of mine. They do not cite the post, but instead name me, and link to the homepage of The Skeptical Zone. Thus there can be no question as to whether the authors regard technical material that I post here as worthy of a response in Bio-Complexity. (A year earlier, George Montañez modified a Bio-Complexity article, adding information that I supplied in precisely the post that Nemati and Holloway address.) Interacting with me at TSZ a month ago, Eric Holloway acknowledged an error in an equation that I had told him was wrong, expressed interest in seeing the next part of my review, and said, “If there is a fundamental flaw in the second half, as you claim, then I’ll retract it if it is unfixable.” I subsequently put a great deal of work into “The Old Switcheroo,” trying to anticipate all of the ways in which Holloway might wiggle out of acknowledging his errors. Evidently I left him no avenue of escape, given that he now refuses to engage at all, and insists that I submit my criticisms to Bio-Complexity.
Continue reading

The old switcheroo

David Nemati and Eric Holloway, “Expected Algorithmic Specified Complexity.” Bio-Complexity 2019 (2):1-10. doi:10.5048/BIO-C.2019.2. Editor: William Basener. Editor-in-Chief: Robert J. Marks II.

Eric Holloway has littered cyberspace with claims, arrived at by faulty reasoning, that the “laws of information conservation (nongrowth)” in data processing hold for algorithmic specified complexity as for algorithmic mutual information. It is essential to understand that there are infinitely many measures of algorithmic specified complexity. Nemati and Holloway would have us believe that each of them is a quantification of the meaningful information in binary strings, i.e., finite sequences of 0s and 1s. If Holloway’s claims were correct, then there would be a limit on the increase in algorithmic specified complexity resulting from execution of a computer program (itself a string). Whichever one of the measures were applied, the difference in algorithmic specified complexity of the string output by the process and the string input to the process would be at most the program length plus a constant. It would follow, more generally, that an approximate upper bound on the difference in algorithmic specified complexity of strings y and x is the length of the shortest program that outputs y on input of x. Of course, the measure must be the same for both strings. Otherwise it would be absurd to speak of (non-)conservation of a quantity of algorithmic specified complexity.
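
To make the purported law concrete in notation: write A(x) for the algorithmic specified complexity of string x under some fixed measure, U(p, x) for the output of program p run on input x, and \ell(p) for the length of p. The conservation bound that Holloway asserts (sketched here in my notation, not quoted from him) would read

    \[A(U(p, x)) - A(x) \leq \ell(p) + c\]

for all halting programs p and all strings x, where the constant c does not depend on p or x. Taking p to be a shortest program that outputs y on input of x then yields the general form stated above: the difference A(y) - A(x) would be bounded, approximately, by the conditional Kolmogorov complexity of y given x.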
Continue reading

Of “models” and “algorithms”

I was short with Joe Felsenstein in the comments section of “Stark Incompetence,” a post in which I address, well, um, the stark incompetence on display in a recent publication of Eric Holloway. I have apologized to Joe, and promised to make amends with a brief post on the topic that he wants to address. Now, the topic is a putative model that Eric introduced in “Mutual Algorithmic Information, Information Non-growth, and Allele Frequency” (or perhaps an improved version of the model). Here is a remark that I addressed to Joe:

Tom English: As you know, if a putative model is logically inconsistent, then it is not a model of anything. I claim that EricMH’s putative model is logically inconsistent. You had better prove that it is consistent, or turn it into something that you can prove is consistent, before going on to discuss its biological relevance.

I will not have to go far into Eric’s post to identify inconsistencies. After explaining the inconsistencies, which I doubt can be eliminated, I will remark on why the “model” is not worth salvaging. The gist is that Eric’s attempted analysis puts a halting, output-generating simulator of a non-halting, non-output-generating evolutionary process in place of the process itself. An analysis of the simulator would not, in any case, be an analysis of the simuland.

Continue reading

Stark incompetence

David Nemati and Eric Holloway, “Expected Algorithmic Specified Complexity.” Bio-Complexity 2019 (2):1-10. doi:10.5048/BIO-C.2019.2. Editor: William Basener. Editor-in-Chief: Robert J. Marks II.

Let us start by examining a part of the article that everyone can see is horrendous. When I supply proofs, in a future post, that other parts of the article are wrong, few of you will follow the details. But even the mathematically uninclined should understand, after reading what follows, that

  1. the authors of a grotesque mangling of lower-level mathematics are unlikely to get higher-level mathematics correct, and
  2. the reviewers and editors who approved the mangling are unlikely to have given the rest of the article adequate scrutiny.

Continue reading

Non-conservation of algorithmic specified complexity…

… proved without reference to infinity and the empty string.

Some readers have objected to my simple proof that a computable transformation f(x) of a binary string x can result in an infinite increase of algorithmic specified complexity (ASC). Here I give a less simple proof that there is no upper bound on the difference in ASC of f(x) and x. To put it more precisely, I show that the difference can be any positive real number.
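
For reference, here is the definition in my simplified notation. Fix a context (a binary string) c and an assignment P of probabilities to binary strings. The ASC of a binary string x is then

    \[A(x) = -\log_2 P(x) - K(x \mid c),\]

where K(x \mid c) denotes the conditional Kolmogorov complexity of x given c. Note that the assignment P is part of the definition: different choices of P yield different measures of ASC.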

Updated 12/8/2019: The assumptions of my theorem were unnecessarily restrictive. I have relaxed the assumptions, without changing the proof.
Continue reading

Evo-Info 4 addendum

Introduction to Evolutionary Informatics, by Robert J. Marks II, the “Charles Darwin of Intelligent Design”; William A. Dembski, the “Isaac Newton of Information Theory”; and Winston Ewert, the “Charles Ingram of Active Information.” World Scientific, 332 pages.
Classification: Engineering mathematics. Engineering analysis. (TA347)
Subjects: Evolutionary computation. Information technology–Mathematics.

In “Evo-Info 4: Non-Conservation of Algorithmic Specified Complexity,” I neglected to explain that algorithmic mutual information is essentially a special case of algorithmic specified complexity. This leads immediately to two important points:

  1. Marks et al. claim that algorithmic specified complexity is a measure of meaning. If this is so, then algorithmic mutual information is also a measure of meaning. Yet no one working in the field of information theory has ever regarded it as such. Thus Marks et al. bear the burden of explaining how they have gotten the interpretation of algorithmic mutual information right, and how everyone else has gotten it wrong.
  2. It should not come as a shock that the “law of information conservation (nongrowth)” for algorithmic mutual information, a special case of algorithmic specified complexity, does not hold for algorithmic specified complexity in general.
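
Loosely, and suppressing small additive terms, the connection is this: take the probability of each binary string x to be P(x) = 2^{-K(x)} in the definition of ASC (a legitimate assignment, since these values sum to at most 1). The ASC of x relative to context c is then

    \[A(x) = -\log_2 2^{-K(x)} - K(x \mid c) = K(x) - K(x \mid c),\]

which is essentially the algorithmic mutual information of x and c.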

My formal demonstration of unbounded growth of algorithmic specified complexity (ASC) in data processing also serves to counter the notion that ASC is a measure of meaning. I did not explain this in Evo-Info 4, and will do so here, suppressing as much mathematical detail as I can. You need to know that a binary string is a finite sequence of 0s and 1s, and that the empty (length-zero) string is denoted \lambda. The particular data processing that I considered was erasure: on input of any binary string x, the output \mathtt{erased}(x) is the empty string. I chose erasure because it rather obviously does not make data more meaningful. However, part of the definition of ASC is an assignment of probabilities to all binary strings. The ASC of a binary string is infinite if and only if its probability is zero. If the empty string is assigned probability zero, and all other binary strings are assigned probabilities greater than zero, then the erasure of a nonempty binary string results in an infinite increase in ASC. In simplified notation, the growth in ASC is

    \[A(\mathtt{erased}(x)) - A(x) = \underbrace{A(\lambda)}_\text{infinite} - \underbrace{A(x)}_\text{finite} = \infty\]

for all nonempty binary strings x. Thus Marks et al. are telling us that erasure of data can produce an infinite increase in meaning.
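
No program can compute ASC itself, because the Kolmogorov-complexity term is uncomputable. But the term that blows up under erasure is the probability term, and that bookkeeping is easy to mechanize. Here is a toy sketch in Python; the particular probability assignment is my illustrative choice, not one from Marks et al.:

    import math

    def neg_log2_prob(x, P):
        """The -log2 P(x) term of ASC: infinite exactly when P(x) == 0.
        The K(x|c) term is uncomputable and omitted; it is finite for
        every string, so it cannot cancel an infinity."""
        p = P(x)
        return math.inf if p == 0 else -math.log2(p)

    # Illustrative assignment: the empty string gets probability zero,
    # and each string of length n > 0 gets probability 2**(-2*n).
    # Summed over the 2**n strings of each length n >= 1, the total
    # probability is exactly 1.
    P = lambda x: 0.0 if x == "" else 2.0 ** (-2 * len(x))

    erased = lambda x: ""  # the data processing considered above

    x = "010110"
    print(neg_log2_prob(erased(x), P) - neg_log2_prob(x, P))  # inf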

Continue reading

ID journal silently revises article

The online intelligent-design journal BIO-Complexity (Robert J. Marks II, editor-in-chief; Douglas Axe, managing editor) has revised at least one of its published articles without giving any indication of change. “A Unified Model of Complex Specified Information,” by George D. Montañez, states that it was published on December 14, 2018, and makes no note of having been revised since. However, the article presently has two more entries in the reference list than it did on December 17, 2018, when I downloaded it. The announcements page of the journal says nothing about the change.

BIO-Complexity claims to be an archival publication. Thus the content should not change at all once it is released. The editors have given us reason to wonder how much of the journal has silently morphed over the years. They should have required the author to submit an erratum or an addendum, no matter how benign the changes he wanted to make to the article.

I suspect, but cannot be sure, that Montañez changed the article merely to give credit to A. Milosavljević for a theorem, after learning of it from my post “Evo-Info 4: Non-Conservation of Algorithmic Specified Complexity.” If that is the case, then Montañez should have submitted an addendum explaining that he had learned of the theorem from me after his article was published. Changes to supposedly archival material are wrong even when announced, and are doubly wrong when unannounced.

It now behooves the editors of BIO-Complexity to make an announcement detailing the changes to Montañez’s article, and indicating whether any other articles have been modified since publication. If they have any sense at all, then they will announce also that they will never again change material that they represent as archival.

Evo-Info 4: Non-conservation of algorithmic specified complexity

Introduction to Evolutionary Informatics, by Robert J. Marks II, the “Charles Darwin of Intelligent Design”; William A. Dembski, the “Isaac Newton of Information Theory”; and Winston Ewert, the “Charles Ingram of Active Information.” World Scientific, 332 pages.
Classification: Engineering mathematics. Engineering analysis. (TA347)
Subjects: Evolutionary computation. Information technology–Mathematics.

The greatest story ever told by activists in the intelligent design (ID) socio-political movement was that William Dembski had proved the Law of Conservation of Information, where the information was of a kind called specified complexity. The fact of the matter is that Dembski did not supply a proof, but instead sketched an ostensible proof, in No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002). He did not go on to publish the proof elsewhere, and the reason is obvious in hindsight: he never had a proof. In “Specification: The Pattern that Signifies Intelligence” (2005), Dembski instead radically altered his definition of specified complexity, and said nothing about conservation. In “Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information” (2010; preprint 2008), Dembski and Marks attached the term Law of Conservation of Information to claims about a newly defined quantity, active information, and gave no indication that Dembski had used the term previously. In Introduction to Evolutionary Informatics, Marks, Dembski, and Ewert address specified complexity only in an isolated chapter, “Measuring Meaning: Algorithmic Specified Complexity,” and do not claim that it is conserved. From the vantage of 2018, it is plain to see that Dembski erred in his claims about conservation of specified complexity, and later neglected to explain that he had abandoned them.

Continue reading

The invention of tear ducts


[Image: the research submarine Asherah]

Designer was riding Her submarine through the depths of the ocean one day, taking stock of Her work, and decided, “I’ve learned just about everything I’m ever going to learn from these prototypes. It’s high time to take the next big step toward the ultimate goal, a species of animal in which to ripen souls for harvest.” (Of course, souls that turn out goatlike go to Hell, to suffer eternal torment at the hands of Satan, and souls that turn out sheeplike go to Heaven, to kowtow forever at the feet of God. But Designer had to come up with something considerably more sophisticated than sheep and goats, to satisfy God’s requirement that the Fate of Souls be contingent instead of determined.)

Now, if Designer had done a complete redesign, when advancing from aquatic to terrestrial organisms, Hell might well have frozen over before there were any goatlike souls to fuel the flames. So Designer said, “I know that the optics are different in air than in water, but fish eyes are gonna have to do.”

[Image: the lacrimal system (Gray’s Anatomy, figure 896)]
After observing that Her transitional prototype frequently took dips in the marsh to wash its eyes, She invented an organ to wet the eyes with saltwater. Compared to the eyes themselves, the lacrimal glands were a cinch to get right. As for eyelids, Designer had already tested them on some sharks. She did not anticipate that drainage would be a problem, but found that mammals with drops of water running down their faces looked very sad. In a flash of brilliance, Designer realized that eyewash could be reused to moisten the nostrils. And that was when She invented the lacrimal and naso-lacrimal ducts. What initially was supposed to be an aesthetic feature turned out to serve a useful function. God was highly impressed, and gave Designer, whom He called Asherah, a generous bonus at Christmas.

[Image: drawing from Kuntillet Ajrud, captioned “Yahweh [front, flaunting large penis] and His Asherah [rear, working at computer]”]

Hate the sin, love the sinner

Notice. Masterpiece Cookies sells baked goods, not the services of specific artists. There is no guarantee that any particular artist will be inspired to produce a masterpiece that meets your needs and desires. Masterpiece Cookies sometimes enters into contracts with other businesses to fulfill special orders.

Masterpiece Cookies does not mention that it takes no profit on orders that its owner, Jack Phillips, finds morally objectionable. In other words, Jack walks the extra mile, and stores up riches in heaven. He does not regard marginalization of sinners by society as an effective means of winning them over to Jesus.

Felsenstein presents the 37th Fisher Memorial Lecture

Joe Felsenstein, who posts and comments in The Skeptical Zone, presented the 37th Fisher Memorial Lecture on January 4, 2018. The video recording of his lecture is now available. I’d say that the cover frame, at the very least, was well worth the wait.

Rooting out confusion is much harder than sowing it

Excuse me for attaching to this post a brief rejoinder to a pathetic response to the lecture. Andrew Jones’s “The Law of Zero Magic” appeared in the flagship publication of the intelligent design (ID) movement, Evolution News & Science Today. The title is hugely ironic, inasmuch as the movement conceives of intelligent design as violation of a law of nature, and struggles to devise the law that is violated. Continue reading

To Basener and Sanford: This is the 21st Century

What is The Skeptical Zone, William Basener and John Sanford? Why should you care?

The Skeptical Zone is where a couple of distinguished biologists, Joe Felsenstein and Michael Lynch, have dignified a recently published article of yours with a response. That is all you need to know. If they had responded on the back of a cereal box instead, providing you with a form to clip, then it would have behooved you to clip the form, fill it out, and send it, along with a self-addressed, stamped envelope, to their post office box in Battle Creek, Michigan.

Of course, I am dating myself — and also you. That is just the point. You ought to know that, even as the computer enables studies that were impossible when Ronald Fisher dubbed a not-so-fundamental result of his the Fundamental Theorem of Natural Selection, it enables interaction with domain experts in ways that were impossible in Fisher’s time. We are well into the 21st Century, and no one under the age of 50 will find credible any reason you might offer for declining to engage Joe in this forum. You can ignore all of the riff-raff, myself included, and interact with the scientist who happened, about the time that your paper addressing Fisher’s theorem was published, to address the theorem in the 37th Fisher Memorial Lecture (via video link, I might add).

The prospects for resolving some points, and arriving at a degree of agreement, are much better in a modern exchange of comments than in an old-fashioned exchange of essays. One aspect of The Skeptical Zone makes it particularly appealing in discussion of mathematical models: you can enter markup like \LaTeX between two dollar signs, and cause readers to see the nicely rendered result. It’s a miracle!
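
For instance (my illustrative formula, not a quotation from anyone's comment), a commenter who types

    $\Delta \bar{w} = V_A / \bar{w}$

causes readers to see the discrete-generation form of Fisher's fundamental theorem typeset as mathematics, rather than mangled as plain text.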

Fisher’s not-so-fundamental theorem

Congratulations to our resident theoretical biologist of high renown, Joe Felsenstein, on his presentation, yesterday, of the 37th Fisher Memorial Lecture. [ETA: I’ll post a separate announcement of the video, when it is released.] Following are the details provided by the Fisher Memorial Trust (with a link added by me).

Title: Is there a more fundamental theorem of natural selection?

Abstract. R.A. Fisher’s Fundamental Theorem of Natural Selection has intrigued evolutionary biologists, who wondered whether it could be the basis of a general maximum principle for mean fitness of the population. Subsequent work by Warren Ewens, Anthony Edwards, and George Price showed that a reasonable version of the FTNS is true, but only if the quantity being increased by natural selection is not the mean fitness of the population but a more indirectly defined quantity. That leaves us in an unsatisfactory state. In spite of Fisher’s assertion that the theorem “hold[s] the supreme position among the biological sciences”, the Fundamental Theorem is, alas, not-so-fundamental. There is also the problem that the additive genetic variances involved do not change in an easily predictable way. Nevertheless, the FTNS is an early, and imaginative, attempt at formulating macro-scale laws from population-genetic principles. I will not attempt to revive the FTNS, but instead am trying to extend a 1978 model of mine, put forth in what may be my least-cited paper. This attempts to make a “toy” model of an evolving population in which we can bookkeep energy flows through an evolving population, and derive a long-term prediction for change of the energy content of the system. It may be possible to connect these predictions to the rate of increase of the adaptive information (the “specified information”) embodied in the genetic information in the organisms. The models are somewhat absurdly oversimple, but I argue that models like this at least can give us some results, which decades of more handwavy papers on the general connection between evolution, entropy, and information have not.

There are only two sides, and you are on one or the other of them

We condemn in the strongest possible terms this egregious display of hatred, bigotry and violence on many sides, on many sides.

— Donald J. Trump

He who passively accepts evil is as much involved in it as he who helps to perpetrate it. He who accepts evil without protesting against it is really cooperating with it.

— Martin Luther King, Jr.

I condemn, in the strongest possible terms, the involvement of the President of the United States in the evil of racism. The counter-protesters in Charlottesville lapsed into evil, to be sure. Meeting violence with violence, they handed their adversaries a huge victory. But their error does not make them the moral equivalent of white nationalists, neo-Nazis, and Klansmen. Seizing on their error to construct such an equivalence, as Donald Trump has done, is positively obscene. “Grab them by the pussy” pales in comparison.

Evo-Info 3: Evolution is not search

Introduction to Evolutionary Informatics, by Robert J. Marks II, the “Charles Darwin of Intelligent Design”; William A. Dembski, the “Isaac Newton of Information Theory”; and Winston Ewert, the “Charles Ingram of Active Information.” World Scientific, 332 pages.
Classification: Engineering mathematics. Engineering analysis. (TA347)
Subjects: Evolutionary computation. Information technology–Mathematics.

Marks, Dembski, and Ewert open Chapter 3 by stating the central fallacy of evolutionary informatics: “Evolution is often modeled by as [sic] a search process.” The long and the short of it is that they do not understand the models, and consequently mistake what a modeler does for what an engineer might do when searching for a solution to a given problem. What I hope to convey in this post, primarily by means of graphics, is that fine-tuning a model of evolution, and thereby obtaining an evolutionary process in which a maximally fit individual emerges rapidly, is nothing like informing evolution to search for the best solution to a problem. We consider, specifically, a simulation model presented by Christian apologist David Glass in a paper challenging evolutionary gradualism à la Dawkins. The behavior on exhibit below is qualitatively similar to that of various biological models of evolution.

Animation 1. Parental populations in the first 2000 generations of a run of the Glass model, with parameters (mutation rate .005, population size 500) tuned to speed the first occurrence of maximum fitness (1857 generations, on average), are shown in orange. Offspring are generated in pairs by recombination and mutation of heritable traits of randomly mated parents. The fitness of an individual in the parental population is, loosely, the number of pairs of offspring it is expected to leave. In each generation, the parental population is replaced by surviving offspring. Which of the offspring die is arbitrary. When the model is modified to begin with a maximally fit population, the long-term regime of the resulting process (blue) is the same as for the original process. Rather than seek out maximum fitness, the two evolutionary processes settle into statistical equilibrium.
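
For readers who want to experiment, here is a minimal sketch of a Glass-style process in Python. It is my own simplification, not Glass's code: the genotype length, the fitness function (fraction of 1-bits, standing in for Glass's), and the survival scheme are illustrative assumptions; only the mutation rate and population size match the run described in the caption.

    import random

    L = 100     # genotype length (illustrative assumption)
    N = 500     # population size, as in the run described above
    MU = 0.005  # per-bit mutation rate, as in the run described above

    def fitness(g):
        # Stand-in fitness: fraction of 1-bits. Loosely, fitness governs
        # the number of offspring pairs an individual is expected to leave.
        return sum(g) / len(g)

    def mutate(g):
        # Flip each bit independently with probability MU.
        return [b ^ (random.random() < MU) for b in g]

    def recombine(a, b):
        # Uniform recombination of the heritable traits of two parents.
        return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

    def next_generation(pop):
        # Offspring are generated in pairs by recombination and mutation
        # of randomly mated parents, with mating weighted by fitness.
        weights = [fitness(g) for g in pop]
        offspring = []
        while len(offspring) < 2 * N:
            a, b = random.choices(pop, weights=weights, k=2)
            offspring.append(mutate(recombine(a, b)))
            offspring.append(mutate(recombine(a, b)))
        # Which of the offspring die is arbitrary: keep a random sample.
        return random.sample(offspring, N)

    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
    for gen in range(1, 2001):  # unoptimized: shows structure, not speed
        pop = next_generation(pop)
        if max(fitness(g) for g in pop) == 1.0:
            print(f"maximum fitness first occurs in generation {gen}")
            break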

Continue reading

Two planets with life are more miraculous than one

The Sensuous Curmudgeon, who presently cannot post to his weblog, comments:

This Discoveroid article is amazing. Could Atheism Survive the Discovery of Extraterrestrial Life?. I wish I could make a new post about it. They say that if life is found elsewhere, that too is a miracle, so then you gotta believe in the intelligent designer. They say:

“The probability of life spontaneously self-assembling anywhere in this universe is mind-staggeringly unlikely; essentially zero. If you are so unquestioningly naïve as to believe we just got incredibly lucky, then bless your soul.”

Actually, “they” who posted at Evolution News and Views is someone we all love dearly, and see occasionally in the Zone — that master of arguments from improbability, Kirk Durston.