Due to popular demand I will take a quick stab at explaining the applicability of mutual algorithmic information and the information non-growth law to an allele frequency scenario.

First, I’ll outline the allele frequency scenario.

The alleles are 1s and 0s, and the gene G is a bitstring of N bits. A gene’s fitness is the number of 1s it contains, so fitness(G) = sum(G). The population consists of a single gene, and evolution proceeds by randomly flipping one bit; if fitness is improved, the mutated gene is kept, otherwise the original is kept. Once fitness(G) = N, the evolutionary algorithm stops and outputs G, which consists of N 1s. The bitstring of N 1s will be denoted Y. We will denote the evolutionary algorithm E; it is prefixed to an input bitstring X of length N that will be turned into the bitstring of N 1s, so executing the pair on a universal Turing machine outputs the bitstring of 1s: U(E,X) = Y.
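As a concrete illustration, the scenario can be sketched in a few lines of Python (a minimal sketch; names like `evolve` are mine, not part of the original argument):

```python
import random

def evolve(x, rng):
    """The algorithm E: repeatedly flip one random bit, keep the flip only
    if it raises fitness(G) = sum(G), and halt when G is all 1s."""
    g = list(x)
    while sum(g) < len(g):
        i = rng.randrange(len(g))
        mutant = g[:]
        mutant[i] ^= 1                # point mutation: flip one bit
        if sum(mutant) > sum(g):      # selection: keep only improvements
            g = mutant
    return g

rng = random.Random(42)
N = 16
X = [rng.randrange(2) for _ in range(N)]  # random input bitstring
Y = evolve(X, rng)
print(Y == [1] * N)   # the output is the all-ones string
```

Single-bit hill climbing on this fitness function always reaches the all-ones string, since every accepted flip raises the count of 1s and rejected flips change nothing.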

Second, I’ll briefly state the required background knowledge on algorithmic mutual information.

Kolmogorov complexity K(X) (also called algorithmic information) is the length of the shortest program that generates bitstring X. The standard form is prefix-free, so no program used to measure Kolmogorov complexity is a prefix of any other such program. The shortest program itself will be denoted X*, so K(X) = |X*|. Conditional Kolmogorov complexity is the length of the shortest program P* that, given input I, will generate X, so K(X|I) = |P*|. Joint Kolmogorov complexity of X and Y is the length of the minimal program XY* necessary to generate the pair {X,Y}, so K(X,Y) = |XY*|. Mutual algorithmic information I(X:Y) is a symmetric measurement (within a constant error) on two bitstrings: I(X:Y) = I(Y:X) = K(Y) – K(Y|X*) = K(X) – K(X|Y*) = K(X) + K(Y) – K(X,Y).

Third, I’ll state the non-growth theorem.

The law of information non-growth states that deterministic processing of random bitstrings (generated by a computable probability distribution) is not expected to increase the mutual algorithmic information between X and Y. Formally: E[I(U(R,X):Y)] <= I(X:Y), where R is a randomly generated bitstring and U(R,X) is the result of executing the concatenated pair of bitstrings {R,X} on a universal Turing machine.

Finally, I’ll apply the theorem to the scenario.

We will say Y is the target, which according to our scenario is a bitstring of length N that consists of 1s. X is a randomly generated bitstring of length N. The typical random bitstring will provide no information regarding Y, so K(Y|X) = K(Y). Consequently, there is zero mutual algorithmic information with Y, since I(X:Y) = K(Y) – K(Y|X) = 0. The non-growth theorem says we cannot expect to increase the mutual algorithmic information through generating another random bitstring R and executing the pair with a universal Turing machine, so E[I(U(R,X):Y)] = 0.

Next, we will bring in the evolutionary algorithm E. As stated at the beginning, when E is prefixed to a bitstring X of length N and executed on a universal Turing machine, the result is N 1s, denoted Y. Consequently, the pair {E, X} requires no further information to generate Y, and K(Y|E,X) = 0. This means the algorithmic mutual information between {E, X} and Y is maximal: I(Y:{E, X}) = K(Y) – K(Y|E,X) = K(Y) – 0 = K(Y).

Thus, since the combination of the evolutionary algorithm E with random input string X contains all the relevant information to generate Y, the information non-growth theorem states the combination of generating another random bitstring R and executing the triplet {R, E, X} can only decrease information regarding Y: E[I(U(R,E,X):Y)] <= I(E,X:Y).

Yes, thanks Eric for getting back to us.

@Joe, I don’t think this is going anywhere and I personally don’t have anything to add to the discussion, so I’ll just remain silent. Might ask the occasional question though.

Eric only seems interested in discussing the math, not how it relates to the natural world (let alone the supernatural) or other philosophical questions like the map/territory distinction that has been discussed ad nauseam here already.

I’m not particularly good at math, biology or philosophy anyway. My halting oracle kicked in and I’m signing off for now. I’ll read everyone’s responses with great interest

Why don’t you consider stopping at all ones to be stopping because a target is reached?

Wouldn’t that be stretching the meaning of halting oracle a bit too much?

Eric agrees that observations show that what biologists call natural selection can give the appearance of design. However, he believes he has shown that no Turing-computable process (i.e. determinism+randomness) can reproduce that process unless intelligence has supplied the target to halt at.

In particular, he says (1) a halting oracle is needed and (2) intelligence can provide a halting oracle. Number 2 is a separate claim that he has defended at PS, or at least it was discussed there — I cannot vouch for the details without some research there.

Translation: “Nothing (no evidence) in evolutionary biology makes sense except in the light of population genetics”.

And when you add the omnipotence of natural selection then even 1+1=3

This is what you are up against, Eric. I hope you get it…

What a bloody joke!

Didn’t I tell you that before, Eric? Even if your math is right, “the seekers of truth” always have a back up plan; it doesn’t apply to biology…

Why would you waste your time trying to convince someone who doesn’t want to be convinced?

Yeah, but seems to me that something else is missing, as Corneel pointed out, there must also be halting in evolution for all that to be relevant. If there’s no halting, no halting oracle is needed.

Is this some sort of argument from fine tuning of the fitness landscape?

Joe Felsenstein,

My last post to Eric was a series of questions which he said he would answer later. So I am not satisfied that Eric is done.

Let me try out the charitable approach and answer that as I think Eric might.

First, as I detailed to Dazz, it’s not that Eric denies what biologists call NS; rather, he denies it can be accomplished without intelligence.

In his OP, the fitness of a bitstring is represented by its KMI with the target, i.e. the bitstring of all 1s. Intelligence puts the target in the E function. Then (as I understand him) Eric models mutation+NS by the U function operating on X to produce R. He then uses Levin’s results to claim that the math means no KMI can be added beyond that in E; hence no fitness can be added.

For the dynamical model you provide, NS involves following a trajectory in the fitness landscape to a fitness peak. The rate of ascent depends on the size of the fitness parameter. I would guess that for that case, Eric’s E function would capture that ability to follow the landscape to a peak. There is no global target to aim for; there is just the local gradient-ascent process.

I think that Eric would claim that gradient ascent process built into the revised version of E is what has to be added by intelligence.

Now one could argue that this gradient ascent is no different from the gradient following captured by gravitation, where fitness becomes negative potential energy. But no ID person I am aware of says gravity is not naturalistic. So why can’t the same apply to gradient following in evolution?

I think that the answer lies in fine tuning, which will be claimed to be needed to make both landscapes traversable. For evolution, it is the fine tuning of biochemistry for life, and in particular life with heritable change and NS. There is nothing new in using fine tuning to challenge biology; Eric just gets to it by a different route.

Hah. I think you are right for the fine tuning — see my most recent post to Joe.

You are also right that halting is an artefact of the model, so in that sense it is similar to GA models. For biological evolution, I think we are supposed to assume a fixed fitness landscape and only NS (e.g. ignore drift). That seems a reasonable first approximation if the goal is to show the KMI challenges.

ETA: the reasoning would be that if it fails in this simplified case, then that is enough to discount NS as a naturalistic process.

BruceS,

Your summary corresponds with how I understand Eric’s argument. But without him, it becomes pretty hard to tell whether we are on the right track.

I see that dazz fielded your question, and I agree with him. No targets exist in nature, and evolution doesn’t halt. We can identify the all-1 genotype as the one with maximal fitness in the model, but the simulator is not the simuland. Unless Eric can point to some meaningful analog of halting in evolution, it is irrelevant.

Thanks Bruce

I’m not sure I understand this. Are you saying that, if NS, as defined in the most simplistic model of evolution, fails to meet the KMI challenges, then we can outright discount NS as a naturalistic process? Or are you simply trying to characterize Eric’s argument? None of the above?

Perhaps he would say local optima, and then we’re entering gpuccio’s territory. Not a good place to be IMO

Indeed, if you think population genetics theory is irrelevant to thinking about evolution, you will join J-Mac in declaring it and my comments on it a “bloody joke”.

But I suspect I am being too hard on creationists and ID advocates when I say that J-Mac’s declarations will then be just what you want to say. That would be making a straw man.

One way is to argue that the very setting up of the fitness surface is where the outside information comes in. That seems to be the position of Dembski, Marks, and Ewert in their “active information” argument. They don’t necessarily say that natural selection works, just that if it works, the fitness differences ought to be attributed to Design Intervention.

I have taken the contrary view — that the smoothness of the fitness surface, as smooth as it may be, is a consequence of the localness of physics and chemistry, so that a gene acting in the auditory neurons will not interact strongly with a gene affecting your big toe. That this leads to smooth(-ish) fitness surfaces without any outside intervention.

I’m just not sure that Eric is making that DME argument, as outstandingly clear as he thinks the matter is.

The omnipotence of natural selection, proclaimed by you whenever you feel pinned down, is a bloody joke… The rest is just schmaltz for the naive and those who want to believe in the miraculous powers of nature and feel good about it…

Your proclamation of fitness (or the survival of the fittest due to natural selection) is another bloody joke… Those who survive, even if only 1 or 2 out of a population of 100,000, must have been the fittest. How do you know that? Because they survived? Circular reasoning made to look like science is not even a bloody joke… It’s not even wrong…

The claim is that evolution is adequate to account for what we see. So, we take some feature Y that evolution is meant to explain, and some starting point X, and then run E(X) to see if it produces Y.

If E(X) != Y, then something more than E(X) is necessary to account for Y.

However, many people take E(X) = Y to be a truism. Arguments like mine show it is not.

Sorry, I missed what you are looking for.

In my model, the fitness function is supplying the information that allows fitness to continue increasing. This does not supply as much information for the specific goal as the stopping criterion. It becomes an infinitesimal amount as the target string length grows to infinity. However, it is still a quantity of information that cannot be accounted for by chance and determinism.

To look at the fitness change, we need to look at change over time. But the initial form of my simulation does not reference time.

To incorporate runtime into my model, we change the stopping criterion from maximizing fitness to a fixed number of iterations. In this case, since the probability of creating fitter descendant bitstrings drops as the bitstring becomes fitter, it takes more and more algorithmic information to provide enough iterations to increase fitness.
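A sketch of this time-based variant (my own code, reusing the earlier single-bit-flip climber; the budget replaces the all-ones stopping criterion):

```python
import random

def evolve_for(x, budget, rng):
    """Single-bit-flip hill climber that stops after a fixed iteration
    budget rather than when fitness is maximal."""
    g = list(x)
    for _ in range(budget):
        i = rng.randrange(len(g))
        mutant = g[:]
        mutant[i] ^= 1
        if sum(mutant) > sum(g):
            g = mutant
    return g

N = 64
X = [0] * N
# With the same seed, a larger budget replays the smaller run and continues,
# so fitness is non-decreasing in the budget, but with diminishing returns:
# an improving flip has probability (N - ones)/N, which shrinks as G fills up.
fits = [sum(evolve_for(X, b, random.Random(7))) for b in (10, 100, 1000)]
print(fits)
```

Each tenfold increase in the budget buys progressively fewer additional 1s, which is the diminishing-returns effect described above.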

If we use the time based variant that I just proposed above this comment, we can see how the fitness landscape affects things, since the fitness landscape affects how quickly fitness can be increased. A very smooth landscape with steep gradient leads to rapid increase. A very bumpy or flat landscape leads to slow increase, and getting stuck if we cannot explore widely enough.

The landscape itself can be expressed as a bitstring, where each bit has a value associated with it, and our exploration represented as filling in missing bits in the landscape. We want to fill in the highest value bits as quickly as possible, and our speed is directly related to how many bits of the landscape’s Kolmogorov complexity we know. And, to learn the landscape’s Kolmogorov complexity we need a halting oracle.

A very smooth, steep landscape has very small Kolmogorov complexity, so not much information must be learned. But, a very bumpy landscape with needle peaks and plateaus has very high Kolmogorov complexity, and a halting oracle becomes more of a necessity to make rapid enough progress to survive.

I submit the biological fitness landscape is more like the latter scenario than the smooth gradient scenario. At any rate the question is at least an open one, so it is not a truism that random variation + chance is enough to explain what we see. Using my previous formalism, it is not a truism that E(X) = Y.

One counter someone could make is that computable compression also lets us learn to some degree, but first that begs the question by relying on some algorithmic information, and second the learning is limited.

Another counter is that a dovetailing process is guaranteed to eventually converge to a bitstring’s Kolmogorov complexity, but again this is the time issue pushed up a level. Dovetailing only converges asymptotically.

Anyways, this note is not as well thought out as my previous ones, and I’ll be thinking about how to make this all more concrete. But I want to give you an idea of what I’m thinking before I go off and put it down in an article.

I’ve found this thread to be quite helpful.

And one final note: this idea that evolution requires a halting oracle is not something I originated. Gregory Chaitin, in his attempt to mathematically model evolution, came to the same conclusion that evolution requires the input of a halting oracle.

Hey Eric,

This is very interesting stuff…If you find time can you please explain precisely what you mean by a halting oracle? I found something similar in QM and I’m wondering if it is really similar or even the same thing…

Thanks

Then you have changed your argument. In the OP, Y represented the target i.e. the genotype of maximal fitness (and you keep using that concept in your answers to Joe and Bruce). However, most biologists will deny that this creature exists in the real world (in life history theory, it is mockingly called the Darwinian demon). ID creationists of course believe that everything we observe is also the intended target, but I have not seen you make that argument yet.

Quite right. In models with a higher level of biological realism than yours (like the one made by Tom English), at a certain point an equilibrium will be reached where the effect of mutation pressure will equal that of natural selection and fitness will no longer increase. This is exactly what we observe in the real world. It is the reason why we still observe heritable diseases in the face of purifying selection.

The important thing to note is that this equilibrium will almost always be at a higher fitness level than the randomly chosen initial genotype X, and thus will have a higher amount of Functional Information. Your task is to show that the law of information non-growth somehow prevents the population from reaching this state without the input of an intelligent agent. So far, you have not provided this argument.

Finally, on a more general note, I would like to emphasize how useful it is that you are willing to connect your mathematical argument to biological systems (which I really appreciate). As you can see, it forces you to explicitly state your implicit assumptions and submit them to closer scrutiny.

That’s interesting. Where did he make this statement? Did he mean the same thing that you mean by it?

There is a review and critique of Chaitin metabiology here, which includes the halting oracle. His usage is different from Eric’s, I believe.

https://www.researchgate.net/publication/315441496_Turing_Machines_and_Evolution_A_Critique_of_Gregory_Chaitin's_Metabiology

However, I think they both share the view that math somehow compels biology (see Eric’s comment in Gregory’s latest OP thread).

Even worse! I was using the charitable approach as best I could to argue what I thought Eric’s position might be. See his above post to me for why I posted that way — I think he asked me to, in short.

I do think that if we use

(1) an edge case of a model (here by fixing landscape etc),

(2) we agree that edge case still captures the essentials of what is being modeled

(3) then showing that the edge case fails in some sense means we have reason to doubt some aspect of the full model.

But there are a lot of assumptions and if’s in that logic. And even if you think the logic is reasonable, you then have to apply it to biology.

The point is that there were a lot of hypotheticals in my post.

BruceS,

Thanks Bruce.

Eric’s recent comments imply that we need a “halting oracle” in the simple population genetic argument, and if that oracle is not there, then something is missing. Perhaps it leaves him unsatisfied if it is not there, but in the model I used, gene frequencies keep changing, getting closer and closer to 1, but never getting there. In the Weasel-like model Eric analyzed, the single string in the population gets to be all 1’s, and then it continues in that state indefinitely.
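The deterministic core of the model described here (gene frequency climbing toward 1 but never arriving) can be sketched with the textbook haploid selection recursion; the parameter values below are my own choices for illustration:

```python
def next_freq(p, s):
    """One generation of deterministic selection: the favoured allele has
    fitness 1+s, the alternative allele fitness 1."""
    return p * (1 + s) / (1 + p * s)

p0, s = 0.01, 0.05
trajectory = [p0]
for _ in range(500):
    trajectory.append(next_freq(trajectory[-1], s))

# The frequency increases every generation yet stays strictly below 1.
print(trajectory[-1])
```

The gap to 1 shrinks by roughly a factor of 1/(1+s) each generation, so the frequency approaches 1 asymptotically and there is no generation at which the process "halts".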

The population is in states that have more FI than at the beginning. The fact that no Designer, or no natural mechanism, is ringing a bell and shouting “Halt!” does not change that. FI has increased and is higher as soon as the gene frequency has changed from the initial gene frequency, or, in the Weaseloid case, as soon as even one bit has flipped.

So if the Design Intervention is the stopping, it never happens, while FI has increased. Which would seem to invalidate Eric’s original claim.

Noting that Gregory Chaitin’s model has a halting oracle is irrelevant to my argument. Chaitin’s book Proving Darwin: Making Biology Mathematical analogizes an evolving population to a mathematical model he develops, one that is evaluating the function B(n), the n-th Busy Beaver function.

The fact that he needs the computation to halt is irrelevant to any other case. It is not binding on a butterfly in a rainforest, nor on any population genetics model.

Thanks Eric. I’ll wait for your fully thought out position. I hope you will post a link to it here.

I still wonder if you think general relativity requires a halting oracle to explain trajectories in the spacetime landscape (manifold). Why can’t an analog of your KMI argument be run there?

Objection 1: Gravity is a force. Evolution is not.

Response: there is no force of gravity in GR; mass/energy shapes spacetime landscapes and follows the trajectory dictated by that landscape.

Objection 2: Energy and mass are real. The trajectories of population genomes in the fitness landscape are merely mathematical constructs based on our scientific models.

Response: In modern physics, mass and energy are quantum entities, and such entities are also a mathematical abstraction based on scientific models of interacting quantum fields.

Why bother? The limitless, omnipotent natural selection is used by you instead…

If natural selection can ‘create’ self-assembling molecular machines, it could surely become whatever is needed, like a halting oracle, to ‘not allow the divine foot in the door’….

But Joe, don’t you know that you must support your statement or concede that it is merely your unsupported opinion? It’s the Jock rule.

We could just add more targets to the simulation. Surely this has been done by someone.

How about if we change our WEASEL program to have two targets. One could be the original phrase as used by Dawkins and the other could simply be the reverse of that phrase. So now how do we assign a “fitness” to each genotype?

Should we calculate the number of matches to the first string, count the number of matches to the second string, and add them together?
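Here is a minimal sketch of that proposal (function names are mine). It also makes a design wrinkle visible: under summed matches, both targets score equally, but so does any position-by-position chimera of the two phrases, so a hill climber will usually end on a scramble rather than on either phrase.

```python
T1 = "METHINKS IT IS LIKE A WEASEL"
T2 = T1[::-1]   # the reverse of the phrase

def matches(g, t):
    """Number of positions where genotype g agrees with target t."""
    return sum(a == b for a, b in zip(g, t))

def fitness_sum(g):
    """The proposed fitness: matches to the first target plus matches to the second."""
    return matches(g, T1) + matches(g, T2)

# Both targets score the same under this fitness...
print(fitness_sum(T1) == fitness_sum(T2))
# ...but so does any string that takes T1's character or T2's character
# position by position, since each position contributes 1 (or 2 where the
# phrases happen to agree) either way.
chimera = "".join(a if i % 2 == 0 else b for i, (a, b) in enumerate(zip(T1, T2)))
print(fitness_sum(chimera) == fitness_sum(T1))
```

So the summed objective does not single out either phrase as the outcome; it rewards a large family of mixtures equally.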

Yeah.

But what does it mean to say that gravity is “naturalistic”? Gravity is “natural” but it requires an intelligent cause.

Can you explain what you mean when you say no targets exist in nature?

Optimization is optimization for something. Right? And Joe needs targets to show that NS can create FI.

Naturalistic here means following MN, which requires consistency with the best current consensus fundamental physics.

In this case, it is Eric’s contention that it requires a halting oracle. These may be nomologically possible in GR, but they are not necessary to explain trajectories in GR.

BruceS,

Thanks Bruce. Understood

LOL, Mung. Only if challenged to do so.

Here’s what Joe actually wrote:

Are you really claiming, Mung, that you have never seen that argument made here?

I know I have.

Your behavior here, in particular the selective quoting, comes across as trolling.

I understand where you are coming from. In both Joe’s and Eric’s model we can identify the all-1 genotype as the beckoning endpoint that a population is searching for. Eric even appears to believe that the evolutionary algorithm should somehow signal the population when this goal has been reached. But let’s step away from the models and look at the biology. I can think of two arguments why the idea of fixed targets is misguided.

First, suppose we look at cystic fibrosis. It is a disease caused by recurrent mutations in the CFTR gene. Purifying selection keeps the frequency of the disease mutations in check. Is it fair to say that the healthy state is the target of a search looking to optimize the genome? That doesn’t make sense to me, because most members of the population have already reached this point. Rather, purifying selection is an inevitable consequence of the fact that people that are very ill tend to die earlier and have fewer children. I see no target or teleology here. So why would we suddenly be talking of targets when beneficial mutations appear and the very same process occurs?

Secondly, the fitness landscape is not static. A nice example is the Major Histocompatibility Complex. MHC proteins play an important role in immunity by presenting antigens to the appropriate T-cells. Some of the genes in the complex are the most polymorphic known, harbouring dozens, sometimes hundreds of alleles. Why is that? That is because pathogens adapt to evade the most frequent alleles, causing the rare alleles to be beneficial by the mere fact that they are rare (this is called frequency-dependent selection). So which genotype is the target here? There is none, because it is the high variability itself that protects the members of the population.

Other dynamics that do not support a fixed target come into play when organisms are dealing with predators (or prey) or in competition with conspecifics for resources, mates, etc. These all cause the optimal genotype to be a moving target, strongly dependent on environmental, ecological and genomic context. That is why I resist the idea of targets in evolution.

Corneel,

Great post, Corneel. Thanks.

1. When evolution can be approximated as moving on a fitness surface, if it succeeds in improving fitness (in going uphill), then the higher the fitness can be, and for the more generations, the better; but there is no requirement that it find the highest peak on the surface.

2. In such cases evolution will climb the peak whose sides it is on, and has no capability of making a global search for the highest peak. So we can be pretty sure that we are suboptimal organisms. But, we hope, good enough.

3. The way FI of an individual genome is calculated, we use the fitness value of that genome as the “threshold” in the calculation, compute the fraction of all genomes with fitness higher than that, and then take minus the base-2 logarithm of that fraction. As soon as the average value of this over all genomes rises above its initial value, FI has increased. It is not a matter of waiting for the peak, or the highest peak, to be reached for there to be any increase of FI. The statement that FI cannot be increased by natural selection is refuted, long before the population gets near any peak.
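In the N-bit model this calculation can be done exactly. A sketch, assuming the standard FI convention of taking minus the base-2 log of the fraction of genotypes at or above the threshold (my reading; the function name is mine):

```python
from math import comb, log2

def functional_information(N, t):
    """FI of fitness threshold t in the N-bit model: -log2 of the fraction
    of all 2^N bitstrings having at least t ones."""
    frac = sum(comb(N, k) for k in range(t, N + 1)) / 2 ** N
    return -log2(frac)

N = 100
print(functional_information(N, N))        # all-ones genotype: 100.0 bits
print(functional_information(N, N // 2))   # half-ones genotype: under 1 bit
```

A random starting genotype has about N/2 ones and hence well under one bit of FI, while the all-ones genotype has N bits; any increase in the count of 1s raises FI, long before the peak is reached.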

Are there models of evolution that use differential equations directly? I’m aware of predator-prey models, but they do not seem to be directly applicable.

As to whether anyone has argued here that genetic algorithms are not simulating evolutionary processes but only demonstrate Intelligent Design, I have found an example in a thread here:

This comment in a 2016 thread on genetic algorithms, a comment made by “Frankie”. “Frankie” often presents his intelligent reasoning at Uncommon Descent under another name. We all know him under other names, too. Here is an argument that one of Frankie’s joined-at-the-hip co-thinkers, JoeG, gives at his delightful blog. And even in the latest postings at that blog, there is a similar post.

Mung might have forgotten that TSZ comment, but Mung probably saw it. Because the thread Frankie commented in was Mung’s thread.

Now perhaps no serious Intelligent Design person goes along with Frankie’s intelligent reasoning. Let’s see what the Discovery Institute says on the topic:

Here is a good example. It is not signed by an author, but by “Evolution News” itself, so it is ex cathedra.

The problem with the divine foot is where do you proceed once the foot is in the door. Any idea?

Yes, there are many examples of use of differential equations to model population genetic processes in continuous time. They can be a good approximation to discrete-generations models when selection coefficients, mutation rates, and similar parameters are small.

In my online text Theoretical Evolutionary Genetics you will find some discussion of them, for example in sections I.7 and II.3.

That is for deterministic models. When genetic drift is included, the standard methods use partial differential equations, the “diffusion equations” of Sewall Wright (1931), R.A. Fisher (1930), and, in greater generality, Andrei Kolmogorov.
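For readers unfamiliar with those diffusion equations, the standard form is the Kolmogorov forward equation for the density φ(p, t) of populations with allele frequency p at time t (a generic textbook sketch, not a formula quoted from the text):

```latex
\frac{\partial \varphi(p,t)}{\partial t}
  = -\frac{\partial}{\partial p}\bigl[ M(p)\,\varphi(p,t) \bigr]
  + \frac{1}{2}\,\frac{\partial^{2}}{\partial p^{2}}\bigl[ V(p)\,\varphi(p,t) \bigr]
```

where M(p) is the mean change in p per generation (for example sp(1-p) under genic selection) and V(p) = p(1-p)/(2N_e) is the per-generation sampling variance due to genetic drift.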

Translating their position in terms of modeling, it comes out: if a model is realistic, it is only because the modeler made it realistic. The modeler is the “outside source of information.”

It’s not just that the IDists don’t have a model; modeling itself is antithetical to the way they see the relationship between math and reality.

They are in the business of catching modelers doing modeling.

Yes, I think this is the root cause why arguments from consensus science fail to convince ID theorists.

Eric made this comment in Gregory’s thread on PS: “[PS’s goal] is to keep science metaphysically neutral, but I don’t think that’s possible to do while being faithful to the mathematical truth.”

I am not sure of the details of what Eric meant, but I would parse the ID position like this:

1. Reality is fundamentally mathematical (The cosmologist Tegmark holds this position as well).

2. Mathematicians have access to that fundamental reality through sound proofs.

3. Eric’s proof (based on consensus math in Levin) is sound and so provides fundamental truths about reality.

4. Eric’s proof shows that observed increases in fitness cannot be explained by naturalistic processes, which for him means determinism+randomness. Such processes are always Turing computable, and conversely.

5. Hence a non-Turing-computable process must underlie observed fitness increases.

6. Halting oracles go beyond Turing computability and are logically and hence mathematically possible.

7. Intelligence implements non-Turing computability (Gödel, Penrose, and Lucas believe this). Hence intelligence can explain observed fitness increases.

It’s not clear to me how to argue against that position, given that one accepts 1, 2, and 3. I don’t see anything incoherent in accepting them.

I’m not saying I do. I am just saying that I cannot think of a knock-down argument for rejecting them.

BruceS,

In case anyone cares why I don’t accept the arguments in my above post:

The premises that reality is fundamentally math and that intelligence requires non-Turing computation are rejected by the consensus of informed experts.

Most importantly: it seems clear that not every proof in math can be saying something about our actual world. For example, there are several possible geometries for our world, but only one is right.

So how do we fallible humans validate that the math we are doing applies to our actual world? Only through the process of science. So premise 3 cannot work without science to support the math Eric has chosen. Biologists, not mathematicians, are the people we should look to in order to validate that.

BruceS,

Yes, I think you have summed that up pretty well. And EricMH’s reasoning does seem consistent with that viewpoint. For that matter, Barry Arrington’s views about the reality of logic seem consistent with that viewpoint. And, for that matter, so do FifthMonarchMan’s views about the Logos.

But it is all bonkers.

Are you saying if it does not find the highest peak it is not an optimization algorithm?

And of course the highest peak would be one target. It does not have to be the only target. I proposed that we try with two targets.

So there really is no known mechanism by which evolution might move from one peak to another? Why not call them islands then?

Once again I propose a WEASEL model with two targets.

METHINKS IT IS LIKE A WEASEL

LESAEW A EKIL SI TI SKNIHTEM

What does my objective function look like in that case, how do I design it?
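One hedged sketch of an answer (my own design choice, not something proposed in the thread): score against both targets and take the maximum. That gives the landscape two genuine peaks, one at each phrase, instead of the family of equally fit mixtures that summing produces.

```python
T1 = "METHINKS IT IS LIKE A WEASEL"
T2 = T1[::-1]

def matches(g, t):
    """Number of positions where genotype g agrees with target t."""
    return sum(a == b for a, b in zip(g, t))

def fitness_max(g):
    """Two-peaked objective: a genotype is as fit as its better match."""
    return max(matches(g, T1), matches(g, T2))

# Each target is a global optimum, and a hill climber started at a random
# string will climb toward whichever phrase it happens to resemble more.
print(fitness_max(T1), fitness_max(T2))   # 28 28
```

With max, the two basins of attraction partition the search space, so different runs can end at different "targets", which is the point of the two-target experiment.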