Part of my broader point is that Dembski’s argument is supported by standard information theory.

The article is pretty technical, though not as technical as the paper by Leonid Levin I got the ideas from. If you skim it, you can get the basic idea.

https://am-nat.org/site/law-of-information-non-growth/

First, I prove that randomness and computation cannot create mutual information with an independent target, essentially a dumbed down version of Levin’s proof. Dembski’s CSI is a form of mutual information, and this proof is a more limited version of Dembski’s conservation of information.

Next, I prove that a halting oracle (intelligent agent) can violate the conservation of information and create information.

Finally, I show that there is a great deal of mutual information between the universe and mathematics. Since mathematics is an independent target, the conservation of information says that randomness and computation alone should produce zero mutual information. Therefore, the universe must have been created by something like a halting oracle.

I cannot promise a response to everyone’s comments, but the more technical and focussed, the more likely I will respond.

I should really credit Allan Miller. 😉

In order to focus some of the discussion here on simple models of genetic changes in populations, perhaps we can take a look at a numerical example I gave in

my first post here at The Skeptical Zone. It was a simple model of natural selection at 100 loci in an infinite population. The changes of gene frequency (and of genotype frequency) could be straightforwardly calculated. They showed clearly that Specified Information (Functional Information, where the function was chosen as fitness) could accumulate in the genome.

Perhaps Eric Holloway can show how his mutual information applies to the model situation. The model is equivalent to an evolutionary algorithm — it’s just that we don’t need to run simulations of it, as we can calculate the outcome.
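For concreteness, the deterministic recursion behind a model of this kind can be sketched in a few lines of Python. This is a minimal reconstruction of my own, assuming diploid selection at each of the 100 loci, each locus treated independently in an infinite population (so no drift, and the recursion is exact); the fitness values are illustrative, not taken from the original post.

```python
# Deterministic selection at one locus, iterated over 100 independent loci.
# Textbook recursion for allele A at frequency p:
#   p' = p * w_A / w_bar,
# where w_A is the marginal fitness of A and w_bar the population mean fitness:
#   w_A   = p*w_AA + (1-p)*w_Aa
#   w_bar = p^2*w_AA + 2p(1-p)*w_Aa + (1-p)^2*w_aa
# Fitness values below are illustrative (weak directional selection for A).

def next_freq(p, w_AA=1.02, w_Aa=1.01, w_aa=1.0):
    """One generation of deterministic viability selection at a single locus."""
    w_A = p * w_AA + (1 - p) * w_Aa                       # marginal fitness of A
    w_bar = p * w_A + (1 - p) * (p * w_Aa + (1 - p) * w_aa)  # mean fitness
    return p * w_A / w_bar

# 100 loci, all starting at frequency 0.1; iterate 500 generations.
freqs = [0.1] * 100
for _ in range(500):
    freqs = [next_freq(p) for p in freqs]

print(min(freqs))  # the favoured allele has risen in frequency at every locus
```

Because the population is infinite, no simulation runs are needed: the outcome is calculated exactly, generation by generation, and the favoured allele climbs toward fixation at every locus.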

So is there a Law of Conservation of Complex Specified Information in that case, or not? Does the mutual information calculation offer insights? Does it predict what cannot happen? Is perchance what actually happens in this case the thing which is predicted by Holloway not to be able to happen?

I can easily provide the equations if needed — they are quite simple.

Joe Felsenstein,

I think Eric’s argument is about the forming of the genes themselves.

If so, it’s irrelevant to CSI. Anyway, he will tell us.

Should have guessed. Once the Creationist / ID crowd has come to terms with your everyday directional selection, they will have a very rough time explaining the absurdities resulting from intragenomic genetic conflict. There is simply no way to fit it comfortably into a design framework. Until that time … well, you see how silent they are in the thread on Dave Carlson’s piece at PS. Oblivious 🙂

I see intragenomic conflict is a wide subject. I have noticed a couple of remarks by DNA_Jock disparaging Dawkins’ “selfish gene” but this seems to support the idea at least in the case of female/male conflict.

The Wikipedia entry finishes with the throwaway line:

“Conflict between chromosomes has been proposed as an element in the evolution of sex.”

I wonder if Allan Miller has any thoughts on this.

Thanks Dr. Felsenstein. This looks simple enough that even I can deal with it 🙂

Actually, my beef with Dawkins is that his first book pretty much ignored pleiotropy and epistasis — broadly, the fact that the immediate environment that any allele experiences is the result of all the other alleles in that cell. Intragenomic conflict being but one aspect of these fascinating interactions.

His second offering, “The Extended Phenotype”, is a much better book.

I agree and I’m pretty sure Dawkins regards it as his seminal work.

First of all, great analysis. Thanks for putting the time in to think through the argument, BruceS.

Regarding U, I don’t have anything further than the idea we can perfectly encode the universe’s state at any point in time as a bitstring, at least in theory. For instance, if we knew the position, composition and velocity of all particles in the universe at every point in time, we could encode this as a bitstring. I think this is the same as your statement that the universe is perfect information.

Another similar idea is Laplace’s demon:

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.

— Pierre Simon Laplace, A Philosophical Essay on Probabilities

In this case U is the information content of the intellect in Laplace’s thought experiment, and could be represented as some kind of computational simulation, which is a bitstring.

We can further subdivide this simulation into two components, per Laplace’s example:

1. all forces that set nature in motion

2. all positions of all items of which nature is composed

#1 is identified with the set of simulation algorithms, and #2 is identified with the specific parameters that we use to initialize the algorithms.

So, from your analysis, #1 is the same as the bitstring encoding of physics P. #2 is not addressed by your analysis, so I’ll just call it data D.

Whether or not this is the best way to think about the universe, I think it is clear what I mean, so I will proceed with this formulation for U, P and D.

Now, for human mathematical knowledge M, you are fine with saying this can be encoded as a bitstring. To make this a relevant bitstring for algorithmic information theory, we will further say the knowledge is encoded in a computational form, such as mathematical algorithms and axioms in a computational proof system. Thus, we can group all of these computational mathematical items into a software library and programming system, analogous to something like Mathematica.

So, we have four items: U, P, D and M, and can proceed with our analysis. U is composed of P and D, so we can say U is the tuple {P, D}, and thus the Kolmogorov complexities of the two are the same:

K(U) = K(P,D). (1)

Now we can examine how M relates to U, P and D as algorithmic mutual information.

Before beginning, we recall the definition of algorithmic mutual information:

I(X:Y) = K(X) – K(X|Y*) = K(Y) – K(Y|X*), (2)

where X* is the shortest program that generates X, and the same for Y.
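A crude empirical analogue of definition (2) can be computed by using compressed length as a stand-in for K (compression gives a computable upper bound on Kolmogorov complexity, plus overhead). This sketch is my own illustration, not part of the argument above; `zlib` is only a weak proxy for a universal compressor, so the numbers are approximations, not true algorithmic quantities.

```python
# Approximate I(X:Y) by the symmetric compression-based formula
#   I(X:Y) ≈ C(X) + C(Y) - C(XY),
# where C is compressed length. Shared structure between X and Y makes the
# concatenation XY compress better than the two parts separately.
import os
import zlib

def C(b: bytes) -> int:
    """Compressed length: a crude, computable upper bound on K."""
    return len(zlib.compress(b, 9))

def mutual_info_estimate(x: bytes, y: bytes) -> int:
    return C(x) + C(y) - C(x + y)

x = b"the quick brown fox jumps over the lazy dog " * 50
y = b"the quick brown fox naps beside the lazy dog " * 50
z = os.urandom(len(x))  # incompressible and independent of x

print(mutual_info_estimate(x, y))  # clearly positive: shared structure
print(mutual_info_estimate(x, z))  # small: no shared structure
```

The estimate is only as good as the compressor, but it makes the definition concrete: mutual information measures how much one string helps in describing the other.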

The first thing to note is that the size of the physics software libraries can be reduced by having them rely on the mathematics libraries. So,

K(P) > K(P|M*).
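The effect of conditioning on a library, as in K(P) > K(P|M*), can be illustrated with zlib’s preset-dictionary feature: supplying a dictionary (playing the role of M*) shortens the encoding of text that reuses the dictionary’s vocabulary. The strings below are invented purely for illustration.

```python
# Toy analogue of K(P) > K(P|M*): compress a "physics" string with and
# without a preset "mathematics" dictionary. Back-references into the
# dictionary let the conditional encoding come out shorter.
import zlib

M = b"derivative integral tensor eigenvalue manifold operator norm " * 4
P = (b"the field equations relate the tensor derivative "
     b"to the eigenvalue spectrum of the operator")

def compressed_len(data: bytes, zdict: bytes = b"") -> int:
    co = zlib.compressobj(9, zdict=zdict) if zdict else zlib.compressobj(9)
    return len(co.compress(data) + co.flush())

plain = compressed_len(P)          # analogue of K(P)
conditional = compressed_len(P, M) # analogue of K(P | M*)
print(plain, conditional)          # conditional encoding is shorter
```

As with any compression proxy, this only gestures at the algorithmic statement, but the direction of the inequality is the same: access to the library makes the description shorter.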

However, adding in the data term does not preserve the strict inequality,

K(P,D) >= K(P,D|M*).

If we make the assumption that the initial-condition data D is independent of human mathematical knowledge M,

K(M,D) = K(M) + K(D),

then we can preserve the strict inequality,

K(P,D) > K(P,D|M*),

and consequently,

K(P,D) – K(P,D|M*) > 0

I(P,D:M) > 0. (3)

Finally, we combine equations (1), (2) and (3) to show there is mutual information between the universe and human mathematical knowledge.

K(U) = K(P,D)

K(U) – K(U|M*) = K(P,D) – K(P,D|M*)

I(U:M) = I(P,D:M) > 0.

If human mathematical knowledge is an independent target, then the mutual information I(U:M) cannot be explained by randomness + Turing computation.

This is the end of my engagement here. It has been a goodish experience, so I will venture back when I have more content to discuss.

EricMH,

Eric quotes Laplace:

We may regard the present state of the universe as the effect of its past and the cause of its future.

How do you account for spontaneous radioactive decay? What’s the causality?

EricMH,

Sorry to see you go, Eric. You’re welcome back, any time.

Good. The example does not require a lot of knowledge of biology — it is a simple model. I suspect that it will be enough to greatly illuminate the connection, if any, of mutual information to evolutionary processes.

I will await the result of your cogitating on the connection between mutual information and CSI in the example I gave. We can open a separate thread on that any time.

Where’s the outcry!

Infinite populations are not biologically realistic.

Alan Fox,

Loads! 🤣

Allan Miller,

Though, as a teaser, the following quote from The Extended Phenotype always resonated with me (from memory): “[chromosomes] dragged kicking and screaming into the second anaphase of meiosis”.

“No!”, I thought. They are simply doing what they have always done from day 1: releasing haploids, the ancestral organism. Sure, there might be a bit of squabbling, but … I think Dawkins missed a trick here. The stance ‘who benefits?’, or point-of-view, is a fruitful one, but in the matter of sex, it’s not ‘the gene’, and it’s not ‘the diploid’; the beneficiary is a complete haploid genome. And, it’s not all about conflict. For most of their life cycle, they co-operate in binary organisms – a deep cellular symbiosis, to mutual benefit.