Breaking the law of information non-growth

This is part of my broader point that Dembski’s argument is supported by standard information theory.

The article is pretty technical, though not as technical as the paper by Leonid Levin I got the ideas from.  If you skim it, you can get the basic idea.

https://am-nat.org/site/law-of-information-non-growth/

First, I prove that randomness and computation cannot create mutual information with an independent target, essentially a dumbed-down version of Levin’s proof. Dembski’s CSI (complex specified information) is a form of mutual information, and this proof is a more limited version of Dembski’s conservation of information.
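For reference, this is roughly the shape of the result (my paraphrase; see the linked article and Levin’s paper for the precise statements):

```latex
% Paraphrase of the non-growth law: computable processing of x by f
% gains at most the complexity of f over the starting information,
\[
  I\big(f(x) : y\big) \;\le\; I(x : y) + K(f) + O(\log),
\]
% and adding random bits r helps only with exponentially small probability:
\[
  \Pr_r\big[\, I(x, r : y) > I(x : y) + k \,\big] \;\le\; 2^{-k + O(1)}.
\]
```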

Next, I prove that a halting oracle (intelligent agent) can violate the conservation of information and create information.

Finally, I show that there is a great deal of mutual information between the universe and mathematics. Since mathematics is an independent target, conservation of information says that randomness and computation alone should yield zero mutual information. Therefore, the universe must have been created by something like a halting oracle.

I cannot promise a response to everyone’s comments, but the more technical and focused a comment is, the more likely I am to respond.

117 thoughts on “Breaking the law of information non-growth”

  1. In order to focus some of the discussion here on simple models of genetic changes in populations, perhaps we can take a look at a numerical example I gave in my first post here at The Skeptical Zone. It was a simple model of natural selection at 100 loci in an infinite population. The changes of gene frequency (and of genotype frequency) could be straightforwardly calculated. They showed clearly that Specified Information (Functional Information, where the function was chosen as fitness) could accumulate in the genome.

    Perhaps Eric Holloway can show how his mutual information applies to the model situation. The model is equivalent to an evolutionary algorithm — it’s just that we don’t need to run simulations of it, as we can calculate the outcome.

    So is there a Law of Conservation of Complex Specified Information in that case, or not? Does the mutual information calculation offer insights? Does it predict what cannot happen? Is what actually happens in this case, perchance, exactly what Holloway predicts cannot happen?

    I can easily provide the equations if needed — they are quite simple.
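    A minimal sketch of the kind of calculation described above (illustrative only; these are not necessarily Felsenstein’s actual equations, which he offers to provide) might look like this in Python:

    ```python
    # Deterministic selection at 100 haploid loci in an infinite
    # population, favoured allele at each locus with relative fitness
    # 1 + s, all loci starting at frequency 0.5 and evolving identically.

    from math import comb, log2

    def next_freq(p, s):
        # One generation of the standard haploid selection recursion:
        # p' = p(1 + s) / (1 + s*p).
        return p * (1 + s) / (1 + s * p)

    def specified_info_bits(L, k):
        # Specified (functional) information: -log2 of the fraction of
        # the 2^L equally weighted genomes carrying at least k favoured
        # alleles, i.e. with fitness at least that of the population.
        tail = sum(comb(L, i) for i in range(k, L + 1)) / 2 ** L
        return -log2(tail)

    L, s, p = 100, 0.01, 0.5
    for gen in range(2001):
        if gen % 500 == 0:
            k = round(p * L)  # expected favoured-allele count per genome
            print(f"gen {gen:4d}: p = {p:.3f}, "
                  f"SI = {specified_info_bits(L, k):5.1f} bits")
        p = next_freq(p, s)
    ```

    Running it shows the specified information climbing from under a bit at the start toward 100 bits as the favoured alleles approach fixation, which is the accumulation described above.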

  2. Joe Felsenstein,

    They showed clearly that Specified Information (Functional Information, where the function was chosen as fitness) could accumulate in the genome.

    I think Eric’s argument is about the formation of the genes themselves.

  3. Alan Fox: I should really credit Allan Miller. 😉

    Should have guessed. Once the Creationist/ID crowd has come to terms with your everyday directional selection, they will have a very rough time explaining the absurdities resulting from intragenomic genetic conflict. There is simply no way to fit it comfortably into a design framework. Until that time … well, you see how silent they are in the thread to Dave Carlson’s piece at PS. Oblivious 🙂

  4. Corneel: …intragenomic genetic conflict…

    I see intragenomic conflict is a wide subject. I have noticed a couple of remarks by DNA_Jock disparaging Dawkins’ “selfish gene”, but this seems to support the idea, at least in the case of female/male conflict.

    The Wikipedia entry finishes with the throwaway line:

    “Conflict between chromosomes has been proposed as an element in the evolution of sex.”

    I wonder if Allan Miller has any thoughts on this.

  5. Joe Felsenstein: In order to focus some of the discussion here on simple models of genetic changes in populations, perhaps we can take a look at a numerical example I gave in my first post here at The Skeptical Zone. It was a simple model of natural selection at 100 loci in an infinite population. The changes of gene frequency (and of genotype frequency) could be straightforwardly calculated. They showed clearly that Specified Information (Functional Information, where the function was chosen as fitness) could accumulate in the genome.

    Thanks Dr. Felsenstein. This looks simple enough that even I can deal with it 🙂

  6. Actually, my beef with Dawkins is that his first book pretty much ignored pleiotropy and epistasis — broadly, the fact that the immediate environment that any allele experiences is the result of all the other alleles in that cell. Intragenomic conflict being but one aspect of these fascinating interactions.
    His second offering, “The Extended Phenotype”, is a much better book.

  7. DNA_Jock: His second offering, “The Extended Phenotype”, is a much better book [than The Selfish Gene]

    I agree and I’m pretty sure Dawkins regards it as his seminal work.

  8. BruceS: If you work with I(U:M) then it is unclear if there is MI between M and U. Although M might include mathematical statements also used in physics, M alone lacks the descriptions of how to map variables to the observable world. Further, what would the function f be? It is true that changes to human math don’t care about capturing the world. So math in that sense might be independent of U in your formalism. But again, Wigner’s argument is about science, not math on its own.

    First of all, great analysis. Thanks for putting the time in to think through the argument, BruceS.

    Regarding U, I don’t have anything further than the idea that we can perfectly encode the universe’s state at any point in time as a bitstring, at least in theory. For instance, if we knew the position, composition and velocity of all particles in the universe at every point in time, we could encode this as a bitstring. I think this is the same as your statement that the universe is perfect information.
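    As a toy illustration (my construction, purely illustrative and no part of the argument), a snapshot of a miniature “universe” can be serialized to a bitstring like so:

    ```python
    # Toy illustration: pack a tiny "universe" (particles with positions,
    # velocities and a composition tag) into a bitstring. The only point
    # is that any finite numerical state description is a bitstring.

    import struct

    particles = [
        # (x, y, z, vx, vy, vz, kind)
        (0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1),
        (1.0, 2.0, 3.0, 0.0, -1.0, 0.5, 2),
    ]

    # Six little-endian doubles plus one unsigned int per particle.
    blob = b"".join(struct.pack("<6dI", *p[:6], p[6]) for p in particles)
    bits = "".join(f"{byte:08b}" for byte in blob)
    print(len(bits), "bits")  # 2 particles x (6*64 + 32) = 832 bits
    ```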

    Another similar idea is Laplace’s demon:

    We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.

    — Pierre Simon Laplace, A Philosophical Essay on Probabilities

    In this case U is the information content of the intellect in Laplace’s thought experiment, and could be represented as some kind of computational simulation, which is a bitstring.

    We can further subdivide this simulation into two components, per Laplace’s example:

    1. all forces that set nature in motion

    2. all positions of all items of which nature is composed

    #1 is identified with the set of simulation algorithms, and #2 is identified with the specific parameters that we use to initialize the algorithms.

    So, from your analysis, #1 is the same as the bitstring encoding of physics P. #2 is not addressed by your analysis, so I’ll just call it data D.

    Whether or not this is the best way to think about the universe, I think it is clear what I mean, so I will proceed with this formulation for U, P and D.

    Now, for human mathematical knowledge M, you are fine with saying this can be encoded as a bitstring. To make this a relevant bitstring for algorithmic information theory, we will further say the knowledge is encoded in a computational form, such as mathematical algorithms and axioms in a computational proof system. Thus, we can group all of these computational mathematical items into a software library and programming system, analogous to something like Mathematica.

    So, we have four items: U, P, D and M, and can proceed with our analysis. U is composed of P and D, so we can say U is the tuple (P, D), and thus the Kolmogorov complexities of the two are the same (up to an additive constant):

    K(U) = K(P,D). (1)

    Now we can examine how M relates to U, P and D as algorithmic mutual information.

    Before beginning, we recall the definition of algorithmic mutual information:

    I(X:Y) = K(X) – K(X|Y*) = K(Y) – K(Y|X*), (2)

    where X* is the shortest program that generates X, and likewise Y* for Y.

    The first thing to note is that the size of the physics software libraries can be reduced by having them rely on the mathematics libraries. So,

    K(P) > K(P|M*).

    However, adding in the data term does not by itself preserve the strict inequality; in general we only get

    K(P,D) >= K(P,D|M*).

    If we make the assumption that the initialization data D is independent of human mathematical knowledge M,

    K(M,D) = K(M) + K(D),

    then the strict inequality is preserved, because conditioning on M* compresses P but cannot compress the independent D:

    K(P,D) > K(P,D|M*),

    and consequently,

    K(P,D) – K(P,D|M*) > 0, i.e.,

    I(P,D:M) > 0. (3)
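    To spell out why the independence assumption yields (3) (my reconstruction, using the chain rule K(X,Y) = K(X) + K(Y|X*) and suppressing logarithmic terms):

    ```latex
    \begin{align*}
      K(P,D \mid M^*) &\le K(P \mid M^*) + K(D \mid P^*, M^*) && \text{(chain rule)} \\
                      &=   K(P \mid M^*) + K(D \mid P^*)      && \text{($D$ independent of $M$)} \\
                      &<   K(P) + K(D \mid P^*)               && \text{(since $K(P) > K(P \mid M^*)$)} \\
                      &=   K(P,D)                             && \text{(chain rule again)}
    \end{align*}
    ```

    The strict inequality survives provided the saving K(P) – K(P|M*) outweighs the suppressed logarithmic terms.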

    Finally, we combine (1), (2) and (3) to show there is mutual information between the universe and human mathematical knowledge.

    K(U) = K(P,D)

    K(U) – K(U|M*) = K(P,D) – K(P,D|M*)

    I(U:M) = I(P,D:M) > 0.

    If human mathematical knowledge is an independent target, then the mutual information I(U:M) cannot be explained by randomness + Turing computation.
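    As a practical footnote: K is uncomputable, so no one can evaluate I(U:M) directly. A standard heuristic (in the spirit of Cilibrasi and Vitányi’s compression work, and only a crude stand-in for the quantity in the proof) replaces K with the output length of a real compressor:

    ```python
    # Compressor-based proxy for algorithmic mutual information, using
    # I(X:Y) = K(X) + K(Y) - K(X,Y) (up to log terms) with zlib standing
    # in for the uncomputable K. A rough upper-bound heuristic only.

    import zlib

    def c(data: bytes) -> int:
        # Compressed length as a stand-in for Kolmogorov complexity.
        return len(zlib.compress(data, 9))

    def mi_estimate(x: bytes, y: bytes) -> int:
        return c(x) + c(y) - c(x + y)

    physics = b"F = G*m1*m2/r^2; E = m*c^2; F = m*a; " * 30
    maths = b"d/dx x^n = n*x^(n-1); e^(i*pi) + 1 = 0; " * 30

    # Shared algebraic notation registers as shared structure; comparing
    # physics with itself gives a much larger value, as expected.
    print(mi_estimate(physics, maths))
    print(mi_estimate(physics, physics))
    ```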

  9. This is the end of my engagement here. It has been a goodish experience, so I will venture back when I have more content to discuss.

  10. EricMH,

    Eric quotes Laplace:

    We may regard the present state of the universe as the effect of its past and the cause of its future.

    How do you account for spontaneous radioactive decay? What’s the causality?

  11. EricMH: Thanks Dr. Felsenstein. This looks simple enough that even I can deal with it

    Good. The example does not require a lot of knowledge of biology — it is a simple model. I suspect that it will be enough to greatly illuminate the connection, if any, between mutual information and evolutionary processes.

    EricMH: This is the end of my engagement here. It has been a goodish experience, so I will venture back when I have more content to discuss.

    I will await the result of your cogitating on the connection between mutual information and CSI in the example I gave. We can open a separate thread on that any time.

  12. EricMH: This looks simple enough that even I can deal with it

    Where’s the outcry!

    Infinite populations are not biologically realistic.

  13. Alan Fox,

    “Conflict between chromosomes has been proposed as an element in the evolution of sex.”

    I wonder if Allan Miller has any thoughts on this.

    Loads! 🤣

  14. Allan Miller,

    Though, as a teaser, the following quote from “The Extended Phenotype” always resonated with me (from memory): “[chromosomes] dragged kicking and screaming into the second anaphase of meiosis”. “No!”, I thought. They are simply doing what they have always done from day 1: releasing haploids, the ancestral organism. Sure, there might be a bit of squabbling, but …

    I think Dawkins missed a trick here. The stance ‘who benefits?’, or point-of-view, is a fruitful one, but in the matter of sex, it’s not ‘the gene’, and it’s not ‘the diploid’; the beneficiary is a complete haploid genome. And, it’s not all about conflict. For most of their life cycle, they co-operate in binary organisms – a deep cellular symbiosis, to mutual benefit.
