Non-conservation of algorithmic specified complexity…

… proved without reference to infinity and the empty string.

Some readers have objected to my simple proof that a computable transformation f(x) of a binary string x can result in an infinite increase in algorithmic specified complexity (ASC). Here I give a less simple proof that there is no upper bound on the difference in ASC of f(x) and x. More precisely, I show that the difference can be made equal to any given positive real number.

Updated 12/8/2019: The assumptions of my theorem were unnecessarily restrictive. I have relaxed the assumptions, without changing the proof.

Definitions. Let \mathcal{X} := \{0, 1\}^*\!, the set of all binary strings. For all x in \mathcal{X}, the algorithmic specified complexity of x is defined

    \[A(x; c, p) := -\!\log_2 p(x) - K(x|c),\]

where c is an element of \mathcal{X}, and p is a distribution of probability over \mathcal{X}. Here K(x|c) is the algorithmic complexity of x conditioned on c.
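
To make the definition concrete, here is a worked instance with hypothetical values: if p(x) = 2^{-20} and K(x|c) = 12 bits, then

    \[A(x; c, p) = -\!\log_2 2^{-20} - 12 = 20 - 12 = 8\]

bits. Improbability under p contributes bits of ASC, and the conditional complexity K(x|c) subtracts them.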

Theorem. Let f(x) = y be a constant function on \mathcal{X}. Let x and c be elements of \mathcal{X} for which K(x|c) > K(y|c) holds. Also let \alpha be a positive real number. There exists a distribution p of probability over \mathcal{X} such that

    \[A(f(x); c, p) - A(x; c, p) = \alpha.\]

[12/8/2019: To generalize, replace the first two sentences of the theorem with the following: Let c be an element of \mathcal{X}, and let x and y be elements of \mathcal{X} such that K(x|c) > K(y|c). Let f be a computable function on \mathcal{X} with f(x) = y.]

Proof. Noting that f(x) = y by hypothesis,

    \begin{align*} &A(f(x); c, p) - A(x; c, p) \\ &\quad\quad= A(y; c, p) - A(x; c, p) \\ &\quad\quad= -\!\log_2 p(y) - K(y|c) - [-\!\log_2 p(x) - K(x|c)] \\ &\quad\quad= \log_2 p(x) - \log_2 p(y) + K(x|c) - K(y|c) \\ &\quad\quad= \log_2 \frac{p(x)}{p(y)} + k, \end{align*}

where k = K(x|c) - K(y|c) > 0 and x \neq y. In the case of \alpha \geq k, set p(x) = .5, and solve for p(y) \leq .5 in terms of \alpha:

    \begin{align*} \alpha &= \log_2 \frac{p(x)}{p(y)} + k \\ \alpha - k &= \log_2 \frac{.5}{p(y)} \\ 2^{\alpha - k} &= \frac{.5}{p(y)} \\ p(y) &= \frac{2^{-1}}{2^{\alpha - k}} \\ p(y) &= 2^{k- \alpha - 1}. \end{align*}

Observe that p(y) = 2^{k- \alpha - 1} is precisely equal to 2^{-1} = .5 if and only if \alpha = k, and is strictly less than 2^{-1} when \alpha > k. Thus p distributes probability mass

    \[1 - [p(x) + p(y)] \geq 0\]

over the nonempty set \mathcal{X} \backslash \{x, y\}. In the case of \alpha < k, set p(y) = .5, and solve similarly to obtain p(x) = 2^{\alpha - k - 1} < .5. QED.
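
As a sanity check (not part of the proof), here is a minimal numerical sketch of the construction. The conditional complexities are uncomputable, so K(x|c) and K(y|c) below are stand-in values, assumed only to satisfy K(x|c) > K(y|c); everything else follows the proof.

    import math

    def construct_p(k, alpha):
        # Construct p(x) and p(y) as in the proof, given
        # k = K(x|c) - K(y|c) > 0 and target difference alpha > 0.
        if alpha >= k:
            px = 0.5
            py = 2.0 ** (k - alpha - 1)   # p(y) = 2^(k - alpha - 1) <= .5
        else:
            py = 0.5
            px = 2.0 ** (alpha - k - 1)   # p(x) = 2^(alpha - k - 1) < .5
        return px, py

    K_x, K_y = 10, 3                      # stand-ins with K(x|c) > K(y|c)
    k = K_x - K_y
    for alpha in (0.5, float(k), 12.25):  # cases alpha < k, alpha = k, alpha > k
        px, py = construct_p(k, alpha)
        diff = (-math.log2(py) - K_y) - (-math.log2(px) - K_x)
        assert px + py <= 1.0 and math.isclose(diff, alpha)
        print(f"alpha = {alpha}: A(f(x);c,p) - A(x;c,p) = {diff}")

In each case the remaining probability mass 1 - [p(x) + p(y)] can be spread arbitrarily over \mathcal{X} \backslash \{x, y\}.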

22 thoughts on “Non-conservation of algorithmic specified complexity…”

  1. Your random equations are meaningless. You might as well calculate how many angels fit on a pin head. It’s true that your ID opponents talk about this nonsense called “specified complexity” and “information” (thank you Shannon for screwing up so many generations by mislabeling ‘data’ as ‘information’), but this doesn’t absolve you from clarifying and VALIDATING your assumptions before launching into nonsense “proofs”.

  2. They aren’t meaningless. Dr. English is establishing the AIT conservation of randomness proof doesn’t apply to ASC, which is correct.

  3. EricMH: They aren’t meaningless. Dr. English is establishing the AIT conservation of randomness proof doesn’t apply to ASC, which is correct.

    You replied too soon. Read again (edited).

  4. EricMH: Dr. English is establishing the AIT conservation of randomness proof doesn’t apply to ASC,

    Could you expand on this, Eric?
    What do you mean by “AIT conservation of randomness proof”? (A text reference is fine as an answer.)

    Why don’t you think the proof is addressing ASC?

  5. Tom: Suppose we limit the domain script-X to finite bit strings. Is there any reason in that case to say c has to be an element of that domain?

    Here is my reason: I agree with you and many of the posters that we need to be able to provide a biological interpretation of ASC to understand whether it has any scientific relevance for the NS mechanism of evolution.

    But as I said in my post in your previous thread, I think that interpretation has to be based on populations of genomes and on how the relationship to the niche matters. For example, the bit string x could be the concatenation of actual genome bit strings, and the context C somehow captures the niche, and so would have nothing to do with the interpretation of the domain of x. (Of course, for infinite populations, C in the domain still applies by coincidence.)

  6. (Advertisement) A post by me will appear today at Panda’s Thumb expressing my skepticism that the ASC measure can be used to place any limits on adaptation or on the increase of fitness by natural selection. Basically: why is description by a simple program connected to fitness? It’s not, as far as I can see.

  7. Joe Felsenstein:
    (Advertisement) A post by me will appear today at Panda’s Thumb expressing my skepticism that the ASC measure can be used to place any limits on adaptation or on the increase of fitness by natural selection. Basically: why is description by a simple program connected to fitness? It’s not, as far as I can see.

    I look forward to the post. I assume you want replies at PT only to avoid having to deal with two different places to answer questions and concerns. I hope Eric will post there.

    By the way, my quick and dirty answer to your question about fitness and ASC would be this:

    if some bit string C=context somehow captures the niche of an organism

    and if the ASC of a bit string representation of one genome is greater than that of a second bit-stringed genome for that context,

    then one could argue that the first genome is fitter than the second for the niche represented by that context.

    Or at least so it seems to a newbie like me.

    So fitness is the “meaning” of the context. I use scare quotes around ‘meaning’ because I agree with Tom that Eric’s use of the term meaning to denote a difference in KC appears to have nothing to do with the usual understanding of meaning, e.g., in natural language or model theory.

  8. BruceS: Suppose we limit the domain script-X to finite bit strings. Is there any reason in that case to say c has to be an element of that domain?

    Sorry about the misunderstanding here. A string is a finite sequence, by definition. That is, \mathcal{X} = \{0, 1\}^* is the set of all finite sequences of 0s and 1s. There is a definition of K(x) for some infinite sequences x of 0s and 1s, namely, the computable numbers. But I have no idea of how to handle an infinite sequence c of 0s and 1s in K(x|c). I cannot recall seeing anything but a (finite) binary string c. But my not seeing anything doesn’t rule out the possibility of an infinite sequence.

    BruceS: But as I said in my post in your previous thread, I think that interpretation has to be based on populations of genomes and on how the relationship to the niche matters. For example, the bit string x could be the concatenation of actual genome bit strings, and the context C somehow captures the niche, and so would have nothing to do with the interpretation of the domain of x. (Of course, for infinite populations, C in the domain still applies by coincidence.)

    Sorry, I’m not following. Let me take a shot in the dark. Whatever the environment is, C has to be a binary encoding of it.

  9. Tom English: Sorry about the misunderstanding here. A string is a finite sequence, by definition.

    I’m mainly concerned with how your OP for this thread relates to Eric’s claims in section 4.2. However, I understand from your long reply in the other thread that you are planning a new OP that will address that in more detail.

  10. Tom English: Sorry, I’m not following. Let me take a shot in the dark. Whatever the environment is, C has to be a binary encoding of it

    Right, I mentioned that in my long post on this in the other thread.

    But I should start by explaining what I am trying to do. Based on that goal, you can then decide whether it is worth your time to reply to this post.

    Mainly as an intellectual exercise, I am trying to see if there is a way to capture evolution by the mechanism of NS using one of these three ASC formalisms and results.

    1. Levin’s result of conservation of mutual information. That’s the one Eric used in his earlier OPs here. To try to use that formalism to model NS, I believe you have to interpret the two bit strings x and y, and the function f, which cannot increase the mutual information of x and y.

    2. The hypothesis testing results in section 3 of Eric’s paper (which I think are a generalization of Milosavljevic’s work, although I need to study them both further to confirm that). For that case, we have the probability distributions p, q and the context C. I need to do more work to understand if and how C relates to the two distributions.

    3. Some version of Eric’s claims about conservation of ASC. Here we would need to interpret the x, f(x), p(x), and the context C.

    I believe the math of the first two is not in dispute. This is not true of the third, but one could try to interpret the formalism regardless.

    As was pointed out in the thread responding to Eric’s earlier attempt, and as Joe reiterates in his PT post, to capture NS using the formalism of any of the above, one must be able to model a population of genomes and how NS uses fitness in a niche when operating on that population.

    If it is possible to capture NS in one of the formalisms, then I think that would be a response to Joe’s post at PT. I am not saying that the resulting model would be of any practical use to biologists. I am only saying that it would show it is possible to provide a biological interpretation of some approaches to ASC which captures NS and its role in population genetics.

  11. I have updated the opening post.

    BruceS: I’m mainly concerned with how your OP for this thread relates to Eric’s claims in section 4.2. However, I understand from your long reply in the other thread that you are planning a new OP that will address that in more detail.

    The place to look first is the opening of Section 4 of the article by Nemati and Holloway. Eric says that there’s something wrong with my argument in Evo-Info 4 that data processing can result in an infinite increase in ASC, i.e., that the difference of A(f(x); c, p) and A(x; c, p) can be infinite. He has generalized the argument slightly, replacing my constant function f(x) = \lambda, where \lambda denotes the empty (length-zero) string, with the constant function f(x) = y, where y can be any binary string. (That’s why, in my original statement of the theorem in the opening post, I stipulated a constant function f(x) = y. I thought that it was best to go with a theorem that was clearly related to the opening of Section 4. Today I realized, while considering how to reply to your comment, that I should have started with the more general theorem, and then stated the less general theorem, with f a constant function, as a corollary.)

    The most important thing to understand is that Eric acknowledged, in his comment above, that the theorem is correct, but is now engaging in rhetorical shenanigans to conceal the fact that it refutes a claim that he made hither and yon in cyberspace. You asked him just the right question, in your response above. The reason that you did not get an answer is that it would have exposed what Eric wants desperately to hide. And the reason that Eric is desperate is that he has proclaimed, hither and yon, his great expertise in ID and algorithmic information theory. I do not relish the thought that I am threatening his self-image. And I tell you, in all sincerity, that it’s distressing to see that as Eric fights to preserve his self-image and his public image, his ethical core is evaporating. But I am not going to let up, inasmuch as the Discovery Institute (foolishly) has cast Eric as a public intellectual, and many more people than Eric himself are hurt by his crankery.

    You can see, in his (yters’s) comments on Joe Felsenstein’s post at The Panda’s Thumb, including the one quoted below, that Eric has begun constructing a false narrative in which it was not he, but instead I, who introduced the notion that ASC is conserved in the sense that algorithmic mutual information is conserved — what he has started calling “AIT non growth”:

    Eric Holloway commenting as yters at PT: You are making a simple logic error in trying to imply that since algorithmic mutual information is a special case of ASC that your AIT non growth result says something important and negative about ASC. But A implies B doesn’t mean proving something about B implies something about A. You are making a fallacy known as ‘affirming the consequent.’ Just because AIT non growth applies to B doesn’t mean it has to apply to A, and the fact it does not apply to A does not further imply there is no conservation theorem that applies to ASC. In fact there is such a conservation limit as I prove in our recent expected ASC paper in Bio Complexity. So, it is my turn to be flabbergasted that you keep going on about the AIT non growth as if it mattered for ASC, when you most certainly have the background to know better :).

    Some points in response:

    1. Eric has silently introduced a distinction of “AIT non growth” and the “non growth” that he establishes in Section 4.2. He has not made this distinction before, having previously said that the “law of information conservation (nongrowth),” from algorithmic information theory (AIT), held for ASC as for algorithmic mutual information. You can see at the end of the (familiar) annotated screenshot, below, that he believed that he had refuted, with his new “non growth” result, my refutation of what he now calls “AIT non growth.”

    2. In the screenshot, below, it is obvious that Eric defines fASC very differently from ASC. How great is the difference? In the example given at the beginning of Section 4, for which ASC is infinite, fASC is a negative number. Eric then derives an upper bound on fASC, and decorates the mathematical result with rhetoric regarding ASC. Nowhere does he prove any relation of fASC and ASC.

    3. In short, Eric does not want to admit to error. So he turns things around, and falsely attributes to me the claim that I have refuted his upper bound on fASC — a result that he egregiously mischaracterizes as “non growth of ASC.”

    4. My non-conservation theorems devastate the notion, articulated in the book by Marks, Dembski, and Ewert (Introduction to Evolutionary Informatics), that ASC is a measure of meaningful information. I otherwise would not have addressed non-conservation of ASC in part 4 of my response to the book.

    5. As far as I can see, my non-conservation theorems say nothing about the utility of ASC as a statistic. As I indicated in Evo-Info 4, a statistical interpretation of ASC makes sense to me. Three weeks after my post appeared, Bio-Complexity published an article in which George Montañez adopted a perspective similar to mine. In fact, George later added to his article citations of an important hypothesis-testing precedent, from 24 years ago, that I had discovered, and had discussed in my post.

    6. Section 2 of the article by Nemati and Holloway is an essay on why Eric feels deeply that ASC is a measure of meaningful information. It is probably based on his Ph.D. dissertation. Section 3 treats ASC not as a measure of meaningful information, but instead as a statistic. It is probably based on Nemati’s master’s thesis. Section 4 of the article, which is clearly due to Eric, and is framed as a response to the first of my non-conservation results, treats ASC in pure abstraction. However, non-conservation of ASC is irrelevant to Section 3, including the “conservation of expected ASC” result. Also, the upper bound for fASC, not ASC, that Eric derives in Section 4.2 is irrelevant to Section 3.

    7. In short, my non-conservation results, including the one in the OP, say “something important and negative about ASC” as a measure of meaningful information, not ASC as a statistic. They say nothing at all about Section 4.2, because that is where Eric derives a result for something that is very different from ASC, fASC.

  12. BruceS: The hypothesis testing results in section 3 of Eric’s paper (which I think are a generalization of Milosavljevic’s work, although I need to study them both further to confirm that).

    Not to discourage further study, but Section 3 addresses expected ASC, and Milosavljevic addresses algorithmic mutual information, not its expectation.

    BruceS: 3. Some version of Eric’s claims about conservation of ASC. Here we would need to interpret the x, f(x), p(x), and the context C.

    The “conservation of ASC” is just rhetoric that Eric has attached to a result for fASC, which is very different from ASC. No one has ever applied fASC. It serves no purpose but to provide Eric with a result that he can mischaracterize as “conservation of ASC.”

    BruceS: I believe the math of the first two is not in dispute. This is not true of the third, but one could try to interpret the formalism regardless.

    Why would you do anything at all with fASC? The “conservation of ASC,” which is nothing but an upper bound on fASC, is an almost-trivial consequence of the definition of fASC. Eric obviously built the upper bound into the definition of fASC, just so that he would have an upper bound that he could mischaracterize as “conservation of ASC.” Please do not give his crankery any credence.

    As for your first option:

    BruceS: 1. Levin’s result of conservation of mutual information. That’s the one Eric used in his earlier OPs here. To try to use that formalism to model NS, I believe you have to interpret the two bit strings x and y, and the function f, which cannot increase the mutual information of x and y.

    A gazillion people have answered the ID protestation “But where does the information come from!?” with “The environment!” There is conceivably some sense in relating changes in the frequency distribution of a population over fitnesses with changes in the algorithmic mutual information of a (binary encoded) population and its (binary encoded) environment. However, I want to say that, although natural selection is a (the only known) mechanism of adaptive evolution, I don’t believe that it’s correct to equate the fitness of a genotype with the adaptedness of the corresponding phenotype (or distribution of phenotypes, given that the genotype typically does not fully determine the phenotype) to the environment.

    If you model the environment explicitly, and your model lacks something that looks like feedback from the environment to the population, then I’d say that your model is categorically wrong. Perhaps Joe Felsenstein will disagree with me here. But, notwithstanding my inexpertise on the general topic, I’m fairly confident on this particular point. So if you see the mutual information of the population and environment increasing, the increase is attributable to interaction of the population with its environment. In terms used by Dembski and Marks, the evolutionary process poses queries with offspring, and the environment informs the process with its responses to the queries. However, if you take that literally, then you are reifying the model. (Note that Dembski and Marks go off the rails by asserting that evolution is a search for a prespecified target, and saying that the environment would not have the information to guide the search to the prespecified target unless there were an informed choice of environment. This, loosely speaking, is conservation of active information.)

    I’m pretty sure that I’ve seen a mathematician attempt to relate adaptation to increase in mutual information. He’s a very bright guy, but I cannot recall his name at the moment. I’ll leave it to you to do some googling.

  13. BruceS,

    One other thing. You have to understand that Eric believes that his ideas come from Jesus. You think that I’m being sarcastic? Here’s the first sentence of the acknowledgments section of his master’s thesis:

    Above all, I want to thank Jesus Christ for inspiring this thesis.

    He evidently feels no compunction about pronouncing on matters without reviewing the relevant literature. You’ll get gobs of hits if you use Google Scholar to search for papers on meaningful information or semantic information — even with restriction of search terms. For instance, I get 94 results in a search for the precise phrase “measure of semantic information”. The engineers in the ID crowd, including Eric, are ignoring a large volume of prior work. (Ironically, the few scientists in the ID movement engage mostly in critical reviews of the literature.)

    Have a look at the Google Scholar results for “mutual information” evolution adaptation environment. Read some papers. Please do not make Eric’s “inspired” speculations your starting point.

  14. Tom English: Not to discourage further study, but Section 3 addresses expected ASC, and Milosavljevic addresses algorithmic mutual information, not its expectation

    I am still struggling with the details of exactly how they do hypothesis testing, so it’s helpful to see both takes on it. They are working with OASC, of course, to do actual computations, so I am taking your point comparing the two papers as saying they use different test statistics based on AIT. But I have not finished studying the papers in order to confirm that thought.

    For OASC, I realized one needs to be familiar with how LZW compression works: specifically, the role of the dictionary and what text it is built from. So I needed to take a detour to re-learn that.
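
    In case it helps anyone else taking the same detour, here is a minimal sketch of LZW-style dictionary compression (illustrative only; the OASC computations in the paper presumably use a particular, tuned variant):

        def lzw_compress(text):
            # Minimal LZW: seed the dictionary with the single characters
            # of the text, then grow it with each new phrase encountered,
            # emitting one dictionary code per longest-known phrase.
            dictionary = {ch: i for i, ch in enumerate(sorted(set(text)))}
            phrase, codes = "", []
            for ch in text:
                candidate = phrase + ch
                if candidate in dictionary:
                    phrase = candidate                       # extend the match
                else:
                    codes.append(dictionary[phrase])
                    dictionary[candidate] = len(dictionary)  # learn new phrase
                    phrase = ch
            if phrase:
                codes.append(dictionary[phrase])
            return codes

        # A repetitive string compresses: 12 characters -> 6 codes.
        print(lzw_compress("abababababab"))

    The point relevant to OASC is the dictionary: what text it is built from determines how well later text compresses.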

  15. Tom English: Have a look at the Google Scholar results for “mutual information” evolution adaptation environment. Read some papers. Please do not make Eric’s “inspired” speculations your starting point.

    I agree there is lots of work in biology using information theory, although I believe it is almost all the Shannon version. For AIT work, I was a bit surprised Joe did not mention that Milosavljevic paper in his PT post as an example of how AIT could be used to do biology, even if AIT has not been adopted generally as Shannon info has.

    So I agree information theory is biologically useful. That’s why I made the point at PT that it was not the biological use of AIT that mattered for ID; it was whether the conservation laws IDists claim to have proven mathematically had any bearing on reality. The only way to know would be to do science, not math.*

    I have challenged Eric to try to show how his claimed math results could be used to build a biological model; I was wondering how I would meet that challenge. But after thinking about it more, I have decided I don’t know nearly enough biology to do something I’d be willing to ask actual biologists to look at. I do admire Eric for being willing to do that.

    For me, when ID proponents refer to semantics or meaning, I take them as inventing some new concept, and not referring to work in philosophy of language or model theory. Maybe such ID work fits into biosemiotics; I don’t know enough about that field to say.

    I avoid speculating on the motivations of ID proponents and just stick to the math. But I am a newcomer to these debates; I get the impression you have been at it for quite a while in which case I could understand how you could be frustrated at the stalemated state of the intellectual exchanges.
    —————————–
    * I believe Durston has done the most biologically-based work on ID; there is an exchange at PS on whether his work is biologically valid.

  16. Tom English: The “conservation of ASC” is just rhetoric that Eric has attached to a result for fASC, which is very different from ASC. No one has ever applied fASC. It serves no purpose but to provide Eric with a result that he can mischaracterize as “conservation of ASC

    I am confused by this. I understand the role of the function f in fASC as crucial to the claims of conservation. Specifically, conservation of ASC is the claim that for a deterministic function f,

    ASC(x, C, p, .) >= ASC(f(x), C, p, f), as per equations 42 and 43.

    For stochastic f, conservation is only claimed for expectations, as per equation 51.

    I am taking the Levin conservation of AIM as the model. According to Corollary 4.5 in Shannon Information and Kolmogorov Complexity (Grünwald and Vitányi), Levin’s result is

    I(x:y) > I(f(x):y) - K(f) (up to an additive constant, and ignoring other details like f having to be recursive).

    That is, the KMI of X, Y cannot be increased by a deterministic f. Further, a stochastic f can only increase KMI “with negligible probability”.

    Eric believes he has proved conceptually similar conservation results for ASC using a function f, except his stochastic case is only with respect to expectation.

    Again, I understand your OP in this thread as showing that Eric’s claim about deterministic f cannot be right, by showing how to generate counter-examples. And your images in the previous post’s thread are meant to indicate the problems with the proof regarding expectation.

    If the math were agreed to, then the next issue would be how conservation could apply to reality; of particular interest for ID would be applications in biological evolution involving NS.

  17. Tom English,

    If you model the environment explicitly, and your model lacks something that looks like feedback from the environment to the population, then I’d say that your model is categorically wrong. Perhaps Joe Felsenstein will disagree with me here. But, notwithstanding my inexpertise on the general topic, I’m fairly confident on this particular point. So if you see the mutual information of the population and environment increasing, the increase is attributable to interaction of the population with its environment. In terms used by Dembski and Marks, the evolutionary process poses queries with offspring, and the environment informs the process with its responses to the queries. However, if you take that literally, then you are reifying the model. (Note that Dembski and Marks go off the rails by asserting that evolution is a search for a prespecified target, and saying that the environment would not have the information to guide the search to the prespecified target unless there were an informed choice of environment. This, loosely speaking, is conservation of active information.)

    This is the challenge evolution faces: finding a target without the sequential functional information available to the algorithm. This would move the testability of the theory beyond microevolution. In my mind, I agree with Dembski et al. that the environment is too vague a target to direct the combination of sequences required for life.

    While the math may not supply a definitive proof of this hypothesis, it indicates that this problem, of the environment providing information, is real.

  18. colewd: This is the challenge evolution faces: finding a target without the sequential functional information available to the algorithm. This would move the testability of the theory beyond microevolution. In my mind, I agree with Dembski et al. that the environment is too vague a target to direct the combination of sequences required for life.

    There’s your mistake, Bill. There are no targets in evolution. Evolution is not a search.

  19. Alan Fox: There’s your mistake, Bill. There are no targets in evolution. Evolution is not a search.

    In fact, Marks, Dembski, and Ewert have backed away from the claim that biological evolution is search. Chapter 5 of Introduction to Evolutionary Informatics is “Conservation of Information in Computer Search” (emphasis added). The gist of what they’re saying now is that undirected computational evolution, which they reduce to the hoary Tierra (ignoring more-interesting systems that have come along since), does not “work,” and that computational models such as that in the Avida NAND experiment “work” only because programmers have rigged the systems to guide evolutionary “search” to “targets.” What they call active information is a measure of the effect of putative rigging of a system to produce a desired outcome.

    As I will try to explain in a future Evo-Info installment, Marks et al. look at the Avida-NAND software system from an engineering perspective, and see it as engineered to search for a target. However, scientists have not modeled evolution as search for a target in Avida-NAND. It generally is not possible to tell by reading the software implementing a model what the model is. In particular, the fact that the software signals when an event of interest (e.g., the emergence of an Avidian that computes the NAND function) occurs does not imply that the event is a “target” that the software was designed to “hit.”

    This stuff is very difficult to explain, because most non-scientists (including many highly educated engineers) do not understand scientific modeling. You may remember when UD put out a call for the source code (program) for Dawkins’s monkey/Shakespeare model of cumulative selection (which he never referred to as the Weasel program), and Dembski exulted at getting two programs from “Oxfordensis.” The whole affair was gobsmackingly stupid. But I understand entirely why it would seem otherwise to someone, e.g., Bill Cole, who does not understand that the software is not itself the model.

  20. Tom English: The “conservation of ASC” is just rhetoric that Eric has attached to a result for fASC, which is very different from ASC. No one has ever applied fASC. It serves no purpose but to provide Eric with a result that he can mischaracterize as “conservation of ASC.”

    BruceS: I am confused by this. I understand the role of the function f in fASC as crucial to the claims of conservation. Specifically, conservation of ASC is the claim that for a deterministic function f,

    ASC(x, C, p, .) >= ASC(f(x), C, p, f), as per equations 42 and 43.

    Congratulations! You’ve out-Hollowayed Holloway.

    1. You rhetorically conflate ASC and fASC, just as Holloway does in the screenshot attached to a comment above.
    2. You go a step further in conflation than Holloway ever has, silently dropping the f from fASC in the inequality you write.
    3. You state flatly that “conservation of ASC is the claim that…,” and then make a claim that is obviously not the claim in Holloway’s article. See the attached screenshot.

    Seriously, what you’ve posted looks like Holloway’s next lie about what he wrote in his article.

  21. Tom English: Seriously, what you’ve posted looks like Holloway

    Then perhaps I succeeded in channeling his thought while he wrote section 4 of the paper.

    In case I was not clear in presenting the aim of my previous post: I was trying to understand what Eric thinks he is claiming, and then to understand your critique in light of that understanding of Eric’s goals. I am not saying his arguments succeed or that his notation represents his goals well. I am not trying to say he is right.

    In summary: I understand Eric is using the function f in his work in a conceptually similar way to how f is used in the Levin result as presented by Grünwald & Vitányi. That paper calls the result “non-growth of information” whereas Eric uses “conservation”, but I think Eric is aiming at the same concept.

    In section 4, Eric works through a set of generalizations for the function f operating on the bit string x: he starts with a deterministic f, then generalizes to the stochastic case for f and fixed x, then generalizes again to the expectation over the whole distribution p of X with stochastic f.

    Under that interpretation of the paper, I take you as criticizing both the notation and the arguments that support his claims.

    Your critique of the deterministic f is the counter-examples in the OP of this thread. For stochastic f, you point out in the other thread that fASC(x, C, p, f) is not ASC: the notation, and more importantly the later argument, refer to the distribution p of the untransformed x, not the distribution L of the transformed x, which is what ASC would need.

    Given my understanding of Eric’s use of ‘conservation’, I recognize that the title of 3.2 is not consistent with Eric’s use of the word.

    I welcome Eric’s feedback on whether I have understood his goals correctly.
