Eric Holloway needs our help (new post at Panda’s Thumb)

Just a note that I have put up a new post at Panda’s Thumb in response to a post by Eric Holloway at the Discovery Institute’s new blog Mind Matters. Holloway declares that critics have totally failed to refute William Dembski’s use of Complex Specified Information to diagnose Design. At PT, I argue in detail that this is an exactly backwards reading of the outcome of the argument.

Commenters can post there, or here — I will try to keep track of both.

There has also been a discussion of Holloway’s argument, by Holloway and others, at Uncommon Descent (links in the PT post). gpuccio comments there as well, trying to get someone to call my attention to an argument about Complex Functional Information that gpuccio made earlier in that discussion. I will try to post a response to that here soon, separate from this thread.

145 Replies to “Eric Holloway needs our help (new post at Panda’s Thumb)”

  1. J-Mac
    says:

    Rumraket: I don’t believe you. And I don’t believe you are capable of arguing the truth of that claim.

    Who gives a damn what you believe?!

    The truth??? From you??? As you see it???
    It’s a good one… You actually made me smile…

  2. Tom English
    says:

    Corneel: So what I’ve gathered so far is that “information” is what IDers use to avoid talking about adaptation and fitness.

    Watching ID proponents in apologetics videos is one of the best ways to learn what the ID movement is attempting to do. The brief video I linked to above is the most expositive of all that I’ve watched. Stephen Meyer identifies the crucial components of his argument in about two minutes. The following link takes you 60 seconds into the video, when Meyer is just getting into the meaty part.

    https://youtu.be/r5gJW5CMEC8?t=60

    It takes only 18 seconds for Meyer to go from Lyell’s uniformitarianism, which he is co-opting (with unacknowledged modification), to his main point: “Information always comes from an intelligence.” As you can see in the screenshot attached to this comment, the emphasis on always comes from the video. In the video, the “interviewer,” who says hardly anything, jumps in at this point to add verbal emphasis to always. Note that Meyer later shifts from an intelligence to a mind. I haven’t checked this closely, but I believe that you can put mind in place of intelligence throughout ID writings without changing the meaning.

    Back to your comment… The primary use of information in ID rhetoric is to implicate minds. Above, Mung stated a point that I’ve been making for years, namely, that the technical arguments of ID proponents can be stated strictly in terms of probability. I have demonstrated that the arguments are actually simpler and clearer when “-log” is not prepended to “p.” What I have said, and Mung has not said, is that the “information” talk is gratuitous.

    I predict that the principal ID activists — the ones overtly affiliated with the Discovery Institute — will never acknowledge that log improbability is not itself information. They know at this point that I’m right on the matter. But they will never do anything to undercut the “information” rhetoric, in which the ID movement has invested hugely.

  3. Neil Rickert
    says:

    Tom English: “Information always comes from an intelligence.”

    I don’t much object to that part. It fits with what I mean by “information.”

    But there’s the real problem. The word “information” has many different meanings. The information argument from ID depends on equivocation between different meanings. It is all sleight of hand, much like so many other apologetics arguments.

  4. Tom English
    says:

    Continuing my last comment, and returning to Eric Holloway and the Walter Bradley Center for Natural and Artificial Intelligence…

    To make their “Information always comes from an intelligence [mind]” argument work as they want, ID activists must argue also that a machine cannot be an intelligence. Any seeming intelligence on the part of a machine must ultimately come from its designer. Machines cannot create information. That, as a practical matter, is the meaning of “conservation of information in search.”

  5. Tom English
    says:

    Tom English quoting Meyer: “Information always comes from an intelligence.”

    Neil Rickert: I don’t much object to that part. It fits with what I mean by “information.”

    That’s fine, as long as you’re coherent and honest. Disguising creationist arguments from improbability by applying the logarithm function is dishonest, and has in fact led to incoherence in ID claims.

    Neil Rickert: But there’s the real problem. The word “information” has many different meanings. The information argument from ID depends on equivocation between different meanings. It is all sleight of hand, much like so many other apologetics arguments.

    Agreed.

  6. Tom English
    says:

    Joe Felsenstein: (An exception to all this is algorithmic information theory as invoked by Dembski, by Ewert, and by Holloway. I hope to start a thread on this soon — why would we think that computability by a short algorithm would identify design?)

    If you want to do that soon, then I should make a priority of wrapping up “Evo-Info 4: Meaningless Meaning” (title tentative). I think you’ll want to see what I have to say before proceeding. There’s a fair amount that we’ve never discussed. For instance, the one relevant theorem of Marks et al., “algorithmic specified complexity is rare,” was derived by someone else, about 25 years ago. It’s essentially an application of Markov’s inequality. 😉
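
    For anyone who wants the shape of that result, here is a minimal sketch in my own notation (assuming the usual definition \mathrm{ASC}(x) = -\log_2 p(x) - K(x|C); I am not claiming this is exactly how Marks et al. state it). Markov’s inequality, followed by Kraft’s inequality, gives

    \Pr_p[\mathrm{ASC}(X) \ge \alpha] = \Pr_p\!\left[\frac{2^{-K(X|C)}}{p(X)} \ge 2^{\alpha}\right] \le 2^{-\alpha}\,\mathbb{E}_p\!\left[\frac{2^{-K(X|C)}}{p(X)}\right] = 2^{-\alpha}\sum_x 2^{-K(x|C)} \le 2^{-\alpha}.

    That is, high algorithmic specified complexity is improbable, which is the “rare” claim.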

  7. Neil Rickert
    says:

    Moved a post to guano.

  8. Corneel
    says:

    Tom English: The primary use of information in ID rhetoric is to implicate minds.

    That seems to fit with the way I have seen the word information used, such as the enormous emphasis on creating new information, whereas simply copying and modifying what already exists doesn’t count. Is “creating” a thing that only persons can do? Is something only new when it is the result of a creative process? Paying attention to these things really helps in understanding the arguments.

  9. BruceS
    says:

    Tomato Addict:
    Far from preventing information being gained, it tells us exactly how to gain that information – by conditioning on the Sufficient statistic.

    Fisher information?

  10. Tom English
    says:

    Corneel: That seems to fit with the way I have seen the word information used, such as the enormous emphasis on creating new information, whereas simply copying and modifying what already exists doesn’t count. Is “creating” a thing that only persons can do? Is something only new when it is the result of a creative process? Paying attention to these things really helps in understanding the arguments.

    Yeah, people have yapped for decades about this stuff without clear definitions. Everybody is supposed to “just know” the meanings of terms like create and new information, but I honestly don’t know them. I can think of many possible meanings. And the fact that there are so many possible meanings is precisely the problem.

    It seems that where Bob Marks would say that he created something, I would say that I found a solution to a previously unsolved problem. I did not adopt my language in opposition to ID. Never in my life have I thought of what I did in terms of creation. It was always discovery. (I say that as someone whose first master’s thesis was a collection of 40 poems, along with a poetics. It felt to me as though the words I needed were “out there,” to be discovered.) Funny thing is, my discovery of something is the same irrespective of whether I’m actually the first to have discovered it. Something I came up with in the research for my second master’s thesis turned out to have been developed by Alan Turing for use in deciphering Enigma-enciphered messages. Did I create old information instead of new information? 😉

  11. BruceS
    says:

    Tom English:

    Neil Rickert: But there’s the real problem. The word “information” has many different meanings. The information argument from ID depends on equivocation between different meanings.

    Tom English: Agreed.

    Tom (or anyone): Does the same apply to the phrase “Law of Conservation of Information”?

    I see how one might apply that name to Levin’s mathematical result regarding Kolmogorov complexity and the related definition of mutual information. For a different meaning, there is the conservation of quantum information in fundamental physics.
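
    (For concreteness, the Levin result I have in mind is, roughly and up to additive constants: no total computable transformation f can increase algorithmic mutual information by more than the complexity of f itself, i.e. I(f(x):y) \le I(x:y) + K(f) + O(1), where I(x:y) = K(y) - K(y \mid x). I may be glossing details here.)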

    But is that phrase used anywhere in consensus biology, and in particular in evolutionary models?

  12. Neil Rickert
    says:

    BruceS: Tom (or anyone): Does the same apply to the phrase “Law of Conservation of Information”?

    Definitely.

    Conservation of information might make sense in terms of algorithmic information theory. But it makes no sense if we are talking about Shannon information, where we can generate as much new information as we want just because we decide to communicate.

  13. Joe Felsenstein
    says:

    Years ago Peter Medawar used the term in his book The Limits of Science. His argument was that if you applied a 1-1 transformation to information, none would be lost. Which is fairly obvious, since you can always apply the inverse of the transformation and get back to where you started. (Think lossless compression).
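
    (In Shannon terms the point can be put in one line: if f is one-to-one on the outcomes, then f merely relabels them without merging any probabilities, so H(f(X)) = -\sum_i p_i \log p_i = H(X).)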

    One other person used it: in a paper on algorithmics of search in 1996. Guy by the name of Tom English. I’ll leave him to tell you about that.

  14. colewd
    says:

    Neil Rickert,

    The information argument from ID depends on equivocation between different meanings. It is all sleight of hand, much like so many other apologetics arguments.

    I see the same issue on the evolution side. The question is what mechanism can consistently generate symbols that translate to a diverse set of functions. We know a computer programmer can. The diversity of life requires this.

  15. Neil Rickert
    says:

    colewd: I see the same issue on the evolution side. The question is what mechanism can consistently generate symbols that translate to a diverse set of functions.

    I’m not seeing that. It’s the ID folk who try to describe everything in terms of symbols. As far as I can tell, evolutionary biologists are concerned with traits, behaviors (behavioral traits), adaptation and the like. Yes, they may mention DNA, but they see it more as templates than as symbols.

  16. colewd
    says:

    Neil Rickert,

    I’m not seeing that.

    This is what Joshua is doing with his “cancer can create information” hypothesis.

  17. Mung
    says:

    Tom English: (Yes, Mung Mung, I just made a design inference. And I did it without CSI or FIASCO calculations.)

    I don’t think I have this book by Quastler. I think I do have the one from 1953:

    Essays on the Use of Information Theory in Biology

  18. Mung
    says:

    BruceS: Just wanted to make sure you noticed Tom’s post defining “pointwise entropy”. So add that term to ‘surprise’, ‘surprisal’, ‘information gain’, ‘information’, and just plain ‘entropy’.

    Oh yes, I noticed. 🙂

    Gave me a new term to google. Heh.

    https://arxiv.org/pdf/1602.05063.pdf

  19. Neil Rickert
    says:

    colewd: This is what Joshua is doing with his “cancer can create information” hypothesis.

    He is trying to answer the ID folk on their own terms. I doubt that his own biology research is done in terms of symbols.

  20. Mung
    says:

    Tom English: 1. It reminds you that the expression has been pulled out of a larger expression, and that the larger expression is of primary interest.

    Yes. I kept trying to get them to understand that H applies to “the larger expression” (the distribution as a whole).

    Joshua Swamidass has this thing where he likes to claim that entropy = information, and that appears to be based on his claim that -log p = H. It appeared to me that he meant -log p_i = H and was simply typing in haste.

    I tried to show why -log p_i is not equal to H.

  21. Mung
    says:

    BruceS: He believes that mutual information between people and the universe explains Wigner’s unreasonable effectiveness of math claim.

    I always fall back on what are the probability distributions. That may or may not be the right question to ask. I’ve just now started reading up on Kolmogorov Complexity and AIT.

  22. Mung
    says:

    Tom English: Above, Mung stated a point that I’ve been making for years, namely, that the technical arguments of ID proponents can be stated strictly in terms of probability. I have demonstrated that the arguments are actually simpler and clearer when “-log” is not prepended to “p.” What I have said, and Mung has not said, is that the “information” talk is gratuitous.

    Well, that’s a tough one. My interest primarily lies in seeking clarity so that the sides are not talking past each other and so that a layperson such as myself can understand just what is being asserted. Information in biology has a long history that pre-dates ID.

    An example is in trying to get people to distinguish between what is said to be a measure of information and what is actually being measured. Some people think that Shannon gave us a mathematical definition of information. I’m not convinced that he did.

    Over at PS I argued that “surprisal” was unsatisfactory because I am no more surprised when snake eyes is rolled than I am when a seven is rolled because I know the underlying distribution. 🙂

    ETA: Oh, and I prefer the term “entropy” to “information” because that’s what Shannon called it and that’s what it’s called in the books. So yes, I do think we can dispense with a lot of the “information” talk.

  23. Tom English
    says:

    Mung: I tried to show why -log pi is not equal to H.

    The only “reason why” is the definition H(p) = -\sum_i p_i \log p_i, which plainly is not -\!\log p_i. Or are you trying to explain Shannon’s rationale for defining entropy as he did, in the context of communication? My recollection is that Shannon gave a clear explanation in his original paper. However, I don’t recall whether there’s a brief passage that would be appropriate for copy-and-paste.
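
    If a toy calculation helps make the distinction vivid (nothing here is specific to anyone’s argument; it is just the definitions evaluated on a made-up distribution):

        from math import log2

        # A toy distribution over four outcomes; probabilities sum to 1.
        p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

        # Pointwise entropy ("surprisal") -log2 p_i: one value per outcome.
        surprisal = {x: -log2(px) for x, px in p.items()}

        # Shannon entropy H(p) = -sum_i p_i log2 p_i: a single number for the
        # whole distribution, the probability-weighted average of the above.
        H = -sum(px * log2(px) for px in p.values())

        print(surprisal)  # {'a': 1.0, 'b': 2.0, 'c': 3.0, 'd': 3.0}
        print(H)          # 1.75

    The pointwise values vary from outcome to outcome; H does not.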

  24. Joe Felsenstein
    says:

    Mung: So yes, I do think we can dispense with a lot of the “information” talk.

    When asking about the logic of Dembski’s argument, we can omit all of it, and just talk about the probability of getting this good an adaptation (in some fitness-related sense of goodness).

  25. Tom English
    says:

    Joe Felsenstein: One other person used it [the term “conservation of information”]: in a paper on algorithmics of search in 1996. Guy by the name of Tom English. I’ll leave him to tell you about that.

    The most embarrassing of all my errors. I put a lot of work into figuring out exactly how I went wrong, and into correcting my error. It was a character-building experience. I learned that there is dignity in admitting to mistakes, precisely because the process is painful. Pretending that you do not make mistakes is undignified.

    A key point is that the search is on the part of the entity selecting an algorithm. The selected algorithm merely samples. A sampler begins with no information [EDIT: about unsampled points], and does not gain information about unsampled points by processing sampled points. Hence a sampling process is itself utterly uninformed, and my talk about “conservation of information” was at best misleading. See my blog post “Sampling Bias Is Not Information.”

  26. Tom English
    says:

    Neil Rickert: He [Joshua Swamidass] is trying to answer the ID folk on their own terms. I doubt that his own biology research is done in terms of symbols.

    I think his Ph.D. is in bioinformatics. If that’s right, then he has done a lot in terms of symbols. But that doesn’t mean that the referents of the symbols are themselves symbols.

  27. colewd
    says:

    Neil Rickert,

    He is trying to answer the ID folk on their own terms. I doubt that his own biology research is done in terms of symbols.

    His field is computational biology. All his work is done with symbols 🙂

  28. Tom English
    says:

    Mung: Gave me a new term to google. Heh.

    https://arxiv.org/pdf/1602.05063.pdf

    Dated 2017. I hope that’s a sign that the “pointwise” usage is catching on. I think it goes some way toward preventing confusion that is clearly a problem. The “oh, we know what we mean, so the term doesn’t matter” response flies in the face of experience.

  29. Tom English
    says:

    colewd: His field is computational biology. All his work is done with symbols 🙂

    The tool used in investigation is not itself the object of investigation. 🙂

  30. Tom English
    says:

    Mung: I’ve just now started reading up on Kolmogorov Complexity and AIT.

    The source that Bruce linked to, up the thread, is my favorite. I presently have it open in a tab: “Algorithmic Information Theory.” It makes important links between classical (statistical) information, defined on distributions, and algorithmic information, defined on strings of symbols. (Note that some care is required in extending algorithmic complexity to discrete objects that are not themselves binary strings, and that Marks, Dembski, and Ewert do not take the care that they should in defining algorithmic specified complexity.)

  31. BruceS
    says:

    colewd:

    The question is what mechanism can consistently generate symbols that translate to a diverse set of functions. We know a computer programmer can.

    I see danger with the use of the term ‘symbol’, because it implies semantic information to me. But that’s not what biology is using; it uses Shannon information as far as I know.

    So when using the term ‘symbol’, one has to be very careful to limit any implications of semantics to clearly specified mathematics and science, which do not need humans to determine meaning.

    For example, if one speaks of a genetic code, then it is important to recognize that this is just a short form for certain facts of biochemistry along with certain cell mechanisms, and is not meant to imply code in the sense that semantics is used for human language.

    Switching topics: You say “generate symbols that translate to a diverse set of functions”. I note that functional information as originally introduced by Szostak for biochemistry is the opposite of what you wrote, at least in the following sense: Shannon info can remove the redundancies of a genetic sequence treated as a meaningless string of symbols, but alone it is insufficient to deal with biological redundancies resulting from the fact that “different sequences can encode RNA and protein molecules with different structures but similar functions.”

    To deal with such redundancies, Hazen proposes to use the -log2 of “the probability that a random sequence will encode a molecule with greater than any given degree of function.” To make the approach apply to a specific situation, one has to specify the function of interest and then empirically estimate the number of sequences which have that function (the example function “bind ATP” is used in his paper).

    So one could say that functional information translates a diverse set of symbols into a single biological function.
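
    A toy sketch of the arithmetic only (the numbers are invented; the hard, empirical part is estimating the functional fraction for a real function such as ATP binding):

        from math import log2

        def functional_information(n_functional: int, n_total: int) -> float:
            """Hazen/Szostak-style functional information in bits:
            -log2 of the fraction of sequences meeting the chosen
            degree of function."""
            return -log2(n_functional / n_total)

        # Hypothetical figure: suppose 1 in 10**11 random sequences of some
        # fixed length performs the function above the chosen threshold.
        print(functional_information(1, 10**11))  # about 36.5 bits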

  32. BruceS
    says:

    Mung: So yes, I do think we can dispense with a lot of the “information” talk.

    Go on then. I double dog dare you not to post any more posts with that word.

  33. Tom English
    says:

    Mung: An example is in trying to get people to distinguish between what is said to be a measure of information and what is actually being measured. Some people think that Shannon gave us a mathematical definition of information. I’m not convinced that he did.

    Of course Shannon gave a mathematical definition of (mutual) information. The mistake is in thinking that such a definition would tell us what information “really is.” There’s a long history of appeals to Mathematical Truth in creationism, i.e., to establish what is true in physical reality. That reflects a sad misunderstanding of how math works. (I know that Neil is with me on this.)

    The key point, I think, is that there is no physical measurement of probability. The reason that we have probability measures in mathematics is that Kolmogorov adapted measure theory to probability theory. The “measure” in measure theory is an analogy. Thinking in terms of physical measurement will help you in acquiring an understanding of the math — early in the process, anyway. But when a probability measure or an information measure is part of a scientific model, there’s no implication that probability or information is itself physically measurable.

  34. Tom English
    says:

    BruceS: I double dog dare you

    Oh, no, not that!

  35. colewd
    says:

    BruceS,

    So one could say that functional information translates a diverse set of symbols into a single biological function.

    If you tried to apply the function of your skin cells dividing to a diverse set of symbols you would struggle.

  36. Neil Rickert
    says:

    colewd: All his work is done with symbols

    Many biologists do their work with symbols. But their work is not about alleged symbols in biological systems.

  37. BruceS
    says:

    Mung:
    https://www.informatics.indiana.edu/rocha/publications/pattee/

    As Martin so wisely put it, I’m aware of his work.

    Although admittedly not to any great level of detail.

    However, I believe nothing in his work contradicts the point I was trying to make.

    Am I missing something? If I am, and you have a quote to prove it, it would be great if you could provide a context for that quote in your words, and not just a cite. If you do, I am willing to ignore up to three uses of that word which must not be named.

  38. BruceS
    says:

    colewd:
    BruceS,

    If you tried to apply the function of your skin cells dividing to a diverse set of symbols you would struggle.

    No doubt. But I would struggle with trying to explain just about anything biological.

    On the other hand, if you asked any biologist with the relevant expertise, I am sure she or he could provide an explanation involving biochemistry and mechanisms. And if you wanted to push the concept of function versus mis-function or lack of function, then a philosopher of biology could complete any missing details of the conceptual analysis to naturalize the norms involved without regression to an intelligent observer.

  39. BruceS
    says:

    Mung:

    Some people think that Shannon gave us a mathematical definition of information. I’m not convinced that he did.

    That’s just the Platonist in you, as I posted at PS. But the right way out of the cave is math, not dictionarianism.

    You remind me of KeithS, who I assume is lost on the hiking trails of the UK (soon just England), searching for his TSZ redeemer.

    Why do I draw that comparison with KS? Because you seem to share his theistic view of concepts as things which exist and are explainable from some God’s-eye view. (I’m stealing and possibly misunderstanding Neil’s critique).

    Of course, in your case, I believe the theistic part is fine by you.

  40. Mung
    says:

    BruceS: Of course, in your case, I believe the theistic part is fine by you.

    Not sure you understood my comment. 🙂

    Let me try it this way. Was Shannon seeking to create a mathematical definition of information or was he trying to create a “measure” of “an amount of information” where what information is remains as vague and undefined after Shannon as it was before Shannon?

    You could be right that I do not yet grasp what constitutes a definition in mathematics so help along those lines would be appreciated.

  41. Neil Rickert
    says:

    Mung: Let me try it this way. Was Shannon seeking to create a mathematical definition of information or was he trying to create a “measure” of “an amount of information” where what information is remains as vague and undefined after Shannon as it was before Shannon?

    Neither, I would think.

    Shannon was giving a mathematical theory of information, which is not at all the same as a definition of information.

    Compare with the theory of sets (from mathematics). The theory does not tell us what a set is. It does define or specify the operations (mathematical operations) that we are allowed to use on sets. But it never tells us what a set is.

    Similarly, Shannon’s theory never tells us what information is. But it does tell us the kind of things we can do with information (in terms of mathematics).

    Yes, Shannon did define a way of measuring information, although he did not define what it was that we would be measuring. But, at least in my opinion, the measurement of information was not of central importance. He did call it a theory of communication.

  42. colewd
    says:

    BruceS,

    I see danger with the use of the term ‘symbol’, because it implies semantic information to me. But that’s not what biology is using; it uses Shannon information as far as I know.

    Later in this post you mention Hazen and Szostak functional information. Do you consider Shannon information and functional information the same?

  43. BruceS
    says:

    colewd:
    BruceS,

    Later in this post you mention Hazen and Szostak functional information. Do you consider Shannon information and functional information the same?

    No, they are not the same in general.

    But you may find certain empirical cases where you can make assumptions about the probability of certain physical situations and also take into account what we know about protein-related genome sequences, and then show a relation in that particular case.

    As I recall, Swamidass provides such an example and explains it in his replies to English in the comments section of his OP at TSZ last year. But I am going by memory on that statement.

    Possibly the cancer function stuff at PS can be looked at that way too, since it involves both function and Shannon MI. However, I have been simply assuming that for Swamidass in that case, ‘function’ meant a protein with a biological function (e.g., to drive the survival and proliferation of cancer cells that possessed the protein in the body environment).

  44. Tomato Addict
    says:

    BruceS,

    That too, I think, but I was making a comparison between Levin’s Information Non-Growth and Rao-Blackwell. RB shows us how to find a minimum-variance estimator. If I am correct that ING serves the same purpose for algorithmic statistics, then conditioning on the algorithmic sufficient statistic shows how to gain the algorithmic information (which Holloway claims cannot be gained).

    IOW: I think Holloway is misinterpreting the meaning of Information Non-Growth.
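
    For reference, the classical statement I am comparing to (standard Rao-Blackwell, nothing novel): if T is a sufficient statistic for \theta and \hat{\theta} is any estimator, then the conditioned estimator \tilde{\theta} = \mathbb{E}[\hat{\theta} \mid T] satisfies \mathbb{E}[(\tilde{\theta} - \theta)^2] \le \mathbb{E}[(\hat{\theta} - \theta)^2]. Conditioning on the sufficient statistic never makes the estimator worse, and typically improves it.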
