Eric Holloway needs our help (new post at Panda’s Thumb)

Just a note that I have put up a new post at Panda’s Thumb in response to a post by Eric Holloway at the Discovery Institute’s new blog Mind Matters. Holloway declares that critics have totally failed to refute William Dembski’s use of Complex Specified Information to diagnose Design. At PT, I argue in detail that this is an exactly backwards reading of the outcome of the argument.

Commenters can post there, or here — I will try to keep track of both.

There has been a discussion of Holloway’s argument by Holloway and others at Uncommon Descent as well (links in the PT post). gpuccio also comments there trying to get someone to call my attention to an argument about Complex Functional Information that gpuccio made in the discussion of that earlier. I will try to post a response on that here soon, separate from this thread.

334 thoughts on “Eric Holloway needs our help (new post at Panda’s Thumb)”

  1. It looks like the “speculative science experts” are having a blast here…
    I love watching people comment who think that their speculations are as good as facts… All Joe F needs is some assumption and then he can pretend that it is science…
    ETA: Didn’t I ask again and again to experimentally test those speculations?
    I guess nobody wants to be proven wrong… Joe F especially…

  2. BruceS: The issues for me are whether that math has anything to do with biological evolution.

    Of course, I don’t object to that. I’d read Part (Roman numeral) I of Levin’s paper, and there were things that simply jumped out at me when I looked at Holloway’s spiel.

    BruceS: Well, it depends what you mean by an algorithmic universe. I don’t think the universe computes. But I do think the many worlds interpretation of QM is the best, and it says the universal wave function is deterministic. (So does Bohmian interpretation, FWIW). So I think the universe can be explained deterministically.

    Note that if the universe were somehow shown to be indeterministic (I don’t see how we would ever rule out the possibility that indeterminism is merely apparent), the conclusion would be that your interpretation of QM is wrong, not that the supernatural is somehow manifest in nature (“naturalism is false”). No physicist has ever suggested that an indeterministic universe is not natural. Nor has any physicist ever suggested that a continuous-time universe is not natural. But Bartlett and Holloway are saying that naturalism entails a universe that is discrete and deterministic when they say that naturalism entails an algorithmic universe, whichever of several sensible meanings they give to “algorithmic universe.”

    I in fact have a pretty good idea of the sense in which Holloway regards the Universe to be algorithmic, having worked on a response to his talk “Imagination Sampling,” and having read Bartlett’s “Using Turing Oracles in Cognitive Models of Problem-Solving.” (My characterization of these guys’ claims may seem like a hip shot to you, but it’s not. I’ve dug into their work, and simply haven’t gotten around to addressing it in detail. Stuff from worthier adversaries keeps popping up, so I’ll perhaps never finish my OP addressing Holloway’s talk.)

    The gist of the following is that a discrete, deterministic Universe is not necessarily algorithmic. The model is at least close to, if not exactly, what Holloway is talking about.

    Say that the state of the universe at time t+1 is \omega_{t+1} = f(\omega_t), where the transition function f is defined on the countably infinite set \Omega of possible states. This means that the state of the universe at the next discrete step in time is determined by its present state. Let us say, for simplicity, that \Omega is the set of non-negative integers. The set of all functions on \Omega is uncountable, while the set of all Turing-computable functions on \Omega is countable. Hence there exists a possible universe that is discrete and deterministic, but with a state transition function that is not Turing-computable.

    There are other ways to go at this argument. The crucial point is that there are more ways for a discrete, deterministic universe to go than there are algorithms (finite sequences over a finite set of symbols) to describe how it goes. I’ve just hinted at the fact that we really don’t need to worry about these algorithm thingies at all. For any set of objects, only countably many of the objects are describable. When we talk about an algorithmic universe, we’re talking about a particular kind of description of timelines. Does naturalism entail describability of the Universe? The notion is preposterous. If we were to establish somehow that the Universe is indescribable, we obviously would not be forced to reject naturalism. The issue would be one of epistemology, not ontology.
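    To fill in the diagonal step behind that cardinality claim (standard textbook material, added here as an illustration): let f_0, f_1, f_2, \ldots be any enumeration of the total Turing-computable functions on \Omega (there are only countably many programs, hence countably many such functions), and define

        \[g(\omega) = f_\omega(\omega) + 1.\]

    Then g is a perfectly well-defined deterministic transition function, yet g \neq f_n for every n, because g(n) \neq f_n(n). A discrete, deterministic universe governed by g would not be algorithmic.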

    BruceS: (ETA: I’m only referring to explanations at the level of fundamental physics. Stochastic models reflecting our ignorance are needed at likely other levels of science.)

    It’s good for you to spell that out. But I wouldn’t have supposed anything else, given the way you were expressing yourself.

    BruceS: The algorithmic issue only comes up in the work being discussed if you accept that biological evolution cannot create new information.

    I’m not aware of any biologist having claimed that evolution creates “new information” (whatever that is) out of nothing. Biologists are keen on having evolution abide by the laws of physics (whatever that means), and I find it hard to believe that any of them would take a stand against conservation of information (also called conservation of probability) in quantum mechanics. The “creation of new information” claim is something that the ID movement synthesized from parts of Dawkins’s The Blind Watchmaker. They want the issue to be information, so they put in the mouths of their adversaries claims about information that their adversaries never made. I’ve pondered this matter a whole bunch, and have done a lot of googling, being the English whom Dembski and Marks tagged with “English’s Principle of Conservation of Information.” I’m pretty sure that they found “creation of information” nowhere, and read it into TBW.

    BruceS: The blog post cites a Barlett paper which references work on hypercomputation by Copeland, who has suggested it as a possible way minds could avoid being simulatable by TMs. The cited Bartlett paper itself cites some work which references Penrose’s stuff on Godel showing human minds are not simulatable by TMs (a view which is of course widely disparaged).

    Yeah, but none of those folks challenging algorithmic theories of mind is challenging naturalism. Naturalism does not entail an algorithmic theory of mind. That’s just shit that Bartlett blew out his ass, and that Holloway ate up with a spoon. It’s in the same general class as “Darwinism entails creation of information out of nothing,” but it’s vastly more ridiculous.

    Thanks for the comment. It went a long way to help me get said some things that needed saying.

  3. In regard to conservation of information and the “creation of new information”, Dembski’s original argument does not need to be phrased in terms of information. As Tom has mentioned repeatedly here, CSI is just a statement that the probability of getting something that is this good or better (on a relevant goodness scale) is less than 10^{-150}. There is no need to take the logarithm and call the result “information” or “specified information”.

    This may distress those who want to connect all this with some grand theological viewpoint, but for assessing whether it can be ruled out that some existing adaptations could have been produced by natural selection, it is good enough.

    So figuring out what “really is information” or “really isn’t information” is irrelevant to assessing Dembski’s argument. And making statements about mutual information does not automatically tell us which of those statements are useful to biologists.
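    Joe’s restatement is easy to make concrete: whether or not you take the logarithm, the 10^{-150} bound is the familiar “500 bits” threshold (a minimal sketch; the probability value is the one from the comment):

```python
import math

p_bound = 1e-150            # Dembski's universal probability bound
bits = -math.log2(p_bound)  # the same bound expressed on a log ("information") scale
print(bits)  # about 498.3, the oft-quoted "500 bits"
```

    Nothing in the calculation requires calling the result “information”; it is the same probability statement either way.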

  4. Neil Rickert,

    We see Durston wanting to impose his own external requirements. In the meantime, the cancer is doing what cancers do.

    Can you elaborate on how he is doing this?

  5. Joe Felsenstein:

    So figuring out what “really is information” or “really isn’t information” is irrelevant to assessing Dembski’s argument.

    Thanks.

    My post was somewhat off topic since it was about Holloway’s ideas, not Dembski’s.

    Holloway’s argument involving Levin’s result differs from Dembski’s argument, at least as far as I understand the two arguments. Though I would not be surprised if they are linked when analysed in depth.

  6. Tom English:
    I in fact have a pretty good idea of the sense in which Holloway regards the Universe to be algorithmic,

    Thanks for this helpful reply. You raise some interesting philosophical issues which would be fun to discuss if you have the time to do an OP at some point.

    I’m not aware of any biologist having claimed that evolution creates “new information” (whatever that is) out of nothing. Biologists are keen on having evolution abide by the laws of physics (whatever that means), and I find it hard to believe that any of them would take a stand against conservation of information (also called conservation of probability) in quantum mechanics.

    I was careless in using the phrase “creation of new information”. I was trying to say something like, “If you believe that biological evolution, as described by the naturalistic processes in the models of consensus science, cannot explain the changes in genome we observe, then you might postulate that an intelligence unconstrained by those processes is involved”. I suspect that that phrasing would have been problematic as well, however!

    I’ve never understood how the conservation of quantum information, which applies only to fundamental physics, could apply to the models of biological science. Anyone who suggests this would seem to be ignorant of decoherence.

    What do you think of Swamidass’s attempt to explain a principle of Information Non-Growth that, if defensible, seems to be more appropriate to biology?
    Law of Information Non Growth

  7. Tom English: That’s why I characterize the stuff coming from Holloway, and also his BFF Jonathan Bartlett, as mathematicalistic, and not mathematical.

    It’s not pseudo-mathematics though. Perhaps quasi-mathematics?

    ETA:

    Tom English: To expose mathematicalism for what it is, you have to bring mathematics to bear. But the folks you hope to persuade are not mathematically inclined, and are going to stick with those who tell them the kind of stories that they like to hear.

    There are perhaps the odd few who admit that they do not understand the mathematics and, because they do not understand it, shy away from those arguments.

    Not saying I’m one of them. 😉

  8. My first name is Mung and my last name is Mung and I have no doctorate. Hope that helps anyone who wants to know how to reference me by name.

  9. Neil Rickert: ID is about external design imposed on a biological system by an external designer. Evolution is about internal design/redesign by the population itself.

    Would you say that Michael Denton is not an IDist?

    I, for one, am willing to consider organisms as participating in their own design. I think “design from without” smacks too closely of creationism.

    Of course, your description of evolution is incorrect. Evolution as popularly conceived and represented is also design by an external designer.

    Alan Fox: How many times can one say the niche adds bias and be ignored?

    +2

  10. BruceS: You raise some interesting philosophical issues which would be fun to discuss if you have the time to do an OP at some point.

    I really don’t have time, and Joe knows why. I checked to see whether I could wrap up my post on imagination sampling in a reasonable amount of time, and it turned out that I could not. However, I did watch the video of Holloway’s talk again. I’d forgotten just how error-ridden and bizarre it is. If you want to know where he’s coming from, I highly recommend setting the playback speed at 1.5x (he talks too fast for 2x), turning on the captions, and giving him about 18 minutes of your time. I’ll mention that the imagination-versus-algorithm experiment he reports upon is horribly botched, because he doesn’t give the algorithm the time it needs to come up with the best solution it’s able to produce.

    ETA: Holloway’s talk on imagination sampling: https://www.youtube.com/watch?v=ZS1vTcQrMoU

    Now that I’m writing this, I’m feeling the itch to go at it again. I really shouldn’t do that. If I do put time into a post, it needs to be a long-unfinished one on algorithmic specified complexity — which Marks, Dembski and Ewert have been calling a measure of meaningful information (another “little” item that Holloway is now neglecting to mention). That would be a good followup to Joe’s post.

    But exercises like this are good. “How do I know what I think till I see what I say.” I’ve dumped a copy of what I’ve written into my draft of the response to Holloway. So, if I ever finish it, you’ll have helped.

  11. Joe Felsenstein: In the case of evolution, mutual information between what and what?

    Exactly!

    And all mutual information is, is a measure of how much one probability distribution tells us about a second probability distribution. And self-information is just a special case of mutual information.

  12. Joe Felsenstein: In regard to conservation of information and the “creation of new information”, Dembski’s original argument does not need to be phrased in terms of information. As Tom has mentioned repeatedly here, CSI is just a statement that the probability of getting something that is this good or better (on a relevant goodness scale) is less than 10^{-150}. There is no need to take the logarithm and call the result “information” or “specified information”.

    Wow. I agree with Joe again! And Tom. We could just as easily have called it complex specified entropy. 😉

  13. BruceS: I was careless in using the phrase “creation of new information”.

    I thought you were simply echoing what you’ve seen from IDists. I’ve often seen them use the phrase “creation of new information” (emphasis added). Searching now with the “new” included — something I’ve never done before — I’ve finally found the source of it. So thanks again for the prompts!

    I see now that creationists and neo-Paleyists have been quote-mining Henry Quastler for at least 15 years. I’ve attached a screenshot of the snippets of Quastler’s The Emergence of Biological Organization that I’m able to get from Google Books.

    Here’s an example of the quote-mining, in a video featuring Stephen Meyer: “The creation of new information is habitually associated with conscious activity” (runs about 2.5 minutes). Amusingly, I find that Meyer recently modified the quote to suit himself, in “Yes, Intelligent Design Is Detectable by Science” (April 2018):

    As the pioneering information theorist Henry Quastler observed, “Information habitually arises from conscious activity.”8
    _____________
    8. Henry Quastler, The Emergence of Biological Organization (New Haven: Yale UP, 1964), 16.

    As you can see below, Quastler actually writes on page 16:

    Creation of New Information. The “accidental choice remembered” is a common mode of originating information. Since creation of information is habitually associated with conscious activity, it will be worthwhile…

    Note that the quotation in the video not only replaces the opening “Since” with “The,” but also changes “creation of information” to “creation of new information.” Both the video and the ENV article put a period at the end, where there is, of course, an ellipsis. As many years as I’ve observed this stuff, I continue to be shocked when I see what scumbags these “Christian thought leaders” are.

    Is Meyer’s gross distortion of the passage an innocent lapse in memory? Well, how would it be that he has the bibliographic data cached away — surely he does not have it memorized — but not the quotation itself? I see now that Meyer, Gauger, and Nelson write in “Theistic Evolution and the Extended Evolutionary Synthesis: Does It Work?” (Chap. 8, Theistic Evolution, 2017):

    Where does the programming — the algorithmic control — that accounts for the “preprogrammed adaptive capacity” of living organisms come from? We know of only one source of such programming. Our uniform and repeated experience affirms that the only source for information-rich programs is intelligent agency. Or as the information theorist Henry Quastler put it, “the creation of new information is habitually associated with conscious and rational activity.”

    I’ve added emphasis to the words that they introduce. They refer to page 16 of Quastler’s book.

    This just keeps getting better and better. Googling the phrase “creation of new information is habitually associated,” i.e., including the word new that does not appear in the original, I get upward of 250 hits. A few of them are responses including this series of comments by Diogenes on Larry Moran’s blog. Diogenes did a fantastic job of showing how Meyer’s quotations of Quastler have “mutated” over the years. It is highly implausible that Meyer is doing it by accident.

    (Yes, Mung Mung, I just made a design inference. And I did it without CSI or FIASCO calculations.)

  14. Mung Mung: Wow. I agree with Joe again! And Tom. We could just as easily have called it complex specified entropy.

    Flash! Just in from Richard Dawkins (bold emphasis added):
    _____________________________

    I am aware that my characterization of a complex object — statistically improbable in a direction that is specified not with hindsight — may seem idiosyncratic. So, too, may seem my characterization of physics as the study of simplicity. If you prefer some other way of defining complexity, I don’t care and I would be happy to go along with your definition for the sake of discussion. But what I do care about is that, whatever we choose to call the quality of being statistically-improbable-in-a-direction-specified-without-hindsight, it is an important quality that needs a special effort of explanation. It is the quality that characterizes biological objects as opposed to the objects of physics. The kind of explanation we come up with must not contradict the laws of physics. Indeed it will make use of the laws of physics, and nothing more than the laws of physics. But it will deploy the laws of physics in a special way that is not ordinarily discussed in physics textbooks. That special way is Darwin’s way. I shall introduce its fundamental essence in Chapter 3 under the title of cumulative selection.

  15. Mung and Joe:

    When you pull -\log p_i out of the expression for entropy,

        \[H(p) = -\sum_i p_i \log p_i,\]

    what you have is the pointwise entropy. It is vital to understand that the probabilities \{p_i\} are of events that are nonempty, exhaustive, and mutually exclusive. Where Dembski fails is in allowing the assignment of “information” (scare quotes deserved) to both the events T and T^\prime \subset T \subseteq \Omega, where \Omega is the sample space.
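    Tom’s point about pulling -\log p_i out of the entropy is easy to check numerically (a minimal sketch with a hypothetical three-event distribution of my choosing; the events are nonempty, exhaustive, and mutually exclusive):

```python
import math

# Hypothetical distribution over three mutually exclusive, exhaustive events.
p = [0.5, 0.25, 0.25]

pointwise = [-math.log2(pi) for pi in p]        # pointwise entropy -log p_i per event
H = sum(pi * h for pi, h in zip(p, pointwise))  # entropy = expected pointwise entropy
print(pointwise, H)  # [1.0, 2.0, 2.0] 1.5
```

    The “larger expression of primary interest” here is the expectation; each -\log p_i is just one term of it.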

  16. Mung:

    Joe Felsenstein: In the case of evolution, mutual information between what and what?

    Exactly!

    And all mutual information is, is a measure of how much one probability distribution tells us about a second probability distribution. And self-information is just a special case of mutual information.

    The entropy for probability mass function p is sometimes referred to as self-information because it is identical to the mutual information of p and itself:

        \[H(p) = I(p; p)\]

    I was going to agree strongly with Joe’s remark, and I’m glad to see that you had already. However, I’m going to make an observation that I never have before, and that I think may be a very useful one.

    Shannon developed his mathematical results in “A Mathematical Theory of Communication.” The interpretation that Shannon gave his mathematical abstractions makes plenty good sense when you see how he related them to communication. Now, if you lift his mathematical structure out of that area of application, and attach it to another area, Shannon’s original interpretation doesn’t necessarily make sense anymore. (Shannon himself said something to this effect.) I believe that for each redeployment of the mathematics, there needs to be a fundamental rethinking of the interpretation. This seems obvious after I say it, but I think I’ve just put my finger on a source of considerable misunderstanding of “information.”

    I got excited when I saw Joe’s question because, as simple as it is, it’s a very important question to ask. (People caught up in fancy thinking often do overlook the most important questions, precisely because the questions are, on their faces, simple and “obvious.”) The names that are traditionally associated with mathematical entities do not tell us what they “really are.”
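    The identity H(p) = I(p; p) quoted above can be verified numerically (a sketch using a hypothetical three-point distribution; “mutual information of p with itself” is modeled as a joint distribution concentrated on the diagonal):

```python
import math

def entropy(p):
    """Shannon entropy in bits of a probability mass function."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def mutual_information(joint, px, py):
    """Shannon mutual information in bits from a joint pmf (list of rows)."""
    return sum(pij * math.log2(pij / (px[i] * py[j]))
               for i, row in enumerate(joint)
               for j, pij in enumerate(row) if pij > 0)

p = [0.5, 0.25, 0.25]
# The joint distribution of (X, X) puts all its mass on the diagonal.
diag = [[p[i] if i == j else 0.0 for j in range(len(p))] for i in range(len(p))]
print(entropy(p), mutual_information(diag, p, p))  # both 1.5 bits
```

    As the surrounding discussion stresses, the numbers are the easy part; what they mean depends entirely on the application the mathematics is attached to.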

  17. Tom English: Now, if you lift his mathematical structure out of that area of application, and attach it to another area, Shannon’s original interpretation doesn’t necessarily make sense anymore.

    Yes, this is a problem that I see in a lot of “information” talk.

  18. Neil Rickert:

    To say that it is functional is to say that it serves its purpose.

    Durston presumably sees this as a misfunction, as a failure to serve the purpose of being non-cancerous.

    The latest PS Durston post that I have seen talks about the role of an intelligent agent in separating function from mis-function. When I first read that, I thought he was referring to the philosophical problem of naturalizing norms and how to avoid a regression to the norms of that intelligent agent. But on review, I decided I was reading too much into Durston’s post.

    In any event, I agree with you that the starting point of solving that philosophical issue is to consider a living agent and its inherent goals of continuing to live and of reproducing.

  19. Neil Rickert: I’m undecided on how to characterize Denton — not that it matters.

    I think there are as many varieties of ID as there are IDists. For example, what do J-Mac and phoodoo agree on? What are they undecided about? What do they disagree on and why?

    I’ve no answers to any of those questions. But oddly, in “Darwinism,” where apparently no dissent is allowed, I can go and read about the differences of opinion and the various “camps” that exist within the vast edifice that is our current attempt at understanding our origins. And how, through further research, those differences of opinion will eventually be resolved. Can the same be said for ID? I think not.

    This is why ID will never rise above a mechanism to generate page hits and sell blogs.

  20. Mung: Exactly!

    And all mutual information is, is a measure of how much one probability distribution tells us about a second probability distribution.

    Just plain Mung:
    Holloway is using algorithmic (Kolmogorov) MI: see here. The Shannon approximation/bound does come up in the computer simulations that he and Swamidass trade barbs about (e.g., in zip implementations).

    I think Holloway prefers the algorithmic framework because it was used by ASC and because at one point he defines an organism’s complexity using Kolmogorov minimal sufficient statistic (KMSS). But that is just a guess.

    FWIW, I believe the same results of MI non-growth under Holloway’s constraints apply to the Shannon version of MI; see Section 4.1 of this paper (pdf).

    But talk of an organism as in the linked posts is vague, so when push comes to shove, I think it is the genomes of an organism and its successors in the evolved populations that are the symbol strings that KMSS and K-MI are applied to. And, to repeat myself, I believe Holloway’s claim fails because his functions f(x) in his I(f(x):y) only consider mutation, ignoring the selection mechanisms of evolution.
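    For the Shannon version Bruce mentions, the non-growth claim is just the data-processing inequality, which is easy to check on a toy joint distribution (my example, with hypothetical numbers; note this is Shannon MI, not the algorithmic MI Holloway actually uses):

```python
import math

def mi(joint):
    """Shannon mutual information (bits) from a joint pmf given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# X uniform on {0,1,2,3}, Y = X (perfect correlation): I(X;Y) = 2 bits.
joint_xy = {(x, x): 0.25 for x in range(4)}

# Deterministic post-processing f(X) = X // 2 coarsens X; I(f(X);Y) cannot exceed I(X;Y).
joint_fy = {}
for (x, y), p in joint_xy.items():
    joint_fy[(x // 2, y)] = joint_fy.get((x // 2, y), 0.0) + p

print(mi(joint_xy), mi(joint_fy))  # 2.0 1.0
```

    Non-growth under a deterministic transformation of one variable is exactly what the example shows; Bruce’s objection is that evolution is not merely such a transformation, since selection couples the population to its environment.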

  21. A relatively simple protein fold process requires an increase of information…

    DNA > RNA > Chain of amino acids > folded protein
    In addition to the sequence information there is also the position information…

    Where did this information come from???

  22. Mung,

    Just wanted to make sure you noticed Tom’s post defining “pointwise entropy”. So add that term to ‘surprise’, ‘surprisal’, ‘information gain’, ‘information’, and just plain ‘entropy’.

    As Shakespeare said,
    “A rose by any other name would still be -log p”.
    Or something like that.

  23. regarding this:
    “A relatively simple protein fold process requires an increase of information…

    DNA > RNA > Chain of amino acids > folded protein
    In addition to the sequence information there is also the position information…

    Where did this information come from???”

    I have also proposed a relatively easy experiment that would prove or disprove the information increase in protein folds or embryo development. It would end the never-ending speculations about Complex Specified Information. I have not heard from anyone yet. It looks like nobody is interested in resolving this issue; rather, everyone seems to want to continue the futile speculations and the proposed definitions of CSI…

    I wonder why?

  24. BruceS: Just wanted to make sure you noticed Tom’s post defining “pointwise entropy”. So add that term to ‘surprise’, ‘surprisal’, ‘information gain’, ‘information’, and just plain ‘entropy’.

    As Shakespeare said,
    “A rose by any other name would still be -log p”.
    Or something like that.

    I may have seemed to say that one name is right, and the others are wrong. The thing is that some names encourage understanding, some encourage misunderstanding, and some are just names. (I like the neutrality of surprisal.) As I said above, the name doesn’t tell us what the named abstraction “really is.”

    I came across the “pointwise” qualifier only four or five years ago, in a well organized survey of classical measures of information. There are several things I like about it:

    1. It reminds you that the expression has been pulled out of a larger expression, and that the larger expression is of primary interest.

    2. It’s hard to misinterpret.

    3. It can be applied consistently to various information-theoretic quantities. For instance, when Dembski and Marks stipulate that the target T is a block in a partition of the sample space, their active information

        \[I_+ = \log \frac{ Q(T) }{ P(T) }\]

    is, up to sign, the pointwise relative entropy. (George Montañez, formerly a student of Marks, mentioned the “pointwise” part of this in his dissertation proposal. I don’t recall him saying anything, ever, about the requirement of a partition of the sample space.)

    Algorithmic specified complexity is closely related to pointwise relative entropy, and some practical approximations to it are the pointwise entropy of one distribution relative to another. I don’t address this specifically in “Deobfuscating a Theorem of Ewert, Dembski, and Marks,” but I do supply the preliminaries (not for the mathematically faint of heart).
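    To make the connection concrete, here is the active-information formula with hypothetical numbers (the P and Q values are mine, purely for illustration; T is assumed to be a block of a partition of the sample space, per the comment):

```python
import math

P_T = 1 / 64   # probability of target T under the baseline distribution P (hypothetical)
Q_T = 1 / 4    # probability of T under the alternative distribution Q (hypothetical)

# Active information of Dembski and Marks, which up to sign is the
# pointwise relative entropy.
I_plus = math.log2(Q_T / P_T)
print(I_plus)  # 4.0 bits
```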

  25. Now that I am back to having a keyboard to sit before (I was out of town traveling), let me raise again the issue of how Holloway’s mutual information result shows that Specified Information cannot increase. I don’t see it. Holloway argues at Uncommon Descent (see here with correction here) that transformation of one variable cannot increase the mutual information with another.

    Let me accept that, and also accept that information may (or may not) be a relevant way to discuss this. The question I am still mystified by is: how does this show that the Specified Information in genomes from a population cannot increase? We have to have a scale of merit, say fitness, and a distribution of genotypes on it. And we have to keep the scale the same, unlike what is done in Dembski’s Law of Conservation of Complex Specified Information.

    Does that mutual information theorem then show that the average specified information in a population cannot increase from one generation to another when evolutionary processes are acting?

    Howzzat work? ‘Cause I can easily set up population genetics models with genotypes, fitnesses, etc. which have population mean fitness increasing sometimes, or even often.

    And of course the issue is not empirical: the point is not that I need to show that fitnesses (or averages of negative logs of fractions of populations at or above each genotype) always increase, but to ask whether there is some proof that in these models SI cannot increase because of some conservation-law result.
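    Joe’s “I can easily set up population genetics models” point is simple to demonstrate. Here is a minimal hypothetical haploid model of my own construction (fitness-proportional reproduction plus per-site mutation, fitness measured on a fixed scale), in which population mean fitness routinely increases:

```python
import random

def evolve(pop_size=200, genome_len=50, mu=0.01, generations=100, seed=1):
    """Toy asexual model: fitness = fraction of 1-bits, on a fixed scale."""
    rng = random.Random(seed)

    def fitness(g):
        return sum(g) / len(g)

    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    history = [sum(map(fitness, pop)) / pop_size]
    for _ in range(generations):
        # Fitness-proportional selection of parents, then per-site mutation.
        parents = rng.choices(pop, weights=[fitness(g) for g in pop], k=pop_size)
        pop = [[(b ^ 1) if rng.random() < mu else b for b in g] for g in parents]
        history.append(sum(map(fitness, pop)) / pop_size)
    return history

h = evolve()
print(round(h[0], 3), round(h[-1], 3))  # mean fitness climbs well above its start
```

    Because the goodness scale stays fixed across generations, the fraction of random genotypes at or above the evolved population’s level shrinks, i.e., specified information on Joe’s definition goes up.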

  26. Joe Felsenstein: … let me raise again the issue of how Holloway’s mutual information result shows that Specified Information cannot increase. I don’t see it. Holloway argues at Uncommon Descent (see here with correction here) that transformation of one variable cannot increase the mutual information with another.

    I hope you don’t see it anytime soon, because that would be a sign of cognitive decline.

    Holloway’s signal feature is equivocation. He quotes Elsberry and Shallit remarking, “Dembski does not discuss how to determine the CSI contained in f.” Then he “responds” unresponsively with stuff about algorithmic mutual information, which obviously is not complex specified information (CSI).

    Deconstruction of Holloway is not a worthwhile endeavor, but I will mention that he seems to conflate similarly named, though formally different, concepts from different points in the history of ID. As best I can tell, he’s connecting the specified complexity of 2002, which is nothing but the log-improbability of an event with a “detachable specification” (whatever that means), with the algorithmic specified complexity of 2018, which does not come with a notion of detachable specification, and which is formally similar to, but not the same as, algorithmic mutual information. Holloway evidently is treating the specified complexity of 2002 as though it were algorithmic mutual information, because they feel like the same thing to him.

    The ordinary, and easier to understand, expression of non-growth of information in algorithmic information theory (see, for instance, “Algorithmic Complexity” in Scholarpedia) is:

        \[K(f(x)) \leq K(x) + K(f) + C,\]

    where C is a constant depending only on the universal computer in terms of which K(\cdot) is defined. This says that application of the function f to binary string x actually can yield more algorithmic information than is in x. But as the algorithmic information of x grows, the contribution of the algorithmic information of f, which does not grow, becomes negligible. That is, the non-growth of algorithmic information is an asymptotic result. This is somewhat more subtle than Holloway is making it out to be. He chronically ignores “little” details that are fairly important.
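    The flavor of the inequality can be illustrated with a compressor as a crude, computable stand-in for K (an illustration only: zlib output length is emphatically not Kolmogorov complexity, and the strings here are my own). With f a fixed, simple function such as reversal, the “complexity” of f(x) stays within a small constant of that of x, whether x is simple or essentially random:

```python
import random
import zlib

def approx_K(s: bytes) -> int:
    """Crude computable stand-in for Kolmogorov complexity: zlib-compressed length."""
    return len(zlib.compress(s, 9))

rng = random.Random(0)
x_simple = b"abc" * 500                                    # highly compressible
x_random = bytes(rng.randrange(256) for _ in range(2000))  # essentially incompressible

# f = reversal, a fixed function whose "description length" K(f) is a small constant.
for x in (x_simple, x_random):
    print(approx_K(x), approx_K(x[::-1]))
```

    The additive slack plays the role of K(f) + C in the displayed inequality: it does not grow with x, which is why the non-growth statement is asymptotic.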

  27. Tom English: I may have seemed to say that one name

    Thanks Tom. Actually, this was meant for Mung and alluded to several exchanges at PS. He, Swamidass, and other commenters have had pointed exchanges about what the “correct” name is for -log p. All the names I listed in my post have been suggested.

    Info Primer for Mung

  28. BruceS,

    I was just using your comment to Mung as a springboard. I’m not following Peaceful Science, and don’t want to follow it. I like what Joshua is trying to do, and wish him success. But I’m not too terribly keen on the way he operates at times.

    I knew all of those names for -log p. You’ll find all kinds of names in the Wikipedia article for the Kullback-Leibler divergence, which I called relative entropy above. The authors of the article seem not to understand that an appropriate name for the quantity depends on context.

  29. In the last kerfuffle I learned what functional information was. I just learned what mutual information is. Now I just need to find out what complex specified information is and then I may be able to follow the discussion again.

    So what I’ve gathered so far is that “information” is what IDers use to avoid talking about adaptation and fitness.

  30. Corneel,

    So what I’ve gathered so far is that “information” is what IDers use to avoid talking about adaptation and fitness.

    And adaptation and fitness is what evolutionists use to avoid talking about information. These guys should get together and discuss all the issues 🙂

  31. Corneel:
    In the last kerfuffle I learned what functional information was. I just learned what mutual information is. Now I just need to find out what complex specified information is and then I may be able to follow the discussion again.

    So what I’ve gathered so far is that “information” is what IDers use to avoid talking about adaptation and fitness.

    Specified information is just \(-\log P\), where \(P\) is the probability that a random genotype is better than (or tied with) the one you are evaluating. Asking whether enough specified information can be put into a genome by natural evolutionary processes to qualify as Complex Specified Information is just asking whether those processes can get the genome far enough out into the tail of the original distribution. So it is a calculation about adaptation and fitness, and does not need to be posed as about “information” for us to evaluate Dembski’s logic.
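    A toy numerical sketch of that definition, under assumptions entirely of my own choosing (a 100-bit genotype whose “fitness” is simply its number of 1 bits, with \(P\) taken over uniform random genotypes):

```python
import math

L = 100  # genotype length (toy assumption: fitness = number of 1 bits)

def specified_information(k: int) -> float:
    """-log2 P, where P = probability a uniform random L-bit genotype
    has fitness (number of 1 bits) >= k."""
    tail = sum(math.comb(L, j) for j in range(k, L + 1))
    p = tail / 2**L
    return -math.log2(p)

# A typical genotype (50 ones) carries about one bit of specified
# information; one far out in the tail (90 ones) carries tens of bits.
print(specified_information(50))
print(specified_information(90))
```

    The number just measures how far into the tail of the fitness distribution a genotype sits, which is Joe’s point that it is a calculation about fitness.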

    What I have not seen from Holloway yet is
    1. Any comment on why Dembski can use the mapping of evolutionary change to set up the target region in the previous generation when he is sketching the proof of his LCCSI. Holloway himself has stated that the target specification must be “detachable” from the processes. Why doesn’t he worry about Dembski’s violation of this?
    2. Any discussion of why the LCCSI is valid, when it changes specification each generation? To refute the ability of the evolutionary processes to achieve CSI one would have to keep evaluating each generation with the same target region.

  32. Joe Felsenstein:
    let me raise again the issue of how Holloway’s mutual information result shows that Specified Information cannot increase. I don’t see it. Holloway argues at Uncommon Descent

    Thanks for the links pointing out Holloway’s comments at Uncommondescent. I read through his posts, and three items in particular stuck out for me.
    He believes that mutual information between people and the universe explains Wigner’s “unreasonable effectiveness of mathematics” claim. I have no idea how one could define or calculate such MI; it’s more like a claim of mystic oneness of human intelligence and the universe to me. Holloway sees it as evidence of fine tuning.

    He did not know the difference between the meanings of ‘DNA’ and ‘genetic code’.

    He is not sure if CSI applies to biology.

    From this, I conclude that Holloway has no biological model in mind for applying Levin’s result. Instead, he puts his faith in fine tuning arguments applied universally. This fallback has also come up in the discussion at PS.

  33. J-Mac: A relatively simple protein fold process requires an increase of information

    I don’t believe you. And I don’t believe you are capable of arguing the truth of that claim.

    Thanks, Joe. That helps. I was getting lost in all those different types of “information”.

    Joe Felsenstein: Specified information is just \(-\log P\), where \(P\) is the probability that a random genotype is better than (or tied with) the one you are evaluating.

    That resonates with Hazen-Szostak functional information, except that the proportion of configurations that meets some functional threshold has become a probability. Are the two equivalent in some sense?

    Joe Felsenstein: So it is a calculation about adaptation and fitness, and does not need to be posed as about “information” for us to evaluate Dembski’s logic.

    That depends on whether one accepts that “better” means “better adapted” does it not? I have a hunch that this is not always the case. Bill seems to disagree in the comment just above yours, for one thing.

  35. Rumraket: I don’t believe you. And I don’t believe you are capable of arguing the truth of that claim.

    I’ve got one.

    Hey, J-Mac, how much does the information increase by and what units are you using there?

  36. Corneel:
    Thanks, Joe. That helps. I was getting lost in all those different types of “information”

    [in response to my definition of specified information]

    That resonates with Hazen-Szostak functional information, except that the proportion of configurations that meets some functional threshold has become a probability. Are the two equivalent in some sense?

    If we have a space of sequences and draw from it with all of them equally likely, the probability of a set of sequences is also the fraction of all sequences that are in that set.
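    That identity (probability = fraction under uniform draws) is easy to check exhaustively on a toy sequence space; everything below (the length-8 four-letter space, the A-counting “activity”, the threshold) is my own illustrative choice, not anything from Hazen and Szostak:

```python
import itertools
import math

# Toy "function" score: number of 'A's in a length-8 sequence over {A, C, G, T}.
def activity(seq) -> int:
    return seq.count('A')

seqs = list(itertools.product('ACGT', repeat=8))
threshold = 6  # functional threshold

# Hazen-Szostak functional information: -log2 of the FRACTION meeting the threshold.
frac = sum(1 for s in seqs if activity(s) >= threshold) / len(seqs)
fi = -math.log2(frac)

# Specified information: -log2 of the PROBABILITY that a uniform random draw
# meets the threshold -- the same number when all sequences are equally likely.
prob = frac  # uniform draws make probability and fraction identical
si = -math.log2(prob)

assert fi == si
print(fi)
```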

    [in response to my identifying the “function” scale as “better”]

    That depends on whether one accepts that “better” means “better adapted” does it not? I have a hunch that this is not always the case. Bill seems to disagree in the comment just above yours, for one thing.

    In general, scales used for specified information are quantities positively correlated with fitness, and fitness is the ultimate scale. We would not, for example, use “blueness” as the scale, as finding an exceptionally blue individual is not our concern. (An exception to all this is algorithmic information theory as invoked by Dembski, by Ewert, and by Holloway. I hope to start a thread on this soon — why would we think that computability by a short algorithm would identify design?)

  37. Joe Felsenstein: We would not, for example, use “blueness” as the scale, as finding an exceptionally blue individual is not our concern.

    If Dembski’s argument is anything like that of other IDists the scale will be “resembling modern humans”. There has to be some ultimate target, after all.

    Already looking forward to your new OP; I usually end up learning new stuff.

  38. I am late to the party, as usual, but I’m pretty sure Levin’s Law of Information Non-Growth is more or less the equivalent of the Rao-Blackwell-Kolmogorov theorem in statistics. Far from preventing information being gained, it tells us exactly how to gain that information – by conditioning on the Sufficient statistic.
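    For anyone who hasn’t met Rao–Blackwellization: here is a minimal simulation sketch (a toy Bernoulli example of my own, not from the thread) of the variance reduction you get by conditioning an unbiased estimator on the sufficient statistic:

```python
import random
import statistics

random.seed(1)
p, n, trials = 0.3, 20, 5_000

naive, rb = [], []
for _ in range(trials):
    xs = [1 if random.random() < p else 0 for _ in range(n)]
    naive.append(xs[0])  # crude unbiased estimator of p: just the first draw
    # Rao-Blackwellize: E[X1 | sum(xs)] = sum(xs)/n, conditioning on the
    # sufficient statistic sum(xs). Unbiasedness is kept; variance shrinks.
    rb.append(sum(xs) / n)

print(statistics.variance(naive), statistics.variance(rb))
```

    Both estimators are unbiased, but the conditioned one has far smaller variance, which is the “how to gain information” reading of the theorem.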

  39. Tomato Addict:
    I am late to the party, as usual, but I’m pretty sure Levin’s Law of Information Non-Growth is more or less the equivalent of the Rao-Blackwell-Kolmogorov theorem in statistics. Far from preventing information being gained, it tells us exactly how to gain that information – by conditioning on the Sufficient statistic.

    You might be just the one I have been looking for!!!

    Are you familiar with the category theory that stems from set theory?
    We apparently have some mathematicians here at TSZ, but when it comes to doing real math, other than speculative math based on their own assumptions or definitions, they are useless…
    Maybe you can help to resolve the cell differentiation issue in embryo development based on category theory?

    Here are the links to my OPs:

    Does embryo development process require ID?

    http://theskepticalzone.com/wp/does-embryo-development-process-require-id/

    Does Embryo Development Require God’s Guidance?

    https://discourse.peacefulscience.org/t/does-embryo-development-require-gods-guidance/1729?page=4
