Correspondences between ID theory and mainstream theories

Per Gregory and Dr. Felsenstein’s request, here is an off-the-top-of-my-head listing of the major theories I can think of that are related to Dembski’s ID theory. There are many more connections I see to mainstream theory, but these are the most easily connected. I won’t provide links, at least in this draft, but the terms are easily googleable. I may also update this article as I think about it more.

First, fundamental elements of Dembski’s ID theory:

  1. We can distinguish intelligent design from chance and necessity with complex specified information (CSI).
  2. Chance and necessity cannot generate CSI due to the conservation of information (COI).
  3. Intelligent agency can generate CSI.

Things like CSI (a rough code illustration follows these lists):

  • Randomness deficiency
  • Martin-Löf test for randomness
  • Shannon mutual information
  • Algorithmic mutual information

Conservation of information theorems that apply to the previous list:

  • Data processing inequality (chance)
  • Chaitin’s incompleteness theorem (necessity)
  • Levin’s law of independence conservation (both chance and necessity addressed)

Theories of things that can violate the previous COI theorems:

  • Libertarian free will
  • Halting oracles
  • Teleological causation
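
To make the “Things like CSI” list concrete: under the uniform distribution on n-bit strings, randomness deficiency is roughly n − K(x), with K the Kolmogorov complexity. K is uncomputable, so the sketch below uses zlib’s compressed length as a crude stand-in; that substitution, and every constant in the code, is an illustrative assumption of this draft, not anyone’s official measure.

    import random
    import zlib

    def deficiency_bits(bits: str) -> int:
        # Approximate randomness deficiency n - K(x), using zlib's
        # compressed size in bits as a rough upper-bound proxy for K(x).
        compressed_bits = 8 * len(zlib.compress(bits.encode(), 9))
        return len(bits) - compressed_bits

    random.seed(0)
    random_bits = "".join(random.choice("01") for _ in range(4096))
    ordered_bits = "01" * 2048

    print(deficiency_bits(random_bits))   # near or below zero: no deficiency detected
    print(deficiency_bits(ordered_bits))  # large and positive: strongly patterned

A large positive score flags a string as non-random, which is the intuition a Martin-Löf test formalizes; a score near zero says nothing either way.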

187 thoughts on “Correspondences between ID theory and mainstream theories”

  1. EricMH,

    This has been formalized by quite a few mathematicians at this point. I’ve mentioned a couple in the top level article, e.g. randomness deficiency and tests for randomness.

    Randomness tests alone don’t resolve the paradox, because there are patterns — such as one’s SSN — that might qualify as random but nevertheless have special significance to us.

  2. What I have yet to see is application to real world data. For example, apply these concepts to bacterial populations, or to comparisons of mammalian genomes. Show us how CSI is measured, and show us how it is able to differentiate between naturally occurring mutations and designed mutations. Until this is done, it is all meaningless.

  3. keiths, to Eric:

    The revised version is useless, because you already have to do all the work of showing that something couldn’t have evolved before concluding that it has CSI and therefore couldn’t have evolved. Besides, for most cases of biological interest, neither Dembski nor anyone else can calculate the necessary probabilities. Not even for the flagellum, which is the pet structure of IDers everywhere.

    A further problem is that Dembski misapplies specification, as I explained here. He also assumes “a dictionary of 100,000 basic concepts” while giving no justification for that particular number.

  4. There’s some egregious equivocation in this thread, beginning with the OP.

    Dembski, joined later by Marks and Ewert, has published three different definitions of specified complexity. He has stated a “conservation law” only for CSI as originally defined — the log-improbability, log(1/p), of an event with a detachable specification. In his second and third definitions, CSI is a log ratio of nonconstant quantities. His semiformal arguments about conservation of [complex specified] information (log-improbability of an event with a detachable specification), given in his book No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002), do not apply to the later forms of specified complexity. And formal results that have been derived for the later forms of specified complexity do not apply to the original form. Thus EricMH is grossly wrong when he makes sweeping claims about “Dembski’s ID theory.”

    Dembski said nothing about conservation of the second “CSI” that he defined (2005). That “CSI” was recently dubbed semiotic specified complexity by George Montañez (who, unlike EricMH, attends to essential mathematical details).

    In 2009 (?), Dembski and Marks restated the Law of Conservation of Information (LCI), with “information” referring to active information instead of complex specified information (specified complexity). They gave no indication at all that Dembski had previously claimed that his original form of specified complexity was conserved. In the years since, they have said nothing whatsoever about Dembski’s original LCI. That is, they have never claimed that Dembski came up with two laws, one for specified complexity, and the other for active information. They simply abandoned Dembski’s original claim that specified complexity was conserved, and transferred his “conservation of information” rhetoric to active information.

    Ewert, Dembski, and Marks introduced algorithmic specified complexity (ASC, not CSI) in 2012. It is the only one of Dembski’s three forms of specified complexity that they address in their book, Introduction to Evolutionary Informatics (where the “information” in “conservation of information” is strictly active information). Rather than say that ASC is conserved, they say that high ASC is improbable. Furthermore, they bill ASC as a measure of meaningful information. (In my most recent post, I proved that ASC is not conserved in the sense that algorithmic mutual information is conserved. I also demonstrated, with a striking pictorial example, that it is ludicrous to regard ASC as a measure of meaning.)

    None of EricMH’s “Things like CSI” is like Dembski’s original CSI — the only CSI that he ever said was conserved. One of them, the Martin-Löf test for randomness, is not like any of the forms of specified complexity. The third form, ASC, is somewhat like classical mutual information and algorithmic mutual information for the simple reason that its formal expression is a mixture of expressions from the two. Observing the formal similarity, and leaving it at that, is utterly vacuous. More interesting — I first noticed this about five years ago, but have never figured out what to make of it — is the observation that ASC,

    \[-\log_2 p(x) - K(x \mid c),\]

    where x and c are binary strings, is a generalization of randomness deficiency. That is, set c to the empty string, and ASC is the randomness deficiency of x relative to probability distribution p over binary strings. This is not good news for ID theory, because it leads to an ugly question: Why should a generalized measure of randomness deficiency be regarded as a measure of meaning? Ewert, Dembski, and Marks have not raised the question, let alone attempted to answer it. (To repeat, I made it abundantly clear in my most recent post that ASC is not a measure of meaningful information.)
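
    Written out, the specialization just described: setting \(c := \varepsilon\) (the empty string) in

        \[\operatorname{ASC}(x) = -\log_2 p(x) - K(x \mid c)\]

    yields \(-\log_2 p(x) - K(x)\), which is the randomness deficiency of x with respect to p, up to the usual additive constant between \(K(x \mid \varepsilon)\) and \(K(x)\).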

    George Montañez has generalized the “high ASC is rare” theorem of Ewert, Dembski, and Marks, obtaining a theorem that applies not only to ASC, but also to semiotic specified complexity (and Szostak’s “functional information”). Montañez has failed to resist the urge to call his theorem a “conservation of information” theorem. (He does not explain why he calls it a “conservation of information theorem” — he just does it.) However, Montañez’s theorem does not apply to Dembski’s original form of specified complexity.

    EricMH wants us to make something of the “conservation of information” theorem for algorithmic mutual information, so I repeat that I proved, in the most recent of my opening posts at TSZ, non-conservation of algorithmic specified information.

    Finally, I should mention that EricMH is not the only one to use “CSI” equivocally in this thread. Some participants who are ostensibly opposed to ID have been aiding and abetting EricMH in his revision of the history of ID.

  5. In the preceding comment, I accidentally entered “algorithmic specified information” where I meant “algorithmic specified complexity.”

  6. Tom:

    Montañez has failed to resist the urge to call his theorem a “conservation of information” theorem. (He does not explain why he calls it a “conservation of information theorem” — he just does it.)

    At UD in 2009, I criticized Dembski’s use of the phrase “Law of Conservation of Information”. Montañez defended it for some odd reasons.

    Here’s the exchange:

    keiths:

    Dr. Dembski,

    The six existing conservation laws of physics are all strict conservation laws; the quantities in question neither increase nor decrease.

    Your proposed “Law of Conservation of Information” would be the first conservation law for which the quantity in question was not, in fact, conserved. Doesn’t that strike you as a bit presumptuous?

    The Second Law of Thermodynamics stipulates that in an isolated system, entropy will either increase or at best, remain constant; your LCI states that information will either decrease or at best, remain constant.

    Apart from a change in sign, they are exactly parallel. The SLoT doesn’t purport to be a conservation law. Why should the LCI?

    Montañez:

    The minimum information cost doesn’t change, so that would be the “quantit[y] in question [that] neither increase[s] or decrease[s].” At least that’s how I read it.

    keiths:

    The problem is that you could say the same thing about entropy. The starting entropy of a system is conserved, though the overall amount may (and usually does) increase. Yet we don’t call the SLoT the “Law of Conservation of Entropy”. Why? Because it would be highly misleading, since entropy is not conserved overall.

    Likewise, it is misleading for Dembski and Marks to call their principle the “Law of Conservation of Information” when information is not conserved overall.

    Montañez:

    Would you rather they be more specific and call it the “Law of Conservation of Minimum Information Cost”?

    They could do that, but it lacks the same ring.

    keiths:

    When you’re proposing a law of nature, don’t you think accuracy is a little more important than whether the name of the law has a nice “ring” to it?

    In any case, the problems with the LCI go beyond the misuse of the word “conservation”. The term “information” is also used questionably.

    By “information”, Dembski and Marks mean “active information”, which is their own idiosyncratic invention. The rest of the world takes “information” to mean something quite different.

    It’s as if I were to propose a universal “Law of Obfuscation of Matter”, only to reveal that I was redefining both “obfuscation” and “matter” in ways that were unique to me.

    If it’s not about conservation and it’s not about information, then why call it the “Law of Conservation of Information?” At the very least, Dembski and Marks should drop the word “conservation” and substitute “active information” for “information.”

  7. keiths,

    1. All three of the “conservation of information” theorems Dembski and Marks had at the time (in “Life’s Conservation Law”) follow easily from Markov’s inequality. But Dembski and Marks have never mentioned Markov’s inequality. I suspect that they did not want to reveal the little trick that they were playing.

    2. With obfuscation of Markov’s inequality,

        \[P(X \geq \alpha E[X]) \leq \frac{1}{\alpha},\]

    you can get

        \[P\left(-\!\log_2 \frac{p}{X} \geq a\right) \leq 2^{-a},\]

    with substitutions \(p := E[X]\) and \(a := \log_2 \alpha\). Taking random variable X to be the probability of success of a randomly selected “search,” George would regard the obfuscated Markov’s inequality as saying that active information is conserved in the search for a search. However, we might as well take it as saying that high active information is improbable in a random search for a search. The “high specified complexity is improbable” theorem is quite similar in form to the “high active information is improbable” theorem. Indeed, George applies Markov’s inequality in the proof. My guess is that his reason for calling his “high specified complexity is improbable” theorem a “conservation of information” theorem is that it has the look and the feel of a “high active information is improbable” theorem that has already been called a “conservation of information” theorem. (A small numerical check of the obfuscated bound appears after this list.)

    3. I suspect that almost all physicists would scoff at calling Markov’s inequality, with random variable X taking a probability as its value, conservation of information — particularly when there is already a principle of conservation of information (an equality) in quantum mechanics (never mentioned by IDists). I suspect that Dembski and Marks knew better than to make it clear what they were doing.

    4. Joe Felsenstein gets the credit for recognizing Markov’s inequality.
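
    Here, as promised above, is a minimal numerical check of the obfuscated bound. The choice of distribution for X (a fourth power of a uniform variate) is an arbitrary illustrative assumption; any nonnegative X works.

        import math
        import random

        # Empirical check of the rearranged Markov inequality:
        # for X >= 0 with p = E[X], P(-log2(p/X) >= a) <= 2^(-a).
        # X plays the role of the success probability of a randomly
        # selected "search"; its distribution here is arbitrary.
        random.seed(0)
        samples = [random.random() ** 4 for _ in range(100_000)]
        p = sum(samples) / len(samples)  # estimate of E[X]

        for a in (1, 2, 3, 4):
            hits = sum(1 for x in samples if -math.log2(p / x) >= a)
            print(f"a={a}: empirical {hits / len(samples):.4f} <= bound {2 ** -a:.4f}")

    The empirical frequencies sit below 2^(-a) for every a, which is all the theorem asserts: high “active information” is improbable under random sampling, nothing more.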

  8. Tom,

    3. I suspect that almost all physicists would scoff at calling Markov’s inequality, with random variable X taking a probability as its value, conservation of information — particularly when there is already a principle of conservation of information (an equality) in quantum mechanics (never mentioned by IDists).

    I agree. Calling it “conservation of information” is inaccurate and gives it an unwarranted air of profundity. It’s pure marketing.

    Indeed, George applies Markov’s inequality in the proof. My guess is that his reason for calling his “high specified complexity is improbable” theorem a “conservation of information” theorem is that it has the look and the feel of a “high active information is improbable” theorem that has already been called a “conservation of information” theorem.

    He may be following a precedent, but it’s a precedent that he defended (for questionable reasons including “it has a nice ring to it”) when Dembski and Marks first proposed it.

  9. keiths: Randomness tests alone don’t resolve the paradox, because there are patterns — such as one’s SSN — that might qualify as random but nevertheless have special significance to us.

    Yes, that’s one of the innovations of CSI: it includes context to account for such things. Randomness deficiency is a form of CSI, but CSI itself is a broader concept.

    For example, one could have an incompressible bitstring (i.e. l(X) <= K(X)), and thus no randomness deficiency, that nevertheless has high CSI. If the human mind were a halting oracle, then much of what it creates would be this sort of incompressible CSI.

  10. keiths: I addressed that question in an OP several years ago:

    A resolution of the ‘all-heads paradox’

    Well, then you have a Fields Medal to win. Much mathematical work has been based on the premise that all heads is in some sense objectively less random than equal heads and tails, such that all heads is good reason to suspect a non-random source, if such alternative hypotheses are available.

  11. Gordon Davisson: But let me go further and argue that we’re almost certainly less capable than algorithms at this sort of thing.

    This is false. We can never be less capable than algorithms, only slower.

    The rest of your argument is based on whether humans are perfect halting oracles like Turing described, but I’ve already explained this is not necessary for my argument. All that is necessary is that the human mind is not computable, and there is a whole lot of uncomputable space between what algorithms can do and what perfect halting oracles can do. In fact, since there is so much (infinite!) space, it would seem more appropriate to place the burden of proof on those claiming the human mind is reducible to a Turing machine.

  12. Joe Felsenstein: Is there a conservation law for specified information when the specification is held the same? No. It’s very easy to find counterexamples.

    The specification can be changed by the stochastic process or stay the same; the conservation of information still applies. I don’t understand the problem you think this poses for Dembski’s COI. Maybe if you can include a very clear example of the problem in your explanation I’ll get it.

  13. faded_Glory: What is missing here is an appreciation of the brute fact that biological entities require viability.

    I think we are talking cross ways. My only point in that lengthy paragraph is to explain why it is not enough to say “improbable.” Specification is an essential part of the argument.

  14. Tom English: It would still be daft to say that human intelligence “is” (how bizarre, to say “is” rather than something like “is equipped with”) a partial halting oracle, inasmuch as an oracle responds infallibly in a single time step, and a human commonly requires a long time to respond fallibly.

    The point is not whether humans operate exactly like a hypothetical halting oracle, but whether such a model is necessary to simulate the behaviors that the human mind exhibits.

    No one is going to program a physics simulation with a literal Turing machine, but it is still true that a Turing-complete language is adequate to simulate anything we program.

  15. Corneel: Here we reach the deep core of the ID argument, which posits that certain patterns are inherently meaningful, and that we intuitively recognise them as such (e.g. 500 coin flips ending up all heads).

    But it’s pretty hard to formalize this intuition, especially if people obstinately refuse to see things your way (Huh, I bet your coin has two heads).

    Partly correct. While it is true there are objective ways to formalize inherently meaningful patterns, that is not necessary for the CSI argument. All that is necessary is that the knowledge base used for the specification be independent of the process generating the event under analysis.

  16. graham2: I engaged in an argument about all this on an ID site, before I was banned, arguing that if a series of numbers that spelled out the digits of pi turned up, they would satisfy the requirements of randomness beautifully, yet be obviously rigged.

    I wish someone would put me out of my misery.

    CSI only guarantees true positives. So, it cannot say X is random. All it can say is that X is not random.

  17. Corneel: But I suspect that every one of those tests requires one to specify the expected distribution (I may be wrong here), i.e. they enable you to reject that your outcome is a chance event, provided that you have specified the appropriate null-hypothesis.

    Not so much specify, but assume, the expected distribution. For example, Kolmogorov defined compressibility as non-random because most bitstrings are incompressible. This, of course, assumes a uniform distribution over bitstrings, or something equivalent. Every concept I refer to like this does something similar. They assume there is nothing a priori special about the prior distribution that will favor some particular outcome, and so if these special events show up they at least disqualify the “everything is equally likely” null hypothesis. So, the null hypothesis and test go hand in hand with these concepts.
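
    The counting argument behind “most bitstrings are incompressible” is worth seeing in numbers. A minimal sketch (the length n = 100 is an arbitrary choice):

        # There are 2**n strings of length n, but only
        # 2**0 + 2**1 + ... + 2**(n-c-1) < 2**(n-c) descriptions shorter
        # than n - c bits, so at most a 2**(-c) fraction of all n-bit
        # strings can be compressed by c or more bits.
        n = 100
        for c in (1, 10, 20):
            short_descriptions = sum(2 ** k for k in range(n - c))
            fraction = short_descriptions / 2 ** n
            print(f"compressible by {c:>2} bits: at most {fraction:.2e} of all strings")

    So under the uniform null, a substantially compressible outcome is rare by construction, which is exactly the sense in which it can disqualify the “everything is equally likely” hypothesis.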

  18. keiths: If the earlier version of CSI had worked, Dembski would have had something significant, as he would have avoided the need to evaluate the probability of an object’s being formed by “Darwinian and other material mechanisms”. Alas, it didn’t work, so Dembski had to revise the definition of CSI.

    The revised version is useless, because you already have to do all the work of showing that something couldn’t have evolved before concluding that it has CSI and therefore couldn’t have evolved. Besides, for most cases of biological interest, neither Dembski nor anyone else can calculate the necessary probabilities. Not even for the flagellum, which is the pet structure of IDers everywhere.

    Sorry, we’ll have to agree to disagree and end it here. We are just going in circles at this point.

  19. Tom English: His semiformal arguments about conservation of [complex specified] information (log-improbability of an event with a detachable specification), given in his book No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002), do not apply to the later forms of specified complexity. And formal results that have been derived for the later forms of specified complexity do not apply to the original form.

    You are wrong. Montañez’s recent paper shows how it’s all related. Essentially, just normalize the specifications that add up to more than one so you get a well-behaved probability, and you get the improbability result, which in turn can easily be translated into the form of Dembski’s original COI argument in his NFLT book.

    You are constantly harping on minor and inconsequential issues. As for your non-conservation article, I explained to you in that thread why your proof doesn’t apply to ASC, because you are not transforming the random variable with your function, as you should if you want it to apply to ASC.

    It’d be great if you actually came up with something that’s fundamentally wrong with ID theory. You are an extremely talented mathematician and it’d be very interesting to see you come up with such a result. Otherwise, it seems you are more interested in making a lot of noise about nothing, and you can probably spend your time more productively elsewhere.

  20. And with that I believe I’ve done my due diligence in replying to everyone’s comments. I didn’t really get much out of this interaction, just many recycled old arguments and poorly understood ID theory, and it’s taken up 2h of my scant free time. So, I’ll be taking a hiatus from this site, too.

  21. EricMH,

    It may be satisfying for you to grab your ball and go home, declaring that your opponents don’t know what they’re talking about and aren’t worth your time. But it isn’t true, and it just makes you look bratty.

  22. EricMH,

    Sorry, we’ll have to agree to disagree and end it here. We are just going in circles at this point.

    You may be going in circles, but I’m raising a point which you have yet to address: namely, Dembski’s 2005 revision of CSI so that it takes “Darwinian and other material mechanisms” into account. That’s an enormous concession that erases any potential usefulness that CSI might otherwise have had.

    I’ve also pointed out that Dembski’s use of specification is invalid:

    There are lots of problems with this, but perhaps the biggest one is that the bullseye is still being drawn too narrowly. Evolution doesn’t care whether a concept is “level 4 or less”, and it certainly doesn’t care whether something can be described as a “bidirectional rotary motor-driven propeller.” Evolution doesn’t care about anything except fitness, and the only legitimate target is therefore “anything at all that would sufficiently increase fitness starting from a given ancestral population”. (Keeping in mind that the fitness landscape changes over time.)

    Good luck to anyone trying to quantify that.

    No response from you. I understand. What could you possibly say?

  23. Tom English: Dembski, joined later by Marks and Ewert, has published three different definitions of specified complexity. He has stated a “conservation law” only for CSI as originally defined — the log-improbability, log(1/p), of an event with a detachable specification. In his second and third definitions, CSI is a log ratio of nonconstant quantities. His semiformal arguments about conservation of [complex specified] information (log-improbability of an event with a detachable specification), given in his book No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002), do not apply to the later forms of specified complexity. And formal results that have been derived for the later forms of specified complexity do not apply to the original form. Thus EricMH is grossly wrong when he makes sweeping claims about “Dembski’s ID theory.”

    EricMH: You are wrong. Montañez’s recent paper shows how it’s all related. Essentially, just normalize the specifications that add up to more than one so you get a well-behaved probability, and you get the improbability result, which in turn can easily be translated into the form of Dembski’s original COI argument in his NFLT book.

    According to the abstract of Montañez’s paper,

    [W]e define a model that allows us to cast Dembski’s semiotic specified complexity, Ewert et al.’s algorithmic specified complexity, Hazen et al.’s functional information, and Behe’s irreducible complexity into a common mathematical form.

    What Montañez calls semiotic specified complexity does not appear in No Free Lunch (2002). He correctly cites “Specification: The Pattern That Signifies Intelligence” (2005). That was where Dembski first made specification a matter of degree.

    In No Free Lunch, an event categorically does or does not have a detachable specification. There is not, as in semiotic specified complexity and algorithmic specified complexity, a numerical measure of descriptive complexity. So the normalization you describe makes no sense at all in the context of No Free Lunch.

    EricMH: You are constantly harping on minor and inconsequential issues.

    When Judge Jones ruled that ID was not science, in Kitzmiller v. Dover Area School District (2005), the most sensational claim of ID was that design detection was underwritten by Dembski’s Law of Conservation of [Complex Specified] Information — a law of nature, comparable to the Second Law of Thermodynamics. William Dembski, then touted as the “Isaac Newton of information theory,” abandoned his putative law in, IIRC, December 2008, when he and Marks released a preprint of “Life’s Conservation Law.”

  24. Eric,

    In No Free Lunch, Dembski attempted to demonstrate that the flagellum is designed.

    He failed. Do you agree, and do you understand why?

  25. EricMH: I think we are talking cross ways. My only point in that lengthy paragraph is to explain why it is not enough to say “improbable.” Specification is an essential part of the argument.

    In biology, specification actually makes certain configurations more probable, not less. This is again because of viability (and in a broader sense, natural selection for fitness) coupled with the fact that progeny will generally be genetically and morphologically very close to its ancestors – that is the way biology works.

    Natural selection is a very strong filter that biases the probability distribution hugely towards certain specific and non-random outcomes. Unless you build that somehow into your mathematics, you aren’t talking about biology.

  26. EricMH: Well, then you have a Fields Medal to win. Much mathematical work has been based on the premise that all heads is in some sense objectively less random than equal heads and tails, such that all heads is good reason to suspect a non-random source, if such alternative hypotheses are available.

    Intuitively that premise appears valid. A series of only heads is far less robust to errors than a sequence of equal heads and tails. Flip one tail and it is game over for the all-heads outcome, whereas a string of tails can still be offset by a string of heads later in the flipping, resulting in an equal proportion at the conclusion of the experiment.

    Your question is, when is an outcome non-random enough to warrant the inference to a different source than blind coin flipping? If there is a filter that removes the tails before you even get to see them, you will end up with an all-heads outcome even if the flips themselves have equiprobable outcomes. It is this combination that permits complex specified outcomes. I don’t see why we would need to conclude that such a process constitutes ‘design’, i.e. conscious intent.

    All that this ID mathbabble does is validate Darwin’s conclusion that natural selection can be the explanation for complex biological entities, circumventing the barrier of extremely low probabilities that exist under an assumption of equi-probability. You don’t need to convince us, we have understood this for over 150 years.

  27. EricMH: CSI only guarantees true positives. So, it cannot say X is random. All it can say is that X is not random.

    If there is anything I hate in these discussions it is the use of the word ‘random’ without further qualification. Random can mean a lot of very different things – stochastic, equiprobable, without aim or intent, haphazard, and so on.

    In what sense do you use the word above?

  28. EricMH: CSI only guarantees true positives. So, it cannot say X is random. All it can say is that X is not random.

    Since evolution clearly is non-random (‘random’ in the sense of having equiprobable outcomes), and nobody suggests that it is otherwise, I don’t see the relevance of CSI to the evolution debate. You are trying to falsify something that nobody claims.

  29. EricMH:

    It’d be great if you actually came up with something that’s fundamentally wrong with ID theory.

    What is wrong with it is that ID demonstrates neither intelligence nor design. All it does is rule out processes with equi-probable outcomes (processes that nobody proposes for evolution). You then inject this with a dose of unproven dualistic philosophy to draw sweeping and unwarranted conclusions.

    As such, ‘ID’ is a complete misnomer, and all these maths have little to no relevance to biology or evolution.

    Why don’t you try instead to model the proposed combinatorial process of heritable variation and natural selection in challenging environments? If you do that, and you can demonstrate that CSI cannot be generated by such a process, you might have a chance to convince people.

  30. EricMH:

    Joe Felsenstein: Is there a conservation law for specified information when the specification is held the same? No. It’s very easy to find counterexamples.

    The specification can be changed by the stochastic process or stay the same; the conservation of information still applies. I don’t understand the problem you think this poses for Dembski’s COI. Maybe if you can include a very clear example of the problem in your explanation I’ll get it.

    OK. Suppose that we had a specification that included all of the most fit genotypes, and they were a small enough fraction of all possible genotypes that you designated the set as having CSI. And somebody came along and put forward a theorem that showed that evolutionary processes could not get you into that set if you started outside of it. And showed this in general, not just for some particular case.

    Now this would be a Big Deal. A very big deal. It would basically stop population genetics modeling in its tracks. A hundred years of population genetics theory, gone in a single day.

    Now note what it requires you to do. You have to have a set, and compare where the population’s genotypes are before and after evolutionary processes work. So the specification used must be the same in those two generations.

    Dembski’s Law of Conservation of Complex Specified Information sort of claimed to do that in No Free Lunch (2002). I say “sort of” since, for all the extensive explaining of what CSI was, he never actually discussed this part of the argument. And if you look carefully, you will see that in sketching the proof of the LCCSI, Dembski has a specification before that is different from the specification after. So whatever it does accomplish, that Law does not keep the specification the same. To contradict population genetics theory one has to keep them the same, and have some law showing that to be in the specified set, you have to already have been in it the generation before.

    Does EricMH agree with this argument? I hope that he is still reading, because he called for me to make an explanation. And also, this is his thread.
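
    To see why holding the specification fixed matters, here is a toy simulation, a sketch under assumptions entirely my own (population size, genome length, mutation rate, fitness function, and the 90% threshold are all arbitrary): the specification “at least 90% of loci carry allele 1” is identical in every generation, the population starts entirely outside the specified set, and routine selection carries it inside.

        import random

        # Fixed specification across generations: >= 90% of loci are 1.
        # All parameters are arbitrary illustrative choices.
        random.seed(1)
        LOCI, POP, GENS, MUT = 100, 200, 300, 0.005
        in_spec = lambda g: sum(g) >= 0.9 * LOCI

        pop = [[0] * LOCI for _ in range(POP)]  # start wholly outside the set
        for _ in range(GENS):
            # fitness-proportional selection (multiplicative fitness 1.1
            # per 1-allele), followed by per-locus mutation
            weights = [1.1 ** sum(g) for g in pop]
            parents = random.choices(pop, weights=weights, k=POP)
            pop = [[b ^ (random.random() < MUT) for b in g] for g in parents]

        print(sum(map(in_spec, pop)), "of", POP, "genotypes satisfy the fixed specification")

    If the LCCSI held with the specification kept the same, the population could never have entered the set from outside it; toy models like this are why counterexamples are easy to find.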

  31. Let me add that, to show that Specified Information is not conserved, we can use a simpler example, which will be found in my 2007 article (Google with terms “Dembski” and “Felsenstein” to find it).

    We have a digital image, an array of 10,100 0’s and 1’s, which looks like a flower (it was in fact made from a photo of a flower). And we take a permutation of the integers 1 through 10,100 and we scramble it. We know the permutation, so we can in principle unscramble it any time we want. The information is conserved, since we can reverse the permutation at will and the image will return, unchanged.

    But if we use the specification “looks like a flower”, the original image has it, and the scrambled image doesn’t. So although information is conserved, Specified Information isn’t. The amount of it can either increase or decrease (depending on which direction we’re going), while all that time, the Shannon information is conserved.

    Again, I hope that EricMH is convinced by this example that the conservation of information does not apply to Specified Information.
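
    Joe’s example is easy to reproduce in code. A minimal sketch follows; the random bit array stands in for his flower bitmap (an illustrative substitution), and everything else tracks his description:

        import random

        random.seed(1)
        N = 10_100
        image = [random.randint(0, 1) for _ in range(N)]  # stand-in for the flower image
        perm = list(range(N))
        random.shuffle(perm)  # the known, fixed permutation

        scrambled = [image[perm[i]] for i in range(N)]  # scramble the image

        unscrambled = [0] * N
        for i, j in enumerate(perm):  # invert the known permutation
            unscrambled[j] = scrambled[i]

        assert unscrambled == image  # fully reversible

    The scrambling loses no Shannon information (the assert proves reversibility), yet the specification “looks like a flower” would hold for image and fail for scrambled, so Specified Information rises and falls while Shannon information stays fixed.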

  32. Thanks Dr. Felsenstein, your explanations are a bit clearer.

    For your first example, we’d need to define it a bit more. The important thing is whether the specification is independent of the evolutionary process. If it is, then whether the specification is the same or not is immaterial. Winston’s improbability of ASC still holds.

    Your second example is the same as Tom English’s example in his non-conservation of ASC post. I pointed out there the problem is he doesn’t appropriately modify the random variable. The transformation must not only be applied to the particular instance, but also to the random variable the instance is drawn from, if you want to apply the argument to ASC. Once you transform the random variable, then Winston’s proof holds.

  33. keiths: It may be satisfying for you to grab your ball and go home, declaring that your opponents don’t know what they’re talking about and aren’t worth your time. But it isn’t true, and it just makes you look bratty.

    It’s not very satisfying; I’m speaking out of frustration. I’d love to see a good refutation of ID theory, instead of the constant misunderstandings, strawmen, completely irrelevant points, etc. that I see here at TSZ, at PS, at UD, in articles, and so on. And the more I learn the basics of fields like information theory, statistics, and computer science, the more irrelevant the counterarguments become; I even find that the ‘counter arguments’ actually support ID theory.

    So many very intelligent, highly educated people claim ID is completely bogus. Why, then, is it so hard to formulate an articulate refutation of the theory?

    But, it looks like I got some good (perhaps) responses after I complained, so I’ll revisit these comments in the coming days.

  34. Tom English: William Dembski, then touted as the “Isaac Newton of information theory,” abandoned his putative law in, IIRC, December 2008, when he and Marks released a preprint of “Life’s Conservation Law.”

    I’m not very aware of any of the above, but why do you think his COI for active information doesn’t apply? Why do you claim Winston’s and Montañez’s conservation laws are irrelevant?

  35. EricMH:

    I’d love to see a good refutation of ID theory, instead of the constant misunderstandings, strawmen, completely irrelevant points, etc. that I see here at TSZ, at PS, at UD, in articles, and so on.

    Consider the possibility that you’re misunderstanding the counterarguments or failing to recognize the flaws that critics are identifying.

    In the other thread, for example, you just praised a paper of Winston Ewert’s:

    Using Dembski’s CSI specifically, Winston has a neat paper on applying ASC to the game of life:

    I identified more than 20 substantive errors in that “neat” paper. You need to be a bit more skeptical of ID and ID-related claims.

  36. EricMH: The important thing is whether the specification is independent of the evolutionary process. If it is, then whether the specification is the same or not is immaterial. Winston’s improbability of ASC still holds.

    Not sure if I am following along, but this seems straightforward enough:

    Joe said that his specification was “has high fitness”. Since the evolutionary process involves natural selection, the specification does not seem independent to me: natural selection results in an increased representation of genomes that fit the specification.

  37. EricMH:
    Thanks Dr. Felsenstein, your explanations are a bit clearer.

    For your first example, we’d need to define it a bit more. The important thing is whether the specification is independent of the evolutionary process. If it is, then whether the specification is the same or not is immaterial. Winston’s improbability of ASC still holds.

    I’d hope we can set ASC and its conservation aside. I have large doubts that ASC means anything that is relevant to evolution or adaptation. I hope soon to put up a post (at PT) raising my objections to ASC.

    In establishing that SI is not conserved, I was thinking of SI as used by William Dembski from 2002 on, in his Law of Conservation of Complex Specified Information, to argue that evolutionary forces cannot accumulate SI in the genome. You have declared that his argument has never been refuted. So I was giving some simple examples showing that he did not succeed in showing that SI cannot be put into the genome by natural selection. Do you agree with me about that, or not? It will do no good to redirect attention to later arguments of Dembski and Marks, as the original LCCSI argument was one of those that you declared to be unrefuted. Or have you had second thoughts about that?

    Note also that Functional Information arguments, which design advocates like to invoke, are using the original SI, not an ASC argument.

    Your second example is the same as Tom English’s example in his non-conservation of ASC post. I pointed out there the problem is he doesn’t appropriately modify the random variable. The transformation must not only be applied to the particular instance, but also to the random variable the instance is drawn from, if you want to apply the argument to ASC. Once you transform the random variable, then Winston’s proof holds.

    Again, let’s set ASC aside; I hope soon to point out its utter irrelevance to refuting that natural selection can bring about substantial amounts of adaptive evolution. Maybe that argument of mine will be a failure, maybe not. Let’s get settled first whether you see the force of the arguments against use of ordinary CSI to establish limits on adaptation.

  38. Corneel: Not sure if I am following along, but this seems straightforward enough:

    Joe said that his specification was “has high fitness”. Since the evolutionary process involves natural selection, the specification does not seem independent to me: natural selection results in an increased representation of genomes that fit the specification.

    In the original Dembski 2002 argument, where he uses the Law of Conservation of Complex Specified Information, he uses a mapping from this generation back to the previous one to define a region then that corresponds to the specification now. He argues, straightforwardly, that this constructed specification in the previous generation is a region of just as low probability as the current specification. He’s right about that. But to use that mapping in constructing the previous generation’s specification, one has to use knowledge of the evolutionary processes.

    So that’s where the violation of Dembski’s condition, a violation by Dembski himself, occurred.

    If Dembski had taken a region (genomes of high enough fitness) and kept it the same in both generations, and somehow shown that if you ended up in it, you had to start in it, that would be a huge problem for evolutionary biology. I would not raise the issue of independence of that specification, because I would then still be stymied by the inability of evolutionary processes to get the genome better-adapted. (In that hypothetical case).

  39. Joe Felsenstein: In the original Dembski 2002 argument,

    The 2002 argument???
    Don’t you think science has progressed since then, Joe? Or are you living in the past?
    I’m pretty sure that Dembski has fine-tuned his argument as he should, just like any progressive scientist has to do to remain respected… Einstein fine-tuned and even abandoned some of his theories…

    It seems obvious to me that you expect Dembski’s current scientific views to affect his past views, just like in quantum mechanics, where retrocausality shows that our future actions affect past events…

    This expectation seems reasonable…at least to me… 😉

    Joe Felsenstein: In the original Dembski 2002 argument, where he uses the Law of Conservation of Complex Specified Information, he uses a mapping from this generation back to the previous one to define a region then that corresponds to the specification now.

    Thanks for the explanation. To make sure I understand properly: the mapping is a transformation of some bitstring (a genome) from generation t to a modified bitstring at t+1, right? And Dembski argues that the set of genomes at t has a specification that is as unlikely as that at t+1 because there exists a mapping from one to the other. Since he needs to know the reverse mapping, he violates the condition of independence.

    Joe Felsenstein: If Dembski had taken a region (genomes of high enough fitness) and kept it the same in both generations, and somehow shown that if you ended up in it, you had to start in it, that would be a huge problem for evolutionary biology.

    That part I don’t understand, because that could never be. The whole point of the evolutionary process is to push any genome towards the region of high fitness. In Eric’s previous OP I got the impression that he was not denying that populations could evolve to higher fitness, but rather looking for the place where the information from Intelligence entered the process. He concluded that it was hardcoded in the evolutionary algorithm (Eric, correct me if I misunderstood). Of course it is; his evolutionary algorithm contained the information that allele 1 has higher fitness than 0. In the real world, populations receive that information by feedback from the environment.

  41. keiths: Consider the possibility that you’re misunderstanding the counterarguments or failing to recognize the flaws that critics are identifying.

    Certainly always a possibility, just that does not appear to be the case, in my own humble judgment of my judgment 🙂

    But, at the very least it is objectively clear there is no articulate, concise refutation of the core ID claim: that CSI is conserved.

    We have long rambling articles by Shallit, Erik, and Wein, that end up being really difficult to make heads or tails of, and when I do finally pin down a supposed refutation it is either a strawman, an irrelevant disagreement, a big bag of insults and condescension, or even supports the ID position.

    We have Dr. English and Dr. Felsenstein claiming they have such a refutation, which appears fairly convoluted, and also seems to be the same concept: that you can apply a function to an event and get ASC all over the map. But, as I have pointed out a couple times, you just get a new random variable and the improbability of ASC continues to apply.

    Then we have “refutations” that you’ve offered which seem to amount to you disagreeing with how ID theory is applied to biology, which, first of all, is irrelevant to whether the mathematical portion of ID theory is correct, and second seems to be a lot of speculation and personal opinion. Now, it all may be correct, but it is definitely not clearly and articulately stated, nor does it come across as very definitive or even relevant.

    Over at PS we have Swamidass insisting he can “disprove ID” which just amounts to him proclaiming “I doubt it” and “you should trust my judgment because I’m an information theory expert.” Least convincing of the lot.

    The only cogent response I’ve seen is by Devine, which again is just his personal opinion to utilize algorithmic information more than Dembski does, and is not really a refutation of anything. At least he provides some helpful food for thought and essentially agrees with Dembski.

    So, overall I have seen nothing that qualifies as a ‘refutation of ID’, and I believe I have examined all the ‘refutations’ that are available online.

    On the other hand, as I’ve also stated multiple times, we have well known, mainstream conservation of information theorems that all the critics seem to ignore, and they act like Dembski is proposing something entirely unheard of.

    Thus, the critics seem much less than honest in their criticism of the theory. Instead it appears they have an agenda to discredit ID by any means possible, instead of give the theory a fair hearing.

  42. Corneel: Since the evolutionary process involves natural selection, the specification does not seem independent to me: natural selection results in an increased representation of genomes that fit the specification.

    If the specification is not independent, then the problem is ill-posed. Lack of independence means it is an invalid specification. What is the issue?

  43. Joe Felsenstein: Do you agree with me about that, or not?

    No, as I responded to Corneel just above, you are calling something else specified information other than what Dembski et al. are referring to. So, your example is ill-posed.

    Joe Felsenstein: In the original Dembski 2002 argument, where he uses the Law of Conservation of Complex Specified Information, he uses a mapping from this generation back to the previous one to define a region then that corresponds to the specification now.

    If you can give a specific reference to this argument I can look it up, but currently I do not know what you are talking about. ASC is a form of CSI, and if you want to discard ASC, then I don’t really know what you are referring to.

    Anyways, my hopes have been dashed, and I don’t really see any substantive responses. I will have to devote my limited time elsewhere.

  45. EricMH: If you can give a specific reference to this argument I can look it up, but currently I do not know what you are talking about. ASC is a form of CSI, and if you want to discard ASC, then I don’t really know what you are referring to.

    Sure. Page 148 of No Free Lunch where Dembski writes:

    The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems. Even the staunch Darwinist Richard Dawkins will admit that life is specified functionally, cashing out the functionality of organisms in terms of reproduction of genes. Thus Dawkins (1987, p. 9) will write: ‘Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction.’

    That page is also where you will find the argument about going from a generation back to its antecedents, constructing different specifications of equal strength.

    [EricMH]:
    Anyways, my hopes have been dashed, and I don’t really see any substantive responses. I will have to devote my limited time elsewhere.

    I hope that this does not mean that, after calling for a specific reference, you are departing without waiting for it. That would dash my hopes.

  46. EricMH: Anyways, my hopes have been dashed, and I don’t really see any substantive responses. I will have to devote my limited time elsewhere.

    Yeah, same here.

  47. Joe Felsenstein: That page is also where you will find the argument about going from a generation back to its antecedents, constructing different specifications of equal strength.

    It’s still not clear what exactly you think the problem is, but I will try to guess.

    I think you are claiming that since evolution selects fitter organisms, then perhaps it is almost certain that over a long enough timeline evolution will generate organisms with the fitness-related specifications you list. Thus, you believe this proves that evolution can generate CSI.

    It seems you did not read the rest of the argument about conservation of information. Dembski’s point is *not* that a fitter organism cannot come out of an evolutionary process. Dembski’s point is that if this happens, the CSI exhibited by the organism was contained in, but not originated by, the evolutionary process. That is the point of his analogy of the pencil and the pencil factory.

    The whole point of the conservation of information argument is that if CSI results from some stochastic process, then the process must itself contain at least that much CSI. And in fact Dembski’s vertical no free lunch theorem proof shows the problem actually becomes exponentially more difficult as you backtrack through the responsible processes, as the search spaces grow exponentially. Each prior process, in turn, requires exponentially more CSI.

    Back to the pencil factory: imagine a pencil factory factory. The pencil factory itself is complex, but a factory to make the factory would be absolutely mind-bogglingly complex. That is the problem the conservation of information is demonstrating.

    If you still think I am not getting your point, please quote the exact passage where you think Dembski makes his mistake, and take it apart identifying exactly why there is a mistake, and provide a mathematical example showing that CSI is not conserved. Otherwise, I do not have anything further to say on this matter.

  48. Just bumping this thread, hoping Joe sees it.

    Eric, would you answer my question in the meantime? Doesn’t your pencil factory allow for a non-interventionist scenario? Once the “evolutionary process” is in place, a population can evolve without guidance from an intelligent agent. That has a distinct theistic evolution feel about it.

  49. I am aware of Eric’s comment. Thanks to him for continuing to discuss this. I am currently rereading Dembski (1996 and 2002) to make sure I am correctly characterizing his argument. Will comment on that here soon.
