Tom English has recommended that we read Dembski and Marks’ paper on their Law of Conservation of Information (not to be confused with Dembski’s previous LCI from his book No Free Lunch). Dembski has also touted the paper several times, and I too recommend it as a stark display of the authors’ thinking.
Most people won’t take the time to carefully read a 34-page paper, but I submit that the authors’ core concept of “conservation of information” is very easily understood if we avoid equivocal and misleading terms such as information, search, and target. I’ll illustrate it with a setup borrowed from Joseph Bertrand.
The “Bertrand’s box” scenario is as follows: We’re presented with three small, outwardly identical boxes, each containing two coins. One has two silver coins, one has two gold coins, and one has a silver coin and a gold coin. We’ll call the boxes SS, GG, and SG. We are to randomly choose a box, and then randomly pull a coin from the chosen box.
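The setup is easy to enumerate exactly. Here’s a quick sketch (my own, not from the paper) that computes the unconditional probability of drawing a gold coin:

```python
from fractions import Fraction

# The three boxes and their coins (G = gold, S = silver).
boxes = ["SS", "GG", "SG"]

# Choose a box uniformly (1/3 each), then a coin uniformly from that box.
p_gold = Fraction(0)
for coins in boxes:
    p_box = Fraction(1, 3)
    p_gold_given_box = Fraction(coins.count("G"), 2)
    p_gold += p_box * p_gold_given_box

print(p_gold)  # 1/2
```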
It’s clear that we’re equally likely to end up with a gold coin or a silver coin, so the probability of getting a gold coin (our preferred result) is 1/2. This unconditional probability of 1/2 gets updated when we choose a box. If we choose GG, then our odds increase from 1/2 to 1; that is, they change by a factor of 2, so we’ll say that choosing GG gives us a probability gain (call it β) of 2. Likewise, SG gives us a β of 1, and SS a β of 0. Note that the probability of choosing a given box (namely 1/3) doesn’t exceed 1/β for that box. This is an example of the following universal fact of probability:
If an event E1 updates the probability of an event E2 by a factor of β, then the probability of E1 is at most 1/β.

Defining β as P(E2|E1)/P(E2), the above statement says that P(E1) ≤ P(E2)/P(E2|E1). This is very simple to derive, starting with the following truism:

P(E1 & E2) ≤ P(E2)

Rewriting the left-hand side as a conditional probability:

P(E2|E1)*P(E1) ≤ P(E2)

and dividing both sides by P(E2|E1):

P(E1) ≤ P(E2)/P(E2|E1)
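Applying the inequality to the boxes is straightforward; here’s a sketch (again my own) with E1 = “chose box X” and E2 = “drew a gold coin”, written multiplicatively as P(E1)·β ≤ 1 so that the β = 0 case (box SS) needs no special handling:

```python
from fractions import Fraction

# P(gold | box) for each box.
boxes = {"SS": Fraction(0), "SG": Fraction(1, 2), "GG": Fraction(1)}

p_e1 = Fraction(1, 3)                         # P(choosing any given box)
p_e2 = sum(p_e1 * p for p in boxes.values())  # unconditional P(gold) = 1/2

for name, p_e2_given_e1 in boxes.items():
    beta = p_e2_given_e1 / p_e2   # probability gain conferred by this box
    # P(E1) <= P(E2)/P(E2|E1), i.e. P(E1) * beta <= 1:
    assert p_e1 * beta <= 1
    print(name, beta)             # SS 0, SG 1, GG 2
```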
This says that a large gain in probability is obtained at the “cost” of an unlikely event. Dembski and Marks’ LCI is nothing more than an application of this fact to cases in which P(E1) and P(E2) are based on uniform distributions (which I’ll show in a comment*). So not only is their LCI trivially derivable, but it’s also based on two assumptions of uniform probability, a.k.a. tornado-in-a-junkyard assumptions.
Contrast the simplicity of this concept with the import of the authors’ claims that “the Law of Conservation of Information shows that Darwinian evolution is inherently teleological” and “LCI underwrites the conclusion that Darwinian evolution is teleologically programmed with active information”. These are claims that can be made to appear plausible only through copious amounts of obfuscation and equivocation, which is exactly what you’ll find if you read the paper.
* It appears that subscripts don’t work in comments, so I’ll add this as a footnote to the OP. In case anyone from the Evo Info Lab ever reads this and has some doubts, the following shows that Dembski and Marks’ LCI is an application of

P(E1) ≤ P(E2)/P(E2|E1) (Eq. 1)

under the following two assumptions:

1) The probability distribution over Ω1 is uniform.

2) The unconditional (i.e. prior to E1 being realized) probability distribution over Ω2 is also uniform.
(I should note that Dembski and Marks never actually mention the second assumption, but it holds for all of their examples, and without it their LCI is false. It was Atom, another member of Evo Info Lab, who pointed out this tacit assumption to me.)
If we define E1 ⊆ Ω1 as the set of all outcomes that confer a probability of at least q on E2 ⊆ Ω2, then it follows that q ≤ P(E2|E1). Furthermore, we define p1 and p2 as the probabilities conferred on E1 and E2 by uniform distributions over Ω1 and Ω2 respectively. Then substitution into Eq. 1 is straightforward, giving:
p1 ≤ p2/q
which is precisely Dembski and Marks’ LCI.
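This can be checked concretely. The sketch below is entirely my own construction (the space size, target, and threshold are arbitrary choices): the conditional distributions are built as an average of random permutation matrices, which makes the matrix doubly stochastic, so both uniformity assumptions hold by construction, and p1 ≤ p2/q follows:

```python
import random

random.seed(2)
n = 12  # |Ω1| = |Ω2| = n; size chosen arbitrarily for illustration

# Conditional probabilities P(y|x), built as an average of k random
# permutation matrices.  The average is doubly stochastic: rows sum to 1
# (valid conditional distributions) and columns sum to 1, so a uniform
# choice of x from Ω1 induces a uniform unconditional distribution over
# Ω2 -- the tacit second assumption noted above.
k = 30
cond = [[0.0] * n for _ in range(n)]
for _ in range(k):
    perm = random.sample(range(n), n)
    for x, y in enumerate(perm):
        cond[x][y] += 1.0 / k

e2 = {0, 1}  # an arbitrary target event E2 in Ω2
q = 0.25     # probability threshold defining E1
e1 = {x for x in range(n) if sum(cond[x][y] for y in e2) >= q}

p1 = len(e1) / n  # probability of E1 under the uniform distribution on Ω1
p2 = len(e2) / n  # probability of E2 under the uniform distribution on Ω2
assert p1 <= p2 / q + 1e-12  # Dembski and Marks' LCI: p1 <= p2/q
```

Since the average of P(E2|x) over x equals p2 here, the inequality is just Markov’s inequality in disguise, which is why it can never fail under the two uniformity assumptions.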