It is quite common for ID commenters to argue that it is not possible for evolutionary forces such as natural selection to put Functional Information (or Specified Information) into the genome. Whether they know it or not, these commenters are relying on William Dembski’s Law of Conservation of Complex Specified Information, which is supposed to show that Complex Specified Information cannot be put into the genome. Many people have argued that this theorem is incorrect; in my 2007 article I summarized many of these objections and added some of my own.
One of the sections of that article gave a simple computational example of mine showing natural selection putting nearly 2 bits of specified information into the genome, by replacing an equal mixture of A, T, G, and C at one site with 99.9% C.
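The arithmetic behind that figure is easy to check. The following sketch does not reproduce the original simulation, only the information bookkeeping: a site with an equal mixture of A, T, G, and C has 2 bits of uncertainty, and driving it to 99.9% C (here I assume, for illustration, the remaining 0.1% is split evenly among the other three bases) removes nearly all of it.

```python
import math

def shannon_entropy(freqs):
    """Shannon entropy in bits of a nucleotide frequency distribution."""
    return -sum(p * math.log2(p) for p in freqs if p > 0)

# Before selection: equal mixture of A, T, G, C at the site -> 2 bits.
before = [0.25, 0.25, 0.25, 0.25]

# After selection: 99.9% C; the remaining 0.1% split among A, T, G
# (an assumption for illustration).
after = [0.999] + [0.001 / 3] * 3

info_gained = shannon_entropy(before) - shannon_entropy(after)
print(f"Information gained: {info_gained:.3f} bits")  # ~1.987 bits
```

So the site goes from 2 bits of entropy to about 0.013 bits, an information gain of just under 2 bits, which is the "nearly 2 bits" figure quoted above.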
This post is intended to show a more dramatic example along the same lines.
Tiktaalik is still being used as a successful prediction of something. I know it was supposed to be a successful prediction of universal common descent because A) it is allegedly a transitional form between fish and tetrapods, and B) it was found in the “correct” strata: allegedly there was no evidence of tetrapods before 385 million years ago (plenty of fish, though), while there is plenty of evidence for tetrapods around 365 million years ago. Tiktaalik was allegedly found in strata about 375 million years old, and Shubin said that is the strata he looked in because the 365–385 million year range was already bracketed by existing data.
The thinking was tetrapods existed 365 mya and fish existed 385 mya, so the transition happened sometime in that 20 million years.
Sounds very reasonable. And when they looked they found Tiktaalik and all was good.
Then along came another find that pushed the earliest tetrapods back to over 390 million years ago.
Now, had this find preceded Tiktaalik, Shubin et al. would not have been looking for the transitional form after the transition had already occurred; that doesn’t make any sense. And that is why it is a failed prediction: the transition occurred some 25 million years earlier, so Shubin et al. were looking in the wrong strata.
Mark Chu-Carroll has a take-down of the argument here, but I’d be interested to know what the ID proponents who post here make of it. It seems to me so self-evidently wrong that I’d expect ID proponents to be rather keen to point out the errors, yet it gets a shout-out at UD.
I’ve only had time to skim it so far, but as it seems to be an interesting treatment of the concepts variously referred to by ID proponents as CSI, dFCSI, etc., I thought it might be useful. It is also written with reference to AVIDA. Here is the abstract:
Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define “functional information,” I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA–GTP binding energy), I(Ex) = −log2[F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function ≥ Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of function.
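The definition I(Ex) = −log2[F(Ex)] is straightforward to compute for small systems. As a sketch of the letter-sequence illustration mentioned in the abstract (the target word and the matching rule here are my own toy assumptions, not the paper's), take all 3-letter strings over A–Z, define "degree of function" as the number of positions matching a target word, and count the fraction of strings meeting each threshold:

```python
import itertools
import math

def functional_information(n_functional, n_total):
    """I(Ex) = -log2[F(Ex)], where F(Ex) is the fraction of all possible
    configurations with degree of function >= Ex."""
    return -math.log2(n_functional / n_total)

# Toy example (my assumption, not from the paper): degree of function of a
# 3-letter string is the number of positions matching the target "GTP".
target = "GTP"
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def degree(seq):
    return sum(a == b for a, b in zip(seq, target))

n_total = len(alphabet) ** len(target)  # 26**3 = 17,576 configurations

for ex in range(len(target) + 1):
    n_ok = sum(1 for s in itertools.product(alphabet, repeat=len(target))
               if degree(s) >= ex)
    print(f"Ex = {ex}: I(Ex) = {functional_information(n_ok, n_total):.2f} bits")
```

As expected, I(0) = 0 bits (every string trivially qualifies), and the unique perfect match gives I(3) = 3·log2(26) ≈ 14.10 bits. The intermediate thresholds produce the kind of stepped information-versus-function plot the abstract describes.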
I thought it would be interesting to look at following the thread on Abel’s paper. I’d certainly be interested in hearing what our ID contributors make of it.