Siding with Mathgrrl on a point, and offering an alternative to CSI v2.0

[cross posted from UD Siding with Mathgrrl on a point, and offering an alternative to CSI v2.0, special thanks to Dr. Liddle for her generous invitation to cross post]

There are two versions of the metric for Bill Dembski’s CSI. One version can be traced to his book No Free Lunch published in 2002. Let us call that “CSI v1.0”.

Then in 2005 Bill published Specification: The Pattern That Signifies Intelligence, where he includes the identifier “v1.22”, but perhaps it would be better to call the concepts in that paper CSI v2.0 since, like Windows 8, it has some radical differences from its predecessor and will come up with different results. Some end users of the concept of CSI prefer CSI v1.0 over v2.0.
Continue reading

The Laws of Thought

aren’t.

They are perfectly valid rules of reasoning, of course.  Wikipedia cites Aristotle:

  • The law of identity: “that every thing is the same with itself and different from another”: A is A and not ~A.
  • The Law of Non-contradiction: that “one cannot say of something that it is and that it is not in the same respect and at the same time”
  • Law of Excluded Middle: “But on the other hand there cannot be an intermediate between contradictories, but of one subject we must either affirm or deny any one predicate.”

And of course they work just fine for binary, true-or-false, statements, which is why Boolean logic is so powerful.
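(Just to make that concrete, here is a trivial check, in Python, that all three laws hold for binary truth values. This is my own illustration, not anything from Aristotle or the Wikipedia article.)

```python
# A minimal sketch: checking Aristotle's three laws over Boolean values.
for A in (True, False):
    assert A == A               # law of identity: A is A
    assert not (A and not A)    # law of non-contradiction: not (A and ~A)
    assert A or (not A)         # law of excluded middle: A or ~A
print("All three laws hold for every binary truth value.")
```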

But I suggest they are not Laws of Thought.

Continue reading

Specification for Dummies

I’m lookin’ at you, IDers 😉

Dembski’s paper Specification: The Pattern That Signifies Intelligence gives a clear definition of CSI.

The complexity of a pattern (any pattern) is defined in terms of Shannon Complexity.  This is pretty easy to calculate, as it is merely the probability of getting this particular pattern if you were to randomly draw each piece of the pattern from a jumbled bag of pieces, where the bag contains pieces in the same proportion as your pattern, and stick them together any old where.  Let’s say all our patterns are 2×2 arrangements of black or white pixels. Clearly, if the pattern consists of just four pixels, each of which can be black or white, there are only 16 patterns we can make:

[Figure: the sixteen possible 2×2 patterns]

And we can calculate this by saying: for each pixel we have 2 choices, black or white, so the total number of possible patterns is 2*2*2*2, i.e. 2^4, i.e. 16. That means that if we just made patterns at random we’d have a 1/16 chance of getting any one particular pattern, which in decimals is .0625, or 6.25%.  We could also be fancy and express that as the negative log2 of .0625, which would be 4 bits.  But it all means the same thing.  The neat thing about logs is that you can add them, and get the answer you would have got if you’d multiplied the unlogged numbers.  And as the negative log2 of .5 is 1, each pixel, for which we have a 50% chance of being black or white, is worth “1 bit”, and four pixels will be worth 4 bits.
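If you’d like to check that arithmetic for yourself, here is a quick sketch in Python that enumerates the sixteen patterns and does the same calculation (purely illustrative; nothing here comes from Dembski’s paper):

```python
import math
from itertools import product

# Enumerate every 2x2 black/white pattern: each of the 4 pixels is 'B' or 'W'.
patterns = list(product("BW", repeat=4))
print(len(patterns))            # 16 possible patterns

p = 1 / len(patterns)           # chance of any one particular pattern: 1/16
print(p)                        # 0.0625, i.e. 6.25%
print(-math.log2(p))            # 4.0 bits

# The logs add: each pixel contributes -log2(0.5) = 1 bit, and 4 pixels give 4 bits.
print(4 * -math.log2(0.5))      # 4.0
```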

Continue reading

Searching for a search

Dembski seems to be back online again, with a couple of articles at ENV, one in response to a challenge by Joe Felsenstein, for which we have a separate thread, and one billed as a “For Dummies” summary of his latest thinking, which I attempted to précis here. He is anxious to ensure that any critic of his theory is up to date with it, suggesting that he considers his newest thinking not to be rebutted by counter-arguments to his older work. He cites two papers (here and here) he has had published, co-authored with Robert Marks, and summarises the new approach thus:

So, what is the difference between the earlier work on conservation of information and the later? The earlier work on conservation of information focused on particular events that matched particular patterns (specifications) and that could be assigned probabilities below certain cutoffs. Conservation of information in this sense was logically equivalent to the design detection apparatus that I had first laid out in my book The Design Inference (Cambridge, 1998).

In the newer approach to conservation of information, the focus is not on drawing design inferences but on understanding search in general and how information facilitates successful search. The focus is therefore not so much on individual probabilities as on probability distributions and how they change as searches incorporate information. My universal probability bound of 1 in 10^150 (a perennial sticking point for Shallit and Felsenstein) therefore becomes irrelevant in the new form of conservation of information whereas in the earlier it was essential because there a certain probability threshold had to be attained before conservation of information could be said to apply. The new form is more powerful and conceptually elegant. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks. It therefore suggests an ultimate source of information, which it can reasonably be argued is a designer. I explain all this in a nontechnical way in an article I posted at ENV a few months back titled “Conservation of Information Made Simple” (go here).


As far as I can see from his For Dummies version, as well as from his two published articles, he has reformulated his argument for ID thus:

Patterns that are unlikely to be found by a random search may be found by an informed search, but in that case, the information represented by the low probability of finding such a pattern by random search is now transferred to the low probability of finding the informed search strategy.  Therefore, while a given search strategy may well be able to find a pattern unlikely to be found by a random search, the kind of search strategy that can find it is itself commensurately improbable, i.e. unlikely to be found by random search.

Therefore, even if we can explain organisms by the existence of a fitness landscape with many smooth ramps to high fitness heights, we are left with the even greater problem of explaining how such a fitness landscape came into being from random processes, and must infer Design.
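As I understand the Dembski–Marks papers, the quantity doing the bookkeeping here is what they call “active information”, log2(q/p), where p is the probability that blind search hits the target and q is the probability that the assisted search does. A toy calculation (with made-up numbers, purely for illustration) looks like this:

```python
import math

def active_information(p_blind, q_assisted):
    """log2(q/p): the advantage, in bits, that an assisted search has over
    blind search at finding a given target (as I read the Dembski-Marks papers)."""
    return math.log2(q_assisted / p_blind)

# Made-up numbers, purely for illustration:
p = 1 / 2**20     # the target is a roughly 1-in-a-million shot for blind search
q = 1 / 2**4      # an informed search finds it 1 time in 16
print(active_information(p, q))   # 16.0 bits supplied by the choice of search

# The conservation-of-information claim is that finding a search this good,
# within the space of possible searches, costs at least those 16 bits again.
```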

I’d be grateful if a Dembski advocate could check that I have this right, remotely if you like, but better still, come here and correct me in person!

But if I’m right, and Dembski has changed his argument from saying that organisms must be designed because they cannot be found by blind search to saying that they can be found by evolution, but evolution itself cannot be found by blind search, then I ask those who are currently persuaded by this argument to consider the critique below.

Continue reading

The Chewbacca Defense?

Eric Anderson, at UD, writes, to great acclaim:

Well said. You have put your finger on the key issue.

And the evidence clearly shows that there are not self-organizing processes in nature that can account for life.

This is particularly evident when we look at an information-rich medium like DNA. As to self-organization of something like DNA, it is critical to keep in mind that the ability of a medium to store information is inversely proportional to the self-ordering tendency of the medium. By definition, therefore, you simply cannot have a self-ordering molecule like DNA that also stores large amounts of information.

The only game left, as you say, is design.

Unless, of course, we want to appeal to blind chance . . .

Can anyone make sense of this? EA describes DNA as “an information-rich medium”. Then as a “self-ordering molecule”. Is he saying that DNA is self-ordering and therefore can’t store information? Or that it does store information, and therefore can’t be self-ordering? Or that because it is both it must be designed? And in any case, is the premise even true? And what “definition” is he talking about? Who says that “the ability of a medium to store information is inversely proportional to the self-ordering tendency of the medium”? By what definition of “information” and “self-ordering” might this be true? And is it supposed to be an empirical observation or a mathematical proof?

Granville Sewell vs Bob Lloyd

Bob Lloyd, professor emeritus of chemistry at Trinity College Dublin, wrote an opinion article in Mathematical Intelligencer (MI) commenting on Sewell’s not-quite-published AML article. This was mentioned in a previous thread, where Bob briefly commented. Granville was invited to participate but never showed up.

In response to Lloyd, Sewell submitted a letter to the editor. On the advice of a referee, his letter was rejected. (Rightly so, in my view. More on that later.) Sewell has now written a post on Discovery Institute’s blog describing his latest misfortune. The post contains Sewell’s unpublished letter and some of the referee’s comments. I invite you to continue the technical discussion of Sewell’s points started earlier.

Continue reading

More on Marks, Dembski, and No Free Lunch, by Tom English

Tom English has a great post at his blog, Bounded Science, which I have his permission to cross post here:

Bob Marks grossly misunderstands “no free lunch”

And so does Bill Dembski. But it is Marks who, in a “Darwin or Design?” interview, reveals plainly the fallacy at the core of his and Dembski’s notion of “active information.” (He gets going at 7:50. To select a time, it’s best to put the player in full-screen mode. I’ve corrected slips of the tongue in my transcript.)

[The “no free lunch” theorem of Wolpert and Macready] said that with a lack of any knowledge about anything, that one search was as good as any other search. [14:15] And what Wolpert and Macready said was, my goodness, none of these [“search”] algorithms work as well as [better than] any other one, on the average, if you have no idea what you’re doing. And so the question is… and what we’ve done here is, if indeed that is true, and an algorithm works, then that means information has been added to the search. And what we’ve been able to do is take this baseline, that all searches are the same, and we’ve been able to, in cases where searches work, measure the information that is placed into the algorithm in bits. And we have looked at some of the evolutionary algorithms, and we found out that, strikingly, they are not responsible for any creation of information. [14:40]

And according to “no free lunch” theorems, astonishingly, any search, without information about the problem that you’re looking for, will operate at the same level as blind search.” And that’s… It’s a mind-boggling result. [28:10]

Bob has read into the “no free lunch” (NFL) theorem what he believed in the first place, namely that if something works, it must have been designed to do so. Although he gets off to a good start by referring to the subjective state of the practitioner (“with a lack of knowledge,” “if you have no idea what you’re doing”), he errs catastrophically by making a claim about the objective state of affairs (“one search is as good as any other search,” “all searches are the same”).
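If you want to see the averaging claim in miniature rather than take anyone’s word for it, here is a toy brute-force check (my own sketch, not Wolpert and Macready’s theorem): over all possible target functions on a tiny search space, two quite different fixed search orders need exactly the same number of evaluations on average.

```python
from itertools import product
from statistics import mean

SPACE = (0, 1, 2, 3)                                # a tiny search space of 4 points

def evals_to_first_hit(order, f):
    """How many evaluations a fixed, non-repeating search order needs before it
    samples a point where f(x) == 1 (or 4 if the function has no such point)."""
    for i, x in enumerate(order, start=1):
        if f[x] == 1:
            return i
    return len(order)

orders = [(0, 1, 2, 3), (3, 1, 0, 2)]               # two very different "searches"
all_functions = list(product((0, 1), repeat=4))     # every possible f: SPACE -> {0, 1}

for order in orders:
    avg = mean(evals_to_first_hit(order, f) for f in all_functions)
    print(order, avg)   # both orders average 1.875 evaluations over all functions
```

The same average comes out for any non-repeating order you try; the advantage of a particular search only appears once you know something about which functions are actually in play.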

Continue reading

Todd Wood on the Tennessee bill (and falsification)

He makes some very interesting comments, and also this point, which I think is worth making:

I know for a fact that evolution cannot be falsified; it can only be replaced.

“Falsification” has become a kind of shibboleth that, in my view, has outlived its usefulness as a criterion for what is, or is not, science.  We don’t actually proceed by what people usually mean by “falsification” in science, IMO; we proceed by replacing existing models with better ones (in that sense, of course, all models with a better fit to data than a previous model “falsify” the previous model – but that’s not what people usually mean).  So I think Todd, as so often, is right here.  Of course I think he is radically wrong about the age of the earth, but that’s another story!

Continue reading

Semiotic theory of ID

Upright BiPed has, for a while, been proposing what he has called a “semiotic” theory of Intelligent Design, which I have found confusing, to say the least.  However, he is honing his case, and asks Nick Matzke

…these three pertinent questions regarding the existence of information within a material universe:

  1. In this material universe, is it even conceivably possible to record transferable information without utilizing an arrangement of matter in order to represent that information? (by what other means could it be done?)
  2. If 1 is true, then is it even conceivably possible to transfer that information without a second arrangement of matter (a protocol) to establish the relationship between representation and what it represents? (how could such a relationship be established in any other way?)
  3. If 1 and 2 are true, then is it even conceivably possible to functionally transfer information without the irreducibly complex system of these two arrangements of matter (representations and protocols) in operation?

… which I think clarify things a little.

I think I can answer them, but would anyone else like to have a go? (I’m out all day today).

Functional information and the emergence of biocomplexity

Journal club time again 🙂

I like this paper: Functional information and the emergence of biocomplexity by Hazen et al., 2007, in PNAS, which I hadn’t been aware of.

I’ve only had time to skim it so far, but as it seems to be an interesting treatment of the concepts variously referred to by ID proponents as CSI, dFSCI, etc, I thought it might be useful.  It is also written with reference to AVIDA.  Here is the abstract:

Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define “functional information,” I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA–GTP binding energy), I(Ex) = −log2[F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function ≥ Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of function.

I thought it would be interesting to look at, following the thread on Abel’s paper.  I’d certainly be interested in hearing what our ID contributors make of it 🙂
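The measure itself is easy to play with. Here is a small sketch of I(Ex) = −log2[F(Ex)] for a toy letter-sequence “function” (the target and threshold are invented purely for illustration, not taken from the paper):

```python
import math
from itertools import product

ALPHABET = "AB"
TARGET = "ABBA"   # a purely illustrative "functional" sequence

def degree_of_function(seq):
    """Toy degree of function Ex: how many positions of seq match TARGET."""
    return sum(a == b for a, b in zip(seq, TARGET))

all_seqs = ["".join(s) for s in product(ALPHABET, repeat=len(TARGET))]

def functional_information(E_x):
    """I(Ex) = -log2 F(Ex), where F(Ex) is the fraction of all sequences whose
    degree of function is at least Ex."""
    F = sum(degree_of_function(s) >= E_x for s in all_seqs) / len(all_seqs)
    return -math.log2(F)

for E in range(len(TARGET) + 1):
    print(E, round(functional_information(E), 3))
# A perfect match (Ex = 4) gives -log2(1/16) = 4 bits; weaker thresholds give fewer bits.
```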


Chaos and Complexity

Gil’s post With much fear and trepidation, I enter the SZone got somewhat, but interestingly, derailed into a discussion of David Abel’s paper The Capabilities of Chaos and Complexity, which William Murray, quite fairly, challenged those of us who expressed skepticism to refute.

Mike Elzinga first brought up the paper here, claiming:

ID/creationists have attempted to turn everything on its head, mischaracterize what physicists and chemists – and biologists as well – know, and then proclaim that it is all “spontaneous molecular chaos” down there, to use David L. Abel’s term.

Hence, “chance and necessity,” another mischaracterization in itself, cannot do the job; therefore “intelligence” and “information.”

And later helpfully posted here a primer on the first equation (Shannon’s Entropy equation), and right now I’m chugging through the paper trying to extract its meaning.  I thought I’d open this thread so that I can update as I go, and perhaps ask the mathematicians (and others) to correct my misunderstandings.  So this thread is a kind of virtual journal club on that paper.
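For anyone following along at home, the first equation Mike’s primer covers is just the standard Shannon entropy; here it is in runnable form, as a generic sketch rather than anything specific to Abel’s paper:

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """H = -sum(p_i * log2(p_i)) over the symbol frequencies in the sequence."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("AAAAAAAA"))   # 0 bits per symbol: a single repeated symbol carries no surprise
print(shannon_entropy("ACGTACGT"))   # 2.0 bits per symbol: four symbols, equally frequent
```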

I’ll post my initial response in the thread.