Here is a pattern:

It’s a gray-scale image, so it is just one 2D matrix. Here is a text file containing the matrix:

I would like to know whether it has CSI or not. Here is Dembski’s paper, in which he gives the formula:

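For concreteness, here is a sketch of how the calculation might go, assuming the formula in question is the specified-complexity measure from Dembski’s 2005 “Specification” paper, χ = –log2[10^120 · φS(T) · P(T|H)], with design inferred when χ > 1. The replicational-resources bound 10^120, the descriptive complexity φS(T), and the chance hypothesis H all have to be supplied by the analyst; the numbers below are hypothetical toy values, not derived from the image:

```python
import math

def chi(phi_s: float, p_t_given_h: float, resources: float = 1e120) -> float:
    """Dembski's specified-complexity measure:
    chi = -log2(resources * phi_S(T) * P(T|H)).
    chi > 1 is taken to warrant a design inference."""
    return -math.log2(resources * phi_s * p_t_given_h)

# Toy example: a specific 100-bit pattern under a uniform chance
# hypothesis, with an assumed descriptive-complexity bound of 1e5.
print(chi(1e5, 2.0 ** -100))   # negative: chance is not ruled out
print(chi(1e5, 2.0 ** -600))   # well above 1 for a far rarer pattern
```

Notice that everything hard about the question is hidden in the inputs: without an agreed φS(T) and P(T|H) for the matrix, the formula alone decides nothing.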
It has struck me more than once that a lot of the confusion that accompanies discussions about things like consciousness and free will, intelligent design, teleology, even the explanatory filter, and the words “chance” and “random” arises from a lack of clarity over the difference between *decision-making* and *intention*. I think it’s useful to separate them, especially given the tendency for people, particularly those making pro-ID arguments, but also those making ghost-in-the-machine consciousness or free-will arguments, to regard “random” as meaning “unintentional”. Informed decisions are not random, but not all informed decisions involve *intention*.

This was my first computer:

It was called Dr Nim. It was a computer game, but completely mechanical – no batteries required. You took turns with Dr Nim (the plastic board itself), and the aim was not to be the one left taking the last marble. It was possible to beat Dr Nim, but usually Dr Nim won.
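Assuming Dr Nim played the usual rules (take one to three marbles per turn; whoever takes the last marble loses), its entire “intelligence” reduces to one mechanical rule, which is rather the point about decision-making without intention. A minimal sketch:

```python
def dr_nim_move(marbles: int) -> int:
    """Take 1-3 marbles so as to leave the opponent with a
    count of the form 4k + 1 (the losing positions in this
    misere game). From a losing position, just take one."""
    move = (marbles - 1) % 4
    return move if move else 1

print(dr_nim_move(12))  # takes 3, leaving 9 = 4*2 + 1
```

No randomness and no intention anywhere: the marbles simply fall through flip-flop levers, yet the machine makes informed “decisions” and usually wins.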

Dembski seems to be back online again, with a couple of articles at ENV: one in response to a challenge by Joe Felsenstein, for which we have a separate thread, and one billed as a “For Dummies” summary of his latest thinking, which I attempted to précis here. He is anxious to ensure that any critic of his theory is up to date with it, suggesting that he considers his newest thinking not to be rebutted by counter-arguments to his older work. He cites two papers (here and here) he has had published, co-authored with Robert Marks, and summarises the new approach thus:

So, what is the difference between the earlier work on conservation of information and the later? The earlier work on conservation of information focused on particular events that matched particular patterns (specifications) and that could be assigned probabilities below certain cutoffs. Conservation of information in this sense was logically equivalent to the design detection apparatus that I had first laid out in my book

The Design Inference (Cambridge, 1998). In the newer approach to conservation of information, the focus is not on drawing design inferences but on understanding search in general and how information facilitates successful search. The focus is therefore not so much on individual probabilities as on probability distributions and how they change as searches incorporate information. My universal probability bound of 1 in 10^150 (a perennial sticking point for Shallit and Felsenstein) therefore becomes irrelevant in the new form of conservation of information whereas in the earlier it was essential because there a certain probability threshold had to be attained before conservation of information could be said to apply. The new form is more powerful and conceptually elegant. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks. It therefore suggests an ultimate source of information, which it can reasonably be argued is a designer. I explain all this in a nontechnical way in an article I posted at ENV a few months back titled “Conservation of Information Made Simple” (go here).

As far as I can see from his For Dummies version, as well as from his two published articles, he has reformulated his argument for ID thus:

Patterns that are unlikely to be found by a random search may be found by an informed search; but in that case, the information represented by the low probability of finding such a pattern by random search is transferred to the low probability of finding the informed search strategy itself. So while a given search strategy may well be able to find a pattern unlikely to be found by a random search, the kind of search strategy that can find it is itself commensurately improbable, i.e. unlikely to be found by random search.
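A back-of-envelope version of that bookkeeping, using the Dembski–Marks notion of “active information”, log2(q/p), where p is the blind per-query success probability and q the assisted one (the numbers are toy values of mine, not theirs):

```python
import math

bits = 100           # one specific 100-bit target pattern
p = 2.0 ** -bits     # per-query success probability, blind search
q = 0.5              # per-query success probability, informed search

# The informed search embodies ~99 bits of "active information":
print(math.log2(q / p))

# Conservation of information then says a strategy this good is
# itself at least as hard to find at random as roughly p/q, so
# the improbability has been moved, not removed:
print(p / q)
```

On this accounting, the 99 bits the strategy “saves” at the search level reappear as a 2^-99 improbability at the strategy-selection level, which is exactly the transfer the argument asserts.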

Therefore, even if we can explain organisms by the existence of a fitness landscape with many smooth ramps to high fitness heights, we are left with the even greater problem of explaining how such a fitness landscape came into being from random processes, and must infer Design.
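The “smooth ramps” point can be made concrete with a toy hill-climber (a sketch of the argument, not a model of evolution): the same mutate-and-keep-the-better loop that reliably climbs a smooth fitness function gets nowhere when the fitness function is a needle in a haystack, so whatever success it has is information carried by the landscape:

```python
import random

random.seed(2)
N = 20
target = [1] * N

def climb(fitness, steps=400):
    """Single-bit mutation; keep the mutant if it is no worse."""
    x = [random.randint(0, 1) for _ in range(N)]
    for _ in range(steps):
        y = x[:]
        y[random.randrange(N)] ^= 1
        if fitness(y) >= fitness(x):
            x = y
    return x == target

smooth = lambda s: sum(a == b for a, b in zip(s, target))  # ramp to target
needle = lambda s: 1 if s == target else 0                 # no ramp at all

print(sum(climb(smooth) for _ in range(10)))  # reliably climbs the ramp
print(sum(climb(needle) for _ in range(10)))  # random walk, almost never
```

The Dembski-style question is then: where did the smooth fitness function come from? The evolutionist’s answer is physics and chemistry, not a prior search over landscapes.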

I’d be grateful if a Dembski advocate could check that I have this right, remotely if you like, but better still, come here and correct me in person!

But if I’m right, and Dembski has changed his argument from saying that organisms must be designed because they cannot be found by blind search to saying that they can be found by evolution, but evolution itself cannot be found by blind search, then I ask those who are currently persuaded by this argument to consider the critique below.

I’ve just started reading it, and was going to post a thread, when I noticed that Phinehas also brought it up at UD; so, as a kind of encouragement for him to stick around here for a bit, I thought I’d start one now :)

I’ve only read the first chapter so far (I bought the hardback, but you can read it online here), and I’m finding it fascinating. I’m not sure how “new” it is, but it certainly extends what I thought I knew about fractals and non-linear systems and cellular automata into uncharted regions. I was particularly interested to find that some aperiodic patterns are reversible and some are not – in other words, for some patterns a unique generating rule can be inferred, but for others it cannot. At least I think that’s the implication.
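On the reversibility point, a two-line example (mine, not the book’s) with elementary rule 90 shows how a cellular automaton can fail to be invertible: a configuration and its bitwise complement step to the same successor, so the update map is many-to-one and no unique history can be read off the output:

```python
def rule90_step(cells):
    """One step of elementary CA rule 90 on a ring:
    each cell becomes (left neighbour XOR right neighbour)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

a = [0, 1, 1, 0, 1, 0, 0, 1]
b = [1 - c for c in a]        # the bitwise complement of a

# Two different predecessors, one successor: rule 90 is not
# injective, so running it backwards is ambiguous.
print(rule90_step(a) == rule90_step(b))   # True
```

This works because complementing both neighbours leaves their XOR unchanged; genuinely reversible rules have to be built so that every configuration has exactly one predecessor.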

Has anyone else read it?