# Belling the Cat

The Mice once called a meeting to decide on a plan to free themselves of their enemy, the Cat. At least they wished to find some way of knowing when she was coming, so they might have time to run away. Indeed, something had to be done, for they lived in such constant fear of her claws that they hardly dared stir from their dens by night or day. Many plans were discussed, but none of them was thought good enough. At last a very young Mouse got up and said: “I have a plan that seems very simple, but I know it will be successful. All we have to do is to hang a bell about the Cat’s neck. When we hear the bell ringing we will know immediately that our enemy is coming.” All the Mice were much surprised that they had not thought of such a plan before. But in the midst of the rejoicing over their good fortune, an old Mouse arose and said: “I will say that the plan of the young Mouse is very good. But let me ask one question: Who will bell the Cat?”

More heat than light seems to me to be generated by the demand for IDists to “define CSI” and the equations that are fired back in response. Nobody is disputing that we have plenty of equations.  Here is that bright young mouse, Dembski’s:

χ = –log2[10^120 · φS(T)·P(T|H)]

The problem seems to me to lie in Belling the Cat.

So let’s take a closer look at that equation.

Dembski defines φS(T) as:

The number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T

Where S is “a semiotic agent”.  Fair enough.  If a “semiotic agent” (me, you, Dembski, a visiting Martian) spots a pattern that can be described simply enough, it is a candidate for CSI testing, whether it is a black monolith on the moon (“a black monolith”), faces on Mount Rushmore (“faces of American presidents”), or a sequence of nucleotides that results in a protein that helps an organism survive (“functional protein”).
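Computing χ itself is trivial arithmetic once the ingredients are supplied. Here is a minimal Python sketch of the formula; the input values below are invented purely to show the mechanics, not taken from any real calculation:

```python
import math

def chi(phi_S_T, P_T_given_H):
    """Dembski's specified complexity:
    chi = -log2(10^120 * phi_S(T) * P(T|H)).
    Dembski infers design when chi > 1."""
    return -math.log2(1e120 * phi_S_T * P_T_given_H)

# Invented numbers, purely for illustration:
print(chi(1e5, 1e-150))  # comes out well above 1: "design"
print(chi(1e5, 1e-60))   # comes out well below 0: no design inference
```

The point, of course, is that the second argument, P(T|H), is doing all the work, and that is exactly the quantity at issue below.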

However, it’s the next bit that presents the cat-belling problem, and it’s a problem, I suggest, with any of the definitions of CSI, or its various acronymic relatives, so far proffered: H.

To infer design, Dembski requires that we reject the “chance hypothesis”, H. In his Specification paper, Dembski suggests various examples of chance hypotheses, the rejection of which might lead us to conclude Design:

• that a coin is fair
• that an archer hit a small target by chance
• that a die is fair, and that the rolls are stochastically independent

All fine so far. But then:

• the relevant chance hypothesis that takes into account Darwinian and other material mechanisms

And there’s your belling problem, right there.  Dembski’s entire solution to the problem of detecting design absolutely depends on the proper calculation of the distribution of probabilities under his null hypothesis.  As he himself says:

We begin with an agent S trying to determine whether an event E that has occurred did so by chance according to some chance hypothesis H (or, equivalently, according to some probability distribution P(·|H)).

That is just fine if you’ve got a nice tame cat, like a fair coin or die, and we simply want to know whether the coin or die is indeed fair, because we can define “fair” as a very specific probability distribution, for which we have a perfectly good theorem. We can also compute a fairly good probability distribution for the landing points reached by arrows from a blind archer, either by empirical means or by some kind of null model. But in the context of inferring design from biology, the probability distribution under a “chance hypothesis that takes into account Darwinian and other material mechanisms” is precisely what Darwin and evolutionary biologists spend their days trying to find out!

If ID proponents can calculate the probability distribution under a “chance hypothesis that takes into account Darwinian and other material mechanisms”, then, cool. Science will be done, and the Nobel committee can be disbanded.

But until they’ve done that, no matter how many equations they produce, they haven’t given us any definition of CSI that will allow us to detect design in biology, no matter how useful such definitions may be for detecting nefarious design in seedy gaming houses, or for telling whether an archer is peeking through a blindfold.

The cat remains unbelled.

## 83 thoughts on “Belling the Cat”

1. I mostly keep out of CSI discussions, because the whole topic seems bogus. For that matter, I remain unconvinced that semiotics is a legitimate area of study.

Dembski and co apparently take “information” to be the name of a natural kind. And that seems completely wrong to me. To me, the term “information” implies abstractness, detachment from physical process.

As an example, consider the player pianos that were at one time common. You inserted a roll of paper with punched holes, and the mechanism of the player piano used those holes to trigger the motion of the piano keys. To me, that player roll was never information. It was more like a template. It was something used as part of a causal process.

By contrast, consider sheet music, with notes written on a staff. Those are information — or, more properly, a representation of information. The difference is that they are detached from the causal mechanism.

I see DNA as more like the piano roll than like the sheet music. It is part of a causal mechanism. It is more like a template than like information.

2. Oh, I agree. It’s bogus in so many ways.

But the biggest bogosity has to be in the entire idea of making an inference by rejecting a null that you can’t actually compute without knowing the answer to the question you wanted to address in the first place.

3. I mean, you can reject the null that a coin is fair, because we define a fair coin as one that will, when tossed, produce a series of outcomes that exhibit a certain probability distribution.

We can’t reject the null that life forms are undesigned, because we don’t define undesigned forms as forms that have a certain probability distribution.
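To make the contrast concrete, here is what rejecting the “fair coin” null actually looks like, sketched in Python with nothing but the standard library. The null distribution is fully specified by the binomial theorem, which is precisely what we lack in the biological case:

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k heads in n tosses, given P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def fair_coin_pvalue(heads, n):
    """Two-sided p-value for the null 'the coin is fair': sum the
    probabilities of all outcomes at least as unlikely as the one observed."""
    obs = binom_pmf(heads, n)
    return sum(binom_pmf(k, n) for k in range(n + 1) if binom_pmf(k, n) <= obs)

print(fair_coin_pvalue(50, 100))  # ~1.0: nothing to see here
print(fair_coin_pvalue(90, 100))  # vanishingly small: reject "fair"
```

Note that every line of this depends on being able to write down `binom_pmf` in the first place; there is no analogous function for “undesigned life form”.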

4. Although some will say (Upright Biped, for example) that DNA isn’t quite like a piano roll, because there is an arbitrary relationship between codon and amino acid, not a straightforward mechanical one.

And it may indeed be arbitrary – on an alien planet the relationship might be quite different. But that doesn’t mean that it isn’t still mechanical. Given the messenger RNA sequences we happen to have, the codon-amino acid mapping is what it is.

And there would be plenty of selective advantages to any proto-coding system that tended to be reliable, i.e. selection for optimum mappings.

5. And this is why I’d be interested to see KF (et al) defend his “needle in a haystack the size of the cosmos” analogy and relate it to actual biology, but in a discussion with actual experts in the field rather than the echo-chamber.

6. Kairosfocus responds to me, although whether he actually read my piece here is unclear. I will assume he has, and invite him to come here and discuss it in person if he would like (rather than holler across the Gulf):

EL (via Joe):

Nope.

The pivotal issue is sampling theory, not probability distributions.

Dembski’s CSI definition actually contains a probability distribution, and if he can’t compute it (which he can’t), he can’t do the Fisherian hypothesis testing, which is what he advocates. So this is a “pivotal issue”: Dembski’s CSI falls at the first fence when it comes to biological systems because there is no way of computing the probability distribution under his null.

In essence, as has been pointed out over and over and over again, but ignored, when one takes a relatively small sample of a large population, one only reasonably expects to capture the bulk, not special zones like the far tails or isolated and highly atypical zones. This is like the old trick of reaching a hand deep into a sack of beans and pulling out a handful or two to get a good picture of the overall sack’s contents.

Actually, the relative sample size is irrelevant – what matters in sampling theory is the absolute sample size, and once you have a substantial sample then the population size ceases to matter, as long as you have sampled randomly. So if you want to estimate the mean size of the beans in a sack, you need a decent sample size. But the sample size you need does not depend on the size of the sack. So if it has been “pointed out over and over and over again” it’s been wrong over and over and over again. But I think KF is confused about sampling theory…
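This is easy to demonstrate by simulation. A minimal Python sketch (the population values and sizes are arbitrary): the spread of the sample mean is governed by the sample size n, and barely changes when the “sack” grows a hundredfold:

```python
import random
random.seed(1)

def sample_mean_spread(pop_size, n=100, reps=2000):
    """Standard deviation of the sample mean over repeated random
    samples of size n drawn from a population of size pop_size."""
    pop = [random.gauss(10, 2) for _ in range(pop_size)]
    means = [sum(random.sample(pop, n)) / n for _ in range(reps)]
    mu = sum(means) / reps
    return (sum((m - mu) ** 2 for m in means) / reps) ** 0.5

print(sample_mean_spread(1_000))    # small sack
print(sample_mean_spread(100_000))  # sack 100x bigger: spread barely moves
```

Both figures come out close to 2/√100 = 0.2, just as the theory says: only n matters, provided the sampling is random.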

When we have config spaces for 500 bits or more, we are dealing with pops of 3.27 * 10^150 and up, sharply up. The atomic resources of the solar system working at fastest chemical reaction rates and for the scope of the age of the cosmos, would only be able to sample as one straw to a cubical haystack 1,000 LY thick, about as thick as our Galaxy. The only thing we could reasonably expect to pick up on a blind sample of such scope, would be the bulk. Here, straw, and not stars or solar systems etc.

KF doesn’t actually seem to be talking about “sampling theory”, as the term is normally understood, at all. What he is talking about is what Dembski calls “probabilistic resources” – the number of “trials” (as they are often called – Excel calls them that, for one) that you would have to have in order to have a decent chance of at least one result in the tail of some distribution. The more extreme the tail, the greater the number of trials you’d need before you netted one. But that’s exactly what the term probability distribution means – whereas in a frequency distribution, the height of the histogram bars tells you how often the observation in question occurs, in a probability distribution those heights are expressed as a proportion of the total number of observations, giving you an estimate of how probable it is that you would make any given observation on a single occasion, from which you can compute very simply what that probability would be for N trials: 1 – (1 – p)^N. You don’t need sampling theory to do this – you just need the probability distribution and a calculator. But you can’t do it without the probability distribution, which is precisely what neither Dembski nor Kairosfocus has.
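That calculation really is a one-liner. A Python sketch, where p is whatever your probability distribution assigns to the tail event (the number nobody has supplied for biology) and the example values are invented:

```python
def p_at_least_one(p, n):
    """Probability of at least one occurrence of a
    p-probability event across n independent trials."""
    return 1 - (1 - p) ** n

# Given p, rare events become near-certain with enough trials:
print(p_at_least_one(1e-6, 1_000))       # ~0.001
print(p_at_least_one(1e-6, 10_000_000))  # ~0.99995
```

The formula is trivial; it is obtaining p, not plugging it in, that is the cat-belling problem.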

Where also, the other thing that you have long, and unreasonably, refused to accept is that once we deal with specifically functional configs of sufficient complexity, the constraints of proper arrangement of the right parts to achieve function confine us to narrow and very unrepresentative zones of the space of possibilities. Islands of function for illustrative metaphor. All of this has been accessible long since but you have refused to listen.

Somebody is refusing to listen, it seems, KF, but I don’t think it’s me 🙂 The above is assertion, not argument, and I do not share your view that it is correct. Clearly some people do (Behe, for instance), but the vast majority of biologists do not. If “biological fitness space”, to use the jargon, is smooth and multi-dimensional, then there is no reason to suppose that even complex configurations are on “Islands of function”. And as long as similar genotypes result in similar phenotypes (they do), “fitness space” will be smooth, and as long as there are many different traits that can potentially enhance the chances of living and breeding successfully (there are), fitness space will be multi-dimensional. So why would we expect “islands”? Leaving aside, of course, the origin of self-replication itself, which remains without a detailed theory at present. But if that was the basis of ID, then why all the sniping at poor old Darwin, who never even claimed to be able to explain how self-replication got going in the first place?

I will simply say that by looking at sampling theory without having to try to get through a thicket of real and imaginary objections to probabilistic calculations, we can easily and readily see why it is unreasonable on the gamut of the solar system (or for 1,000 bits the observed cosmos) to expect to encounter FSCO/I by blind chance and mechanical necessity.

The only sense in which “sampling theory” has any bearing on anything I’ve said is that if you take a random sample from what you think is a single population (of mice, for instance) and you find an outlying value that is extremely unlikely to have turned up in a random sample, you are entitled to conclude that it probably came from a different population (of rats, for instance). But the point is that in order to work out whether the observed value is an outlier, you have to have some way of computing the probability distribution under your null (that the sample is all mice). So you simply cannot escape from the part of the CSI definition that neither you nor Dembski can provide: the probability distribution under the null of non-design.

Where also of course the thresholds of complexity chosen were chosen exactly for being cutoffs where the idea that chance and necessity would be reasonable would become patently ridiculous. It just turns out that even 1,000 bits is 100 – 1,000 times fewer bits than are in the genome for credible first cell based life.

No, it doesn’t “turn out” like that – what it “turns out” to be depends absolutely on your null distribution. Packing that into a black box called “what would be reasonable” is begging the entire question. It’s like saying that this boa constrictor is longer than a piece of string, without specifying how long the piece of string is. Simply saying “a reasonable piece of string” gets us nowhere.
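For what it’s worth, the raw arithmetic behind these thresholds is easy to verify; it is the distribution over the configuration space, not the exponents, that is missing. A quick check in Python, using only the numbers quoted above:

```python
import math

# KF's 500-bit configuration space:
print(f"{2**500:.3e}")   # 3.273e+150, i.e. the quoted "3.27 * 10^150"

# Lloyd's 10^120 bound, expressed in bits:
print(math.log2(1e120))  # ~398.6 bits
```

Nothing in either line tells you the probability of any particular configuration; that still requires P(T|H).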

And, that is the pivotal case as this is the root of the suggested Darwinian tree of life. Where, precisely because the von Neumann self replicator [vNSR] required for self replication is not on the table, cutting off the hoped for excuse of the wonderful — though undemonstrated — powers of natural selection acting on chance variations.

The only empirically warranted explanation for the FSCO/I pivotal to first life is the same as the only observed source of such: design.

Not unless you can produce the probability distribution for OOL under the null of no design. We know even less about that than we do about the probability distribution of complex life, given simple life, but that doesn’t entitle us to infer anything from its tails – quite the reverse. All we can conclude, as most of us do conclude, is that we don’t yet know how self-replication on earth got started.

The rest of KF’s post is about Search for a Search, and we already have a thread on that, so I’ll leave it there, repeating my open invitation to KF to come over and discuss it in person, although I will ask KF to consider the possibility that the objections to the Search for a Search argument are substantial, and, as ever, hinge on characterising probability distributions that we simply do not have the information necessary to compute.

Those probability distributions won’t go away. Giving them algebraic or acronymic representations won’t butter any parsnips.

7. If you mix DNA/RNA oligomers, they will find their complementary sequence (if it exists) and bind to it. If one could create an ‘infinite string’ of random DNA, and chuck oligomers at it, they would all bind somewhere. They would all ‘find’ the relevant ‘information’. CSI does not come into it. The ‘information’ is in relative binding energies – physics. And this is also true of tRNA. Anticodons bind complementary sequence. They do so in a ‘controlled’ way, but there is no decision-maker – the tRNA that binds most strongly displaces all others. Which is why the system does not need constant supervision.

8. The probability of KF coming here is very low. The official reason is that this is a fever swamp of Nazi Marxist family-threateners and Nazi Marxist family-threatening enablers. I doubt even the head of TWT on a stick would satisfy him, although he may see it as a basis for negotiations.

9. Man, I finally get the argument. Or rather, I finally understand the details of the argument thanks to this analogy:

No, it doesn’t “turn out” like that – what it “turns out” to be depends absolutely on your null distribution. Packing that into a black box called “what would be reasonable” is begging the entire question. It’s like saying that this boa constrictor is longer than a piece of string, without specifying how long the piece of string is. Simply saying “a reasonable piece of string” gets us nowhere.

Thanks Lizzie. I think I really get the sleight of hand that Dembski is attempting now.

10. Of course, KF’s root and tree analogy does not mean that the tree of life cannot “grow” without a “root” theory of origin of life. The tree is a diagram. He is confusing a diagram with, well, an actual tree that grows in the ground. This is stretching an analogy too far.

All explanations of the theory of evolution implicitly begin with “Given self-replicators…” And they are (we are) of course a given.

KF’s tortured analogy is like saying you can’t have the standard model of Physics without a complete theory of baryogenesis*. I know KF likes his scientists and philosophers to have matured for a few centuries, but even he must have heard of the standard model.

You will report his error back to him, won’t you Joe?

*It is entirely possible that a creationist might attribute this to a cosmic battle between Jesus and Satan, with Jesus winning out against the evil one’s anti-matter. I’m sure there are many wonderful theories waiting to be expressed when it comes to cosmology.

11. Yeah, it’s kind of an aha! thing. But it’s the reason why people have been pointing out for so long that Dembski is constructing what Dawkins called “argument by incredulity.” Dembski is saying “life is so complicated that it doesn’t seem reasonable to me that it could have happened without the Designer.”

Now, a related question might be how we could construct a probability distribution that seems reasonable to Dembski, such that we could take unknown objects, apply the “reasonable to Dembski” test, and determine Design that way. And I recall that Dembski has been presented with plenty of objects to do this with, provided he shows his calculations. He’s never accepted this challenge.

At Dover, Behe essentially testified that “design” is an attribute of an object like color or mass, directly observable. He also admitted that only members of a particular religious sect could see it!

So we come full circle. Dembski’s sleight of hand isn’t in hiding his inability to calculate his probability distribution behind a lot of misdirection. His sleight of hand is in taking a particular religious doctrine and trying to make it all sciency. If it weren’t religion, he’d long since have admitted error like any scientist.

12. davehooke:
KF’s tortured analogy is like saying you can’t have the standard model of Physics without a complete theory of baryogenesis*. I know KF likes his scientists and philosophers to have matured for a few centuries, but even he must have heard of the standard model.

Remember that the Creationist Model holds that everything was poofed in a single atomic event. All mass, energy, physics and chemistry and everything — including life, which has not changed since it all got poofed up 6000 years ago.

So to force-fit reality within this model, evolution MUST BE this initial act of creation. There is no implication of process in the creationist model. KF’s model of physics, similarly, must and does include baryogenesis. Part of the Great Poof.

13. On page 23 of Dembski’s “Specification” paper, he purports to make “specified complexity” insensitive to context by borrowing from Seth Lloyd.

Even so, it is possible to define specified complexity so that it is not context sensitive in this way. Theoretical computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history. This number sets an upper limit on the number of agents that can be embodied in the universe and the number of events that, in principle, they can observe. Accordingly, for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M•N will be bounded above by 10^120.

The reference is to Seth Lloyd, “Computational Capacity of the Universe,” Physical Review Letters 88(23) (2002): 7901–4.

This gets to the heart of Dembski’s – and all of ID/creationism’s – misconceptions and misrepresentations about the physical universe.

Seth Lloyd’s paper was in Physical Review Letters. One doesn’t publish a paper in that journal by getting the physics wrong. Even though I haven’t read that particular paper, I am pretty sure that Lloyd wasn’t discussing the fact that matter is condensing even as we can look out and see billions upon billions of stars and galaxies along with all the stuff we see on planets that are the result of all that condensation.

Why Dembski thinks Lloyd’s calculation has anything to do with how matter behaves would escape me if it weren’t for the fact that I already know where ID advocates get their ideas about atoms and molecules. It goes right back to “Spontaneous Molecular Chaos” and all those tornado-in-a-junkyard types of argument; to Henry Morris and Duane Gish, and now Granville Sewell.

Condensing matter is an indisputable, observable fact that anybody can verify instantly even as they sit at their computers asserting otherwise.

With just that basic knowledge, there would be no way that Dembski could assert that what we see is the result of nothing but uniform random sampling that is “impossible” in the lifetime of the universe.

This is simply another example of hijacking a paper and abusing it.

14. Lizzie:

But the biggest bogosity has to be in the entire idea making an inference by rejecting a null you can’t actually compute without knowing the answer to the question you wanted to address in the first place.

Exactly. It’s a circular argument.

I’ve been raising this issue since at least 2006:

Dembski’s refinement runs into trouble, though, because he admits that to determine that a system has CSI, we must estimate the probability of its production by natural means. Systems with CSI have a low probability of arising through natural means.

This renders the reasoning circular:

1. Some systems in nature cannot have been produced through undirected natural means.

2. Which ones? The ones with high CSI.

3. How do you determine the CSI of a system? Measure the probability that it was produced through undirected natural means. If the probability is vanishingly small, it has CSI.

4. Ergo, the systems that could not have been produced through undirected natural means are the ones which could not have been produced through undirected natural means.

(Posting as ‘Karl Pfluger’ at UD)

The circularity also plagues other acronymic derivatives of CSI such as gpuccio’s “dFSCI” and KF’s “FSCO/I”.

I keep raising the issue with ID proponents, but they have no coherent answer.

15. Well, KF has responded, but unfortunately doesn’t appear to have read what we’ve written:

Joe (& TSZ):

There is no sleight of hand involved.

Sampling theory is well known and is routinely used to characterise distributions on the known properties of such sampling in light of the law of large numbers [i.e. a large enough sample, often 25 – 30 in cases relevant to bell type distributions — the far tails are special regions and tend not to be picked up, we have had the discussion about dropping darts from a ladder to a paper cut-out long since (years and years ago . . . ), you are just not going to easily hit the tail by chance with reasonable numbers of “drops” . . .] and related results.

Let me repeat: you are confusing “sampling theory” – which is primarily to do with the size of sample you need to get a reliable estimate of the parameters of a population, and which is independent of the size of the population – with the principle of a probability distribution, which tells you the probability (vertical axis) of observing a given event type (horizontal axis), given one try/draw/trial.

We all know that if an event is in the extreme tails of a probability distribution, you are going to need a lot of observations (usually not called a “sample”, so that may be where you are confused – usually called trials, or draws, or, indeed, observations) in order to have a decent chance of seeing it, which is why bird-watching is so boring, unless you are fascinated by house sparrows. Nobody is disputing this point.

What we are disputing is where you got that distribution in the first place.

Indeed, this theory and its needle in the haystack result is what lies behind the statistical form of the second law of thermodynamics.

The basic point is a simple as the point of the proverb about (blindly) searching for needles in haystacks.

Namely, if there is ever so much stack and ever so little needle, it is going to be very hard to find the needle in the stack.

Right. Now, calculate the size of the stack, and the number of needles, and I might be interested.

Until then, you don’t have a definition of CSI.

16. KF responds:

In one sharp, short word: lying.

Ho hum, that’ll advance the conversation.

17. “What you are saying is incorrect; I have told you it is incorrect, therefore for you to repeat it means you are lying. No, I won’t listen to your argument as to why I am incorrect, because I don’t listen to lies”.

18. Interesting piece by Harold Morowitz on the distinctions between thermodynamic entropy and logical entropy.

Good as far as it goes, till it starts on the ‘logical entropy’ of Life. It’s on Panspermia.org, in pursuit of what I would regard as a rather ‘unnecessary’ theory. Because replication, once in train, organises chemistry, it can in principle start from a single seed. The probability of that seed arising on a particular planet may be constrained by the probabilistic resources available on that planet. But if one brings in a whole set of planets, one increases the purchase of lottery tickets. Having started somewhere, Life (by rather vague means) can go everywhere. But it isn’t necessary to pass that spark around. The mere existence of N planets with the appropriate chemistry means that the extra probabilistic resources are being explored, regardless of any transfer mechanism. The planet(s) on which Life kicks off are the ones that possess it (duh).

So Morowitz starts off making a valid and important distinction, but then accuses ‘darwinists’ of seeking refuge in the confusion between informatic and thermodynamic entropy. Which I don’t think is the case. Accepting the ‘materialist’ account, and travelling back through the generations to a primordial replicator – a system that had the capacity to make copies, however imperfect – that replicator is typically assumed to have had one of a limited set of configurations from the space of all sequences. Hitting upon that magical sequence is regarded as a blind search, by some dimly-imagined mechanism trying out random strings. We might have a hindsight view that this sequence has low informatic entropy, simply because of the many ways that sequence could be ordered that aren’t replicators.

This view, however, tends not to be taken by ‘darwinists’. They don’t tend to look at sequences as informatic entities at all. There are base combinations with the physical attributes that enable replication (given an environment that permits it). Their ‘unusualness’, in a freshly-minted planet’s worth of environments over a few hundred million years, is simply unknown.

One possibility that I think is worth exploring is that the initial ‘selector’ was not absolute sequence at all, but complementarity. Single strand RNA cyclises at very short chain lengths, preventing further polymerisation, and is susceptible to attack by the 2′ -OH. But RNA strands will hybridise with their complement – they form double stranded RNA by hydrogen bonding. This stiffens the chain, reducing cyclisation, and renders -OH attack less catastrophic. The driver for hybridisation is good old-fashioned thermodynamic entropy. The absolute sequence does not matter; the stochastic presence of a complement does. Double strands are not indefinitely stable; among such a population of relatively stabilised double strands, those that can actively assemble their complement would have the advantage over those that have to ‘fish’ for it. This capacity is ‘informatic’ only in the sense that some base sequences would perform the assumed function better than others.

19. I have asked KF many times what specific biological process he imagines is occurring while the “haystack” is being sampled.

To me that is rather important: even if the stack is almost all hay and the needles few, depending on the process that is taking place, that might not even matter.

Of course, he cannot state the process because then it would be obvious that no actual biologist would recognize that process as part of biology and there goes KF’s entire argument. It’s in ID’s interest not to get too specific I think.

As KF himself says

At this stage, to erect that sort of strawman in the teeth of easily accessible and abundant evidence to the contrary is not only a loaded strawman argument but one rooted in willful disregard for duties of care to truth and fairness; in the hope of profiting by a misrepresentation or outright untruth being perceived as true.

It’s simply a strawman that (for example) proteins assemble from scratch totally randomly into complex configurations: 0 to 60 without passing 1–59. Yet KF’s entire argument depends on that parody, a strawman that would not be recognized as a true representation of observed reality were it to be spelled out in biological terms rather than couched in analogy.

As a matter of fact, getting the right nut and bolt together and bolting it up to the right torque in the right place in a pile of galvanometer parts is already a stiff challenge for the tornado. And I would never trust an electrical “circuit” assembled by a tornado!

Except that nobody makes that claim, KF, so why do you keep repeating it as if it accurately represents your opponents’ position?

Oh, that’s right…..

20. It seems to me that asking WJM to demonstrate his claim that FSCO/I is well defined and can be used to actually determine values for the FSCO/I present in arbitrary biological entities (or indeed anything at all) spooked them all badly. So much so that that particular avenue of conversation is now an “Immoral rhetorical stunt” that “goes to character”.

Cognitive dissonance crisis perhaps?

21. Well, yes, but he has to provide some support for any model he proposes. Astute creationists know that the Bible is not sufficient scientific authority in the real world, hence the whole ID farrago.

Employing KF’s analogy, we could say that without a theory of baryogenesis, no scientific theory has a “root”. We can discount all science. So why not apply the analogy to baryogenesis and be done with science?

Because it exposes the analogy as a complete crock, of course.

Anyhow, only a fool implies that not having a scientific theory for something advances your favourite conjecture one jot. It is to be hoped that, as a venue that values intellectual rigour, accuracy, and indeed right reason, the more enlightened patrons of UD might correct KF on this point.

22. Mike Elzinga: Here is Lloyd’s “Computational Capacity of the Universe.” I was right; Dembski didn’t read it. I would bet he never got past the abstract.

To emphasize just how bizarre Dembski’s use of Seth Lloyd’s calculation is in setting his CSI threshold, consider that Lloyd did his calculation for the entire universe from its beginning, according to the big bang model. Lloyd doesn’t mention dark matter or dark energy, but he mentions that most of his result comes from the computational capacity of the matter-dominated universe. Matter has already done its thing.

His conclusion states that the universe can have performed no more than 10^120 elementary logical operations on 10^90 bits. Think about that for a moment. Everything has already happened; including the formation of the planets and the evolution of life on at least one of those planets.

Now when Dembski or any of the ID/creationists over at UD assert that the amount of information in something, like a protein or any other subset of the universe, must require as much information or as many logical operations as the universe in order to get over Dembski’s threshold, they are saying that a subset of the universe requires more logical operations than the entire universe of which that subset is already a part.

In other words, the universe requires more logical ops than the universe requires.

Dembski has also swallowed his tail.

23. Today, William got an incredible deal on an old Victorian house. Highly satisfied with his business acumen, William settled in for a blissful night of sleep in his new home.

SLAM!

William woke with a start. He listened intently. But he didn’t hear anything, so he settled back to sleep.

Cree..eak

William listened even more closely this time until, after a bit, the creaking noise died away. For some reason, he recalled the seller’s maniacal laughter just after William signed the papers to buy the house.

SLAM!

William was trembling and his teeth were rattling. He thought about getting out of bed to investigate. Instead, he pulled the covers over his head.

Cree..eak

Hmm, William thought. Being a famous design theoretician, I can use the patented (not really) Dembski Inference to determine if the pattern is being caused by a ghost, er, some unspecified intelligent cause.

SLAM!
Cree..eak
SLAM!
Cree..eak
SLAM!
Cree..eak
SLAM!
Cree..eak

Dembski Inference

χ = –log2[ BIGNUM · φS(T) · P(T|H) ]

φS(T) and P(T|H) are independent measures, that together, even if nothing is known about how a pattern arose, reliably signal the action of an intelligent cause.

Per Dembski, there are two possible results. A negative is not determinative. It might be design, it might not. On the other hand, a positive is most certainly designed.

Let’s assume the sequence of Cree..eaks and SLAMs! appears superficially random. φS(T) is large, as the shortest description is the length of the entire sequence. This leads to a negative result. Unfortunately, this is not very comforting to our poor hero. It might still be a ghost!

Not being able to sleep, William ponders some more, and discovers a chaotic function that can describe the apparently random sequence. Now φS(T) is small, as the pattern has a short description. How do we determine a plausible chance hypothesis, H? A uniform distribution, perhaps; or, better, a distribution that accounts for the overall observed distribution. In either case, P(T|H) is very small. So we have a positive match for CSI.
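William’s two verdicts can be sketched numerically. A minimal sketch, assuming made-up illustrative figures for φS(T) and P(T|H) (they are not measured values — the point is only the sign flip):

```python
import math

def chi(phi_s, p_t_given_h, bignum=1e120):
    """Dembski's specified-complexity metric as quoted above:
    chi = -log2(BIGNUM * phi_S(T) * P(T|H))."""
    return -math.log2(bignum * phi_s * p_t_given_h)

# Case 1: the sequence looks random, so the shortest description is the
# whole sequence and phi_S(T) is huge (hypothetical figure).
print(chi(phi_s=1e60, p_t_given_h=2**-200))  # deeply negative: no design call

# Case 2: a short chaotic-function description is found, so phi_S(T)
# is small, while P(T|H) under a uniform H stays tiny (hypothetical).
print(chi(phi_s=1e3, p_t_given_h=2**-500))   # positive: "design" inferred
```

Same formula both times; only the plug-in guesses changed, which is rather the point of the ghost story.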

It’s a ghost!!

(Or a loose shutter.)

24. Another Bell – Jocelyn Bell Burnell – comes to mind. Regularity was observed in the data, a possible sign of intelligence, but actually deriving purely from rotation. Rotation gives a regular, economically-described signal as surely as night follows day …

The mechanism that ID-ers think is insufficient to the job of CSI-generation is point mutation and Natural Selection – a series of tiny changes every one of which confers an advantage. But sequence duplication is a powerful generator of regularity, and stochasticity a powerful means of breaking free of a PM+NS ‘island’.

If one has a short sequence that generates a turn of a helix, duplication will give two turns, another duplication four and so on. The longer helix can itself be recombined elsewhere, as a module in an unrelated sequence. In a modern, 20-acid system, subsequent substitutions can scramble the underlying regularity of the basic repetitive mechanism. But when you look closer, all you really need is a simple pattern of hydrophilic and hydrophobic residues. Looking at the longer string, and assuming it derived from improbable throws of many strings of equal length, comprised of many different kinds of subunit, betrays a blindness to actual biological mechanism.
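The duplication argument can be sketched in a few lines. A toy illustration, assuming a hypothetical one-turn motif over an H/P (hydrophobic/hydrophilic) alphabet — the motif, the number of duplications, and the 20-letter probability model are all arbitrary choices:

```python
# Whole-sequence duplication doubles the length each time, while the
# *description* ("this motif, doubled k times") stays short.
motif = "HPPHH"          # hypothetical one-turn pattern
seq = motif
lengths = []
for k in range(4):       # four duplication events
    seq = seq + seq
    lengths.append(len(seq))

print(lengths)           # [10, 20, 40, 80]

# Treating the 80-residue result as 80 independent draws from a 20-acid
# alphabet gives a probability of 20**-80 -- astronomically small -- yet
# the generating process needed one 5-residue motif and 4 copy events.
print(f"naive probability: 20**-{len(seq)}")
```

The “improbable throws of equal-length strings” model assigns the outcome a vanishing probability precisely because it ignores the copy mechanism that actually produced it.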

25. Seems straightforward to me that to the creationists, the bible is TRUTH. This is by definition, not subject to question. The bible is Truth because God said so, and God said so in the bible which is how you know it’s Truth.

And ALL ELSE is a combination of trying to make non-negotiable religious faith sound scientistical, and trying to make it sciency-sounding enough to satisfy the courts.

And that means ID cannot be exposed as a crock, because it’s not a crock. God said so. If science disagrees, science must be a crock. If science only partially disagrees (the part that offends the specific religious doctrine ID was crafted to defend), then only that part of science is a crock (as are the subsequent accusations of inconsistency, of course).

But there is a reason why the ICR people must sign a loyalty oath to join, and why creationist sites do not tolerate dissent. When foregone conclusions cannot be altered, they must be rationalized. We can find millennia of this stuff, like Aquinas, constructing impressively convoluted justifications, misdirections, misrepresentations, cherry-picking, and the like.

I believe the UD people do indeed value intellectual rigor, accuracy, right reason, and enlightenment, but ONLY in the service of TRUTH. Just like you and me, only different.

26. Have the folks at UD ever resolved the contradiction between Upright’s claim that information must be embodied and KF’s claim that “information is a distinct entity in the cell”?

Seems to me that ID reifies a metaphor.

27. When foregone conclusions cannot be altered, they must be rationalized. We can find millennia of this stuff, like Aquinas, constructing impressively convoluted justifications, misdirections, misrepresentations, cherry-picking, and the like.

You have just explained theology.

timothya:

Two questions:

1. Are you suggesting that we have to know the exact, precise, unequivocal probability of event X occurring by purely natural processes before we can draw an inference that event X did not occur by purely natural processes?

No.

2. On what basis do forensics experts and archaeologists draw an inference to design? Must they first lay out a precise formulation of all possible probabilities of the item in question having been produced by purely natural processes?

No (assuming that by “natural” you mean “unintended”, or something similar).

Look: Dembski proposed a formula – a metric – for inferring design, based on Fisherian hypothesis testing, that involves determining that the candidate pattern is in the rejection region of a probability distribution under the null of non-design. So, clearly, to calculate that metric we need the probability distribution under the null. Dembski provides no way of calculating that distribution that does not involve first knowing what non-design processes can do. And if we knew that, he wouldn’t need his calculation. So his entire argument is circular.

That doesn’t mean that inferring design from a pattern isn’t possible; what it does mean is that CSI, and its relatives, are useless for doing so.

29. KF

There is no need for exact probability calcs or estimates.

That CSI/FSCO/I can’t be used is actually of no concern!

All we need are circumstances that sampling theory will let us see are of the needle-in-a-haystack variety, where it is simple to see that by its nature, the bulk of possible configs in a relevant situation will be gibberish.

And therefore Lizzie’s program generated CSI! If that’s “all that is needed” then that has already been demonstrated to be possible via GAs.
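A minimal sketch of the kind of GA meant here — Dawkins’s weasel toy with cumulative selection, not Lizzie’s actual program; the target string, population size, and mutation rate are all arbitrary choices:

```python
import random

random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"   # Dawkins's toy target
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "     # 27-letter alphabet

def mutate(s, rate=0.05):
    """Copy a string, flipping each character with probability `rate`."""
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

# Cumulative selection: each generation, keep the best of 100 mutated copies.
parent = "".join(random.choice(CHARS) for _ in TARGET)
gen = 0
while parent != TARGET:
    gen += 1
    parent = max((mutate(parent) for _ in range(100)),
                 key=lambda s: sum(a == b for a, b in zip(s, TARGET)))

print(gen)   # typically hundreds of generations, not 27**28 blind draws
```

Whatever one thinks the target string “measures”, the highly improbable-under-uniform-sampling pattern turns up in a trivially small number of trials once selection is in play — which is exactly the “all that is needed” being conceded.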

30. It is to be hoped that, as a venue that values intellectual rigour, accuracy, and indeed right reason, the more enlightened patrons of UD might correct KF on this point. [emphasis added]

Whether it is 10^120 or 10^300 is actually unimportant — the point is that even rather modest adaptations cannot be produced by the tornado-in-a-junkyard even once in the whole history of the universe.

I think the other points Elizabeth is making are more central.

32. I’ve been agreeing with this at my Dembski/CSI posts at Panda’s Thumb. Elizabeth has expressed it neatly.

You can also summarize it in a way that eliminates need to refer to CSI.

* We want to find out whether an adaptation this good or better can be produced by natural selection and random mutations (and other ordinary evolutionary forces).

* So first we evaluate the probability that an adaptation this good or better can be produced by natural selection and random mutations (and other ordinary evolutionary forces).

… and then actually, we’re done, without getting to the CSI part.

33. That doesn’t mean that inferring design from a pattern isn’t possible; what it does mean is that CSI, and its relatives, are useless for doing so.

I’ve written about this before in various other blog-comments, but it bears repeating here: Real scientists can and do detect design. The standard methodology involves forming a hypothesis of how the maybe-Designed thingie was Manufactured, and then testing that hypothesis of Manufacture.
ID (as she is spoke by Behe/Dembski/etc), contrariwise, directly and explicitly ignores the question of Manufacture. ID-pushers directly and explicitly claim that Design can be detected in the absence of any knowledge of, or hypothesis regarding, the maybe-Designed thingie’s causal history. If that claim were actually true, that would be way the hell nifty. Alas, it is not true…

34. Whether it is 10^120 or 10^300 is actually unimportant — the point is that even rather modest adaptations cannot be produced by the tornado-in-a-junkyard even once in the whole history of the universe.

I think the other points Elizabeth is making are more central.

I am not missing Elizabeth’s or your point. I read Dembski’s “Specification” paper; and I also read Lloyd’s paper. I suspect you might appreciate Lloyd’s paper; it is based on good physics and is not difficult to follow.

The universe and all events in it did happen; that is what Lloyd’s calculations are based on. Like anyone who actually understands the relationship between physics and the ability to carry out logical operations, Lloyd relates the rates of operations and flipping bits to energy; he doesn’t abuse entropy, for example.

I elaborated on the irony of Dembski’s use of Lloyd’s 10^120 elementary logical operations over on your thread on Panda’s Thumb, and also in my other comment on this thread.

This is what Lloyd says explicitly on page 7 of his paper.

What is the universe computing? In the current matter-dominated universe most of the known energy is locked up in the mass of baryons. If one chooses to regard the universe as performing a computation, most of the elementary operations in that computation consists of protons, neutrons (and their constituent quarks and gluons), electrons and photons moving from place to place and interacting with each other according to the basic laws of physics. In other words, to the extent that most of the universe is performing a computation, it is ‘computing’ its own dynamical evolution. Only a small fraction of the universe is performing conventional digital computations.

This is not a trivial statement. Everything in the universe has happened; stars, galaxies, planets, and life on at least one planet. Lloyd has estimated the number of logical operations acting on 10^90 bits to make the universe as we know it.

Lloyd didn’t mention dark matter or dark energy; and there are many outstanding issues that are still being researched. But at least Lloyd offers a pretty concise argument for what it would take to actually simulate the universe according to the standard big bang, inflationary theory on a computer.

The irony of Dembski’s use of Lloyd’s calculation should not be missed. Dembski not only can’t produce a probability distribution function for the occurrence of events in the universe, he didn’t read, or didn’t comprehend, Lloyd’s calculation.

By asserting that a subset of events in the universe must meet his threshold in order to infer design, Dembski is claiming that this subset must contain more specified complexity and require more logical operations than the entire set to which it belongs.

With Lloyd’s paper, Dembski had in his hands an upper limit of what it takes to build a universe. And Lloyd showed examples in his paper that illustrate the obvious fact that the occurrence of any subset of events in that universe requires less. That is a pretty good first approximation.
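A quick sanity check on the figures quoted, using nothing beyond Lloyd’s 10^120 operations and a conversion to bits:

```python
import math

# Lloyd's upper bound on elementary logical operations performed by the
# entire observable universe, per the comments above.
ops_universe = 10**120

# Expressed in bits, that whole-universe budget is only ~400 bits --
# so any subset of the universe's events fits under it with room to spare.
bits_equivalent = math.log2(ops_universe)
print(round(bits_equivalent, 1))   # ~398.6
```

Which makes the demand that a single protein clear a threshold pegged to the whole universe’s budget look exactly as back-to-front as described.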

35. Chi_500 = Ip*S – 500, where once Chi goes positive on a solar system scope (our practical cosmos for chemical interactions, absent invention of a warp drive) we can be assured blind search is all but utterly certain to fail. A practical impossibility.

For the millionth time . . . what is it in random mutation and natural selection that he thinks requires a “blind search” of cosmological proportions?

36. KF knows that evolution does not operate by blind search. It’s been pointed out to him again and again.

He also knows that this is fatal to his argument, so he chooses to ignore it.

37. Well, he’s taking refuge in Search for a Search, which does concede that some searches do do better than blind search, but claims that searching for the good searches is a blind search.

I think that’s his point (but it supercedes CSI).

38. Oh, I think Kairosfocus sees the question. I think that is why he kills every thread with a quilter’s nightmare whenever someone hones in on a key question.

39. 1. The word is spelled “supersedes”
2. Interesting that the phrase “to home in on” is rapidly becoming “to hone in on”

I’m OK with language developing with use – it just grates sometimes.

Otherwise…

KF and Dembski DO see the questions – this is not the only place they have been asked (they’ve been asked at UD but deleted; they have been asked many times in many other places; and in any case JoeG reports or tattles on TSZ to KF, although whether his reports are accurate must be in doubt)

They simply dare not even try to answer the questions in open forum, because they know their case is so weak.

All this guff of KF’s about TSZ being “enabling” of various forms of evil is smokescreen – cover for his own deficiencies and lack of moral fibre.

And yet Christ himself, upon whom you’d think KF would model his every waking moment, is reported to have gone into the temple and ejected the usurers i.e. attacked wrong-doing in its lair.

40. Alan Fox:

About my answer to Eric Anderson on UD: it is “in moderation” along with a couple of other contributions to the same thread. Whether they ever emerge depends on that site’s moderation policy. Not that my opinions are of much importance – the subject matter has been well canvassed in this thread.

41. timothya:
Oh, I think Kairosfocus sees the question. I think that is why he kills every thread with a quilter’s nightmare whenever someone hones in on a key question.

Well, he is prima facie confused on the difference between a sampling distribution and a probability distribution.

42. It’s so unfair! Why don’t I get moderated? I suspect it’s because I’m not very effective. If I spot it, and have the time, I will try and pick up on and post a point that seems pertinent, especially as the UD denizens are less inclined to peer out into the daylight currently. You and anyone else can always give me a heads-up if there’s something particular you’d like passed on.

43. Perhaps discussions about rebarbative neologisms should be moved to a thread of their own so that Grumpy Old Men can wave their walking sticks without derailing threads.

44. Let me get this right. Kairosfocus posted this:

The matter becomes instantly clear once you do the log reduction on Dembski’s result and then use a reasonable upper limit on search and observation resources, here the solar system’s 10^57 atoms and ~10^17 s.

Does he seriously believe that a log transform changes the relationship between the dependent variable (the one at the left of the equals sign) and the set of independent variables (the explanatory ones on the right of the equals sign)?
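For what it’s worth, the answer had better be no: a log transform is monotonic, so it rescales values without changing which side of a threshold they fall on. A two-line check with made-up probabilities:

```python
import math

# A monotonic transform changes the scale, not the verdicts.
threshold = 1e-150
values = [1e-200, 1e-150, 1e-10]   # hypothetical probabilities

raw = [v < threshold for v in values]
logged = [math.log2(v) < math.log2(threshold) for v in values]

print(raw)      # [True, False, False]
print(logged)   # [True, False, False] -- same verdicts, different scale
```

So the “log reduction” makes nothing “instantly clear” that wasn’t already there; it just moves the same comparison onto a log scale.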
