The Mice once called a meeting to decide on a plan to free themselves of their enemy, the Cat. At least they wished to find some way of knowing when she was coming, so they might have time to run away. Indeed, something had to be done, for they lived in such constant fear of her claws that they hardly dared stir from their dens by night or day. Many plans were discussed, but none of them was thought good enough. At last a very young Mouse got up and said: “I have a plan that seems very simple, but I know it will be successful. All we have to do is to hang a bell about the Cat’s neck. When we hear the bell ringing we will know immediately that our enemy is coming.” All the Mice were much surprised that they had not thought of such a plan before. But in the midst of the rejoicing over their good fortune, an old Mouse arose and said: “I will say that the plan of the young Mouse is very good. But let me ask one question: Who will bell the Cat?”
More heat than light seems to me to be generated by the demand for IDists to “define CSI” and the equations that are fired back in response. Nobody is disputing that we have plenty of equations. Here is that bright young mouse, Dembski’s:
χ = –log2[10^120 · φ_S(T) · P(T|H)]
The problem seems to me to lie in Belling the Cat.
So let’s take a closer look at that equation.
Dembski defines φS(T) as:
The number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T
Where S is “a semiotic agent”. Fair enough. If a “semiotic agent” (me, you, Dembski, a visiting Martian) spots a pattern that can be described simply enough, it is a candidate for CSI testing, whether it is a black monolith on the moon (“a black monolith”), faces on Mount Rushmore (“faces of American presidents”), or a sequence of nucleotides that results in a protein that helps an organism survive (“functional protein”).
However, it’s the next bit that presents the cat-belling problem, and it’s a problem, I suggest, with any of the definitions of CSI, or its various acronymic relatives, so far proffered: H.
To infer design, Dembski requires that we reject the “chance hypotheses, H”: In his Specification paper, Dembski suggests various examples of chance hypotheses, the rejection of which might lead us to conclude Design.
- that a coin is fair
- that an archer hit a small target by chance
- that a die is fair, and that the rolls are stochastically independent
All fine so far. But then:
- the relevant chance hypothesis that takes into account Darwinian and other material mechanisms
And there’s your belling problem, right there. Dembski’s entire solution to the problem of detecting design absolutely depends on the proper calculation of the distribution of probabilities under his null hypothesis. As he himself says:
We begin with an agent S trying to determine whether an event E that has occurred did so by chance according to some chance hypothesis H (or, equivalently, according to some probability distribution P(·|H)).
That is just fine if you’ve got a nice tame cat, like a fair coin or die, and we simply want to know whether the coin or die is indeed fair: we can define “fair” as a very specific probability distribution, because we have a perfectly good theorem. We can also compute a fairly good probability distribution for the landing points reached by arrows from a blind archer, either by empirical means or by some kind of null model. But in the context of inferring design from biology, the probability distribution under a “chance hypothesis that takes into account Darwinian and other material mechanisms” is precisely what Darwin and evolutionary biologists spend their days trying to find out!
If ID proponents can calculate the probability distribution under a “chance hypothesis that takes into account Darwinian and other material mechanisms”, then, cool. Science will be done, and the Nobel committee can be disbanded.
But until they’ve done that, no matter how many equations they produce, they haven’t given us any definition of CSI that will allow us to detect design in biology, no matter how useful such definitions may be for detecting nefarious design in seedy gaming houses, or for telling whether an archer is peeking through a blindfold.
The cat remains unbelled.
And now Kairosfocus posted this at UD:
So now the origin of life is the same as the origin of cell-based life? What does this mean?
I think KF is defining OOL as the origin of cell-based life, but it’s hard to say from what he’s written. But whatever he means in terms of OOL, his “first problem” makes no sense. That is, however one wishes to define “life”, one characteristic of life is reproduction. And so long as there is any kind of reproduction, there is going to be evolution. That’s the whole point of Evolutionary Theory.
But isn’t that one of the major problems with ID and its proponents?
When we read up on aspects of evolutionary theory, we find (the vast majority of the time) that terms are clearly defined and commonly understood. Discussions involving them are therefore clearly understandable and often fruitful.
Not so with ID proponents. Witness all the hooraw about the meaning of the term “information”. It’s one of the reasons ID is not scientific – it and its pushers lack scientific rigour.
That sentence of KF’s is typical – loosely constructed, and potentially ambiguous. It’s almost as if they do it with a view to being able in the future to wriggle out of a difficult position by claiming they were misunderstood.
ID can easily be defined as the science of labeling gaps.
If you want to see some dancing, ask KF to calculate the change in FSCI observed in the Lenski experiment.
If he says there was no change, then FSCI is irrelevant to the concept of evolution, because evolution can produce new functionality without increasing FSCI.
That neg log thing really annoys me. Sure, it’s sometimes convenient to use logs to multiply (I date from the log-table/slide rule age) but I have a suspicion that its purpose in ID discourse is to:
a) make the thing look really mathy
b) make it look digital (answer in bits)
c) misdirect attention from the hole in the middle.
That’s what Dembski is all about. Writing symbol laden bullshit that only a student of statistics can follow.
Contrast Einstein, whose work really was astonishing and transformative. He could conjure up thought experiments that could be followed by reasonably educated laymen.
I will give KF credit for his needle and island metaphors. They are understandable, testable, and unfortunately for him, wrong.
But at least they can be wrong, and that’s a start.
I would love to see an ID proponent address this in a top level post here. When I was last active at UD, I strove mightily to get some examples of CSI calculations, to no avail. Only vjtorley made an honest attempt to follow Dembski’s math, but he rejected the whole concept for coming up with the wrong answer (namely that gene duplication does create CSI).
Lenski’s results are a perfect test for Dembski’s metric (although I would still like to see the Steiner problems similarly evaluated). Is any ID proponent up to the challenge?
Interestingly, Abel et al do define their null, and give it as an equiprobable distribution of sequences of which a subset give rise to functional proteins.
Of course, given a non-functional starting sequence, not all future sequences are equiprobable, even if none of them is beneficial or deleterious, so that seems wrong for starters.
Is it possible you could provide a citation for that claim? That biologists claim that entities like “DNA, RNA and proteins etc, as well as the organised nanomachines” come about by ‘undirected chance and necessity’?
Otherwise you have spent many millions of words destroying a claim that nobody is actually making.
That calls out for a name; the Doh!-Nut argument. 😉
Until such time as the superscript tag is implemented here, the caret character—”^”—is an acceptable substitute. As in: 10^20 = ten to the twentieth power, and so on.
Thank you Joe for agreeing with me. They are indeed processes and not the random brute force searching misrepresented by KF with his “needles” analogy.
Indeed, but it’s the framework, the process that then takes that raw material and uses some, saves some for later (perhaps!) and discards some that’s important, as you rightly point out.
(OK, maybe this is supposed to go into the Sandbox): I used the SUP and /SUP tags. This did not show up correctly in the WYSIWYG as I typed. But when I used the Edit button after submitting it, the tags were still there, and when I re-saved that then the superscripting seemed to work.
But then, on coming back later to TSZ the superscripting formatting had disappeared.
Proof that non-material designers are at work.
Off topic, but I get the same captcha every time.
ATTENTION kairosfocus, Eric Anderson, Phinehas and other confused souls at UD.
Please tattoo the following on a prominent part of your anatomy, and refer to it often:
Evolution is not a blind search
Evolution is not a blind search
A blind search is one in which a point is selected completely at random from the entire search space. If the selected point falls within the target region, then the search has succeeded. Otherwise the search continues, and another point is selected completely at random from the entire search space.
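The definition above translates directly into a few lines of code (a toy sketch; the function name and the example space are mine, not from any ID literature):

```python
import random

def blind_search(in_target, space_size: int, max_tries: int):
    """Blind search: every probe is drawn uniformly from the whole space,
    with no memory of previous probes. Returns the number of tries used,
    or None if the target region was never hit."""
    for t in range(1, max_tries + 1):
        point = random.randrange(space_size)
        if in_target(point):
            return t
    return None

# Toy example: 10 'needles' hidden in a space of 1,000,000 points.
# The expected number of tries is space_size / needles = 100,000,
# and it grows linearly with the size of the haystack.
random.seed(1)
needles = set(random.sample(range(1_000_000), 10))
result = blind_search(lambda p: p in needles, 1_000_000, max_tries=200_000)
```

For a blind search, and only for a blind search, the haystack size and needle count are the whole story.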
The size of the haystack and the number of needles in it are relevant only if we are talking about a blind search. Since evolution is not a blind search, these numbers are irrelevant.
Just to hammer the point home:
In terms of the needle and haystack metaphor, the problem faced by evolution isn’t to find needles by probing random locations throughout the entire haystack — that would be a blind search — but something quite different. Evolution explores the haystack by starting from the current needle and exploring the points in the haystack that it can reach from there. If it finds a needle at any of those reachable points, then it proceeds to search from the new needle.
(Be cautious — the haystack metaphor has many of the same limitations as the ‘islands of function’ metaphor, as explained here. For example, the haystack has far more than three dimensions, and needles appear and disappear over time.)
Since evolution is performing a local search and hopping from one needle to the next, the size of the entire haystack is irrelevant, as is the total number of needles. What matters is the local distribution of needles along the path(s) traversed by evolution.
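The contrast with the blind search can be sketched in the same toy style: a local search probes only points reachable from its current position, so only the local distribution of needles matters, not the total size of the space (this is an illustration of the needle-hopping idea, not a model of any real fitness landscape):

```python
import random

def local_search(fitness, start, neighbours, steps: int):
    """Local 'needle-hopping' search: probe only neighbours of the current
    point, and move whenever a neighbour is at least as fit. The total
    size of the space never enters the calculation."""
    current = start
    for _ in range(steps):
        candidate = random.choice(neighbours(current))
        if fitness(candidate) >= fitness(current):
            current = candidate
    return current

# Toy landscape: fitness is just the integer itself; neighbours are +/-1.
# The 'haystack' (the integers) is unbounded, yet the search climbs anyway,
# gaining a step roughly half the time.
random.seed(0)
peak = local_search(fitness=lambda x: x,
                    start=0,
                    neighbours=lambda x: [x - 1, x + 1],
                    steps=1000)
```

Note what the sketch leaves out, deliberately: any global property of the space. Everything depends on `neighbours`, i.e. on the local structure.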
Thus, unless you know the structure of the haystack and the distribution of reachable needles at every point in time, you can’t even begin to estimate the probability that a particular needle could be reached by evolution.
So not only is the CSI argument circular; you couldn’t possibly calculate it even if there were some point to doing so, because you don’t have the requisite information. You don’t know where the reachable needles are.
And in case you’re tempted to argue that the evolutionist also needs this information to make his or her case, guess again. The hypothesis that evolution is unguided, and that the needles have a distribution that is favorable to evolution, fits the evidence literally trillions of times better than the beleaguered ID hypothesis.
Given the evidence, there is no rational justification for clinging to the ID hypothesis.
Joe doth protest:
I’m trying to figure out where Joe came up with “blind exploration”. I suspect he just doesn’t understand what you meant when you wrote, ” Evolution explores the haystack…”, still being stuck on the “blind search” misconception. Earth to Joe – evolution is not blind.
A) There is no reference to anything called “Intelligent Design Evolution” in any scientific textbook or journal that I can find. Please provide a scientific reference that describes what this is.
B) Define “blind” as you are using it above.
Oh, I have no doubts that you can provide what you believe to be a reference, but I’m skeptical that any reference you provide will actually support your claim above. But hey…surprise me Joe. I’m in the mood to be astonished…
Third time’s a charm for Joe:
Why in the world would I provide a reference for something I never claimed or accepted? Of course there’s no reference for “blind watchmaker or unguided Evolution”; you made the name up. And yet oddly, you can’t – contrary to your previous claim – provide a reference for your supposed “Intelligent Design Evolution”. As you say, “good luck with that.”
Here’s an actual reference to the Theory of Evolution as I understand it and accept it. You can do with this whatever you wish:
Bzzzzzz! I’m sorry, but that’s a fail. So much for that reference claim of yours. Nice try though.
Oh Joe…I read Coyne’s essay. These things are not that hard to find. And what’s funny is, he actually defines what he means when he uses the term “blind”, which oddly you left out:
So there you have it. Coyne is using the term “blind” to mean “without deity guidance”. Is that what you mean Joe? Because that isn’t what Keiths is addressing above and isn’t what KF is addressing either. So, you are either misunderstanding the discussion and use of the term, or you are equivocating. In either case, I’m afraid using Coyne’s essay merely demonstrates you don’t know what you are writing about in your rebuttal of Keith’s comment. Thanks for playing!
Actually, evolution is blind, in the sense that mutations happen randomly with no regard to their utility for the organism. Evolution has no foresight.
What KF doesn’t get is that evolution doesn’t search the entire haystack (aka ‘config space’), so all of his talk about its huge size is utterly irrelevant. What matters is the local structure: how are needles distributed in the vicinity of the needles that have already been ‘occupied’ by evolution?
Where the entire haystack is more relevant is not for evolution, but rather for the origin of life. But here KF makes more blunders:
1. He assumes that the initial replicator is hopelessly complicated, and that the haystack is therefore enormous, but he gives no justification for this bogus assumption.
2. He assumes that each point in the haystack is equally likely to be searched, thus totally ignoring the role of physics and chemistry in the process.
Between inflating the size of the haystack and misunderstanding the way it is explored, both by evolution and by nature during the process of OOL, KF’s position is a hopeless muddle.
KF is simply wrong about the sparseness of needles. Nearly two thirds of the bases on any arbitrary protein coding sequence can be replaced without breaking it. There are critical points that break it, but some of them simply change the fold.
If the sequence is a duplicate, there’s lots of wiggle room for neutral drift. There’s room for drift even if it’s not a dupe.
Absolutely. The point I am trying to make to Joe is that he’s misunderstanding what both you and Coyne (et al) are noting. Evolution is not looking for a goal, so it isn’t blind to how to get to that goal. It isn’t blind in that sense at all, because there is no goal. And while it is blind in the sense that it can’t anticipate how any given change will affect the success of a group of organisms, changes are not selected blindly, but are directed by the relative survival of offspring with those changes. In other words, finding any given needle creates parameters for the likely needles that can next be found – closer needles being more likely than farther needles.
None are so blind, etc etc.
According to the dictates of right reason
1. Unintelligent processes can’t do intelligent things
2. Specified complexity requires intelligence
3. A cell is specified and complex
4. Material processes aren’t intelligent
5. Therefore GOD designed living things
I’ve not encountered an ID proponent capable of seeing the errors in this argument. Or the very similar CS Lewis argument that logic can’t be the result of material processes.
They assume what they want to conclude, then introduce spurious extra steps to try to persuade themselves and others that they are actually reasoning.
It’s the same with their “evolution can’t do the search” arguments. Step one, assume the conclusion…
It’s all they ever do.
Almost everything I would want to say to anyone at UD about CSI is covered in this thread. I think I’ll just link to it when CSI comes up in future.
F/N: When we see people trying to argue that the “blind watchmaker” mindless chance variation driven processes THEIR SIDE HAS PUT FORWARD are “not blind” — not non-foresighted — that takes the cake.
Another “weasel” argument in the making there, perhaps.
We’d end up, no doubt, with evolution being “quasi-blind”
I find this a bit wearisome. For my money, evolution IS blind as the word is commonly understood. It is not searching for anything. It has been shown that what IDists call “CSI” can easily be produced by well-known undirected, non-foresighted processes.
I prefer to think of it that the “information” is simply not useful until an environment is encountered where it is. Until that happens it may be kept, if it places no particular cost upon the organism, or lost for any of a number of reasons. Evolution doesn’t care – there’ll be another mutation along in a minute, or the organism goes extinct, or suffers a drastic bottleneck. Evolution doesn’t care.
I agree with davehooke below. CSI as a useful metric has been pretty well coshed (again) in this thread, and the circularity of the associated “logic” thoroughly exposed.
My problem with the blind metaphor is that in some contexts it implies that there is something for evolution to “see” in the first place. I think that’s the root of a lot of people’s misunderstanding, and is certainly part of Dembski’s built-in mischaracterization in “A Search for a Search” and the whole “latching” inanity concerning the Weasel program.
Evolution, as I understand it, doesn’t search. It isn’t looking for anything. It provides a mechanism for biological variation that is either useful, neutral, or detrimental to biological entities in given environments. The environments as well are variable, thus there is an incredible array of biological/ecological combination sampling, along with an incredible array of biological/functionality sampling. But in no case is evolution “looking” for any given outcome, combination, or characteristic. Either an outcome works, or it doesn’t.
Why the need to describe this process in terms of “searches” or “blindness”? Such terms, at least to me, imply analysis and/or goal seeking and I just don’t see evolution doing either.
I disagree slightly. Structures are objective, and coding sequences are objectively tied to structures as interpreted templates.
I think it is fair to think of sequences as findable objects.
But sequence function is complex and multidimensional. One cannot know without trying whether a novel sequence is a needle or a straw. That is the blindness.
While sequences are certainly objectively tied to structures, I don’t think evolution is “looking” for any particular structures (needles). In fact, evolution doesn’t “know” what any given structure is, never mind what any sequence is, so how can it – even by analogy – be blindly “looking” for such things? Those types of metaphors just don’t make sense to me.
I get Keith’s concept of “exploring” fitness landscape (though I tend to think of it more as “sampling” the fitness landscape), but even then I don’t conceptualize the process teleologically trying to find anything in particular. I get where you (and others) are coming from on this, but I just have difficulty thinking of evolution in those terms. To me, it would be like conceptualizing Earth’s hydrologic cycle as exploring the environment for evaporated water and then blindly searching for a place to dump it. The process itself doesn’t “know” what water is, nor does it “know” what the environment is; it just works because the properties of water allow for evaporation and condensation.
kairosfocus, via the loudspeaker in the ceiling, offers two breathtakingly inane examples of how to detect design using CSI:
Example 1, based on a definition of ‘aardvark’:
I may be a little slow, but I think I am beginning to see a pattern here.
IDists who are numerate (e.g. Dembski) are careful to always talk in the abstract about the ‘far tail of a probability distribution’ and never offer up example calculations.
IDists who are not numerate (e.g. Kairosfocus) are willing to do the calculations, and they always get them wrong.
Two questions for Kairosfocus et al:
1) Does your calculation of Chi_500 rely on the assumption that
P(A and B) = P(A) x P(B)?
2) Is the statement
P(A and B) = P(A) x P(B)
true in general, or only when A and B are statistically independent?
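For anyone tempted to answer “true in general”: here is a two-line counterexample with dependent events, drawing aces without replacement (the variable names are mine):

```python
from fractions import Fraction

# Draw two cards without replacement.
# A = first card is an ace; B = second card is an ace.
p_a = Fraction(4, 52)
p_b_given_a = Fraction(3, 51)

p_a_and_b = p_a * p_b_given_a   # correct joint probability: P(A)·P(B|A)
naive = p_a * Fraction(4, 52)   # wrong: assumes independence

print(p_a_and_b)  # 1/221
print(naive)      # 1/169
```

The identity P(A and B) = P(A) × P(B) holds only when the events are independent; the moment one outcome changes the distribution of the next, it fails.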
The CSI calculation of Dembski boils down to a complete obfuscation of a very simple notion from statistics. That kairosfocus character simply obfuscates even further.
If an event has a probability p, and the number of trials attempting to get that event is N, then the mean number of successes in achieving that event is Np.
Dembski’s calculation of Χ – after many paragraphs of rationalizations and side tracks – boils down to
Χ = –log2(Np) = log2(1/p) – log2(N)
with N = 10^120 · φ_S(T) and p = P(T|H).
The 10^120 was taken from Seth Lloyd, and is Lloyd’s estimate of the number of logical operations it took to make our universe. Including the φ allows for the possibility of multiple universes involved in the number of trials, with 10^120 logical operations per universe.
Assuming only one universe, the calculation comes down to
Χ = log2(1/p) – log2(10^120) ≈ log2(1/p) – 400
So 1/p is the number of trials required to get one instance of the specified event, and taking the logarithm gives the amount of “information” supposedly contained in that sample space, and log base 2 gives that “information” in bits. Note that it assumes uniform, random sampling. I am guessing that this would be what Dembski and Marks call “endogenous information.” The 400 would be the “exogenous information,” and their difference would be the “active information.”
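The arithmetic above is easy to sanity-check in a few lines (assuming log base 2 throughout and φ_S(T) = 1, the single-universe case; the function name is my own):

```python
import math

# log2(10^120) is about 398.6 bits, i.e. roughly the '400' quoted above.
log2_N = 120 * math.log2(10)
print(round(log2_N, 1))  # 398.6

def chi(log2_inv_p: float) -> float:
    """Dembski-style chi for a single universe with phi_S(T) = 1:
    chi = log2(1/p) - log2(10^120). Positive chi is the 'design' flag."""
    return log2_inv_p - log2_N

# An event of probability 2^-500 (i.e. 500 bits) clears the bound;
# one of probability 2^-300 does not.
print(round(chi(500), 1))
print(round(chi(300), 1))
```

Of course, the sketch only makes the central problem more visible: the one input that does all the work, log2(1/p), is exactly the P(T|H) nobody can supply.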
As Elizabeth and Joe Felsenstein point out, the problem is coming up with a distribution for P(T|H).
In addition, Seth Lloyd’s calculation is based on the fact that the events being questioned by ID advocates – such as the origins of life and evolution – are already included in the 400 (or 500 in some of Dembski’s calculations). If Dembski wants to use Lloyd’s number in his CSI and apply it to events in the universe, it therefore makes no sense to assert that the “endogenous information” they contain is greater than the “endogenous information” in the entire universe.
Said more directly, the N trials to make the universe already produced the event in question; therefore the number of trials required to produce that particular event has to be less than the number of trials to produce all the events in the universe.
So here again we see the circularity contained in the assumption that such events do have more such information. One can very easily enumerate permutations and combinations of things and get numbers far larger than all the operations in the universe. It all depends on how one chooses to describe it. The ID descriptions are generally chosen to make events such as the origins of life look impossible.
I agree with you about the deliberate obfuscation, but isn’t the assertion of the intelligent design creationists that such an event was caused by an actor outside the universe? They want to keep multiplying probabilities until they can say “Couldn’t have happened!” then use the fact that it did happen as proof for their gods.
Or am I missing your point?
I posted a calculation for the CSI of a rock somewhere here on TSZ, but I don’t remember where, and I was having problems with the editor that led to my making some typos. It is also here over on Panda’s Thumb.
The problem is of course with the probability of the event, as Joe F and Elizabeth have already stated.
My point about Dembski’s use of Seth Lloyd’s calculation is that Lloyd’s calculation is done properly using physics. It includes those very events – such as the origins of life and evolution – that ID advocates question. Those events contribute to the “information” and the number of logical operations that produced the universe.
To use that 10^120 as an “upper bound” is to include the number of trials to produce life and evolution. To then turn right around and immediately “calculate” a larger number of trials for that particular event is to imply that the 10^120 is wrong. So why use it?
It is too easy to “specify” complex information. All one has to do is the same trick I did for the rock. It is the same mistake of declaring that a particular winner of the lottery was too improbable an event for it to have happened. I can choose any particular rock I want as the winner; and a probability calculation can be made to “prove” that it had to have been designed.
It is the same thing with card games. There are 52!/(47! 5!) = 2,598,960 ways to deal a five-card hand. Agreed upon convention decides which hands are winners; otherwise all hands are equally probable.
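The card-hand count above checks out directly (`math.comb` computes the binomial coefficient 52!/(47! 5!)):

```python
import math

# Number of distinct five-card hands from a standard 52-card deck: C(52, 5).
hands = math.comb(52, 5)
print(hands)  # 2598960

# Every *particular* hand is equally probable; only agreed convention
# makes a royal flush a 'winner' and a random junk hand a loser.
p_any_particular_hand = 1 / hands
```

Which is the point: improbability alone singles out nothing, because every dealt hand is equally improbable.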
If all rocks are equally probable (they aren’t), why can’t I simply declare that THIS particular rock to be the winner and therefore its occurrence so improbable that some kind of intelligent cheat was involved in its existence?
It is the same game with any other events in the universe. We don’t know their probabilities; but we can’t just decide after the fact that some of them – such as the origin of life – are such improbable winners that some kind of intelligent cheat was involved in their occurrence.
We already know the physics and chemistry; and calculations based on the physics can already set bounds on how many logical operations are required. It’s less than 10^120.