Belling the Cat

As Aesop didn’t actually say:

The Mice once called a meeting to decide on a plan to free themselves of their enemy, the Cat. At least they wished to find some way of knowing when she was coming, so they might have time to run away. Indeed, something had to be done, for they lived in such constant fear of her claws that they hardly dared stir from their dens by night or day. Many plans were discussed, but none of them was thought good enough. At last a very young Mouse got up and said: “I have a plan that seems very simple, but I know it will be successful. All we have to do is to hang a bell about the Cat’s neck. When we hear the bell ringing we will know immediately that our enemy is coming.” All the Mice were much surprised that they had not thought of such a plan before. But in the midst of the rejoicing over their good fortune, an old Mouse arose and said: “I will say that the plan of the young Mouse is very good. But let me ask one question: Who will bell the Cat?”

More heat than light, it seems to me, is generated by the demand that IDists “define CSI” and by the equations fired back in response. Nobody is disputing that we have plenty of equations. Here is that bright young mouse of an equation, Dembski’s:

χ = –log2[ 10^120 · φS(T)·P(T|H) ]

The problem seems to me to lie in Belling the Cat.

So let’s take a closer look at that equation.

Dembski defines φS(T) as:

The number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T


Where S is “a semiotic agent”. Fair enough. If a “semiotic agent” (me, you, Dembski, a visiting Martian) spots a pattern that can be described simply enough, it is a candidate for CSI testing – whether it is a black monolith on the moon (“a black monolith”), faces on Mount Rushmore (“faces of American presidents”), or a sequence of nucleotides that results in a protein that helps an organism survive (“functional protein”).

However, it’s the next bit that presents the cat-belling problem, and it’s a problem, I suggest, with any of the definitions of CSI, or its various acronymic relatives, so far proffered: H.

To infer design, Dembski requires that we reject the “chance hypothesis”, H. In his Specification paper, Dembski suggests various examples of chance hypotheses, the rejection of which might lead us to conclude Design:

  • that a coin is fair
  • that an archer hit a small target by chance
  • that a die is fair, and that the rolls are stochastically independent

All fine so far. But then:

  • the relevant chance hypothesis that takes into account Darwinian and other material mechanisms

And there’s your belling problem, right there. Dembski’s entire solution to the problem of detecting design absolutely depends on the proper calculation of the distribution of probabilities under his null hypothesis. As he himself says:

We begin with an agent S trying to determine whether an event E that has occurred did so by chance according to some chance hypothesis H (or, equivalently, according to some probability distribution P(·|H)).

That is just fine if you’ve got a nice tame cat, like a fair coin or die, and we simply want to know whether the coin or die is indeed fair, because we can define “fair” as a very specific probability distribution, for which we have perfectly good theory. We can also compute a fairly good probability distribution for the landing points reached by arrows from a blind archer, either by empirical means or by some kind of null model. But in the context of inferring design from biology, the probability distribution under a “chance hypothesis that takes into account Darwinian and other material mechanisms” is precisely what Darwin and evolutionary biologists spend their days trying to find out!

If ID proponents can calculate the probability distribution under a “chance hypothesis that takes into account Darwinian and other material mechanisms”, then, cool. Science will be done, and the Nobel committee can be disbanded.
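
The dependence on H can be made concrete. Here is a minimal sketch of the χ calculation (the function name and the toy numbers are mine, not Dembski’s): for a fair coin the null is fully specified and χ can be evaluated; for the biological case the crucial argument simply cannot be supplied.

```python
import math

def chi(p_T_given_H, phi_S_T):
    """Dembski's chi = -log2(10^120 * phi_S(T) * P(T|H))."""
    return -math.log2(1e120 * phi_S_T * p_T_given_H)

# Tame cat: 500 specified tosses of a fair coin. P(T|H) is computable.
print(chi(p_T_given_H=2**-500, phi_S_T=1))  # ~101.4 bits: "design"

# Wild cat: a functional protein. P(T|H) under "Darwinian and other
# material mechanisms" is exactly the number nobody can supply, so
# chi(p_T_given_H=???, phi_S_T=...) cannot be evaluated at all.
```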

But until they’ve done that, no matter how many equations they produce, they haven’t given us any definition of CSI that will allow us to detect design in biology, no matter how useful such definitions may be for detecting nefarious design in seedy gaming houses, or whether an archer is peeking through a blindfold.

The cat remains unbelled.


83 thoughts on “Belling the Cat”

  1. I mostly keep out of CSI discussions, because the whole topic seems bogus. For that matter, I remain unconvinced that semiotics is a legitimate area of study.

    Dembski and co apparently take “information” to be the name of a natural kind. And that seems completely wrong to me. To me, the term “information” implies abstractness, detachment from physical process.

    As an example, consider the player pianos that were at one time common. You inserted a roll of paper with punched holes, and the mechanism of the player piano used those holes to trigger the motion of the piano keys. To me, that player roll was never information. It was more like a template. It was something used as part of a causal role.

    By contrast, consider sheet music, with notes written on a staff. Those are information — or, more properly, a representation of information. The difference is that they are detached from the causal mechanism.

    I see DNA as more like the piano roll than like the sheet music. It is part of a causal mechanism. It is more like a template than like information.

  2. Oh, I agree. It’s bogus in so many ways.

    But the biggest bogosity has to be in the entire idea of making an inference by rejecting a null you can’t actually compute without knowing the answer to the question you wanted to address in the first place.

  3. I mean, you can reject the null that a coin is fair, because we define a fair coin as one that will, when tossed, produce a series of outcomes that exhibit a certain probability distribution.

    We can’t reject the null that life forms are undesigned, because we don’t define undesigned forms as forms that have a certain probability distribution.
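
    Exactly so: “fair” fixes the null distribution completely, so the tail probability of any outcome can be computed and the null rejected. A sketch, with illustrative numbers of my own:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) when X ~ Binomial(n, p): the chance of k or more
    heads in n tosses under the null 'the coin is fair'."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 90 heads in 100 tosses:
print(binom_tail(100, 90))  # ~1.5e-17: reject the fair-coin null

# No analogous calculation exists for "this life form is undesigned",
# because "undesigned" does not define a probability distribution.
```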

  4. Although some will say (Upright Biped, for example) that DNA isn’t quite like a piano roll, because there is an arbitrary relationship between codon and amino acid, not a straightforward mechanical one.

    And it may indeed be arbitrary – on an alien planet the relationship might be quite different. But that doesn’t mean that it isn’t still mechanical. Given the messenger RNA sequences we happen to have, the codon-amino acid mapping is what it is.

    And there would be plenty of selective advantages to any proto-coding system that tended to be reliable, i.e. for optimum mappings.

  5. Lizzie,

    And this is why I’d be interested to see KF (et al) defend his “needle in a haystack the size of the cosmos” analogy and relate it to actual biology, but in a discussion with actual experts in the field rather than the echo-chamber.

  6. Kairosfocus responds to me, although whether he actually read my piece here is unclear. I will assume he has, and invite him to come here and discuss it in person if he would like (rather than holler across the Gulf):

    EL (via Joe):

    Nope.

    The pivotal issue is sampling theory, not probability distributions.

    Dembski’s CSI definition actually contains a probability distribution, and if he can’t compute it (which he can’t), he can’t do the Fisherian hypothesis testing, which is what he advocates. So this is a “pivotal issue”: Dembski’s CSI falls at the first fence when it comes to biological systems because there is no way of computing the probability distribution under his null.

    In essence, as has been pointed out over and over and over again, but ignored, when one takes a relatively small sample of a large population, one only reasonably expects to capture the bulk, not special zones like the far tails or isolated and highly atypical zones. This is like the old trick of reaching a hand deep into a sack of beans and pulling out a handful or two to get a good picture of the overall sack’s contents.

    Actually, the relative sample size is irrelevant – what matters in sampling theory is the absolute sample size, and once you have a substantial sample then the population size ceases to matter, as long as you have sampled randomly. So if you want to estimate the mean size of bean in a sack of beans, you need a decent sample size. But the sample size you need does not depend on the size of the sack. So if it has been “pointed out over and over and over again” it’s been wrong over and over and over again. But I think KF is confused about sampling theory…
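
    The textbook formula bears this out: the standard error of a sample mean is σ/√n times a finite-population correction that goes to 1 as the population grows. A quick sketch, with invented numbers:

```python
import math

def se_mean(sigma, n, N):
    """Standard error of a sample mean: sigma/sqrt(n) times the
    finite-population correction sqrt((N - n) / (N - 1))."""
    return (sigma / math.sqrt(n)) * math.sqrt((N - n) / (N - 1))

# Same sample size n=100, sacks of wildly different sizes:
for N in (10_000, 10**9, 10**150):
    print(se_mean(sigma=1.0, n=100, N=N))
# ~0.0995, ~0.1, ~0.1 -- precision depends on n, not on the sack
```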

    When we have config spaces for 500 bits or more, we are dealing with pops of 3.27 * 10^150 and up, sharply up. The atomic resources of the solar system working at fastest chemical reaction rates and for the scope of the age of the cosmos, would only be able to sample as one straw to a cubical haystack 1,000 LY thick, about as thick as our Galaxy. The only thing we could reasonably expect to pick up on a blind sample of such scope, would be the bulk. Here, straw, and not stars or solar systems etc.
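
    The arithmetic in that quoted paragraph is not the problem – the configuration-space sizes check out – the problem is the unstated distribution over those configurations. A quick check:

```python
# KF's 500-bit and 1,000-bit configuration-space sizes:
print(f"{2**500:.3e}")   # 3.273e+150 -- the quoted "3.27 * 10^150"
print(f"{2**1000:.3e}")  # 1.072e+301

# Counting states says nothing about their probabilities: that
# requires the distribution under the null, which is never supplied.
```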

    KF doesn’t actually seem to be talking about “sampling theory”, as the term is normally understood, at all. What he is talking about is what Dembski calls “probabilistic resources” – the number of “trials” (as they are often called – Excel calls them that, for one) that you would have to have in order to have a decent chance of at least one result in the tail of some distribution. The more extreme the tail, the greater the number of trials you’d need before you netted one. But that’s exactly what the term probability distribution means – whereas in a frequency distribution, the height of the histogram bars tells you how often the observation in question occurs, in a probability distribution, these heights are expressed as a proportion of the total number of observations, giving you an estimate of how probable it is that you would make any given observation on a single occasion, from which you can compute very simply what that probability would be for N trials: 1 – (1 – p)^N. You don’t need sampling theory to do this – you just need the probability distribution and a calculator. But you can’t do it without the probability distribution, which is precisely what neither Dembski nor Kairosfocus has.
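
    The 1 – (1 – p)^N relationship, and the number of trials needed to net a tail event, take two lines to compute – given p. A sketch (the one-in-a-million p is invented for illustration):

```python
import math

def p_at_least_one(p, N):
    """Probability of at least one success in N independent trials."""
    return 1 - (1 - p)**N

def trials_for(p, target=0.5):
    """Trials needed for a `target` chance of at least one success."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

p = 1e-6                          # a one-in-a-million tail event
print(trials_for(p))              # 693147 trials for a 50/50 chance
print(p_at_least_one(p, 10**6))   # ~0.632

# Without p -- the probability distribution under the null -- neither
# number can be computed, however many "probabilistic resources" you cite.
```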

    Where also, the other thing that you have long, and unreasonably, refused to accept is that once we deal with specifically functional configs of sufficient complexity, the constraints of proper arrangement of the right parts to achieve function confine us to narrow and very unrepresentative zones of the space of possibilities. Islands of function for illustrative metaphor. All of this has been accessible long since but you have refused to listen.

    Somebody is refusing to listen, it seems, KF, but I don’t think it’s me :) The above is assertion, not argument, and I do not share your view that it is correct. Clearly some people do (Behe, for instance) but the vast majority of biologists do not. If “biological fitness space”, to use the jargon, is smooth and multi-dimensional, then there is no reason to suppose that even complex configurations are on “Islands of function”. And as long as similar genotypes result in similar phenotypes (they do), “fitness space” will be smooth, and as long as there are many different traits that can potentially enhance the chances of living and breeding successfully (there are), fitness space will be multi-dimensional. So why would we expect “islands”? Leaving aside, of course, the island of self-replication itself, which remains without a detailed theory at present. But if that was the basis of ID, then why all the sniping at poor old Darwin, who never even claimed to be able to explain how self-replication got going in the first place?

    I will simply say that by looking at sampling theory without having to try to get through a thicket of real and imaginary objections to probabilistic calculations, we can easily and readily see why it is unreasonable on the gamut of the solar system (or for 1,000 bits the observed cosmos) to expect to encounter FSCO/I by blind chance and mechanical necessity.

    The only sense in which “sampling theory” has any bearing on anything I’ve said is that if you take a random sample from what you think is a single population (of mice, for instance) and you find an outlying value that is extremely unlikely to have turned up in a random sample, you are entitled to conclude that it probably came from a different population (of rats, for instance). But the point is that in order to work out whether the observed value is an outlier, you have to have some way of computing the probability distribution under your null (that the sample is all mice). So you simply cannot escape from the part of the CSI definition that neither you nor Dembski can provide: the probability distribution under the null of non-design.
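
    The mice-and-rats case can be made concrete with Python’s statistics.NormalDist (the weights are invented for illustration):

```python
from statistics import NormalDist

# Null: the sample is all mice, with body mass ~ Normal(20 g, sd 3 g).
mice = NormalDist(mu=20, sigma=3)

observed = 300  # grams: one animal from the trap
p = 1 - mice.cdf(observed)  # chance of a mouse this heavy or heavier
print(p)  # effectively zero: conclude it is probably a rat

# The outlier test works only because the null distribution is fully
# specified. For "undesigned life" no such distribution is on offer.
```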

    Where also of course the thresholds of complexity chosen were chosen exactly for being cutoffs where the idea that chance and necessity would be reasonable would become patently ridiculous. It just turns out that even 1,000 bits is 100 – 1,000 times fewer bits than are in the genome for credible first cell based life.


    No, it doesn’t “turn out” like that – what it “turns out” to be depends absolutely on your null distribution. Packing that into a black box called “what would be reasonable” is begging the entire question. It’s like saying that this boa constrictor is longer than a piece of string, without specifying how long the piece of string is. Simply saying “a reasonable piece of string” gets us nowhere.

    And, that is the pivotal case as this is the root of the suggested Darwinian tree of life. Where, precisely because the von Neumann self replicator [vNSR] required for self replication is not on the table, cutting off the hoped for excuse of the wonderful — though undemonstrated — powers of natural selection acting on chance variations.

    The only empirically warranted explanation for the FSCO/I pivotal to first life is the same as the only observed source of such: design.

    Not unless you can produce the probability distribution for OOL under the null of no design. We know even less about that than we do about the probability distribution of complex life, given simple life, but that doesn’t entitle us to infer anything from its tails – quite the reverse. All we can conclude, as most of us do conclude, is that we don’t yet know how self-replication on earth got started.

    The rest of KF’s post is about Search for a Search, and we already have a thread on that, so I’ll leave it there, repeating my open invitation to KF to come over and discuss it in person, although I will ask KF to consider the possibility that the objections to the Search for a Search argument are substantial, and, as ever, hinge on characterising probability distributions that we simply do not have the information necessary to compute.

    Those probability distributions won’t go away. Giving them algebraic or acronymic representations won’t butter no parsnips.

  7. If you mix DNA/RNA oligomers, they will find their complementary sequence (if it exists) and bind to it. If one could create an ‘infinite string’ of random DNA, and chuck oligomers at it, they would all bind somewhere. They would all ‘find’ the relevant ‘information’. CSI does not come into it. The ‘information’ is in relative binding energies – physics. And this is also true of tRNA. Anticodons bind complementary sequence. They do so in a ‘controlled’ way, but there is no decision-maker – the tRNA that binds most strongly displaces all others. Which is why the system does not need constant supervision.
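
    The ‘finding the complement’ step really is just base-pairing, which can be caricatured as string matching (a toy model of my own – real hybridisation is governed by binding energies, and tolerates mismatches):

```python
# Watson-Crick pairing for RNA: A-U and C-G
PAIR = {"A": "U", "U": "A", "C": "G", "G": "C"}

def revcomp(seq):
    """Reverse complement: the stretch an oligomer will pair with."""
    return "".join(PAIR[base] for base in reversed(seq))

def binding_site(strand, oligo):
    """Index where the oligo would hybridise on a long strand, or -1."""
    return strand.find(revcomp(oligo))

strand = "GGAUACCGAUUCGAGCUAAGC"
print(binding_site(strand, "GAAUCG"))  # 6: the oligo 'finds' its site
```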

  8. The probability of KF coming here is very low. The official reason is that this is a fever swamp of Nazi Marxist family-threateners and Nazi Marxist family-threatening enablers. I doubt even the head of TWT on a stick would satisfy him, although he may see it as a basis for negotiations.

  9. Man, I finally get the argument. Or rather, I finally understand the details of the argument thanks to this analogy:

    No, it doesn’t “turn out” like that – what it “turns out” to be depends absolutely on your null distribution. Packing that into a black box called “what would be reasonable” is begging the entire question. It’s like saying that this boa constrictor is longer than a piece of string, without specifying how long the piece of string is. Simply saying “a reasonable piece of string” gets us nowhere.

    Thanks Lizzie. I think I really get the sleight of hand that Dembski is attempting now.

  10. Of course, KF’s root and tree analogy does not mean that the tree of life cannot “grow” without a “root” theory of origin of life. The tree is a diagram. He is confusing a diagram with, well, an actual tree that grows in the ground. This is stretching an analogy too far.

    All explanations of the theory of evolution implicitly begin with “Given self-replicators…” And they are (we are) of course a given.

    KF’s tortured analogy is like saying you can’t have the standard model of Physics without a complete theory of baryogenesis*. I know KF likes his scientists and philosophers to have matured for a few centuries, but even he must have heard of the standard model.

    You will report his error back to him, won’t you Joe?

    *It is entirely possible that a creationist might attribute this to a cosmic battle between Jesus and Satan, with Jesus winning out against the evil one’s anti-matter. I’m sure there are many wonderful theories waiting to be expressed when it comes to cosmology.

  11. Yeah, it’s kind of an aha! thing. But it’s the reason why people have been pointing out for so long that Dembski is constructing what Dawkins called the “argument from personal incredulity.” Dembski is saying “life is so complicated that it doesn’t seem reasonable to me that it could have happened without the Designer.”

    Now, a related question might be how we could construct a probability distribution that seems reasonable to Dembski, such that we could take unknown objects, apply the “reasonable to Dembski” test, and determine Design that way. And I recall that Dembski has been presented with plenty of objects to do this with, provided he shows his calculations. He’s never accepted this challenge.

    At Dover, Behe essentially testified that “design” is an attribute of an object like color or mass, directly observable. He also admitted that only members of a particular religious sect could see it!

    So we come full circle. Dembski’s sleight of hand isn’t in hiding his inability to calculate his probability distribution behind a lot of misdirection. His sleight of hand is in taking a particular religious doctrine and trying to make it all sciency. If it weren’t religion, he’d long since have admitted error like any scientist.

  12. davehooke:
    KF’s tortured analogy is like saying you can’t have the standard model of Physics without a complete theory of baryogenesis*. I know KF likes his scientists and philosophers to have matured for a few centuries, but even he must have heard of the standard model.

    Remember that the Creationist Model holds that everything was poofed in a single atomic event. All mass, energy, physics and chemistry and everything — including life, which has not changed since it all got poofed up 6000 years ago.

    So to force-fit reality within this model, evolution MUST BE this initial act of creation. There is no implication of process in the creationist model. KF’s model of physics, similarly, must and does include baryogenesis. Part of the Great Poof.

  13. On page 23 of Dembski’s “Specification” paper, he purports to make “specified complexity” insensitive to context by borrowing from Seth Lloyd.

    Even so, it is possible to define specified complexity so that it is not context sensitive in this way. Theoretical computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history. This number sets an upper limit on the number of agents that can be embodied in the universe and the number of events that, in principle, they can observe. Accordingly, for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M•N will be bounded above by 10^120.

    The reference is to Seth Lloyd, “Computational Capacity of the Universe,” Physical Review Letters 88(23) (2002): 7901–4.
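
    For scale, Lloyd’s bound converts into Dembski’s bit units directly:

```python
import math

# 10^120 elementary operations, expressed in bits:
print(120 * math.log2(10))  # ~398.6

# So in chi = -log2(10^120 * phi_S(T) * P(T|H)), the Lloyd factor is
# a ~399-bit penalty: chi > 0 requires phi_S(T) * P(T|H) < ~10^-120.
print(-math.log2(1e-120))   # ~398.6: the same threshold
```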

    This gets to the heart of Dembski’s – and all of ID/creationism’s – misconceptions and misrepresentations about the physical universe.

    Seth Lloyd’s paper was in Physical Review Letters. One doesn’t publish a paper in that journal by getting the physics wrong. Even though I haven’t read that particular paper, I am pretty sure that Lloyd wasn’t discussing the fact that matter is condensing even as we can look out and see billions upon billions of stars and galaxies along with all the stuff we see on planets that are the result of all that condensation.

    Why Dembski thinks Lloyd’s calculation has anything to do with how matter behaves would escape me if it weren’t for the fact that I already know where ID advocates get their ideas about atoms and molecules. It goes right back to “Spontaneous Molecular Chaos” and all those tornado-in-a-junkyard types of argument; to Henry Morris and Duane Gish, and now Granville Sewell.

    Condensing matter is an indisputable, observable fact that anybody can verify instantly even as they sit at their computers asserting otherwise.

    With just that basic knowledge, there would be no way that Dembski could assert that what we see is the result of nothing but uniform random sampling that is “impossible” in the lifetime of the universe.

    This is simply another example of hijacking a paper and abusing it.

  14. Lizzie:

    But the biggest bogosity has to be in the entire idea of making an inference by rejecting a null you can’t actually compute without knowing the answer to the question you wanted to address in the first place.

    Exactly. It’s a circular argument.

    I’ve been raising this issue since at least 2006:

    Dembski’s refinement runs into trouble, though, because he admits that to determine that a system has CSI, we must estimate the probability of its production by natural means. Systems with CSI have a low probability of arising through natural means.

    This renders the reasoning circular:

    1. Some systems in nature cannot have been produced through undirected natural means.

    2. Which ones? The ones with high CSI.

    3. How do you determine the CSI of a system? Measure the probability that it was produced through undirected natural means. If the probability is vanishingly small, it has CSI.

    4. Ergo, the systems that could not have been produced through undirected natural means are the ones which could not have been produced through undirected natural means.

    (Posting as ‘Karl Pfluger’ at UD)

    The circularity also plagues other acronymic derivatives of CSI such as gpuccio’s “dFSCI” and KF’s “FSCO/I”.

    I keep raising the issue with ID proponents, but they have no coherent answer.

  15. Well, KF has responded, but unfortunately doesn’t appear to have read what we’ve written:

    Joe (& TSZ):

    There is no sleight of hand involved.

    Sampling theory is well known and is routinely used to characterise distributions on the known properties of such sampling in light of the law of large numbers [i.e. a large enough sample, often 25 - 30 in cases relevant to bell type distributions -- the far tails are special regions and tend not to be picked up, we have had the discussion about dropping darts from a ladder to a paper cut-out long since (years and years ago . . . ), you are just not going to easily hit the tail by chance with reasonable numbers of "drops" . . .] and related results.

    Let me repeat: you are confusing “sampling theory” – which is primarily to do with the size of sample you need to get a reliable estimate of the parameters of a population, and which is independent of the size of the population – with the principle of a probability distribution, which tells you the probability (vertical axis) of observing a given event type (horizontal axis), given one try/draw/trial.

    We all know that if an event is in the extreme tails of a probability distribution, you are going to need a lot of observations (usually not called a “sample”, so that may be where you are confused – usually called trials, or draws, or, indeed, observations) in order to have a decent chance of seeing it, which is why bird-watching is so boring, unless you are fascinated by house sparrows. Nobody is disputing this point.

    What we are disputing is where you got that distribution in the first place.

    Indeed, this theory and its needle in the haystack result is what lies behind the statistical form of the second law of thermodynamics.

    The basic point is a simple as the point of the proverb about (blindly) searching for needles in haystacks.

    Namely, if there is ever so much stack and ever so little needle, it is going to be very hard to find the needle in the stack.

    Right. Now, calculate the size of the stack, and the number of needles, and I might be interested.

    Until then, you don’t have a definition of CSI.

  16. “What you are saying is incorrect; I have told you it is incorrect, therefore for you to repeat it means you are lying. No, I won’t listen to your argument as to why I am incorrect, because I don’t listen to lies”.

  17. Interesting piece by Harold Morowitz on the distinctions between thermodynamic entropy and logical entropy.

    Good as far as it goes, till it starts on the ‘logical entropy’ of Life. It’s on Panspermia.org, in pursuit of what I would regard as a rather ‘unnecessary’ theory. Because replication, once in train, organises chemistry, it can in principle start from a single seed. The probability of that seed arising on a particular planet may be constrained by the probabilistic resources available on that planet. But if one brings in a whole set of planets, one increases the purchase of lottery tickets. Having started somewhere, Life (by rather vague means) can go everywhere. But it isn’t necessary to pass that spark around. The mere existence of N planets with the appropriate chemistry means that the extra probabilistic resources are being explored, regardless of any transfer mechanism. The planet(s) on which Life kicks off are the ones that possess it (duh).

    So Morowitz starts off making a valid and important distinction, but then accuses ‘darwinists’ of seeking refuge in the confusion between informatic and thermodynamic entropy. Which I don’t think is the case. Accepting the ‘materialist’ account, and travelling back through the generations to a primordial replicator – a system that had the capacity to make copies, however imperfect – that replicator is typically assumed to have had one of a limited set of configurations from the space of all sequences. Hitting upon that magical sequence is regarded as a blind search, by some dimly-imagined mechanism trying out random strings. We might have a hindsight view that this sequence has low informatic entropy, simply because of the many ways that sequence could be ordered that aren’t replicators.

    This view, however, tends not to be taken by ‘darwinists’. They don’t tend to look at sequences as informatic entities at all. There are base combinations with the physical attributes that enable replication (given an environment that permits it). Their ‘unusualness’, in a freshly-minted planet’s worth of environments over a few hundred million years, is simply unknown.

    One possibility that I think is worth exploring is that the initial ‘selector’ was not absolute sequence at all, but complementarity. Single strand RNA cyclises at very short chain lengths, preventing further polymerisation, and is susceptible to attack by the 2′-OH. But RNA strands will hybridise with their complement – they form double stranded RNA by hydrogen bonding. This stiffens the chain, reducing cyclisation, and renders -OH attack less catastrophic. The driver for hybridisation is good old-fashioned thermodynamic entropy. The absolute sequence does not matter; the stochastic presence of a complement does. Double strands are not indefinitely stable; among such a population of relatively stabilised double strands, those that can actively assemble their complement would have the advantage over those that have to ‘fish’ for it. This capacity is ‘informatic’ only in the sense that some base sequences would perform the assumed function better than others.

  18. I have asked KF many times what specific biological process he imagines is occurring while the “haystack” is being sampled.

    To me that is rather important – even if the space is almost all stack and very few needles, depending on the process that is taking place that might not even matter.

    Of course, he cannot state the process because then it would be obvious that no actual biologist would recognize that process as part of biology and there goes KF’s entire argument. It’s in ID’s interest not to get too specific I think.

    As KF himself says

    At this stage, to erect that sort of strawman in the teeth of easily accessible and abundant evidence to the contrary is not only a loaded strawman argument but one rooted in willful disregard for duties of care to truth and fairness; in the hope of profiting by a misrepresentation or outright untruth being perceived as true.

    It’s simply a strawman that (for example) proteins assemble from scratch totally randomly into complex configurations. 0 to 60 without passing 1–59. Yet KF’s entire argument depends on that parody, a strawman that would not be recognized as a true representation of observed reality were it to be spelled out in biological terms rather than couched in analogy.

    As a matter of fact, getting the right nut and bolt together and bolting it up to the right torque in the right place in a pile of galvanometer parts is already a stiff challenge for the tornado. And I would never trust an electrical “circuit” assembled by a tornado!

    Except that nobody makes that claim, KF, so why do you keep repeating it as if it accurately represents your opponents’ position?

    Oh, that’s right…..

  19. It seems to me that asking WJM to demonstrate his claim that FSCO/I is well defined and can be used to actually determine values for the FSCO/I present in arbitrary biological entities (or indeed anything at all) spooked them all badly. So much so that that particular avenue of conversation is now an “immoral rhetorical stunt” that “goes to character”.

    Cognitive dissonance crisis perhaps?

  20. Flint,

    Well, yes, but he has to provide some support for any model he proposes. Astute creationists know that the Bible is not sufficient scientific authority in the real world, hence the whole ID farrago.

    Employing KF’s analogy, we could say that without a theory of baryogenesis, no scientific theory has a “root”. We can discount all science. So why not apply the analogy to baryogenesis and be done with science?

    Because it exposes the analogy as a complete crock, of course.

    Anyhow, only a fool implies that not having a scientific theory for something advances your favourite conjecture one jot. It is to be hoped that, as a venue that values intellectual rigour, accuracy, and indeed right reason, the more enlightened patrons of UD might correct KF on this point.

  21. Mike Elzinga: Here is Lloyd’s “Computational Capacity of the Universe.” I was right; Dembski didn’t read it. I would bet he never got past the abstract.

    To emphasize just how bizarre Dembski’s use of Seth Lloyd’s calculation is in setting his CSI threshold, consider that Lloyd did his calculation for the entire universe from its beginning, according to the big bang model. Lloyd doesn’t mention dark matter or dark energy, but he mentions that most of his result comes from the computational capacity of the matter-dominated universe. Matter has already done its thing.

    His conclusion states that the universe can have performed no more than 10^120 elementary logical operations on 10^90 bits. Think about that for a moment. Everything has already happened, including the formation of the planets and the evolution of life on at least one of those planets.

    Now when Dembski or any of the ID/creationists over at UD assert that the amount of information in something, like a protein or any other subset of the universe, must require as much information or as many logical operations as the universe in order to get over Dembski’s threshold, they are saying that a subset of the universe requires more logical operations than the entire universe of which that subset is already a part.

    In other words, the universe requires more logical ops than the universe requires.

    Dembski has also swallowed his tail.

  22. Today, William got an incredible deal on an old Victorian house. Highly satisfied with his business acumen, William settled in for a blissful night of sleep in his new home.

    SLAM!

    William woke with a start. He listened intently. But he didn’t hear anything, so he settled back to sleep.

    Cree..eak

    William listened even more closely this time until, after a bit, the creaking noise died away. For some reason, he recalled the seller’s maniacal laughter just after William signed the papers to buy the house.

    SLAM!

    William was trembling and his teeth were rattling. He thought about getting out of bed to investigate. Instead, he pulled the covers over his head.

    Cree..eak

    Hmm, William thought. Being a famous design theoretician, I can use the patented (not really) Dembski Inference to determine if the pattern is being caused by a ghost, er, some unspecified intelligent cause.

    SLAM!
    Cree..eak
    SLAM!
    Cree..eak
    SLAM!
    Cree..eak
    SLAM!
    Cree..eak

    Dembski Inference

    χ = –log2[ BIGNUM · φS(T) · P(T|H) ]

    φS(T) and P(T|H) are independent measures that, taken together, reliably signal the action of an intelligent cause, even if nothing is known about how a pattern arose.

    Per Dembski, there are two possible results. A negative is not determinative: it might be design, it might not. On the other hand, a positive means the pattern is most certainly designed.

    Let’s assume the sequence of Cree..eaks and SLAMs! appears superficially random. φS(T) is large, as the shortest description is the entire sequence itself. This leads to a negative result. Unfortunately, this is not very comforting to our poor hero. It might still be a ghost!

    Not being able to sleep, William ponders some more, and discovers a chaotic function that can describe the apparently random sequence. Now φS(T) is small, as the sequence has a short description. How do we determine a plausible probability hypothesis, H? A uniform distribution, perhaps; or, better, a distribution that accounts for the overall observed frequencies. In either case, P(T|H) is very small, so we have a positive match for CSI.

    It’s a ghost!!

    (Or a loose shutter.)
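William’s two outcomes are easy to make concrete. A minimal sketch of the Dembski Inference in Python, with purely invented values for φS(T) and P(T|H), and 10^120 standing in for BIGNUM:

```python
import math

def chi(phi_s, p_t_given_h, resources=10**120):
    """Dembski Inference: chi = -log2(resources * phi_S(T) * P(T|H)).
    A positive chi is supposed to license a design inference."""
    return -math.log2(resources * phi_s * p_t_given_h)

# Scenario 1: the SLAM/Cree..eak sequence looks random, so its shortest
# description is the sequence itself and phi_S(T) is huge (invented value).
print(chi(phi_s=10**60, p_t_given_h=2**-200) > 0)  # False: negative, not determinative

# Scenario 2: a short chaotic-function description makes phi_S(T) small,
# while P(T|H) under a uniform null stays tiny (again, invented values).
print(chi(phi_s=10, p_t_given_h=2**-500) > 0)      # True: "It's a ghost!"
```

Note that nothing in the arithmetic distinguishes a ghost from a loose shutter; the inference turns entirely on which H you happened to plug in.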

  23. Another Bell – Jocelyn Bell Burnell – comes to mind. Regularity was observed in the data, a possible sign of intelligence, but actually deriving purely from rotation. Rotation gives a regular, economically-described signal as surely as night follows day …

    The mechanism that ID-ers think is insufficient to the job of CSI-generation is point mutation and Natural Selection – a series of tiny changes every one of which confers an advantage. But sequence duplication is a powerful generator of regularity, and stochasticity a powerful means of breaking free of a PM+NS ‘island’.

    If one has a short sequence that generates a turn of a helix, duplication will give two turns, another duplication four, and so on. The longer helix can itself be recombined elsewhere, as a module in an unrelated sequence. In a modern, 20-acid system, subsequent substitutions can scramble the underlying regularity of the basic repetitive mechanism. But when you look closer, all you really need is a simple pattern of hydrophilic and hydrophobic residues. Looking at the longer string, and assuming it derived from improbable throws of many strings of equal length, composed of many different kinds of subunit, betrays a blindness to actual biological mechanism.
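The duplication point can be illustrated with compression as a rough stand-in for description length: repeated doubling of a short motif grows the sequence exponentially while its compressed size barely grows, which is exactly the kind of regularity that makes φS(T) small. A toy sketch (the motif is invented):

```python
import zlib

motif = b"LEALKKHL"   # invented stand-in for a one-turn helix motif
seq = motif
for _ in range(6):    # six whole-sequence duplications: 8 -> 512 residues
    seq = seq + seq

print(len(seq))                         # 512
print(len(zlib.compress(seq, 9)) < 64)  # True: compressed size stays tiny
```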

  24. Seems straightforward to me that to the creationists, the bible is TRUTH. This is by definition, not subject to question. The bible is Truth because God said so, and God said so in the bible which is how you know it’s Truth.

    And ALL ELSE is a combination of trying to make non-negotiable religious faith sound scientistical, and trying to make it sciency-sounding enough to satisfy the courts.

    And that means ID cannot be exposed as a crock, because it’s not a crock. God said so. If science disagrees, science must be a crock. If science only partially disagrees (the part that offends the specific religious doctrine ID was crafted to defend), then only that part of science is a crock (as are the subsequent accusations of inconsistency, of course).

    But there is a reason why the ICR people must sign a loyalty oath to join, and why creationist sites do not tolerate dissent. When foregone conclusions cannot be altered, they must be rationalized. We can find millennia of this stuff, like Aquinas, constructing impressively convoluted justifications, misdirections, misrepresentations, cherry-picking, and the like.

    I believe the UD people do indeed value intellectual rigor, accuracy, right reason, and enlightenment, but ONLY in the service of TRUTH. Just like you and me, only different.

  25. Have the folks at UD ever resolved the
    contradiction between Upright’s claim that information must be embodied and KF’s claim that “information is a distinct entity in the cell”?

    Seems to me that ID reifies a metaphor.

  26. When foregone conclusions cannot be altered, they must be rationalized. We can find millennia of this stuff, like Aquinas, constructing impressively convoluted justifications, misdirections, misrepresentations, cherry-picking, and the like.

    You have just explained theology.

  27. Eric asks timothya:

    timothya:

    Two questions:

    1. Are you suggesting that we have to know the exact, precise, unequivocal probability of event X occurring by purely natural processes before we can draw an inference that event X did not occur by purely natural processes?

    No.

    2. On what basis do forensics experts and archaeologists draw an inference to design? Must they first lay out a precise formulation of all possible probabilities of the item in question having been produced by purely natural processes?

    No (assuming that by “natural” you mean “unintended”, or something similar).

    Look: Dembski proposed a formula – a metric – for inferring design, based on Fisherian hypothesis testing, that involves determining that the candidate pattern is in the rejection region of a probability distribution under the null of non-design. So, clearly, to calculate that metric we need the probability distribution under the null. Dembski provides no way of calculating that distribution that does not involve first knowing what non-design processes can do. And if we knew that, he wouldn’t need his calculation. So his entire argument is circular.

    That doesn’t mean that inferring design from a pattern isn’t possible; what it does mean is that CSI, and its relatives, are useless for doing so.

  28. KF

    There is no need for exact probability calcs or estimates.

    http://www.uncommondescent.com/science-education/oldies-but-baddies-af-repeats-ncses-eight-challenges-to-id-from-ten-years-ago/#comment-452664
    That CSI/FSCO/I can’t be used is actually of no concern!

    All we need are circumstances that sampling theory will let us see are of the needle-in-haystack variety, where it is simple to see that by its nature, the bulk of possible configs in a relevant situation will be gibberish.

    And therefore Lizzie’s program generated CSI! If that’s “all that is needed” then that has already been demonstrated to be possible via GAs.

  29. davehooke:
    Flint,

    It is to be hoped that, as a venue that values intellectual rigour, accuracy, and indeed right reason, the more enlightened patrons of UD might correct KF on this point. [emphasis added]

    There’s your problem right there.

    Whether it is 10^120 or 10^300 is actually unimportant — the point is that even rather modest adaptations cannot be produced by the tornado-in-a-junkyard even once in the whole history of the universe.

    I think the other points Elizabeth is making are more central.

  31. I’ve been agreeing with this at my Dembski/CSI posts at Panda’s Thumb. Elizabeth has expressed it neatly.

    You can also summarize it in a way that eliminates need to refer to CSI.

    * We want to find out whether an adaptation this good or better can be produced by natural selection and random mutations (and other ordinary evolutionary forces).

    * So first we evaluate the probability that an adaptation this good or better can be produced by natural selection and random mutations (and other ordinary evolutionary forces).

    … and then actually, we’re done, without getting to the CSI part.

  32. Lizzie:

    That doesn’t mean that inferring design from a pattern isn’t possible; what it does mean is that CSI, and its relatives, are useless for doing so.

    I’ve written about this before in various other blog-comments, but it bears repeating here: Real scientists can and do detect design. The standard methodology involves forming a hypothesis of how the maybe-Designed thingie was Manufactured, and then testing that hypothesis of Manufacture.
    ID (as she is spoke by Behe/Dembski/etc), contrariwise, directly and explicitly ignores the question of Manufacture. ID-pushers directly and explicitly claim that Design can be detected in the absence of any knowledge of, or hypothesis regarding, the maybe-Designed thingie’s causal history. If that claim were actually true, that would be way the hell nifty. Alas, it is not true…

  33. Joe Felsenstein:

    Whether it is 10^120 or 10^300 is actually unimportant — the point is that even rather modest adaptations cannot be produced by the tornado-in-a-junkyard even once in the whole history of the universe.

    I think the other points Elizabeth is making are more central.

    I am not missing Elizabeth’s or your point. I read Dembski’s “Specification” paper; and I also read Lloyd’s paper. I suspect you might appreciate Lloyd’s paper; it is based on good physics and is not difficult to follow.

    The universe and all events in it did happen; that is what Lloyd’s calculations are based on. Like anyone who actually understands the relationship between physics and the ability to carry out logical operations, Lloyd relates the rates of operations and flipping bits to energy; he doesn’t abuse entropy, for example.

    I elaborated on the irony of Dembski’s use of Lloyd’s 10^120 elementary logical operations over on your thread on Panda’s Thumb, and also in my other comment on this thread.

    This is what Lloyd says explicitly on page 7 of his paper.

    What is the universe computing? In the current matter-dominated universe most of the known energy is locked up in the mass of baryons. If one chooses to regard the universe as performing a computation, most of the elementary operations in that computation consists of protons, neutrons (and their constituent quarks and gluons), electrons and photons moving from place to place and interacting with each other according to the basic laws of physics. In other words, to the extent that most of the universe is performing a computation, it is ‘computing’ its own dynamical evolution. Only a small fraction of the universe is performing conventional digital computations.

    This is not a trivial statement. Everything in the universe has happened; stars, galaxies, planets, and life on at least one planet. Lloyd has estimated the number of logical operations acting on 10^90 bits to make the universe as we know it.

    Lloyd didn’t mention dark matter or dark energy; and there are many outstanding issues that are still being researched. But at least Lloyd offers a pretty concise argument for what it would take to actually simulate the universe according to the standard big bang, inflationary theory on a computer.

    The irony of Dembski’s use of Lloyd’s calculation should not be missed. Dembski not only can’t produce a probability distribution function for the occurrence of events in the universe, he didn’t read, or didn’t comprehend, Lloyd’s calculation.

    By asserting that a subset of events in the universe must meet his threshold in order to infer design, Dembski is claiming that this subset must contain more specified complexity and require more logical operations than the entire set to which it belongs.

    With Lloyd’s paper, Dembski had in his hands an upper limit of what it takes to build a universe. And Lloyd showed examples in his paper that illustrate the obvious fact that the occurrence of any subset of events in that universe requires less. That is a pretty good first approximation.

  34. Kairosfocus loaded up this:

    Chi_500 = Ip*S – 500, where once Chi goes positive on a solar system scope (our practical cosmos for chemical interactions, absent invention of a warp drive) we can be assured blind search is all but utterly certain to fail. A practical impossibility.

    For the millionth time . . . what is it in random mutation and natural selection that he thinks requires a “blind search” of cosmological proportions?
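For reference, the log-reduced metric KF quotes is simple enough to sketch. As I read it, Ip is a claimed information content in bits and S is a dummy variable (1 if the configuration is judged “specified”, else 0); that reading is my assumption, since he rarely spells it out:

```python
def chi_500(ip_bits, s):
    """Chi_500 = Ip*S - 500 (KF's log-reduced form).
    ip_bits: claimed information content in bits.
    s: specificity dummy, 1 if judged 'specified', else 0 (my reading)."""
    return ip_bits * s - 500

print(chi_500(250, 1))   # -250: under the 500-bit solar-system threshold
print(chi_500(1000, 1))  # 500: positive, so blind search is declared hopeless
print(chi_500(1000, 0))  # -500: unspecified configurations never pass
```

Nothing in the formula models the search process at all; that the relevant null is a blind search is assumed, not derived, which is rather the point of the objection.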

  35. KF knows that evolution does not operate by blind search. It’s been pointed out to him again and again.

    He also knows that this is fatal to his argument, so he chooses to ignore it.

  36. Well, he’s taking refuge in Search for a Search, which does concede that some searches do do better than blind search, but claims that searching for the good searches is a blind search.

    I think that’s his point (but it supercedes CSI).

  37. Oh, I think Kairosfocus sees the question. I think that is why he kills every thread with a quilter’s nightmare whenever someone hones in on a key question.

  38. 1. The word is spelled “supersedes”
    2. Interesting that the phrase “to home in on” is rapidly becoming “to hone in on”

    I’m OK with language developing with use – it just grates sometimes.

    Otherwise…

    KF and Dembski DO see the questions – this is not the only place they have been asked (they’ve been asked at UD but deleted; they have been asked many times in many other places; and in any case JoeG reports or tattles on TSZ to KF, although whether his reports are accurate must be in doubt)

    They simply dare not even try to answer the questions in open forum, because they know their case is so weak.

    All this guff of KF’s about TSZ being “enabling” of various forms of evil is smokescreen – cover for his own deficiencies and lack of moral fibre.

    And yet Christ himself, upon whom you’d think KF would model his every waking moment, is reported to have gone into the temple and ejected the usurers i.e. attacked wrong-doing in its lair.

  39. Alan Fox:

    About my answer to Eric Anderson on UD: it is “in moderation” along with a couple of other contributions to the same thread. Whether they ever emerge depends on that site’s moderation policy. Not that my opinions are of much importance – the subject matter has been well canvassed in this thread.

  40. timothya:
    Oh, I think Kairosfocus sees the question. I think that is why he kills every thread with a quilter’s nightmare whenever someone hones in on a key question.

    Well, he is prima facie confused on the difference between a sampling distribution and a probability distribution.

  41. It’s so unfair! Why don’t I get moderated? I suspect it’s because I’m not very effective. If I spot it, and have the time, I will try and pick up on and post a point that seems pertinent, especially as the UD denizens are less inclined to peer out into the daylight currently. You and anyone else can always give me a heads-up if there’s something particular you’d like passed on.

  42. Perhaps discussions about rebarbative neologisms should be moved to a thread of their own so that Grumpy Old Men can wave their walking sticks without derailing threads.

  43. Let me get this right. Kairosfocus posted this:

    The matter becomes instantly clear once you do the log reduction on Dembski’s result and then use a reasonable upper limit on search and observation resources, here the solar system’s 10^57 atoms and ~10^17 s.

    Does he seriously believe that a log transform changes the relationship between the dependent variable (the one at the left of the equals sign) and the set of independent variables (the explanatory ones on the right of the equals sign)?
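timothya is right, and it is easy to check numerically: –log2 is strictly monotonic, so the log reduction changes the units (bits) without changing which side of the threshold anything falls on. A quick sketch using Dembski’s 10^120 bound:

```python
import math

# phi_S(T) * P(T|H) products straddling Dembski's 10^-120 boundary
for e in (-140, -130, -125, -119, -110, -100):
    product = 10.0 ** e
    chi = -math.log2(1e120 * product)   # Dembski's log form
    # chi is positive exactly when the raw product is below 10^-120:
    # the log changes the units, not the relationship.
    assert (chi > 0) == (product < 1e-120)
print("ordering preserved under -log2")
```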

  44. And now Kairosfocus posted this at UD:

    1 –> First problem, there is no good reason to see that OOL, the origin of the cell based life, is amenable to Darwinian mechanisms.

    So now the origin of life is the same as the origin of cell based life? What does this mean?

  45. I think KF is defining OOL as the origin of cell-based life, but it’s hard to say from what he’s written. But whatever he means in terms of OOL, his “first problem” makes no sense. That is, however one wishes to define “life”, one characteristic of life is reproduction. And so long as there is any kind of reproduction, there is going to be evolution. That’s the whole point of Evolutionary Theory.

  46. But isn’t that one of the major problems with ID and its proponents?
    When we read up on aspects of evolutionary theory, we find (the vast majority of the time) that terms are clearly defined and commonly understood. Discussions involving them are therefore clearly understandable and often fruitful.

    Not so with ID proponents. Witness all the hooraw about the meaning of the term “information”. It’s one of the reasons ID is not scientific – it and its pushers lack scientific rigour.

    That sentence of KF’s is typical – loosely constructed, and potentially ambiguous. It’s almost as if they do it with a view to being able in the future to wriggle out of a difficult position by claiming they were misunderstood.

  47. ID can easily be defined as the science of labeling gaps.

    If you want to see some dancing, ask KF to calculate the change in FSCI observed in the Lenski experiment.

    If he says there was no change, then FSCI is irrelevant to the concept of evolution, because evolution can produce new functionality without increasing FSCI.

  48. timothya:
    Let me get this right. Kairosfocus posted this:

    Does he seriously believe that a log transform changes the relationship between the dependent variable (the one at the left of the equals sign) and the set of independent variables (the explanatory ones on the right of the equals sign)?

    That neg log thing really annoys me. Sure, it’s sometimes convenient to use logs to multiply (I date from the log-table/slide rule age) but I have a suspicion that its purpose in ID discourse is to:

    a) make the thing look really mathy
    b) make it look digital (answer in bits)
    c) misdirect attention from the hole in the middle.

  49. Lizzie:

    a) make the thing look really mathy

    That’s what Dembski is all about. Writing symbol-laden bullshit that only a student of statistics can follow.

    Contrast Einstein, whose work really was astonishing and transformative. He could conjure up thought experiments that could be followed by reasonably educated laymen.

    I will give KF credit for his needle and island metaphors. They are understandable, testable, and unfortunately for him, wrong.

    But at least they can be wrong, and that’s a start.

  50. petrushka:
    ID can easily be defined as the science of labeling gaps.

    If you want to see some dancing, ask KF to calculate the change in FSCI observed in the Lenski experiment.

    If he says there was no change, then FSCI is irrelevant to the concept of evolution, because evolution can produce new functionality without increasing FSCI.

    I would love to see an ID proponent address this in a top level post here. When I was last active at UD, I strove mightily to get some examples of CSI calculations, to no avail. Only vjtorley made an honest attempt to follow Dembski’s math, but he rejected the whole concept for coming up with the wrong answer (namely that gene duplication does create CSI).

    Lenski’s results are a perfect test for Dembski’s metric (although I would still like to see the Steiner problems similarly evaluated). Is any ID proponent up to the challenge?

  51. Interestingly, Abel et al. do define their null, and give it as an equiprobable distribution of sequences of which a subset give rise to functional proteins.

    Of course, given a non-functional starting sequence, not all future sequences are equiprobable, even if none of them is beneficial or deleterious, so that seems wrong for starters.

  52. KF,

    Like onto it, we know that the search challenge in configuration spaces will be so overwhelming that it is not plausible for the entities to come about by undirected chance and necessity.

    Is it possible you could provide a citation for that claim? That biologists claim that entities like “DNA, RNA and proteins etc, as well as the organised nanomachines” come about by ‘undirected chance and necessity’?

    Otherwise you have spent many millions of words destroying a claim that nobody is actually making.

  53. c) misdirect attention from the hole in the middle.

    That calls out for a name; the Doh!-Nut argument. ;)

  54. Joe Felsenstein:
    (of course I meant 10-to-the-120 and 10-to-the-300).

    Until such time as the superscript tag is implemented here, the caret character—”^”—is an acceptable substitute. As in: 10^20 = ten to the twentieth power, and so on.

  55. Joe

    Natural selection and drift are both blind and mindless processes

    Thank you Joe for agreeing with me. They are indeed processes and not the random brute force searching misrepresented by KF with his “needles” analogy.

    All mutations are said to be errors, mistakes and accidents, ie undirected chance.

    Indeed, but it’s the framework, the process that then takes that raw material and uses some, saves some for later (perhaps!) and discards some that’s important, as you rightly point out.

    Cheers :)

    Link

  56. (OK, maybe this is supposed to go into the Sandbox): I used the SUP and /SUP tags. This did not show up correctly in the WYSIWYG as I typed. But when I used the Edit button after submitting it, the tags were still there, and when I re-saved that then the superscripting seemed to work.

    But then, on coming back later to TSZ the superscripting formatting had disappeared.

  57. Proof that non-material designers are at work.

    Off topic, but I get the same captcha every time.

  58. ATTENTION kairosfocus, Eric Anderson, Phinehas and other confused souls at UD.

    Please tattoo the following on a prominent part of your anatomy, and refer to it often:

    Evolution is not a blind search

    I repeat:

    Evolution is not a blind search

    A blind search is one in which a point is selected completely at random from the entire search space. If the selected point falls within the target region, then the search has succeeded. Otherwise the search continues, and another point is selected completely at random from the entire search space.

    The size of the haystack and the number of needles in it are relevant only if we are talking about a blind search. Since evolution is not a blind search, these numbers are irrelevant.
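That definition translates directly into code. A minimal sketch, with invented space sizes and trial budgets:

```python
import random

def blind_search(space_size, targets, max_trials):
    """Each trial draws a point uniformly at random from the ENTIRE
    space, with no memory of previous trials: a blind search."""
    for trial in range(1, max_trials + 1):
        if random.randrange(space_size) in targets:
            return trial      # trial number on which a target was hit
    return None               # budget exhausted, nothing found

random.seed(42)
# Small haystack: success comes quickly.
print(blind_search(10**3, targets={7}, max_trials=10**5) is not None)  # True
# Huge haystack, one needle: failure is all but certain.
print(blind_search(10**18, targets={7}, max_trials=10**4) is None)     # True
```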

  59. Just to hammer the point home:

    In terms of the needle and haystack metaphor, the problem faced by evolution isn’t to find needles by probing random locations throughout the entire haystack — that would be a blind search — but something quite different. Evolution explores the haystack by starting from the current needle and exploring the points in the haystack that it can reach from there. If it finds a needle at any of those reachable points, then it proceeds to search from the new needle.

    (Be cautious — the haystack metaphor has many of the same limitations as the ‘islands of function’ metaphor, as explained here. For example, the haystack has far more than three dimensions, and needles appear and disappear over time.)

    Since evolution is performing a local search and hopping from one needle to the next, the size of the entire haystack is irrelevant, as is the total number of needles. What matters is the local distribution of needles along the path(s) traversed by evolution.

    Thus, unless you know the structure of the haystack and the distribution of reachable needles at every point in time, you can’t even begin to estimate the probability that a particular needle could be reached by evolution.

    So not only is the CSI argument circular; you couldn’t possibly calculate it even if there were some point to doing so, because you don’t have the requisite information. You don’t know where the reachable needles are.

    And in case you’re tempted to argue that the evolutionist also needs this information to make his or her case, guess again. The hypothesis that evolution is unguided, and that the needles have a distribution that is favorable to evolution, fits the evidence literally trillions of times better than the beleaguered ID hypothesis.

    Given the evidence, there is no rational justification for clinging to the ID hypothesis.
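The needle-hopping picture sketches just as easily. Here the “haystack” is a hypothetical one-dimensional space whose needles happen to form a reachable trail; the structure is invented purely to show that only the local distribution of needles matters, not the size of the space:

```python
import random

def local_search(needles, start, reach, steps):
    """Hop from the current needle to any other needle within `reach`
    of it; the rest of the haystack is never sampled at all."""
    current = start
    for _ in range(steps):
        nearby = [n for n in needles
                  if n != current and abs(n - current) <= reach]
        if not nearby:
            break             # stranded: no reachable needle
        current = random.choice(nearby)
    return current

random.seed(0)
# Needles every 10 positions along a trail; imagine the space itself
# extending to 10**18 beyond the trail -- that vastness is never visited.
needles = set(range(0, 1001, 10))
end = local_search(needles, start=0, reach=15, steps=500)
print(end in needles)  # True: every hop lands on a needle
```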

  60. Joe doth protest:

    Joe, April 23, 2013 at 5:29 am

    So keiths sez that unguided evolution is not a blind search but a blind exploration?

    explore:

    2- To search into or travel in for the purpose of discovery

    search:

    1-To make a thorough examination of; look over carefully in order to find something; explore.

    There doesn’t seem to be much of a difference between searching and exploring. And that means keiths is a bigger dolt than we thought.

    I’m trying to figure out where Joe came up with “blind exploration”. I suspect he just doesn’t understand what you meant when you wrote, ” Evolution explores the haystack…”, still being stuck on the “blind search” misconception. Earth to Joe – evolution is not blind.

  61. Joe continues:

    Intelligent Design Evolution is not blind. Unguided evolution is blind, mindless and without purpose.

    A) There is no reference to anything called “Intelligent Design Evolution” in any scientific textbook or journal that I can find. Please provide a scientific reference that describes what this is.

    B) Define “blind” as you are using it above.

    I can provide references to support my claims.

    Oh, I have no doubts that you can provide what you believe to be a reference, but I’m skeptical that any reference you provide will actually support your claim above. But hey…surprise me Joe. I’m in the mood to be astonished…

  62. Third time’s a charm for Joe:

    Robin’s big comeback:

    A) There is no reference to anything called “Intelligent Design Evolution” in any scientific textbook or journal that I can find. Please provide a scientific reference that describes what this is.

    There is no reference to anything called “blind watchmaker or unguided Evolution” in any scientific textbook or journal that I can find. Please provide a scientific reference that describes what this is.

    Why in the world would I provide a reference for something I never claimed or accepted? Of course there’s no reference for “blind watchmaker or unguided Evolution”; you made the name up. And yet oddly, you can’t – contrary to your previous claim – provide a reference for your supposed “Intelligent Design Evolution”. As you say, “good luck with that.”

    Here’s an actual reference to the Theory of Evolution as I understand it and accept it. You can do with this whatever you wish:

    http://evolution.berkeley.edu/evolibrary/article/0_0_0/evo_02

    B) Define “blind” as you are using it above.

    The same way that Coyne, Dawkins et al., use it.

    Bzzzzzz! I’m sorry, but that’s a fail. So much for that reference claim of yours. Nice try though.

    Here, eat this:

    Natural selection and evolution: material, blind, mindless, and purposeless

    And I have noticed that you haven’t supported your claim. How typical.

    So here we have Robin, eating worms again. Yum, yum.

    Oh Joe…I read Coyne’s essay. These things are not that hard to find. And what’s funny is, he actually defines what he means when he uses the term “blind”, which oddly you left out:

    Coyne: But when reading Futuyma’s statements, I remembered that some people object to such a description as a needlessly “theological” assertion: a flat and insupportable claim that natural selection was not designed by, and is not being guided by, gods. How can you be so sure, some theologians say, that there really isn’t a goal, purpose, or mind behind evolution?

    And

    Coyne: In my classes, however, I still characterize evolution and selection as processes lacking mind, purpose, or supervision. Why? Because, as far as we can see, that’s the truth. Evolution and selection operate precisely as you’d expect them to if they were not designed by, or steered by, a deity—especially one who is omnipotent and benevolent.

    So there you have it. Coyne is using the term “blind” to mean “without deity guidance”. Is that what you mean Joe? Because that isn’t what Keiths is addressing above and isn’t what KF is addressing either. So, you are either misunderstanding the discussion and use of the term, or you are equivocating. In either case, I’m afraid using Coyne’s essay merely demonstrates you don’t know what you are writing about in your rebuttal of Keith’s comment. Thanks for playing!

  63. Actually, evolution is blind, in the sense that mutations happen randomly with no regard to their utility for the organism. Evolution has no foresight.

    What KF doesn’t get is that evolution doesn’t search the entire haystack (aka ‘config space’), so all of his talk about its huge size is utterly irrelevant. What matters is the local structure: how are needles distributed in the vicinity of the needles that have already been ‘occupied’ by evolution?

    Where the entire haystack is more relevant is not for evolution, but rather for the origin of life. But here KF makes more blunders:

    1. He assumes that the initial replicator is hopelessly complicated, and that the haystack is therefore enormous, but he gives no justification for this bogus assumption.

    2. He assumes that each point in the haystack is equally likely to be searched, thus totally ignoring the role of physics and chemistry in the process.

    Between inflating the size of the haystack and misunderstanding the way it is explored, both by evolution and by nature during the process of OOL, KF’s position is a hopeless muddle.

  64. KF is simply wrong about the sparseness of needles. Nearly two thirds of the bases on any arbitrary protein coding sequence can be replaced without breaking it. There are critical points that break it, but some of them simply change the fold.

    If the sequence is a duplicate, there’s lots of wiggle room for neutral drift. There’s room for drift even if it’s not a dupe.

  65. Absolutely. The point I am trying to make to Joe is that he’s misunderstanding what both you and Coyne (et al.) are noting. Evolution is not looking for a goal, so it isn’t “blind” to how to get to that goal; there is no goal. And while it is blind in the sense that it can’t anticipate how any given change will affect the success of a group of organisms, changes are not selected blindly, but are directed by the relative survival of offspring with those changes. In other words, finding any given needle creates parameters for the likely needles that can next be found, closer needles being more likely than farther ones.

  66. KF

    F/N: When we see people trying to argue that the “blind watchmaker” mindless chance variation driven processes THEIR SIDE HAS PUT FORWARD are “not blind” — not non-foresighted — that takes the cake.

    None are so blind, etc etc.

  67. According to the dictates of right reason

    1. Unintelligent processes can’t do intelligent things
    2. Specified complexity requires intelligence
    3. A cell is specified and complex
    4. Material processes aren’t intelligent
    5. Therefore GOD designed living things

    I’ve not encountered an ID proponent capable of seeing the errors in this argument. Or the very similar CS Lewis argument that logic can’t be the result of material processes.

    They assume what they want to conclude, then introduce spurious extra steps to try to persuade themselves and others that they are actually reasoning.

    It’s the same with their “evolution can’t do the search” arguments. Step one, assume the conclusion…

    It’s all they ever do.

    Almost everything I would want to say to anyone at UD about CSI is covered in this thread. I think I’ll just link to it when CSI comes up in future.

  68. OMagain,

    KF

    F/N: When we see people trying to argue that the “blind watchmaker” mindless chance variation driven processes THEIR SIDE HAS PUT FORWARD are “not blind” — not non-foresighted — that takes the cake.

    Another “weasel” argument in the making there, perhaps.

    We’d end up, no doubt, with evolution being “quasi-blind”.

    I find this a bit wearisome. For my money, evolution IS blind as the word is commonly understood. It is not searching for anything. It has been shown that what IDists call “CSI” can easily be produced by well-known undirected, non-foresighted processes.

    I prefer to think of it this way: the “information” is simply not useful until an environment is encountered where it is. Until that happens it may be kept, if it places no particular cost on the organism, or lost for any of a number of reasons. Evolution doesn’t care – there’ll be another mutation along in a minute, or the organism goes extinct, or suffers a drastic bottleneck. Evolution doesn’t care.

    I agree with davehooke below. CSI as a useful metric has been pretty well coshed (again) in this thread, and the circularity of the associated “logic” thoroughly exposed.

  69. My problem with the blind metaphor is that in some contexts it implies that there is something for evolution to “see” in the first place. I think that’s the root of a lot of people’s misunderstanding, and is certainly part of Dembski’s built-in mischaracterization in The Search for a Search and the whole “latching” inanity concerning the Weasel program.

    Evolution, as I understand it, doesn’t search. It isn’t looking for anything. It provides a mechanism for biological variation that is either useful, neutral, or detrimental to biological entities in given environments. The environments as well are variable, thus there is an incredible array of biological/ecological combination sampling, along with an incredible array of biological/functionality sampling. But in no case is evolution “looking” for any given outcome, combination, or characteristic. Either an outcome works, or it doesn’t.

    Why the need to describe this process in terms of “searches” or “blindness”? Such terms, at least to me, imply analysis and/or goal seeking and I just don’t see evolution doing either.

  70. I disagree slightly. Structures are objective, and coding sequences are objectively tied to structures as interpreted templates.

    I think it is fair to think of sequences as findable objects.

    But sequence function is complex and multidimensional. One cannot know without trying whether a novel sequence is a needle or a straw. That is the blindness.

  71. While sequences are certainly objectively tied to structures, I don’t think evolution is “looking” for any particular structures (needles). In fact, evolution doesn’t “know” what any given structure is, never mind what any sequence is, so how can it – even by analogy – be blindly “looking” for such things? Those types of metaphors just don’t make sense to me.

    I get Keith’s concept of “exploring” fitness landscape (though I tend to think of it more as “sampling” the fitness landscape), but even then I don’t conceptualize the process teleologically trying to find anything in particular. I get where you (and others) are coming from on this, but I just have difficulty thinking of evolution in those terms. To me, it would be like conceptualizing Earth’s hydrologic cycle as exploring the environment for evaporated water and then blindly searching for a place to dump it. The process itself doesn’t “know” what water is, nor does it “know” what the environment is; it just works because the properties of water allow for evaporation and condensation.

  72. kairosfocus, via the loudspeaker in the ceiling, offers two breathtakingly inane examples of how to detect design using CSI:

    Example 1, based on a definition of ‘aardvark’:

    CALC: 7 bits per ASCII character (not 5 per Baudot character) * 202 characters = 1414 bits.

    Functionally specific, so S = 1.

    Chi_500 = 1414 * 1 – 500 = 914 bits beyond the solar system threshold.

    Designed.

    KF

    Example 2:

    Your remarks just above are 123 ASCII characters in standard English [clearly objectively recognisable], at 7 bits per character.

    This gives us I*S = 861 * 1 = 861 functionally specific bits.

    Applying the Chi_500 metric that AF tries to imagine does not exist or is unusable:

    Chi_500 = 861 – 500 = 361 bits beyond the solar system threshold

    This is a second live demonstration in several days to AF.

    Let us see his next excuse for — sadly, predictably — dismissing and misrepresenting it.

    KF
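The quoted “Chi_500” recipe is simple enough to sketch in a few lines (this Python function is my rendering, not KF’s): I = 7 bits per ASCII character, S = 1 for any string deemed “functionally specific”, minus a 500-bit threshold.

```python
# Sketch (mine) of the Chi_500 recipe as quoted above:
# Chi_500 = I * S - 500, with I = 7 bits per ASCII character.

def chi_500(text: str, s: int = 1) -> int:
    """Bits 'beyond the solar system threshold' per the quoted recipe."""
    return 7 * len(text) * s - 500

# Reproduces both quoted examples:
print(chi_500("a" * 202))  # 914  (Example 1: 202 characters)
print(chi_500("a" * 123))  # 361  (Example 2: 123 characters)
```

Note that the content of the text never enters the calculation, only its length: any “functionally specific” string over 72 characters (7 × 72 = 504 > 500) comes out “designed”, which is exactly the criticism the surrounding comments press.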

  73. I may be a little slow, but I think I am beginning to see a pattern here.
    IDists who are numerate (e.g. Dembski) are careful to always talk in the abstract about the ‘far tail of a probability distribution’ and never offer up example calculations.
    IDists who are not numerate (e.g. Kairosfocus) are willing to do the calculations, and they always get them wrong.
    Two questions for Kairosfocus et al:
    1) Does your calculation of Chi_500 rely on the assumption that
    P(A and B) = P(A) x P(B)?

    2) Is the statement
    P(A and B) = P(A) x P(B)
    true?

  74. The CSI calculation of Dembski boils down to a complete obfuscation of a very simple notion from statistics. That kairosfocus character simply obfuscates even further.

    If an event has a probability p, and the number of trials attempting to get that event is N, then the mean number of successes in achieving that event is Np.

    Dembski’s calculation of Χ – after many paragraphs of rationalizations and side tracks – boils down to

    Χ = –log2(Np) = log2(1/p) – log2(N)

    with N = 10^120 · φS(T) and p = P(T|H).

    The 10^120 was taken from Seth Lloyd, and is Lloyd’s estimate of the number of logical operations it took to make our universe. Including the φ allows for the possibility of multiple universes being involved in the number of trials, with 10^120 logical operations per universe.

    Assuming only one universe, the calculation comes down to

    Χ = log2(1/p) – log2(10^120) ≈ log2(1/p) – 400

    So 1/p is the number of trials required to get one instance of the specified event, and taking the logarithm gives the amount of “information” supposedly contained in that sample space, and log base 2 gives that “information” in bits. Note that it assumes uniform, random sampling. I am guessing that this would be what Dembski and Marks call “endogenous information.” The 400 would be the “exogenous information,” and their difference would be the “active information.”
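Under the commenter’s reading (uniform random sampling, a single universe, N = 10^120 trials), the arithmetic can be checked directly; this sketch is my addition, not Elzinga’s:

```python
import math

# Sketch (mine) of the reduction described above:
# X = -log2(N*p) = log2(1/p) - log2(N), with N = 10^120 and p = P(T|H).

LOG2_N = 120 * math.log2(10)  # log2(10^120) ~= 398.6, the "400" above

def chi(p: float) -> float:
    """Specified-complexity score assuming uniform random sampling."""
    return math.log2(1 / p) - LOG2_N

# X only goes positive once p drops below roughly 2^-398.6 ~= 10^-120:
print(chi(1e-100) < 0)  # True
print(chi(1e-150) > 0)  # True
```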

    As Elizabeth and Joe Felsenstein point out, the problem is coming up with a distribution for P(T|H).

    In addition, Seth Lloyd’s calculation already includes the events being questioned by ID advocates – such as the origins of life and evolution – in the 400 (or 500 in some of Dembski’s calculations). If Dembski wants to use Lloyd’s number in his CSI and apply it to events in the universe, it therefore makes no sense to assert that the “endogenous information” they contain is greater than the “endogenous information” in the entire universe.

    Said more directly, the N trials to make the universe already produced the event in question; therefore the number of trials required to produce that particular event has to be less than the number of trials to produce all the events in the universe.

    So here again we see the circularity contained in the assumption that such events do have more such information. One can very easily enumerate permutations and combinations of things and get numbers far larger than all the operations in the universe. It all depends on how one chooses to describe it. The ID descriptions are generally chosen to make events such as the origins of life look impossible.
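The enumeration point is easy to make concrete (this one-line check is my addition, not the commenter’s): permutation counts outrun 10^120 for even modest collections.

```python
import math

# A deck of 100 distinct cards has 100! ~= 9.3e157 possible orderings,
# far more than 10^120 - yet any shuffle trivially produces one of them.
print(math.factorial(100) > 10**120)  # True
```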

  75. Mike Elzinga:
    . . .
    Said more directly, the N trials to make the universe already produced the event in question; therefore the number of trials required to produce that particular event has to be less than the number of trials to produce all the events in the universe.

    So here again we see the circularity contained in the assumption that such events do have more such information.
    . . . .

    I agree with you about the deliberate obfuscation, but isn’t the assertion of the intelligent design creationists that such an event was caused by an actor outside the universe? They want to keep multiplying probabilities until they can say “Couldn’t have happened!” then use the fact that it did happen as proof for their gods.

    Or am I missing your point?

  76. I posted a calculation for the CSI of a rock somewhere here on TSZ, but I don’t remember where, and I was having problems with the editor that led to my making some typos. It is also here over on Panda’s Thumb.

    The problem is of course with the probability of the event, as Joe F and Elizabeth have already stated.

    My point about Dembski’s use of Seth Lloyd’s calculation is that Lloyd’s calculation is done properly using physics. It includes those very events – such as the origins of life and evolution – that ID advocates question. Those events contribute to the “information” and the number of logical operations that produced the universe.

    To use that 10^120 as an “upper bound” is to include the number of trials to produce life and evolution. To then turn right around and immediately “calculate” a larger number of trials for that particular event is to imply that the 10^120 is wrong. So why use it?

    It is too easy to “specify” complex information. All one has to do is the same trick I did for the rock. It is the same mistake of declaring that a particular winner of the lottery was too improbable an event for it to have happened. I can choose any particular rock I want as the winner; and a probability calculation can be made to “prove” that it had to have been designed.

    It is the same thing with card games. There are 52!/(47! 5!) = 2,598,960 ways to deal a five-card hand. Agreed-upon convention decides which hands are winners; otherwise all hands are equally probable.
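The card-hand count above checks out; a quick verification (my addition) in Python:

```python
import math

# Number of distinct five-card hands from a 52-card deck, each equally
# probable under a fair deal: 52! / (47! * 5!).
hands = math.comb(52, 5)
print(hands)      # 2598960
print(1 / hands)  # probability of any one named hand
```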

    If all rocks are equally probable (they aren’t), why can’t I simply declare THIS particular rock to be the winner, and therefore its occurrence so improbable that some kind of intelligent cheat was involved in its existence?

    It is the same game with any other events in the universe. We don’t know their probabilities; but we can’t just decide after the fact that some of them – such as the origin of life – are such improbable winners that some kind of intelligent cheat was involved in their occurrence.

    We already know the physics and chemistry; and calculations based on the physics can already set bounds on how many logical operations are required. It’s less than 10^120.

Leave a Reply