Prizegiving!

Phinehas and Kairosfocus share second prize for my CSI challenge: yes, it is indeed “Ash on Ice” – it’s a Google Earth image of Skeiðarárjökull Glacier.

But of course the challenge was not to identify the picture, but to calculate its CSI. Vjtorley deserves, I think, first prize, not for calculating it, but for making so clear why we cannot calculate CSI for a pattern unless we can calculate “p(T|H)” for all possible “chance” (where “chance” means “non-design”) hypotheses.

In other words, unless we know, in advance, how likely we are to observe the candidate pattern under non-design processes, we cannot infer Design using CSI. Which is, by the Rules of Right Reason, the same as saying that in order to infer Design using CSI, we must first calculate how likely our candidate pattern is under all possible non-Design hypotheses.

As Dr Torley rightly says:

Professor Felsenstein is quite correct in claiming that “CSI is not … something you could assess independently of knowing the processes that produced the pattern.”

And also, of course, in observing that Dembski acknowledges this in his paper, Specification: The Pattern That Signifies Intelligence, as many of us have pointed out. Which is why we keep saying that it can’t be calculated – you have to be able to quantify p(T|H), where H is the actual non-design hypothesis, not some random-independent-draw hypothesis.

I’d say (and interestingly Joe G seems to agree) that this makes CSI useless. I don’t think it means that we can’t detect design (I think there are loads of ways of detecting design) but it’s worth considering why vjtorley thinks CSI still has some use. He says:

But how can we rule out all possible “chance hypotheses” for generating a pattern, when we haven’t had time to test them all? The answer is that if some “chance hypotheses” are much more probable than others, so that a few tower above all the rest, and the probabilities of the remaining chance hypotheses tend towards zero, then we may be able to estimate the probability of the entire ensemble of chance processes generating that pattern. And if this probability is so low that we would not expect to see the event realized even once in the entire history of the observable universe, then we could legitimately infer that the pattern was the product of Intelligent Design.

I think that’s entirely legitimate, but I don’t think it merits the description CSI if we take CSI to be the item in Dembski’s formula.  For example, if something isn’t obviously the result of some other iterative process like crystallisation or wave action (which my glacier is), or self-replication, then it might be perfectly reasonable to infer design as at least a possible, even likely, candidate (black monoliths on the moon would come into this category).  And in such circumstances, the next obvious question would be: well, what can we now infer about the designer? In the case of a black monolith, we could probably infer quite a lot, and we could start testing hypotheses about when the designers might have fabricated the object, and what tools they might have used, and what purpose it might have been intended to serve, etcetera.  Perhaps one day the Voyager capsule may be subjected to just those investigations by future alien scientists.

But ID proponents rule out such speculation. The claim Dembski makes is that ID is the study of patterns in nature that are best explained as the products of intelligence, not the nature of the putative intelligent agent itself (which he, refreshingly, happily hands over to theology). And if CSI requires that we know the probability that the pattern can be explained by non-design before we can conclude that it is best explained by intelligence, then the entire question is begged if the pattern in question is something that could be produced by an iterative process, such as a glacier, or a crystal, or a self-replicating molecule. Such things could be designed, of course, and are, and, indeed, design itself is, I would argue, an iterative process. But in order to infer that design is the best explanation using CSI, we’d have to calculate the probability of seeing such a pattern under all possible non-design iterative hypotheses.

And if any ID proponent can suggest how we might make such a calculation, I would like to hear it.  Until then I shall continue to consider that we have no method for calculating CSI 🙂
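Vjtorley’s “ensemble” point is, in effect, the law of total probability over chance hypotheses. A toy Python sketch shows the arithmetic – the hypothesis names, priors and likelihoods here are entirely invented for illustration, which is rather the point:

```python
# Law-of-total-probability sketch of the "ensemble of chance hypotheses"
# argument quoted above.  All numbers are invented for illustration.

priors = {"H1": 0.70, "H2": 0.25, "H3": 0.05}             # p(H_i) over chance hypotheses
likelihoods = {"H1": 1e-160, "H2": 1e-155, "H3": 1e-150}  # p(T | H_i) for pattern T

# p(T) = sum_i p(T | H_i) * p(H_i): if a few hypotheses dominate the prior
# and each assigns the pattern a tiny likelihood, so does the whole ensemble.
p_T = sum(priors[h] * likelihoods[h] for h in priors)

print(p_T < 2 ** -500)  # here, below the 500-bit universal probability bound
```

The arithmetic is trivial; the sticking point is enumerating the hypotheses and assigning their likelihoods in the first place.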

56 thoughts on “Prizegiving!”

  1. And let me say yet again, that anyone at UD who is interested in this problem (and clearly some are!) would be extremely welcome to come over here, or cross post here and at UD, so that we can save on throat pastels 🙂

  2. Don’t I recall numerous people saying, in past years in places like Uncommon Descent, that CSI can be determined to be present without knowing the process that produced the pattern?

  3. Lizzie:
    And let me say yet again, that anyone at UD who is interested in this problem (and clearly some are!) would be extremely welcome to come over here, or cross post here and at UD, so that we can save on throat pastels

    “Pastilles”, I think – unless you’re in the habit of painting your throat. 😛

    One can only speculate why the UDers don’t come here more often, especially since most of us are banned there. Discussions at TSZ are conducted civilly, and by thoughtful and informed people. Levels of snark are lower than they would be in face-to-face conversation, and there are very few if any direct insults. WJM seems to enjoy it here, at least: and I suspect that others from UD would, but they are unwilling to endure the jeers, polemic and wrath that would be visited on them by the Baboon Tendency at UD. Just look at the stick they get for doubting the orthodoxy.

    And I would REALLY like to see one of ’em mounting a defence here of their views on the size and accessibility of “protein space”

  4. damitall2: “Pastilles”, I think – unless you’re in the habit of painting your throat. :P

    Heh. I’m usually a good speller*, but there are some things I just figured out wrong. I thought that pastel colours were called pastels because they were the colour of pastilles (or possibly the other way round). I also thought that leasure was spelled leasure as in pleasure, which made sense to me. I wrote a whole essay about “leasure” in my first year at university, and was terribly embarrassed to discover I’d been spelling it wrong all my young life! I shall spell “pastille” correctly from henceforth, even though I’ve been spelling it wrong for longer than I have a chance to spell it right, and I shall leave my post uncorrected as a monument to this moment.

    BTW I also as a child figured that “caution” meant “luggage” or maybe “cargo”, because in the bus, on the back of the seats, there was a sign saying “caution racks over head”. Also, signs on the road saying “caution lorries”. It was a while before someone asked what on earth I was talking about when I said that I needed to pack my caution.

    ETA* but a homophonic typist, for some reason. But that wasn’t my typing fingers, it was a top-down error 🙂

  5. Lizzie,

    😀 😀 😀

    I’m sorry about that – these errors just glare at me. I hope your life is not too much changed. A one-time boss of mine, a man of grandiose ideas, was wont to spell and pronounce the word “grandoise” (why ever he even needed the word in a clinical microbiology lab was and is beyond me). A callow youth then, I corrected him. He never used the word again, and I was made to spend a couple of weeks doing the dirtiest and most boring tasks available.
    He also occasionally called people a “sofanobitch”. I kept schtumm.

    But, come to think of it, it’s a good idea to take a modicum of caution with you wherever you go

  6. Lizzie, you always talk about “self-replicative molecules”. Do you have any example other than RNA?

  7. Blas:
    Lizzie, you always talk about “self-replicative molecules”. Do you have any example other than RNA?

    Peptides are a possibility. Actually many molecules self-replicate – any compound that catalyses its own synthesis is a form of molecular self-replicator. Rust, for instance.

    The neat thing about self-replicating polymers is that a polymer is a class of molecule made of linked monomers that can be linked in any sequence, so each individual exemplar of that polymer might have a different sequence, and that sequence will be found in the daughter molecule as well as the parent.
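The “compound that catalyses its own synthesis” idea can be illustrated with a toy rate model – a sketch only, with invented constants, not real chemistry: a compound whose formation rate is boosted by the amount already present grows explosively compared with one formed at a constant background rate.

```python
# Toy rate model (invented constants, not real chemistry): a compound
# formed at a constant background rate, versus one whose formation is
# also catalysed by the amount already present (autocatalysis).

def simulate(steps, dt, k_background, k_auto):
    amount = 0.0
    for _ in range(steps):
        rate = k_background + k_auto * amount  # product accelerates its own formation
        amount += rate * dt                    # simple Euler step
    return amount

plain = simulate(1000, 0.01, k_background=1.0, k_auto=0.0)
auto = simulate(1000, 0.01, k_background=1.0, k_auto=1.0)

# The autocatalysed amount grows exponentially -- the "rust spreads once
# it starts" behaviour: the product gives rise to more molecules like itself.
print(plain, auto)
```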

  8. Lizzie,

    Could you point me to a paper on peptides replicating themselves?
    When you mention “compounds that catalyse their own synthesis”, what are you thinking of apart from peptides or RNA?
    Thanks

  9. KF writes:

    F/N: I see that despite explicit use of the explanatory filter in inferring not-designed, some over at TSZ — RTH, this means you in particular — are unable to recognise it in action. Sadly but unsurprisingly revealing. KF

    Well, no. What you did, KF, was to find out what it was, see that it was a photograph of something that was not designed, and then conclude that it was not designed!

    What if it had turned out to be a butterfly wing, as some suggested? Would you also have concluded that it wasn’t designed?

    The EF is as useless as CSI. Both require that you figure out, in advance, how likely it is that a thing had natural causes, or chance causes, and then reject it if that probability is low. CSI wraps it into p(T|H).

    But p(T|H) is the very thing nobody (including you) tells us how to calculate for something of unknown origin.

    Anyone can tell that something they know wasn’t designed, wasn’t designed! What we want to know is how you tell that something wasn’t designed if you don’t know, a priori, that it wasn’t.

    Looking it up on Google first isn’t using either the EF or CSI!

  10. Moreover…

    That glacier produced that pattern. That pattern is pretty cool, and complex, and specified (the glacier even more so than the photo). So that glacier “found” that pattern.

    But according to the Law of Conservation of Information, that search itself had to be “found”. How did that Glacier, with its pattern-finding properties, come to be? Must have had a Designer!

    Yes! Without a Designer there would be no glacier to find that pattern! There would be no earth to produce that volcano, no water to form ice! Nature itself has CSI!

    Therefore Design!

    Looks to me as though ID boils down to Anselm’s ontological proof.

    So can we give Darwin a break now?

  11. Blas:
    Lizzie,

    Could you point me to a paper on peptides replicating themselves?

    A self-replicating peptide

    When you mention “compounds that catalyse their own synthesis”, what are you thinking of apart from peptides or RNA?

    Iron oxide, for instance. It catalyses its own formation in the presence of air and water. Which is why, when you see a rust spot on your car, you need to fix it quickly: once you have a little rust, it will spread rapidly.

    Thanks

    You’re welcome 🙂

  12. A nearby university warns of speed bumps in the roads by painting on the roadway in ten foot high letters, “HUMP SLOW.”

  13. But ID proponents rule out such speculation.

    Surely you realize that investigations into the nature of the designer, tools, manufacturing process, design goals, etc. have nothing to do with actually detecting the design in the first place. Those are different kinds of investigations that can only be pursued after the design has been detected.

    We do not “rule out” such “speculations”, but rather rightly do not place the cart before the horse.

  14. Joe Felsenstein:
    Don’t I recall numerous people saying, in past years in places like Uncommon Descent, that CSI can be determined to be present without knowing the process that produced the pattern?

    Yes, based on Dembski’s own words:

    By contrast, to employ specified complexity to infer design is to take the view that objects, even if nothing is known about how they arose, can exhibit features that reliably signal the action of an intelligent cause.

    If knowledge of the artifact’s history is used, it’s not a calculation of CSI.

  15. I’d say ‘self-replicating’ is a bit of a stretch! Self-ligating, perhaps, joining 15 and 17 acid subunits into the 32-acid structure that catalyses the reaction. The general trouble with peptides as sequential reading frames is that they are so damned lumpy and bent! There’s a similar ribozyme-catalysed joining of elements to make a whole that catalyses the joining of the elements … the problem is in generating the elements, of course.

  16. William J. Murray: Surely you realize that investigations into the nature of the designer, tools, manufacturing process, design goals, etc. have nothing to do with actually detecting the design in the first place. Those are different kinds of investigations that can only be pursued after the design has been detected.

    We do not “rule out” such “speculations”, but rather rightly do not place the cart before the horse.

    Well, I’d say that in practice, design detection involves iterating between cart and horse, over and over.

    And certainly if you find a designer, then you have an important piece of evidence for deciding if something was designed!

    But the point here is that the horse-first method doesn’t seem to actually work – or at least, CSI doesn’t.

    Because, as we were discussing on the other thread, William, we don’t have a method for computing p(T|H)!

    KF only managed it for my picture by googling it first! And even then, he hasn’t shown me that it wasn’t designed, only that its proximal cause was non-design.

    And we know that the proximal cause of a living cell is non-design. It’s the distal cause that is at issue.

  17. Patrick: Yes, based on Dembski’s own words:

    If knowledge of the artifact’s history is used, it’s not a calculation of CSI.

    Yes. But in the same paragraph he requires that we compute the probability of the Target under all non-design hypotheses, for which we must, if it is a living thing, include Darwinian processes.

    If Dembski isn’t taken seriously by what he likes to call “the Academy” he should consider whether it’s because he doesn’t actually make sense before concluding that it’s all a materialist stitch-up.

  18. Lizzie,

    Apart from Miller’s observation, one of the peptides was preactivated as a thiobenzyl ester, and given the initial concentrations of the purified reactants, the “self” of self-ligating is also a bit of a stretch.
    Also rust: it is not “self”-replicating. Iron becomes rust because that is the lower energy state given the conditions. That is a chemical reaction, not “self-replication”.
    Are there any other examples?

  19. Blas:
    Lizzie,

    Apart from Miller’s observation, one of the peptides was preactivated as a thiobenzyl ester, and given the initial concentrations of the purified reactants, the “self” of self-ligating is also a bit of a stretch.

    You asked me for an example other than RNA. Peptides are a possibility. Then you asked for a paper, so I gave you one link to a paper that explored that possibility. I wasn’t making any great claims for a peptide-first OOL theory.

    Also rust: it is not “self”-replicating. Iron becomes rust because that is the lower energy state given the conditions. That is a chemical reaction, not “self-replication”.
    Are there any other examples?

    In what sense is “self-replication” not a chemical reaction? And if a molecule catalyses a reaction that produces molecules like itself, in what sense is that not “self-replication”?

    It’s not how we usually think about self-replication, but autocatalysis is a form of self-replication. If not, why not?

  20. D’oh! The Canadian band was actually Cowboy Junkies – The Caution Horses was one of their albums. Very, very quiet. But there is also a Kent band of that name. A much noisier proposition.

  21. It’s not how we usually think about self-replication, but autocatalysis is a form of self-replication. If not, why not?

    Yes, it’s an interesting semantic distinction. A macromolecule that serially condenses monomers to make a copy of itself, and one that joins two ‘pre-packed’ collections of monomers to make a version of itself, are not all that dissimilar in what they are doing. The population of molecule A increases in the world, due to the action of molecule A upon substrates. So maybe I was wrong … too hung up on ‘copying’.

  22. KF says (do come over, KF, we’re not that bad :))

    EL: Indeed, the glacier — or the tree growing and then being cut, resulted in a phenomenon exhibiting complexity, coming from a wide space of possibilities, W. However, in neither case, is there any constraint that locks the outcomes to a simply separately/independently describable narrow zone T. You can see that by examining a stack of plywood sheets at your local shop, or planks: the patterns vary all over the place and that makes but little difference. That would be sharply different from a cluster of evident sculptural portraits at a certain mountain in the US. And in the case of parts that have to fit and work together to achieve a function, such is even more evident. KF

    Yes, there are constraints, and I calculated one such constraint. Dembski says that if you can describe a subset of patterns simply, then that is a specification. My pattern is one of a tiny proportion of patterns, with pixel values drawn from the same distribution, that have a mean autocorrelation as great as or greater than .89. And I calculated that proportion, and this pattern had a z score of several thousand – so large that I couldn’t calculate it in bits, as I had precision overflow. But as 500 bits comes out to a z score of 26, clearly my pattern way exceeds it, under the null of random independent draws of each pixel. In other words, random independent draws for each pixel (or each molecule in that glacier if you want) could not have produced that pattern in the lifetime of the universe.

    So the question is: how do I calculate that p(T|H) where my null H takes into account all non-design hypotheses, not just random independent draws?

    This is what Dembski requires us to do. How?
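For concreteness, here is a sketch of the kind of calculation described above, using a synthetic stand-in pattern (the original image and exact procedure aren’t reproduced here; the statistic is a simplified row-wise mean lag-1 autocorrelation). The null is the same pixel values placed independently at random:

```python
import math
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / math.sqrt(vx * vy)

def mean_autocorr(img):
    """Mean horizontal lag-1 autocorrelation over the rows of a 2-D list."""
    return sum(pearson(row[:-1], row[1:]) for row in img) / len(img)

# Synthetic smooth "pattern" standing in for the glacier image.
n = 60
img = [[math.sin(0.1 * i) + math.cos(0.1 * j) for j in range(n)] for i in range(n)]
observed = mean_autocorr(img)

# Null: the SAME pixel-value distribution, pixels placed independently at
# random -- shuffling destroys spatial structure, keeps the distribution.
flat = [v for row in img for v in row]
null_stats = []
for _ in range(200):
    random.shuffle(flat)
    null_stats.append(mean_autocorr([flat[i * n:(i + 1) * n] for i in range(n)]))

mean_null = sum(null_stats) / len(null_stats)
sd_null = math.sqrt(sum((s - mean_null) ** 2 for s in null_stats) / len(null_stats))
z = (observed - mean_null) / sd_null  # enormous for any spatially smooth pattern
```

A huge z here rejects only the independent-draws null; it says nothing about nulls like “glaciation”, which is exactly the missing piece of p(T|H).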

  23. Allan Miller:
    It’s not how we usually think about self-replication, but autocatalysis is a form of self-replication. If not, why not?

    Yes, it’s an interesting semantic distinction. A macromolecule that serially condenses monomers to make a copy of itself, and one that joins two ‘pre-packed’ collections of monomers to make a version of itself, are not all that dissimilar in what they are doing. The population of molecule A increases in the world, due to the action of molecule A upon substrates. So maybe I was wrong … too hung up on ‘copying’.

    Well, I think that “copying” only really becomes meaningful if we have a population of molecules all self-replicating but all slightly different, but consistently producing daughter molecules that resemble their parent more than would be expected under the null of independence.

    And more interesting, from a Darwinian perspective, if the daughters are not perfect copies!

    Lizzie: You asked me for an example other than RNA. Peptides are a possibility. Then you asked for a paper, so I gave you one link to a paper that explored that possibility. I wasn’t making any great claims for a peptide-first OOL theory.

    In what sense is “self-replication” not a chemical reaction? And if a molecule catalyses a reaction that produces molecules like itself, in what sense is that not “self-replication”?

    It’s not how we usually think about self-replication, but autocatalysis is a form of self-replication. If not, why not?

    The self-replicators that materialists need in order to make life a natural process (self-replicator – FLCA-UCLA) are something that has a low probability of appearing, but whose presence makes the probability of its appearance much higher. Example: when you drop a box of matches, very few will form a square; if adding a square of matches to a new box increases the number of squares when you drop the box, then you have a replicator. You can imagine a flow of matches falling, and squares of matches replicating squares of matches.
    Ferric oxide does not increase the probability of appearance of more ferric oxide; it only makes it happen faster.

  25. Blas:
    Ferric oxide does not increase the probability of appearance of more ferric oxide; it only makes it happen faster.

    If it makes it happen faster, it increases the probability of it happening!

    That’s why probability is normalised frequency!

  26. Lizzie: If it makes it happen faster, it increases the probability of it happening!

    That’s why probability is normalised frequency!

    No. The probability of it happening is the same under the same conditions. The catalyst does not change the equilibrium concentrations; it only makes the equilibrium be reached faster. That is basic chemistry.
    Regardless, the examples of self-replicators are then RNA, peptides and rust? Nothing else?

  27. Oh, for goodness sake!

    Eric Anderson says:

    If Lizzie thinks a glacier pattern or some similar natural phenomenon is “specified,” then this means she simply doesn’t understand what is meant by a specification.

    I’m using Dembski’s definition, which is a member of a subset that can be described (by “semiotic agent S” – thassme!) as simply as or more simply than the Target. If you don’t like Dembski’s definition, Eric, then that’s fine. But as we are using Dembski’s definition of CSI in this exercise, I will stick with it. I can describe this pattern as one of the subset of patterns that have as high, or higher, a mean autocorrelation than this pattern. That is a very tiny proportion of the total number of patterns – so tiny that its bits are off the scale of my computer.

    The fact that it is a pixelated photograph is neither here nor there – the pattern occurred in nature, and the photograph is merely a (simplified) record of it, just as a string of amino acid symbols is a record of the protein. Both can be analysed as patterns found in nature. If CSI is any use, then it ought to be possible to compute it for my pattern.

    Further, the photo was put forward, it was analyzed, the design filter worked. But now there seems to be a lot of backpedaling. Why can’t they say, “Well done. Looks like the filter worked in this case.”

    The filter worked? Google worked!

    The only filter that was applied was this:

      1. Find out what it is.
      2. Ask: is that thing designed? If no – answer: it isn’t designed. If yes: it is designed.

    What if it had been a butterfly wing, or a piece of tree bark? What if it were some pattern on Mars, and you didn’t know what it was made of?

    How would your filter have worked then?

    And: how do you compute its CSI?

    But now there seems to be a lot of backpedaling.

    heh. I’ll say.

    Come on, Eric, get yourself over here and sort this out 🙂

  28. Thanks, that is very helpful. Winston Ewert and VJTorley (and I assume Dembski himself as well) seem to be saying otherwise, saying that Dembski always meant to include the P(T|H) term in the definition of CSI. That all of us who interpreted it otherwise were wrong.

    I had forgotten that quote, so thanks.

    Blas: No. The probability of it happening is the same under the same conditions. The catalyst does not change the equilibrium concentrations; it only makes the equilibrium be reached faster. That is basic chemistry.

    It may be basic chemistry but it’s not probability.

    Try binning your events into a time series. If your events (iron oxide molecule formation) happen faster, then in any given time sample the probability of seeing a molecule form increases.

    Therefore the iron oxide molecule, by catalysing the reaction that makes more iron oxide molecules, is catalysing the production of copies of itself (i.e. giving rise to molecules like itself). Sure, those molecules can also arise ab initio, without catalysis. But the presence of one molecule makes the next molecule’s appearance, in the next time bin, more probable. That’s why we derive probabilities from frequencies.
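The time-binning argument can be checked with a quick simulation (rates and bin width invented for illustration): the catalysed, faster process gives a higher probability of at least one formation event in any given bin, even though the eventual equilibrium is unchanged.

```python
import random

random.seed(1)

def prob_event_in_bin(rate, bin_width, trials=100_000):
    """Estimate P(at least one event in a time bin) when waiting times
    between formation events are exponential with the given rate."""
    hits = sum(1 for _ in range(trials) if random.expovariate(rate) < bin_width)
    return hits / trials

slow = prob_event_in_bin(rate=1.0, bin_width=0.1)   # uncatalysed formation
fast = prob_event_in_bin(rate=10.0, bin_width=0.1)  # catalysed: 10x the rate

# Same end state either way, but the per-bin probability differs:
# theory gives 1 - exp(-rate * bin_width), i.e. about 0.095 vs 0.632.
print(slow, fast)
```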

    Regardless, the examples of self-replicators are then RNA, peptides and rust? Nothing else?

    There are other inorganic examples of autocatalysis. And obviously lots of examples in biology, but I assume you mean prebiotic examples.

  30. Eric again (do come over, Eric! We don’t have cooties! Well, not many!)

    The glacier “found” that pattern just like a fair dealer “finds” the improbable pattern of cards in each and every one of your hands. Without an independent specification, no set of five cards is more or less improbable than any other.

    I made an independent specification.

    Nor do you typically make an inference that the dealer is somehow designing exactly what cards are in your hand. You trust that random processes are at work as advertised. However, if an opponent suddenly starts getting Royal Flush after Royal Flush each and every hand, you WILL make a design inference. I guarantee it. You WILL suspect that the advertised random processes are no longer in effect and that the dealer is somehow designing the outcome.

    Exactly. So if I kept on getting patterns with autocorrelations of over .8, when the vast majority of patterns drawn from that distribution had autocorrelations near zero, I’d suspect something was going on! The question is, what? Clearly that pattern was not generated by independent random draws for each pixel from the distribution of pixel values. So what was generating the pattern? Design or natural causes?

    It’s exactly the same as if you repeatedly got hands in all the same suit, or hands of aces etc. In fact, it’s exactly like Dembski’s example of getting all heads, or all Republicans – the fact that it’s one of a small number of simply describable patterns is what makes it fishy, even though all patterns are equiprobable.

    HOW can you do this, given that each hand is just as improbable as any other? If you are honest and open-minded, you will conclude that your inference arises from your recognition that these particular cards are consistently lining up with an independent specification. (If you’ve got a better explanation as to how you’d infer design, I’d love to hear it. My point is that we both know that you WOULD infer design, and we both know that it WOULD be a valid inference.)

    In other words, to compute p(T|H) you need to know the probability of the target not just under the null of independent random draws, but under a null comprising the whole set of relevant chance (i.e. non-design) hypotheses – which include stuff like glaciation and volcanoes, and Darwinian processes, and crystallisation, and things you haven’t even thought of but that might have produced such a pattern!

    That’s why CSI (and the EF) don’t work. Not because it’s invalid to spot a fishy dealer in a casino, but because it only works if you know the probability of your Target under the relevant null. If you don’t, and you just guess, then you don’t end up with any more information (hah!) than you started with. Once you’d googled my image, you knew the answer. The EF didn’t tell you anything more.

    Bringing this back around to pictures, if what is advertised as the random accumulation of volcanic ash on ice starts to resemble Ben Franklin (an independent specification) with more and more fidelity, there WILL once again be a point where you infer design. You KNOW that this sort of inference is valid, whether you resort to a formal calculation of CSI or not. Why keep acting like you don’t understand? It makes no sense and only serves to call your own faculties and credibility into question.

    Actually, Eric, it doesn’t. Please consider the alternative possibility that your own understanding of my position (and even of the problem) may be at fault.

    Firstly: my pattern has an independent specification (given above). Others are also possible, but I used a fairly simple one. There are probably others that would give an even smaller subset, but this is way small enough.

    Secondly, I have (again, see above) no problem with using CSI to infer Design where the null is “independent random draws”. I understand the math – I can do the math. But the “relevant chance hypothesis” in this case, as Dembski himself points out, needs to take into account “material mechanisms”, in addition to independent random draws. That’s why we don’t have to conclude design for my picture, even though its CSI is astronomically more than 500 bits if we merely assume (as we can with the card dealer) that the null is “independent random draws”.

    That’s why I set this exercise: to try to elicit from ID proponents how they compute p(T|H) when H, the “relevant chance hypothesis”, is not merely “independent random draws”.

    And the answer, I suggest is: you can’t.

  31. Lizzie: It may be basic chemistry but it’s not probability.

    There are other inorganic examples of autocatalysis. And obviously lots of examples in biology, but I assume you mean prebiotic examples.

    All the examples that you call self-replicating have no chance of starting “Darwinian evolution”, as none of them makes a self-replicator that replicates itself better than the original. So when you talk about “self-replicators” you are referring to RNA or peptides.

  32. KF:

    F/N: On doing a CSI calc on the case. We had an image of some 500 kBits. There was no evidence of specificity:

    Chi_500 = 500kbits * 0 – 500 = – 500 bits

    That is on absence of evident specificity — as has been pointed out long since, and as has been explained long since, we are at the baseline, 500 bits short of the threshold.

    Sorry, that one won’t wash either. KF

    No evidence of specificity? You think that image is typical of images generated by random draws from the pixel-value distribution for each pixel?

    In any case, I’ve given a perfectly simple specification: mean autocorrelation equal to or greater than .89, which is the mean autocorrelation in the image. Such a high autocorrelation is extremely improbable under the null of independent pixels, and indeed I calculated just how improbable it is, and the z score came to more than 4000 (I can’t remember the exact number). As the z score for 500 bits is 26 standard deviations, patterns of this class are extremely rare.

    So yes, not only is it specified, as it would be if it were the face of Benjamin Franklin, or my cat, or whatever, it is one of a very tiny subset of similarly specified patterns.

    And if we plugged that p(T|H) into the CSI formula (or the CSI-lite formula) you’d have to infer design. However, that isn’t the right value, because of course the “relevant chance hypothesis” isn’t “independent random pixel values”.

    So what is it?

    To put it another way: the reason you conclude that my pattern is not designed isn’t that it isn’t specified (it is) nor that the specification isn’t a small enough subset under the null of independent draws (it is) but because it’s the wrong null (the null should take into account volcanoes and glaciers). So how do you compute that null? Especially if you don’t know that it’s got anything to do with volcanoes or glaciers?
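The “500 bits comes out to a z score of 26” conversion used in this thread can be checked directly: find the z whose one-tailed Gaussian tail probability is 2^-500. A simple bisection sketch (no claim that this is how the original figure was computed):

```python
import math

p = 2.0 ** -500  # ~3.05e-151: tail probability corresponding to 500 bits

def upper_tail(z):
    # P(Z > z) for a standard normal, via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# upper_tail() is strictly decreasing, so bisect for upper_tail(z) == p.
lo, hi = 0.0, 40.0
for _ in range(200):
    mid = (lo + hi) / 2
    if upper_tail(mid) > p:
        lo = mid
    else:
        hi = mid

z = (lo + hi) / 2
print(round(z, 1))  # about 26, as stated above
```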

  33. Blas:
    Lizzie: It may be basic chemistry but it’s not probability.

    There are other inorganic examples of autocatalysis. And obviously lots of examples in biology, but I assume you mean prebiotic examples.

    All the examples that you call self-replicating have no chance of starting “Darwinian evolution”, as none of them makes a self-replicator that replicates itself better than the original. So when you talk about “self-replicators” you are referring to RNA or peptides.

    Yes, I said that. I agree. I was just answering your question.

    I agree that we need some kind of autocatalysing molecule that has some kind of sequence that can vary between molecules, and which affects its ability to autocatalyse. RNA might be the answer. But as Szostak says, we also need some kind of mechanism for keeping the molecules together – maybe a clay substrate, maybe lipid vesicles, maybe something else. There are lots of ideas around, but none that seem completely persuasive yet.

  34. KF:

    F:

    You are simply wrong; go up to 2 above, BEFORE I knew the image was a snow pattern. (I saw Phinehas’s post AFTER I posted.)

    Notice, how I compared the case to wood grain, and pointed out how complexity and specificity were not apparently coupled? Notice, how I drew the inference that unless and until there was evidence of such a coupling of complexity and specificity, there would be a default to chance and necessity?

    But there IS a coupling of complexity and specificity. As I have pointed out.

    Notice, how I accepted that the design inference process is quite willing to misdiagnose actual cases of design that do not pass the criterion of specificity?

    WHY ARE YOU TRYING TO REVISE THE FACTS AFTER THE FACT?

    You will notice that I then saw Phinehas’s comment, and remarked on that, highlighting WHY such a case would not couple specificity to complexity?

    Thereafter, I did a Google search, which is a TARGETED search, and from that identified the credible source. I was then able to fit the clip from TSZ into the image more or less; I think there is a bit of distortion there.

    This confirmed the assessment.

    What your search confirmed was what I said: that the pattern was a photograph.

    So, the truth is that the EF did work, and did what it was supposed to do. It identified that complexity without specificity to a narrow zone T,

    It totally failed to do this, because my pattern can indeed be specified to a narrow zone T. You just assumed it couldn’t be, without really thinking. If you’d read my post properly, and done the math, you’d have realised that the chances of such a high-contrast pattern being generated by a randomly generated pattern of pixels drawn from the same distribution as my image were vanishingly small. Its specification is staring you in the face as clearly as a run of several million heads would stare at you.

    will not be enough. It was clear that this could be a case where actual design is such that it cannot be detected — recall my remarks on not being a general decrypting algorithm? — and then we were able to confirm the evident absence of such a match. Unless there is some steganography hiding in the code that I do not have time or inclination to try to hunt down.

    What else is clear, is that the test is a strawman.

    It is not a straw man. You were faced with a pattern that clearly could not have been generated by a process of independent draws – that was clearly the product of some non-random process. But it was also likely to be a non-design process. However, rather than actually attempt to compute the probability of such a pattern by non-design mechanisms, you guessed – a safe guess, because ID only claims false negatives, not false positives.

    That pattern has a very tight specification. It was also generated by a process (not a random independent process) that would not normally be considered the work of a Designer (although of course presumably you regard the entire universe, glaciers and all, as Designed). So it should not have CSI. And I agree, it does not have CSI. But the reason it doesn’t have CSI isn’t that it isn’t complex or specified (it is both) but because p(T|H) is undoubtedly high.

    We agree that it is high – because we both know that it was made by volcanic ash falling on a glacier!

    In other words all CSI, or the EF does is tell you the opinion you had in the first place!

    What is needed to really test the design inference is a case where design will be identified and it is not present.

    But that, I am afraid — as the random document generation tests show — will be very hard to do.

    Of course. Because you have no way of determining whether design is not present – whether your original determination was false.

    What you have succeeded in doing, is to show us that we are not dealing with a reasonable minded, fair process or people.

    Which, unfortunately, on long experience, we have come to expect by now.

    I think you have some self examination to do, sir.

    Kairofocus, with respect, I think you need to do the same.

    Let me restate: what proportion of patterns generated by random independent pixel selection will produce an image with anything like the degree of contrast in my image? And in what possible sense is that degree of contrast NOT a specification?

  35. Eric:

    What does that mean, Lizzie. Are you asking for a calculation of specificity? If so, then you don’t know what you are talking about and demonstrate that you still don’t understand the design inference.

    No, I’m not asking for that, Eric. I can do that, as you will see if you read my posts here. But I’ll rephrase, as clearly as I can:

    My pattern consists of a large number of pixels. They represent dark and light spots on a glacier, just as amino-acid symbols represent amino acids in a protein.

    Both are representations of real patterns found in nature.

    However, just as with the amino acids in a functional protein, the pixels in my image form a rather special pattern. And, as Dembski says, if I can describe my pattern as one of a small subset of patterns that can be described very simply, then I have specified that subset. That image is a very high contrast image – it has strong bands of dark and light. We could also say it has a low dominant spatial frequency. Such a pattern is very unlikely to be generated by random independent draws for each pixel – clearly some process has ensured that adjoining pixels have a high probability of having similar values.

    So I made a simple specification (it could have been far tighter) for my image, which was to describe it as one of a subset of images in which the correlation between the value of a pixel and its neighbour is very high. In my image, that correlation is .89. Clearly, the chances of getting such a high autocorrelation by chance draws of pixel values are extremely low – like getting a hand consisting of all the same suit, time after time, in a series of card deals.
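
    To put a number on the card analogy (a side calculation, just to illustrate the scale involved):

    ```python
    from math import comb

    # P(a random 5-card hand is all one suit), counting all four suits
    p_flush = 4 * comb(13, 5) / comb(52, 5)   # roughly 0.002

    def p_repeated(n_deals):
        """Probability of an all-one-suit hand on every one of n independent deals."""
        return p_flush ** n_deals
    ```

    Ten such hands in a row already has probability on the order of 10^-27 – that is the sense in which a .89 autocorrelation is staring you in the face.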

    So I have a simple specification for my image.

    Here is Dembski:

    S [the “semiotic agent”], to identify a pattern T exhibited by an event E, formulates a description of that pattern. To formulate such a description, S employs a communication system, that is, a system of signs. S is therefore not merely an agent but a semiotic agent. Accordingly, φ′S(T) denotes the complexity/semiotic cost that S must overcome in formulating a description of the pattern T. This complexity measure is well-defined and is not rendered multivalent on account of any semiotic subsystems that S may employ. We may imagine, for instance, a semiotic agent S who has facility with various languages (let us say both natural and artificial [e.g., computational programming] languages) and is able to describe the pattern T more simply in one such language than another. In that case, the complexity φ′S(T) will reflect the simplest of S’s semiotic options for describing T.

    Well, my description is: patterns with autocorrelations >= .89. Clearly, the higher the autocorrelation, the more easily compressed the pattern is, because as each neighbouring pixel becomes more highly dependent on its neighbour, the residuals become steadily smaller. A perfectly autocorrelated pattern will be extremely compressible.
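
    The compressibility point can be checked directly. A toy sketch (numpy and zlib assumed; the run-length sequence below is just a stand-in for a highly autocorrelated image row):

    ```python
    import zlib
    import numpy as np

    def compressed_ratio(byte_values):
        """zlib-compressed size divided by raw size for an 8-bit sequence."""
        raw = np.asarray(byte_values, dtype=np.uint8).tobytes()
        return len(zlib.compress(raw, level=9)) / len(raw)

    rng = np.random.default_rng(0)
    n = 50_000
    iid = rng.integers(0, 256, n)                           # independent "pixels"
    runs = np.repeat(rng.integers(0, 256, n // 100), 100)   # each value repeated: neighbours nearly always match
    ```

    The independent-draws sequence is essentially incompressible (ratio near 1), while the highly autocorrelated one compresses dramatically.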

    So no, I’m not asking for a calculation of specificity. I’ve done that, and I’ve done it correctly, according to Dembski’s definition.

    What I am asking for is a calculation of p(T|H). Because if H is simply random independent draws for each pixel from the same distribution as my image, my image has massive CSI, and we must conclude design. However, clearly that glacier is not designed (or at least, can be described in terms of known behaviour of physical objects). So how do we get the CSI equation to give us the right answer? Well, we need to accommodate in our p(T|H) any material mechanisms that could have produced the pattern (i.e. not just independent random draws).

    In this case, we can take an intuitive stab at it and say that p(T|H) is sufficiently high that the CSI number drops well below 500 bits.
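
    To make the arithmetic explicit, here is the χ formula from Dembski’s Specification paper in a couple of lines of Python (the φ_S(T) and p(T|H) values in the checks are toy numbers for illustration, not actual estimates for the image):

    ```python
    from math import log2

    def chi(phi_S_T, p_T_given_H, replicational_resources=10**120):
        """Dembski (2005): chi = -log2(10^120 * phi_S(T) * p(T|H)).
        chi > 1 is his criterion for inferring design."""
        return -(log2(replicational_resources) + log2(phi_S_T) + log2(p_T_given_H))
    ```

    With the (wrong) independent-pixels null, p(T|H) is astronomically small and χ comes out hugely positive; with any null that includes ash-on-ice physics, p(T|H) is not small and χ goes negative. The inference flips entirely depending on which H you feed in.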

    But that’s because we know what the thing is, and what caused it. In other words CSI tells us nothing we didn’t know in the first place. Our estimate of p(T|H) is simply an estimate of what we think we know already.

    It doesn’t tell us whether the thing is designed. It merely tells us whether we think it was designed.

    Unless you can provide an objective way of calculating p(T|H).

  36. Perhaps reversing it might work.

    IDers, what could be provided to you such that you would then be able to perform “the design inference”? Any examples from the ID camp you are happy to defend and go into detail for along the lines of the one given here?

    It was clear that this could be a case where actual design is such that it cannot be detected — recall my remarks on not being a general decrypting algorithm?

    Fine. Can you KF provide a case study then where:

    A) It’s not clear if X is designed
    B) ID indicates X is designed.

    Lizzie. Are you asking for a calculation of specificity? If so, then you don’t know what you are talking about and demonstrate that you still don’t understand the design inference.

    Perhaps an example, similar to the attempt in the OP here, would help illustrate matters then?

    Explain it in terms even I can understand!

  37. Lizzie: Yes, I said that. I agree. I was just answering your question.

    I agree that we need some kind of autocatalysing molecule that has some kind of sequence that can vary between molecules, and which affects its ability to autocatalyse. RNA might be the answer. But as Szostak says, we also need some kind of mechanism for keeping the molecules together – maybe a clay substrate, maybe lipid vesicles, maybe something else. There are lots of ideas around, but none that seem completely persuasive yet.

    I asked because usually materialists talk about “self replicators” that we can find around the next corner; I wanted to check what really exists. I see that apart from RNA it is all about voluntaristic imagination.

  38. Blas,

    It is the Designer who is an imaginary construct. We actually have evidence that chemistry exists.

  39. Blas: I asked because usually materialists talk about “self replicators” that we can find around the next corner; I wanted to check what really exists. I see that apart from RNA it is all about voluntaristic imagination.

    Yes of course. All science starts with trying to come up with an explanation for something, which is a highly creative process. The next part is deriving predictive hypotheses from your imaginative explanation and testing them against actual data.

    OOL research is in the early stages. But there are some promising results from hypothesis testing experiments so far.

  40. The postulated Designer is also an imaginative construct. That’s fine – but in order to move it out of the realms of imagination only, you would normally derive testable hypotheses from your explanation.

    That’s what doesn’t happen with ID.

  41. Lizzie:
    OOL research is in the early stages.

    More than fifty years from the Miller experiment, eighty? from the Oparin vesicles.
    If you want to call it “early stages”.

    Lizzie:
    But there are some promising results from hypothesis testing experiments so far.

    Sure, we are solving the chirality problem soon.

  42. Lizzie:
    The postulated Designer is also an imaginative construct. That’s fine – but in order to move it out of the realms of imagination only, you would normally derive testable hypotheses from your explanation.

    Could you explain what you mean by “testable”?

    Lizzie:
    That’s what doesn’t happen with ID.

    Sorry, I do not think ID is science more than ToE.
