Dr Nim

It has struck me more than once that a lot of the confusion that accompanies discussions about things like consciousness, free will, intelligent design, teleology, even the explanatory filter, and the words “chance” and “random”, arises from a lack of clarity over the difference between decision-making and intention.  I think it’s useful to separate the two, especially given the tendency for people – above all those making pro-ID arguments, but also those making ghost-in-the-machine consciousness or free-will arguments – to regard “random” as meaning “unintentional”.  Informed decisions are not random.  But not all informed decisions involve intention.

This was my first computer:

It was called Dr Nim.  It was a computer game, but a completely mechanical one – no batteries required.  You took turns with Dr Nim (the plastic board itself), and you won by not being the one left with the last marble.  It was possible to beat Dr Nim, but usually Dr Nim won.

Dr Nim was a decision-making machine.  It would decide how many marbles to release depending on how many you had released.  Frustratingly, no matter how clever you got, Dr Nim nearly always left you with the last marble. Here is a YouTube demonstration:

Clearly, Dr Nim is not acting “randomly”.  It wins far more often than would a random system that selected 1, 2, or 3 marbles regardless of how many were left.  In fact, it seems to “know” how many marbles are left, and chooses the best number to drop accordingly. In other words, Dr Nim makes informed decisions. But clearly Dr Nim is not an “intentional agent”.  It’s just a few plastic gates mounted on a channeled board.

And yet Dr Nim behaves like an intelligent agent.  It was certainly smarter than me at playing Nim!
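
For the curious: the whole of Dr Nim’s “brain” fits in a couple of lines. Here is a minimal sketch in Python, assuming the standard strategy for single-pile misère Nim (take 1–3 marbles per turn; whoever takes the last marble loses), which is, as I understand it, essentially the game the toy implements in plastic:

```python
def dr_nim_move(marbles_left):
    """How many marbles (1-3) to take.

    Misere single-pile Nim: try to leave the opponent a count of the
    form 4k + 1. If we are already in a losing position, take 1 and
    hope the opponent slips up.
    """
    move = (marbles_left - 1) % 4
    return move if move in (1, 2, 3) else 1

# Play against the rule: 12 marbles, you go first.
pile = 12
while pile > 0:
    pile -= int(input(f"{pile} marbles left. Take 1-3: "))
    if pile <= 0:
        print("You took the last marble. Dr Nim wins!")
        break
    take = dr_nim_move(pile)
    pile -= take
    print(f"Dr Nim takes {take}.")
    if pile <= 0:
        print("Dr Nim took the last marble. You win!")
```

That single modular-arithmetic rule is the entire “intelligence” on display – informed, non-random, and utterly without intention.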

I suggest that the products of evolution look like the products of intelligence (as in informed, non-random decision-making) because they are the products of intelligence (as in informed, non-random decision-making).  The mistake I think ID proponents make is to assume that such a system must be intentional.

What’s the difference?  I suggest that an intentional decision-maker is one that is able to model a distal goal, and to select from a range of possible actions on the basis of which is most likely to bring about that goal.  And I suggest that humans, for example, do this by simulating the outcomes of those actions, and feeding the results of those simulations back into the decision-making process.  This allows us to cut corners in a way that evolutionary processes cannot, and evidently do not.  It also, I suggest, gives us enormous freedom of action – as in “degrees of freedom” – not to do “random” things (which would be the opposite of “intentional”) but things that we will – that we intend.  Although sometimes it leaves us not quite as clever – Dr Nim, after all, usually beat me.
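
To make the contrast concrete, here is a toy sketch of the kind of intentional decision-maker I have in mind: it holds a model of a distal goal, simulates the outcome of each candidate action, and feeds those simulated results back into the choice. All the names and the toy world-model here are hypothetical, purely for illustration:

```python
def choose_action(state, actions, simulate, goal_score):
    """Pick the action whose *simulated* outcome scores best against a
    distal goal: decision by internal simulation, rather than by fixed
    mechanical gating (Dr Nim) or after-the-fact selection (evolution)."""
    return max(actions, key=lambda a: goal_score(simulate(state, a)))

# Hypothetical example: an agent at position 0 wants to reach position 10.
simulate = lambda pos, step: pos + step      # the agent's internal world-model
goal_score = lambda pos: -abs(10 - pos)      # closeness to the distal goal

print(choose_action(0, [-1, 1, 2, 3], simulate, goal_score))  # prints 3
```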

113 thoughts on “Dr Nim”

  1. William J. Murray:
    So, in your opinion, animal mimicry is not a form of deception?

    My opinion is irrelevant. But by the definition I have given, no, it isn’t. If we use a different definition, it may be. That’s the point of being clear about operational definitions.

    I do not think that a milk snake deliberately looks like a coral snake in order to deceive predators into thinking it’s poisonous. I doubt that a mother bird deliberately pretends to have a broken wing in order to deceive a hawk into attacking her. I suspect that a chimp who has sex with another chimp behind a log so that it will not be observed by the alpha chimp is being deliberately deceptive. I know that when I told my mother I hadn’t stolen half a crown out of her purse I was being deliberately deceptive.

    So if we regard being deceitful as a necessarily deliberate act, no, a coral snake is not deceitful. The predator may, however, be deceived.

    Consider the word “teach”:

    “I taught him to the best of my ability, yet he learned nothing”. Is this an oxymoron? It depends whether you regard “teach” as meaning “enabled somebody to learn something” or “tried to enable somebody to learn something”.

    Operational definitions matter. That’s why I am giving some – not to deceive, but to clarify what I mean.

  2. William J. Murray: These are all attributes of the only currently known ID agent – humans. That is all we currently have to base our definition of intelligence, and thus Intelligent Design, on.

    Well, in that case a) you differ from Dembski, which is fine, but you might as well be clear about that and b) if that is the definition of intelligence you are using, then it is important to understand that non-intelligent processes (by your definition) can produce highly non-random results, and indeed, I would argue, be effective problem solvers and decision-makers.

    Therefore we cannot infer “intelligence” (by your definition) from the existence of structures that appear to have resulted from effective problem-solving, decision-making processes.

  3. William,

    You can define words however you want, Liz. IMO, however, what you are doing is deceitful, in the same way that the phrase “compatibilist free will” is deceitful.

    Funny, as I’d also say that claiming FSCO/I is a useful, easily calculable metric is deceitful. And you’ve done that.

    After all, if it were as you claimed, you’d have just calculated it on request to prove the point; instead you left the thread after dropping the bombshell that FSCO/I is actually usable. Would you like me to pull up some quotes? I’m happy to do so – you made some strong, specific claims.

    Why not try and calculate it for, say, what you claim to be the first cell (of which apparently you have knowledge the rest of us don’t have) and its 250 proteins. Then you can prove that the first cell was intelligently designed due to its FSCO/I content. Please note, saying that there is “loads” of FSCO/I present (as KF does) or that it’s over some arbitrary threshold (as KF does) is going to end in a very low-scoring round.

    Notice how nobody actually bothers to calculate it? It’s because they can’t calculate it for anything relating to biology and the claims they are making off the back of it. Yet you seem fooled; perhaps you’d unfool yourself were you to make the attempt to calculate it yourself.

  4. Notice how nobody actually bothers to calculate it?

    KF has done so ad nauseam at UD, for all sorts of phenomena, including biological. I cannot show you what you refuse to see.

  5. I cannot even begin to comprehend how you can think that what you said can possibly follow from the definition of intelligence I provided.

  6. William J. Murray:
    I cannot even begin to comprehend how you can think that what you said can possibly follow from the definition of intelligence I provided.

    How William defined intelligence:

    Intelligence: capacity to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn.

    What I said:

    a) you differ from Dembski, which is fine, but you might as well be clear about that

    How Dembski defined intelligence:

    by intelligence I mean the power and facility to choose between options–this coincides with the Latin etymology of “intelligence,” namely, “to choose between”

    What I also said:

    b) if that is the definition of intelligence you are using, then it is important to understand that non-intelligent processes (by your definition) can produce highly non-random results, and indeed, I would argue, be effective problem solvers and decision-makers

    Dr Nim is not an intelligent system, by your definition. Yet it can produce highly non-random results.

    A GA is not an intelligent system, by your definition. Yet it can produce effective decisions and solutions to problems.

    Ergo, non-intelligent systems, where we define intelligence as per your (William’s) definition, can produce highly non-random results and, indeed, I would argue, be effective problem solvers and decision-makers. Which is what I said.

    Now it is perfectly possible that such systems as Dr Nim and GAs cannot be produced without an external intelligent designer, and we can argue about that. But that makes no difference to the fact that the systems themselves are a) non-intelligent (by your definition), and yet can produce highly non-random results etc etc.

    Yes? Indeed, they can tell us things that we don’t even know, and solve problems that we cannot solve otherwise.

  7. William J. Murray: KF has done so ad nauseam at UD, for all sorts of phenomena, including biological. I cannot show you what you refuse to see.

    No, he hasn’t, William. He’s posted some stuff, but it does not mean what you (or he) thinks it means.

  8. William J. Murray:

    [OMagain] Notice how nobody actually bothers to calculate it?

    KF has done so ad nauseam at UD, for all sorts of phenomena, including biological. I cannot show you what you refuse to see.

    I’ll concede that KF manipulates some figures. For example he confirmed that to calculate CSI (or whatever the current acronym is) of a protein, the raw data used is just the number of residues. This is a calculation in name only. F(n) merely maps back to n, which is our original datum, the number of residues. This is pointless, trivial and, most importantly, does not tell us anything we did not already know.
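
    To see why it is a calculation in name only, here is a sketch of the procedure being criticized – my caricature, not kairosfocus’s actual code – assuming the usual 20-letter amino-acid alphabet:

    ```python
    import math

    def protein_info_bits(n_residues):
        # "Information" as a fixed multiple of the residue count:
        # log2(20), about 4.32 bits per position, times n positions.
        return n_residues * math.log2(20)

    def residues_from_bits(bits):
        # The map is trivially invertible: F(n) merely relabels n.
        return bits / math.log2(20)

    print(protein_info_bits(242))                       # ~1045.9 "bits"
    print(residues_from_bits(protein_info_bits(242)))   # 242.0, our original datum
    ```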

  9. And this is why I don’t bother to pursue this. I cannot show something to those that have, IMO, already refused to see it countless times.

  10. William J. Murray:
    And this is why I don’t bother to pursue this. I cannot show something to those that have, IMO, already refused to see it countless times.

    So you just believe that kairosfocus has done the right math, even though you yourself do not have the expertise to evaluate it?

    Fair enough, but don’t expect the rest of us to take it on trust, especially when some of us do have the skills to evaluate it and can see that it is GIGO.

  11. William J. Murray: KF has done so ad nauseam at UD, for all sorts of phenomena, including biological. I cannot show you what you refuse to see.

    You cannot show what does not exist.

    With one exception, neither kairosfocus nor any other UD commenter has calculated any of their alphabet soup of supposed metrics for any biological “phenomena” that correspond to known evolutionary mechanisms.

    The one exception is vjtorley who attempted to calculate Dembski’s CSI for a gene duplication event. He came to the conclusion that such an event does, in fact, generate CSI by Dembski’s own metric.

    Clearly this is the wrong answer at UD, so vjtorley later posted, at length, about how it is unreasonable to try to calculate CSI at all.

    If you dispute my claims here, I invite you to provide links to where kairosfocus or any other IDCist at UD has calculated CSI or a similar metric for a real biological artifact.

  12. Whether or not KF’s calculations are correct or incorrect, or if there is room to challenge the formula used, or if there are grounds to challenge the premises used in the construction of the means of calculation; whether or not there is an argument against the merit of FSCO/I as an indicator of “best explanation” for ID, to claim that “no one has done the calculations” for something like a protein or a vNSR (OOL) is just blanket denial.

    To claim that the 500 chi metric is “arbitrary” is self-induced, willful blindness. It might not be valid; but it is certainly not arbitrary. How it is calculated for something like a functioning protein is not arbitrary. It may not be based on what someone else might consider to be valid correspondences for the bit input; but the bit input is certainly not “arbitrary”. The 500 chi metric threshold might be challenged as invalid, but it’s certainly not arbitrary.

    But those components to the means of evaluating the FSCO/I of a protein, or a protein structure, are certainly not “arbitrary”, and it is certainly not true that “no one has done the calculations”.

  13. My argument here has nothing to do with whether or not anyone’s math is correct, or if their formula is correct; I’m claiming that he has a formula, presumably for calculating a commodity referred to as FSCO/I (and explained exhaustively) of a biological entity like a protein (and many other things that lend themselves to functional node correspondence to bit input for the formula); that the components of that formula are not arbitrary (whether or not they are valid or correct), and that he has performed such calculations, including describing what the functionally specified nodes are on the thing being described, and why the 500 bit threshold should render a judgement that ID should be included as a necessary part of the explanation.

  14. William J. Murray:
    Whether or not KF’s calculations are correct or incorrect, or if there is room to challenge the formula used, or if there are grounds to challenge the premises used in the construction of the means of calculation; whether or not there is an argument against the merit of FSCO/I as an indicator of “best explanation” for ID, to claim that “no one has done the calculations” for something like a protein or a vNSR (OOL) is just blanket denial.

    It is trivially true that people have done some calculations, and called the output “FSCO/I” or some such. I don’t think anyone is claiming that they haven’t. What we are claiming is that such calculations cannot, and do not, tell you how unlikely it is that the pattern was not designed, which is what these things claim to do.

    The reason they do not is that either they make an unsupported assumption about the probability under the null (Kairosfocus), or they do not tell us how to calculate the probability under the null (Dembski).

    To claim that the 500 chi metric is “arbitrary” is self-induced, willful blindness. It might not be valid; but it is certainly not arbitrary. How it is calculated for something like a functioning protein is not arbitrary. It may not be based on what someone else might consider to be valid correspondences for the bit input; but the bit input is certainly not “arbitrary”. The 500 chi metric threshold might be challenged as invalid, but it’s certainly not arbitrary.

    Agreed. It’s based on a mistake, but it’s not arbitrary. tbh, I think it’s irrelevant. I could be convinced by something far less conservative.

    But those components to the means of evaluating the FSCO/I of a protein, or a protein structure, are certainly not “arbitrary”, and it is certainly not true that “no one has done the calculations”.

    Well, what is true is that the calculations do not tell you how unlikely the observed pattern is under the null of non-design, which is what is claimed. Either the null is wrong, or not specified.

    Sure calculations have been done. They just don’t do what they say on the tin.

  15. William J. Murray:
    My argument here has nothing to do with whether or not anyone’s math is correct, or if their formula is correct; I’m claiming that he has a formula, presumably for calculating a commodity referred to as FSCO/I (and explained exhaustively) of a biological entity like a protein (and many other things that lend themselves to functional node correspondence to bit input for the formula); that the components of that formula are not arbitrary (whether or not they are valid or correct), and that he has performed such calculations, including describing what the functionally specified nodes are on the thing being described, and why the 500 bit threshold should render a judgement that ID should be included as a necessary part of the explanation.

    What is arbitrary about KF’s calculations is his assumption of uniform probability under the null.

  16. William J. Murray:
    My argument here has nothing to do with whether or not anyone’s math is correct, or if their formula is correct; I’m claiming that he has a formula, presumably for calculating a commodity referred to as FSCO/I (and explained exhaustively) of a biological entity like a protein (and many other things that lend themselves to functional node correspondence to bit input for the formula); that the components of that formula are not arbitrary (whether or not they are valid or correct), and that he has performed such calculations, including describing what the functionally specified nodes are on the thing being described, and why the 500 bit threshold should render a judgement that ID should be included as a necessary part of the explanation.

    Please provide either a summary of or a link to a page or comment where:
    a) FSCO/I is rigorously defined mathematically
    b) FSCO/I is calculated for a biological artifact or system, taking into account known physics, chemistry, biology, and evolutionary mechanisms

  17. I’m not saying we need agree that his formula is valid for finding FSCO/I, or even that FSCO/I is a valid means of concluding if an object is the product of ID or not; I’m not qualified to scientifically or mathematically vet that kind of an argument anyway.

    But, those are different kinds of arguments than those presented by people here who assert that no mathematical formula has been offered, or that the 500 chi metric is arbitrary, or that “nobody has done the calculations”.

  18. You might not agree with his reasons for doing so, but that doesn’t make it “arbitrary”. Or are you claiming that he has expressed no reason for doing so?

  19. William J. Murray:
    I’m not saying we need agree that his formula is valid for finding FSCO/I, or even that FSCO/I is a valid means of concluding if an object is the product of ID or not; I’m not qualified to scientifically or mathematically vet that kind of an argument anyway.

    But, those are different kinds of arguments than those presented by people here who assert that no mathematical formula has been offered, or that the 500 chi metric is arbitrary, or that “nobody has done the calculations”.

    Well, there’s been a miscommunication, I think. I think we can all agree that calculations have been offered.

    Some of us think that those calculations are not calculations of the probability that a pattern could have been the result of non-design processes, which is what it claims to be. KF’s is simply the calculation of the probability that the pattern was produced by independent draws. Darwinian processes aren’t independent draws, as Dembski acknowledges.

  20. William J. Murray:
    You might not agree with his reasons for doing so, but that doesn’t make it “arbitrary”. Or are you claiming that he has expressed no reason for doing so?

    He has expressed no reason for doing so that I am aware of. I don’t think he’s even aware that it’s what he’s doing. He certainly hasn’t pointed out that what he’s doing is different from what Dembski advocates.

  21. William J. Murray:
    I’m not saying we need agree that his formula is valid for finding FSCO/I, or even that FSCO/I is a valid means of concluding if an object is the product of ID or not; I’m not qualified to scientifically or mathematically vet that kind of an argument anyway.

    But, those are different kinds of arguments than those presented by people here who assert that no mathematical formula has been offered, or that the 500 chi metric is arbitrary, or that “nobody has done the calculations”.

    William,
    If I told you that the product of 1.9, 2.8 and 3.8 was 6, you might quite reasonably respond that I had not calculated the product of those three numbers.
    I could keep repeating “1.9 x 2.8 x 3.8 = 6” endlessly, but I still have not calculated the product.
    If I eventually explained that “I am ignoring the digits after the decimal” (which kairosfocus has yet to do), you might then respond “multiplication, you are doing it wrong”.
    KF’s “calculation” is as meaningless as mine: I am assuming that we can ignore the digits after the decimal, and he is implicitly assuming (amongst other things) that
    P(A and B) = P(A) x P(B)
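
    That implicit assumption is easy to probe numerically. A standard counterexample – two draws from a deck without replacement:

    ```python
    # P(first card is an ace) = 4/52; given that, P(second is an ace) = 3/51.
    p_both     = (4 / 52) * (3 / 51)   # correct: the draws are dependent
    p_if_indep = (4 / 52) * (4 / 52)   # the independence shortcut

    print(p_both)      # 0.00452...
    print(p_if_indep)  # 0.00591... wrong: the deck remembers the first draw
    ```

    Heredity makes biological “draws” dependent in exactly this way, which is why the shortcut fails there too.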

  22. William J. Murray:
    http://www.uncommondescent.com/intelligent-design/how-to-calculate-chi_500-a-log-reduced-simplified-form-of-the-dembski-chi-metric-for-csi/

    From the post: “How to calculate Chi_500, a log-reduced, simplified form of the Dembski Chi-metric for CSI?”, the calculated results for selected proteins:

    William, show me where, in that post, Kairosfocus demonstrates how to compute p(T|H).

    It doesn’t matter if you don’t know what it means – just find a sentence or equation where he shows how to compute it.

  23. DNA_Jock: KF’s “calculation” is as meaningless as mine: I am assuming that we can ignore the digits after the decimal, and he is implicitly assuming (amongst other things) that
    P(A and B) = P(A) x P(B)

    He also says that:

    If P(T|H) is very low, then this formula will be very closely approximated [HT: Giem] by the formula:

    CSI-lite=-log2[10^150.P(T|H)] . . . eqn n1c

    and then proceeds to use “CSI-lite”.

    But he never justifies, or attempts to justify, why we should assume that p(T|H) is very low.

    All he says is:

    “And, once we have a reasonable estimate of the direct or implied specific and/or functionally specific (especially code based) information in an entity of interest, we have an estimate of or credible substitute for the value of – log2(p(T|H))”

    He seems to assume that if FSC is high, then p(T|H) must be low. Which would be assuming his conclusion. FSC can’t be the “pattern that signifies design” unless we know that it’s, um, the pattern that signifies design.

    And we can’t know that unless we know p(T|H).
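
    For what it’s worth, the arithmetic of the log-reduction itself is unobjectionable, and unpacking it shows where the ~500 bits comes from. A quick check:

    ```python
    import math

    # -log2(10**150 * p) = -log2(p) - 150*log2(10)
    print(150 * math.log2(10))   # 498.29... -- the "500 bits"

    # So "CSI-lite" is positive exactly when -log2(p(T|H)) exceeds ~498.3
    # bits, i.e. when p(T|H) < 10**-150. Fine as arithmetic; the problem
    # is that nothing tells us how to obtain p(T|H) in the first place.
    ```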

  24. William, you said:

    No. It is rigorously defined, and it can be calculated handily. You can find the definition and reference in the FAQ and Glossary on this site, or by googling “kairosfocus FSCO/I” and finding many exhaustive explanations and examples on this site and others.

    This is most interesting: “It is rigorously defined, and it can be calculated handily”. That’s your claim, not anybody else’s, nor UD’s FAQ’s. So stand behind it: tell me the rigorous definition, then use that definition to handily calculate FSCO/I.

    Or accept that you’ve been misled. You said:

    That is exactly what the FSCO/I metric is claimed to provide. It is the same as an attempt to provide a formula for the recognizable effects of gravity; we might not know how gravity/ID works, or how it “makes happen” the effects/patterns we find, but if it is scientific to call an effect an effect of gravity without knowing what gravity is, or the mechanism it uses to achieve the recognizable pattern of effects, then it is scientific to call the theory of ID scientific.

    Unless, of course, one insists that the effects of ID are not recognizable; is that your position? That ID doesn’t produce any recognizable effects that we can – as best explanation – reasonably infer were the product of ID? If so, do you have an answer for my ancient alien artifact challenge?

    Although you remembered to qualify your claim here with “is claimed to provide”, you then go on to assume that it works as described on the tin. When it was noted that, no, fundamental problems exist with the implementation of the idea, you said:

    And I – and others — are of the opposite opinion.

    So it seems to me you have to stand behind your claim; you are, after all, of the opposite opinion.
    Although that was from November 7, 2011, so perhaps you’ve simply decided to believe something different since then, as is your wont.

    You later note that it’s a side issue whether the metric is even valid. So it seems to me you get to sidestep any questions about it by saying that, well, yes, FSCO/I is a concept that somebody invented and, yes, that somebody and some others have claimed to calculate it, BUT those are simply bare statements of fact without any personal endorsement of the validity of those concepts/metrics.

    So if all you are going to do is that then I’ll sign up to the UD RSS feed instead.

    And claiming that you are not a scientist so you can’t comment on whether the mathematics etc. is valid is really a very poor show. If it’s legitimate then it should be reproducible. Other people should be able to calculate FSCO/I and get the same result for the same target.

    If, as you claim, FSCO/I is rigorously defined and can be calculated handily, then why do you suppose nobody has asked Lenski for the genetic sequence of his bacteria pre and post “Citrate event” and calculated the FSCO/I for it? Regardless of the actual results (same, more, less), having a “model” example for the calculation of FSCO/I can only be a good thing for ID, right?

    So why do you suppose it’s never been done, despite the fact that the data is (most likely) available for the asking?

  25. You know there might be a good point lurking here, not that the fact that it’s lurking is to the credit of the alphabet-soup producers.

    There have been some decent attempts to calculate what is sometimes called things like “Specified Complexity” or “Functional Information” and the like, which is simply the proportion of patterns specified in some way out of the total number of possible patterns. Getting the specification appropriate can be tricky, but it’s perfectly doable.

    But any version of a definition that includes p(T|H) (which Dembski’s and KF’s do, but Durston’s and Abel’s don’t) is NOT possible to compute, because nobody has ever told us how to compute p(T|H).

    So at least some of the protests – “look, it’s been done! Durston and Abel did it!” – are equivocation. Yes, they calculated something with some initials. And yes, it was to do with functional/specified complexity. But it WAS NOT what KF claims to be able to compute, which requires a value for p(T|H).

    If you disagree, William, please point to any definition of any metric that is supposed to mean design if it exceeds 500 bits, and show where the value for p(T|H) is computed, and how.

  26. or that the 500 chi metric is arbitrary

    Would 475 or 525 not do just as well? What about 499 and 501? Is a thing scoring 499 not designed? 501 is? How sure are you about that? What would a thing scoring a 1 look like? 10? 100? Just a rough idea would be fine.

    What difference does it make what you pick if you use a straw-man caricature of your opponent’s position anyway? The screens of FSCO/I verbiage that KF copies and pastes are a post hoc justification for an already assumed conclusion. Honestly, if the “cosmos is designed” then what are the chances that life itself is not?

    I’d love to attempt to program a FSCO/I calculator, however crude, but um, I can’t. And neither can anyone else it seems!

  27. KF, in one of his calculations of CSI, says:

    The cell is chock full of FSCO/I

    Starting to see the problem yet, William?
    He goes on to further refine that into cells being “replete” with FSCO/I.

    I have left some further thoughts on that in the sandbox.

  28. Well, the 500 bits thing is silly, and actually wrong, as I understand it, but it pales into insignificance IMO against the total circularity of including the probability of the data given the hypothesis you are seeking to reject in your formula without providing a way of calculating that probability!

    Which of course we can’t do. IDists keep saying “show what the probability is for Darwinian evolution!” Well, we can’t. But by precisely the same token, IDists can’t use CSI to reject it, because that probability value is sitting right there, uncalculated, in the middle of the formula they use to tell us it’s too small to have happened!

  29. William J. Murray:
    http://www.uncommondescent.com/intelligent-design/how-to-calculate-chi_500-a-log-reduced-simplified-form-of-the-dembski-chi-metric-for-csi/

    From the post: “How to calculate Chi_500, a log-reduced, simplified form of the Dembski Chi-metric for CSI?”, the calculated results for selected proteins:

    RecA: 242 AA, 832 fits, Chi: 332 bits beyond

    SecY: 342 AA, 688 fits, Chi: 188 bits beyond

    Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond

    Wow, William, you appear to be . . . easily impressed. I regret that, in order to view this, I broke my usual rule of not sending traffic to UD.

    In your referenced post, kairosfocus starts with a few excerpts from Dembski’s specification paper, does a little simple arithmetic, invents a couple of new terms, waves his hands a bit, quotes a semi-related post by vjtorley, assumes his conclusions (“In addition, the only observed cause of information beyond such a threshold is the now proverbial intelligent semiotic agents.”), and then pulls in some results – which happen to have units of bits – from a completely different algorithm.

    That’s not even close to a mathematically rigorous definition.

    He also makes it very clear that his calculations, such as they are, are based on the idea that proteins assemble de novo (e.g. “the total number of Planck-time quantum states for the atoms of the solar system”). This is the tornado in a junkyard fallacy and is related to the P(T|H) issues raised by others in this thread.

    Even if they were rigorous, his calculations have nothing to do with observed biological systems and provide no support for the claim that an intelligent agent is required for their existence.

    Co-opting results from Durston’s paper is not the same as calculating an ID metric.
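
    Indeed, the three quoted results can be reproduced with a single subtraction, which is all the “calculation” amounts to once Durston’s fits values are taken as given:

    ```python
    # Durston's "fits" (functional bits) for the three proteins quoted above.
    fits = {"RecA": 832, "SecY": 688, "Corona S2": 1285}

    for protein, f in fits.items():
        chi = f - 500   # "Chi_500": Durston's number minus the 500-bit threshold
        print(f"{protein}: {chi} bits beyond")   # 332, 188, 785 -- as quoted
    ```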

  30. When he describes those values, he prefaces it with a link to a paper by Dembski – Specification: The Pattern That Signifies Intelligence – which appears to me – but I’m no mathematician – to more clearly define the nature of those terms and how to compute them.

  31. If I’m correct, P(T/H) = the probability of a target zone being acquired/hit by a chance hypothesis. I would assume this means, in biological terms, the probability that a chance (unintelligent) means could hit the target sequence given the full range of possibilities.

    Douglas Axe and others have published work that makes the case that the number of potential functional proteins (as sequences of amino acids) compared to the possible number of sequences (functional and non-functional) is tiny (he arrives at a specific number of 1 in 10 to a certain power).

    From what I read in Meyer’s first book, there are no chemical affinities that would lead anyone to believe that, before life began, functional proteins would be favored over non-functional. I could be wrong about this – both in fact, and whether or not it was in Meyer’s book.

    Correct me if I’m wrong, but in my view, unless there is something skewing the bindings of amino acids towards the generation of functional proteins, that would mean we have at best a flat probability distribution of the generation of a functional protein (without intelligent guidance), and if the target zone is what Axe and others have computed it to be (given current information) compared to the full range of possibilities – isn’t that enough to calculate the P(T/H) as it correlates to the spontaneous emergence of particular protein sequences, given the number of binding sites necessary to its particular function, the conditional probability taking into account how specific the sequence must be for the protein to function properly (again, accounted for – as far as I can tell – in Axe’s work)?

  32. William J. Murray: The Pattern That Signifies Intelligence – which appears to me – but I’m no mathematician – to more clearly define the nature of those terms and how to compute them.

    Well you don’t need to be a mathematician to see the absurdity at the heart of the claim.
    What’s the (im)probability for the Darwinian evolution of the (a) bacterial flagellum?
    Um, don’t know. High?
    Therefore ID.

  33. William J. Murray: given the number of binding sites necessary to its particular function, the conditional probability taking into account how specific the sequence must be for the protein to function properly

    What function? Assuming there is a transition between chemistry and life, what do you imagine that junction between what we’d call life and non-life would look like? You seem to have some quite specific ideas. It’s a shame, because it’s actually much more interesting than you make out.

    Tell me, what function is a protein performing if it’s step N of NN in an ancestral history of “reactions that happened” along the way to that junction between life and non-life?

    You’ve already ruled out natural selection etc. at OOL, so what’s left? The spontaneous appearance of a low-probability protein sequence that must be almost perfect to function properly? Is that all you’ve left me?

    Perhaps if you heard what people are actually saying, rather than what the ID books would have you believe?
    E.G.
    http://astrobiology2.arc.nasa.gov/focus-groups/current/origins-of-life/seminars/

    It’s honestly much more interesting than anything Meyer has to say.

  34. William J. Murray:
    When he describes those values, he prefaces it with a link to a paper by Dembski – Specification: The Pattern That Signifies Intelligence – which appears to me – but I’m no mathematician – to more clearly define the nature of those terms and how to compute them.

    But nowhere in Dembski’s paper does he tell us that. You don’t even need to be a mathematician to confirm that what I’m saying is true (I’m not a mathematician myself, although I’m reasonably literate in statistics and probability).

    Here’s the link to the paper: Specification: The Pattern That Signifies Intelligence. Press Ctrl-F, then enter p(T|H). You’ll get lots of hits, but not one that tells you how to compute it.

    The reason is: you can’t. So our claim stands: nobody has shown how to calculate CSI (or FSCO/I). Sure, people like Durston, Abel and Hazen have computed something like Specified Complexity, or Functional Complexity, using part of Dembski’s equation, but they don’t plug in the p(T|H) part. All they do is compute p(T|random independent draws).

    Which, in English, is: what proportion of the time you’d see the observed pattern, or one comparably specified/functional, if patterns were drawn independently out of a hat containing an infinite number of patterns of the same size, consisting of the same elements in the same proportions.

    Which is not what anyone claims evolution is, as Dembski clearly acknowledges in that paper:

    Next, define p=P(T|H) as the probability for the chance formation for the bacterial flagellum. T, here, is conceived not as a pattern but as the evolutionary event/pathway that brings about that pattern (i.e., the bacterial flagellar structure). Moreover, H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms.

    (The symbol |, by the way, means “given”, and “p” usually stands for probability.)
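
    In code, the quantity that actually gets computed looks something like this – a sketch with a made-up “functional” specification, purely for illustration:

    ```python
    import random

    ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # the 20 amino acids

    def is_functional(seq):
        # Hypothetical specification, for illustration only:
        # "functional" = starts with methionine and ends with lysine.
        return seq[0] == "M" and seq[-1] == "K"

    def p_T_independent_draws(length, trials=200_000):
        """Estimate the proportion of patterns 'drawn out of a hat'
        (independent, uniform draws) that meet the specification."""
        hits = sum(is_functional("".join(random.choices(ALPHABET, k=length)))
                   for _ in range(trials))
        return hits / trials

    print(p_T_independent_draws(100))   # ~0.0025, i.e. (1/20) * (1/20)
    ```

    The estimate converges on (1/20)², as it must. Nothing in it models heredity or selection – no “Darwinian and other material mechanisms” anywhere – which is the whole complaint.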

  35. Essentially, William, what Dembski (and KF, though I don’t think he’s aware of it) is saying is that if we know that the probability of something occurring by non-design is tiny, we can infer that it was designed.

    But they don’t tell us how to compute the probability that something occurred by non-design. They just keep repeating that if it’s tiny, we must infer design.

    Sure. I agree (well, that 500 bits is probably wrong, but I’d accept an alpha of 500 bits – even physicists traditionally accept 5 sigma, which is only 22 bits).

    But the 64 thousand bit question is: how tiny is p(T|H)?
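
    (The 22-bit figure is just a change of units, by the way. A quick check, assuming the usual one-tailed 5-sigma p-value:)

    ```python
    import math

    p_5sigma = 0.5 * math.erfc(5 / math.sqrt(2))   # one-tailed normal tail area
    print(p_5sigma)               # ~2.87e-07
    print(-math.log2(p_5sigma))   # ~21.7 -- the "22 bits"
    ```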

  36. All they do is compute p(T|random independent draws).

    I guess you’re arguing that there are some chance (unintelligent) searches (sets of draws stemming from some particular chance hypothesis) that offer a better than “random-independent-draw” outcome in terms of acquiring a functional protein.

    This is something Dembski has published papers on – that no chance (unintelligent) search can statistically outperform a blind (random-independent-draw) search, and that any supposedly chance search system that exceeds a flat-draw probability curve (such as supposed evolution-simulating programs) does so only because of oracle (target) information that is supplied to the system in some form.

  37. So, if Dembski is right, and no unintelligent search can increase the chances of finding a functional protein before life; and if Axe is right, and the target space of functional proteins is X, and the entire space of protein configurations is Y, then we should assume a flat distribution and the value of P(T|H) is – as far as I can tell – rather easily arrived at.

    One might argue that this is all well and good before life, but that after life has kicked off Darwinian processes, we have no better odds at finding new, functioning proteins through chance (unintelligent) biological mechanisms, as per Dembski’s papers and Axe’s.

  38. William J. Murray: I guess you’re arguing that there are some chance (unintelligent) searches (sets of draws stemming from some particular chance hypothesis) that offer a better than “random-independent-draw” outcome in terms of acquiring a functional protein.

    Well, first of all I’m making it clear that when we say that nobody has told us how to compute CSI, we are being absolutely accurate. It’s not that Dembski thinks that p(T|H) is the same as a random draw, because he specifically says that it needs to take account of “Darwinian and other material mechanisms”. If he thought that they were the same as random draw, he wouldn’t need to say that, would he? But he doesn’t say how to compute it. Ergo, we can’t compute CSI.

    This is something Dembski has published papers on – that no chance (unintelligent) search can statistically outperform a blind (random-independent-draw) search, and that any supposedly chance search system that exceeds a flat-draw probability curve (such as supposed evolution-simulating programs) does so only because of oracle (target) information that is supplied to the system in some form.

    Well, he’s written lots of contradictory and many invalid things. First of all he claimed that no algorithm could outperform blind search; then, when it became clear that this was false, he tried to say that, well, you’ve got to search for a search that will outperform blind search, and that’s even harder. He also says stuff about smuggling in an oracle. And he goes back again and again to Dawkins’ WEASEL, although acknowledging that WEASEL is unlike most such algorithms (and unlike any useful one) in that the solution is also the problem (the fitness function is identical to the solution). This is not the case with any functional GA, which we set up in order to find (i.e. search for) the answer to a problem that we don’t know in advance (the “Target”). And GAs find them, not because we give them the answer (the “Target”) but because we give them the problem.

    In nature, nobody has to give self-replicating critters a problem. Their problem is intrinsic to the fact of their self-replication – it’s to self-replicate successfully in the environment in which they find themselves. If they didn’t self-replicate they wouldn’t be self-replicators! And so the target is: “self-replicate in this environment”. No smuggling required. And as the environment changes, so will the problem, and the solutions.
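
    A minimal evolutionary-algorithm sketch of that point (selection plus mutation only, with a hypothetical problem and made-up parameters): the fitness function states the problem – load as close to a weight limit as possible without exceeding it – and nowhere encodes the winning bit-string:

    ```python
    import random

    WEIGHTS  = [12, 7, 11, 8, 9, 6, 5, 14, 3, 10]   # hypothetical items
    CAPACITY = 40

    def fitness(genome):
        # The problem, not the answer: total weight, zeroed if over capacity.
        total = sum(w for w, bit in zip(WEIGHTS, genome) if bit)
        return total if total <= CAPACITY else 0

    def mutate(genome, rate=0.1):
        return [bit ^ (random.random() < rate) for bit in genome]

    population = [[random.randint(0, 1) for _ in WEIGHTS] for _ in range(50)]
    for _ in range(200):
        # Heritable variation in reproductive success: the fitter half breeds.
        population.sort(key=fitness, reverse=True)
        population = population[:25] + [mutate(g) for g in population[:25]]

    best = max(population, key=fitness)
    print(fitness(best), best)   # typically 40: found, not smuggled in
    ```

    The “target” the critics worry about appears nowhere; only the problem does.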

  39. William J. Murray: we have no better odds at finding new, functioning proteins through chance (unintelligent) biological mechanisms, as per Dembski’s papers and Axe’s.

    Well, in fact I think you’ll find that those “new, functioning proteins” are all available straight away. They are those that surround the current configuration. Some of them will allow a viable organism. And that’s the point.

  40. William J. Murray: only because of oracle (target) information that is supplied to the system in some form

    That would be correct, yes. It’s in essence the environment.

    The oracle “said” you died. The oracle “said” you lived. Etc.

  41. William J. Murray:
    So, if Dembski is right, and no unintelligent search can increase the chances of finding a functional protein before life; and if Axe is right, and the target space of functional proteins is X, and the entire space of protein configurations is Y, then we should assume a flat distribution and the value of P(T|H) is – as far as I can tell – rather easily arrived at.

    Sure. But we can’t assume Dembski is right (that p(T|H) is the same as p(T|independent random draws)) before calculating the very thing that is supposed to tell us whether he’s right! That’s the circularity I keep banging on about. Sure, if evolution is impossible without design, then there must have been a Designer.

    And your inference re Axe is incorrect. Even if Axe were right, and had shown that the target space of functional proteins is tiny and the entire space of protein configurations vast, protein configurations are not independently drawn. Even if it were the case that all amino acids are equally prevalent (flat distribution), which they aren’t, that makes no difference to my point, which is: given sequence A, sequences close to sequence A are more likely than very different sequences. And if sequence A has phenotypic effects, then that will hugely influence the probability of future A sequences and of sequences similar to A.

    This is another point that Axe, Gauger, and Dembski keep appearing not to see – they seem to think that “fitness landscapes” are drawn from a flat distribution of possible fitness landscapes, of which the vast majority are untraversable. They aren’t. Smooth, high-dimensioned fitness landscapes are intrinsic to the fact of critters that self-replicate with heritable variance in reproductive success in an environment rich in opportunities and threats. The first makes the landscape smooth; the second makes it high-dimensioned. Both make it highly traversable.

    Of course we could tack back further to OOL, but ID proponents can’t get rid of their antipathy to poor old Darwin!

  42. William J. Murray: I guess you’re arguing that there are some chance (unintelligent) searches (sets of draws stemming from some particular chance hypothesis) that offer a better than “random-independent-draw” outcome in terms of acquiring a functional protein.

    This is something Dembski has published papers on – that no chance (unintelligent) search can statistically outperform a blind (random-independent-draw) search, and that any supposedly chance search system that exceeds a flat-draw probability curve (such as supposed evolution-simulating programs) does so only because of oracle (target) information that is supplied to the system in some form.

    You are apparently referring to Dembski’s gross misuse and abuse of the No Free Lunch theorems. Your paraphrasing, while common at UD, does not reflect the actual mathematics.

    What the NFL theorems say is that, averaged over all possible fitness landscapes, no algorithm will be better than blind search. That bit I put in italics is important. We’re not averaging over all possible fitness landscapes; we’re dealing with the one fitness landscape of the universe we inhabit generally and the planet we’re on specifically. For a particular fitness landscape, certain algorithms definitely outperform blind search and other algorithms.

    Known evolutionary mechanisms demonstrably perform better than blind search on the type of fitness landscape we inhabit. Nothing in the NFL theorems contradicts this, nor does any of Dembski’s work.
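
    A sketch of that italicized point, under stated assumptions: on one particular smooth landscape, even a trivial hill-climber beats blind search given the same evaluation budget:

    ```python
    import random

    N_BITS, BUDGET = 100, 2000

    def fitness(bits):
        return sum(bits)   # one particular, smooth landscape -- not an average

    def blind_search():
        return max(fitness([random.randint(0, 1) for _ in range(N_BITS)])
                   for _ in range(BUDGET))

    def hill_climb():
        current = [random.randint(0, 1) for _ in range(N_BITS)]
        score = fitness(current)
        for _ in range(BUDGET):
            candidate = current[:]
            candidate[random.randrange(N_BITS)] ^= 1   # one local move
            s = fitness(candidate)
            if s >= score:
                current, score = candidate, s
        return score

    print("blind search:", blind_search())   # ~65 of 100 on this budget
    print("hill climbing:", hill_climb())    # ~100 of 100
    ```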

    There are also some issues with attempting to apply the NFL theorems to dynamic landscapes, but that’s a different topic.

    ETA: Treating evolution as a search is also problematic, but that’s another different topic.

  43. William J. Murray:

    One might argue that this is all well and good before life, but that after life has kicked off Darwinian processes, we have no better odds at finding new, functioning proteins through chance (unintelligent) biological mechanisms, as per Dembski’s papers and Axe’s.

    William, you seem to have changed horses here – are you now defining “unintelligent” as “chance”? You had a very nice definition of “intelligence” earlier – have you changed it to “non-chance”?

    If not why do you write “chance (unintelligent) biological mechanisms”?

  44. I missed this post, William (maybe I should turn off nesting completely). Some additional comments:

    William J. Murray:
    If I’m correct, P(T/H) = the probability of a target zone being acquired/hit by a chance hypothesis. I would assume this means, in biological terms, the probability that a chance (unintelligent) means could hit the target sequence given the full range of possibilities.

    Yes, according to Dembski, P(T|H) is the probability of a Target, given the “chance” hypothesis, where the “chance” hypothesis consists of all non-design hypotheses. However, this opens the very can of worms my OP was designed to open – Dembski appears to assume that “chance” is anything that is “unintelligent”, or, conversely, that “intelligent” means “non-chance”. Yet neither his definition of “intelligent”, nor yours (which is different), is “non-chance”. And, as we have seen, many things can be non-intelligent (by your definition) and yet not “chance” – unless we are simply defining “chance” as “not-intelligent”, which would fly in the face of normal usage: it would mean that it is mere “chance” that an apple falls to the ground rather than flying off into space. Note that Dembski used to have a second category of causation, “Necessity”, but that no longer appears in his CSI argument. He subsumes “Darwinian and other material mechanisms” into his “chance” hypothesis.

    Douglas Axe and others have published work that makes the case that the number of potential functional proteins (as sequences of amino acids) compared to the possible number of sequences (functional and non-functional) is tiny (he arrives at a specific number of 1 in 10 to a certain power).

    From what I read in Meyer’s first book, there are no chemical affinities that would lead anyone to believe that, before life began, functional proteins would be favored over non-functional. I could be wrong about this – both in fact, and whether or not it was in Meyer’s book.

    Meyer’s book is a mess. But whether or not what you say was in it, what you say doesn’t make a lot of sense. “Before life began” – do you mean before any self-replicating entity began, or before DNA-protein-based life began? If the former, then “functional” had no meaning before there was any self-replication, because “function”, in biology, means “has phenotypic effects”. A sequence that codes for a protein, where that protein has no effect on the phenotype (the organism), isn’t serving any function. If you mean the latter, I guess you are saying that there is no greater reason for a DNA variant to code for a functional protein (one with phenotypic effects) than for a non-functional one (without phenotypic effects). That is probably true.

    But what is also true is that DNA sequences that produce proteins with small phenotypic effects are more likely to have offspring with similar sequences, and proteins coded by similar sequences are likely to have similar effects. Some of these will be more functional (improve the organism’s reproductive chances relative to the parent sequence), others less functional. The former will become more prevalent. Rinse and repeat.

    Correct me if I’m wrong, but in my view, unless there is something skewing the bindings of amino acids towards the generation of functional proteins, that would mean we have at best a flat probability distribution of the generation of a functional protein (without intelligent guidance), and if the target zone is what Axe and others have computed it to be (given current information) compared to the full range of possibilities

    The issue is not whether the bindings of amino acids are skewed towards proteins that help the organism reproduce (functional), but whether similar coding sequences result in proteins with similar phenotypic properties, which they do. Given that this is the case, any sequence that results in a minimally functional protein will become highly prevalent in future generations, i.e. there will be far more individuals carrying that sequence than not carrying it. Therefore there will be gazillions of opportunities for some offspring of some parent somewhere to find itself with a sequence that produces an even more effective version of that protein.

    Proteins are not “islands of function” as KF likes to imply. Even if functional proteins are rare compared with a vast space of non-functional proteins, if they are coded by similar sequences they will be connected to each other by causeways. P(T|H) (the vertical bar is usually shift backslash, next to the left shift key) must take this connectedness into account.
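
    The causeways point can be put numerically. A sketch with an illustrative sequence length and per-site mutation rate: under inheritance, the probability of an offspring sequence falls off steeply with its Hamming distance from its parent, so sequences are nothing like independent draws from a hat:

    ```python
    from math import comb

    n, mu = 300, 1e-3   # illustrative sequence length and per-site mutation rate

    def p_offspring_at_distance(d):
        # Binomial mutation kernel: probability that the offspring differs
        # from its parent at exactly d of the n sites.
        return comb(n, d) * mu**d * (1 - mu)**(n - d)

    for d in (0, 1, 2, 5, 20):
        print(d, p_offspring_at_distance(d))
    # d=0: ~0.74, d=1: ~0.22, d=2: ~0.033 ... d=20 is astronomically unlikely.
    # Neighbours of a working sequence get sampled constantly; remote regions
    # of sequence space effectively never. p(T|H) must reflect that.
    ```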

    – isn’t that enough to calculate the P(T/H) as it correlates to the spontaneous emergence of particular protein sequences, given the number of binding sites necessary to its particular function, the conditional probability taking into account how specific the sequence must be for the protein to function properly (again, accounted for – as far as I can tell – in Axe’s work)?

    No. To calculate P(T|H), we need to know several things that are not easily knowable, possibly never knowable: we need to know how many possible proteins have phenotypic effects in any organism that might have lived, whether it did or not; in what environments such phenotypic effects would have increased the organism’s chances of successful reproduction; and whether those environments were around at the time our putative functional-protein-bearer lived.

    In other words, to calculate P(T|H) we need to know the probability that Darwinian evolution could have produced what we observe. And as the entire point of the calculation is to determine whether it could have done, the entire concept is circular. You need to know the answer before you can find the answer!

    My advice to ID proponents: drop CSI like a hot potato, and embrace IC instead. Focus on determining whether or not the simplest possible common ancestor of known living things is IC (and drop all these guys at UD who are trying to persuade vjtorley that common descent is not plausible – it is, and hugely supported, as Behe also acknowledges). And if you can do that, then you can maybe rope Dembski back in to show whether that simplest possible common ancestor has CSI (because now you won’t have to compute the probability under Darwinian mechanisms – you will already have ruled them out).

    At that point, ID may have a case. Right now, it hasn’t. Although it may still be correct.

  45. tbh I think that talking about “functional proteins” is somewhat incoherent. The word “function” implies that the protein is doing something (serving some function) in an organism. The same protein may be highly functional (do something very useful) in one organism and be useless, or even lethal, in another. What’s more, in multicellular organisms, the same protein may be highly functional in some tissues, but pathogenic in others. Or functional in moderate quantities and lethal in more. Or more effective up to a certain number of repeats, and pathogenic beyond that. Or highly functional in one environment and dysfunctional in another.

    But this discussion probably better belongs here: Protein Space. Big, isn’t it?
