Intelligence and Design.

My copy of No Free Lunch arrived a few days ago, and there are a couple of posts I want to make about it. But the first thing that struck me, reading the preface – and not for the first time – is how little Dembski (like other Intelligent Design proponents) seems to know about either Intelligence or Design.

As it happens, I have a relevant background in both.  I’m a cognitive scientist, and I came into cognitive science from a background in educational psychology, so I’ve always been interested in intelligence – how it works, how it is measured, what factors affect it, etc.  And, somewhat unusually for a cognitive scientist, I also have a training in design – I trained as an architect, a design training that is specifically focussed on “problem solving”, but I also applied that training to other “design” modalities, including composing music, and writing children’s books that attempted to explain something, both to commission, and therefore with a “design brief”.

And in both areas, what is abundantly clear, is that learning is critical.

When a child is struggling, cognitively, we say she is “learning disabled”, or is a “slow learner”.  When we design a building, or a piece of music, or a piece of writing, we embark on an iterative process in which our output feeds back as input into the process of critical appraisal and re-appraisal that informs sometimes radical, more often incremental, changes to our current creation.

In other words, “intelligent design” is a process in which feedback from the environment, including our own output, iteratively serves as input into the design process.  Both intelligence in general, and design in particular, are learning processes.

But to read Dembski’s preface, you would not know it:

How a designer gets from thought to thing is, at least in broad strokes, straightforward: (1) A designer conceives a purpose.  (2) To accomplish that purpose, the designer forms a plan.  (3) To execute the plan, the designer specifies building materials and assembly instructions.  (4) Finally the designer or some surrogate applies the assembly instructions to the building materials.  What emerges is a designed object, and the designer is successful to the degree that the object fulfills the designer’s purpose.

Well, not exactly, IMO, and the part that Dembski misses (or, at best, glosses over) is precisely the part that most resembles evolution: the iterative feedback from the environment that results in the incremental adjustment of the prototype so that it ever more closely fulfils some function.  Not only that, but that function is not by any means always the original one.  For a building, typically it is, at least for its first occupants.  But buildings that survive the longest and are best maintained are those that are readily incrementally adapted for other functions.  And anyone who has ever made a pot, or carved a block of wood or marble, knows that what emerges is the result of a kind of dialogue between the sculptor and the material, and the result may be something very different to what the designer had in mind when she started.  Click on my sister’s blog in the blog roll if you don’t believe me 🙂

In fact, I’d go so far as to say that the one thing that separates “intentional” design from, I dunno, “iterative” or “tactile” design is that humans are capable of simulating the results of their iterative design before execution, so that we don’t have to build first, then dismantle.  But even then, we actually make models, often very crude models, out of crude materials, in the early processes of a design (well, this is true of architecture anyway) – three-dimensional back-of-the-envelope sketches, made of corrugated cardboard, bits of mesh, gauze, sponge, silver paper, prototypes we can nudge and fix and re-order and reassemble, according to how well the thing seems to work.

Intelligent design is very like evolutionary processes, in other words.  So it’s not surprising that the products of both should show a family resemblance.  Oddly, I agree with Dembski that he has put his finger on a kind of pattern that is distinctive, when he talks about “specified complexity”.  I just don’t think it has much to do with intention, and everything to do with iterative adjustments in response to environmental feedback.

Biology has all the hallmarks of a learning process, in other words.  Evolutionary processes are learning processes, as is human intelligence.  Those would seem to be reasonable candidate authors of a pattern that exhibited “specified complexity”.  An omniscient and omnipotent creator, not so much.


180 thoughts on “Intelligence and Design.”

  1. William J. Murray: No, they haven’t. They’ve been about what intention and goals are in relationship to any designer, and the difference between those normative concepts and non-artificial process descriptions.

    Are you saying then that “the designer” posited by IDists could be, or is, “any designer”?

  2. Are you saying then that “the designer” posited by IDists could be, or is, “any designer”?

    All major ID proponents state that ID has nothing to do with any particular designer. ID is fundamentally about design detection, not designer identification.

  3. If one is going to claim that natural selection optimizes self-replication, then surely they can explain what that means in terms of the difference between optimal and non-optimal self-replication patterns.

    One treads warily when Merriam-Webster is brought into play … there are not two discrete points – ‘optimal’ and ‘non-optimal’. The operative words would be “as possible”. There is a continuum, on which an entity can be above or below the other entities that are also replicating – each is more or less ‘optimised’ than its rivals, in terms of fighting its corner in the reproductive ‘competition’. The differential is typically reduced to differences in reproductive output, so the more optimal variety would be that producing the greater numbers of offspring. Ultimately, such optimisation leads nowhere, since once that variety eliminates its rival, the differential no longer exists – but the new population is a little more ‘tuned’ to the environment.

    Natural selection is both shorthand for the many causal agents that lead to differential reproduction, and the result of the operation of those agents, which is (because those causal agents are the environment) an increase in the ability of those replicators to prosper in that environment.

  4. William J. Murray: All major ID proponents state that ID has nothing to do with any particular designer. ID is fundamentally about design detection, not designer identification.

  5. The differential is typically reduced to differences in reproductive output, so the more optimal variety would be that producing the greater numbers of offspring.

    Natural selection is both shorthand for the many causal agents that lead to differential reproduction, and the result of the operation of those agents, which is (because those causal agents are the environment) an increase in the ability of those replicators to prosper in that environment.

    Unless terms like “prosper” are quantitatively defined, they’re too vague to be worth anything. If you mean “produce greater numbers of offspring”, then you have something measurable.

    The problem is that when I challenge this view of natural selection – that the qualitative measurement is about producing the most offspring – defenders of natural selection claim I’m misrepresenting natural selection. In fact, NS defenders will not (in my experience) tie NS to any particular metric and stick with it, because as soon as the problem with that metric is pointed out, they back off.

    Case in point: if NS is about producing the “most” offspring in a given environment, then why, in billions of years of evolution, has it not improved on the geometric progression rate of that which was pretty much available from the beginning? In fact, it seems that, historically, evolution has produced less and less fecund organisms over time.

    Another case in point: survivability. Sometimes NS defenders argue that another thing taken into account in the NS metric is survivability (after all, you have to survive first in order to reproduce); but once again, we see the history of evolution on earth not as a progression from poor survivability to greater survivability, but rather just the opposite, as evolution started early on with – again, arguably – a very hardy life form, and ever since has generated nothing but less hardy life forms, more uniquely suited to niche environments and incapable of survival too far outside of their niche.

    Also, it is clear that the more interdependently complex and specified for purpose an organism’s make-up is, and the more such parts it has, the more energy it must consume and the more easily it can fail, due to error or environmental difficulty. Also, noticeably, the more difficult the process of procreating becomes.

    These two classic NS metrics do not categorically describe or predict what darwinian forces have supposedly wrought here on Earth, which is – arguably – a reduction in both survivability and fecundity as evolution generates biological entities more and more distant from their microbial origins.

    So again, and more fully:

    (1) What is the measurable optimization metric NS utilizes?

    (2) How can it be expressed in quantifiable terms (between optimal and non-optimal, at least in terms of a gradient)?

    (3) How does that difference engine categorically explain the evolutionary generation of the diversity of life?

    Note: a process that might allow something to occur is not an explanation for the occurrence. Because there is no physical barrier for the evolutionary production and maintenance of less survivable, less fecund organisms, does not mean that a more-survivable, more-fecund metric explains the existence of that category of organisms. That category of organism would exist in spite of the overall course of the metric, not because of it.

  6. William J. Murray: All major ID proponents state that ID has nothing to do with any particular designer. ID is fundamentally about design detection, not designer identification.

    Then ID isn’t about origins after all, is it?

  7. WJM:

    Case in point: if NS is about producing the “most” offspring in a given environment, then why, in billions of years of evolution, has it not improved on the geometric progression rate of that which was pretty much available from the beginning? In fact, it seems that, historically, evolution has produced less and less fecund organisms over time.

    For someone who makes regular complaint about people missing his points, you have a rare gift for missing others’. I have addressed this point before, and also in the post you quote, in this sentence:

    Ultimately, such optimisation leads nowhere, since once that variety eliminates its rival, the differential no longer exists – but the new population is a little more ‘tuned’ to the environment.

    In a finite world, replicators initially increasing at an exponential rate will ultimately hit the carrying capacity of the environment. They then approach a steady state, which amounts to one replacement genome for each whole genome in the starting population. Your bizarre expectation that casting Natural Selection in reproductive terms means that organisms should now be producing many more offspring than formerly is a straw man if ever I saw one.

    From bacteria to flatworm to giraffe to Man, limitations restrict increase. This is the work of Malthus that so influenced Darwin and Wallace. But within a population, a new variant can arise that produces more offspring than the existing incumbents. It does so until the population is entirely occupied by the new variant. Then what? If the new variant produced 2 offspring each to the ancestor’s 1, does this mean that the new population will grow exponentially? For a time, it may do, but it does not have to for the fitter to replace the less fit. Sooner or later, it reaches a new steady state. Exponential growth cannot carry on forever, either in the world of economics or of biology.
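
    That trajectory is easy to sketch (a toy logistic-growth model in Python; every number here is illustrative, not a measurement):

        # Logistic growth: exponential increase flattens at carrying capacity.
        # All parameter values here are illustrative, not measurements.
        r = 0.5        # per-generation growth rate with unlimited resources
        K = 10_000     # carrying capacity of the environment
        n = 10.0       # founding population

        for gen in range(41):
            if gen % 5 == 0:
                print(f"gen {gen:2d}: population {n:8.0f}")
            n += r * n * (1 - n / K)   # growth slows to zero as n approaches K
        # Near K the population merely replaces itself: one genome per genome.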

    The other error is to suggest that there is some kind of a progression to evolution. The ‘less fecund’ organisms you refer to are presumably the complex ones, like us and giraffes and willows. But these are no more ‘the products of evolution’ than the rapidly-reproducing bacteria. Complexification is something that has arisen in a specific kind of organism – the eukaryote, more specifically the sexual eukaryote. Regardless, all of us are producing an average of one genome copy per individual, whatever our organismal form. For a time, some may do better – but nearly always at the expense of another. Sometimes populations grow, sometimes they shrink, and sometimes they shrink so much they go extinct. We are all fighting for limited resources, and ultimately our ‘fight’ boils down to attempting to replicate our genome. The fact that all organisms are doing the same thing means that all gains are temporary ones – but out of the ‘attempt’ comes adaptation.

    (1) What is the measurable optimization metric NS utilizes?

    The ‘measurable optimisation metric’ is the selective advantage of an allele – in a succession of lives, the fitness (reproductive output) of carriers vs non-carriers.

    (2) How can it be expressed in quantifiable terms (between optimal and non-optimal, at least in terms of a gradient)?

    The fact that there is a differential gives rise to the gradient. That gradient is severely buffeted by the effects of Drift (sample error), and so diffusion equations tend to be used – they have both a directional and a non-directional component. (A toy simulation at the end of this comment illustrates both components.)

    (3) How does that difference engine categorically explain the evolutionary generation of the diversity of life?

    It doesn’t. It is not always easy to measure and, in a historical sense, not recoverable at all. Additionally, one needs to incorporate migration and reproductive isolation – additional historical facts not preserved. The specific points occupied by present survivors in the vast ‘space of all possible organisms’ is not categorically explicable – it is subject to massive contingency. But the principles by which replicating populations diverge to yield such points is predictive – IF mutation-fixation in isolated lines of descent is the process by which diversity originates, THEN we expect to find certain patterns but not others. We find the patterns the theory predicts, which is why the theory survives.
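
    Here is that toy simulation – a minimal Wright-Fisher sketch in Python, with all parameter values invented for illustration. Selection supplies the directional component; binomial resampling of each new generation supplies the non-directional one (drift):

        import random

        def fixation_rate(N=1000, s=0.02, p0=0.01, runs=200):
            """Fraction of runs in which an allele starting at frequency p0,
            with selective advantage s, reaches fixation in a population of N."""
            fixed = 0
            for _ in range(runs):
                p = p0
                while 0.0 < p < 1.0:
                    p_sel = p * (1 + s) / (1 + s * p)   # directional: selection
                    # non-directional: binomial resampling of N offspring (drift)
                    p = sum(random.random() < p_sel for _ in range(N)) / N
                fixed += (p == 1.0)
            return fixed / runs

        print(fixation_rate())        # with s > 0: fixes far more often than p0
        print(fixation_rate(s=0.0))   # neutral: fixes at roughly its frequency p0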

  8. Strange that you cannot demonstrate “evolutionary” processes actually designing stuff- not in biology anyway.

  9. Intelligent Design Is Not Optimal Design:

    The confusion centered on what the adjective “intelligent” is doing in the phrase “intelligent design.” “Intelligent,” after all, can mean nothing more than being the result of an intelligent agent, even one who acts stupidly. On the other hand, it can mean that an intelligent agent acted with skill, mastery, and eclat. Shermer and Prothero understood the “intelligent” in “intelligent design” to mean the latter, and thus presumed that intelligent design must entail optimal design. The intelligent design community, on the other hand, means the former and thus separates intelligent design from questions of optimality.

    But why then place the adjective “intelligent” in front of the noun “design”? Doesn’t design already include the idea of intelligent agency, so that juxtaposing the two becomes an exercise in redundancy? Not at all. Intelligent design needs to be distinguished from apparent design on the one hand and optimal design on the other. Apparent design looks designed but really isn’t. Optimal design is perfect design and hence cannot exist except in an idealized realm (sometimes called a “Platonic heaven”). Apparent and optimal design empty design of all practical significance.

  10. William J. Murray: How do you know pattern replication has been optimized? IOW, what is the significant difference between an optimized and non-optimized pattern?

    OK, we don’t know whether “pattern replication has been optimized”. We do know that the patterns replicate remarkably well. The key point is that both iterative design processes (testing of prototypes, either in mental or real simulation or in actual use followed by tweaking of the design in the light of real or simulated performance) and evolutionary processes (testing of all variants, with retention of those that perform best i.e. replicate most efficiently) move the “design” (or whatever you want to call it) towards an optimum. In biology, we know more or less that an optimum has been reached when the vast majority of variants are neutral or deleterious, i.e. when a population has been similar for many millions of years, natural selection serving to maintain optimum adaptation rather than improve on it.

  11. OK, we don’t know whether “pattern replication has been optimized”. We do know that the patterns replicate remarkably well.

    How do you know that? How do you quantify “remarkably”? How do you know the patterns don’t replicate horribly compared to some optimal replication? Compared to the every 20 minutes geometric replication of the bacterial pattern, I’d say most “higher” life forms have abysmal replication “optimization”.

    The key point…

    Well, I guess if we’re going to change “key points” from replication optimization to something else ….

    is that both iterative design processes (testing of prototypes, either in mental or real simulation or in actual use followed by tweaking of the design in the light of real or simulated performance) and evolutionary processes (testing of all variants, with retention of those that perform best i.e. replicate most efficiently)…

    My, you’re now sneaking in a whole host of normative terms here. Replicate most “efficiently” in what sense? According to what defining, quantitative test of “efficiency”? How do you know they replicate the most “efficiently”? “Testing” for what purpose? According to what design requirement? “Best” according to what quantitative value? Are you just looking to flood the conversation with normative terms now that you’ve apparently realized “optimized” isn’t going to get the job done?

    … move the “design” (or whatever you want to call it)

    But that’s a key part of the problem; you keep inserting stolen normative concepts into your description of what you are asserting is a non-normative (positive) process. The reason this is key, from my perspective, is because your use of those terms as if they translate from the qualitative to the quantitative is what is keeping you from seeing the bankruptcy of the Darwinian explanation, and the complete non-validity of NS as an explanation for anything significant.

    …. towards an optimum. In biology, we know more or less that an optimum has been reached when the vast majority of variants are neutral or deleterious, i.e. when a population has been similar for many millions of years, natural selection serving to maintain optimum adaptation rather than improve on it.

    Note how you did exactly what I said Darwinists do in such a debate; as soon as I challenge your “replication” bid, you duck and weave and put up some other NS metric. If the NS metric produces optimized population stasis, you have the same problem that you have with “reproduction” as your NS metric; while it might allow the production of organisms outside of the current stasis norm, it doesn’t explain such divergences, and the broad diversity of life would have come to exist in spite of the NS metric tendency to optimize species stasis.

    How about you explain the NS metric in quantitative terminology that doesn’t employ normative concepts. Then we’ll have something we can really work with.

  12. (3) How does that difference engine categorically explain the evolutionary generation of the diversity of life?

    It doesn’t.

    Well, at least Allan Miller can admit it.

  13. Joe G:
    Strange that you cannot demonstrate “evolutionary” processes actually designing stuff- not in biology anyway.

    I thought you said that ID is not anti-evolution? And it’s strange that you cannot demonstrate “the designer” actually designing stuff, in biology or anything else.

  14. WJM

    (3) How does that difference engine categorically explain the evolutionary generation of the diversity of life?

    AGM:

    It doesn’t.

    WJM:

    Well, at least Allan Miller can admit it.

    Yes, but I think you managed to snip a substantial part of the matter in that quote-mine. What explains the fact that there is diversity in life, in a ‘natural’, non-intentional scenario, is not the “difference engine” (interesting borrow from mechanical calculation!) – ie, it is not Natural Selection. Natural Selection is but one of a set of factors concentrating and diluting mutations in a population, and those, differentially generated and stochastically fixed, are the source of diversity, in much the same way that languages diverge from common roots. But of course the ‘categorical’ evidence has all but disappeared, just as has every word your ancestors ever uttered – it does not mean they therefore communicated telepathically.

    Evolution proceeds completely irrespective of whether there is any intrinsic difference in replicating capacity between varieties – that is, whether the ‘gradient’ between varieties is a slope or a horizontal line. One variety will come to dominate whatever you do (short of deliberately opposing this tendency). And since evolution has no memory, it cannot distinguish between the ‘old’ and the ‘new’ form. It will replace old with new at a baseline probability of p/N – where p = the variant’s current number of copies and N = population size. Natural selection buys extra ‘tickets’ in this baseline lottery by enhancing the chances of the fitter allele (and simultaneously reducing the chances of the less fit variant against which it competes). But since there is no ‘blueprint’, beyond the current set of genomes in the population, there is no static reference, and evolution is much more likely than stasis.
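
    The baseline lottery is easy to see in a toy Python model (numbers invented): give N varieties identical fitness and resample each generation, and one variety still ends up owning the whole population –

        import random

        N = 200
        pop = list(range(N))   # N distinct varieties, all with identical fitness
        gen = 0
        while len(set(pop)) > 1:
            # each slot in the next generation is filled by a randomly chosen parent
            pop = [random.choice(pop) for _ in range(N)]
            gen += 1
        print(f"one variety fixed, by drift alone, after {gen} generations")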

    WJM

    Compared to the every 20 minutes geometric replication of the bacterial pattern, I’d say most “higher” life forms have abysmal replication “optimization”.

    You are comparing chalk and cheese.

    Why do you bring the factor of time into play, with approval? I guess this is where your ‘normative’ language comes in. Organisms should replicate in 20 minutes because … well, because WJM thinks so. Bacteria only replicate once every 20 minutes in optimal conditions. Among those conditions is an unoccupied niche. If resources are limited (as they always are eventually) a 20-minute bacterium hits the exact same limitation as a 50-year elephant. One genome per genome. That’s the replacement rate, and that’s all you ever get in steady state, however long it takes to effect that replacement. Of course, if bacteria were competing directly with elephants, they would probably have the edge with their 20-minute rate. But they do not compete, so there is not an issue there.

    Now, at one time, the ancestors of elephants started to diverge from the ancestors of the modern bacterium. Eukaryotic cells internalised their energy generation, and ‘discovered’ sex. Among the many consequences of this was the relaxation of the constraint on generation time. Bacteria have to replicate quickly, else they get swamped. They are competing in a niche where fast replication carries a selective advantage. But eukaryotes trade some of that speed for occupation of a different niche – one in which colonial cells, comprising a ‘master’ genome and several hundred bacterial-sized mitochondria harnessed for energy generation, form a collective whole. This creates a new, collectivised way of making a living, less diffusion-bound. Subsequent to that, the constraints of sexual reproduction created a means by which a multicellular ‘collective’ of cells operates in the mutual interest of its germ line – again, pooling resources for mutual benefit. Without sex, this cannot happen. Again, a new niche is occupied – a way of making a living capable of scaling up to elephant-size (or whale-size in water).

    Each elephant produces billions upon billions of copies of its genome. Yet, ultimately, only a few survive – an average two per parent, one for each half-genome. For all your assumed bacterial fecundity, they too produce just two offspring of which only an average one survives in steady state. Take a finite population of bacteria, and a finite population of elephants, and come back in 100 years, and you will find that the apparent ‘fecundity’ of the bacterium has gained it not one genome copy.

    Complex organisms try harder and harder, squirting their genome to the four winds – and they get no ultimate increase for their trouble. They just adopt increasingly elaborate strategies, which gain representation in the population because they do better than their contemporary rivals, not because they do better than bacteria. Produce twice as many offspring as your rivals, and all your descendants will end up producing twice as many too, until everyone is producing … the same. Find another way of producing more still, and descendants will spread that capacity, until …

  15. How about you explain the NS metric in quantitative terminology that doesn’t employ normative concepts. Then we’ll have something we can really work with.

    I take it you are aware that there is a vast field of population genetics which endeavours to investigate evolution (not just “NS”) using extensive mathematical and computational analysis? It is rarely the subject of popular treatment, but ‘explaining NS in quantitative terminology’ is pretty central to this work. Or do you mean something else?

  16. William J. Murray: How do you know that? How do you quantify “remarkably”? How do you know the patterns don’t replicate horribly compared to some optimal replication? Compared to the every 20 minutes geometric replication of the bacterial pattern, I’d say most “higher” life forms have abysmal replication “optimization”.

    Why? After all, it’s not that hard to model some operational definition of “optimal”. We observe that if replication is too perfect, the organisms can’t adapt. If it’s too sloppy, the offspring can’t survive. We can observe speciation tracking environmental changes. So it seems sensible to conceive of an “optimal zone” of replication fidelity, variable enough to track normal environmental change rates. This doesn’t strike me as entirely arbitrary.
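
    One crude way to model such a zone – a Python sketch with arbitrary parameters, making no claim about real mutation rates – is a population copying itself against a 20-bit “environment” that drifts one bit every ten generations. Zero copying error cannot track the moving target, heavy copying error cannot hold on to what it has, and a small error rate tracks it well:

        import random

        L, N, GENS = 20, 50, 300   # arbitrary: bits, population size, generations

        def mean_best_match(error_rate):
            target = [0] * L
            pop = [[0] * L for _ in range(N)]
            total = 0
            for g in range(GENS):
                if g % 10 == 0:                    # the environment drifts
                    target[random.randrange(L)] ^= 1
                scores = [sum(a == b for a, b in zip(ind, target)) for ind in pop]
                best = pop[max(range(N), key=scores.__getitem__)]
                total += max(scores)
                # next generation: copies of the best matcher, with copying errors
                pop = [[bit ^ (random.random() < error_rate) for bit in best]
                       for _ in range(N)]
            return total / GENS

        for rate in (0.0, 0.02, 0.5):
            print(f"copying error {rate}: mean best match {mean_best_match(rate):.1f} of {L}")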

    Well, I guess if we’re going to change “key points” from replication optimization to something else ….

    Well, if we’re going to ignore the point being made in the hope of winning semantical games…

    My, you’re now sneaking in a whole host of normative terms here. Replicate most “efficiently” in what sense?

    Maximize the survival rate of the offspring.

    According to what defining, quantitative test of “efficiency”?

    Stable species population level from one generation to the next?

    How do you know they replicate the most “efficiently”?

    Wrong question, and not addressing what you are being told. Efficiently means, the species population at least remains stable. Inefficient means, the species goes extinct.

    “Testing” for what purpose? According to what design requirement?

    Species survives across generations, it passes the design test. Species goes extinct, it fails. Survival is the “design requirement.”

    “Best” according to what quantitative value? Are you just looking to flood the conversation with normative terms now that you’ve apparently realized “optimized” isn’t going to get the job done?

    You have made absolutely no effort to understand what you are being told. None whatsoever. You are manufacturing objections, largely irrelevant, useless, and even hostile, instead of discussing in good faith.

    And when asked for even a single detail about your Designer alternative, you suddenly pretend you’ve never mentioned any Designer, so you’re not going to answer any questions. How very convenient. How very dishonest.

  17. Allan Miller: I take it you are aware that there is a vast field of population genetics which endeavours to investigate evolution (not just “NS”) using extensive mathematical and computational analysis? It is rarely the subject of popular treatment, but ‘explaining NS in quantitative terminology’ is pretty central to this work. Or do you mean something else?

    LoL! Unfortunately no one has taken population genetics and applied it to populations in the wild. No one has ever observed a mutation reach fixation in wild populations.

  18. The problem with all of these definitions of the NS metric is that none of them explain the existence of the CSI features in question. They allow for them to come into existence, but they do not elevate what is possible to what is probable to what is plausible.

    This is the same thing with Sewell’s argument. Because 2LoT and known laws and energy from the sun and material interaction patterns do not prevent atoms from arranging themselves into encyclopedias and battleships, doesn’t make such an occurrence plausible without some other additional explanatory agency.

    “Nothing stops it from happening” is not an explanation, and “the possible combinations are skewed towards X (greater progeny success)” adds nothing whatsoever to the explanation of Y (increased CSI). Unless NS as a metric skews results towards greater CSI, it adds nothing to the explanation of the existence of CSI – even if we ignore the origin of self-replicating, self-monitoring, self-correcting machines. There is no “rule” that greater CSI = greater progeny success; to claim it is based on the existing results of evolution (life forms present today) is an invalid tautology.

    That a self-replicating (with variation) system **can** generate increased CSI is of no more value as an argument than stating that a tornado **can** build a house out of rubble. The argument is not about bare possibility, but about scientific plausibility. Unless one can establish that greater CSI = greater progeny success (as a general rule), then these descriptions of the NS metric are worthless as explanations of increased CSI.

  19. They allow for them to come into existence, but they do not elevate what is possible to what is probable to what is plausible.

    I meant, “They allow for them to come into existence, but they do not elevate what is possible to what is plausible”.

  20. William J. Murray:
    There are no goals in darwinian evolution – there are only mechanistic outcomes that are the function of what precedes them. Design is a strictly normative concept. Natural forces do not design anything, they simply produce whatever they happen to produce as a function of prior states.

    Elizabeth: We do realise this, William. The question is: can you tell, from looking at the result, whether it was designed with a goal in mind, or is the outcome of a continuous process of optimisation to the current environment?

    I’d actually say “yes”. Human artefacts show far more evidence of distal-goal-oriented processes than biological organisms do, IMO.

    I’m catching up on the discussion after traveling for a couple of days. When I read this, I was looking forward to William’s response, but the original question seems to have been missed in the noise around the side topic of optimization.

    William, please do respond to Elizabeth’s core question: “Can you tell, from looking at the result, whether it was designed with a goal in mind or is the outcome of a continuous process of optimization to the current environment?”

  21. William J. Murray:
    The problem with all of these definitions of the NS metric is that none of them explain the existence of the CSI features in question. They allow for them to come into existence, but they do not elevate what is possible to what is [probable to what is] plausible.

    What “NS metric”? NS is a process, not a “metric”. And my exercise very simply demonstrates how NS converts “what is possible” (but extremely unlikely) into what is not only “plausible” but near-certain.
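
    For readers who haven’t seen such an exercise, here is a minimal cumulative-selection toy in that spirit (not the exercise referred to above; the mutation rate and brood size are arbitrary). Pure chance would need on the order of 27^28 draws to hit the 28-character target; random variation plus biased retention typically gets there in on the order of a hundred generations:

        import random
        import string

        TARGET = "METHINKS IT IS LIKE A WEASEL"
        CHARS = string.ascii_uppercase + " "

        def mutate(s, rate=0.04):
            """Copy a string, randomly miscopying each character at a fixed rate."""
            return "".join(random.choice(CHARS) if random.random() < rate else c
                           for c in s)

        parent = "".join(random.choice(CHARS) for _ in TARGET)   # pure-chance start
        gen = 0
        while parent != TARGET:
            offspring = [mutate(parent) for _ in range(100)]     # variation
            parent = max(offspring,                              # biased retention
                         key=lambda s: sum(a == b for a, b in zip(s, TARGET)))
            gen += 1
        print(f"reached the target in {gen} generations")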

    This is the same thing with Sewell’s argument. Because 2LoT and known laws and energy from the sun and material interaction patterns do not prevent atoms from arranging themselves into encyclopedias and battleships, doesn’t make such an occurrence plausible without some other additional explanatory agency.

    You are still fighting the wrong war, William. Nobody is saying that evolution/intelligence could, possibly, result in a decrease of entropy, even if it is unlikely. We are saying they don’t do it. That there isn’t a weirdness to explain, not that “remote possibility” explains the weirdness.

    That’s why I was talking about the builder building the house that the tornado couldn’t. Sure, she can build a house, and the tornado can’t. But that’s not because her intelligence allows her to best the probabilities stacked against her in trying to decrease entropy. She is increasing entropy even as she builds. Hence the deodorant.

    “Nothing stops it from happening” is not an explanation, and “the possible combinations are skewed towards X (greater progeny success)” adds nothing whatsoever to the explanation of Y (increased CSI). Unless NS as a metric skews results towards greater CSI, it adds nothing to the explanation of the existence of CSI – even if we ignore the origin of self-replicating, self-monitoring, self-correcting machines. There is no “rule” that greater CSI = greater progeny success; to claim it is based on the existing results of evolution (life forms present today) is an invalid tautology.

    Still puzzled by your phrase “NS as a metric”. It isn’t a metric, it’s a process. And, depending on your mathematical definition of CSI, it may well skew things towards CSI, not because there’s a “rule that greater CSI = greater progeny success” but because if there are vastly more theoretically possible DNA combos that don’t result in greater progeny than do, selection for greater progeny will be selection for that tiny subset of combos that do just this, and if that is our “specification” then selection will indeed be “skewed towards CSI”.

    Of course if the specification is for a protein that isn’t actually helpful for the prototype, then it won’t – but are we really trying to explain the origin of toxic proteins here?

    That a self-replicating (with variation) system **can** generate increased CSI is of no more value as an argument than stating that a tornado **can** build a house out of rubble. The argument is not about bare possibility, but about scientific plausibility. Unless one can establish that greater CSI = greater progeny success (as a general rule), then these descriptions of the NS metric are worthless as explanations of increased CSI.

    I actually think that you are missing an important mathematical and scientific point here, William.

    Can you explain exactly how you are defining CSI here (and what you mean by “the NS metric”) – because I think you are confused.

  22. That’s why I was talking about the builder building the house that the tornado couldn’t. Sure, she can build a house, and the tornado can’t. But that’s not because her intelligence allows her to best the probabilities stacked against her in trying to decrease entropy. She is increasing entropy even as she builds. Hence the deodorant.

    I’ve already said that the application of ID (to my knowledge or understanding) doesn’t violate 2LoT; but also that without ID, the building of the house is inexplicable.

    Here’s a question:

    Which is more likely:

    1. After 4 billion years of evolution, the Earth is populated by nothing more than variant versions of single-celled creatures, nothing too distant from the UCA,

    or

    2. After 4 billion years of evolution, the Earth is populated by the organisms we find on it now?

  23. Actually, most of the hard inventions of evolution were made by microbes. We are not that distant from microbes at the genome level.

    The more reasonable question is which is more likely: that the diversity of life is attributable to an observable process that has been studied for 150 years, or attributable to a fictional character.

  24. Here’s a question:

    Which is more likely:

    1. After 4 billion years of evolution, the Earth is populated by nothing more than variant versions of single-celled creatures, nothing too distant from the UCA,
    or
    2. After 4 billion years of evolution, the Earth is populated by the organisms we find on it now?

    Hehe, this one is so easy even I can do it:

    Probability of 1: 0
    Probability of 2: 1

    More seriously: how on Earth would you compute these probabilities prior to knowing the outcome?

    fG

  25. More seriously: how on Earth would you compute these probabilities prior to knowing the outcome?

    Let me see if anyone else wants to throw in a response first, then I’ll get back to you.

  26. William J. Murray: Which is more likely:

    1. After 4 billion years of evolution, the Earth is populated by nothing more than variant versions of single-celled creatures, nothing too distant from the UCA,

    or

    2. After 4 billion years of evolution, the Earth is populated by the organisms we find on it now?

    Speaking from this particular point in history, after approximately 4 billion years of evolution, the probability of life existing as we find it is exactly 1. If you’re talking about re-running the tape of life and asking what would happen, the answer is that it’s impossible to say; there are far too many contingencies along the way.

  27. Norm Olsen,

    So you’re saying that if we re-ran the tape, there’s no way to predict which one is more likely to exist after 4 billion years of evolution: a world like ours (different major phyla, families, multi-cellular creatures with organs and features that dwell in the ocean, on land, fly, burrow; plant-like, insect-like, mammal-like, etc., with variant major characteristics), or a microbial world populated by single-celled organisms not very distant (morphologically and feature-wise) from the UCA?

  28. Joe G: LoL! Unfortunately no one has taken population genetics and applied it to populations in the wild. No one has ever observed a mutation reach fixation in wild populations.

    Every living thing that anyone has ever observed (or not) is an example of mutations reaching fixation.

    Have you ever observed “the designer” doing anything at all?

  29. Nearly every living thing is also a variant, proof that alleles exist and that there are many neutral and near-neutral mutations in every population. If not, every living thing would be an identical twin.

  30. William J. Murray:
    Norm Olsen,

    So you’re saying that if we re-ran the tape, there’s no way to predict which one is more likely to exist after 4 billion years of evolution: a world like ours (different major phyla, families, multi-cellular creatures with organs and features that dwell in the ocean, on land, fly, burrow; plant-like, insect-like, mammal-like, etc., with variant major characteristics), or a microbial world populated by single-celled organisms not very distant (morphologically and feature-wise) from the UCA?

    And your point is?

  31. And your point is?

    Do you agree that it cannot be said which is more likely to exist after 4 billion years of evolution, if we ran the tape over again here on Earth?

  32. How is that different from asking whether the earth and moon would exist in their current configuration if we ran the tape of the solar system formation over again?

  33. William J. Murray:
    Norm Olsen,

    So you’re saying that if we re-ran the tape, there’s no way to predict which one is more likely to exist after 4 billion years of evolution: a world like ours (different major phyla, families, multi-cellular creatures with organs and features that dwell in the ocean, on land, fly, burrow; plant-like, insect-like, mammal-like, etc., with variant major characteristics), or a microbial world populated by single-celled organisms not very distant (morphologically and feature-wise) from the UCA?

    My hunch would be to expect more diversity, but how much and what forms it would take is impossible to say. Keep in mind that for the vast majority of its history on Earth, life consisted of single celled organisms. It’s not too difficult to imagine that such a state might have continued even to today.

  34. I’m wondering why this is even considered a serious question. It seems to me that the contingencies that led to the ascendancy of mammals and then of apes are rather extraordinary.

    I don’t see how things like ice ages and asteroid impacts qualify as privileged, but they are probably unusual in their timing. But then if the Designer is running something like The Truman Show, perhaps the sequence is going for a prize in some celestial art show.

  35. petrushka:
    How is that different from asking whether the earth and moon would exist in their current configuration if we ran the tape of the solar system formation over again?

    Well, obviously, the earth and moon were also designed by The Designer *wink* so that is the same question WJM is asking.

    How could Earth’s biota have ended up so “improbably” diversified and complicated, if not for Design and Intelligent Intervention ? It couldn’t have happened by chance, and the fact that if we could rewind the tape of abiogenesis and evolution, we couldn’t be sure of ending up with any beautiful lifeforms (nor perhaps anything multicellular, at all) surely proves that it had to have been Design Intervention, at least at some points in the process.

    How could Earth’s planet and moon configuration have ended up so improbable, if not for Design and Intelligent Intervention ? It couldn’t have happened by chance. How special it is, to be at exactly the right distance from the sun for a broad habitable zone, with that sizable moon perfectly placed to raise tides that could give impetus for organisms evolving to first merely survive then to take advantage of dry shore environments. The fact that if we could rewind the tape of solar system formation, we couldn’t be sure of assembling any Earth-type planet at all, much less our favored planet, surely proves that it had to have been Design Intervention.

    Even though, of course, we are mocked for speculation about the identity of the Designer and of what kind of Design Fingers it used for its meddling in the process …

  36. Well, so far 2 answers and one take-back with a “hunch” attached. Anyone else want to try for a serious answer?

  37. Those are serious answers. They’re just not the answers you are looking for.

  38. They may be serious answers, but they don’t address the questions I asked. They address what the posters think my beliefs are about related matters.

    So here’s the question again:

    Which is more likely (and why is it more likely):

    1. After 4 billion years of evolution, the Earth is populated by nothing more than variant versions of single-celled creatures, nothing too distant from the UCA,
    or
    2. After 4 billion years of evolution, the Earth is populated by organisms similar to those we find it currently populated with?

    If you don’t consider it a serious question, or one worth answering, tell me why.

  39. Typical. Elizabeth asked Murray:

    Can you explain exactly how you are defining CSI here (and what you mean by “the NS metric”) – because I think you are confused.

    And he launches another distracting side issue.

    Why don’t you answer those questions, WJM? If you did, you might advance the discussion.

  40. William J. Murray: I’ve already said that the application of ID (to my knowledge or understanding) doesn’t violate 2LoT; but also that without ID, the building of the house is inexplicable.

    Well, that’s fine then. So why did you say:

    William J. Murray: This is the same thing with Sewell’s argument. Because 2LoT and known laws and energy from the sun and material interaction patterns do not prevent atoms from arranging themselves into encyclopedias and battleships, doesn’t make such an occurrence plausible without some other additional explanatory agency.

    ? If you agree that Sewell is wrong, why keep bringing his argument up?

    And I now have three questions waiting for answers from you, William:

    1. Can you tell, from looking at the result, whether it was designed with a goal in mind or is the outcome of a continuous process of optimization to the current environment?

    2. What do you mean by an “NS metric”?

    3. How, precisely, are you defining CSI?

    Here’s a question:

    Which is more likely:

    1. After 4 billion years of evolution, the Earth is populated by nothing more than variant versions of single-celled creatures, nothing too distant from the UCA,

    or

    2. After 4 billion years of evolution, the Earth is populated by the organisms we find on it now?

    Depends on what you mean by “re-run” – which parameters would you allow to change on the re-run? If they were pretty similar to those that pertained in Run 1, I’d expect similar variety on Run 2. I guess what you are really asking is how much did certain crucial developments (DNA? multicellularity? hox genes?) depend on events likely to be very rare.

    Dunno. I don’t think we can tell, post hoc. My hunch would be: well, it happened, so circumstances were probably such that it, or something similar, would have happened – just as, while I realise that the reason I am married to the man I am depended on a fluke, I probably would have married somebody even if that fluke hadn’t happened (glad it did, though). Just as, while it was a fluke that any of us were conceived (why just that sperm, and just that egg?), the chances that somebody roughly similar would have been born if we hadn’t is still quite high (apart from my son, who was literally my last egg – that was a fluke if anything was).

    Now can you answer my questions? I do try to respond to your posts in full, William, and it’s a bit annoying only to get a part-response in return!

  41. Dunno. I don’t think we can tell, post hoc.

    Bingo. If the NS metric (its sorting process as an algorithm) doesn’t categorically skew evolutionary results towards the kind of biological world we see, then NS doesn’t add to any explanation of it. Because it can possibly generate a biological world like ours is of no more importance than the claim that a tornado can possibly build a house out of debris.

    For all you know, NS skews evolutionary product away from the kind of biological world we see, not towards it.

  42. William J Murray: “Bingo. If the NS metric (its sorting process as an algorithm) doesn’t categorically skew evolutionary results towards the kind of biological world we see, then NS doesn’t add to any explanation of it. ”

    If you roll a ball down a hill twice, and it doesn’t take the same path both times, that is not evidence that gravity did not act on the ball.

  43. William J. Murray: Because it can possibly generate a biological world like ours is of no more importance than the claim that a tornado can possibly build a house out of debris.
    For all you know, NS skews evolutionary product away from the kind of biological world we see, not towards it.

    Is this why you choose to remain totally ignorant of physics and chemistry; just so you can imagine all sorts of things you don’t have to answer for?

    Nobody is under any obligation to bend the laws of physics and chemistry to answer phony questions based on ID/creationist misconceptions and ignorance.

    Normal people are able to observe even simple things like solids and liquids and take the hint that it is not all “spontaneous molecular chaos” out there.

    Seriously, William; in the time you take to agonize and mud-wrestle over trivia, thousands of other people get advanced degrees in science and learn the answers to questions you can’t even imagine.

  44. William J. Murray: If the NS metric (its sorting process as an algorithm) doesn’t categorically skew evolutionary results towards the kind of biological world we see,

    The “kind of biological world”? Could you please explain in what way your two scenarios are *categorically* different? I.e. could you explain what you mean by *nothing too distant from the UCA*, and in what way / ways organisms are expected to be *similar to what we find [the earth] currently populated with*?

  45. William J. Murray: Elizabeth said: “Nobody said pattern replication has been optimized.”

    Elizabeth said: “The question is: can you tell, from looking at the result, whether it was designed with a goal in mind, or is the outcome of a continuous process of optimisation to the current environment?”

    WJM asked: “What does optimised mean here? IOW, optimised to what aspect of the environment, and for what purpose?”

    Elizabeth responded: “Optimised such that the pattern continues to be replicated.”

    Elizabeth is the one that claimed the natural (unintelligent) process was optimizing self-replication (unless I misunderstood the above).

    You indeed misunderstood what Elizabeth was saying. She did not claim that the process in question was optimizing self-replication, but that *optimization to the current environment* meant *continues to be replicated* as opposed to *ceases to be replicated* in the current environment. IOW, the significant difference between an optimized and non-optimized pattern here is: extant versus extinct.

  46. I think Elizabeth nailed William’s answers perfectly when she wrote:

    It was raining. My son refused (as usual) to wear his raincoat. Instead, he carried a cup, which he held out in front of him. He argued that he was going to catch the rain drops in the cup so that by the time he got to the place the raindrops had been, they’d be in the cup and he’d be dry. We went out, with cup, sans rain coat. My son got wet. He insisted he remained dry.

    Yes, it’s the Black Knight! By now, William is drowning under all the responses. People are telling him repeatedly that he’s all wet, showing him the water, detailing the drips, and William insists he remains dry!

    The late Molly Ivins wrote a column once about thousands of sheep that suddenly dropped dead next door to an air base. The ranchers demanded an explanation from the military. So the military showed up for a meeting, held in a field with thousands of rotting sheep. Masks were required just to breathe. And the military said “sheep? What sheep? We don’t see any sheep.” And THAT was the resolution of the affair. Ivins said a true military denier can look you straight in the eye, tell you you’re not there, and sincerely believe it! William has missed his true calling.

  47. William J. Murray: Bingo. If the NS metric (its sorting process as an algorithm) doesn’t categorically skew evolutionary results towards the kind of biological world we see, then NS doesn’t add to any explanation of it. Because it can possibly generate a biological world like ours is of no more importance than the claim that a tornado can possibly build a house out of debris.

    For all you know, NS skews evolutionary product away from the kind of biological world we see, not towards it.

    This response makes no sense to me, William. I’m hoping enlightenment may follow when you answer my questions. Will you?

    But my short response is: yes, NS will “tend to skew” the results “towards the kind of biological world we see”. NS is a “skewing” process. In fact, rather than talk about “natural selection”, I’d prefer to use the term “biased sampling” in each generation of the variants of the previous generation. The bias (“skew” if you prefer) is imposed by the environment and is in favour of what promotes successful reproduction in that environment. If that environment includes niches where multi-cellular organisms reproduce successfully, then multi-cellular organisms will probably evolve, if they are within the biochemical repertoire of the ancestral population.

  48. Is it worth pointing out that niches are moving targets? Environments can change slowly, quickly, and catastrophically, over continents and microscopically. The passive nature of organisms occupying a niche is well illustrated by plant species.

    *cue Art Hunt*

  49. Yup. And the evolving population itself is part of the [changing] environment.
