Creating CSI with NS

Imagine a coin-tossing game.  On each turn, players toss a fair coin 500 times.  As they do so, they record all runs of heads, so that if they toss H T T H H H T H T T H H H H T T T, they will record: 1, 3, 1, 4, representing the number of heads in each run.

At the end of each round, each player computes the product of their runs-of-heads.  The person with the highest product wins.
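The scoring rule described above can be sketched in a few lines. This is a Python illustration of the game, not the MATLAB script mentioned later in the post; the convention that a series with no heads scores 0 is my own assumption.

```python
import random

def score_series(tosses):
    """Product of the lengths of all runs of heads.

    `tosses` is a sequence of 'H'/'T' characters. A series containing
    no heads at all is scored 0 here (an assumed convention, so that
    an all-tails series cannot win).
    """
    product = 1
    run = 0
    seen_heads = False
    for t in tosses:
        if t == 'H':
            run += 1
            seen_heads = True
        else:
            if run > 0:
                product *= run
            run = 0
    if run > 0:
        product *= run
    return product if seen_heads else 0

# The example from the text: H T T H H H T H T T H H H H T T T
# has runs of 1, 3, 1, 4 heads, so it scores 1*3*1*4 = 12.
example = "HTTHHHTHTTHHHHTTT"
print(score_series(example))  # 12

# One turn of the game: toss a fair coin 500 times and score the series.
series = "".join(random.choice("HT") for _ in range(500))
print(score_series(series))
```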

In addition, there is a House jackpot.  Any person whose product exceeds 10^60 wins the House jackpot.

There are 2^500 possible series of coin-tosses.  However, I’m not sure exactly how many of that vast number of possible series would give a product exceeding 10^60. But if some bright mathematician can work it out for me, we can work out whether a series whose product exceeds 10^60 has CSI.  My ballpark estimate says it has.

That means, clearly, that if we randomly generate many series of 500 coin-tosses, it is exceedingly unlikely, in the history of the universe, that we will get a product that exceeds 10^60.

However, starting with a randomly generated population of, say 100 series, I propose to subject them to random point mutations and natural selection, whereby I will cull the 50 series with the lowest products, and produce “offspring”, with random point mutations from each of the survivors, and repeat this over many generations.
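A minimal sketch of that procedure, in Python rather than the MATLAB the post uses. The single-bit-flip mutation, the one-offspring-per-survivor scheme, and the generation count are my assumptions from the description; many more generations would be needed to approach the jackpot.

```python
import random

POP_SIZE = 100    # randomly generated starting population of 100 series
GENOME_LEN = 500  # each series is 500 coin-tosses (1 = heads, 0 = tails)

def fitness(genome):
    """Product of the lengths of all runs of heads (1s) in the genome."""
    product, run = 1, 0
    for bit in genome:
        if bit:
            run += 1
        elif run:
            product *= run
            run = 0
    if run:
        product *= run
    return product

def mutate(genome):
    """Offspring: a copy of the parent with one random point mutation."""
    child = list(genome)
    i = random.randrange(GENOME_LEN)
    child[i] ^= 1  # flip one coin-toss
    return child

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):  # repeat over many generations
    # Cull the 50 series with the lowest products...
    population.sort(key=fitness, reverse=True)
    population = population[:POP_SIZE // 2]
    # ...and each survivor produces one offspring with a point mutation.
    population += [mutate(g) for g in population]

print(max(fitness(g) for g in population))
```

Because the survivors are carried over unchanged, the best product in the population can never decrease from one generation to the next.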

I’ve already reliably got to products exceeding 10^58, but it’s possible that I may have got stuck in a local maximum.

However, before I go further: would an ID proponent like to tell me whether, if I succeed in hitting the jackpot, I have satisfactorily refuted Dembski’s case? And would a mathematician like to check the jackpot?

I’ve done it in MATLAB, and will post the script below.  Sorry I don’t speak anything more geek-friendly than MATLAB (well, a little Java, but MATLAB is way easier for this).

529 thoughts on “Creating CSI with NS”

  1. Elizabeth, is this the sort of thing you offered to do at UD, way back, but couldn’t get started because no-one there could agree definitions of terms, a set of starting conditions, or how to assess the results?

  2. damitall:
    Elizabeth, is this the sort of thing you offered to do at UD, way back, but couldn’t get started because no-one there could agree definitions of terms, a set of starting conditions, or how to assess the results?

    It was what I originally offered to do – demonstrate that evolutionary processes could generate Information, naively assuming that ID proponents used Dembski’s CSI metric. However, after a bunch of wrangling, it turned out that Upright BiPed didn’t use that one and had his own definition, to do with semiotics. For a while it sounded interesting anyway (but I was never confident I could do it, as I always am with this kind of thing, hence my original claim), but as you say, we got totally bogged down in the operationalisation. So I withdrew my claim with respect to UBP’s definition.

    It could still be interesting, though. I found that someone had done something similar to what I’d envisaged, but I’ve forgotten the name and can’t seem to re-google it – essentially setting up a simulation in which a population of self-replicators emerged from a population of non-self-replicators.

  3. Joe G: “As I have told you, and supported, ID is not anti-evolution. When a GA solves the problem it was designed to solve, that is ID, not the blind watchmaker.”

    Evolution, as described by the Evo side, does not operate as a “blind watchmaker”.

    Feedback from the environment determines which “objects” or “life-forms” are allowed to reproduce.

    It is not a “blind” process at all, as it depends on feedback.

    Evolution doesn’t “ask”, “What should I do next?”; rather it “asks”, “Was what I did OK?”.

  4. Joe G: 1- You need to explain how necessity and chance produced the original biological function or reproduction- you have not done so- you have just granted the very thing that needs explaining.

    I entirely agree, Joe. I have not even attempted to do this. What I have done is to show that Darwinian processes can create CSI. I have not attempted to show that natural processes can generate Darwinian-capable self-replicators. The reason I have confined myself to this is that Dembski specifically claims that Darwinian mechanisms are inadequate to explain Specified Complexity, not that natural mechanisms are inadequate to explain Darwinian mechanisms.

    So I’m starting with a Darwinian-capable population, and demonstrating that it can generate a degree of Specified Complexity, as defined by Dembski in his paper: “Specification: The Pattern That Signifies Intelligence”, that falls in what he regards as the “rejection region” for non-design.

    Complaining that I am not accounting for Darwinian capability in the first place is moving the goal posts. They may be perfectly good goal-posts, but it is against Dembski’s goal I am attempting to score, not yours.

    2- You may generate some specification but you have not generated CSI especially when you start with reproducing entities-

    Well, obviously I can’t demonstrate that Darwinian mechanisms can produce CSI without using Darwinian mechanisms, can I? So obviously I start with simple self-replicators that self-replicate with variance. Without that, I don’t have a Darwinian mechanism to demonstrate.

    3- You only think we focus on biology

    You do.

    4-You are equivocating because you are assuming that evolutionary processes are not design processes.

    I’m demonstrating that they need not be intentional processes.

    As I have told you, and supported, ID is not anti-evolution. When a GA solves the problem it was designed to solve, that is ID, not the blind watchmaker.

  5. Toronto: Evolution, as described by the Evo side, does not operate as a “blind watchmaker”.

    Feedback from the environment determines which “objects” or “life-forms” are allowed to reproduce.

    It is not a “blind” process at all, as it depends on feedback.

    Evolution doesn’t “ask”, “What should I do next?”; rather it “asks”, “Was what I did OK?”.

    Umm Dawkins, from the evo side, is the one who said it operates as a blind watchmaker.

  6. Elizabeth: I entirely agree, Joe. I have not even attempted to do this. What I have done is to show that Darwinian processes can create CSI. I have not attempted to show that natural processes can generate Darwinian-capable self-replicators. The reason I have done so is that Dembski specifically claims that Darwinian mechanisms are inadequate to explain Specified Complexity, not that natural mechanisms are inadequate to explain Darwinian mechanisms.

    So I’m starting with a Darwinian-capable population, and demonstrating that it can generate a degree of Specified Complexity, as defined by Dembski in his paper: “Specification: The Pattern That Signifies Intelligence”, that falls in what he regards as the “rejection region” for non-design.

    Complaining that I am not accounting for Darwinian capability in the first place is moving the goal posts. They may be perfectly good goal-posts, but it is against Dembski’s goal I am attempting to score, not yours.

    Well, obviously I can’t demonstrate that Darwinian mechanisms can produce CSI without using Darwinian mechanisms, can I? So obviously I start with simple self-replicators that self-replicate with variance. Without that, I don’t have a Darwinian mechanism to demonstrate.

    You do.

    4-You are equivocating because you are assuming that evolutionary processes are not design processes.

    I’m demonstrating that they need not be intentional processes.

    As I have told you, and supported, ID is not anti-evolution. When a GA solves the problem it was designed to solve, that is ID, not the blind watchmaker.

    Elizabeth- If Darwinian processes cannot explain reproduction then they cannot explain the ORIGIN of CSI in living organisms.

    I provided three paragraphs from Dembski that explained all of that. Why do you ignore it?

  7. It is not a “blind” process at all as it depends feedback.

    It is blind in the sense that it does not see or foresee. It feels its way.

    In the world of metaphors, the designer would know the outcome of a change prior to feedback. It is alleged that human designers can do this.

    I would argue that the cases where engineers can make things that just work, with no cut and try, are rare and trivial. Inventions are always the result of iterative processes involving trial and feedback.

    I think that design advocates fail to understand design as well as failing to understand biology.

  8. It’s a metaphor, Joe, and not a terribly good one IMO. I don’t actually think that Dawkins is the best explainer of evolutionary theory. He’s essentially a pop-science writer.

    Evolutionary processes have no long-distance goal, but you could consider that populations have an intrinsic “goal” which is simply to persist, and evolutionary processes are effectively reactive “steps” that populations “take” to achieve this “goal”, just as when you are riding a bike you take reactive action to stay upright. In that sense, evolutionary processes are themselves “intelligent” in a vegetable sort of manner.

  9. Joe G: “Umm Dawkins, from the evo side, is the one who said it operates as a blind watchmaker.”

    There’s a better analogy of evolution and that’s in American Idol.

    People sing, then people vote, and then someone gets booted off the show.

    That’s feedback from the environment.

  10. Elizabeth:
    It’s a metaphor, Joe, and not a terribly good one IMO. I don’t actually think that Dawkins is the best explainer of evolutionary theory. He’s essentially a pop-science writer.

    Evolutionary processes have no long-distance goal, but you could consider that populations have an intrinsic “goal” which is simply to persist, and evolutionary processes are effectively reactive “steps” that populations “take” to achieve this “goal”, just as when you are riding a bike you take reactive action to stay upright. In that sense, evolutionary processes are themselves “intelligent” in a vegetable sort of manner.

    According to evolutionary biologists natural selection is blind and mindless- as you said there isn’t any planning going on- the same can be said of drift.

    According to evolutionary biologists all mutations are undirected, basically they just happen. Of course some occur as the result of mutagens but mutations would occur without them- all undirected-> mistakes, copying errors, accidents- again not planned, not part of a plan.

    Again that is according to evolutionary biologists.

    OTOH ID claims some or even most mutations are directed, ie part of a plan-> part of the design.

  11. Toronto: There’s a better analogy of evolution and that’s in American Idol.

    People sing, then people vote, and then someone gets booted off the show.

    That’s feedback from the environment.

    In the real world whatever survives to reproduce, survives to reproduce. And organisms can fight back against their environment by making their own.

  12. JoeG – perhaps you can tell us, when considering whatever it was that in the next instant became a replicating molecule or entity (and thus susceptible to environmental conditions, hence NS) just how much CSI it possessed, and how you measured it.

    It seems to me it is unlikely to have possessed any, since it seems inescapable that it was a pretty simple molecule, which began to evolve (and we all “know” that ID is not ANTI-evolution, right?)
    Where, and when, did CSI first appear, Joe? And how much of it was there at its first appearance? What is the minimum size of molecule capable of possessing CSI?

  13. damitall:
    JoeG – perhaps you can tell us, when considering whatever it was that in the next instant became a replicating molecule or entity (and thus susceptible to environmental conditions, hence NS) just how much CSI it possessed, and how you measured it.

    It seems to me it is unlikely to have possessed any, since it seems inescapable that it was a pretty simple molecule, which began to evolve (and we all “know” that ID is not ANTI-evolution, right?)
    Where, and when, did CSI first appear, Joe? And how much of it was there at its first appearance? What is the minimum size of molecule capable of possessing CSI?

    Hey, look if you don’t like the design inference just step up and deliver the positive evidence for your position.

    If you are asking me these questions it means you have already given up hope of ever supporting the claims of your position.

  14. Joe G: Elizabeth- If Darwinian processes cannot explain reproduction then they cannot explain the ORIGIN of CSI in living organisms.

    Obviously Darwinian processes cannot explain self-replication because self-replication is a prerequisite of Darwinian processes. However, once you have self-replication with variance you have the prerequisites of Darwinian processes, and you can, as I have shown, get new, original, never before seen, CSI as a result, to the extent that chi>1.

    I provided three paragraphs from Dembski that explained all of that. Why do you ignore it?

    I didn’t ignore it, I commented on it. Thanks for typing it out. Here is a more detailed commentary:

    Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be crashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems. Darwinist Richard Dawkins cashes out biological specification in terms of the reproduction of genes.

    As I already said, above, I am c[r]ashing out the specification of my virtual organisms according to a specific function (product of lengths of runs-of-heads), which, according to my fitness function, determines the viability of the organism within its evolving population. The genome serves the function of providing a vector of values (sizes of runs-of-heads) that are then multiplied together to give a product that helps the phenotype survive, if it is good enough.

    Thus, in The Blind Watchmaker Dawkins writes, “Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is…the ability to propagate genes in reproduction.”

    Yes indeed. Others specify different properties, as Dembski says, and he allows that the specification can be “cashed out in any number of ways”. However, clearly, self-replication itself cannot be accounted for by Darwinian processes, because self-replication is a prerequisite for Darwinian processes.

    The central problem of biology is therefore not simply the origin of information but the origin of complex specified information. Paul Davies emphasized this point in his recent book The Fifth Miracle where he summarizes the current state of origin-of-life research: “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity.” The problem of specified complexity has dogged origin-of-life research now for decades. Leslie Orgel recognized the problem in the early 1970s: “Living organisms are distinguished by their specified complexity. Crystals such as granite fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.”

    Well, if we use Dembski’s definition of “Specified Complexity”, then I don’t think Orgel’s claim is correct. But I don’t know what Orgel’s definition was.

    Where, then, does complex specified information or CSI come from, and where is it incapable of coming from? According to Manfred Eigen, CSI comes from algorithms and natural laws. As he puts it, “Our task is to find an algorithm, a natural law that leads to the origin of [complex specified] information.” The only question for Eigen is which algorithms and natural laws explain the origin of CSI.

    Right. And I suggest that many processes that involve feedback result in specified complexity, as defined by Dembski, including crystallisation. They also tend to involve some kind of self-replication with variance, which is a property of many systems, not just biological systems.

    The logically prior question of whether algorithms and natural laws are even in principle capable of explaining the origin of CSI is one he ignores. And yet it is this very question that undermines the entire project of naturalistic origins-of-life research. Algorithms and natural laws are in principle incapable of explaining the origin of CSI. To be sure, algorithms and natural laws can explain the flow of CSI. Indeed, algorithms and natural laws are ideally suited for transmitting already existing CSI. As we shall see next, what they cannot do is explain its origin. (bold added)

    Well, this is what I have demonstrated is not the case. I end up with more CSI than I started with, as measured by Dembski’s own metric. It hasn’t simply “flowed” – new CSI has been generated. I’ve even posted a couple of plots in this thread showing the rate of generation.

  15. Just a simple analogy, Joe: if I have a small snowball, and I roll it down hill, I end up with a larger snowball, right?

    Now, would you say that the snow that has accumulated round my original small snowball isn’t “new snow”, just because it wouldn’t have got there had it not been for my “original” snowball?

    I started my exercise with a small snowball (a small amount of CSI if you like – indeed I measured it). I end up with an extremely large snowball (so much CSI that it exceeds Dembski’s threshold for rejecting non-Design).

    In what conceivable sense has my process not created CSI? Yes, it needs a starter quantity, but Dembski himself says that moderate amounts of CSI are perfectly within the realm of non-Design processes. What he claims is that CSI (measured as chi) >1 is beyond the capability of non-Design. Well, my system created enough to crash that threshold.

    Therefore Dembski is wrong.

  16. Joe G: “According to evolutionary biologists natural selection is blind and mindless- as you said there isn’t any planning going on- the same can be said of drift.”

    The same can be said of water flowing down a hill.

    The form of the hill and gravity “decide” what path the water will take even though there is no “planning” by a “mind” for the ultimate destination of the water.

  17. Joe G: Hey, look if you don’t like the design inference just step up and deliver the positive evidence for your position.

    If you are asking me these questions it means you have already given up hope of ever supporting the claims of your position.

    You’re the one blathering about CSI “pertains to origins”. Just tell us how you know, and what the properties of this CSI might have been. Because until you can give some description, I’m not inclined to believe it was there at all.
    Go on – treat yourself. Just for once, make your position on something clear and unequivocal.
    (Not holding my breath)

  18. Joe G to damitall: “Hey, look if you don’t like the design inference just step up and deliver the positive evidence for your position.”

    Elizabeth has done that for our side by showing that ID is not required to generate “CSI” above Dembski’s claimed limit for natural processes.

    We’re not the ones afraid of experiments.

    Now you should show “positive” evidence for your side.

  19. Elizabeth:

    So if you want to conclude an Intelligent Designer from the fact that the universe is non-uniform (lacks entropy, essentially)

    Derail alert! I’m not sure about that bit in parenthesis. IANAP, but shortly after the Big Bang, the entropy of the universe was presumably lowest, and has been getting higher ever since. Yet when entropy was lowest, the universe was (again, presumably) quite uniform. But the ‘energy available to do useful work’ was high. The extreme density may have had something to do with the ability of energy to equilibrate in such a circumstance. We have since had expansion, which reduces available energy (or smears it out, at least), but at the same time, that bizarre force gravity has made some very non-uniform distributions of matter (or ‘crystalline energy’, one might New-Age-ily call it) . Where those collections are big enough, nuclear fusion is ignited which releases vast amounts of energy from that matter. That energy is not retained by the same force it was when it was mass, so it is free to equilibrate – to rush outwards into space and become unavailable for work.

    Essentially, when you have mass, moving towards equilibrium (increasing entropy) actually involves creating a non-uniform distribution of matter. Responding to the ‘pull’ of gravity converts the potential energy into kinetic energy of fall, with entropy-increasing energy losses on conversion and on arrest of fall. You have to put energy back in if you want to move the masses apart again.

    If you don’t have mass the equilibrium situation (no gradient) is a uniform distribution in space. But if you do have mass, the equilibrium is a uniform surface – again, no gradient, quite literally. But I am emphatically not a physicist, so am prepared to be told that is all bunk! :o)

  20. Jon Garvey, at UD, asks:

    “However, starting with a randomly generated population of, say 100 series, I propose to subject them to random point mutations and natural selection, whereby I will cull the 50 series with the lowest products, and produce “offspring”, with random point mutations from each of the survivors, and repeat this over many generations.”

    How does having a high product make a sequence self-reproducing so that natural selection is a valid thing to simulate?

    A “high product” doesn’t make the sequence self-reproducing. All the sequences are potentially self-reproducing. Think of them as a population of simple virtual organisms, that compete for limited resources. The way they get privileged access to the available resources is by having a genome that “codes for” runs of heads that, when the sizes are multiplied together, gives a high value. And in each generation, half of the population die off – those with the smallest products-of-runs-of-heads, just as in a population of biological organisms, those whose genomes code for something less effectively than do the genomes of their peers may lose the competition for resources. But, in each generation, the ones with the highest value output are the ones who go on to reproduce. Their offspring have small mutations. Some of these have even higher value output and out-compete their parents. In other words, the system is a model of Darwinian evolution – a population of virtual critters in competition for finite resources, the winners of which go on to produce offspring. High-value output promotes reproductive success.

    How does multiplying random runs of heads make the result in any way specified? A product of 10^60 would be complex, but hardly specified.

    It is specified by Dembski’s definition, because the set of genomes (sequences of heads and tails) whose product of runs-of-heads is very large is a tiny subset of the vast set of possible genomes. If we were simply using a random search to find members of that subset, it would be beyond the “probabilistic resources” of the universe. However, using Darwinian search, we find members of that tiny subset very readily. And the Specified Complexity of that tiny subset, computed using Dembski’s formula, gives a value >1, which means that it falls in Dembski’s rejection region for “no-Design”. And yet there was no Design – the subset was found by my Darwinian algorithm, not by me.
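    For reference, the formula being invoked here is the one from Dembski’s paper, chi = -log2(10^120 * phi_S(T) * P(T|H)), with chi > 1 as the rejection region for non-design. A minimal sketch of the arithmetic, using illustrative numbers of my own rather than the actual counts from this thread:

```python
import math

def chi(phi_s, p_t_given_h):
    """Dembski's specified-complexity measure from 'Specification:
    The Pattern That Signifies Intelligence':

        chi = -log2(10^120 * phi_S(T) * P(T|H))

    where 10^120 bounds the universe's probabilistic resources,
    phi_S(T) counts patterns at least as simple as the target T, and
    P(T|H) is the probability of T under the chance hypothesis H.
    chi > 1 is the rejection region for non-design.
    """
    return -math.log2(1e120 * phi_s * p_t_given_h)

# Illustrative (made-up) numbers: if the target subset of 500-toss
# series had chance probability 10^-130 and phi_S(T) were 10^5:
print(chi(1e5, 1e-130) > 1)  # True: chi is about 16.6
```

    Any P(T|H) small enough relative to phi_S(T) and the 10^120 factor pushes chi above 1.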

  21. Just a simple analogy, Joe: if I have a small snowball, and I roll it down hill, I end up with a larger snowball, right?

    It all depends on the type of snow. The powdery stuff doesn’t stick.

    Now, would you say that the snow that has accumulated round my original small snowball isn’t “new snow”, just because it wouldn’t have got there had it not been for my “original” snowball?

    Well your snowball doesn’t explain the origin of the snow. And just chucking DNA onto existing DNA does not add CSI any more than adding random words onto the end of a definition adds CSI to that definition.

    I started my exercise with a small snowball (a small amount of CSI if you like – indeed I measured it). I end up with an extremely large snowball (so much CSI that it exceeds Dembski’s threshold for rejecting non-Design).

    If you think so- I strongly disagree that what you did has anything to do with Dembski’s claim.

    In what conceivable sense has my process not created CSI? Yes, it needs a starter quantity, but Dembski himself says that moderate amounts of CSI are perfectly within the realm of non-Design processes.

    He makes no such claim. He claims CSI = design, period, end of story. Your misinterpretation of his paper, while entertaining, is still a misinterpretation.

    What he claims is that CSI (measured as chi) >1 is beyond the capability of non-Design. Well, my system created enough to crash that threshold.

    Yes an intelligent agency can write a program to produce something it thinks is CSI.

    Therefore Dembski is wrong.

    Or you are. I know where my money is being placed.

  22. And a message to any UD readers who follow Joe G’s link (thanks Joe!) to here: Welcome and feel free to register if you’d like to join the discussion. First posts are automatically moderated, but I will release them as soon as I see the alert!

  23. I see Jon Garvey’s post is followed by one from kairosfocus, with predictable content.
    What a shame he does not feel able to join the discussion here – which, BTW, I personally have found extremely illuminating.

  24. Elizabeth,

    I like your analogy. Joe misunderstands Dembski’s claim to be that nature cannot produce more than 500 bits of SI starting with zero SI. Of course, Dembski’s claim is much stronger — he says that nature cannot increase the amount of SI by more than 500 bits.

    Joe’s version of ID doesn’t place any limits at all on nature. As Dembski has pointed out several times, nature can produce small amounts of SI, and Joe thinks that nature can take a small amount of SI and increase it beyond the 500 bit threshold (or at least that ID arguments don’t preclude this). So nature could produce a small amount of SI from scratch and then increase it to CSI. I think we can all get on board with Joe on this one.
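    The 500-bit threshold R0b mentions is just a log of a probability: 500 bits of specified information corresponds to a chance probability of 2^-500, which is below Dembski’s universal probability bound of roughly 10^-150. A quick check of the arithmetic:

```python
import math

# 500 bits of specified information corresponds to a chance
# probability of 2^-500, below the ~10^-150 universal probability bound.
p = 2.0 ** -500
bits = -math.log2(p)
print(bits)        # 500.0
print(p < 1e-150)  # True: 2^-500 is about 3.05e-151
```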

  25. Great contrast of ‘defending CSI’ vs. ‘exploring & understanding CSI’ right now.

  26. Joe G: It all depends on the type of snow. The powdery stuff doesn’t stick.

    True.

    Well your snowball doesn’t explain the origin of the snow. And just chucking DNA onto existing DNA does not add CSI anymore than adding random words onto the end of a definition adds CSI to that definition.

    Well, usually true. However, adding more stuff to the end of a sequence, or mutating the sequence can add CSI, and reliably will if you have natural selection. As you can see from my graph, the fitness of the winning lineage steadily increases over the generations (which it would not do in the absence of NS), and we can also compute (OlegT has done so) that the subset of very high value genomes is very tiny in relation to the total number of possible genomes. If we plug these numbers into Dembski’s formula, we can see that the fittest phenotypes have genomes with chi>1.

    If you think so- I strongly disagree that what you did has anything to do with Dembski’s claim.

    Well, we will have to agree to differ. I have made my case.

    He makes no such claim. He claims CSI = design, period, end of story. Your misinterpretation of his paper, while entertaining, is still a misinterpretation.

    No, he doesn’t. Please read the paper. He defines Specified Complexity (his term in this paper for CSI) as chi, and then says that if chi>1, we can reject non-Design, i.e. infer Design. It’s the whole point of all that stuff nearer the beginning about Fisher and the rejection region. Look at the figure on page 6. And he gives the cut-off for inferring Design as chi>1 (page 21).

    Yes an intelligent agency can write a program to produce something it thinks is CSI.

    Yes, but in this case, the intelligent agent (me) simply designed a population of simple self-replicators and let them get on with it, using Darwinian evolution. It’s the Darwinian evolution that generated that supra-threshold quantity of CSI, not me.

    Or you are. I know where my money is being placed.

    Well, if you are going to gamble, I suggest you use a Darwinian algorithm to throw the dice :)

  27. Joe G: It all depends on the type of snow. The powdery stuff doesn’t stick.

    It’s right in front of your face and you don’t see it. What accounts for differences in snow?

    Well your snowball doesn’t explain the origin of the snow.

    Can you explain the origin of snow? Do you know how many different types of snow there are? Do you know why there are so many types?

    Do you have any idea of the implications of this example of the lowly water molecule? Do you see the effects of gravity coming into the picture as the size of a snowball becomes bigger?

    A simple water molecule, Joe; and so much can happen. Do you even know why?

    We keep asking you to look; but all you do is look at Dembski. Look at the real world instead.

  28. Yes. Joe G has implied this a few times, and I’ve agreed with him.

    The weird thing is that he seems to agree with us and not Dembski, but thinks that he agrees with Dembski and not us!

    He should read that paper of Dembski’s.

  29. Elizabeth,

    Your “example” has nothing to do with biology as you have no idea what the minimal requirement is to get Darwinian evolution started. So your population of self-replicators is meaningless because in order to even get Darwinian processes going you have to start with an amazing amount of CSI. And if you are designing that then Darwin has nothing to do with the subsequent evolution.

    Even starting with the CSI in existing bacteria no one has ever observed CSI being evolved via Darwinian processes- certainly not Lenski.

    So yes, if you start with CSI, and write a program to generate what you think is more CSI, Darwin has nothing to do with it.

  30. Mike Elzinga: It’s right in front of your face and you don’t see it. What accounts for differences in snow?

    Can you explain the origin of snow? Do you know how many different types of snow there are? Do you know why there are so many types?

    Do you have any idea of the implications of this example of the lowly water molecule? Do you see the effects of gravity coming into the picture as the size of a snowball becomes bigger?

    A simple water molecule, Joe; and so much can happen. Do you even know why?

    We keep asking you to look; but all you do is look at Dembski. Look at the real world instead.

    Hi Mike- materialism can’t explain water. It can’t explain gravity. It just has to start with all the stuff it needs to explain.

    I look at the real world, Mike. And the evidence for Intelligent Design is there. And if you had any evidence to support your position, we wouldn’t be having this discussion. So just the fact that we are having it, and all you can do is try to insult me, is evidence enough that you have nothing.

    Thank you

  31. R0b:
    Elizabeth,

    I like your analogy. Joe misunderstands Dembski’s claim to be that nature cannot produce more than 500 bits of SI starting with zero SI. Of course, Dembski’s claim is much stronger — he says that nature cannot increase the amount of SI by more than 500 bits.

    Joe’s version of ID doesn’t place any limits at all on nature. As Dembski has pointed out several times, nature can produce small amounts of SI, and Joe thinks that nature can take a small amount of SI and increase it beyond the 500 bit threshold (or at least that ID arguments don’t preclude this). So nature could produce a small amount of SI from scratch and then increase it to CSI. I think we can all get on board with Joe on this one.

    Umm, I agree with Dembski: nature, operating freely, cannot produce CSI and is very limited in the amount of SI it can produce, which doesn’t appear to be additive in any fashion.

  32. Kairosfocus responds at UD:

    Joe

    Thanks.

    From your clip, at most we have here another hill climber, within an island of function.

    Not a hill-climber: an evolutionary process that is, as my plot here demonstrates, capable of going downhill, in other words, of finding “islands of function” as long as the gulf isn’t too deep.

    Not an explanation for how you get the relevant function, with reproductive capacity from an arbitrary initial point.

    No, and it is not intended to be a demonstration of how reproductive capacity arises. It is a refutation of Dembski’s claim that Darwinian processes cannot generate CSI, and clearly, for Darwinian processes to do so, there has to be reproductive capacity, because self-replication with variation is a prerequisite. So unless Dembski was simply saying: Darwinian processes can’t create CSI because Darwinian processes can’t begin without CSI (and he isn’t), my demonstration is a simple and direct refutation of his claim.

    Someone needs to tell Dr Liddle et al, that the first issue is to get to the first life form with metabolism and self-replication.

    Not to refute Dembski. To explain the origin of life, sure, but I am not attempting to explain OOL! I’m simply showing that Dembski’s specific claim is incorrect – fallacious. There may be other perfectly good claims for ID that are not fallacious, but this one of Dembski’s isn’t one of them.

    Until you can show that on chance plus necessity, all you have is something that says that a designed reproductive capacity can support hill climbing if you have a selection filter. Within an island of function in short.

    And all “evolutionists” ever claim is that once you are on the “island” of Darwinian-capable self-replicators, you can get to anywhere else in biology. If you want people to claim that you can get to that island in the first place you need to talk to the OOL people, not Darwinists. I am certainly not making that claim. We do not, as yet, have a complete theory, let alone a complete, supported theory, as to how the simplest possible Darwinian-capable self-replicators came about.

    Which was never in dispute or doubt.

    Indeed. Which is why I am not disputing it. I am more than happy to stipulate that we do not know how the first self-replicators got going.

    She may have complexity indeed, any string of 500 bits is complex, but there is no specification there independent of the string.

    Yes, there is. I have explained that in detail. The specification is that tiny subset of coin-toss sequences in which the product of the sizes of the runs-of-heads is very large. Plug those numbers into Dembski’s formula and you get χ > 1.
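    That specification is easy to make concrete. A minimal Python sketch (Lizzie’s original script was MATLAB; the function name here is my own), using the runs-of-heads example from the opening post:

```python
from itertools import groupby

def runs_of_heads_product(tosses):
    """Product of the lengths of all runs of heads in a coin-toss record.

    For 'H T T H H H T H T T H H H H T T T' the head-runs have lengths
    1, 3, 1, 4, so the product is 12.  The specified subset is the tiny
    fraction of the 2^500 possible sequences for which this product is
    very large.
    """
    product = 1
    for symbol, run in groupby(tosses):
        if symbol == 'H':
            product *= sum(1 for _ in run)
    return product

print(runs_of_heads_product('HTTHHHTHTTHHHHTTT'))  # 1 * 3 * 1 * 4 = 12
```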

    As you agree, kairosfocus, this isn’t difficult, given a Darwinian process. But Dembski says it’s impossible without Design, and that Darwinian processes are inadequate to account for that amount of CSI.

    But of course that is probably the point, to try to break down the idea that there is such a thing as a definable specification that is independent of the series of bits in the string and in relevant cases will function in a way dependent on the specific pattern.

    Not at all. Unlike many non-IDists, I think CSI is potentially a useful concept, although I think that Hazen et al have a better operationalised version. In fact I wouldn’t even set the cut-off so high – it’s just that I don’t think that it is “the pattern that signifies Intelligence”. I think it is “the pattern that signifies replication with variation and feedback”.

    Unfortunately, we are not really dealing with a serious discussion, just something to get to a point where the concept of specification will be dismissed by those disinclined to follow where it points.

    I hope not. I think it is a very valid concept. I have used it here.

    To see how pointless it is, let us suppose she manages to obtain a string that when the runs of H’s are multiplied the product exceeds 10^60 or whatever. Let’s say it actually fits some definition of a “specification.” Would that have shown that by chance and blind necessity, we can generate functional codes that carry out relevant life function, especially by incremental changes?

    No, of course not. But it would have refuted (and has refuted) Dembski’s claim that it is not possible in principle.

    The purpose is rhetorical not serious.

    At most it may tell us we need to tighten the way we talk about a specification.

    My purpose is perfectly serious, and if it causes ID proponents to tighten up their definitions, that is an excellent result. I do recommend the Hazen et al paper, here, which I know Joe G likes. That version of Functional Complexity works very well, I think. But again, it doesn’t show that FC can’t be generated by Darwinian mechanisms – clearly it can.

    I agree entirely that OOL remains unexplained satisfactorily. But that is not the point that Dembski is making, and wouldn’t explain the torrent of “anti-Darwin” posts at UD!

    And, coming back to the real world, we are dealing with functionally specific complex organisation and associated information. The sort of thing that makes a key fit a lock and open it, but not another apparently similar key. Likewise, the way certain strings of two-bit elements code for proteins that work, and fairly slight disruptions tend to lead to non-function. Where also, we have that protein fold domains are isolated in the space of protein sequences to something like 1 in 10^70.

    Similarly, for the evolutionary materialist narrative to gain traction, they have to account for the origin of a metabolising entity with a code based replication facility.

    Again, I am not attempting to tackle the issue of OOL.

    And what is happening here is that the relevant config spaces are huge, beyond astronomical, and the co-ordinated complexity of what works, makes sure things are pretty unrepresentative of the field of possible strings. So, chance and blind mechanism do not have capacity to successfully search the space.

    Yes, I know the config space is huge. I calculated it: 2^500 possible genomes, of which a handful are specified. Yet my system finds members of that tiny handful, despite being “blind” and “chance” (random mutations; no foresight).

    If you are moving around in an island where neighbours are selected and rewarded incrementally, you are within an island of function and have failed to address where the problem begins. Back to the Weasel “misleading” — Dawkins’ admission — example, in yet another guise.

    The only “island” is the island of things that reproduce with variance. Once on that island, powerful CSI generation is clearly possible.

    And this is NOT like Weasel, in several key respects, the most important being that I do not pre-specify the winning sequences, i.e. the winning sequences are not the fitness function. The fitness criterion is simply a property of a tiny subset of the total possible sequences, and the searching is done purely by random point mutations that are then filtered by that criterion.
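    To see the difference from Weasel concretely, here is a minimal Python sketch of the scheme described in the opening post (population of 100, cull the 50 lowest products, random point mutation). The generation count and one-mutation-per-offspring detail are my assumptions, not from the original script; note the fitness function never mentions any target sequence, and because survivors are retained the best product can never decrease:

```python
import random
from itertools import groupby

random.seed(0)  # for reproducibility
N, POP, GENS = 500, 100, 300  # genome length and population size from the OP; 300 generations is just illustrative

def fitness(genome):
    # Product of the lengths of runs of 1s (heads).  No winning sequence
    # is pre-specified anywhere: only this property is rewarded.
    prod = 1
    for bit, run in groupby(genome):
        if bit == 1:
            prod *= sum(1 for _ in run)
    return prod

def mutate(genome):
    # One random point mutation, as in the OP.
    g = list(genome)
    g[random.randrange(N)] ^= 1
    return tuple(g)

pop = [tuple(random.randint(0, 1) for _ in range(N)) for _ in range(POP)]
best_initial = max(fitness(g) for g in pop)
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]                        # cull the 50 lowest products
    pop = survivors + [mutate(g) for g in survivors]  # each survivor leaves one mutant offspring
best_final = max(fitness(g) for g in pop)
print(best_initial, best_final)
```

    A real run towards the jackpot needs far more generations; the best arrangements pack runs of four or five heads separated by single tails, giving products on the order of 10^60.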

  33. Oh, and Joe, you are still in diametrical disagreement with Dembski. Dembski gives “compressibility” as the measure of specification. You are using incompressibility. Check out section 4 in his paper.

    To sum up, the collection of algorithmically compressible (and therefore nonrandom) sequences has small probability among the totality of sequences, so that observing such a sequence is reason to look for explanations other than chance.

    Note that it is compressible patterns that he says require us to “look for explanation other than chance”, not incompressible ones.

    You might want to check out Abel as well, who makes the same point.

    They are both wrong, of course :)
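    The compressibility point in that Dembski passage is easy to illustrate. Using zlib output length as a crude stand-in for algorithmic compressibility (it is only an upper bound on Kolmogorov complexity, not the real thing):

```python
import random
import zlib

random.seed(1)
patterned = 'HT' * 250                                      # simple description: "repeat 'HT' 250 times"
typical = ''.join(random.choice('HT') for _ in range(500))  # a typical 500-toss record

compressed_patterned = len(zlib.compress(patterned.encode()))
compressed_typical = len(zlib.compress(typical.encode()))
# The patterned record compresses to a handful of bytes; the typical
# random one cannot be squeezed much below its roughly 63-byte
# (500-bit) entropy limit.
print(compressed_patterned, compressed_typical)
```

    It is the rare, short-description sequences (like the patterned one) that Dembski flags as a reason to look beyond chance.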

  34. Joe G: 1- You need to explain how necessity and chance produced the original biological function or reproduction- you have not done so- you have just granted the very thing that needs explaining.

    2- You may generate some specification, but you have not generated CSI, especially when you start with reproducing entities.

    3- You only think we focus on biology

    4- You are equivocating because you are assuming that evolutionary processes are not design processes.

    As I have told you, and supported, ID is not anti-evolution. When a GA solves the problem it was designed to solve, that is ID, not the blind watchmaker.

    1. You need to explain the how, when, where, who, and why of ‘the designer’ – you have not done so; you have just granted the very things that need explaining.

    2. What happened to side loading/intervention?

    3. If you don’t focus on biology, what do you focus on? Did ‘the designer’ design everything in the universe? Does everything have CSI? Is everything irreducibly complex?

    4. You are equivocating because you are assuming that what you call design processes are not evolutionary processes.

    ID is anti-evolution. ID depends on an extra cosmic designer (a God), evolution does not. ID depends on assertions about CSI, IC, the EF, and other mumbo jumbo that evolution does not depend on. ID is based on religious beliefs, evolution is not.

  35. Joe G:
    Elizabeth,

    Your “example” has nothing to do with biology as you have no idea what the minimal requirement is to get Darwinian evolution started.

    It has plenty to do with biology, but I agree that I do not know what the minimal requirement to get Darwinian evolution started is.

    So your population of self-replicators is meaningless because in order to even get Darwinian processes going you have to start with an amazing amount of CSI.

    Can you support this assertion? Do you know what the minimal requirement to get Darwinian evolution started is?

    And if you are designing that then Darwin has nothing to do with the subsequent evolution.

    Because some VIP lays a foundation stone for a building, the building contractors have nothing to do with the rest of the building?

    Even starting with the CSI in existing bacteria no one has ever observed CSI being evolved via Darwinian processes- certainly not Lenski.

    I just showed you CSI being evolved via Darwinian processes. And haven’t you read that Hazen paper you rightly keep recommending?

    So yes, if you start with CSI, and write a program to generate what you think is more CSI, Darwin has nothing to do with it.

    Well, yes it has. You just need the minimal prerequisites for Darwinian processes, and Darwinian processes do the rest. As I have shown.

  36. Joe G: Hey, look if you don’t like the design inference just step up and deliver the positive evidence for your position.

    If you are asking me these questions it means you have already given up hope of ever supporting the claims of your position.

    The fact that you avoid relevant questions over and over again, and try to turn everything around and put the burden on other people to prove that your claims are wrong means that you have already given up hope of ever supporting your claims.

  37. Joe G: Hi Mike- materialism can’t explain water. It can’t explain gravity. It just has to start with all the stuff it needs to explain.
    I look at the real world, Mike. And the evidence for Intelligent Design is there. And if you had any evidence to support your position, we wouldn’t be having this discussion. So just the fact that we are having it, and all you can do is try to insult me, is evidence enough that you have nothing.
    Thank you

    No; you don’t look at the world. You have no idea of what has been accounted for.

    You accused me of making up physics and chemistry examples when I tried to direct your attention to many of the very interesting things that go on in the natural world.

    So, despite your assertions, I don’t see any evidence you really pay attention to anything other than ID/creationism; and as Elizabeth has already pointed out, you don’t even get that right.

    Yes, I am quite aware of the fact that “materialism” is something you fear and loathe. But your fear and loathing and your refusal to even look at nature doesn’t change the fact that nature does what it does whether you like it or not.

    Elizabeth’s demo has a specific point which she has stated clearly and repeatedly. What is more, many of these little demonstrations can be used as models for specific phenomena that occur in nature. We make these kinds of models all the time. The more accurately our models capture nature, the more they replicate what we see in nature.

    It’s not hard to understand. You just need to go out and look. There are huge fields of condensed matter and organic chemistry on which major industries in the world are based. There are absolutely NONE based on the ideological writings of Dembski and other ID/creationists. Perhaps you could explain why that is.

  38. Creodont: The fact that you avoid relevant questions over and over again, and try to turn everything around and put the burden on other people to prove that your claims are wrong means that you have already given up hope of ever supporting your claims.

    Reminds me of:

    “As for your example, I’m not going to take the bait. You’re asking me to play a game: “Provide as much detail in terms of possible causal mechanisms for your ID position as I do for my Darwinian position.” ID is not a mechanistic theory, and it’s not ID’s task to match your pathetic level of detail in telling mechanistic stories. If ID is correct and an intelligence is responsible and indispensable for certain structures, then it makes no sense to try to ape your method of connecting the dots. True, there may be dots to be connected. But there may also be fundamental discontinuities, and with IC systems that is what ID is discovering.”

    ID: Questions everything, offers nothing.

  39. Well, this thing is certainly a nice example of how, when a population is a long way from adaptation, most mutations are beneficial, and when it is near optimal, most mutations are deleterious. Still, the score is creeping upwards. Got to 3.0135e+59.

    ETA some time tomorrow evening, I think. Just as well the Earth is Old….

  40. Elizabeth:
    Oh, and Joe, you are still in diametrical disagreement with Dembski. Dembski gives “compressibility” as the measure of specification. You are using incompressibility. Check out section 4 in his paper.

    Note that it is compressible patterns that he says require us to “look for explanation other than chance”, not incompressible ones.

    You might want to check out Abel as well, who makes the same point.

    They are both wrong, of course

    And necessity does the job of producing algorithmically compressible patterns! Laws can produce specificity: crystals represent specificity; snowflakes represent specificity.

    And again computer programs, assembly instructions and encyclopedia articles are all CSI and not one can be algorithmically compressed.

    A DNA sequence of a gene cannot be algorithmically compressed.

    Complex sequences, however, cannot be compressed to, or expressed by, a shorter sequence of coding instructions. (Or rather, to be more precise, the complexity of a sequence reflects the extent to which it cannot be compressed.)- Stephen C. Meyer page 106 of “Signature in the Cell”

    Meyer and Dembski have actually collaborated on ID concepts.

  41. You just need the minimal prerequisites for Darwinian processes, and Darwinian processes do the rest.

    The Sanford paper about AVIDA demonstrates otherwise. Seeing that paper went through peer review, I will go with that.

    The effects of low-impact mutations in digital organisms

    Chase W. Nelson and John C. Sanford

    Theoretical Biology and Medical Modelling, 2011, 8:9 | doi:10.1186/1742-4682-8-9

    Abstract:

    Background: Avida is a computer program that performs evolution experiments with digital organisms. Previous work has used the program to study the evolutionary origin of complex features, namely logic operations, but has consistently used extremely large mutational fitness effects. The present study uses Avida to better understand the role of low-impact mutations in evolution.

    Results:

    When mutational fitness effects were approximately 0.075 or less, no new logic operations evolved, and those that had previously evolved were lost. When fitness effects were approximately 0.2, only half of the operations evolved, reflecting a threshold for selection breakdown. In contrast, when Avida’s default fitness effects were used, all operations routinely evolved to high frequencies and fitness increased by an average of 20 million in only 10,000 generations.

    Conclusions:

    Avidian organisms evolve new logic operations only when mutations producing them are assigned high-impact fitness effects. Furthermore, purifying selection cannot protect operations with low-impact benefits from mutational deterioration. These results suggest that selection breaks down for low-impact mutations below a certain fitness effect, the selection threshold. Experiments using biologically relevant parameter settings show the tendency for increasing genetic load to lead to loss of biological functionality. An understanding of such genetic deterioration is relevant to human disease, and may be applicable to the control of pathogens by use of lethal mutagenesis.

  42. Liz, you say:

    “It is specified by Dembski’s definition, because genomes (sequences of heads and tails) whose product of runs-of-heads is very large is a tiny subset of the vast set of possible genomes.”

    // Now here is Dembski explaining specificity maps pertaining to coin tosses, (R-) and (R’) are coin toss strings:

    “6. Specificity:
    The crucial difference between (R-) and (R’) is that (R’) exhibits a simple, easily described pattern whereas (R-) does not. To describe (R’), it is enough to note that this sequence lists binary numbers in increasing order. By contrast, (R-) cannot, so far as we can tell, be described any more simply than by repeating the sequence. Thus, what makes the pattern exhibited by (R’) a specification is that the pattern is easily described but the event it denotes is highly improbable and therefore very difficult to reproduce by chance. It’s this combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance) that makes the pattern exhibited by (R’) — but not (R-) — a specification.”

    // There is a clear cut disconnect between what Dembski says and what you say he says. Such as I had noted earlier. A binary population of {H,T}, a string of random coin tosses, has no function; therefore, to simulate the specific arrangements of DNA sequences that function / all DNA sequences with coin flips, your string needs to be simply describable, such as “every fourth coin is heads.”

    What you are doing has nothing to do with Dembski. More from Dembski:
    “[Chaitin, Kolmogorov, and Solomonoff] What they said was that a string of 0s and 1s becomes increasingly random as the shortest computer program that generates the string increases in length. For the moment, we can think of a computer program as a short-hand description of a sequence of coin tosses. Thus, the sequence (N) is not very random because it has a very short description, namely, repeat ‘1’ a hundred times.”

    “To sum up, the collection of algorithmically compressible (and therefore nonrandom) sequences has small probability among the totality of sequences, so that observing such a sequence is reason to look for explanations other than chance.”

    So you say:
    “I started my exercise with a small snowball (a small amount of CSI if you like – indeed I measured it).”

    And Dembski:

    “Algorithms and natural laws are in principle incapable of explaining the origin of CSI. To be sure, algorithms and natural laws can explain the flow of CSI. Indeed, algorithms and natural laws are ideally suited for transmitting already existing CSI. As we shall see next, what they cannot do is explain its origin.”

  43. junkdnaforlife: So you say:
    “I started my exercise with a small snowball (a small amount of CSI if you like – indeed I measured it).”
    And Dembski:
    “Algorithms and natural laws are in principle incapable of explaining the origin of CSI. To be sure, algorithms and natural laws can explain the flow of CSI. Indeed, algorithms and natural laws are ideally suited for transmitting already existing CSI. As we shall see next, what they cannot do is explain its origin.”

    The proper answer to Dembski is that science cannot explain Cloufrisk Sclmoracted Ignaphrashism either. So what?

    But insofar as anyone can nail down what Dembski means by CSI in a given instance, showing that Dembski is wrong is not too difficult. It doesn’t help any of the followers of ID/creationism to keep trying to change the definition of CSI every time someone succeeds in showing Dembski to be wrong.

    Watch Elizabeth’s demonstration and learn something about nature even if you can’t settle on a definition of CSI. This is really interesting.

    Fortunately, science doesn’t need CSI.

  44. junkdnaforlife: “”[Chaitin, Kolmogorov, and Solomonoff] What they said was that a string of 0s and 1s becomes increasingly random as the shortest computer program that generates the string increases in length. For the moment, we can think of a computer program as a short-hand description of a sequence of coin tosses. Thus, the sequence (N) is not very random because it has a very short description, namely, repeat ‘1’ a hundred times.””

    Please help me understand this.

    If I write a program that has an 8-bit immediate = 100, and then another that has a 16-bit immediate = 100, the program will still generate 100 ‘1’s in both cases.

    According to you, because one program is 1 byte shorter, my output information is less random.

    How can that be if both output strings, (i.e., the “information” I see), are identical?

  45. Toronto,

    In your example, if both strings have 100 ‘1’s, then they both are simply describable as “100 1’s”. We can recover the set from that description.
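    A small illustration of that point, with the caveat that Kolmogorov complexity is defined by the shortest program that can produce a string, not by whichever program was actually used (the two constructions below merely stand in for Toronto’s two hypothetical programs):

```python
# Two different "programs" with differently sized constants...
out_a = ''.join('1' for _ in range(100))  # stands in for the 8-bit-immediate program
out_b = '1' * 100                         # stands in for the 16-bit-immediate program

# ...produce the identical string, so the string has a single, short
# description ("100 1's") no matter which program generated it: its
# randomness is a property of the string, set by the *shortest*
# generating program.
print(out_a == out_b, len(out_a))  # True 100
```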

  46. junkdnaforlife:
    Elizabeth,

    Specified relates to (proteins) by the specific arrangements that execute function / all possible arrangements. A protein sequence is specified for a particular function. Random coin flips have no function to observe; therefore compressibility is measured. Another way to explain the simply describable/compressibility relationship between a binary population {H,T} and DNA sequences is: a protein-coding DNA sequence can be simply described as “interacts with the glucocorticoid receptor,” such that the number of simply describable protein-coding DNA sequences / all possible DNA sequences shares a relationship with the number of simply describable {H,T} patterns / the number of all possible {H,T} patterns.

    Ok. Except in Lizzie’s example the random coin flips HAVE a function to fulfill: *maximize the product of H-runs*. Thus, her example is a lot closer to actual protein-coding DNA sequences, and has no use for the compressibility equivalent. It is, instead, simply describable: *maximize the product of H-runs*, which would be equivalent to *interacts with the glucocorticoid receptor*.

  47. Mike Elzinga,

    You have already offered a perfectly coherent argument many weeks ago: that particles at the atomic level do not behave the way abstract representations in information theory behave, and that our ID/Creationist maps won’t hold. They won’t hold because research someday will show the relationships via chemistry and physics. ID/Creationists say that physics and chemistry will come up short, precisely because of what the maps are telling us. You argue the maps won’t hold. Specifically in the case of origins, I argue they will.

    You want to hug it out now?

Leave a Reply