Creating CSI with NS

Imagine a coin-tossing game.  On each turn, players toss a fair coin 500 times.  As they do so, they record all runs of heads, so that if they toss H T T H H H T H T T H H H H T T T, they will record: 1, 3, 1, 4, representing the number of heads in each run.

At the end of each round, each player computes the product of their runs-of-heads.  The person with the highest product wins.
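For concreteness, here is a minimal MATLAB sketch of the scoring rule (the helper name runsProduct is mine, for illustration; my full script is posted below):

    % Score a series of coin tosses: the product of the lengths of all
    % runs of Heads.  seq is a row vector, 1 = Heads, 0 = Tails.
    function p = runsProduct(seq)
        d = diff([0, seq, 0]);      % pad with Tails at both ends
        starts = find(d == 1);      % where each run of Heads begins
        stops  = find(d == -1);     % just past where each run ends
        p = prod(stops - starts);   % product of the run lengths
    end

For the example series above, runsProduct([1 0 0 1 1 1 0 1 0 0 1 1 1 1 0 0 0]) returns 1 x 3 x 1 x 4 = 12.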

In addition, there is a House jackpot.  Any person whose product exceeds 10^60 wins the House jackpot.

There are 2^500 possible series of coin-tosses.  However, I’m not sure exactly how many of that vast number of possible series would give a product exceeding 10^60. However, if some bright mathematician can work it out for me, we can work out whether a series whose product exceeds 10^60 has CSI.  My ballpark estimate says it has.
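One relevant bound, if I’ve done the arithmetic right: k runs of heads need at least k-1 separating tails, so a run of length L effectively “costs” L+1 tosses, and log(L)/(L+1) peaks at L=4. With 500 tosses the best possible score therefore comes from 99 runs of 4 plus one run of 5 (exactly 500 tosses), giving 5 x 4^99 ≈ 2.01 x 10^60, with 4^100 ≈ 1.61 x 10^60 (499 tosses) close behind. So only series essentially at the theoretical ceiling clear the 10^60 bar. A quick MATLAB sanity check:

    5 * 4^99   % ans = 2.0086e+60: the best possible product
    4^100      % ans = 1.6069e+60: one hundred runs of four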

That means, clearly, that if we randomly generate many series of 500 coin-tosses, it is exceedingly unlikely, in the history of the universe, that we will get a product that exceeds 10^60.

However, starting with a randomly generated population of, say 100 series, I propose to subject them to random point mutations and natural selection, whereby I will cull the 50 series with the lowest products, and produce “offspring”, with random point mutations from each of the survivors, and repeat this over many generations.
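In skeleton form (a simplified sketch, not my actual script, with point mutations only and reusing the runsProduct helper above), the scheme looks something like this:

    popSize = 100; nTosses = 500; pFlip = 0.01;
    nGens = 5000;                              % generation count is arbitrary here
    pop = rand(popSize, nTosses) > 0.5;        % random starting population

    for g = 1:nGens
        scores = zeros(popSize, 1);
        for i = 1:popSize
            scores(i) = runsProduct(pop(i, :));
        end
        [~, order] = sort(scores, 'descend');
        survivors = pop(order(1:popSize/2), :);              % cull the worst 50
        mutants = xor(survivors, rand(size(survivors)) < pFlip);  % point mutations
        pop = [survivors; mutants];            % each survivor leaves one offspring
    end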

I’ve already reliably got to products exceeding 10^58, but it’s possible that I may have got stuck in a local maximum.

However, before I go further: would an ID proponent like to tell me whether, if I succeed in hitting the jackpot, I have satisfactorily refuted Dembski’s case? And would a mathematician like to check the jackpot?

I’ve done it in MATLAB, and will post the script below.  Sorry I don’t speak anything more geek-friendly than MATLAB (well, a little Java, but MATLAB is way easier for this).

529 thoughts on “Creating CSI with NS”

  1. RBH:
    Mike wrote

    I have written just such a program in which the target changes in the middle of an evolutionary run. The population just tags along. My my; who would have thought? 🙂

    The second genetic algorithm I ever wrote, sometime back in the early 1980s, had two different selective environments to which the population was exposed in alternate generations. Over some fairly small number of generations the population split into two subpops, one subpop fairly well adapted to environment A and pretty badly adapted to environment B (though not badly adapted enough to go extinct in one generation), the other subpop the reverse. Speciation in action! 🙂

  2. And I have no idea what happened to the formatting in that comment. Please disregard the interpolated “RBH”.

  3. Joe Felsenstein:
    The “latching” issue is a minor matter. Dembski and others inflated it into a Big Deal by raising the issue, implying (but not saying explicitly) that it was an important property that was critical to explaining why the Weasel algorithm reached its goal so much faster than blind search.

    In fact, as others here have noted: (1) the Weasel algorithm didn’t “latch”, and (2) even if it had done so, that would have made little difference to the effectiveness of the search. So the whole issue was bogus.

    I agree – “latching” is a non-issue, a side-track. Quite a while ago, I calculated the probabilities to observe the fact that the algorithm doesn’t “latch”…

  4. RBH
    Mike wrote

    The second genetic algorithm I ever wrote, sometime back in the early 1980s, had two different selective environments to which the population was exposed in alternate generations. Over some fairly small number of generations the population split into two subpops, one subpop fairly well adapted to environment A and pretty badly adapted to environment B (though not badly adapted enough to go extinct in one generation), the other subpop the reverse. Speciation in action! :-)

    One of the nice things I learned from The Beak of the Finch is the way that populations colonise however many niches you provide, so the distributions of the relevant dimensions track the changes in available niches.

  5. Elizabeth:
    What I find sort of interesting is that none of the objections to my falsifications are those raised by Dembski, who seems happy to concede that exercises such as mine result in patterns that exhibit CSI with chi > 1. But he would, I guess, lump it in with “Weasel” as being the result of an algorithm in which the pattern is embodied somehow in the algorithm itself, in the form of the fitness function.

    So he would (correctly, as it happens, in this case) conclude “ID” from my pattern, on the grounds that it must have been the result of an intelligently selected fitness function.

    I think there is a gaping hole in this argument, but I’m sort of surprised no-one has made it here (or, if they have, not as clearly as Dembski does).

    junkdnaforlife hints at it, but has not yet come up with specification criteria for the aa output and the output from my exercise that are comparable. I hope s/he will do so.

    Umm, excuse me but I have been trying to tell you that for quite some time-

    And oleg, I am well aware of page 193, and 194 and all the pages in the book. However, unlike you, I am able to read them in context and don’t just consider them in isolation.

    Ya see before page 193 comes page 149- section 3.8- “The Origin of Complex Specified Information”. But even before that is:

    Algorithms and natural laws are in principle incapable of explaining the origin of CSI. To be sure, algorithms and natural laws can explain the flow of CSI. Indeed, algorithms and natural laws are ideally suited for transmitting already existing CSI. As we shall see next, what they cannot do is explain its origin. (bold added)

  6. Joe Felsenstein: Let me add to that. People who make the point that mutations are much more likely to be deleterious than advantageous often think that this means that evolution must therefore lose ground. But if they actually calculated fixation probabilities they might come to a different conclusion.

    For example, with a population size of 100,000 individuals, and new mutations that reduce fitness by 0.0001 when heterozygous (and twice as much when homozygous), the probability of fixation is only 8.498 x 10^(-22), so basically they can never get fixed. A favorable mutation that has an advantage of 0.00001, only one-tenth as strong, also mostly gets lost. But it gets fixed a fraction 0.00002037 of the time, which is vastly more often than those deleterious mutations.

    Evolution would then not lose ground even if deleterious mutations were a million times more common than advantageous ones. (I have been using Motoo Kimura’s famous 1962 fixation probability formula).
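    (A minimal MATLAB sketch of that calculation, which reproduces the numbers quoted above; it assumes Ne = N and a new mutant starting at frequency 1/(2N):

        % Kimura (1962): fixation probability of a new mutant with additive
        % selection coefficient s per copy (heterozygote 1+s, homozygote 1+2s).
        kimuraFix = @(N, s) (1 - exp(-2*s)) ./ (1 - exp(-4*N*s));

        N = 100000;
        kimuraFix(N, -0.0001)   % deleterious: 8.498e-22, essentially never fixes
        kimuraFix(N,  0.00001)  % favorable: 2.0373e-5, vastly more often
    )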

    Has anyone ever confirmed any mutation rate/fixation rate equations on populations in the wild?

    We know that “beneficial” is relative- that is, what is beneficial for one generation may not be beneficial for the next- and environments change.

  7. Latching may be a non-issue, but if Dembski says a latching algorithm is ten times as effective as a non-latching one, his statement is worth a mention.

  8. junkdnaforlife:

    Madbat, you say: “Proteins and their sequences are NOT predictable from postulating a function.” And here we actually agree on something.

    Good. I am glad you finally realize that. In your earlier posts you kept insisting that a scientist should be able to construct a protein from knowing nothing but its function.

    Your objections as to whether a function has 26, 3, or k possible sequences seem to be statistically insignificant (as long as k is not a large value) when we are dealing with a much larger value of n (all possible sequences).

    That’s an empty claim as long as you have no idea whether k is or is not a large value, in other words: until you can tell us what k is. Can you?

    If k is the amount of fitness functions that will shift frequencies in a binary population [I am assuming you mean: all sequences of 500 coin tosses that have a product of H-runs above 10^60], and n is all possible fitness function sequences [all possible sequences of 500 coin tosses], and if we allow c to represent the amount of amino acid sequences that will regulate thin filament length in mice, and call m all possible amino acid sequences, then k represents a far greater value than c, and it simply follows that k is far less specific.

    Really? Again, show your math: Elizabeth, with the help of many others on this forum, has provided the k and n, so you just need to look these up. Now please give us the c and the m, so we can compare and evaluate your claim.

    The rest of your post has already been addressed by Liz and others.

  9. I see questions raised by patrick and madbat as well as commentary by Liz. The comment in which Liz considers that the specified Nebulin protein function described as “regulates thin filament length in mice” is analogous to “God did it” is especially astonishing. This may be evidence of a disconnect far more severe than I considered. I’m going to try and connect with these all at once so I’ll catch up within the day.

  10. What I meant, junkdnaforlife, before you spend too much time considering a response, is that “short” does not equal “compressed”. It may simply mean “lacks detail”.

  11. Furthermore: the equivalent of “regulates thin filament length in mice” in mine is unspecified. It could be “maximises energy consumption”.

    Make sure you are comparing like with like.

  12. Joe G,

    “And oleg, I am well aware of page 193, and 194 and all the pages in the book. However, unlike you, I am able to read them in context and don’t just consider them in isolation.

    Ya see before page 193 comes page 149- section 3.8- “The Origin of Complex Specified Information”. But even before that is:

    Algorithms and natural laws are in principle incapable of explaining the origin of CSI. To be sure, algorithms and natural laws can explain the flow of CSI. Indeed, algorithms and natural laws are ideally suited for transmitting already existing CSI. As we shall see next, what they cannot do is explain its origin. (bold added)”

    So, it’s the origins claim again. Can you explain, demonstrate, and show positive, testable evidence of/for the origin of CSI? Can Dembski?

    Don’t you claim that CSI and algorithms were front loaded into every ‘kind’ of organism at the moment of creation by “the intelligent designer”? But don’t you claim that ID is also OK with side loading/intervention in organisms (i. e. new, amended, or revised CSI and algorithms) by “the intelligent designer”? And don’t you claim that the universe itself was front loaded and/or was/is side loaded with CSI, natural laws, and algorithms by “the intelligent designer”? Can you produce positive, testable evidence and a testable hypothesis for any of those claims?

    Since Dembski and you rely on what he says: “Algorithms and natural laws are in principle incapable of explaining the origin of CSI”, and since you (and apparently Dembski) claim that origins are what ID/CSI are all about, and since, according to you and other IDists, the origins originated in/from “the intelligent designer”, I’ll remind you that you have the burden of explaining, demonstrating, and producing positive, testable evidence of “the designer” and the ultimate origin of “the designer”, and everything else.

  13. Well, I got the jackpot of jackpots 🙂

    Here is my winning sequence of runs-of-heads:

    4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4

    And here is the lineage of the winner (white=Heads, Black=Tails, generations run from top to bottom):

    [Image: Winning Lineage]

    Product is 1.6069e+60
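    (As a check: the winning sequence is one hundred runs of 4, and prod(4*ones(1,100)) — i.e. 4^100 — gives 1.6069e+60 in MATLAB, just over the 10^60 jackpot.)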

  14. Elizabeth: Well, I got the jackpot of jackpots
    Here is my winning sequence of runs-of-heads:

    It is interesting that after about 250 generations the sequences tend to become more “robust” in sequences of 4 heads. Once the pattern becomes established, it is less likely that the next generations will deviate much from these sequences with 4 heads.

    This phenomenon seems to occur even without any partial or total “latching” in these programs. If one includes some low probability of “latching” in order to simulate the influence of being “deeper in a potential well,” the shape of the “decay” curve toward the target is affected. Usually one can find a “latching coefficient” that produces a nice exponential decay curve that plots as a decreasing straight line on a log versus linear plot, the curve representing the number of members of the population that have NOT totally adapted in a given generation.

  15. Well, what it’s taught me, which was something I hadn’t explicitly appreciated before, is how important the “connectivity” between possible combinations is, in terms of mutation types.

    It’s not something I’ve seen referred to (or not in language I’ve registered!). I’m trying to make a simplified matrix showing connectivity patterns conferred by different mutation types.

    For my winning run, parents “gave birth” to offspring with one of four “mutation types”: none (i.e. identical); .01 probability at each locus of a flip (Heads to Tails or Tails to Heads), i.e. point mutation; randomly picked string of randomly selected length (from a Poisson distribution) removed and inserted elsewhere, i.e. deletion and insertion of the deleted portion; randomly picked string of randomly selected length (from a Poisson distribution) duplicated in place of some part of the existing string (possibly overlapping with the duplicated portion). There is a rough sketch of these four operators below.

    On that last run I increased the mean of my Poisson distribution to 50, which meant that substantial portions could be duplicated. If these were bad portions, then the thing would fail. Even if quite a good portion gets duplicated, if it creates short runs in doing so, it may fail.

    But what it does is to increase connectivity between fitness peaks at the upper end of fitness.

    And wherever the fitness peaks happen to lie, when plotted on to a phase space of given connectivity, clearly the more connected the phase space is by one-step mutations, the closer fitness peaks are likely to be, thus smoothing the landscape.
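    As a rough MATLAB sketch of those four operators (not my exact script: poissrnd is from the Statistics Toolbox, and the uniform choice among operators here is just for simplicity):

        % One offspring from one parent, via one of the four mutation types.
        % lambda is the Poisson mean for segment lengths (50 in my last run).
        function child = mutate(parent, pFlip, lambda)
            n = length(parent);
            switch randi(4)
                case 1                          % none: identical copy
                    child = parent;
                case 2                          % point mutations: flip each locus
                    child = xor(parent, rand(1, n) < pFlip);
                case 3                          % excise a segment, reinsert elsewhere
                    len = min(poissrnd(lambda), n - 1);
                    i = randi(n - len + 1);
                    seg = parent(i:i+len-1);
                    rest = parent([1:i-1, i+len:n]);
                    j = randi(length(rest) + 1);
                    child = [rest(1:j-1), seg, rest(j:end)];
                case 4                          % duplicate a segment over another part
                    len = min(poissrnd(lambda), n);
                    i = randi(n - len + 1);
                    j = randi(n - len + 1);
                    child = parent;
                    child(j:j+len-1) = parent(i:i+len-1);
            end
        end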

  16. I started appreciating connectivity when I tried making words. It’s a non-issue if you have a target, but if you are assigning fitness to substrings then you have the IC problem to overcome. How do substrings get connected to form words?

    It depends entirely on the characteristics of the functional space. If sequence space is not connectable, then Behe is probably right. That’s why the work of Thornton is so important in this argument.

  17. petrushka:
    I started appreciating connectivity when I tried making words. It’s a non-issue if you have a target, but if you are assigning fitness to substrings then you have the IC problem to overcome. How do substrings get connected to form words?

    It depends entirely on the characteristics of the functional space. If sequence space is not connectable, then Behe is probably right. That’s why the work of Thornton is so important in this argument.

    Yes, exactly. Behe still makes the best ID argument. I’m sure some things are unevolvable. The problem is: how do you tell from looking at a thing, whether it could have evolved or not?

    Taking away bits and seeing if it still works is clearly fallacious. We know IC structures can evolve, and that things can also evolve by “IC pathways” (that’s what AVIDA tells us).

    That doesn’t mean that everything is evolvable, it’s just that you can’t, post hoc, say that it can’t.

    Which means, of course, that you also can’t say for sure that it could. But that’s where the asymmetry comes in – biologists don’t claim that “there was no ID”, but IDists claim that there must have been. (p>UPB).

  18. Before leaving this topic I want to say just one more thing about my algorithm. The fitness function is quite capable of “rewarding” two or three substrings that can never fit together to form one word. So it is capable of getting into an IC hole from which it cannot climb.

    Sometimes it does and sometimes it doesn’t.

  19. Elizabeth: Yes, exactly. Behe still makes the best ID argument. I’m sure some things are unevolvable. The problem is: how do you tell from looking at a thing, whether it could have evolved or not?

    Taking away bits and seeing if it still works is clearly fallacious. We know IC structures can evolve, and that things can also evolve by “IC pathways” (that’s what AVIDA tells us).

    That doesn’t mean that everything is evolvable, it’s just that you can’t, post hoc, say that it can’t.

    Which means, of course, that you also can’t say for sure that it could. But that’s where the asymmetry comes in – biologists don’t claim that “there was no ID”, but IDists claim that there must have been. (p>UPB).

    Again, AVIDA has nothing to do with biology- nothing at all.

  20. Joe G: Has anyone ever confirmed any mutation rate/fixation rate equations on populations in the wild?

    We don’t go around trying to confirm Pythagoras’s Theorem in the wild. Therefore it is wrong?

    We know that exact geometric triangles don’t exist in nature — every actual triangle is a little bit nontriangular. So therefore Pythagoras’s and Euclid’s results are useless?

    Like geometry, models of theoretical population genetics give us insight, and to deal with the complexities of nature we make them more complicated, in ways that can still be analyzed mathematically. ‘Nuf said.

  21. Elizabeth: I disagree.

    Disagree all you want- it doesn’t change the fact that AVIDA does not represent real-world biology- for one, the “organisms” are far too simple, and for another, the rewards are far too generous.

  22. Joe Felsenstein: We don’t go around trying to confirm Pythagoras’s Theorem in the wild. Therefore it is wrong?

    We know that exact geometric triangles don’t exist in nature — every actual triangle is a little bit nontriangular. So therefore Pythagoras’s and Euclid’s results are useless?

    Like geometry, models of theoretical population genetics give us insight, and to deal with the complexities of nature we make them more complicated, in ways that can still be analyzed mathematically. ‘Nuf said.

    LoL! We can prove mathematics in the wild- we can prove geometry in the wild. What you cannot do is model biological evolution- and guess what? It appears we are having a tough time trying to model climate- that’s right, the models that say we are doomed if we do not stop CO2 from rising have been shown to be nonsense- they were applied to see if they could retrodict the past climate and they failed.

    So yes you can try to model biological evolution but you have no way of knowing if your models are correct/ reflect reality. ’nuff said, indeed.

  23. Creodont:
    Joe G,

    “And oleg, I am well aware of page 193, and 194 and all the pages in the book. However, unlike you, I am able to read them in context and don’t just consider them in isolation.

    Ya see before page 193 comes page 149- section 3.8- “The Origin of Complex Specified Information”. But even before that is:

    So, it’s the origins claim again. Can you explain, demonstrate, and show positive, testable evidence of/for the origin of CSI? Can Dembski?

    Don’t you claim that CSI and algorithms were front loaded into every ‘kind’ of organism at the moment of creation by “the intelligent designer”? But don’t you claim that ID is also OK with side loading/intervention in organisms (i. e. new, amended, or revised CSI and algorithms) by “the intelligent designer”? And don’t you claim that the universe itself was front loaded and/or was/is side loaded with CSI, natural laws, and algorithms by “the intelligent designer”? Can you produce positive, testable evidence and a testable hypothesis for any of those claims?

    Since Dembski and you rely on what he says: “Algorithms and natural laws are in principle incapable of explaining the origin of CSI”, and since you (and apparently Dembski) claim that origins are what ID/CSI are all about, and since, according to you and other IDists, the origins originated in/from “the intelligent designer”, I’ll remind you that you have the burden of explaining, demonstrating, and producing positive, testable evidence of “the designer” and the ultimate origin of “the designer”, and everything else.

    One more time-

    Until YOU step up and produce positive, testable evidence and a testable hypothesis for your position- whatever that is- there is no use discussing science with you.

    So please step up and actually say something so I know what you will and do accept as “science”.

  24. I see I have a little time today to make a quick post.
    A finite state machine is a very important concept with respect to information processors and information processing systems.
    All states that are mapped (implicitly or explicitly) within a complex system must keep track of possible state trajectories; this includes its own state, a next state, a previous state, etc., including error-checking and correction mechanisms which need to take place at any given state depending on subsequent input.
    For a complex system to function, all states depend on each other, and so we must have at least an error detection and correction mechanism ready at each state for expected or unexpected input.
    Basic stuff really, when we discuss it, but as anyone who’s written more than 1000 lines of code knows – things slowly become intractable, OOP or not. The intractability problem comes with a greater amount of information to control, which only adds to the problem.
    The problem with evolutionary algorithms like Weasel relative to reality is that at each mutable instance there is no state-tracking involved, no possible way of knowing what state has been affected until the outcome, and what magnitude the random change had on the system as a whole. You could say “well, it survived or didn’t”, but that is not an explanation. If it survived, it is because it survived the volatility because of implicit error-detection and error-correction mechanisms.
    That is all for now.
