Evo-Info 3: Evolution is not search

Introduction to Evolutionary Informatics, by Robert J. Marks II, the “Charles Darwin of Intelligent Design”; William A. Dembski, the “Isaac Newton of Information Theory”; and Winston Ewert, the “Charles Ingram of Active Information.” World Scientific, 332 pages.
Classification: Engineering mathematics. Engineering analysis. (TA347)
Subjects: Evolutionary computation. Information technology–Mathematics.

Marks, Dembski, and Ewert open Chapter 3 by stating the central fallacy of evolutionary informatics: “Evolution is often modeled by as [sic] a search process.” The long and the short of it is that they do not understand the models, and consequently mistake what a modeler does for what an engineer might do when searching for a solution to a given problem. What I hope to convey in this post, primarily by means of graphics, is that fine-tuning a model of evolution, and thereby obtaining an evolutionary process in which a maximally fit individual emerges rapidly, is nothing like informing evolution to search for the best solution to a problem. We consider, specifically, a simulation model presented by Christian apologist David Glass in a paper challenging evolutionary gradualism à la Dawkins. The behavior on exhibit below is qualitatively similar to that of various biological models of evolution.

Animation 1. Parental populations in the first 2000 generations of a run of the Glass model, with parameters (mutation rate .005, population size 500) tuned to speed the first occurrence of maximum fitness (1857 generations, on average), are shown in orange. Offspring are generated in pairs by recombination and mutation of heritable traits of randomly mated parents. The fitness of an individual in the parental population is, loosely, the number of pairs of offspring it is expected to leave. In each generation, the parental population is replaced by surviving offspring. Which of the offspring die is arbitrary. When the model is modified to begin with a maximally fit population, the long-term regime of the resulting process (blue) is the same as for the original process. Rather than seek out maximum fitness, the two evolutionary processes settle into statistical equilibrium.

Figure 1. The two bar charts, orange (Glass model) and blue (modified Glass model), are the mean frequencies of fitnesses in the parental populations of the 998,000 generations following the 2,000 shown in Animation 1. The mean frequency distributions approximate the equilibrium distribution to which the evolutionary processes converge. In both cases, the mean and standard deviation of the fitnesses are 39.5 and 2.84, respectively, and the average frequency of fitness 50 is 0.0034. Maximum fitness occurs in only 1 of 295 generations, on average.

I should explain immediately that an individual organism is characterized by 50 heritable traits. For each trait, there are several variants. Some variants contribute 1 to the average number of offspring pairs left by individuals possessing them, and other variants contribute 0. The expected number of offspring pairs, or fitness, for an individual in the parental population is roughly the sum of the 0-1 contributions of its 50 traits. That is, fitness ranges from 0 to 50. It is irrelevant to the model what the traits and their variants actually are. In other words, there is no target type of organism specified independently of the evolutionary process. Note the circularity in saying that evolution searches for heritable traits that contribute to the propensity to leave offspring, whatever those traits might be.
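
In code, this bookkeeping amounts to almost nothing. Here is a rough Python sketch (illustrative only, not the program that produced the figures, with names chosen for exposition): an individual is reduced to 50 one-bit contributions, and its fitness is simply their sum.

```python
import random

N_TRAITS = 50  # number of heritable traits per individual

def random_individual(rng=random):
    # Each trait is reduced to its 0-or-1 contribution to fitness. Which
    # concrete variant of the trait lies behind each bit is irrelevant to
    # the model, so the bit is all we keep.
    return [rng.randint(0, 1) for _ in range(N_TRAITS)]

def fitness(individual):
    # Loosely, the expected number of offspring pairs: the sum of the 0-1
    # contributions of the 50 traits, so fitness ranges from 0 to 50.
    return sum(individual)
```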

The two evolutionary processes displayed above are identical, apart from their initial populations, and are statistically equivalent over the long term. Thus a general account of what occurs in one of them must apply to both of them. Surely you are not going to tell me that a search for the “target” of maximum fitness, when placed smack dab on the target, rushes away from the target, and subsequently finds it once in a blue moon. Hopefully you will allow that the occurrence of maximum fitness in an evolutionary process is an event of interest to us, not an event that evolution seeks to produce. Again, fitness is not the purpose of evolution, but instead the propensity of a type of organism to leave offspring. So why is it that, when the population is initially full of maximally fit individuals, the population does not stay that way indefinitely? In each generation, the parental population is replaced with surviving offspring, some of which are different in type (heritable traits) from their parents. The variety in offspring is due to recombination and mutation of parental traits. Even as the failure of parents to leave perfect copies of themselves contributes to the decrease of fitness in the blue process, it contributes also to the increase of fitness in the orange process.
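
To make the generational cycle concrete, here is a deliberately loose Python sketch of a single generation, building on the fitness sketch above. It is a caricature for exposition, not Glass’s exact algorithm and not the code behind the figures: breeders are drawn in proportion to fitness (one way of realizing “fitness is, loosely, the expected number of offspring pairs”), offspring pairs are formed by per-trait recombination and mutation, and the surplus is culled at random.

```python
def next_generation(parents, mutation_rate, rng=random):
    """One generation of a Glass-like cycle (a loose caricature, not Glass's
    exact scheme): fitness-weighted breeding, recombination, per-trait
    mutation, and arbitrary culling of the surplus offspring."""
    n = len(parents)
    weights = [fitness(p) for p in parents]
    offspring = []
    while len(offspring) < 2 * n:              # more offspring than can survive
        mom, dad = rng.choices(parents, weights=weights, k=2)
        for _ in range(2):                     # offspring are generated in pairs
            child = [m if rng.random() < 0.5 else d for m, d in zip(mom, dad)]
            child = [1 - b if rng.random() < mutation_rate else b for b in child]
            offspring.append(child)
    rng.shuffle(offspring)
    return offspring[:n]                       # which of the offspring die is arbitrary
```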

Both of the evolutionary processes in Animation 1 settle into statistical equilibrium. That is, the effects of factors like differential reproduction and mutation on the frequencies of fitnesses in the population gradually come into balance. As the number of generations goes to infinity, the average frequencies of fitnesses cease to change (see “Wright, Fisher, and the Weasel,” by Joe Felsenstein). More precisely, the evolutionary processes converge to an equilibrium distribution, shown in Figure 1. This does not mean that the processes enter a state in which the frequencies of fitnesses in the population stay the same from one generation to the next. The equilibrium distribution is the underlying changelessness in a ceaselessly changing population. It is what your eyes would make of the flicker if I were to increase the frame rate of the animation, and show you a million generations in a minute.
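
The equilibrium distribution of Figure 1 is estimated exactly as just described: run the process for a long time, and average the per-generation frequencies of the fitness values. A sketch, continuing the one above (again illustrative only; the burn-in and generation counts in the comment are the ones reported for Figure 1):

```python
from collections import Counter

def equilibrium_estimate(pop, mutation_rate, burn_in, keep, rng=random):
    """Average the per-generation frequencies of each fitness value over the
    `keep` generations that follow a `burn_in`; the averages approximate the
    equilibrium distribution."""
    counts = Counter()
    for gen in range(burn_in + keep):
        pop = next_generation(pop, mutation_rate, rng)
        if gen >= burn_in:
            counts.update(fitness(ind) for ind in pop)
    return {f: c / (keep * len(pop)) for f, c in sorted(counts.items())}

# Figure 1 averages the 998,000 generations that follow the first 2,000:
# pop = [random_individual() for _ in range(500)]
# dist = equilibrium_estimate(pop, 0.005, burn_in=2_000, keep=998_000)
```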

Animation 2. As the mutation rate increases, the equilibrium distribution shifts from right to left, which is to say that the long-term mean fitness of the parental population decreases. The variance of the fitnesses (spread of the equilibrium distribution) increases until the mean reaches an intermediate value, and then decreases. Note that the fine-tuned mutation rate .005 ≈ 10^-2.3 in Figure 1.

Let’s forget about the blue process now, and consider how the orange (randomly initialized) process settles into statistical equilibrium, moving from left to right in Animation 1. The mutation rate determines

  1. the location and the spread of the equilibrium distribution, and also
  2. the speed of convergence to the equilibrium distribution.

Animation 2 makes the first point clear. In visual terms, an effect of increasing the mutation rate is to move the equilibrium distribution from right to left, placing it closer to the distribution of the initial population. The second point is intuitive: the closer the equilibrium distribution is to the frequency distribution of the initial population, the faster the evolutionary process “gets there.” Not only does the evolutionary process have “less far to go” to reach equilibrium, when the mutation rate is higher, but the frequency distribution of fitnesses changes faster. Animation 3 allows you to see the differences in rate of convergence to the equilibrium distribution for evolutionary processes with different mutation rates.

Animation 3. Shown are runs of the Glass model with the mutation rate we have focused upon, .005, doubled and halved. That is, uʹ = 2 ⨉ .005 = .01 for the blue process, and uʹ = 1/2 ⨉ .005 = .0025 for the orange process.

An increase in mutation rate speeds convergence to the equilibrium distribution, and reduces the mean frequency of maximum fitness.

I have selected a mutation rate that strikes an optimal balance between the time it takes for the evolutionary process to settle into equilibrium, and the time it takes for maximum fitness to occur when the process is at (or near) equilibrium. With the mutation rate set to .005, the average wait for the first occurrence of maximum fitness, in 1001 runs of the Glass model, is 1857 generations. Over the long term, maximum fitness occurs in about 1 of 295 generations. Although it’s not entirely accurate, it’s not too terribly wrong to think in terms of waiting an average of 1562 generations for the evolutionary process to reach equilibrium, and then waiting an average of 295 generations for a maximally fit individual to emerge. Increasing the mutation rate will decrease the first wait, but the decrease will be more than offset by an increase in the second wait.
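
That measurement is nothing more than running the process until fitness 50 first appears, and averaging the generation count over many runs. A sketch, continuing the code above (illustrative only; my caricature of the model should not be expected to reproduce the 1857-generation figure exactly):

```python
def wait_for_max_fitness(pop_size, mutation_rate, rng=random):
    """Count the generations until an individual of fitness 50 first appears."""
    pop = [random_individual(rng) for _ in range(pop_size)]
    gens = 0
    while max(fitness(ind) for ind in pop) < N_TRAITS:
        pop = next_generation(pop, mutation_rate, rng)
        gens += 1
    return gens

# The text reports an average of 1857 generations over 1001 runs of the
# actual model with population size 500 and mutation rate .005, e.g.:
# waits = [wait_for_max_fitness(500, 0.005) for _ in range(1001)]
# print(sum(waits) / len(waits))
```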

Figure 2. Regarding Glass’s algorithm (“Parameter Dependence in Cumulative Selection,” Section 3) as a problem solver, the optimal mutation rate is inversely related to the squared string length (compare to his Figure 3). We focus on the case of string length (number of heritable traits) L = 50, population size N = 500, and mutation rate uʹ = .005, with scaled mutation rate uʹL^2 = 12.5 ≈ 2^3.64. The actual rate of mutation, commonly denoted u, is 26/27 times the rate reported by Glass. Note that each point on a curve corresponds to an evolutionary process. Setting the parameters does not inform the evolutionary search, as Marks et al. would have you believe, but instead defines an evolutionary process.

Figure 2 provides another perspective on the point at which changes in the two waiting times balance. In each curve, going from left to right, the mutation rate is increasing, the mean fitness at equilibrium is decreasing, and the speed of convergence to the equilibrium distribution is increasing. The middle curve (L = 50) in the middle pane (N = 500) corresponds to Animation 2. As we slide down the curve from the left, the equilibrium distribution in the animation moves to the left. The knee of the curve is the point where the increase in speed of convergence no longer offsets the increase in expected wait for maximum fitness to occur when the process is near equilibrium. The equilibrium distribution at that point is the one shown in Figure 1. Continuing along the curve, we now climb steeply. And it’s easy to see why, looking again at Figure 1. A small shift of the equilibrium distribution to the left, corresponding to a slight increase in mutation rate, greatly reduces the (already low) incidence of maximum fitness. This brings us to an important question, which I’m going to punt into the comments section: why would a biologist care about the expected wait for the first appearance of a type of organism that appears rarely?

You will not make sense of what you’ve seen if you cling to the misconception that evolution searches for the “target” of maximally fit organisms, and that I must have informed the search where to look. What I actually did, by fine-tuning the parameters of the Glass model, was to determine the location and the shape of the equilibrium distribution. For the mutation rate that I selected, the long-term average fitness of the population is only 79 percent of the maximum. So I did not inform the evolutionary process to seek out individuals of maximum fitness. I selected a process that settles far away from the maximum, but not too far away to suit my purpose, which is to observe maximum fitness rapidly. If my objective were to observe maximum fitness often, then I would reduce the mutation rate, and expect to wait longer for the evolutionary process to settle into equilibrium. In any case, my purpose for selecting a process is not the purpose of the process itself. All that the evolutionary process “does” is to settle into statistical equilibrium.

Sanity check of some claims in the book

Unfortunately, the most important thing to know about the Glass model is something that cannot be expressed in pictures: fitness has nothing to do with an objective specified independently of the evolutionary process. Which variants of traits contribute 1 to fitness, and which contribute 0, is irrelevant. The fact of the matter is that I ignore traits entirely in my implementation of the model, and keep track of 1s and 0s instead. Yet I have replicated Glass’s results. You cannot argue that I’ve informed the computer to search for a solution to a given problem when the solution simply does not exist within my program.

Let’s quickly test some assertions by Marks et al. (emphasis added by me) against the reality of the Glass model.

There have been numerous models proposed for Darwinian evolution. […] We show repeatedly that the proposed models all require inclusion of significant knowledge about the problem being solved. If a goal of a model is specified in advance, that’s not Darwinian evolution: it’s intelligent design. So ironically, these models of evolution purported to demonstrate Darwinian evolution necessitate an intelligent designer.

Chapter 1, “Introduction”


[T]he fundamentals of evolutionary models offered by Darwinists and those used by engineers and computer scientists are the same. There is always a teleological goal imposed by an omnipotent programmer, a fitness associated with the goal, a source of active information …, and stochastic updates.

Chapter 6, “Analysis of Some Biologically Motivated Evolutionary Models”


Evolution is often modeled by as [sic] a search process. Mutation, survival of the fittest and repopulation are the components of evolutionary search. Evolutionary search computer programs used by computer scientists for design are typically teleological — they have a goal in mind. This is a significant departure from the off-heard [sic] claim that Darwinian evolution has no goal in mind.

Chapter 3, “Design Search in Evolution and the Requirement of Intelligence”

My implementation of the Glass model tracks only fitnesses, not associated traits, so there cannot be a goal or problem specified independently of the evolutionary process.

Evolutionary models to date point strongly to the necessity of design. Indeed, all current models of evolution require information from an external designer in order to work. All current evolutionary models simply do not work without tapping into an external information source.

Preface to Introduction to Evolutionary Informatics


The sources of information in the fundamental Darwinian evolutionary model include (1) a large population of agents, (2) beneficial mutation, (3) survival of the fittest and (4) initialization.

Chapter 5, “Conservation of Information in Computer Search”

The enumerated items are attributes of an evolutionary process. Change the attributes, and you do not inform the process to search, but instead define a different process. Fitness is the probabilistic propensity of a type of organism to leave offspring, not search guidance coming from an “external information source.” The components of evolution in the Glass model are differential reproduction of individuals as a consequence of their differences in heritable traits, variety in the heritable traits of offspring resulting from recombination and mutation of parental traits, and a greater number of offspring than available resources permit to survive and reproduce. That, and nothing you will find in Introduction to Evolutionary Informatics, is a fundamental Darwinian account.

1,439 thoughts on “Evo-Info 3: Evolution is not search”

  1. Tom English: You need to unlearn “independent events.” … Scientists sometimes speak of independent events. But the term has no meaning in probability theory.

    Why on earth would you say such a thing? I’d really like to know.

    Were you having a “Rumraket moment”?

  2. I’d just like to add that I understand what an independent event is and I disagree with Tom here. The concept of an independent event is pretty well understood in probability theory. The simplest analogy is throwing coins. Throwing a coin once will not affect the toss of the next throw of the coin. The tossing of the coin is an independent event from any previous and any following coin tosses.

    The issue is that Mung is hellbent on misreading everything I say, so he has gotten the idea into his head that I’m (in his view) saying that if eyes evolved once, they are so likely they’ll evolve again, and if they evolved twice, they’re so likely they’ll evolve thrice. And so on. This is the view he’s trying to saddle me with, but it isn’t a view I hold.

    I have merely kept trying to explain what a frequentist probability is. You observe the frequency with which certain events come up when you toss the coin, and from that you infer a probability. The probability is the frequency observed in the coin-tosses.

    Mung wants to construe this as if I’m saying that if eyes evolved in some lineage, this will affect the probability that they evolve in another lineage. Which is obviously preposterous and I’ve never claimed it works like that. Nor does it follow from anything I’ve said. Which is why rather than just quote statements I make, Mung is forced to sort of re-state them in his own words so he can make up the error he wants to exist.

  3. Yes, even when people explicitly deny they are invoking some kind of Sheldrakian causation, that’s what opponents (all of them, it seems) seem to read them as doing.

  4. Rumraket: I have merely kept trying to explain what a frequentist probability is. You observe the frequency with which certain events come up when you toss the coin, and from that you infer a probability. The probability is the frequency observed in the coin-tosses.

    I don’t think there is such a thing as a frequentist probability. I think there is probability, and a frequentist interpretation of it. But hey, I’m still learning. 🙂

    You’ve been claiming more than this. You’ve been making explicit claims about eye evolution and the probability that eyes will evolve or are easy to evolve.

    If you toss a coin, what does the sample space look like? Is it finite? If you roll a die what does the sample space look like? Is it finite? Now apply that to eye evolution, if you can.

    You’re no dummy Rumraket, by any stretch. But sometimes you seem compelled to be disagreeable regardless of the soundness of the position you take. I have no idea what the probabilities are with respect to eye evolution, and neither do you. I don’t know how to calculate them, and neither do you. Not even using the frequentist interpretation.

    With coin tosses and tosses of a die you can calculate the probabilities. Show us how to do that with eye evolution.

    Why not just agree with me and chalk up another thing we can agree on. 🙂

    ETA: Yes, I use hyperbole.

  5. Mung: I have no idea what the probabilities are with respect to eye evolution

    Mung retracting all his arguments from improbability right there. And he will never even know he did, hehe

  6. …defining a probability as a frequency is not merely an excuse for ignoring the laws of physics, it is more serious than that. We want to show that maintenance of a frequency interpretation to the exclusion of all others requires one to ignore virtually all the professional knowledge that scientists have about real phenomena. If the aim is to draw inferences about real phenomena, this is hardly the way to begin.

    – E.T. Jaynes

  7. Mung: I don’t think there is such a thing as a frequentist probability. I think there is probability, and a frequentist interpretation of it. But hey, I’m still learning.

    No I agree there, it is the frequentist interpretation.

    You’ve been claiming more than this. You’ve been making explicit claims about eye evolution and the probability that eyes will evolve or are easy to evolve.

    Yes more specifically I agreed with the statement that with evolution, eyes are likely.

    If you toss a coin, what does the sample space look like? Is it finite?

    It’s heads and it’s tails. Or well technically it can land so it stands on edge. Yes it’s finite.

    If you roll a die what does the sample space look like? Is it finite? Now apply that to eye evolution, if you can.

    I don’t know the total possible space of sampleable phenotypes. But I don’t think that is necessary in order to be able to claim with good evidential justification, that with evolution, eyes are probable. I will elaborate below.

    You’re no dummy Rumraket, by any stretch. But sometimes you seem compelled to be disagreeable regardless of the soundness of the position you take. I have no idea what the probabilities are with respect to eye evolution, and neither do you. I don’t know how to calculate them, and neither do you. Not even using the frequentest interpretation.

    I think the frequentist interpretation would be the only way of getting at some sort of number. But there’d be two different ways of getting a frequency.

    One way (which is not the way I intended) would be to record the frequency with which eyes evolve on some branch, out of the total number of emerging branches. Which I think would actually be a low number, because most lineages would either go extinct (many of which would be outcompeted by lineages that evolve eyes) or move to/stay in a lightless niche.

    Hold, I suspect you’re now thinking “but you said that with evolution, eyes are likely, and now you’re saying it would be a low frequency”? Yes I did. The claim was not that for any given lineage, eyes are likely. It was broader, it was about evolution in general. With a process of branching descent with modification and natural selection, there will likely evolve eyes in some lineage. Those eyes will come about through a lot of death, failure and extinction.

    Put another way. The claim is not that if we could view all the branches on the tree of life, that we could put eyes as independent events on a large fraction of them. The claim is not that the independent origin of eyes will have a large frequency among the total number of branches.

    Which gets us to the other way to understand the frequency, the one I intended.
    The claim is that, if we re-rolled the tape of the history of life, eyes would evolve again. And do so probably close to every time. This is the sense in which I claim eyes are likely with evolution, as opposed to unlikely without evolution. We see something like eyes even in simple bacterial life. Cyanobacteria are known to control their motility so they move towards the light that powers their photosynthesis, which indicates they have some sort of light-detecting organ that also gives them a sense of direction.

    So the claim is that the frequency of independent life-histories where eyes evolve (supposing, of course, that it is life that lives in an environment with significant light) is high. And I think this claim is justified by the observation that basically anything that lives in an environment with light, has either evolved them independently, or inherited them through common descent and retained them through selection. And that primitive yet highly beneficial light-sensitive mechanisms exist even in bacterial life.

  8. Laplace was defended staunchly by the mathematician Augustus de Morgan (1838, 1847) and the physicist W. Stanley Jevons, who understood Laplace’s motivations and for whom his beautiful mathematics was a delight rather than a pain. Nevertheless, the attacks of Boole and Venn found a sympathetic hearing in England among non-physicists. Perhaps this was because biologists, whose training in physics and mathematics was for the most part not much better than Venn’s, were trying to find empirical evidence for Darwin’s theory and realized that it would be necessary to collect and analyze large masses of data in order to detect the small, slow trends that they visualized as the means by which evolution proceeds. Finding Laplace’s mathematical works too much to digest, and since the profession of statistician did not yet exist, they would naturally welcome suggestions that they need not read Laplace after all.

    – E.T. Jaynes

    Damned English. Biologists. Darwinists.

  9. The data indicate that “eyes” (light-responsive information-gathering organelles) appeared independently several times in life’s history. However, the structures of these organelles vary (e.g. the vertebrate vs the mollusk vs the insect ‘eye.’)

    So I suggest that it’s a waste of time to argue about the probability of ‘eye’ evolution when you’re speculating about the origins of different structures that provide similar functions.

    Compare the different structural bases of flight in disparate organisms.

    Evolution is a tinkerer and whatever works survives:

    http://nicorg.pbworks.com/w/file/fetch/41529753/Jacob%20Evolution%20and%20Tinkering.pdf

  10. Mung:

    You’re no dummy Rumraket, by any stretch. But sometimes you seem compelled to be disagreeable regardless of the soundness of the position you take.

    Oh, the irony.

    Mung, you are constantly disagreeing with — and attempting to condescend to — people like Rumraket, who are far brighter and better educated than you are.

    Try to keep in mind that you are bad at science, math, probability, and logical thinking. When you find yourself in disagreement with opponents who are good at those things, the most likely explanation is that you’ve screwed up yet again. Take that as your working hypothesis and try to figure out where your mistake is.

    In the rare event that you are actually correct and your brighter opponent is mistaken, that will come out in the end. But since it is highly unlikely, don’t make it your starting assumption.

    You seem compelled to be disagreeable regardless of the soundness of your position, and it makes you look even more foolish than necessary.

  11. Rumraket: No, not “if eyes have evolved then even more will evolve”. Nobody has claimed this anywhere.

    Nobody says “eyes evolved once, so they will evolve twice, or three times or even more”.

    You wrote:

    Rumraket: If something evolves over and over again, then yes obviously it is evident that the event is increasingly likely for every time it occurs. The assumption that the evolution of eyes must be very unlikely is contradicted by real-world evidence.

    So in tossing your coin, do you think that the event HEADS makes it increasingly likely that HEADS will appear when your coin is tossed again?

    Toss your coin 100 times. For each time HEADS appears you say the event is increasingly likely? So if we toss the coin enough times, then HEADS will always appear. That’s what your frequentist logic leads us to.

    Or maybe after twenty tosses the number of TAILS outnumbers the number of HEADS and makes the event TAILS increasingly likely. Eventually we’ll have all TAILS. Every toss will be a TAIL.

    I think what you meant to say is that if we count every light sensitive spot as an eye, then as we count them, the count will increase. Well no shit. Brilliant observation.

    You’re confusing simple counting with probabilities.

    The assumption that the evolution of eyes must be very unlikely is contradicted by real-world evidence.

    The original claim, made by an evolutionist, is that the arrangements of matter that can function as an eye are an infinitesimally small fraction of the possible arrangements of matter.

    Do you disagree with that?

  12. If keiths posted his probability calculations for evolving an eye I’d be grateful if someone would quote him. I’m predicting based on past content-less snark even more content-less snark.

    OMG! I’M A FREQUENTIST!

  13. LoL. You’d think I attacked someone’s religion!

    Is Pedant upset for not making the unholy triumvirate?

  14. Mung: So in tossing your coin, do you think that the event HEADS makes it increasingly likely that HEADS will appear when your coin is tossed again?

    Imagine someone has a bag with 90 white and 110 red balls.
    She has no idea how many of them are white and how many are red.
    She begins drawing (sampling) balls at random from the bag and counting the number of each color, but every time the sampled ball goes back into the bag.

    She doesn’t know it, but the probability is 45% white and 55% red for EVERY draw.

    After 1 draw, she’s counted 0% white, 100% red.
    After 10 draws, she’s counted 30% white, 70% red.
    After 100 draws, she’s counted 43% white, 57% red.
    After 1000 draws, she’s counted 44.6% white, 55.4% red.

    So after 10 draws she thinks maybe about 30% of the balls are white.
    Was she justified in believing that the frequency of white balls was closer to 45% than to 30% later on, after 100 and even more after 1000 draws? Of course she was.

    Does that mean that the probability of drawing a white ball increased after the 10th draw? Of course not, because it was always 45%.

    Basic high school level stuff, where’s the difficulty?
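
    [A few lines of Python replay dazz’s bag experiment, for anyone who wants to check. The 90/110 composition and the sampling with replacement are exactly as described above; the estimated frequency drifts toward 45% as draws accumulate, while the probability on any single draw never changes.]

    ```python
    import random

    bag = ["white"] * 90 + ["red"] * 110      # true composition: 45% white, 55% red

    draws = []
    for n in (1, 10, 100, 1000):
        while len(draws) < n:
            draws.append(random.choice(bag))  # sample WITH replacement
        freq = draws.count("white") / len(draws)
        print(f"after {n:>4} draws, estimated frequency of white = {freq:.3f}")

    # The estimate wanders toward 0.45 as draws accumulate, but the probability
    # of drawing white on any single draw is 45% every time.
    ```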

  15. dazz,

    Basic high school level stuff, where’s the difficulty?

    The difficulty is that “high school level” = “beyond Mung’s capabilities”.

    Mung is too dim to see the difference between epistemic and objective probabilities.

    It’s similar to the mistake he kept making in our discussion of entropy. He couldn’t get it through his head that the entropy of a system, as seen by a particular observer, depends on that particular observer’s knowledge regarding the microstate of the system.

  16. Probability Theory: The Logic of Science by E.T. Jaynes. Chapter 10: Physics of ‘random experiments’. Deals specifically with the physics of coin tossing.

    It is difficult to see how one could define a ‘fair’ method of tossing except by the condition that it should result in a certain frequency of heads; and so we are involved in a circular argument.

    That’s where the frequentist interpretation takes you. But hey, if circular arguments are what it takes to support evolutionary claims don’t let me stop you! Whatever floats your boat.

  17. Mung: You wrote:

    So in tossing your coin, do you think that the event HEADS makes it increasingly likely that HEADS will appear when your coin is tossed again?

    Mung fails basic science once again.

    There is no selection pressure for a coin to come up heads or tails. A fair coin will have a 0.5 probability of a heads outcome with each toss.

    On a planet bathed in electromagnetic radiation in the 380 to 700 nm wavelength range there is considerable selectable advantage in developing an organ to detect and process those wavelengths.

    Amazing how stupid some Creationists will act in their attention whoring.

  18. Adapa: Mung fails basic science once again.

    What scientific evidence do you have that drinking your own urine will make you smarter?

    ETA: Because your momma told you so doesn’t count.

  19. Adapa: There is no selection pressure for a coin to come up heads or tails.

    Says who? A coin can be biased. A coin toss can be controlled. And who the hell but you knows what you mean by “selection pressure” when you’re talking about coin tossing.

    Mung fails basic science once again.

    You’re not even ready for science yet. You can’t even grasp the basics of tossing a coin.

  20. Adapa: A fair coin will have a 0.5 probability of a heads outcome with each toss.

    Not according to Rumraket and his defenders. We have to actually observe the tosses before we can make any inference. And as I’ve already pointed out, the definition of what makes for a fair coin is circular.

    Better luck next time!

  21. Mung: You wrote:

    So in tossing your coin, do you think that the event HEADS makes it increasingly likely that HEADS will appear when your coin is tossed again?

    No, because… they’re independent events.

    Toss your coin 100 times. For each time HEADS appears you say the event is increasingly likely?

    If the frequency of heads is an increasing proportion of total tosses, yes. Then we are in fact determining that the probability of heads is increasingly likely. We are NOT determining that it will keep rising. That’s not what I’m saying.

    If you toss the coin 50 times and 27 of them are heads, then the measured probability of heads is 54%. If you go on tossing and by the time you’ve tossed it 100 times, 68 of them are heads, then the measured probability of heads is 68%. So the probability of heads rose from 54% to 68% over the course of tossing the coin 100 times.

    So in so far as the frequency with which you toss heads out of total coin tosses, is an increasing proportion of total coin tosses, then yes in fact the estimation of the probability of tossing heads will increase. BUT NOBODY IS SAYING THIS TREND WILL CONTINUE.

    I know you’d really really like me to say something stupid like that. So now that I’ve explicitly denied thinking like that, can we stop arguing about it so you can just concede I don’t actually believe this?

    So if we toss the coin enough times, then HEADS will always appear. That’s what your frequentist logic leads us to.

    First of all it’s not my frequentist logic, it’s just frequentist logic. Second, no, I’m still not making the claim that the frequency will keep rising for future coin tosses. I’m merely making the claim that for every toss, if the frequency (of heads, or tails) rises, the new and higher frequency you measure, is the probability. As explained above.

    I hope I don’t have to explain this again. I’m not making the claim that the tosses of the coin will affect future tosses of the coin. I’m not making the claim that if over the course of tossing the coin, we observe an average increase in frequency of heads, that this observed tendency will continue for future coin tosses yet to be made.
    I’m not saying that.
    I’m not saying that.
    I’m not saying that.
    (Is it sinking in yet?)

    Or maybe after twenty tosses the number of TAILS outnumbers the number of HEADS and makes the event TAILS increasingly likely. Eventually we’ll have all TAILS. Every toss will be a TAIL.

    No, not that.
    I’m not saying that.
    I’m not saying that.
    I’m not saying that.

    I think what you meant to say is that if we count every light sensitive spot as an eye, then as we count them, the count will increase. Well no shit. Brilliant observation.

    No, I didn’t mean to say that at all. I mean to say the things I write, not the things Mung wants them to read as.

    You’re confusing simple counting with probabilities.

    No, not at all.

    If you toss a coin a thousand times, and 680 of them are heads, then the probability of heads for that coin is very likely to be close to 68%. Then it is decidedly NOT a fair coin. You might have assumed it was a fair coin, but the measured frequency of heads falsified that assumption.

    The original claim, made by an evolutionist, is that the arrangements of matter that can function as an eye are an infinitesimally small fraction of the possible arrangements of matter.

    Do you disagree with that?

    I don’t disagree. I agree with that.

  22. Mung:
    Probability Theory: The Logic of Science by E.T. Jaynes. Chapter 10: Physics of ‘random experiments’. Deals specifically with the physics of coin tossing.

    It is difficult to see how one could define a ‘fair’ method of tossing except by the condition that it should result in a certain frequency of heads; and so we are involved in a circular argument.

    That’s where the frequentist interpretation takes you. But hey, if circular arguments are what it takes to support evolutionary claims don’t let me stop you! Whatever floats your boat.

    I like how you find your way to a position simply by observing that some people with whom you disagree on something else take up that position. If I had been advocating another position on probability, you’d be busy finding quotes to contradict that.

    It is easy to see how, if I had instead just insisted that we assume all coins are fair, yet on experiment they turned out to be strongly biased, you’d have been best new friends with a frequentist interpretation.

    Hey Mung, how do we determine if a coin is a fair coin? Do we just assume that? And if we toss it 1000 times and it comes up heads 60% of the time, is it then still a fair coin? No? Then you’re a frequentist and you should deal with it.

  23. Mung: Not according to Rumraket and his defenders. We have to actually observe the tosses before we can make any inference. And as I’ve already pointed out, the definition of what makes for a fair coin is circular.

    No, according to me, the definition of a fair coin really is a coin that will have a 0.5 probability of coming up heads on every toss. And as for definitions, that is a fine definition, I have no objections to it.

    It’s just that, when it comes to the real world, we first have to determine if we really in fact have a fair coin. So how do we determine that Mung? Suppose you were to make a bet on coin tosses. Someone says “it’s a fair coin, promise!”. Is it then a fair coin and are you willing to bet your money on it?

  24. Mung: And who the hell but you knows what you mean by “selection pressure” when you’re talking about coin tossing.

    Who the hell knows what selection pressure means when talking about anything?

    Its a completely fictitious force.

  25. Mung,

    I have a fair coin, but I have been tossing it for the past three years, and it comes up heads every time.

    I guess eventually it will even out.

  26. phoodoo: Who the hell knows what selection pressure means when talking about anything?

    Its a completely fictitious force.

    I want to make sure I understand this correctly. Are you really saying that there is no such thing as a selective pressure?

    So when, over the course of the long-term evolution experiment with E coli, the replication speed has improved from something like an hour when the experiment began 20 years ago, to them dividing in about 25 minutes now, that this … what? Didn’t actually happen?
    Or are you saying that the bacteria that can make the most offspring in the same amount of time, don’t actually come to constitute a greater proportion of the population? Is that what you’re saying?

    If you mean something else, then please elaborate. What do you mean by a selective pressure being “a completely fictitious force”?

  27. phoodoo:
    Mung,

    I have a fair coin, but I have been tossing it for the past three years, and it comes up heads every time.

    I guess eventually it will even out.

    Need to go to Vegas

  28. Mung: Why on earth would you say such a thing? I’d really like to know.

    Exhaustion. I hope nothing else. It’s hard to believe that I would write such a thing under any circumstance at all. But I see this morning that I did. It’s disturbing.

  29. phoodoo:
    Mung,

    I have a fair coin, but I have been tossing it for the past three years, and it comes up heads every time.

    I guess eventually it will even out.

    Perfect metaphor for your religious beliefs

  30. Joe Felsenstein: On the other hand (NPI)

    Google: “Dynamical bias in the coin toss”

    Joe Felsenstein:
    Now that I’m at a computer, I can post a link: here

    [Tom English bends the TSZ rules, and fixes the link.]

    Rumraket:
    Link doesn’t work.

    I fixed Joe’s link to Diaconis, Holmes, and Montgomery, “Dynamical Bias in the Coin Toss.” The bias is in the toss, not the coin. From Science News Online (2004), “Toss Out the Toss-Up: Bias in heads-or-tails“:

    In 1986, mathematician Joseph Keller, now an emeritus professor at Stanford, proved that one fair way to toss a coin is to throw it so that it spins perfectly around a horizontal axis through the coin’s center.

    Such a perfect toss would require superhuman precision. Every other possible toss is biased, according to an analysis described on Feb. 14 in Seattle at the annual meeting of the American Association for the Advancement of Science.

    Joe, I assume you attended the meeting. Were you one of the organizers? Did you see the presentation?

    What’s most important about this, in the context of ID, is that the process is not random, but instead (quasi-)deterministic. We model it as random.

  31. Tom English: In 1986, mathematician Joseph Keller, now an emeritus professor at Stanford, proved that one fair way to toss a coin is to throw it so that it spins perfectly around a horizontal axis through the coin’s center.

    Such a perfect toss would require superhuman precision. Every other possible toss is biased, according to an analysis described on Feb. 14 in Seattle at the annual meeting of the American Association for the Advancement of Science.

    Joe, I assume you attended the meeting. Were you one of the organizers? Did you see the presentation?

    What’s most important about this, in the context of ID, is that the process is not random, but instead (quasi-)deterministic. We model it as random.

    My travel records have no record as to whether I attended the 1986 AAAS regional meetings (the 1986 annual meetings were in Philadelphia, I see). Of course it was not “travel” so I may not have kept a record in that folder. I had no role in organizing them, partly because I was not an AAAS member then. I do not recall having heard Joe Keller give that talk. I have heard him give a great one on the math of stealth technology, which he was a pioneer in.

    The uncontrollable part of the coin toss is our fallible control over our muscles and nerves when tossing. It is like pseudorandom numbers, but even less controlled than that.

    I was quite impressed that Diaconis, Holmes, and Montgomery managed to construct a perfect coin-tossing machine that gives heads every time.

  32. While we’re at it, I must report a survey I did in a large class in the late 1970s. I was talking about genetic drift, or perhaps Mendelian segregation, and I wanted to convey the notion of independent tosses. I posed the following to my class: You have a fair coin and are making independent tosses. You toss 10 times and get 10 Heads. The coin is not biased. You make another independent toss. Which is more likely for you to get: Heads or Tails?

    The class split into three groups, those who said Heads, those who said Tails, and those who said it was 50:50. There were more of the first group, a minority of the second, and a small minority of the third. The third group is of course correct.

    The first group argued that you’ve got a “run of luck” going, so you’re more likely to get Heads. The second group was more interesting. They argued that, since the fraction of tosses tends to be 50:50 in the long run, that therefore you were more likely to get Tails! I declared the third group to be correct, but I’m pretty sure that I didn’t change many minds. I was astonished that the second group used a supposedly-sophisticated argument about the Law of Large Numbers to come to a false conclusion.

  33. Joe Felsenstein: My travel records have no record as to whether I attended the 1986 AAAS regional meetings (the 1986 annual meetings were in Philadelphia, I see).

    Diaconis et al. presented at the 2004 AAAS meeting in Seattle. Sorry not to have made that clear.

  34. Joe Felsenstein: I was quite impressed that Diaconis, Holmes, and Montgomery managed to construct a perfect coin-tossing machine that gives heads every time.

    I’m waiting with bated breath to hear from Mung and/or phoodoo. “How could you know that? Did you run the machine forever? Does the machine never break? Is it an exception to the Second Law of Thermodynamics?”

  35. Tom English: Diaconis et al. presented at the 2004 AAAS meeting in Seattle. Sorry not to have made that clear.

    Persi did visit here about then, but I recall him talking about something else quite interesting, and I am not sure it was under the auspices of AAAS. As I continued not to be a member of AAAS I was not involved in any hosting of those meetings.

    Persi is famous for having been a very-high-level professional magician as a young man, so I always assumed he would be a somewhat elusive and difficult person. When I met him, which was about then, I was surprised that he turned out to be a very unassuming and easygoing person, a nice guy.

  36. Mung: No. No. The more heads you get the even more heads you’ll get.

    I think this could help to explain how fish got overlapping scales that help them glide effortlessly through water, and how birds got feathers.

    If you get a mutation for ONE scale, like say somewhere around your left ear, then the odds of getting another one, around your right ear, well, they have just gone up. And then if you get another lucky mutation, which gives you a fish scale near your right ear, well, you have just struck gold. Pop, pop, pop, the lucky mutations are going to start flying (or swimming) now, baby!

    Its really lucky for lucky things, that once something lucky happens, it is more likely to get lucky.

    But some would argue, that still doesn’t explain the overlapping part of the perfect fish scales. Sure it does. Luck. Multiplied!

  37. phoodoo: Yea, you would believe that this analogy applies to evolution, wouldn’t you Rum? Is the bag finite? Do you know how many are in the bag to start with? Why would pulling anything out of the bag tell you what is remaining in the bag, if you don’t know how many kinds of things, or how much was in the bag to begin with?

    What you have pulled out tells you absolutely zero about the frequencies inside remaining, if you never know how many there were. If you pulled out ten, how do you know there isn’t a billion inside? So you pulled out 7 red things and 2 blue things, do you know anything about what else is inside? If you pulled out 1000, how do you know there isn’t 10^25 things inside?

    Okaaay, let’s get this misconception out of the way first. What we are interested in, in this analogy, is the proportions of different types of balls in the bag, the relative frequencies. Let’s recognize that the bag is very, very big compared with the size of our sample.
    If we pulled out 70 red things and 20 blue things, and nothing else, we would have gained knowledge about the probable proportions of things remaining in the bag. The bag is unlikely to be 99% green things. And even if the sampling is heavily biased, it is still wrong to state “What you have pulled out tells you absolutely zero about the frequencies inside remaining”. Plain wrong.
    Now let’s deal with Mung’s original mocking of Rumraket’s comment re eye evolution, and his subsequent Gish Gallop away from the fact that Mung was wrong.
    Let’s consider a very large black bag, from which we extract balls. We sample 100 balls, and one of them has an eye on. Now we might like to estimate that the proportion of balls with eyes is less than 10%, but the problem here is that we know that our sampling is somewhat hinky, and not truly representative. Let’s just say that the proportion of balls in the black bag with eyes on is x% for now.
    Imagine a second, white, bag. We draw 100 balls from it using the same hinky sampling procedure and get 75 with eyes on. Interesting. We don’t know what x is, but thanks to the fact that the sampling process is identical, we can say that the proportion of eyes is greater in the white bag.
    Mung, the black bag and the white bag represent alternative universes in which eye evolution is more or less unlikely.
    Rumraket was correct.
    (In your coin toss analogy, carefully chosen to be deceptive thanks to our innate assumption that coins are fair, the correct analogy would be to a dime-tossing machine and a nickel-tossing machine: the dime tossing machine has produced heads 95 times and tails 5 times; the nickel tossing machine, 95 T and 5 H. Rumraket is pointing out that the dime-tosser is more likely to produce heads.)
    E4typo

  38. DNA_Jock: If we pulled out 70 red things and 20 blue things, and nothing else, we would have gained knowledge about the probable proportions of things remaining in the bag. The bag is unlikely to be 99% green things.

    Oh really? Well, just how unlikely is it? Like, sort of unlikely? Kinda? Like I doubt it unlikely? Or is it like, “Oh pleeeeeaaasse. There is no way it is 99% green, I think.”

    Because you see DNA, what you have actually done, by taking out 90 objects, is to sample .000000000000000000000 0000000000000000000
    0000000000000000000 0000000000000000000 0000000000000000000000000000000000000000000000000000000000000000 0000000000000000000 00000000000000000000000000000001 (I left out 10*486 zeros to save space) % of the entire contents of the bag.

    So how unlikely is it that it is 99% green? Maybe its only “Like, oh yea right!” unlikely? Or is it “Ahh, maybe”?

    See kids, these are the math experts who are going to teach you to be smart-excited kids? Sorry, it’s the best we got.
