Evo-Info 3: Evolution is not search

Introduction to Evolutionary Informatics, by Robert J. Marks II, the “Charles Darwin of Intelligent Design”; William A. Dembski, the “Isaac Newton of Information Theory”; and Winston Ewert, the “Charles Ingram of Active Information.” World Scientific, 332 pages.
Classification: Engineering mathematics. Engineering analysis. (TA347)
Subjects: Evolutionary computation. Information technology–Mathematics.

Marks, Dembski, and Ewert open Chapter 3 by stating the central fallacy of evolutionary informatics: “Evolution is often modeled by as [sic] a search process.” The long and the short of it is that they do not understand the models, and consequently mistake what a modeler does for what an engineer might do when searching for a solution to a given problem. What I hope to convey in this post, primarily by means of graphics, is that fine-tuning a model of evolution, and thereby obtaining an evolutionary process in which a maximally fit individual emerges rapidly, is nothing like informing evolution to search for the best solution to a problem. We consider, specifically, a simulation model presented by Christian apologist David Glass in a paper challenging evolutionary gradualism à la Dawkins. The behavior on exhibit below is qualitatively similar to that of various biological models of evolution.

Animation 1. Parental populations in the first 2000 generations of a run of the Glass model, with parameters (mutation rate .005, population size 500) tuned to speed the first occurrence of maximum fitness (1857 generations, on average), are shown in orange. Offspring are generated in pairs by recombination and mutation of heritable traits of randomly mated parents. The fitness of an individual in the parental population is, loosely, the number of pairs of offspring it is expected to leave. In each generation, the parental population is replaced by surviving offspring. Which of the offspring die is arbitrary. When the model is modified to begin with a maximally fit population, the long-term regime of the resulting process (blue) is the same as for the original process. Rather than seek out maximum fitness, the two evolutionary processes settle into statistical equilibrium.

Figure 1. The two bar charts, orange (Glass model) and blue (modified Glass model), are the mean frequencies of fitnesses in the parental populations of the 998,000 generations following the 2,000 shown in Animation 1. The mean frequency distributions approximate the equilibrium distribution to which the evolutionary processes converge. In both cases, the mean and standard deviation of the fitnesses are 39.5 and 2.84, respectively, and the average frequency of fitness 50 is 0.0034. Maximum fitness occurs in only 1 of 295 generations, on average.

I should explain immediately that an individual organism is characterized by 50 heritable traits. For each trait, there are several variants. Some variants contribute 1 to the average number of offspring pairs left by individuals possessing them, and other variants contribute 0. The expected number of offspring pairs, or fitness, for an individual in the parental population is roughly the sum of the 0-1 contributions of its 50 traits. That is, fitness ranges from 0 to 50. It is irrelevant to the model what the traits and their variants actually are. In other words, there is no target type of organism specified independently of the evolutionary process. Note the circularity in saying that evolution searches for heritable traits that contribute to the propensity to leave offspring, whatever those traits might be.
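In code, the model's bookkeeping is nothing but bit-counting. Here is a minimal sketch in Python; the names and the bit-tuple representation are mine, chosen for illustration, not taken from Glass's implementation:

```python
import random

L = 50  # number of heritable traits

def random_genome(rng):
    """One individual: a 0-or-1 fitness contribution for each of the 50
    traits. What the traits and their variants actually are never enters."""
    return tuple(rng.randrange(2) for _ in range(L))

def fitness(genome):
    """Expected number of offspring pairs: the sum of the contributions,
    so fitness ranges from 0 to 50."""
    return sum(genome)

rng = random.Random(0)
g = random_genome(rng)
assert 0 <= fitness(g) <= L
```

Nothing here encodes a target; `fitness((1,) * 50) == 50` is just the largest possible bit count, not a goal the process is told to seek.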

The two evolutionary processes displayed above are identical, apart from their initial populations, and are statistically equivalent over the long term. Thus a general account of what occurs in one of them must apply to both of them. Surely you are not going to tell me that a search for the “target” of maximum fitness, when placed smack dab on the target, rushes away from the target, and subsequently finds it once in a blue moon. Hopefully you will allow that the occurrence of maximum fitness in an evolutionary process is an event of interest to us, not an event that evolution seeks to produce. Again, fitness is not the purpose of evolution, but instead the propensity of a type of organism to leave offspring. So why is it that, when the population is initially full of maximally fit individuals, the population does not stay that way indefinitely? In each generation, the parental population is replaced with surviving offspring, some of which are different in type (heritable traits) from their parents. The variety in offspring is due to recombination and mutation of parental traits. Even as the failure of parents to leave perfect copies of themselves contributes to the decrease of fitness in the blue process, it contributes also to the increase of fitness in the orange process.

Both of the evolutionary processes in Animation 1 settle into statistical equilibrium. That is, the effects of factors like differential reproduction and mutation on the frequencies of fitnesses in the population gradually come into balance. As the number of generations goes to infinity, the average frequencies of fitnesses cease to change (see “Wright, Fisher, and the Weasel,” by Joe Felsenstein). More precisely, the evolutionary processes converge to an equilibrium distribution, shown in Figure 1. This does not mean that the processes enter a state in which the frequencies of fitnesses in the population stay the same from one generation to the next. The equilibrium distribution is the underlying changelessness in a ceaselessly changing population. It is what your eyes would make of the flicker if I were to increase the frame rate of the animation, and show you a million generations in a minute.

Animation 2. As the mutation rate increases, the equilibrium distribution shifts from right to left, which is to say that the long-term mean fitness of the parental population decreases. The variance of the fitnesses (spread of the equilibrium distribution) increases until the mean reaches an intermediate value, and then decreases. Note that the fine-tuned mutation rate .005 ≈ 10^−2.3 in Figure 1.

Let’s forget about the blue process now, and consider how the orange (randomly initialized) process settles into statistical equilibrium, moving from left to right in Animation 1. The mutation rate determines

  1. the location and the spread of the equilibrium distribution, and also
  2. the speed of convergence to the equilibrium distribution.

Animation 2 makes the first point clear. In visual terms, an effect of increasing the mutation rate is to move the equilibrium distribution from right to left, placing it closer to the distribution of the initial population. The second point is intuitive: the closer the equilibrium distribution is to the frequency distribution of the initial population, the faster the evolutionary process “gets there.” Not only does the evolutionary process have “less far to go” to reach equilibrium, when the mutation rate is higher, but the frequency distribution of fitnesses changes faster. Animation 3 allows you to see the differences in rate of convergence to the equilibrium distribution for evolutionary processes with different mutation rates.

Animation 3. Shown are runs of the Glass model with the mutation rate we have focused upon, .005, doubled and halved. That is, u′ = 2 × .005 = .01 for the blue process, and u′ = .005 / 2 = .0025 for the orange process.

An increase in mutation rate speeds convergence to the equilibrium distribution, and reduces the mean frequency of maximum fitness.

I have selected a mutation rate that strikes an optimal balance between the time it takes for the evolutionary process to settle into equilibrium, and the time it takes for maximum fitness to occur when the process is at (or near) equilibrium. With the mutation rate set to .005, the average wait for the first occurrence of maximum fitness, in 1001 runs of the Glass model, is 1857 generations. Over the long term, maximum fitness occurs in about 1 of 295 generations. Although it’s not entirely accurate, it’s not too terribly wrong to think in terms of waiting an average of 1562 generations for the evolutionary process to reach equilibrium, and then waiting an average of 295 generations for a maximally fit individual to emerge. Increasing the mutation rate will decrease the first wait, but the decrease will be more than offset by an increase in the second wait.
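The rough additivity of the two waits can be sanity-checked with a few lines of arithmetic and simulation. The sketch below treats occurrences of maximum fitness at equilibrium as independent across generations, which, as just noted, is not entirely accurate:

```python
import random

p = 1 / 295                # long-run chance of maximum fitness per generation
assert 1562 + 295 == 1857  # settle-in wait plus equilibrium wait

def wait_for_first(rng, p):
    """Generations until the first occurrence, under the independence
    assumption: a geometric random variable with mean 1/p."""
    t = 1
    while rng.random() >= p:
        t += 1
    return t

rng = random.Random(0)
n = 20000
avg = sum(wait_for_first(rng, p) for _ in range(n)) / n
assert abs(avg - 295) < 15  # Monte Carlo mean is close to 1/p = 295
```

Raising the mutation rate shrinks the first term of the sum and inflates `1/p` in the second, which is the trade-off the text describes.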

Figure 2. Regarding Glass’s algorithm (“Parameter Dependence in Cumulative Selection,” Section 3) as a problem solver, the optimal mutation rate is inversely related to the squared string length (compare to his Figure 3). We focus on the case of string length (number of heritable traits) L = 50, population size N = 500, and mutation rate u′ = .005, with scaled mutation rate u′L² = 12.5 ≈ 2^3.64. The actual rate of mutation, commonly denoted u, is 26/27 times the rate reported by Glass. Note that each point on a curve corresponds to an evolutionary process. Setting the parameters does not inform the evolutionary search, as Marks et al. would have you believe, but instead defines an evolutionary process.

Figure 2 provides another perspective on the point at which changes in the two waiting times balance. In each curve, going from left to right, the mutation rate is increasing, the mean fitness at equilibrium is decreasing, and the speed of convergence to the equilibrium distribution is increasing. The middle curve (L = 50) in the middle pane (N = 500) corresponds to Animation 2. As we slide down the curve from the left, the equilibrium distribution in the animation moves to the left. The knee of the curve is the point where the increase in speed of convergence no longer offsets the increase in expected wait for maximum fitness to occur when the process is near equilibrium. The equilibrium distribution at that point is the one shown in Figure 1. Continuing along the curve, we now climb steeply. And it’s easy to see why, looking again at Figure 1. A small shift of the equilibrium distribution to the left, corresponding to a slight increase in mutation rate, greatly reduces the (already low) incidence of maximum fitness. This brings us to an important question, which I’m going to punt into the comments section: why would a biologist care about the expected wait for the first appearance of a type of organism that appears rarely?

You will not make sense of what you’ve seen if you cling to the misconception that evolution searches for the “target” of maximally fit organisms, and that I must have informed the search where to look. What I actually did, by fine-tuning the parameters of the Glass model, was to determine the location and the shape of the equilibrium distribution. For the mutation rate that I selected, the long-term average fitness of the population is only 79 percent of the maximum. So I did not inform the evolutionary process to seek out individuals of maximum fitness. I selected a process that settles far away from the maximum, but not too far away to suit my purpose, which is to observe maximum fitness rapidly. If my objective were to observe maximum fitness often, then I would reduce the mutation rate, and expect to wait longer for the evolutionary process to settle into equilibrium. In any case, my purpose for selecting a process is not the purpose of the process itself. All that the evolutionary process “does” is to settle into statistical equilibrium.

Sanity check of some claims in the book

Unfortunately, the most important thing to know about the Glass model is something that cannot be expressed in pictures: fitness has nothing to do with an objective specified independently of the evolutionary process. Which variants of traits contribute 1 to fitness, and which contribute 0, is irrelevant. The fact of the matter is that I ignore traits entirely in my implementation of the model, and keep track of 1s and 0s instead. Yet I have replicated Glass’s results. You cannot argue that I’ve informed the computer to search for a solution to a given problem when the solution simply does not exist within my program.
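A bit-tracking implementation of this kind can be sketched along the following lines. To be clear, this is a loose reconstruction from the description in the post, not the author's or Glass's actual code; in particular, the Poisson draw for offspring-pair counts and the uniform culling step are illustrative guesses:

```python
import math
import random

L, N, U = 50, 500, 0.005  # traits, population size, per-locus mutation rate

def poisson(lam, rng):
    """Poisson draw by Knuth's method; adequate for the means arising here."""
    if lam <= 0:
        return 0
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def offspring_pair(mom, dad, rng):
    """One recombined, mutated pair: each locus comes from either parent
    with equal chance, then flips independently with probability U."""
    pair = []
    for _ in range(2):
        kid = [m if rng.random() < 0.5 else d for m, d in zip(mom, dad)]
        pair.append(tuple(b ^ 1 if rng.random() < U else b for b in kid))
    return pair

def next_generation(pop, rng, cap=N):
    """Each parent leaves a random number of offspring pairs with mean
    equal to its fitness (its bit count), mated with a random partner.
    Which offspring die is arbitrary: survivors are a uniform sample."""
    brood = []
    for parent in pop:
        for _ in range(poisson(sum(parent), rng)):
            mate = rng.choice(pop)
            brood.extend(offspring_pair(parent, mate, rng))
    return brood if len(brood) <= cap else rng.sample(brood, cap)

rng = random.Random(0)
pop = [tuple(rng.randrange(2) for _ in range(L)) for _ in range(60)]
pop = next_generation(pop, rng, cap=60)  # small demo population
```

The whole state of the process is a bag of bitstrings. There are no trait identities anywhere, and hence no place for a target to hide, which is the point of the paragraph above.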

Let’s quickly test some assertions by Marks et al. (emphasis added by me) against the reality of the Glass model.

There have been numerous models proposed for Darwinian evolution. […] We show repeatedly that the proposed models all require inclusion of significant knowledge about the problem being solved. If a goal of a model is specified in advance, that’s not Darwinian evolution: it’s intelligent design. So ironically, these models of evolution purported to demonstrate Darwinian evolution necessitate an intelligent designer.

Chapter 1, “Introduction”


[T]he fundamentals of evolutionary models offered by Darwinists and those used by engineers and computer scientists are the same. There is always a teleological goal imposed by an omnipotent programmer, a fitness associated with the goal, a source of active information …, and stochastic updates.

Chapter 6, “Analysis of Some Biologically Motivated Evolutionary Models”


Evolution is often modeled by as [sic] a search process. Mutation, survival of the fittest and repopulation are the components of evolutionary search. Evolutionary search computer programs used by computer scientists for design are typically teleological — they have a goal in mind. This is a significant departure from the off-heard [sic] claim that Darwinian evolution has no goal in mind.

Chapter 3, “Design Search in Evolution and the Requirement of Intelligence”

My implementation of the Glass model tracks only fitnesses, not associated traits, so there cannot be a goal or problem specified independently of the evolutionary process.

Evolutionary models to date point strongly to the necessity of design. Indeed, all current models of evolution require information from an external designer in order to work. All current evolutionary models simply do not work without tapping into an external information source.

Preface to Introduction to Evolutionary Informatics


The sources of information in the fundamental Darwinian evolutionary model include (1) a large population of agents, (2) beneficial mutation, (3) survival of the fittest and (4) initialization.

Chapter 5, “Conservation of Information in Computer Search”

The enumerated items are attributes of an evolutionary process. Change the attributes, and you do not inform the process to search, but instead define a different process. Fitness is the probabilistic propensity of a type of organism to leave offspring, not search guidance coming from an “external information source.” The components of evolution in the Glass model are differential reproduction of individuals as a consequence of their differences in heritable traits, variety in the heritable traits of offspring resulting from recombination and mutation of parental traits, and a greater number of offspring than available resources permit to survive and reproduce. That, and nothing you will find in Introduction to Evolutionary Informatics, is a fundamental Darwinian account.

1,439 thoughts on “Evo-Info 3: Evolution is not search”

  1. DNA_Jock: I was addressing phoodoo’s misconception, phoodoo’s hopeless analogy, re the effect of the size of the bag, which I “got out of the way first”. You even admitted that you disagree with phoodoo, here.

    Yet oddly enough the exact same elements feature in the portions of your post addressed to me. And you never did answer my questions. Would you like me to repeat them?

    Yet all that appears to be moot at this point, since this entire analogy is irrelevant. You apparently thought not, though. So you too were fooled. As was keiths. And Allan. But you weren’t fooled by Rumraket. So I’m interested in how you got fooled. And by whom. Aren’t you?

    Why are you talking about relative frequencies and proportions and balls drawn from bags, given that they are quite irrelevant to Rumraket’s argument?

  2. DNA_Jock: Now let’s deal with Mung’s original mocking of Rumraket’s comment re eye evolution, and his subsequent Gish Gallop away from the fact that Mung was wrong.
    Let’s consider a very large black bag, from which we extract balls.

    Why? How is that relevant to Rumraket’s argument? He says it’s not.

    DNA_Jock:
    Mung, the black bag and the white bag represent alternative universes in which eye evolution is more or less unlikely. Rumraket was correct.

    How can that be so, since he clearly states his argument has nothing to do with any of this?

  3. Rumraket: There’d be no reason to “admit” to a factoid not relevant to any argument I’ve made.

    If you disagree that it is irrelevant, I have to ask: what specific argument of mine do you think the relative frequencies of eyes vs “not eyes” apply to?

  4. Mung: I would like to thank DNA_Jock for helping clarify the exact nature of the problem with Rumraket’s claims about eye evolution. He has absolutely no clue what the relative frequencies are.

    Can we all agree that Rumraket has no idea what the relative frequencies are?

    If we can agree on that, do you agree with Rumraket that the relative frequencies are irrelevant?

  5. I repeat:

    Rumraket, to Mung:

    The post of yours I am responding to, does. You quote me making a particular statement, and then you caricature it with an uncharitable interpretation. I want you to stop quoting that statement and giving that stupid interpretation of it when I have explicitly denied having engaged in such a line of reasoning. I think this is a fair request.

    It’s a request that shouldn’t need to be made. Your intended meaning was obvious from the outset, and you clearly were not making the dumb mistake that Mung was attributing to you. He was dishonestly attempting to score gotcha points.

    What do you accomplish with this bottom-feeder behavior, Mung?

  6. Rumraket: Mung: Let’s see if we can’t get back to something we both agree on.

    I quoted a couple of authors who claimed that it’s easy to evolve an eye and that eyes are not so improbable after all. I took exception to those claims, pointing out that they were based on lousy reasoning. Ring any bells?

    Then you came to the defense of those arguments.

    Are we on the same page yet?

    Yep, same page so far.

    We going to continue this Mung, or… ?

  7. keiths: What do you accomplish with this bottom-feeder behavior, Mung?

    It no doubt gives him the sense of achievement he imagines that you all have when you actually progress the boundaries of human knowledge. It’s cargo-cult science.

  8. Rumraket,

    Ok, well that ironically just proves my point even better.

    Yes, your point is valid – we all chatter away, and it becomes confusing. (eta – I will note that your discussion has spilled over into Tom’s thread. It’s ironic that you’re chastising me, however mildly, for a derail!).

    I also failed to understand what the hell you were trying to argue.

    There was a more serious point underlying the ‘bag’ variant though – Mung was sticking to cards, where the permutation space is already known in a standard 52 card deck. Whereas a more useful case IMO was the gaining of information from an unknown space by sampling.

  9. Allan Miller: There was a more serious point underlying the ‘bag’ variant though – Mung was sticking to cards, where the permutation space is already known in a standard 52 card deck. Whereas a more useful case IMO was the gaining of information from an unknown space by sampling.

    Indeed.
    And whilst the card deck was misleading, the coin-tossing is even more misleading.
    My point to phoodoo was that one can gain information about the unsampled population from looking at the sampled population, with the caveat that biased sampling may lead you astray. A point that he, with his terrier-like focus on the size of the unsampled population, does not appear to grasp.
    My example for Mung did NOT feature “the exact same elements”; Mung’s ability to misconstrue is, ahem, impressive. The way I was thinking of the eye-ball analogy was as follows:
    The bag contains all possible organisms. We sample from the bag, in a distinctly biased manner: for instance, we only sample organisms that are extant or, conceivably, left good enough fossil remains.
    Calculating precisely how unlikely event X might be from this sampling is problematic. For ID.
    Balls extracted from the bag fall into different categories: “not-eye”, “seen it before”, and “Whoa! WTF? New category!”.
    Rumraket’s point, AIUI, was that if our sampling reveals multiple separate occasions where event X occurred, then our best estimate of how unlikely event X is gets revised.
    Mung’s point, if he can be said to have one, would be that our estimate only gets revised if the proportion of W!WTF?NC!’s differs from our previous expectation. Well, duh. No-one* doubted that for a second.
    I may be giving Mung too much credit here, however.

    *site rules, mate, site rules.

  10. DNA_Jock: A point that he, with his terrier-like focus on the size of the unsampled population, does not appear to grasp.

    No I don’t think you grasped it.

    If you say that we assume 1% of the contents of an unknown population is unlike the other 99% (of course we could have said 10% is different, or 30%, but you went the easy route with 1%, fine), then we should expect to choose from that 1% one time in a hundred, if we choose only once. So of course we don’t know whether, the one time we chose, we just so happened to choose from that 1% that is not green (again, you went with 99%, choosing the most extreme hypothesis, which makes it easier for you to argue). Then if we happened to choose again, we might once again have chosen from that 1% that is not like the rest: we have done it twice now!

    Now, on our third attempt, is it still a 1% chance that we are going to choose the minority color, or have our odds gone down even more, since we have chosen it three times now? So what are the odds on the next draw? And the next?

    Now what happens if we want to know whether 60% are green? We choose once; it’s not green. Well, we had a 40% chance of that. We choose again; we still had a 40% chance, right? So how unlikely is it that we won’t choose green? Not very unlikely, right?

    But then you start claiming that after 5, or 6, or ten draws, if 60% are green, we should finally get a green. Well, why? We had a 40% chance each time that it is not green.

    So if your claim is that, after 20 draws, we should get green at least once, maybe twice, if 60% are green, because after all, with each draw the chances increase that we will get green, then you have just made the argument that Joe argued against: that every draw is independent. So if I bet against getting 10 reds in a row at roulette, and after 6 spins I have gotten all reds, are my chances of getting black increasing on the next spin, or the next one?

    Or put another way, should I bet against ever getting 20 reds in a row, or 30? Or 40 reds in a row? So if I go to a roulette table and see 6 reds in a row, I should bet on black. If it gets to 7 in a row, I should really bet on black, and if it gets to 11 in a row, all red, I should really, really bet on black, because what are the odds THAT THIS TIME, when I happened to be betting, there will be 12 reds in a row? Very low, right? And even lower that it will be 13, and much lower still that it will be 14!

    Each choice, each spin, each flip is either independent, or it is not.

    I know some statisticians will claim they can solve this conundrum, and I have heard their explanation. And I win in roulette 99% of the time (that is no exaggeration). Go figure.

  11. phoodoo,

    You start talking about an unknown situation, but then illustrate it with a known one. Roulette colours are (ignoring the ‘house wins’ 0 and 00) the same as coin flips: 50/50. You already know that, so the gambler would indeed be committing a fallacy.

    Better to choose a roulette wheel with a completely unknown distribution of colours, to be consistent. Each time you spin (you can’t see the wheel, only the outcome of 1 spin) you gain information about the wheel, which could indeed better inform your future bets.

    Of course, you will now come up with a particular wheel that this doesn’t work for. For example, one where every single slot has a different colour.
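Allan’s point, that each observed spin of an unknown wheel adds information about the wheel itself, can be made concrete with Laplace’s rule of succession. The uniform prior over the wheel’s red proportion is an illustrative assumption, not something stated in the thread:

```python
from fractions import Fraction

def posterior_mean_red(reds, spins):
    """Posterior mean of the unknown red proportion after observing
    `reds` reds in `spins` spins, starting from a uniform prior:
    (reds + 1) / (spins + 2), Laplace's rule of succession."""
    return Fraction(reds + 1, spins + 2)

# No data yet: red and not-red are equally credible.
assert posterior_mean_red(0, 0) == Fraction(1, 2)
# Nine reds in ten spins: the estimate of the WHEEL shifts toward red.
assert posterior_mean_red(9, 10) == Fraction(5, 6)
```

For a wheel already known to be 50:50, by contrast, no run of reds changes the chance of the next spin. The update concerns the unknown wheel, not outcomes being “due,” which is what separates sampling inference from the Gambler’s Fallacy.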

  12. Allan Miller: I will note that your discussion has spilled over into Tom’s thread.

    I do keep watching for something I can tie to the OP. I rather like references to sampling. The title of the post is “Evolution Is Not Search.” When Dembski delivered his swan song at the University of Chicago, three years ago this month, someone in the audience pointed out that evolution is not search. Dembski demanded to know: if it isn’t search, then what is it? I decided not to give in the OP the answer that comes most naturally to me, biased sampling. The problem is that not all biased sampling processes are evolutionary processes. I didn’t want to trade one misconception for another, even if the new misconception is better than the old one.

  13. phoodoo: then you have just made the argument that Joe argued against, that every draw is independent.

    I argued what? In my classroom example I specifically decreed that the tosses were independent, and then asked what people expected. Most of them had one of two different delusions, ones which would only be valid if the tosses weren’t independent. But I was very clear to them that they should start from the assumption that the tosses were independent, and they had just happened to get 10 Heads in their 10 tosses.

    I don’t have the energy to figure out the rest of phoodoo’s comment, but it looks odd.

  14. phoodoo: I win in roulette 99% of the time (that is no exaggeration).

    Well then, why on earth are you bothering making comments here when you could be off bankrupting casinos and spending the billions of dollars that would result?

    If you insist on wasting your time commenting here, perhaps you could devote some of the comments to teaching us how to win at roulette, or at the racetrack, or in the stock market.

    I’m of course doubtful that you have a system. Just as I would be doubtful if an alleged psychic claimed to have powers to foresee the future. For if they did, they should instead be using them in the stock market and becoming fantastically wealthy.

  15. Mung: My recent post has nothing to do with independent events. It has to do with sampling. It has to do with not specifying either the number of eyes sampled nor the number of “not eyes” sampled. It has to do with not knowing what the relative frequencies are because we lack the relevant data.

    Actually there is a sense in which the frequencies matter for my argument, but it is the frequency with which eyes evolve, out of some number of times independent evolutionary histories take place on some planet.

    So you are right, I concede that it was a total brainfart on my part to say my argument has nothing to do with relative frequencies. It does, and I made a mistake when I said it didn’t.

  16. phoodoo,

    No I don’t think you grasped it.

    Not surprised that you don’t think I grasped it. The question is, which of us is wrong…

    [snip]
    But if you start claiming, that after 5, or 6 or ten draws, if 60% are green, we should finally get a green. Well, why, we had a 40% chance each time that it is not green.
    So if your claim is that, after 20 draws, we should get green at least once, maybe twice, if 60% are green, because after all, with each draw the chances increase that we will get green, then you have just made the argument that Joe argued against-that every draw is independent.

    If 60% are green and either {we are sampling-with-replacement, thus independent events} or {the bag is really big, thus effectively independent events} then after 20 draws
    P(0 green) = 1.1 × 10^-8
    P(1 green) = 3.3 × 10^-7
    P(2 green) = 4.7 × 10^-6
    P(more than 2 green) = 99.9995%
    So “at least once, maybe twice,” is a bit of an underestimate…
    This is high school math: Binomial(20, 0.6)
    The chance that the next ball is green never changes (grade school math).

    So if I bet against getting 10 red in a row on roulette, if after 6 times I got all reds, are my chances increasing that I will get black on the next one, or the next one?

    Nope.

    Or put another way, should I bet against ever getting 20 reds in a row, or 30? Or 40 reds in a row? So if I go to a roulette table, and I see 6 reds in a row, I should bet on black. If it gets to 7 in a row, I should really bet on black, and if it gets to 11 times in a row, all red, I should really really bet on black, because what are the odds, THAT THIS TIME that I happened to be betting, there will be 12 reds in a row, very low right? And even lower that it will be 13, even much lower that it will be 14!

    Nope. Cute, though, that you have invoked an entirely different fallacy, the Gambler’s Fallacy, and gotten Mung’s mischaracterization of Rumraket’s point BACKWARDS:
    Rumraket would, according to Mung, expect red to be MORE likely following a run of reds, not less.

    Each choice, each spin, each flip is either independent, or it is not.
    I know some statisticians will claim they can solve this conundrum, and I have heard their explanation. And I win in roulette 99% of the time (that is no exaggeration). Go figure.

    Here’s a way of winning 99% of the time, which seems like a good idea, thanks to the Gambler’s Fallacy. (I suspect that you might even use it…) :
    Bet “Martingale”. That is, bet red on every spin. If you lose, double your bet. After winning, return to your original bet.
    Start with ~500 chips, and you will win on ~99% of your trips to the table.
    But when you lose…
    Moving on.
    There’s a reason for thinking about pulling balls out of bags, rather than coin flips, roulette wheels and poker hands.
    With the roulette wheel, we KNOW that red/black odd/even high/low are 50:50 propositions. In the real world of sampling and inference, we don’t know what is in the bag. We gain information about the contents of the bag, based on what we have drawn from it. You do not seem to understand this. I will try again.
    I present you with four bags, each contains 1000 balls.
    The bags are labeled U1, K1, U2, K2.
    You have personally confirmed that bags K1 and K2 each contain 500 red balls, and 500 white balls. You have no idea whatsoever about the color content of U1 and U2.
    Blindfolded, and after a lot of shaking, you draw 10 balls from each bag.
    The results:
    K1 : 9 red, 1 white
    U1 : 9 red, 1 white
    K2 : 1 red, 9 white
    U2 : 1 red, 9 white
    There are now 990 balls in each bag. I now say to you “Draw one ball from the bag of your choice. If it’s red, I pay you $2,000; if not, I kick you in the nuts.”
    You KNOW that your chances are slightly better than 50:50 if you go for bag K2 : it’s 499/990.
    K1 is a bad idea – chances of red are only 491/990.
    But what about U1? What do you reckon the proportion of red balls might be in that bag?
    What’s your best estimate of the proportion of red balls in that bag? How would it change if you drew a further ten balls from that bag, and they turned out to be 5 of each color? THIS is what the grown-ups have been talking about.
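Two of the numerical claims in this comment can be checked directly: the Binomial(20, 0.6) probabilities, and the roughly-99% Martingale win rate. The Martingale session rules in the sketch (1-chip base bet, 500-chip bankroll, quit after two won cycles, single-zero 18/37 win chance) are my assumptions; the comment does not pin them down:

```python
import math
import random

# --- Binomial(20, 0.6) check ------------------------------------------
def binom_pmf(n, k, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

p0, p1, p2 = (binom_pmf(20, k, 0.6) for k in (0, 1, 2))
assert abs(p0 - 1.1e-8) < 1e-9        # P(0 green)
assert abs(p1 - 3.3e-7) < 1e-8        # P(1 green)
assert abs(p2 - 4.7e-6) < 1e-7        # P(2 green)
assert 1 - (p0 + p1 + p2) > 0.999994  # P(more than 2 green)

# --- Martingale trips --------------------------------------------------
def martingale_trip(rng, bankroll=500, base=1, cycles=2, p_win=18/37):
    """Bet on red; double after each loss; a win ends the cycle (+base).
    Leave after `cycles` won cycles, or bust when the next doubled bet
    cannot be covered."""
    for _ in range(cycles):
        bet = base
        while True:
            if bet > bankroll:
                return False  # bust: the losing streak ruined us
            if rng.random() < p_win:
                bankroll += bet
                break
            bankroll -= bet
            bet *= 2
    return True

rng = random.Random(0)
trips = 20000
rate = sum(martingale_trip(rng) for _ in range(trips)) / trips
assert 0.98 < rate < 0.997  # wins most trips, as claimed
```

The rare bust forfeits the 255 chips staked during the losing streak, which is how a ~99% win rate coexists with a negative expectation.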

  17. Tom English,

    I volunteer “Exploration” as tribute. A key point being that much of the bias comes from adjacency, and inaccessibility matters…

  18. DNA_Jock,

    No DNA, I let you play your game of changing the goalposts several times now, but not this time. I said nothing about a bag which has a known quantity inside; I said an unknown quantity. It was you who said that you can learn about the contents of a bag of unknown quantities by virtue of what you have removed. Of course you can learn trivial things about what the bag isn’t by what you have drawn, but you can’t learn about what the bag is. For instance, if you pull a red one, you can know that the bag didn’t contain 100% black things, but it still doesn’t tell you what actually is in the bag; you can’t change that.

    Furthermore, you didn’t really address the point here either. What prevents an unlikely draw from occurring multiple times? Nothing. And further, you can’t say when the threshold of likely or unlikely is crossed; there is no measure of that. So perhaps, each time you choose a red, you are simply choosing from the 1 percent that is not green. You have no way of saying for sure how many times choosing from the 1 percent can’t happen. In 1000 picks, can I choose from the 1% each time? You can’t answer that definitively.

    When does unlikely become impossible, you have no idea.

  19. phoodoo: What prevents an unlikely draw from occurring multiple times?

    The Law of Large Numbers.

    When does unlikely become impossible, you have no idea.

    It doesn’t become impossible, just increasingly unlikely.
    What is it with creationists and absolutes?
    Grade School math, phoodoo.
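    [Ed.: The Law of Large Numbers point is easy to illustrate with a toy simulation — the 1% rare-draw rate, the draw count, and the seed here are illustrative choices, not anything from the thread. Nothing forbids any single rare draw, or even a run of them, but runs have geometrically shrinking probability, and the observed frequency settles near the true rate:]

```python
import random

def rare_draw_count(p_rare=0.01, n_draws=100_000, seed=1):
    """Count 'rare' outcomes among n_draws independent draws,
    each rare with probability p_rare."""
    rng = random.Random(seed)
    return sum(rng.random() < p_rare for _ in range(n_draws))

hits = rare_draw_count()
print(hits / 100_000)   # observed frequency, close to the true rate 0.01
print(0.01 ** 4)        # even 4 rare draws in a row has probability 1e-08
```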

    phoodoo: It was you who said that you can learn about the contents of a bag of unknown quantities by virtue of what you have removed. Of course you can learn trivial things about what the bag isn’t, by what you have drawn,…

    Glad to see you finally admit that you were wrong with your “What you have pulled out tells you absolutely zero about the frequencies inside remaining” claim.

  20. Joe Felsenstein,

    First off, I do do a lot of gambling, and I have considered retiring and doing that full time. Second, I win virtually always, but it takes a lot of time. I have done this hundreds of times, and I have only ever walked away losing one time. You can believe whatever you want, but I have the story to prove it.

    Secondly, when do you determine that something is an independent event, and when is it not? Each time you flip a coin it is an independent event, but each time you draw from an infinite bag it is not?

  21. DNA_Jock,

    No, read again. You can only learn what it is not, not what it is.

    DNA_Jock: Glad to see you finally admit that you were wrong with your “What you have pulled out tells you absolutely zero about the frequencies inside remaining” claim.

  22. DNA_Jock: What is it with creationists and absolutes?

    What is it with evolutionists and relativism? Oh, wait, relative frequencies are irrelevant. All of a sudden.

  23. phoodoo: What prevents an unlikely draw from occurring multiple times? Nothing.

    Right. It happens in evolution all the time!

  24. DNA_Jock: …and gotten Mung’s mischaracterization of Rumraket’s point BACKWARDS

    LoL. No wonder you don’t have a clue what I’m actually saying.

  25. DNA_Jock: There’s a reason for thinking about pulling balls out of bags, rather than coin flips, roulette wheels and poker hands. With the roulette wheel, we KNOW that red/black odd/even high/low are 50:50 propositions. In the real world of sampling and inference, we don’t know what is in the bag. We gain information about the contents of the bag, based on what we have drawn from it. You do not seem to understand this. I will try again.

    We know there are eyes. We know there are far more things that are not an eye. So we can expect to pull far more items from the bag that are not an eye. It doesn’t matter how many eyes you pull from the bag, nothing is going to change the fact that the ratio is in favor of you not getting an eye.

    Do you understand that yet?

  26. DNA_Jock: There are now 990 balls in each bag. I now say to you “Draw one ball from the bag of your choice. If it’s red, I pay you $2,000; if not, I lick you in the nuts.”

    Sounds like a no lose proposition to me. Sign me up!

  27. Rumraket: So you are right, I concede that it was a total brainfart on my part to say my argument has nothing to do with relative frequencies. It does, and I made a mistake when I said it didn’t.

    Thank you. Thank you. Thank you.

  28. Joe Felsenstein: If you insist on wasting your time commenting here, perhaps you could devote some of the comments to teaching us how to win at roulette, or at the racetrack, or in the stock market.

    No one wants to learn how to win at life anymore.

  29. Tom English: I do keep watching for something I can tie to the OP. I rather like references to sampling.

    I think it was the introduction of sampling with respect to eyes that brought that line into this thread.

  30. Mung: We know there are eyes. We know there are far more things that are not an eye. So we can expect to pull far more items from the bag that are not an eye. It doesn’t matter how many eyes you pull from the bag, nothing is going to change the fact that the ratio is in favor of you not getting an eye.

    No doubt, but the more eyes you pull out, the less unfavorable it was to start with.

  31. Mung: We know there are eyes. We know there are far more things that are not an eye. So we can expect to pull far more items from the bag that are not an eye. It doesn’t matter how many eyes you pull from the bag, nothing is going to change the fact that the ratio is in favor of you not getting an eye.

    Do you understand that yet?

    Shouldn’t that argument also work for all the other things that are not eyes?
    Does that mean that every time you pull anything from the bag, whatever it is, you just beat the odds? Should we label this the miraculous interpretation of probability?

  32. These questions are beyond my ability to answer. Perhaps keiths will step up to the plate. I’m sure he knows the answers.

  33. I particularly enjoy the fact that Mung only responds to what I write addressing phoodoo’s misconceptions, never replying to what is addressed to Mung.
    Giddy up!

  34. DNA_Jock: I particularly enjoy the fact that Mung only responds to what I write addressing phoodoo’s misconceptions, never replying to what is addressed to Mung.
    Giddy up!

    Blatantly and egregiously false. First thing being that there is nothing new that you addressed to me. Certainly nothing since my post at the top of this page, which was addressed to things you wrote.

    ETA: What is your most recent post that I failed to respond to?

  35. phoodoo: Second, I win, virtually always, but it takes a lot of time. I have done this hundreds of times, and I have only ever walked away losing one time.

    Darn. I thought you had a system that would guarantee winning on 99% of the spins of the roulette wheel. Walking away with a $1 profit after a whole day of play seems less dramatic, though maybe not easily achievable.

  36. Mung: DNA_Jock: I particularly enjoy the fact that Mung only responds to what I write addressing phoodoo’s misconceptions, never replying to what is addressed to Mung.
    Giddy up!

    Blatantly and egregiously false. First thing being that there is nothing new that you addressed to me. Certainly nothing since my post at the top of this page, which was addressed to things you wrote.

    ETA: What is your most recent post that I failed to respond to?

    ROFL.
    You sure have a habit of responding to my posts – either reacting to something I wrote to phoodoo, or throwing out random Gish questions. You have demonstrably and repeatedly failed to address the portions of those same posts that were directed to you and your “argument”. But that’s because I never address anything new to you, apparently. Sure, Mung, whatever you have to tell yourself.
    TAYQ: see posts 509, 499, 480 (Bert and Ernie), 470, and 449 in this thread.

  37. phoodoo: Secondly, when do you determine that something is an independent event, and when is it not? Each time you flip a coin it is an independent event, but each time you draw from an infinite bag it is not?

    Did I ever say that it is not? I actually did not say anything about drawing from a bag, and posted a reference about how tossing a coin may not be as random as we like to think.

  38. DNA_Jock: I particularly enjoy the fact that Mung only responds to what I write addressing phoodoo’s misconceptions, never replying to what is addressed to Mung.

    Let’s see if that’s true. For example, DNA_Jock to me:

    DNA_Jock: Would you like to confirm that for him? He might believe you.

    To which I replied:

    Mung: I disagree with phoodoo. But it could be that I don’t understand his argument. Maybe he’s channeling Hume.

    So there we drew a sample from our black bag and came out with one ball in favor of Mung does reply to what is addressed to Mung.

  39. DNA_Jock, are we going to count me responding to your post accusing me of not responding as a response? Because maybe it wasn’t addressed to me. 🙂

  40. DNA_Jock: TAYQ: see posts 509, 499, 480 (Bert and Ernie), 470, and 449 in this thread.

    I don’t know which posts these three digit numbers refer to. Are you using some sort of reader that displays those numbers to you? The Bert and Ernie reference helped on one.

    DNA_Jock: Increasingly likely relative to our previous, less well-informed, estimate.

    Yeah. Funny. No wonder I didn’t respond. I was supposed to take that seriously?

    ETA: I also did respond to your other comments in that same post. But no, I didn’t respond to your analogy in that post. You want me to respond to your analogy?

  41. DNA_Jock: Yet another analogy:
    Bert: “A perfect game is incredibly unlikely — it has only ever happened once.”
    Ernie: “I don’t think so, Bert. There have been 23 Perfect Games in MLB. I guess Perfect Games are more likely than you originally thought.”
    Big Bird: “Guess so, kids.”
    Bert: “Rubbish! You don’t know how unlikely a Perfect Game is! How many times has a game not been a perfect game, huh? Did you take into account the players’ strike and the introduction of the Divisional Series? Huh? Huh?”
    Big Bird: “No Bert, what Ernie said is true.”

    No one ever claimed there’s only ever been one perfect game pitched. So what is this supposed to be analogous to?

    No one but you knows what you mean by “incredibly unlikely.” Want to put some numbers to that? Do you think that perfect games are incredibly unlikely? Likely? Not likely? How do we decide where to put the line between very unlikely and incredibly unlikely? Any thoughts?

    Concerning major league baseball.

    No pitcher has ever pitched more than one perfect game. I’d say that makes them pretty rare. Wouldn’t you?

    There have only been 23 perfect games pitched. I’d say that makes them pretty rare, wouldn’t you?

    We can even talk about relative frequency and proportionality. Unless you just don’t think those are relevant. Don’t you think they are relevant?

    Out of over 210,000 games played, only 23 perfect games. I’d say that makes them pretty rare. Incredibly unlikely. Who on earth goes to a ball game expecting to see a perfect game pitched? What are the Vegas odds, I wonder.

    So if we’re picking games out of our hat, we’re far more likely to pick out a game that is not a perfect game. And it’s not like 1 in 10. Or 1 in 100. Or even 1 in 1000.

    Pull out your handy dandy pocket calculator and divide 23 by 210,000.

    1.095238095238095e-4

    Now let’s say God’s smiling on you and you go to a ballgame, and lo and behold you get to watch the 24th perfect game ever. Boy, the odds for more perfect games being pitched just went through the roof!

    1.142857142857143e-4

    Of course, don’t expect to see a perfect game the next time you go out. And don’t bet on it. And as the games that are not perfect games continue to add up, things will be as they ever were. Perfect games will still be extremely rare. Even if you did miraculously get to see one.

    Somewhere along the line I seem to have totally missed the point of this analogy.

    Which was?

    I guess Perfect Games are more likely than you originally thought.

    No, they are still rare. And every time a perfect game is not pitched they become ever more unlikely. LoL!
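    [Ed.: Mung's arithmetic above checks out, with one quibble: after a 24th perfect game the games-played count also goes up by one, so the updated frequency is 24/210,001 rather than 24/210,000 — negligible at this scale, but worth stating. In code:]

```python
games, perfect = 210_000, 23

before = perfect / games              # Mung's 23 / 210,000
after = (perfect + 1) / (games + 1)   # the 24th perfect game adds a game too

print(before)   # roughly 1.095e-04
print(after)    # roughly 1.143e-04, about a 4% relative jump
```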

  42. Mung: And every time a perfect game is not pitched they become ever more unlikely. LoL!

    By how much? No handy dandy pocket calculator there?

  43. Hey, Mung, nice to see you finally respond to something that addresses your argument.

    Mung: Somewhere along the line I seem to have totally missed the point of this analogy.

    I. Guess. So.

    I guess Perfect Games are more likely than you originally thought.

    No, they are still rare.

    Here, in distilled form, is your failure to comprehend. Perfect games are still rare, but they are also more likely than Bert, in this hypothetical, originally thought. There’s no contradiction at all.

    And every time a perfect game is not pitched they become ever more unlikely. LoL!

    Close, but no cigar. Each time a game is not perfect, our best estimate for P(Perfect Game) goes down. A little. And as you noted, every time a Perfect Game IS pitched our best estimate goes up. Noticeably. If, and only if, the proportion of Perfect Games in recently acquired data is higher than our previous estimate, then our estimate for P(Perfect Game) will be revised upwards.
    If out of, say, 210,000 games, we (like Bert) think that there has only been one Perfect Game, then we would estimate that P(Perfect Game) is about 1/210,000 (with a rather wide margin of error, btw). Ernie provides more complete data, showing that the correct numbers are 23/210,100. Holy shit! Our best estimate has gone up dramatically.
    Sure, Mung, they are still rare, just not as unlikely as Bert originally thought.
    AIUI, this was Rumraket’s point that you misconstrued. So when you find “Increasingly likely relative to our previous, less well-informed, estimate.” to be “funny” and “not to be taken seriously”, you are displaying your ostrich-like inability to comprehend. Positively phoodooesque.
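    [Ed.: DNA_Jock's updating rule reduces to a one-liner: pool the old and new counts, and the estimate rises exactly when the new data's success proportion beats the old estimate. A sketch using the Bert-and-Ernie numbers:]

```python
def pooled_estimate(old_hits, old_trials, new_hits, new_trials):
    """Frequency estimate after folding new observations into old ones."""
    return (old_hits + new_hits) / (old_trials + new_trials)

bert = 1 / 210_000   # Bert's original estimate: 1 perfect game in 210,000

# One more imperfect game: the estimate drops, a little.
down = pooled_estimate(1, 210_000, 0, 1)
# One more perfect game: the estimate jumps, noticeably.
up = pooled_estimate(1, 210_000, 1, 1)

print(down < bert < up)   # True
```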

  44. Mung, you asked re the post numbers 509, 499, 480 (Bert and Ernie), 470, and 449.
    You may not have noticed, but there are 50 comments on each page. So the formula
    (comment-page number -1) x 50 + (number on page) yields a unique post number that identifies any comment in a thread. The first comment in a thread is number 1, this comment is number 549.
    You are getting more and more like phoodoo, btw.
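    [Ed.: The numbering scheme above, transcribed directly as a function, with the 50-comments-per-page figure as the default:]

```python
def comment_number(page, position_on_page, per_page=50):
    """Unique comment number from the comment-page number and the
    comment's position on that page."""
    return (page - 1) * per_page + position_on_page

print(comment_number(1, 1))     # 1, the first comment in a thread
print(comment_number(11, 49))   # 549, the comment quoted above
```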

  45. DNA_Jock: Perfect games are still rare, but they are also more likely than Bert, in this hypothetical, originally thought. There’s no contradiction at all.

    Who ever claimed there was a contradiction? As I said, you get off on the wrong foot by having Bert assign an actual number to how rare a perfect game is, especially one that is so obviously wrong.

    However, if one evolutionist says what I originally quoted from the Ridley book, and another one comes along and says the opposite, that’s a contradiction. I’ll grant you that.
