Evo-Info review: Do not buy the book until…

Introduction to Evolutionary Informatics, by Robert J. Marks II, the “Charles Darwin of Intelligent Design”; William A. Dembski, the “Isaac Newton of Information Theory”; and Winston Ewert, the “Charles Ingram of Active Information.” World Scientific, 332 pages.
Classification: Engineering mathematics. Engineering analysis. (TA347)
Subjects: Evolutionary computation. Information technology–Mathematics.

… the authors establish that their mathematical analysis of search applies to models of evolution.

I have all sorts of fancy stuff to say about the new book by Marks, Dembski, and Ewert. But I wonder whether I should say anything fancy at all. There is a ginormous flaw in evolutionary informatics, quite easy to see when it’s pointed out to you. The authors develop mathematical analysis of apples, and then apply it to oranges. You need not know what apples and oranges are to see that the authors have got some explaining to do. When applying the analysis to an orange, they must identify their assumptions about apples, and show that the assumptions hold also for the orange. Otherwise the results are meaningless.

The authors have proved that there is “conservation of information” in search for a solution to a problem. I have simplified, generalized, and trivialized their results. I have also explained that their measure of “information” is actually a measure of performance. But I see now that the technical points really do not matter. What matters is that the authors have never identified, let alone justified, the assumptions of the math in their studies of evolutionary models.a They have measured “information” in models, and made a big deal of it because “information” is conserved in search for a solution to a problem. What does search for a solution to a problem have to do with modeling of evolution? Search me. In the absence of a demonstration that their “conservation of information” math applies to a model of evolution, their measurement of “information” means nothing. It especially does not mean that the evolutionary process in the model is intelligently designed by the modeler.1

I was going to post an explanation of why the analysis of search does not apply to modeling of evolution. But I realized that it would give the impression that the burden is on me to show that the authors have misapplied the analysis.2 As soon as I raise objections, the “Charles Ingram of active information” will try to turn the issue into what I have said. The issue is what he and his coauthors have never bothered to say, from 2009 to the present. As I indicated above, they must start by stating the assumptions of the math. Then they must establish that the assumptions hold for a particular model that they address. Every one of you recognizes this as a correct description of how mathematical analysis works. I suspect that the authors recognize that they cannot deliver. In the book, they work hard at fostering the misconception that an evolutionary model is essentially the same as an evolutionary search. As I explained in a sidebar to the Evo-Info series, the two are definitely not the same. Most readers will swallow the false conflation, however, and consequently will be incapable of conceiving that analysis of an evolutionary model as search needs justification.

The premise of evolutionary informatics is that evolution requires information. Until the authors demonstrate that the “conservation of information” results for search apply to models of evolution, Introduction to Evolutionary Informatics will be worthless.


1 Joe Felsenstein came up with a striking demonstration that design is not required for “information.” In his GUC Bug model (presented in a post coauthored by me), genotypes are randomly associated with fitnesses. There obviously is no design in the fitness landscape, and yet we measured a substantial quantity of “information” in the model. The “Charles Ingram of active information” twice feigned a response, first ignoring our model entirely, and then silently changing both our model and his measure of active information.

2 Actually, I have already explained why the “conservation of information” math does not apply to models of evolution, including Joe’s GUC Bug. I recently wrote a much shorter and much sweeter explanation, to be posted in my own sweet time.

a ETA: Marks et al. measure the “information” of models developed by others. Basically, they claim to show that evolutionary processes succeed in solving problems only because the modelers supply the processes with information. In Chapter 1, freely available online, they write, “Our work was initially motivated by attempts of others to describe Darwinian evolution by computer simulation or mathematical models. The authors of these papers purport that their work relates to biological evolution. We show repeatedly that the proposed models all require inclusion of significant knowledge about the problem being solved. If a goal of a model is specified in advance, that’s not Darwinian evolution: it’s intelligent design. So ironically, these models of evolution purported to demonstrate Darwinian evolution necessitate an intelligent designer. The programmer’s contribution to success, dubbed active information, is measured in bits.” If you wonder Success at what? then you are on the right track.

588 thoughts on “Evo-Info review: Do not buy the book until…”

  1. phoodoo: My goodness, do you not understand that in a computer algorithm nothing is better or worse, and there is no competition, until you make a competition and decide what wins and what loses.

    Here’s where the chain hops off: it is possible to make a program where nobody decides what wins and what loses. This is your mistake.

    The idea of “better” or “worse” is DEscriptive, not PREscriptive. Do you understand what that means? In this context, it means that when we say one simulated organism is “better”, that is nothing more than a DESCRIPTION of the fact that the simulated organism persists in the simulation where other simulated organisms do not. Whether you want to label the act of persisting in the simulation “better” or “fit” is totally irrelevant. The labels don’t matter.

    It doesn’t at all diminish the fact that nobody MADE them “to persist” or to out-compete those that fail to persist.

  2. Rumraket: It doesn’t at all diminish the fact that nobody MADE them “to persist” or to out-compete those that fail to persist.

    Compete? When do you introduce the idea of competing? If you make a program without deciding what wins and loses, there is no such thing as competing. You seem to be struggling with the verbiage to discuss removing the bias towards one outcome, and then still having an outcome.

    The reason for that is that you can’t write a program without a goal, and then after it is over make inferences about an accomplished goal: “a winner,” “outcompeted,” “survivor,” etc. If those conclusions are only reached AFTER the program is completed, those words have no meaning in regard to the program.

    If we count how many of one group exist, you cannot say after the fact that they outcompeted the other, if we never had that goal to begin with. Maybe the goal is to disappear off the computer faster, so those that still exist are the losers. There is no sensible conclusion you can draw about anything, for a computer program that did nothing but make mistakes. After-the-fact conclusions about ANY results are meaningless.

    Researcher: “There are 6 in group A, and 1 in group B, gee, I guess group A did a bad job of learning how to be eaten, otherwise there would only be 1 left…A is the least fit genotype”

    Programmer: “But I never said or designed any aspect about being eaten, and about it being good to be eaten..”

    Researcher: “Doesn’t matter, I have studied the results, none of the genotypes have turned blue, therefore group D is the most fit…”

    Programmer: “But there is no group D…”

    Researcher: “That’s irrelevant.”

  3. Joe Felsenstein,

    What if someone comes along and says the definition of fitness is those that die fastest.

    Then is random better or worse than an evolutionary search?

  4. phoodoo: Joe Felsenstein,

    What if someone comes along and says the definition of fitness is those that die fastest.

    Then is random better or worse than an evolutionary search?

    Evolutionary biologists know enough not to define fitness that way. So the evolutionary models I am talking about do not work that way.

    However the “evolutionary searches” of Marks/Dembski/Ewert do include all possible crazy models like that. Which is why their average performance is miserable, and much worse than the average performance of models that have genotypes and have fitnesses that measure the expected number of offspring of a newborn.

    And that is why models of natural selection made by evolutionary biologists do better than random choice of offspring.

  5. Joe Felsenstein,

    Good, now maybe you can explain that to Allan and Rumraket. They seem to think it doesn’t matter what you define as fit.

    Thus we get their nonsensical hypothetical algorithms, in which you don’t need to ascribe a fitness function.

  6. phoodoo,

    Good, now maybe you can explain that to Allan and Rumraket. They seem to think it doesn’t matter what you define as fit.

    What was that about ‘skeptics’ and their belligerent insistence on rightitude?

    Thus we get their nonsensical hypothetical algorithms, in which you don’t need to ascribe a fitness function.

    I am quite certain that Joe is aware of algorithms that are truly evolutionary but do not have a fitness function. Natural selection is not a synonym for evolution. Of course, to do better than random you need fitness, though I have never defined the fitter as those genotypes which do worse.

  7. Considering the neutral case is instructive. I tried with the ‘M&M’ threads, where the model wasn’t in a computer at all, but a bag of sweeties. I failed with phoodoo, but I did give it a good go.

    I’m not saying that the neutral case does better than random. The neutral case is random, in that sense. But crucially it doesn’t just pick a genotype at random to be a ‘hit’, it picks a genotype at random to be the dominant genotype in the future population. You can’t not-get this behaviour. Even neutral models do something curious. But these aren’t the models that do better than random.

    In order to do better than random, one needs selection; that’s a given. But to criticise models-with-selection as models of ‘real’ evolution is to say that there is nothing in nature that does the equivalent of biasing genotypes. Which would be ridiculous, like saying there is nothing in nature that is analogous to Artificial Selection. That, despite the vagaries of climate, predation, food etc., in nature the neutral case holds, and every genotype has the same chance.

    [eta – even in the neutral case there is still fitness of course, if strings are copied. There is just no fitness differential]

  8. phoodoo: Compete? When do you introduce the idea of competing?

    You don’t have to. Again, the program contains organisms that make copies of themselves, and they consume resources. Resources are limited, so populations can’t get infinitely large. So if one organism makes more copies than another, it will consume more resources and leave fewer for the rest. In this sense, competition is emergent; nobody had to somehow “program into it” that there is “competition”. Competition is, again, a description of what happens.

    If you make a program without deciding what wins and loses, there is no such thing as competing.

    Yes there is. You are just plain wrong here. It is possible, and it has been done, to make a program where there isn’t any line (or collection of lines) of code that tells the program who “wins”.

    You seem to be struggling with the verbiage to discuss removing the bias towards one outcome, and then still having an outcome.

    My struggle is with getting you to think, rather than feel like you just have to object to anything I say because you’d rather spew gibberish contrary to demonstrable fact than feel that you somehow “backed down”.

    The reason for that is that you can’t write a program without a goal and then after it is over make inferences about an accomplished goal: “a winner,” “outcompeted,” “survivor,” etc.

    Yes you can. For reasons already explained.

    Nobody had to sit down and write code that makes some organism “better” than others. Nobody had to define “better” to write the program. These are just metaphors for the complex interactions between the organisms simulated in the program.
    Fitness is not programmed into Avida, it is OBSERVATIONALLY defined AFTER you have observed what happens in the program. If you observe that one organism makes more copies of itself than another, then we just label that organism as more fit.

    At no point does the program “evaluate” how “fit” the organisms are and use this information to make something happen in the simulation.

    If those conclusions are only reached AFTER the program is completed, those words have no meaning in regard to the program.

    They have plenty of meaning. They are DESCRIPTIONS of what happened in the program.

    If we count how many of one group exist, you cannot say after the fact that they outcompeted the other, if we never had that goal to begin with.

    Uhh yes you can. Why the hell not?

    Maybe the goal is to disappear off the computer faster, so those that still exist are the losers.

    You can use whatever labels give you a nice feeling inside. As long as you define your terms, we can communicate. If what you describe as losers is what I describe as organisms with high fitness, then we have two different words for the same thing. So we can still communicate and make sense of what happens in the program. The particular labels you use are irrelevant; it is the phenomenon they refer to that matters.

    There is no sensible conclusion you can draw about anything, for a computer program that did nothing but make mistakes.

    Sure there is. For example, you can conclude that copying mistakes can produce more complex functions.

    After-the-fact conclusions about ANY results are meaningless.

    What? When the fuck else are you supposed to conclude something from your results? BEFORE you get them?

    LOL

    Yep, you really wrote that sentence. Let it fester.

    Researcher: “There are 6 in group A, and 1 in group B, gee, I guess group A did a bad job of learning how to be eaten, otherwise there would only be 1 left…A is the least fit genotype”

    Programmer: “But I never said or designed any aspect about being eaten, and about it being good to be eaten..”

    Researcher: “Doesn’t matter, I have studied the results, none of the genotypes have turned blue, therefore group D is the most fit…”

    Programmer: But there is no group D…”

    Researcher: “That’s irrelevant.”

    Where did this conversation take place?

    Wait, I can also make up complete bullshit with no relation to reality and then pretend this constitutes what the morons I argue with actually say. Wait, not even necessary. We can just observe your posts, mr. “you can’t draw conclusions AFTER you get your results”.

    LOL.
    LOL.
    LOL.

  9. phoodoo: Good, now maybe you can explain that to Allan and Rumraket. They seem to think it doesn’t matter what you define as fit.

    Thus we get their nonsensical hypothetical algorithms, in which you don’t need to ascribe a fitness function.

    You’re confused. How biologists define the concept of fitness has no relation to what kind of code the programmers of simulations of evolution use to make their programs.

    And the “nonsensical hypothetical algorithms” are none of those. They’re real and sensible. An example is Avida. There is no fitness function ascribed to the organisms in Avida. There is no “top down” algorithm that somehow manages the population and decides which ones to copy to the next generation. The organisms are simulated individually, and they copy themselves.

  10. phoodoo: There is no sensible conclusion you can draw about anything, for a computer program that did nothing but make mistakes

    If it’s always the same mistakes, you can probably conclude it’s a creotard simulation

  11. dazz: If it’s always the same mistakes, you can probably conclude it’s a creotard simulation

  12. Rumraket: and it has been done, to make a program where there isn’t any line (or collection of lines) of code that tells the program who “wins”.

    No, no, no..not WHO wins!! It’s what the definition of winning is!!!

    THAT is the fatal flaw, that is flying right over your head. GEEZ!

  13. phoodoo,

    No, no, no..not WHO wins!! It’s what the definition of winning is!!!

    THAT is the fatal flaw, that is flying right over your head. GEEZ!

    I am struggling to grasp the point you keep getting aggravated over. Say we generate a GA (population of genotypes, birth/death analogues, you know the drill) and we introduce a subroutine that measures something about the genotypes. Let’s say (to take a silly example) it works out the number of molecules that would be required to print the genotype in Courier New. And you decide that heavier outcomes get a higher score than light ones.

    The winner in that case is likely to be one of the ‘heavier’ genotypes.

    Then you decide to reverse the situation. The winner in that case is likely to be one of the ‘lighter’ genotypes. Clearly, someone has made a decision in that example, but you could just do a coin flip. It is quite trivial that the final population (whenever you decide to stop) will contain genotypes from the heavier (or lighter) end of the range, depending on the sign of the test.

    So, your beef is that the definition of winning is whichever is the winner. But I still think ‘so what’? It’s the old ‘NS is tautological’ thing updated for the computer age. Mebbe it is mebbe it isn’t, it still causes something to happen.

  14. phoodoo: No, no, no..not WHO wins!! It’s what the definition of winning is!!!

    There isn’t any definition of winning in the program. Just like there isn’t any definition of winning for evolution to happen in nature. Whether an organism in nature lives or dies, nobody has to be around to define whether living or dying is good or bad.

    Are you capable of fathoming this elementary concept?

  15. Allan Miller,
    Again, I thought we were talking about algorithms that have something to say about evolution. Are we back to your weather simulations again?

  16. Phoodoo, if you have an EA with a fitness function that rewards higher speeds, and the algo succeeds at making the population faster, and then you try the same EA, but with a fitness function that rewards lower speeds, and the algo also succeeds at making the population slower, that means the opposite of what you’re implying: the algo works regardless of how you design the fitness function.

    ETA: Not saying the fitness function doesn’t have an impact on performance

  17. phoodoo,

    Again, I thought we were talking about algorithms that have something to say about evolution. Are we back to your weather simulations again?

    Surely you can’t actually think that from what I wrote. Key phrase: “Say we generate a GA (population of genotypes, birth/death analogues, you know the drill) and we introduce a subroutine that measures something about the genotypes.” That is a bog-standard evolutionary process – apart from the slightly eccentric implementation of the fitness function.

  18. Rumraket:

    phoodoo: Compete? When do you introduce the idea of competing?

    You don’t have to. Again, the program contains organisms that make copies of themselves, and they consume resources. Resources are limited, so populations can’t get infinitely large. So if one organism makes more copies than another, it will consume more resources and leave fewer for the rest. In this sense, competition is emergent; nobody had to somehow “program into it” that there is “competition”. Competition is, again, a description of what happens.

    I do not think that any simulation which is based on “organisms” competing for limited resources is a true reflection of how life on earth behaves in reality. Life is sustained more by sacrifice and cooperation than by competition.

    Life at the lower level gives itself up to maintain life at the higher level. This can be seen throughout nature. Predators kill individual wildebeest but this in turn keeps the herds stronger as a whole. Ants are willing to sacrifice themselves for the sake of the colony. Each of us consists of a multitude of individual cells: bone cells, skin cells, endosymbiotic bacterial cells. If it wasn’t for the life and death of these cells we could not persist as individuals.

    On the whole, herbivores never consume the available vegetation to its limit and predators never kill all the available prey. The balance is maintained.

    Life as a whole is kept viable by “selfishness” and “altruism” maintaining the balance, while new forms develop which seem to be less fit than the earlier forms which persist. Bacteria are far more successful than humans at making copies of themselves.

  19. CharlieM,

    There is always a kind of competition in train in a finite population, even when mutualism occurs. A ‘co-operative gene’ can only occupy its locus by displacing or resisting less cooperative versions. Observing that genotypes are competitive (for loci) is not at odds with the observation that the strategies they correspond to may not be.

    You’re not as far from Dawkins as you might think.

  20. Allan Miller: You’re not as far from Dawkins as you might think.

    If you can call 180 degrees of a difference ‘not far’!

    I do not think organisms are lumbering vehicles for their genes, quite the opposite. And I wasn’t emphasising mutualism for the benefit of both, but of sacrifice for the benefit of the greater whole.

  21. GlenDavidson: Classic Darwinian selection.

    Glen Davidson

    Exactly. It is a narrowing constraining force, not a creative expansive force. The features that it selects must be present before it can work on them.

  22. CharlieM: Exactly. It is a narrowing constraining force, not a creative expansive force. The features that it selects must be present before it can work on them.

    Ok. So couple this “narrowing constraining force” with mutation. Doesn’t mutation fit the “present feature” (or at least, “present change towards a feature”) bill in your summary above?

  23. Allan Miller:
    phoodoo,

    Surely you can’t actually think that from what I wrote. Key phrase: “Say we generate a GA (population of genotypes, birth/death analogues, you know the drill) and we introduce a subroutine that measures something about the genotypes.”. That is a bog standard evolutionary process – apart from the slightly eccentric implementation of the fitness function.

    You mean, say you make a program with a fitness function of survival (you haven’t said what makes some individuals survive in your program and some not, but let’s skip that for now), then after you have run the program, THEN you change the definition of fitness to the lightest or heaviest or fastest, or whatever?

    So I am free to call fitness anything I want, right? So now I can call those most fit in the population the bluest (it doesn’t matter if any of them are blue or not). Or I can call the lightest the most fit, so ones that have no genes at all are the most fit. Or I can call the fastest ones the most fit; it doesn’t matter whether any of them can move or not, because now they are all equally fit if none can move. Then I can call the ones that died first under your original definition of fitness the most fit.

    So now fitness has no meaning whatsoever. What have we proven exactly?

  24. CharlieM: Exactly. It is a narrowing constraining force, not a creative expansive force. The features that it selects must be present before it can work on them.

    Yes, but why do cone snails have high mutation rates for the DNA encoding their toxins? It appears to be in order to have the “creative force” that natural selection can preserve.

    Glen Davidson

  25. CharlieM,

    If you can call 180 degrees of a difference ‘not far’!

    Well, I repeat my phrase. I know that’s what you think, but … You might ignore the purple prose and try and understand the argument. The book is as much about altruism and cooperation as ‘selfishness’. It tries to resolve the apparent paradox between co-operative behaviours and gene-level contests.

  26. phoodoo,

    You mean, say you make a program with a fitness function of survival (you haven’t said what makes some individuals survive in your program and some not, but let’s skip that for now)

    No, let’s not. The fitness function was related to the amount of ink required to render the genotype in Courier New. You must have seen that. I’m bloody sure I wrote it.

    then after you have run the program, THEN you change the definition of fitness to the lightest or heaviest or fastest, or whatever?

    No. You can just make it a parameter. You can write the program so it rewards either heavy or light depending on this parameter. You supply a runtime parameter to say which you want on this run. You don’t rewrite the program each time. You haven’t put the information into the program as to which will be fitter. Either can be, it’s all in the one program, runtime behaviour depending on a coin flip or choice if you prefer. Or get your dog to do it.
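
    As a concrete sketch of that (hypothetical code; the bit-count ‘weight’ just stands in for the ink measure, and all names are my own): one program, with the direction of selection supplied as a runtime parameter rather than written into the code.

```python
import random

def run_ga(direction, genome_len=20, pop_size=30, generations=50, seed=0):
    """Toy GA over bit-string genotypes. 'Weight' is the count of 1-bits.

    direction=+1 rewards heavy genotypes, direction=-1 rewards light ones;
    the program itself contains no information about which is 'fitter'."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    weight = lambda g: sum(g)
    for _ in range(generations):
        # truncation selection: keep the half favoured by `direction`
        pop.sort(key=weight, reverse=(direction > 0))
        parents = pop[:pop_size // 2]
        pop = []
        for p in parents:
            for _ in range(2):                        # two offspring per parent
                child = p[:]
                child[rng.randrange(genome_len)] ^= 1  # flip one random bit
                pop.append(child)
    return sum(weight(g) for g in pop) / len(pop)      # mean final weight
```

    Run with `+1` and the final population sits at the heavy end of the range; run with `-1` and it sits at the light end. Same program either way; the ‘decision’ can come from a coin flip (or the dog) at runtime.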

  27. Allan Miller: The fitness function was related to the amount of ink required to render the genotype in Courier New

    That was the fitness function before you ran the program or after you ran it?

    So you mean you took a population of individuals that already existed, and now you simply want to decide which are lighter and which are heavier? And that is not a search, right? Because you have already claimed EAs don’t need to be a search of anything, so this is how you are proving it doesn’t have to search? It doesn’t need a goal. The programmer doesn’t have to tell the program what to look for. And we can call anything we want fit after the program has run.

    I am enjoying that you and Rumraket are so willing to double down on such nonsense. Sorry to tell you, Joe hasn’t fallen for it.

  28. phoodoo,

    So now fitness has no meaning whatsoever. What have we proven exactly?

    The conventional meaning of fitness in biology is the number of offspring. Fitter genotypes have more offspring than less fit genotypes. And so it is in a computer simulation. That’s a meaning. I’m not sure why you think a number of offspring is meaningless, or why different relative rates of increase don’t have any effect.

    Elsewhere you were sneering with Mung about the trivial obviousness of it all. The type that produces more will produce more, hurr hurr. Well, yeah. Hurr hurr.

  29. Allan Miller,

    But Allan, your whole point that started this, was that you don’t need a fitness function in order to do a simulation of evolutionary processes. You can decide a fitness function after the program has run.

    And that is of course…funny.

  30. Allan, to phoodoo:

    Or get your dog to do it.

    The dog has a better chance of getting the point than phoodoo does.

  31. phoodoo,

    God, you’re hard work. We’re back on searches now? If your definition of a ‘search’ is one in which the population is a subset of all possible genotypes, then even the starting population results from a ‘phoodoo-search’. If it’s one in which the final population is a biased subset of all possible genotypes, then all programs with selection are phoodoo-searches. It costs me nothing to concede this, and tells us nothing, but well done, they are phoodoo-searches. Pat-pat.

  32. phoodoo seems to struggle with the idea that we can pick whatever fitness function we want in a GA. He thinks that must mean that the same applies to natural selection. Just the good old map-territory confusion.

  33. phoodoo,

    But Allan, your whole point that started this, was that you don’t need a fitness function in order to do a simulation of evolutionary processes. You can decide a fitness function after the program has run.

    And that is of course…funny.

    Laughing at things that aren’t there, phoodoo. You know what they say.

    You don’t decide the fitness function after the program has run. That would, as you say, be worthy of many a tee-hee. But you can pick it after the program has been written.

  34. phoodoo: So you mean you took a population of individuals that already existed, and now you simply want to decide which are lighter and which are heavier?

    You should know I’m struggling mightily against the conviction that you are being deliberately obtuse just for the fun of it.

    Remember the niche, phoodoo. The idea of evolution is that a population of individuals has some heritable variation that, using your example, would mean some lighter, some heavier individuals. Depending on the circumstances of the niche, being lighter may impart some slight reproductive advantage. The result is that lighter individuals have more chance to produce offspring and, over time, the population as a whole will have more light individuals.

    The niche is the designer. It all depends on the niche.

  35. Allan Miller:
    phoodoo,

    Laughing at things that aren’t there, phoodoo. You know what they say.

    You don’t decide the fitness function after the program has run. That would, as you say, be worthy of many a tee-hee. But you can pick it after the program has been written.

    I think it would be useful to make some parallelisms with biological evolution.
    I’ll give it a shot: an asteroid hits the earth, the environment changes, selective pressures change (analogous to a change in fitness function)… but the “program”, evolution, keeps chugging along

  36. phoodoo: But Allan, your whole point that started this, was that you don’t need a fitness function in order to do a simulation of evolutionary processes.

    No phoodoo, that was MY point. It is possible to set up Avida so that there is no fitness function that artificially rewards some particular phenotype. And this is, yes, a simulation of an evolutionary process.
    In Avida, changes in relative reproductive success are emergent, not a line of code that “rewards” particular behaviors (though the program does have the option of doing that, artificially rewarding certain behaviors, like doing particular logical operations in code and rewarding the organism with more processing power).

    There are programs that do that, reward certain behaviors by copying those “organisms” that are good at those behaviors. One such program is the BoxCar2D simulation. Here, the cars that make it the furthest on the track are “rewarded” by being copied to the next generation. In this sense, the programmer has decided what is a useful behavior, and artificially rewards organisms with that behavior by increasing their reproductive success.

    Avida doesn’t work like that. Here, the organisms exist in some environment, and they copy themselves (it’s in their digital genomes), rather than the simulation somehow being “monitored” by some algorithm that then copies individual organisms that meet some criteria. No, again, they copy themselves. Like cells do.
    They also consume resources (processor calculations). And they mutate, their digital genomes mutate, when they copy themselves. Since your processor can only do a limited number of calculations per second, there is a limit to how many organisms can coexist in the simulation.

    So competition between different organisms in Avida is emergent, it results from these basic facts. The Avida simulation doesn’t somehow “decide” what is best at taking up processing power. It doesn’t “reward” particular behaviors by deeming them “more fit” (though, it CAN be set up to do that). Rather, the mutations the organisms get, alter their abilities to consume processing power, and thereby affect their ability to make copies of themselves. Some mutate so they copy faster, which means they multiply faster than others. Eventually they take up all the processing power, which renders the slower ones extinct because there is no processing power left for them to consume.
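
    The emergent-competition logic described above can be sketched with a toy model (this is not Avida, just a hypothetical illustration with numbers of my own choosing): self-replicators carry one heritable trait, a copy rate, and share a fixed pool of slots; no line of code scores, ranks, or rewards anyone.

```python
import random

def replicator_world(steps=200, capacity=100, seed=2):
    """Self-replicators in a resource-limited world.

    Each organism is just its heritable copy rate (copies attempted per
    step). There is no fitness function: faster copiers simply fill more
    of the limited slots, so crowding-out is emergent, not programmed."""
    rng = random.Random(seed)
    pop = [1] * 10 + [2] * 10            # start half slow, half fast copiers
    for _ in range(steps):
        offspring = []
        for rate in pop:
            for _ in range(rate):
                child = rate
                if rng.random() < 0.05:                      # rare mutation
                    child = max(1, rate + rng.choice([-1, 1]))
                offspring.append(child)
        pop = pop + offspring
        rng.shuffle(pop)
        pop = pop[:capacity]             # limited resources: random cull
    return pop
```

    The cull is uniformly random; nothing ‘decides’ who wins. Faster copiers nonetheless come to dominate, because they are overrepresented among the offspring each step. Calling them ‘fitter’ afterwards is purely a description of that outcome.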

  37. Allan Miller: It is an endless argument, one that I am embarrassed to contribute to further … but hey. If one has an evolutionary algorithm that starts with a single genotype and allows it to run with mutation only, no fitness criteria, one sees a particular kind of evolutionary behaviour.

    Allan, THIS IS WHAT YOU SAID!

    And it’s complete nonsense. If you run an evolutionary algorithm WITH NO FITNESS CRITERIA, what kind of evolutionary behavior do you think you are going to see???

    Who is being obtuse, Alan? Alan?? Alan Fox, are you listening?

  38. phoodoo,

    Allan, THIS IS WHAT YOU SAID!

    And it’s complete nonsense. If you run an evolutionary algorithm WITH NO FITNESS CRITERIA, what kind of evolutionary behavior do you think you are going to see???

    You are going to see the evolutionary behaviour already discussed in the ‘M&M’s threads, at length. You will see evolution even when the only forces are mutation and drift. OMagain even went to the trouble of writing a nice little simulation for you – that even had mutation turned off, but you could turn it back on again. You don’t need to have selection [eta: a fitness differential] to have an evolutionary process. Evolution is not synonymous with selection, as I may have said a few dozen times already. You are arguing as if it is.

    Who is being obtuse, Alan? Alan?? Alan Fox, are you listening?

    You are. To return somewhat to the thread, when we say a search does ‘no better than random’, the kind of model I am talking of is an example of that very ‘random’. It’s not a random pick, but it has the same result as a random pick – one random genotype is concentrated by drift. It does no better than a random pick. It’s a baseline evolutionary process.
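    That baseline is easy to exhibit concretely. A minimal sketch, using a Wright–Fisher-style resampling model with an arbitrary population size:

```python
import random

# Pure drift, no mutation, no fitness differences: each generation every
# offspring picks its parent uniformly at random. One random genotype is
# eventually concentrated to fixation -- same outcome as a random pick.

N = 100
population = list(range(N))   # N initially distinct genotypes

generations = 0
while len(set(population)) > 1:
    population = [random.choice(population) for _ in range(N)]
    generations += 1

print("genotype", population[0], "fixed after", generations, "generations")
```

    Every genotype is equally likely to be the one that fixes, which is exactly why this evolutionary process does no better than a random pick.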

  39. Allan Miller: OMagain even went to the trouble of writing a nice little simulation for you – that even had mutation turned off, but you could turn it back on again.

    Hmm, I looked back on the M&M’s threads and couldn’t find any option to turn on mutation in OMagain’s simulation. I think I actually asked him if he could make it so new mutations continually pop up (so that we can see how new arising mutations behave under pure drift), but he never got around to implementing the feature.

    Are we talking about the same program?

  40. Rumraket,

    Hmm, I looked back on the M&M’s threads and couldn’t find any option to turn on mutation in OMagain’s simulation. I think I actually asked him if he could make it so new mutations continually pop up (so that we can see how new arising mutations behave under pure drift), but he never got around to implementing the feature.

    Are we talking about the same program?

    Sorry, I misspoke. Should have said ‘it had no mutation but you could stick it in’. A mutation version, with variable rate, would be cool.

    I have this mental fantasy of a program with knobs – real knobs, big ones for little fingers – that one could turn, marked ‘mutation’, ‘population size variance’, ‘selection intensity’ etc. The basic program works identically to such a program with those knobs set to zero.
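    A rough sketch of such a knobs program, with the knob names and scales invented for illustration; all knobs default to zero, at which point the model reduces to the drift-only baseline:

```python
import math
import random

# "Knobs" model: mutation_rate and selection default to zero, giving
# pure drift; turning either knob changes the dynamics. Parameter
# names and scales are invented.

def evolve(n=100, generations=200, mutation_rate=0.0, selection=0.0):
    # a genotype is one number in [0, 1]; it matters only if selection > 0
    population = [random.random() for _ in range(n)]
    for _ in range(generations):
        # selection knob: parents weighted by exp(selection * genotype)
        weights = [math.exp(selection * g) for g in population]
        population = random.choices(population, weights=weights, k=n)
        # mutation knob: perturb each offspring with some probability
        population = [
            g + random.gauss(0, 0.1) if random.random() < mutation_rate else g
            for g in population
        ]
    return population

drifted = evolve()                 # all knobs at zero: drift only
selected = evolve(selection=5.0)   # selection knob turned up
print("drift survivors:", len(set(drifted)))
print("selected mean:", sum(selected) / len(selected))
```

    With the knobs at zero the weights are all equal and the loop is just the drift baseline; diversity is still lost, but which genotype wins is a coin toss rather than a function of its value.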

  41. Rumraket:
    Ye, I want that too. I want toys goddamnit, magnificent toys.

    Ha! keep dreaming

    No Jesus, no Christmas
    No Christmas, no Santa
    No Santa, no toys.

    Atheism and its terrible consequences

  42. Allan Miller: I have this mental fantasy of a program with knobs – real knobs, big ones for little fingers – that one could turn, marked ‘mutation’, ‘population size variance’, ‘selection intensity’ etc.

    I read that God created just such a device and then used it to create the universe. Fine Tuning was born.

  43. Allan Miller: You don’t need to have selection [eta: a fitness differential] to have an evolutionary process. Evolution is not synonymous with selection…

    LoL. So much for “the power of cumulative selection.” Evolution is not synonymous with the power of cumulative selection. The evolutionary process does not need the power of cumulative selection.

    It just happened, that’s all.

  44. dazz: phoodoo seems to struggle with the idea that we can pick whatever fitness function we want in a GA.

    What prevents you from picking whatever fitness function you want in a GA? Allan picks one that assigns equal unfitness to everyone. What do you pick?

  45. Allan Miller: The conventional meaning of fitness in biology is the number of offspring.

    Not really. Leaving too many offspring could be a bad thing. Unfit.

  46. Mung,

    LoL. So much for “the power of cumulative selection.” Evolution is not synonymous with the power of cumulative selection. The evolutionary process does not need the power of cumulative selection.

    I really, really don’t get why you think this is clever. “Evolution is not synonymous with selection” does not mean “selection never happens”.

  47. Mung,

    The concept of fitness is distinct from the concept of fitter. The fitness of an individual is its number of offspring, which can obviously vary for many reasons unrelated to possessing a particular genotype.

    The fitness of a genotype is the mean number of offspring accruing to its bearers over many lives.

    Strictly speaking, it’s the number of successful organismal cycles – zygote-to-zygote is the finish line. Obviously, a genotype producing 100 children, none of which survive to maturity, is less fit than one producing 5 that all do.
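    The distinction can be put in a few lines; the numbers are invented, matching the 100-versus-5 example above:

```python
# Individual fitness: realized offspring count. Genotype fitness: the
# mean over its bearers, counting only offspring that complete the
# cycle (zygote to zygote, i.e. survive to reproduce themselves).

bearers = {
    # genotype: list of (children born, children surviving to maturity)
    "A": [(100, 0), (100, 0)],        # prolific, but none finish the cycle
    "B": [(5, 5), (4, 4), (6, 5)],    # fewer children, nearly all survive
}

def genotype_fitness(records):
    return sum(surviving for _, surviving in records) / len(records)

for genotype, records in bearers.items():
    print(genotype, genotype_fitness(records))
```

    Genotype A averages 100 births per bearer but a fitness of zero; genotype B, with far fewer births, is the fitter of the two.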
