Evo-Info review: Do not buy the book until…

Introduction to Evolutionary Informatics, by Robert J. Marks II, the “Charles Darwin of Intelligent Design”; William A. Dembski, the “Isaac Newton of Information Theory”; and Winston Ewert, the “Charles Ingram of Active Information.” World Scientific, 332 pages.
Classification: Engineering mathematics. Engineering analysis. (TA347)
Subjects: Evolutionary computation. Information technology–Mathematics.

… the authors establish that their mathematical analysis of search applies to models of evolution.

I have all sorts of fancy stuff to say about the new book by Marks, Dembski, and Ewert. But I wonder whether I should say anything fancy at all. There is a ginormous flaw in evolutionary informatics, quite easy to see when it’s pointed out to you. The authors develop mathematical analysis of apples, and then apply it to oranges. You need not know what apples and oranges are to see that the authors have got some explaining to do. When applying the analysis to an orange, they must identify their assumptions about apples, and show that the assumptions hold also for the orange. Otherwise the results are meaningless.

The authors have proved that there is “conservation of information” in search for a solution to a problem. I have simplified, generalized, and trivialized their results. I have also explained that their measure of “information” is actually a measure of performance. But I see now that the technical points really do not matter. What matters is that the authors have never identified, let alone justified, the assumptions of the math in their studies of evolutionary models.a They have measured “information” in models, and made a big deal of it because “information” is conserved in search for a solution to a problem. What does search for a solution to a problem have to do with modeling of evolution? Search me. In the absence of a demonstration that their “conservation of information” math applies to a model of evolution, their measurement of “information” means nothing. It especially does not mean that the evolutionary process in the model is intelligently designed by the modeler.1

I was going to post an explanation of why the analysis of search does not apply to modeling of evolution. But I realized that it would give the impression that the burden is on me to show that the authors have misapplied the analysis.2 As soon as I raise objections, the “Charles Ingram of active information” will try to turn the issue into what I have said. The issue is what he and his coauthors have never bothered to say, from 2009 to the present. As I indicated above, they must start by stating the assumptions of the math. Then they must establish that the assumptions hold for a particular model that they address. Every one of you recognizes this as a correct description of how mathematical analysis works. I suspect that the authors recognize that they cannot deliver. In the book, they work hard at fostering the misconception that an evolutionary model is essentially the same as an evolutionary search. As I explained in a sidebar to the Evo-Info series, the two are definitely not the same. Most readers will swallow the false conflation, however, and consequently will be incapable of conceiving that analysis of an evolutionary model as search needs justification.

The premise of evolutionary informatics is that evolution requires information. Until the authors demonstrate that the “conservation of information” results for search apply to models of evolution, Introduction to Evolutionary Informatics will be worthless.


1 Joe Felsenstein came up with a striking demonstration that design is not required for “information.” In his GUC Bug model (presented in a post coauthored by me), genotypes are randomly associated with fitnesses. There obviously is no design in the fitness landscape, and yet we measured a substantial quantity of “information” in the model. The “Charles Ingram of active information” twice feigned a response, first ignoring our model entirely, and then silently changing both our model and his measure of active information.

2 Actually, I have already explained why the “conservation of information” math does not apply to models of evolution, including Joe’s GUC Bug. I recently wrote a much shorter and much sweeter explanation, to be posted in my own sweet time.

a ETA: Marks et al. measure the “information” of models developed by others. Basically, they claim to show that evolutionary processes succeed in solving problems only because the modelers supply the processes with information. In Chapter 1, freely available online, they write, “Our work was initially motivated by attempts of others to describe Darwinian evolution by computer simulation or mathematical models. The authors of these papers purport that their work relates to biological evolution. We show repeatedly that the proposed models all require inclusion of significant knowledge about the problem being solved. If a goal of a model is specified in advance, that’s not Darwinian evolution: it’s intelligent design. So ironically, these models of evolution purported to demonstrate Darwinian evolution necessitate an intelligent designer. The programmer’s contribution to success, dubbed active information, is measured in bits.” If you wonder Success at what? then you are on the right track.

588 thoughts on “Evo-Info review: Do not buy the book until…”

  1. Hmmm…for some reason my response to Charlie has a message in it:
    “Your comment is awaiting moderation.” Did I spam the system or something? Too many hyperlinks? Curious minds want to know.

  2. Robin: All of that is in line with what the Theory explains.

    Indistinguishable from what a religious believer would say, so it counts for little to nothing.

  3. Tom English: You can redeem yourself by explaining Section 5.8, “The Search for the Search.”

    Evolution was obviously searching for an evolutionary search. And it just happened to find one. Or more. Take snails.

    So unless folks can agree that evolution is a search then trying to explain search for a search is rather pointless. Don’t you agree?

    Now, if we agree that certain alleged models of evolution are in fact search algorithms, then perhaps it will make sense to speak of how it is that those models evolved in just such a way as to find the search that they just happened to need.

    I really think we need to go back to absolute basics in computing. But I’m not going to lay that on you. When people cannot even agree on whether or not a given sequence of computer instructions constitutes a search algorithm, what is the point of going further?

  4. Mung: Evolution was obviously searching for an evolutionary search. And it just happened to find one. Or more. Take snails.

    So unless folks can agree that evolution is a search then trying to explain search for a search is rather pointless. Don’t you agree?

    Let’s be clear that I’ve recently reached the point of exhaustion, dealing with other stuff, and have been tossing off glib one-liners because I’m finding it difficult to put together three coherent sentences (the ADD is presently very bad). I genuinely want to discuss the math (and its significance) with you. You should believe me when I say that I’m eventually going to generalize and simplify the two theorems of Section 5.8, and provide simple proofs (Evo-Info 4). I’ll replace their approximation in the second theorem with an exact result. I actually did the work back in the fall, responding to a paper by George Montanez.

    In short, I’m talking math — simple math. There’s really no debate to be had about it, now that I’ve figured out how to make it simple. (The key insight in the “weak case” is due to Joe Felsenstein.)

    What they’re calling search is, as Ewert has acknowledged, [quoting from memory] “a process that can be represented by a probability distribution.” There’s no point in calling that a search. Don’t you agree? 😉

    A “search for a search” is represented as a probability distribution on probability distributions. The best known of probability distributions on probability distributions is the Dirichlet distribution. The flat Dirichlet distribution (parameter vector all 1’s) is the “search for a search” that Marks et al. address in the second of their theorems. They do not identify it as that, but that’s what it is. The random probability mass conferred on the target is Beta distributed. This is perfectly straightforward “look it up and read off the results” stuff — once you figure out what to look up.
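    For anyone who wants to verify the “look it up and read off the results” claim numerically, here is a minimal Python sketch (mine, not from the book; the space size N and target size M are arbitrary illustrative choices). It samples from the flat Dirichlet distribution and checks that the mass conferred on the target has mean M/N, as the Beta(M, N − M) distribution requires.

```python
# Minimal check (not from the book): under the flat Dirichlet on an
# N-point space, the probability mass a random "search" confers on a
# target of M points is Beta(M, N - M) distributed, with mean M/N.
# N and M below are arbitrary illustrative choices.
import random

random.seed(1)

N, M = 20, 3
TRIALS = 20000

masses = []
for _ in range(TRIALS):
    # A flat Dirichlet draw: normalize N independent Exponential(1) draws.
    g = [random.expovariate(1.0) for _ in range(N)]
    total = sum(g)
    # By symmetry, take the first M coordinates as the target.
    masses.append(sum(g[:M]) / total)

mean_mass = sum(masses) / TRIALS   # should be close to M/N = 0.15
```

    The sketch checks only the mean; the spread of the sampled masses matches the Beta(M, N − M) variance as well.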

    This has taken me outrageously long to write. I’ve got to stop here, and return tomorrow — hopefully later rather than sooner.

  5. Mung: Now, if we agree that certain alleged models of evolution are in fact search algorithms, then perhaps it will make sense to speak of how it is that those models evolved in just such a way as to find the search that they just happened to need.

    Brief remark: You’re not getting the point of the models. The event of interest to the modeler is not determined independently of the evolutionary process, as assumed in the math. The modeler is not claiming that evolution magically solves given problems — that anything whatsoever can occur by evolution. The modeler is identifying circumstances in which the event tends to occur. This is not problem solving, because the modeler never claims that the event of interest was given independently of the process. The “problem” depends on the evolutionary process.

    Did that make any sense? I’m just bashing it out. I’m not reading what I’ve written.

    Mung: I really think we need to go back to absolute basics in computing. But I’m not going to lay that on you. When people cannot even agree on whether or not a given sequence of computer instructions constitutes a search algorithm, what is the point of going further?

    Funny thing about the book: 162 instances of forms of model, but the word modeler appears nowhere. Marks et al. of course indicate that models come from programmers. That’s utterly wrong. A programmer implements a model. The computer program is not itself the model.

    Programs written by people seeking solutions to problems and by people seeking to understand evolution monitor simulated evolutionary processes. The programs may look the same, but that does not mean that they serve the same purposes. The problem-solver may address a problem specified independently of the evolutionary process. The modeler most definitely does not: the specification of the event of interest (“target”) depends on the evolutionary process. You cannot tell by reading a program how the target was specified.

    The “conservation of information” math assumes that the problem (the target is the set of solutions to the problem) does not depend on the randomly selected problem solver. That is not what the modeler says. And, sad to say, the model is what the modeler says it is. You can check a program to see whether or not it implements the model correctly. You can run the program to see whether the modeler has reported correctly on the model. But you cannot use the program as evidence in court that the modeler is lying about the model, or that the model isn’t what the modeler thinks the model is.

    Again, the modeler is saying that what tends to occur in the evolutionary process does depend on the attributes of the process. Marks et al. are attributing to the modeler a claim that the process was chosen independently of the event of interest. They’re rejecting a claim that the modeler never made. And how are they doing that? Think carefully about this: they’re analyzing a program that implements the model, and concluding the modeler did not do what the modeler says he did not do. How clever is that?

    OK, I’m really going now. I don’t know if any of that made sense.

  6. Robin,

    Much as I’d like to continue this conversation, I feel that I have taken it too far off topic as it is, so unless you want to start a fresh thread, I’m going to leave it there.

  7. Mung: Indistinguishable from what a religious believer would say, so it counts for little to nothing.

    ???

    Mung, you really need to look up what “theory” means in science. It’s a rather stark contrast from what religious believers base their statements on. So your claim above is just plain old absurd.

  8. CharlieM:
    Robin,

    Much as I’d like to continue this conversation, I feel that I have taken it too far off topic as it is, so unless you want to start a fresh thread, I’m going to leave it there.

    Fair enough.

  9. Mung,
    For once we are agreed.

    Indistinguishable from what a religious believer would say, so it counts for little to nothing.

    Glad to see you actually front up and admit that what you say counts for nothing.

  10. Tom English: What they’re calling search is, as Ewert has acknowledged, [quoting from memory] “a process that can be represented by a probability distribution.” There’s no point in calling that a search. Don’t you agree?

    I don’t find that in the book. Do you?

  11. I wrote: What they’re calling search is, as Ewert has acknowledged, [quoting from memory] “a process that can be represented by a probability distribution.” There’s no point in calling that a search. Don’t you agree?

    A “search for a search” is represented as a probability distribution on probability distributions. The best known of probability distributions on probability distributions is the Dirichlet distribution. The flat Dirichlet distribution (parameter vector all 1’s) is the “search for a search” that Marks et al. address in the second of their theorems. They do not identify it as that, but that’s what it is.

    Mung quotes this part: What they’re calling search is, as Ewert has acknowledged, [quoting from memory] “a process that can be represented by a probability distribution.” There’s no point in calling that a search. Don’t you agree?

    Mung: I don’t find that in the book. Do you?

    Are you telling me that if there is not a precise match of that phrase in the book, then the authors are no longer saying what Ewert said they were saying?

    Do the authors treat the “search for a search” as a probability distribution on probability distributions, or do they not?

  12. Tom English: Do the authors treat the “search for a search” as a probability distribution on probability distributions, or do they not?

    That’s a different question from the one you originally raised. My answer is that I don’t know. If you have a page reference from the book I’ll look at it.

  13. Mung: If you have a page reference from the book I’ll look at it.

    If you’ve read the book you can probably just answer the question without a page reference?

  14. The more I reconsider the {Marks, Dembski, Ewert} papers and book the less there seems to be there. The general argument seems to be that (1) they take a space of all possible probability distributions on genotypes, (2) they consider a random outcome from a random one of those distributions, and (3) then note that this is no better (say, fitness-wise) than choosing a single random genotype.

    At first all the “lifting” and “lowering” machinery in the DEM papers seemed like magic, but thinking about the symmetry of the set of distributions, it is pretty obvious that this alone guarantees that the outcome of choosing a distribution at random and then sampling an outcome will just be a random point in the space.
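    That symmetry argument is easy to check numerically. A rough Python sketch (mine, not from the papers; the space size is an arbitrary small choice): draw a distribution from the flat Dirichlet, draw one outcome from it, and tally — the two-stage draw is marginally just a uniform draw from the space.

```python
# Sketch of the symmetry point above (mine, not from the papers):
# sampling a distribution at random and then sampling an outcome from it
# is marginally the same as picking a point uniformly at random.
import random
from collections import Counter

random.seed(2)

N = 5            # arbitrary small space of "genotypes"
TRIALS = 50000
counts = Counter()

for _ in range(TRIALS):
    # Random "search": a flat Dirichlet draw over the N points.
    g = [random.expovariate(1.0) for _ in range(N)]
    total = sum(g)
    probs = [x / total for x in g]
    # One outcome sampled from that randomly chosen distribution.
    outcome = random.choices(range(N), weights=probs)[0]
    counts[outcome] += 1

freqs = [counts[i] / TRIALS for i in range(N)]   # each should be near 1/N
```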

    Their more outrageous step is to label all these distributions “evolutionary searches”. They aren’t explicitly “searches” in any meaningful sense, as Tom has been trying to get Mung to acknowledge. Many of them pay no heed to fitnesses, or even pay negative heed to fitnesses.

    And that is the fatal flaw of the {D,E,M} argument. They get bad average behavior of “evolutionary searches” by including vast amounts of garbage in those “searches”. As Tom and I showed (at PT last year) a very minimal evolutionary process reaches much better fitnesses than that, and thus does incorporate “active information” without any special smooth fitness landscape.

  15. Joe Felsenstein: The more I reconsider the {Marks, Dembski, Ewert} papers and book the less there seems to be there.

    Ditto. And the more it seems that they have crafted their rhetoric in order to conceal how little they have.

    The weak case of conservation of information in the search for a search, expressed as inequality (5.18) in the book, is Markov’s inequality with -\!\log_2(\cdot) applied to both sides:

        \[-\!\log_2 \Pr\{X \geq \alpha \mathbf{E}[X] \} \geq -\!\log_2 \frac{1}{\alpha}.\]

    Here X is the performance of a randomly selected “search.” (They take performance to be the probability of hitting the target.) Next we change notation. Writing p in place of \mathbf{E}[X], and q^* in place of \alpha \mathbf{E}[X],

        \[-\!\log_2 \Pr\{X \geq q^* \} \geq -\!\log_2 \frac{p}{q^*}.\]

    The authors define I^*_+ = -\!\log_2 (p/q^*) in their equation (5.17). They’re execrably vague on the meaning of \tilde{I}_\Omega, and I haven’t come up with a short and sweet explanation for equating it with -\!\log_2 \Pr\{X \geq q^* \}. At any rate, the substitutions get us:

    (5.18)   \begin{equation*} \tilde{I}_\Omega \geq I^*_+  \end{equation*}

    Comparing this to what I wrote initially, I am filled with revulsion.
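    A quick numerical sanity check of the inequality (mine, not the book’s; N, M, and the thresholds q* are arbitrary illustrative choices): take X to be the target mass of a random “search” under the flat Dirichlet, so p = E[X] = M/N, and confirm that −log₂ Pr{X ≥ q*} ≥ −log₂(p/q*) at a few thresholds q*.

```python
# Numerical sanity check (mine) of inequality (5.18) read as Markov's
# inequality: for X the target mass of a random "search" under the flat
# Dirichlet, -log2 Pr{X >= q*} >= -log2(p / q*), where p = E[X] = M/N.
# N, M, and the thresholds q* are arbitrary illustrative choices.
import math
import random

random.seed(3)

N, M = 20, 3
p = M / N                  # expected performance of a random "search"
TRIALS = 100000

samples = []
for _ in range(TRIALS):
    g = [random.expovariate(1.0) for _ in range(N)]
    samples.append(sum(g[:M]) / sum(g))

results = []
for q_star in (0.3, 0.45, 0.6):   # thresholds above the mean p = 0.15
    tail = sum(1 for x in samples if x >= q_star) / TRIALS
    lhs = -math.log2(tail) if tail > 0 else float("inf")   # plays the role of I~_Omega
    rhs = -math.log2(p / q_star)                           # I*_+ of equation (5.17)
    results.append((q_star, lhs, rhs))
```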

  16. Tom English: I haven’t come up with a short and sweet explanation

    That does not mean that I have the least doubt of the correctness. It means that I don’t know how to persuade someone who does not understand the math that I am providing the straight dope.

  17. Not sure who the revulsion is at, but I think it’s probably {M,D,E}.

    So they establish rather trivially that, among all possible distributions, the average probability of choosing a point that is in a target T, which has M genotypes (points) in it, is simply M/N, where N is the total number of genotypes.

    Then they use Markov’s Theorem to establish that at most a fraction P of all the distributions can assign a probability to T that is 1/P times as big as that. Then they take -log(P) to be the measure of information if you get a probability distribution that does that well at coming up with results in T. That is rather trivial, I agree.

    Also it does not really address what an “evolutionary” search is, or why all possible distributions are “evolutionary searches”.

    Perhaps Mung can clarify all that.

  18. Joe Felsenstein: Many of them pay no heed to fitnesses, or even pay negative heed to fitnesses.

    Did you mean to say negative fitness rather than negative heed?

  19. Joe Felsenstein: And that is the fatal flaw of the {D,E,M} argument. They get bad average behavior of “evolutionary searches” by including vast amounts of garbage in those “searches”.

    And that’s not true of evolution and no one has ever made the dysteleological argument that evolution is too wasteful to be the product of intelligent design.

    Right?

  20. Joe Felsenstein: They aren’t explicitly “searches” in any meaningful sense, as Tom has been trying to get Mung to acknowledge.

    I acknowledge it sounds odd. Perhaps I’ll start an OP on “meaningful search” and what characteristics distinguish a meaningful search from a meaningless search.

    If it’s a meaningless search, does that mean it’s not a search? And is that why some people maintain that evolution is not a search: because if it is a search, it’s a meaningless search?

  21. Joe Felsenstein: Then they use Markov’s Theorem to establish that at most a fraction P of all the distributions can assign a probability to T that is P times as big as that. Then they take -log(P) to be the measure of information if you get a probability distribution that does that well at coming up with results in T. That is rather trivial, I agree.

    There’s also huge irony in the apologetics of {M, D, E}. Everyone knows to derive the most general results possible, and to specialize them as necessary. {M, D, E} commit to a particular measure of performance, namely, the probability of generating an element of an independently given target. But they have used that measure of performance in only one of their studies of evolutionary models (always uttering the magic words “conservation of information,” citing prior publications, and saying nothing about the mathematical details). Markov’s inequality “applies” in all of their studies, i.e., they falsely attribute to the modeler the claims that (1) the event of interest was specified independently of the evolutionary process and (2) the evolutionary process was itself drawn randomly from a set of alternative processes (almost all of which are devoid of physics in the sense that, no matter what the state is at time t, all states are approximately equal in probability at time t+1).

    There’s also the inanity of expressing a quantity on a log scale, and talking as though you are dealing with a quantity of something different than you had in the beginning. Expressing improbability on a log scale does not magically transform improbability into information.

  22. Mung: Did you mean to say negative fitness rather than negative heed?

    He’s saying, under the assumption that there is a fitness landscape, a process might as well move down the landscape as up. I’m saying that he’s making the “search for a search” more sensible than it actually is. In the vast majority of random processes over a finite set of states, almost all state transitions are about equal in probability. [ETA: I have not put this well.] I think it’s reasonable to characterize such a process as devoid of physics.

  23. I’m thinking that MDE’s argument may boil down to this: Oh, yeah!? Then where’d physics come from? [Here Ewert giggles and snorts in delight.]

  24. Do not buy the book…

    Too late.

    First mention of evolutionary search (excluding the Contents section). p. xvi.

    Analysis of NASA’s design of an antenna using evolutionary search shows that the design domain expertise in evolutionary design is rich and the search problem was not that difficult.

    Is this controversial?

    Did the NASA engineers know they were using an evolutionary search? Did they just fail to understand that such searches are meaningless?

    I seem to recall a Bug* program. Does it use evolutionary search?

    *GUC (Greedy Uphill Climber) Bug

  25. Mung: I seem to recall a Bug* program. Does it use evolutionary search?

    *GUC (Greedy Uphill Climber) Bug

    Well, I did refer to the GUC Bug in footnotes 1 and 2 of the OP. A modeler does not specify an event of interest independently of the evolutionary process in which it tends to occur. The event that Joe and I targeted was the fittest of genotypes, whichever that might be for a random fitness landscape. The “conservation of information” math does not apply to the GUC Bug. Nor does it apply to ev. Nor does it apply to the Avida NAND study. (I hope you understand that Avida is a software platform, and that it was not designed for any particular study.)

  26. Mung: Is this controversial?

    It’s irrelevant to modeling of evolution. And it’s piss-poor as an account of the engineering reported in the literature. The NASA investigators were well aware that the results would depend on representation of antenna designs, and in fact published results for alternative representations. They said nothing remotely like “evolution solved the problem itself.” That’s merely a claim that Marks et al. want to pin on their adversaries. You might refer to the claim as an oil-soaked straw man (once or twice, not hundreds of times).

    By the way, I’ve been in touch with one of the engineers whose work is mangled in the book. He said that the explanation Marks et al. gave of how he obtained his results was “kind of hilarious.” They didn’t even get the most basic of facts right.

  27. Tom English: I hope you understand that Avida is a software platform, and that it was not designed for any particular study.

    I am one of the people who do understand that. Do you think ‘DEM’ don’t understand that?

    How about people here at TSZ who claim that Avida demonstrates this or that about evolution. But there’s no one like that here.

    Generally, when people speak of Avida here, they more than likely have a particular study in mind. I won’t excoriate them if they conflate the two.

  28. Tom English: It’s irrelevant to modeling of evolution.

    You’re a veritable goldmine, Tom. If I go back and search the archives here at TSZ I’ll find that no one has ever brought up the GA that designed an antenna as proof-positive that evolution can do anything a designer can do, if not better.

    Did the NASA program use “evolutionary search” or did it just get lucky? Joe wants to know.

  29. …any “search” algorithm worthy of the name of “evolutionary search” comes with its own moderately smooth fitness landscape built in.

    – Elizabeth Liddle

    Is this what Joe is talking about? A search algorithm must come with its own moderately smooth fitness landscape built in else it is not worthy of the moniker “evolutionary search”?

  30. Tom English: I hope you understand that Avida is a software platform, and that it was not designed for any particular study.

    Mung: I am one of the people who do understand that. Do you think ‘DEM’ don’t understand that?

    They do not merely refer to Avida as a program, e.g.,

    There are a number of computer programs that purport to demonstrate undirected Darwinian evolution. The most celebrated is the Avida evolution program whose performance was touted by evolution proponents at the 2004–2005 Kitzmiller versus Dover Area School District trial. This trial examined the appropriateness of teaching intelligent design. Conservation of information, discovered and published five years later, soundly discredits Avida.

    Since Avida is attempting to solve a moderately hard problem, the writer of the program must have infused domain expertise into the code. We identify the sources and measure the resulting infused active information. Avida is shown to contain a lot of clutter used to slow down its performance. When the clutter is removed the program converges to the solution more quickly.

    Another evolutionary program discredited through the identification and measurement of active information is dubbed EV.

    Once a source of knowledge is identified in an evolutionary program, active information can be mined in different ways by using other search programs. For both Avida and EV, alternative search programs are shown to generate the same results as the evolutionary search. The computational burden of the evolutionary approach in both cases is significantly higher.

    On EvoInfo.org, we have developed online GUIs (graphical user interfaces) to illustrate the performance of both Avida and EV.

    They also treat the Avida-NAND setup as a program in their analysis (Section 6.2), e.g.,

    Obfuscation Tuning. With the minimal set of instructions, EQU is found quickly. Too quickly. We could claim to prove evolution by rolling two dice until we rolled snake eyes (two ones). Avida using minimal instructions is not as easy as rolling snake eyes, but converges too quickly to inspire any awe. Junk instructions get in the way of convergence. They have allowed an EQU to evolve slowly enough to appear interesting. Like Goldilocks’s porridge, the search must not be so difficult as to be nearly impossible, must not be so easy as to get an EQU too quickly, but must be just right for convergence in a reasonable amount of time.

    So MDE are suggesting that the investigators added instructions to the Avida instruction set in order to make the NAND experiments seem more impressive than they actually were. (Any questions as to why I’ve just resolved never to let go of Ewert’s plagiarism and Marks’s approval of it?) They’re evincing complete ignorance of Avida as a software platform for conducting a wide range of experiments. However, they write at the end of Section 6.2:

    The Avida software platform has been embraced by numerous authors claiming to have demonstrated various aspects of Darwinian evolution. Avida has even been used as a teaching tool to support Darwinian evolution. Papers continued to appear even after our debunking of Avida in 2009.

    Some mathematical facts apparently take time to sink in.

    So they end up mentioning that Avida is a software platform, but claim that their response to the NAND experiments is a debunking of Avida in general. Are they really so stupid? Or are they actually that dishonest?

  31. Tom English:
    Are they really so stupid? Or are they actually that dishonest?

    I’m sure it’s belaboring a point to note those two aren’t mutually exclusive.

    So, instead of getting tangled in the semantics of the word “search”, why don’t we first ask what difference it makes (to Mung) whether models of evolutionary processes are or are not “evolutionary searches”? Is it that, if they are, {M, D, E} have a proof that evolutionary processes work as badly as random sampling of a genotype?

    Because Tom and I have disposed of that one — that argument is dead and gone.

    What other reason is there, then, to spend time on this?

  33. Joe Felsenstein: So, instead of getting tangled in the semantics of the word “search” …

    You brought it up. Or was it Tom that brought it up, and you seconded the motion? So why the sudden retreat?

    For example, Tom asked:

    There’s no point in calling that a search. Don’t you agree?

    And you chimed in:

    Their more outrageous step is to label all these distributions “evolutionary searches”. They aren’t explicitly “searches” in any meaningful sense, as Tom has been trying to get Mung to acknowledge.

    I attempt to follow up on that line of thinking and all of a sudden you’re no longer interested. Now I’m no longer interested, lol.

    …why don’t we first ask what difference it makes (to Mung) whether models of evolutionary processes are or are not “evolutionary searches”?

    Already asked and answered. I tire of repeating myself.

    Alan Fox very recently trotted out the Dawkins Weasel program as an example of how evolution is allegedly non-random. But the Dawkins Weasel program is a search algorithm. One might infer from this that evolution is a search. Yet it is often denied that evolution is a search.

    And if evolution is not a search, how can it be legitimate to model it as such?

  34. Evolution is usually modeled by having genotypes, which may or may not have different fitnesses. Then we see what happens. We usually have some questions in mind.

    Is that a search?

  35. Mung: Alan Fox very recently trotted out the Dawkins Weasel program as an example of how evolution is allegedly non-random.

    Just a side note, but don’t you believe that evolution is very much non-random?

  36. OMagain: Just a side note, but don’t you believe that evolution is very much non-random?

    How do you know he even believes it exists?

  37. Joe Felsenstein:
    Evolution is usually modeled by having genotypes, which may or may not have different fitnesses. Then we see what happens. We usually have some questions in mind.

    Is that a search?

    In an EA, the concept of fitness is another name for the thing for which you search. There is still no getting around that, Joe.

    Joe Felsenstein: I have the book. Looking through it, I find that none of those functions are described in the book (though it does cite their several papers that include those terms). Nor are those terms found in the book’s index.

    Tom, who has a higher pain threshold than I do, has read all those papers closely and reports that they went through at least three different schemes of that sort. In any case, when they were showing how bad the behavior of an average “evolutionary search” is, they didn’t actually use any of that. You’d expect that, if they took that scheme seriously, they would then consider randomizing over all possible terminators, inspectors, etc. to get a randomly chosen “evolutionary search”. But they didn’t do anything of the sort.

    Instead, in those papers, they identify an “evolutionary search” with a distribution of outcomes, and talk about all such distributions. They then find (ta-da!) that the result of a randomly chosen “evolutionary search” is just a random element in the space of genotypes, so that the typical “evolutionary search” does no better than random.

    As I have noted in this thread before, that includes among “evolutionary searches” all sorts of horrible processes that can look for the worst possible fitness, or just look at random.And as Tom and I showed in last year’s post at Panda’s Thumb, as soon as we make the reasonable requirement that an “evolutionary search” have genotypes that have fitnesses, and thus reward the reproduction of more-fit genotypes, the “evolutionary searches” do much better than random, and thus, in DEM’s terminology, incorporate “active information” without need for a Designer.

    Thanks for your thoughtful answer and the trip down memory lane. Indeed, they are back to hand-waving: on p. 173, DEM write

    “We note, however, the choice of an algorithm along with its parameters and initialization imposes a probability distribution over the search space.”

    So, when an algorithm finds target T_1 with probability 1, it doesn’t find any other target T_2. Thus, on average over all targets, it will be only as good as a random guess.

    They always fail to take into account that the result of an algorithm generally depends on the target…

    Take, e.g., a complete enumeration of the search space. It is clumsy, but it finds any target T with probability 1 – and this does not impose a probability distribution on the search space.
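
    DiEb’s enumeration point can be made concrete with a toy example (a hypothetical 3-bit search space, not from the book): the same fixed visiting order finds every possible target with probability 1, so the algorithm’s outcome depends on the target rather than on one fixed output distribution.

```python
from itertools import product

# Toy 3-bit search space: ['000', '001', ..., '111'].
space = ["".join(bits) for bits in product("01", repeat=3)]

def enumerate_search(target):
    # Visit every point in a fixed order; report the target if hit.
    for point in space:
        if point == target:
            return point
    return None  # unreachable when the target lies in the space

# Deterministic enumeration finds every possible target.
assert all(enumerate_search(t) == t for t in space)
```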

  39. DiEb: They always fail to take into account that the result of an algorithm generally depends on the target…

    I don’t think they are the only ones who fail to take this into account.

    Thanks for reminding some materialists here.

  40. phoodoo: I don’t think they are the only ones who fail to take this into account.

    Thanks for reminding some materialists here.

    Phoodoo, I think you’ve entered this curious state of mind where you think you have to reply to everything the “materialists” say and advertise that you stand in opposition and contradiction to them.

    It’s getting ridiculous to watch. Go do something more productive with your time than waste it nay-saying everything the materialist boogeyman says.

  41. DiEb: Thanks for your thoughtful answer and the trip through memory lane.

    I apologize for not responding (to anyone at all for a while).

    I expected Marks et al. to make big changes to what they had published, and produce a coherent whole. All they did was to devise rhetoric to conceal the fact that the pieces don’t fit together.

    DiEb: They always fail to take into account that the result of an algorithm generally depends on the target…

    They’ve been using the term “undirected Darwinism” a lot, without saying exactly what they mean by it. Their math corresponds to the idiotic notion that Darwinism is a claim that, whatever problem comes along, a process poofs into existence totally by chance, and generates a solution.

    They indicate in parts of the book that they know that’s not what modelers are saying. So the modelers aren’t actually modeling “undirected Darwinism.”

    Of course, they don’t bother to mention that their splendiferous math takes the measure of performance in search to be the probability of hitting the target, and that only one of their studies of models uses that performance measure. It’s easy to apply Markov’s inequality to the measures they actually use. But they dare not do that, because it would wreck the story they want to tell.
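
    For reference, the Markov bound mentioned above says that for a nonnegative performance measure X and any threshold a > 0, Pr(X >= a) <= E[X]/a. A toy numeric check, with a made-up distribution (the values and probabilities are illustrative only, not taken from any of their studies):

```python
# Hypothetical nonnegative performance values and their probabilities.
values = [0.0, 0.1, 0.2, 0.5, 1.0, 2.0]
probs = [0.3, 0.2, 0.2, 0.1, 0.1, 0.1]

mean = sum(v * p for v, p in zip(values, probs))        # E[X] = 0.41
a = 1.0
tail = sum(p for v, p in zip(values, probs) if v >= a)  # Pr(X >= a) = 0.2

# Markov's inequality: the tail probability never exceeds E[X]/a.
assert tail <= mean / a
```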

    DiEb: Take, e.g., a complete enumeration of the search space. It is clumsy, but it finds any target with probability 1 – and this does not impose a probability distribution on the search space.

    They restrict the number of steps to a number smaller than the size of the search space, so the probability will be less than 1. But it remains the case that there is no reason to randomize when you’ve got one shot at solving a problem. Randomly selecting a particular sequence as the “search” is equivalent to realizing a random sequence once. However, in the former case, the probability of hitting the target is either 0 or 1. If the probability is 1, DEM get very excited, and claim that an intelligence must have informed the search how to find the target. They’re simply ignorant of how the process came to be in the first place.

  42. phoodoo: Joe Felsenstein:
    Evolution is usually modeled by having genotypes, which may or may not have different fitnesses. Then we see what happens. We usually have some questions in mind.

    Is that a search?

    In an EA, the concept of fitness is another name for the thing for which you search. There is still no getting around that, Joe.

    Utterly wrong. Fitness can be measured. You can take organisms of various genotypes and measure how well they survive and how well the survivors reproduce.

    Nothing to do with a “search”.

  43. Joe Felsenstein: Evolution is usually modeled by having genotypes, which may or may not have different fitnesses. Then we see what happens. We usually have some questions in mind.

    Is that a search?

    Now you want to discuss semantics? I’m so confused. What would qualify it as a search and what would disqualify it as a search?

  44. Joe Felsenstein: Utterly wrong. Fitness can be measured. You can take organisms of various genotypes and measure how well they survive and how well the survivors reproduce.

    Nothing to do with a “search”.

    Give us a real example, from an EA. Say you have an organism with a genotype in your EA, and you want to measure the fitness of the organism or the genotype, so you define a method get_fitness.

    What would the implementation of that function look like?
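
    For concreteness, in a textbook OneMax-style EA the fitness function is simply something the programmer writes over genotypes (the example and names here are hypothetical, not from any particular framework):

```python
def get_fitness(genotype):
    # Hypothetical OneMax fitness: the genotype is a bit string, and
    # fitness is the count of 1s in it.
    return sum(int(bit) for bit in genotype)

assert get_fitness("10110") == 3
```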

  45. Joe Felsenstein: In an EA, the concept of fitness is another name for the thing for which you search. There is still no getting around that Joe.

    Utterly wrong. Fitness can be measured. You can take organisms of various genotypes and measure how well they survive and how well the survivors reproduce.

    Nothing to do with a “search”.

    When do you measure how well they survive and how well they reproduce, before you design the EA or after?

    If it’s after, what are the criteria in the program that say what survives and what doesn’t?

    If it’s before, then you just stated what your target is: to survive better. Thus you are searching for what survives better, based on what you tell the computer survival means.

    This is not a complicated concept. No matter how many ways you continually try to say otherwise, Joe, you will NEVER get around this basic necessity, which separates EVERY computer program from the premise of evolution: that you start with NO TARGET, ZERO guidance, and just so happen to end up with something that works.

    I understand that the concept of the computer algorithm is central to your work, so you are going to vigorously defend it, but it doesn’t change the fact that in the case of the computer, you must first tell it what good means. In the case of organisms, supposedly no one is telling them what good is until after they are already in existence.

  46. And here’s the thing: Mung and I are at a huge advantage when it comes to arguing whether or not an evolutionary algorithm is a search, because no matter what program you can think of, it will never overcome the paradigm difference between programming what winning means and simply surviving by whatever means happens to occur.

    Think of it this way: if you are designing a program to simulate what battling robots do, and all you tell the program to do is survive, without defining what survival means, you get nothing.

    In the game battle of the bots, your objective is to survive while other robots try to destroy you. If this is what you tried to get a computer program to do, the computer can say: hm, I need to survive. OK, I will just leave. Then you say: oh, but another robot will design itself to follow you and destroy you. Then the computer will say: OK, I will just make myself into a hard round sphere that can’t be penetrated. Oh, but another computer will design a way to crush that sphere, or burn it. So then the computer says: OK, I will become a gas that just floats away. Oh, but the other computer’s design can catch that gas and contain it in a bottle, then destroy it. Oh, OK, then I will just become infinite gas. I win. Oh no, I will become infinite too, and catch all of you. Oh, OK, then I will become a thought.

    Slowly but surely, without a definition of winning other than just saying the best wins, you end up with nothing meaningful. The computer cannot decide what you mean by survival. Until you tell it what survival means, it will just keep saying it will exist. It will die and come back to life, it will multiply, it will become invisible, it will become impenetrable, it will become nothing.

    We must always decide for the computer, what it means to win.
