More on Marks, Dembski, and No Free Lunch, by Tom English

Tom English has a great post at his blog, Bounded Science, which I have his permission to cross post here:

Bob Marks grossly misunderstands “no free lunch”

And so does Bill Dembski. But it is Marks who, in a “Darwin or Design?” interview, reveals plainly the fallacy at the core of his and Dembski’s notion of “active information.” (He gets going at 7:50. To select a time, it’s best to put the player in full-screen mode. I’ve corrected slips of the tongue in my transcript.)

[The “no free lunch” theorem of Wolpert and Macready] said that with a lack of any knowledge about anything, that one search was as good as any other search. [14:15] And what Wolpert and Macready said was, my goodness, none of these [“search”] algorithms work as well as [better than] any other one, on the average, if you have no idea what you’re doing. And so the question is… and what we’ve done here is, if indeed that is true, and an algorithm works, then that means information has been added to the search. And what we’ve been able to do is take this baseline, that all searches are the same, and we’ve been able to, in cases where searches work, measure the information that is placed into the algorithm in bits. And we have looked at some of the evolutionary algorithms, and we found out that, strikingly, they are not responsible for any creation of information. [14:40]

And according to “no free lunch” theorems, astonishingly, any search, without information about the problem that you’re looking for, will operate at the same level as blind search. And that’s… It’s a mind-boggling result. [28:10]

Bob has read into the “no free lunch” (NFL) theorem what he believed in the first place, namely that if something works, it must have been designed to do so. Although he gets off to a good start by referring to the subjective state of the practitioner (“with a lack of knowledge,” “if you have no idea what you’re doing”), he errs catastrophically by making a claim about the objective state of affairs (“one search is as good as any other search,” “all searches are the same”).


Does your lack of knowledge about a problem imply that all available solution methods (algorithms) work equally well in fact? If you think so, then you’re on par with the Ravenous Bugblatter Beast of Traal, “such a mind-bogglingly stupid animal, it assumes that if you can’t see it, it can’t see you.” Your lack of knowledge implies only that you cannot formally justify a choice of algorithm. There not only may be, but in practice usually will be, huge differences in algorithm performance.
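The point is easy to check for yourself. Here is a toy experiment (my own illustration, nothing from Wolpert and Macready): on a smooth, single-peaked function, a naive hill climber trounces blind search on the same evaluation budget, even though the NFL theorems guarantee the two tie when averaged over *all* possible functions.

```python
import random

random.seed(1)

LO, HI = -100.0, 100.0

def fitness(x):
    # a smooth, single-peaked function: the kind of structure
    # real-world problems almost always have
    return -abs(x)

def blind_search(budget):
    # uniform i.i.d. sampling; pays no attention to structure
    return max(fitness(random.uniform(LO, HI)) for _ in range(budget))

def hill_climber(budget):
    # a (1+1)-style climber with Gaussian steps; exploits smoothness
    x = random.uniform(LO, HI)
    best = fitness(x)
    for _ in range(budget - 1):
        y = x + random.gauss(0.0, 1.0)
        if fitness(y) > fitness(x):
            x = y
            best = max(best, fitness(x))
    return best

TRIALS, BUDGET = 50, 1000
blind = sum(blind_search(BUDGET) for _ in range(TRIALS)) / TRIALS
climb = sum(hill_climber(BUDGET) for _ in range(TRIALS)) / TRIALS
print(f"blind search: {blind:.4f}   hill climber: {climb:.4f}")
```

NFL speaks only to the average over every conceivable fitness function; it says nothing about the functions you actually meet.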

What boggles my mind is that Marks and Dembski did not learn this from Wolpert and Macready (1997), “No Free Lunch Theorems for Optimization.” In Section III-A, the authors observe that “it is certainly true that any class of problems faced by a practitioner will not have a flat prior.” This means that some problems are more likely than others, and their NFL theorems do not hold in fact. So what is the significance of the theorems?

First, if the practitioner has knowledge of problem characteristics but does not incorporate them into the optimization algorithm, then… the NFL theorems establish that there are no formal assurances that the algorithm chosen will be at all effective. Second, while most classes of problems will certainly have some structure which, if known, might be exploitable, the simple existence of that structure does not justify choice of a particular algorithm; that structure must be known and reflected directly in the choice of algorithm to serve as such a justification. [emphasis mine]

So don’t take my word for it that Bob has twisted himself into intellectual contortions with his apologetics. This comes from an article with almost 2600 citations. If memory serves, Marks and Dembski have cited it in all 7 of their publications.

Marks and Dembski believe, astonishingly, that the NFL theorems say that an algorithm outperforms “blind search” only if some entity has exploited problem-specific information in selecting it, when the correct interpretation is that the practitioner is justified in believing that an algorithm outperforms “blind search” only when he or she exploits problem-specific information in selecting it. This leads them to the fallacious conclusion that when a search s outperforms blind search, they can measure the problem-specific information that an ostensible “search-forming process” added to s to produce the gain in performance. They silently equate performance with information, and contrive to transform the gain in performance into an expression that looks like gain of Shannon information.

Their name-game depends crucially on making the outcome of a search dichotomous — absolute success (performance of 1) or absolute failure (performance of 0). Then the expected performance of a search is also its probability of success. There is a probability p that blind search solves the problem, and a probability ps > p that search s solves the problem, and the ratio ps/p is naturally interpreted as performance gain. But to exhibit the “added information” (information gain), Marks and Dembski perform a gratuitous logarithmic transformation of the performance gain,

I+ = log(ps/p) = log ps − log p = −log p + log ps,

and call it active information. (The last step is silly, of course. Evidently it makes things look more “Shannon information-ish.”) To emphasize, they convert performance into “information” by sticking to a special case in which expected performance is a probability.
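For what it’s worth, the whole of the arithmetic fits in a few lines. This is my sketch of their formula, with made-up success probabilities:

```python
import math

def active_information(p_blind, p_alg):
    # D&M's "active information": log2 of the performance ratio ps/p,
    # meaningful only in the special case where expected performance
    # is a probability of success
    return math.log2(p_alg / p_blind)

# hypothetical numbers: blind search succeeds with probability 2^-20,
# the favored algorithm with probability 2^-10
i_plus = active_information(2**-20, 2**-10)
print(i_plus)  # 10.0 -- "bits" that are nothing but relabeled performance gain
```

Nothing Shannon-ish is happening here; it is a ratio of success probabilities wearing a logarithm.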

Here’s a simple (in)sanity check. Suppose that I have a “pet” algorithm that I run on all problems that come my way. Obviously, there’s no sense in which I add problem-specific information. But Marks and Dembski cherry-pick the cases in which my algorithm outperforms blind search, and, because active information is by definition the degree to which an algorithm outperforms blind search, declare that something really did add information to the algorithm.
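The cherry-picking can be simulated directly. In this sketch (my construction, with arbitrary numbers), the “pet” algorithm inspects the same fixed cells on every problem, so it carries no problem-specific information at all; yet conditioning on its successes yields positive “active information” by sheer definition:

```python
import math
import random

random.seed(0)

N, K, RUNS = 1024, 32, 20000
p_blind = K / N  # probability that a blind search of K cells finds the target

# the "pet" algorithm inspects the same fixed K cells on every problem --
# by construction it carries no problem-specific information whatsoever
pet_cells = set(range(K))

successes = sum(random.randrange(N) in pet_cells for _ in range(RUNS))

# averaged over random problems, the pet algorithm is no better than blind:
print(successes / RUNS, p_blind)

# but condition on the cherry-picked successful runs, and the "measured"
# active information is log2(1/p_blind) > 0 bits -- by definition, not
# because anything added information to the algorithm
print(math.log2(1 / p_blind))
```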

Now, a point I’ll treat only briefly is that Marks and Dembski claim that the cases in which my pet algorithm greatly outperforms blind search are exceedingly rare. The fact is that they do not know the distribution of problems arising in the real world, and have no way of saying how rare or common extreme performance is for simple algorithms. In the case of computational search, we know for sure that the distribution of problems diverges fabulously from the uniform. Yet Marks and Dembski carry on about Bernoulli’s Principle of Insufficient Reason, doing their damnedest to avoid admitting that they are yet again insisting that their subjective assignment of uniform probability be taken as objective.

A bit of irony for dessert [35:50]:

Question: Are you getting any kind of response from the other side? Are they saying this is kind of interesting, or are they kind of putting stoppers in their ears? What’s going on?

Answer: It’s more of the stoppers in the ears thus far. We have a few responses on blogs, which are unpleasant, and typically personal attacks, so those are to be ignored. We’re waiting for, actually, something substantive in response.

80 thoughts on “More on Marks, Dembski, and No Free Lunch, by Tom English”

  1. Tom English:

    But who, other than ID creationists, ever said that biological evolution is search?

    “ID creationists” are a figment of evo-imagination.

    Back in the late 1950′s, people began applying biologically-inspired algorithms to search problems.

    You mean inspired by the way someone thought biological evolution worked.

    What’s evident in the quote above is an incredibly stupid assertion that something is using biological processes to solve search problems.

Evos use GAs to try to “prove” the efficacy of biological evolution. Perhaps you should take it up with them.

    The only thing that keeps Dembski from being laughed off the face of the planet is his masterful rhetoric.

    The only thing that keeps materialists from being laughed off the face of the planet is their masterful rhetoric.

  2. Tom English said:

    It seems to me that Dembski and Marks are inconsistent in what they say about fitness functions. Sometimes they refer to the “search” as exploiting “search space structure” (that would have to include a fitness function), and in other cases they treat the fitness function as something created to guide the search to the target. The latter is so bizarre that I have trouble responding to it. In engineering applications, there is often embedded in the fitness function a model of a physical system, and fitness is a straightforward measure of how good the response of the system is for the given parameters. And D&M give examples consistent with this.

    I had interpreted their statements a bit differently. I thought that by “search space structure” they meant the connectivity of the search space, which points can be searched on the next try starting from the one you are at. For GA or GA-like algorithms the issue is then whether there is a path uphill from where you are now. Search spaces that have a fitness function that is very rough, in the sense that there is little or no correlation between the fitness of neighboring points, cause trouble for GAs.

    To answer an earlier question of yours, no I am not asking about time complexity of the search, I am asking about whether a search can improve the fitness substantially in a fixed amount of time. That, and not whether it can reach some optimal organism, is what all the arguing should be about.

    Tom English:

    Much of what Dembski and Marks say about computing is bunk because they’re smuggling into the literature their beliefs about biological evolution, and all of what they say about biology is bunk because they assume the conclusion that evolutionary processes are engineered to achieve ends. Given that a major part of their game is to conflate engineering and biology, I think that there should be a fairly clean separation between critiques of their engineering claims and critiques of their biological claims.

They certainly regard the fitness surface as something actively chosen out of all possible such functions, with uniform probabilities. As the fitness of each genotype is a positive real number, there are infinitely many possibilities, which makes that hard to define. However, even if we take the set of fitnesses of all genotypes and consider all ways they could be permuted, almost all of those are impossible with real biology or real physics. The physics and the biology will prefer “smooth” fitness functions, at least a lot smoother than the white noise fitness functions.

    I’ve just realized that the inconsistency is probably due to Dembski and Marks saying different things. It’s easy to tell which of them is writing, and I could check that if I could stand going over their papers again.

    An interesting speculation about authorship, but when there are two coauthors they both have to take responsibility for all parts of the paper. I know that from bitter personal experience …

  3. Joe Felsenstein,

    I thought that by “search space structure” they meant the connectivity of the search space, which points can be searched on the next try starting from the one you are at.

    Dembski and Marks have gotten away with giving only examples, and not formal definitions, of what they mean by prior information about the “target location” and “search space structure.” One of their examples is search for the ace of spades in a standard deck of cards, laid out in four groups of 13 cards. If you’re told that the 13 cards in each group are all of the same suit, then blah-blah, wouff-wouff. Whenever a sampling process responds to data in the sample, they say that it is informed. Why? Because “blind search” doesn’t do that. They say that explicitly, but leave it to the reader to “reason” by analogy that a responsive process can “see,” and therefore gains information. When D&M deem a sampling process to have succeeded in a search, they call its responsiveness exploitation of information about the search space structure. I can’t see that they’ve ruled out any sort of structure.

    For GA or GA-like algorithms the issue is then whether there is a path uphill from where you are now. Search spaces that have a fitness function that is very rough, in the sense that there is little or no correlation between the fitness of neighboring points, cause trouble for GAs.

    To answer an earlier question of yours, no I am not asking about time complexity of the search, I am asking about whether a search can improve the fitness substantially in a fixed amount of time.

    The expected maximum fitness in the sample f(x1), f(x2), …, f(xn) after a sampling process has run for t units of time depends only on the size of {x1, x2, …, xn}, i.e., on the number of distinct fitness evaluations. The mathematically typical cases seem pathological, but in fact account for the central tendency of performance distributions (in theory) for algorithms that sample without replacement. Extreme performance in computational sampling is much more common than D&M think precisely because all functions that fit in our computers are highly atypical in theory. The real-world distribution of performance has to be much flatter than the theoretical distribution.

    The difficulty in making statements about GA’s in general is the propensity of classical ones to get stuck for long periods of time. What folks call GA’s nowadays must strike geneticists as very strange. It’s common to see real-valued alleles and Gaussian mutations at all loci. That just about eliminates redundant fitness evaluations, in practice.

    A more “genetic” approach is to go with an “annual” algorithm, in which the parents die in each generation. Then the population will drift across the fitness “landscape.” Although there may be many repeated fitness evaluations, the algorithm is unlikely to get stuck. I should mention that researchers in evolutionary computation almost always measure optimization performance on the entire run, and not just the final population.
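A minimal “annual” algorithm of the kind described is only a dozen lines (a sketch with arbitrary parameters, not any particular published GA): real-valued alleles, Gaussian mutation, every parent dying each generation, and performance measured over the whole run rather than in the final population.

```python
import random

random.seed(2)

def fitness(x):
    # a smooth one-dimensional fitness "landscape" with its peak at x = 3
    return -(x - 3.0) ** 2

POP, GENS, SIGMA = 20, 100, 0.5

# real-valued alleles, initialized at random
pop = [random.uniform(-10.0, 10.0) for _ in range(POP)]
best_over_run = max(fitness(x) for x in pop)

for _ in range(GENS):
    def parent():
        # binary tournament selection among the current generation
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b
    # "annual" scheme: every parent dies; Gaussian-mutated offspring
    # replace the whole population
    pop = [parent() + random.gauss(0.0, SIGMA) for _ in range(POP)]
    best_over_run = max(best_over_run, max(fitness(x) for x in pop))

final_best = max(fitness(x) for x in pop)
print(best_over_run, final_best)
# best_over_run can exceed final_best: good genotypes found mid-run
# are routinely lost again under the annual scheme
```

The gap between the whole-run best and the final-population best is exactly why the measurement convention matters.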

    Now, as for fitness improvement, there are of course diminishing returns. But if an algorithm draws a sample of 921 thousand distinct elements of the domain, you can be 99.99% sure that at least one of them scores in the top 0.001%. Again, if the sampling is done by computational evolution, then the fittest individual probably will not be in the final population.
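The 921 thousand figure is just the standard calculation, treating distinct draws from a huge domain as approximately independent:

```python
import math

p = 1e-5  # probability that one uniform draw lands in the top 0.001%

# smallest sample size n with P(at least one draw in the top 0.001%)
# reaching 99.99%, under the independence approximation
n = math.ceil(math.log(1 - 0.9999) / math.log(1 - p))
print(n)  # a little over 921,000

p_hit = 1 - (1 - p) ** 921_000
print(round(p_hit, 4))
```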

    If this still does not persuade you, then tell me what you’d like to see in a little program. (Programming is much, much easier for me than writing.)

    However even if we take the set of fitnesses of all genotypes and consider all ways they could be permuted, almost all of those are impossible with real biology or real physics. The physics and the biology will prefer “smooth” fitness functions, at least a lot smoother than the white noise fitness functions.

    And Dembski and Marks will then call physical order “information” that was added by the “search-forming process.” Go to this lecture by Marks, skip to 2:30, and within 20 seconds you will see a slide with a text box “No Information” pointing to Genesis 1:2. I’d long suspected that Dembski got the uniform distribution from that verse. Marks goes on to describe Genesis 1 as an account of creation of information.

  4. P.S. – In the lecture, Marks gives an example of “computer search” in which the fitness function is a “pancake taster.” The function takes as arguments the cooking times for the two sides of the pancake. This is a perfectly good example, from my perspective. Fitness is graded, not dichotomous, as is the rule in engineering applications. I don’t recall that Marks gives any indication that the fitness function has been rigged to guide the “search.” I can’t recall if he says in the lecture that the search gains information from the fitness function, which “knows about pancakes,” but he makes similar claims elsewhere. And he is out-and-out wrong about that.

  5. Tom, a key point here is that fitness functions found in biology are not “mathematically typical”. And since they aren’t there are (strong) correlations between fitnesses of points that can be reached from each other by a single mutation.

    D&M would then say for such a case that there is an “exploration of information about the fitness surface”. Which does not worry me because it is a very very far cry from having the ultimate solution built into the algorithm.

I think their “active information” is going to turn out to be a fairly useless concept. Its main use now is to persuade naïve onlookers that GAs that we run have been given an unfair advantage, or else that natural selection itself has been given an unfair advantage by a Designer who designed the fitness surface. I argue that just plain old physics could have done that, in fact.

    Are we disagreeing? Talking past each other?

    PS just technical minutiae: real valued alleles and Gaussian mutations have been used in models of quantitative characters in evolutionary quantitative genetics since the work of Russ Lande in the 1970s. So we’re fine with that.

  6. More quibbling:

    Tom English:
    I should mention that researchers in evolutionary computation almost always measure optimization performance on the entire run, and not just the final population.

Evolutionary biologists are mostly interested in how good the final state is, because that is what will affect future generations beyond that. If a great solution is found and then lost, that impresses them less. So the two fields probably just have different perspectives on this.

    Now, as for fitness improvement, there are of course diminishing returns. But if an algorithm draws a sample of 921 thousand distinct elements of the domain, you can be 99.99% sure that at least one of them scores in the top 0.001%.

    That will be true in the “mathematically typical” cases. But in typical biologically realistic cases, the correlations among neighboring points, and the fact that GAs sample nearby points a lot, mean that the calculation does not work and that one can in fact remain far from the top 0.001% even after sampling 921,000 points, particularly if there are multiple peaks in the fitness function.
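This point is easy to demonstrate as well. In the toy local search below (my construction, not a real GA), 921,000 evaluations on a two-peaked function never come anywhere near the global peak, because the searcher samples only nearby points and is captured by the broad local hill:

```python
import random

random.seed(3)

def fitness(x):
    # a broad local hill at x = 75 plus a needle-thin global peak at x = 5
    return max(10.0 - abs(x - 75.0) / 10.0,
               100.0 - 10_000.0 * abs(x - 5.0))

# a local searcher started somewhere on the broad hill's slopes
x = random.uniform(20.0, 100.0)
best = fitness(x)
for _ in range(921_000):
    y = min(100.0, max(0.0, x + random.gauss(0.0, 1.0)))
    if fitness(y) > fitness(x):
        x = y
        best = max(best, fitness(x))

print(best)  # pinned near the local optimum's value of 10, far below 100
```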

    Again, these are quibbles with statements of yours but do not affect what I hope is basic agreement between us.

  7. The folks poring over papers by D&M should look for stuff like “Bernoulli’s principle of insufficient reason therefore applies and we are in our epistemic rights to assume that the probability distribution on Ω is uniform….” D&M always shift to treating the distribution as uniform in fact when they claim that a “search” succeeds only when something has in fact added information.

    It’s a strange epistemology that gives us the “right” to treat an uninformed prior as a conclusion. If I know nothing about gravity and conclude that dropped objects are equally likely to fall in any direction (even up), does nature violate my rights when it behaves otherwise?

  8. Joe Felsenstein,

    Tom, a key point here is that fitness functions found in biology are not “mathematically typical”.

    No argument. But I believe that the typical (algorithmically incompressible) fitness function remains interesting. It renders evolutionary adaptation impossible, even though the corresponding optimization problem is benign in and of itself. An optimization problem is intrinsically hard only if the fitness function is compressible. The compressible fitness functions in biological models make for relatively hard optimization problems. (The expected performance I gave above does not depend on the size of the genome.)

    But in typical biologically realistic cases, the correlations among neighboring points, and the fact that GAs sample nearby points a lot, mean that the calculation does not work and that one can in fact remain far from the top 0.001% even after sampling 921,000 points, particularly if there are multiple peaks in the fitness function.

    I hadn’t read this when I wrote what I did above. I was careful to distinguish fitness functions from optimization problems, and I’m not going to lapse into mix-and-match here. I’m saying that biologically realistic fitness functions make for hard optimization problems, and that this has nothing to do with how biota sample the phase space on which fitness is defined. To play creationist for a moment, Designer either has to make the “problem” harder or easier than random to enable “Darwinian search” to “hit the target.” And because Designer wants nature to testify to Her glory, She makes it harder.

    Evolutionary biologists are mostly interested in how good the final state is, because that is what will affect future generations beyond that.

I think you’ve just identified a big problem with D&M’s claim that their analyses in “Life’s Conservation Law” apply to biological evolution. “Hitting the target” in that chapter is just the same as in their engineering papers. No biologist would say that evolution had produced an outcome unless it were in some sense “latched.”

    I think their “active information” is going to turn out to be a fairly useless concept. Its main use now is to persuade naïve onlookers that GAs that we run have been given an unfair advantage, or else that natural selection itself has been given an unfair advantage by a Designer who designed the fitness surface.

    That’s precisely the use that worries me. Some of the naïve onlookers are state legislators. Just how sophisticated will the judge in Dover II be? We really did luck out with Judge Jones.

    Are we disagreeing? Talking past each other?

    Agreeing. I am probably harping too much on technical points.

    PS just technical minutiae: real valued alleles and Gaussian mutations have been used in models of quantitative characters in evolutionary quantitative genetics since the work of Russ Lande in the 1970s. So we’re fine with that.

    I’m glad to have you set me straight on that. When GA investigators made the shift, their algorithms became indistinguishable from evolution strategies that had been around for a couple decades.

  9. I’m coming late to this, so I’ll just throw in a couple of more or less trivial remarks. Tom wrote above that

    It seems to me that Dembski and Marks are inconsistent in what they say about fitness functions. Sometimes they refer to the “search” as exploiting “search space structure” (that would have to include a fitness function), and in other cases they treat the fitness function as something created to guide the search to the target. The latter is so bizarre that I have trouble responding to it.

    And earlier Tom also wrote

    Addressing Avida, D&M complain, as Dembski did previously, about the “royal road” to the target in the fitness function.

These all have the same theme, of which ID creationists obsessing over Dawkins’ WEASEL illustration is but an extreme example. In effect, they are arguing that a problem statement (which in GAs is the fitness function, by which I mean the equation that assigns different fitness values to individuals in populations of replicators, which in turn bias the probability of reproduction) must somehow smuggle in the problem solution, thus illegitimately front-loading the search. That’s D&M’s core argument, what they call providing “active information.” D&M also ignore the fact that the shape of the Avida fitness landscape is completely under the control of the experimenter, and it’s instructive to play with it, watching changes in the dynamics of the distribution of critters/adaptations through time/generations. A high school student whose science fair project I mentored this spring did a project like this, and he’s going to the International Science and Engineering Fair next week. Tom’s right: it’s bizarre.

    Tom again:

    But who, other than ID creationists, ever said that biological evolution is search?

    As I’ve said numerous times, using search as a metaphor for, or model of, biological evolution is a snare and a deception. Biological populations do not “search” for solutions; they (occasionally) find adaptations –“solutions”–as a fortuitous by-product of the operation of the variation/selection process. 

    Joe wrote

    Tom, a key point here is that fitness functions found in biology are not “mathematically typical”. And since they aren’t there are (strong) correlations between fitnesses of points that can be reached from each other by a single mutation.

    Put another way, due to the several variation-generating mechanisms (e.g., mutations and recombination) biological populations sample a restricted volume in genotype space starting from an initial state (the restricted subvolume currently occupied by the population) that is already known to be viable: the bare fact that the population is replicating tells us it is not in some random location in genotype space but rather occupies a satisficing (virtually certainly not absolutely optimal) subvolume. It does not randomly sample from the whole of genotype space but rather samples from ‘nearby’ volumes of that space, where ‘nearby’ is defined in terms of single applications of the variation-generating operators. And ‘nearby’ locations are more likely than some random point in genotype space to also be viable locations; nature is stuffed full of gradients. Darwin got it right when he characterized biological evolution as ‘descent with modification.’ The ‘modification’ is of a current trait that is already in some degree fit.


    Finally, with respect to the “satisficing” mention above, as the old joke about the two hikers who encountered a bear (“I don’t have to outrun the bear,” said one of the hikers. “I only have to outrun you.”) illustrates, biological natural selection is not an optimizing algorithm. A little better than competitors is plenty good enough: the hiker doesn’t (usually) have to be an Olympic sprinter.

  10. According to Wesley Elsberry, AVIDA is not a genetic algorithm. Not only that, when it uses realistic parameters it does not produce anything worth talking about.

  11. Let me dissent a bit from this.  Evolutionary biologists use models with fitness surfaces a lot — Sewall Wright’s metaphor is more than a literary flourish, it is in wide use.  This summer I will co-teach a week-long course in Evolutionary Quantitative Genetics, in which we will make a lot of use of such models.  Granted, we do not insist that the search find the optimum fitness over all possible genotypes.  So in that sense the models aren’t a search.  But we do find it useful and fruitful to model evolution this way, without taking into account all sorts of departures from the model, such as coevolution and changes of fitness through time.

    So if by “search” you mean seeking the genotype of highest possible fitness among all possible fitnesses, and failing if you don’t find it, then I agree. But if you are arguing that models of evolutionary processes with populations moving on a fitness surface are not in use, well, you’d be surprised.

  12. Joe wrote 

    So if by “search” you mean seeking the genotype of highest possible fitness among all possible fitnesses, and failing if you don’t find it, then I agree. But if you are arguing that models of evolutionary processes with populations moving on a fitness surface are not in use, well, you’d be surprised.

    I have no problem with models of evolutionary processes that invoke “populations moving on a fitness surface…”. Shoot, I’ve developed some ways of describing (graphing) low-dimensioned slices of high-dimensioned fitness spaces in my company’s modeling business. (That’s part of our effort to reduce the number of variables/dimensions–genes–we employ. Finding that some variables are consistently associated with ‘rough’ fitness surfaces across slices allows us to eliminate them from our GAs.)

    It’s the “search” (and “seeking”) metaphor that I object to. It leads one’s thinking into a teleological morass, as the ID creationists like D&M amply illustrate. The movement of populations on fitness surfaces (or in higher-dimensioned fitness spaces) is a by-product of the variation/selection algorithm, not the goal, object, or purpose of the process. Evolution works not because there are peaks to be sought in fitness spaces, but because there are gradients and the variation/selection mechanism automatically differentiates among the slopes of gradients.

  13. That’s why I prefer to envisage fitness landscapes (if I must) upside down :)

  14. 1- variation/ selection is not an algorithm

    2- As you said, in real life whatever is good enough is all that matters

    3- whatever is good enough changes 

  15. RBH,

    I gave up on this thread after several days. Sorry to have missed you. I’d like to respond, if you’re around.

  16. RBH,

    Helping your mentee to achieve as he has would be special under any circumstance. But what a neat trick to pull it off, in your particular community, with Avida. Congrats to both of you.

    I tried (too) hard to write a decent response, but never managed to corral my wits. I’ll take another shot at it later.

  17. RBH,

    Over the past 21 years, I’ve understood computational evolution more and more, and biological evolution less and less. The explosion of knowledge about evolutionary mechanisms in life, coming from various new lines of investigation, often arcane, is overwhelming even for biologists. When folks make biological evolution out to be like computational evolution, they not only oversimplify it, but also throw out gobs of explanation of it. They evidently think that computational evolution is something very strong, and that creationists have no choice but to address it. The reality is that the creationists cannot begin to cope with the consilience of evolutionary theories.

    Dembski and Marks make it perfectly clear why they love computational evolution:

    The Law of Conservation of Information, however, is not merely an accounting tool. Under its aegis, intelligent design merges theories of evolution and information, thereby wedding the natural, engineering, and mathematical sciences. On this view (and there are other views of intelligent design), its main focus becomes how evolving systems incorporate, transform, and export information. [emphasis added]

    They cannot deal with the science, so they reduce “Darwinian” evolution to computational evolution, and then conflate their (bogus) engineering and mathematical analyses with science.

    Lewontin (“The Units of Selection,” 1970) gave the logic of Darwin’s natural selection. Evolution is logically necessary when:

    1. Different individuals in a population have different morphologies, physiologies, and behaviors (phenotypic variation).
    2. Different phenotypes have different rates of survival and reproduction in different environments (differential fitness).
    3. There is a correlation between parents and offspring in the contribution of each to future generations (fitness is heritable).

    (The term “fitness” appears only in the abbreviations, and is unimportant.) D&M obviously don’t want to play the ball where Lewontin laid it. This is quite interesting, because “natural selection is a tautology” was long a favorite line of creationists (see John Wilkins’ remarks). Now the standard approach is to treat fitness as objectively real, e.g., as something that guides the “search” to the “target.” (And Marks speaks approvingly of Sanford’s Genetic Entropy in the interview I link to in the OP.)

  18. I’ve read “Genetic Entropy”.  It is one of the crassest pieces of nonsense I ever read.

  19. RBH:

    As I’ve said numerous times, using search as a metaphor for, or model of, biological evolution is a snare and a deception. Biological populations do not “search” for solutions; they (occasionally) find adaptations –”solutions”–as a fortuitous by-product of the operation of the variation/selection process.

    Well… When you say “find,” you invite, “How can it find if it does not search?” And when adaptation leads to adaptations…?

    I’ve grown more and more concerned about back-application of concepts in evolutionary computation to biological evolution. Several years ago, when Google Books was still useful, I did some fairly serious investigation of the shift in language. As best I could tell, Sewall Wright never indicated that populations searched or optimized. I found a report on a 1955 conference, Concepts of Biology (full-text), that includes a long transcript of discussion by the invitees, including not only Sewall Wright, but also Ernst Mayr and George Gaylord Simpson. The language is scrupulous. I have to believe that evolutionists back then were determined to avoid any hint of teleology.

    The notions of “evolutionary search” and “evolutionary optimization” first appeared in the late 50’s and early 60’s, when early forms of computational evolution were applied to problems in search and optimization. (Oops. Left out G.E.P. Box’s EVOP.)

    Ground zero of “optimization” in evolutionary theory is, I am fairly sure, E. O. Wilson. The earliest case I see at the moment is “The Ergonomics of Caste in the Social Insects” (1968). References to “optimization” in the biological literature skyrocketed in the 80’s, and I suspect that the availability of PC’s, not careful thought, was the cause.

  20. I once heard the poet Richard Wilbur say, “Thank God James Joyce wrote Finnegans Wake, so now nobody else has to.” God aside, that’s more or less my feeling about your reading Genetic Entropy.

  21. Tom,

    The language is scrupulous. I have to believe that evolutionists back then were determined to avoid any hint of teleology.

    And, of course, this, when made explicit (because it often needs to be made explicit), is grist to the Creationist mill, as witness frequent dispatches from the quote mines. Long lists can be composed of authors emphasising the non-teleological nature of evolution, which can be spun back as pushing a ‘materialist ideology’. Otherwise (they ask mock-innocently), why even mention it?

  22. They cannot deal with the science, so they reduce “Darwinian” evolution to computational evolution, and then conflate their (bogus) engineering and mathematical analyses with science.

    I have for some time noticed that creationists will take some abstract property of evolution, present it as a definitive model, find some limitation in the model, then argue that evolution shares the limitation of the abstraction. That looks to me like equivocation, and it seems like it pretty much sums up the entire ID argument. 

    Equivocation with lipstick. 

  23. Alan and Petrushka:

    Your remarks are related.

    Alan: Long lists can be composed of authors emphasising the non-teleological nature of evolution, which can be spun back as pushing a ‘materialist ideology’.

    Actually, it’s the explanation that is non-teleological. Even at the level of fundamental physics, there are huge dangers in blurring the distinction between a successful explanation and that which it explains. The Discovery Institute runs on heat, not light, and it would have a lot less to go on if some outspoken atheists were to qualify their statements about evolution appropriately. The ID movement has no better friend than Richard Dawkins. It has no use for Steven Weinberg, a Nobelist who openly discusses the epistemological limits of science, and who makes careful and gentlemanly arguments against religion.

    Petrushka: I have for some time noticed that creationists will take some abstract property of evolution, present it as a definitive model, find some limitation in the model, then argue that evolution shares the limitation of the abstraction.

    That’s a combination of reification and naive falsification (Popper’s original version, which he later realized was untenable). The problem is that scientists reify (alternatively, hypostatize) constantly. They aid and abet in the first part of what you describe. I would nail Dembski and Marks for reification of fitness functions in “Life’s Conservation Law,” if not for some legitimate attempts to treat them as physically real.

  24. This discussion is circling around the role of models in science and how they contribute (and confuse). I think I’ll stay out of that morass for the moment, though it’s an important one.


    My use of “find” above is just the kind of unthinking default that I was talking about (or at least talking around), and pointing it out is a service. Language betrays us constantly.

  25. I should have told you that I’m struggling along with everyone else. I was trying out “exploration” and “discovery” not long back. They don’t work either.

    Within the past year, I said that life spreads, and Mike Elzinga suggested to me that it percolates. It’s unlikely that I’ll ever understand percolation.

  26. Back in the 1960s, when I was a graduate student in Dick Lewontin’s lab, he was very concerned to find out to what extent simple population-genetic models would go uphill on the fitness surface.  Theoretical work on models that had multiple loci with interactions among loci was very new.  There was no hope that natural selection of interacting loci found the global optimum.  It turned out that there were cases, even in an infinitely large population, where fitness declined in every generation.

    The most we could end up saying is that in models of infinite population that had no genetic drift, the outcome most of the time is that the final fitness is higher than the initial fitness when you have a model with natural selection and recombination.

    And of course, when you allow genetic drift or mutation, they will typically allow the population to wander down off any local fitness peak that it finds, or be pushed slightly off it by mutation.

  27. I should add that the models we were studying also had fitnesses that did not change with time and did not depend on the composition of the population.  In the latter case one can actually find cases where the population goes partway up a peak, then stops moving and settles down there. 
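The kind of model described above, where mean fitness can decline even in an infinite population, can be sketched as a deterministic two-locus haploid recursion: selection, then recombination, with constant, frequency-independent fitnesses and no drift. All parameter values here are hypothetical choices of mine. With positive epistasis (the AB and ab gametes fitter) and strong initial linkage disequilibrium, recombination keeps breaking up the fit combinations, and mean fitness declines generation after generation.

```python
# Deterministic two-locus haploid model: selection, then recombination at
# rate r, in an infinite population with constant fitnesses (no drift).
# Gamete order: AB, Ab, aB, ab. All parameter values are hypothetical.

def generation(x, w, r):
    """One generation: haploid selection, then recombination."""
    wbar = sum(wi * xi for wi, xi in zip(w, x))
    y = [wi * xi / wbar for wi, xi in zip(w, x)]  # frequencies after selection
    D = y[0] * y[3] - y[1] * y[2]                 # linkage disequilibrium
    return [y[0] - r * D, y[1] + r * D, y[2] + r * D, y[3] - r * D]

def mean_fitness(x, w):
    return sum(wi * xi for wi, xi in zip(w, x))

w = [1.1, 1.0, 1.0, 1.1]      # AB and ab fitter: positive epistasis
x = [0.45, 0.05, 0.05, 0.45]  # strong positive linkage disequilibrium
r = 0.5                       # free recombination

for t in range(5):
    print(t, round(mean_fitness(x, w), 4))
    x = generation(x, w, r)
# mean fitness runs 1.09, 1.0704, 1.0612, ... -- declining every generation
```

With these particular numbers the decline is monotonic as the population settles toward a selection-recombination balance; with multiplicative (non-epistatic) fitnesses, by contrast, mean fitness does not decline.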

Leave a Reply