The battle over cumulative selection and Dawkins’ Weasel program has raged on for some months [years?] here at TSZ and across numerous threads. So can it possibly be that we now, finally, have a definitive statement about cumulative selection?
Mung: And whether or not my program demonstrates the power of cumulative selection has not been settled…
To which keiths responded:
keiths: Anyone who understands cumulative selection can see that it doesn’t, because your fitness functions don’t reward proximity to the target — only an exact match. The fitness landscapes are flat except for a spike at the site of the target.
So there you have it. You need a target and a fitness function that rewards proximity to the target.
Imagine my surprise when I discovered that I had said the exact same thing nine months ago.
Mung: Here’s what the Weasel program teaches us:
1.) In order to demonstrate the power of cumulative selection one must first define a target.
2.) In order to demonstrate the power of cumulative selection one must define a fitness function that increases the likelihood of the search algorithm finding the target relative to the likelihood of a blind search finding the target.
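To make the contrast concrete, here is a minimal sketch, assuming the usual 28-character target phrase and 27-symbol alphabet, of the difference between a fitness function that rewards only an exact match and one that rewards proximity to the target. This is just an illustration, not anyone’s actual Weasel code.

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # assumed 27-symbol alphabet (letters plus space)
TARGET = "METHINKS IT IS LIKE A WEASEL"    # the usual Weasel target phrase

def exact_match_fitness(phrase):
    """A flat landscape with a single spike: every non-target phrase scores 0."""
    return 1 if phrase == TARGET else 0

def proximity_fitness(phrase):
    """Rewards proximity: the count of characters already matching the target."""
    return sum(1 for a, b in zip(phrase, TARGET) if a == b)

print(proximity_fitness("METHINKS IT IS LIKE A WEASEL"))   # 28
print(proximity_fitness("METHINKS IT IS LIKE A FERRET"))   # 24, closer than a random phrase
print(exact_match_fitness("METHINKS IT IS LIKE A FERRET")) # 0, no better than gibberish
```

Under the first function selection has nothing to accumulate, since every near-miss scores the same as gibberish; under the second, partial matches score higher and can be carried forward from one generation to the next.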
Now perhaps I have misunderstood keiths here. Perhaps he did not really say, or really mean, what I think he said, or what it appears he said. So I’d like to hear his response.
Is it possible that keiths has agreed with me all along while expending every effort possible to make it seem otherwise?
Just so there’s no mistake, here he is again saying the same thing:
keiths: Mung,
Besides failing in your attempt to code a Weasel and contradicting yourself regarding your intent, you also failed to demonstrate the power of cumulative selection in your program.
1) Your program doesn’t evolve a phrase; it evolves individual letters, one after the other, latching each one when it matches.
2) There is a separate fitness function for each letter.
3) The fitness functions don’t reward proximity to the target — they only reward an exact match for a single character.
The only thing your program demonstrates the “power” of is latching, not cumulative selection.
It’s a remarkable display of incompetence.
Perhaps. But it served its purpose. keiths admits I was right all along. So incompetence? Perhaps not.
You need a target. You need a fitness function that rewards proximity to the target. Is that your story, keiths, and are you sticking to it? Weasel out of this!
I predict keiths will try to make this about my program and what it does or does not demonstrate rather than his revelation about cumulative selection.
My program DOES NOT SEARCH! Sheesh.
Yet it manages to perform better than a random search. It performs better than your own Weasel program, which IS A SEARCH.
That’s how evolution works. Just ask petrushka.
And that’s how “demonstrating the power of cumulative selection” works.
Your suggestion that we should make the search space of each letter the same size as the entire search space for every possible combination of characters is ludicrous.
That’s not how evolution works, that’s not how cumulative selection works, and that’s not how Weasel works. Why not apply that same suggestion to your own Weasel program and see how well it performs?
Do you spend most of your time applying it to yourself?
For example:
What was the loophole in his definition?
By the way, I don’t think keiths was trying to define cumulative selection, so I don’t think your quip is even relevant.
And you’re correct, keiths and I could both be wrong. And I can admit when I am wrong. 🙂
Neither does keiths, except when he does. [Hey, don’t blame me!]
So first we have keiths clearly claiming that a target is necessary. He may deny it, but it is what he said. He argues that my program does not “reward proximity to the target.” His words.
And because of this, he says, my program does not demonstrate the power of cumulative selection. His objection only makes sense if he thinks a target is necessary.
Yet he also claims, or appears to claim, that a target is not necessary. He claims programs exist which use cumulative selection but which lack any target or targets.
Do these programs also “demonstrate the power of cumulative selection”? I’ll leave it up to keiths to make that case.
Did you notice that keiths never actually addresses the question, what is necessary to demonstrate the power of cumulative selection?
But we do have one program, Weasel, and this program, we are assured by keiths, does demonstrate the power of cumulative selection, and it does this in just the manner I claimed nine months ago. I just don’t know why it took nine months for keiths to indicate his agreement with me.
Your mistake is obvious, as usual. Cumulative selection does require a target, but Weasel doesn’t. Or perhaps your own program was not an actual version of the Weasel program. How soon you forget.
The latest revelation we’ve received from keiths is that his DriftWeasel program is not a Weasel program after all.
Mung,
Yes, those are my (correct) words. Your program, which has a specific target, does not reward proximity to that target, and thus fails to demonstrate the power of cumulative selection. Other programs, which don’t have a specific target, don’t need to reward proximity to a nonexistent target in order to demonstrate the power of cumulative selection.
This should be obvious to anyone smarter than an internet dipshit.
Right. So which is more likely — that I contradicted myself, or that you screwed up yet again? Hint — you aren’t the brightest bulb in the chandelier, to put it diplomatically.
Give up, Mung. You lost this one already.
keiths,
What is proximity to a target analogous to in evolution?
phoodoo,
Fitness.
The difference is that in Weasel, there is only one fitness peak, at the exact location of the predetermined target phrase. In evolution, there are multiple fitness peaks, and their locations aren’t predetermined and can shift over time.
keiths,
Fitness=proximity to a target? What proximity? What target?
Do you understand the question?
No, doofus. Fitness in evolution is analogous to proximity to the target in Weasel. (And in Weasel, but not in evolution generally, fitness is determined by proximity to the target.)
Have you forgotten what your question was already?
Jesus, phoodoo.
That doesn’t make sense. Why would proximity to a specified target be analogous to a creature which can reproduce?
Analogies have to have a reason for being called analogous, or else we shouldn’t call them analogies. You understand how that works?
You think fitness is a creature that can reproduce?
I’d suggest sitting this one out, phoodoo.
Actually I think fitness in evolution is a concept with no meaning whatsoever, but according to your side, that is what it means.
I mean, when your side chooses to be less ambiguous than possible.
I’m going to (partially) side with Mung for a bit, just to see how it feels. Or maybe, crossing the streams with another thread, I have no choice given the state of the universe.
If I understand you correctly, you’re saying that Mung’s Weasel variant fails to demonstrate cumulative selection because it doesn’t take the overall fitness function into account. I disagree with that conclusion. While Mung’s implementation consists of a series of independent random searches, none of which demonstrate cumulative selection individually, the latching behavior does demonstrate cumulative selection over the sequence.
I do agree with your related point, that other EAs can demonstrate cumulative selection by taking their fitness function into account, even when that fitness function is not proximity to a specific target. Some confusion could be avoided if you could provide an operational definition of “cumulative selection” as you’re using it.
You have been provided with multiple references to consistent definitions of biological fitness.
Continuing to claim otherwise is not honest behavior.
I don’t program in the language of mung’s program, but my limited understanding of it is that he mutates one character at a time in sequence until that character is correct.
In the usual sense of the language he is approaching the overall target.
It strikes me that both sides might be definition lawyering, a tactic I find unhelpful and boring.
It does come down to definitions and, more importantly, whatever point Mung is trying to make. Since I found myself agreeing with Mung, though, I had to jump in.
🙂
When keiths finally admitted that what was required for cumulative selection were the very things I had pointed out nine months ago, I thought we were making real progress.
I thought, great, now perhaps we can come up with an operational definition of cumulative selection, a way to quantify it, and yes, even write tests for it.
But alas. Now we hear from keiths that an operational definition of cumulative selection could only be applied to Weasel and nothing else.
According to keiths his own Drift Weasel program isn’t a Weasel program.
There is no such thing as a drift version of Weasel.
If I understand Keiths’s “Drift Weasel” program, it is a Weasel with 20 offspring and 1 surviving adult, with selection based on the number of sites matching the target. It has mutation. It is a Weasel. It also has genetic drift. As does Dawkins'[s] Weasel.
Let me qualify my above statement. Obviously there is no perfect agreement on what constitutes “a Weasel”. Keiths’s program does not always select as the survivor the individual offspring that has the closest match to the target. Instead it has a selection coefficient, which controls the probability that an offspring is chosen, and the selection is biased toward well-matching offspring to that extent.
If one requires a “Weasel” to have infinitely-strong selection, as Dawkins'[s] Weasel does, then the Drift Weasel is not a Weasel. But that requirement is not inevitable; it’s just a matter of semantics, and defining a Weasel one way or the other does not tell us anything important about its behavior.
His program may be compared with Wright-Fisher Weasels such as the ones I discussed. His Drift Weasel differs from those in having a finite number of offspring, while a Wright-Fisher model has an infinite number of offspring.
There is, of course, a “drift version of Weasel”, since the original Dawkins Weasel does have genetic drift occurring in it, as well as very strong selection.
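For readers who want something concrete, here is one way to sketch the kind of program Joe describes. This is my reading of the description above, not keiths’s actual code; the brood size, mutation rate, and the exact form of the selection weighting are assumptions.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def matches(phrase):
    return sum(1 for a, b in zip(phrase, TARGET) if a == b)

def drift_weasel_step(parent, n_offspring=20, mutation_rate=0.05, s=0.5):
    """One generation: mutate, then choose the single surviving adult probabilistically.

    With s = 0 every offspring is equally likely to survive (pure drift);
    larger s biases the choice toward better-matching offspring, but the
    best one is not guaranteed to win, so drift remains possible.
    """
    brood = []
    for _ in range(n_offspring):
        child = [c if random.random() > mutation_rate else random.choice(ALPHABET)
                 for c in parent]
        brood.append("".join(child))
    weights = [(1 + s) ** matches(child) for child in brood]
    return random.choices(brood, weights=weights, k=1)[0]
```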
My program has a chromosome with multiple loci. One locus mutates at random until it is found to be beneficial, with the result that it becomes fixed in the population.
Then a second locus mutates until it also becomes beneficial, with the result that it likewise becomes fixed in the population.
Cumulative selection in action.
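A minimal sketch of the scheme just described, assuming the standard Weasel target and alphabet. This is an illustration of the described behavior, not the original program, whose source isn’t reproduced here: one locus mutates until it matches, is then latched, and the search moves on to the next locus.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def latching_search():
    """Mutate one locus at a time; once a locus matches it is fixed forever."""
    phrase = [random.choice(ALPHABET) for _ in TARGET]
    steps = 0
    for i, wanted in enumerate(TARGET):     # loci are handled strictly in sequence
        while phrase[i] != wanted:          # keep mutating only this locus
            phrase[i] = random.choice(ALPHABET)
            steps += 1
    return "".join(phrase), steps
```

On average this takes roughly 27 mutations per locus, or about 27 × 28 ≈ 750 mutations in total, far fewer than a blind search over whole phrases would need.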
keiths whines that my program does not have a single fitness peak but elsewhere claims [without evidence] that a single fitness peak is not necessary for cumulative selection. Is a consistent argument that doesn’t contradict itself too much to expect?
Patrick,
Yea, yea, you need a definition for cumulative selection, a definition for random, a definition for fitness, a definition for a decision… You need a lot of definitions.
But you haven’t provided a definition for definition yet, so how can we get started?
From the OP:
Chalk one up for a successful ID prediction.
This becomes boring. OK, in Mung’s program, once a given letter matches the target letter at that position, it is latched and can never change again. In Dawkins’ program, letters do not latch and even correct letters at any given position are free to mutate away. Both approaches produce the target phrase fairly quickly.
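For comparison, here is a minimal non-latching Weasel along the lines just described, with illustrative parameter values (mine, not Dawkins’s): every character in every offspring is free to mutate, and the best-matching offspring becomes the next parent.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def score(phrase):
    return sum(1 for a, b in zip(phrase, TARGET) if a == b)

def weasel(n_offspring=100, mutation_rate=0.05):
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        brood = ["".join(c if random.random() > mutation_rate else random.choice(ALPHABET)
                         for c in parent)
                 for _ in range(n_offspring)]
        parent = max(brood, key=score)   # no latching: correct letters can still mutate away
        generation += 1
    return generation
</code>
```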
And neither model is much of a match for the Real World, where there is no target, where the only thing that matters is that the mutated version must usually survive better, or at worst just a little bit worse, than the prior version. So the letters could mutate into any number of phrases, so long as a reader could usually guess some meaning. The higher the percentage of readers guessing the same meaning, the greater the fitness of the phrase, regardless of the guessed meanings. If the “same guess percentage” is used as a feedback factor, the phrase will converge on the guessed meaning.
But what about “the power of cumulative selection!”
Without Weasel, what do evolutionists like keiths have?
Empty rhetoric.
UD devoted at least 900 posts to arguing whether Dawkins’ program latched. I believe KF expressed the opinion that it would not work without latching.
Which, of course, is rubbish.
There is nothing like latching in biology. It would be interesting to know if mung had some purpose in mind.
There’s nothing like Weasel in Biology. Meanwhile, evolutionary arguments against ID rely on latching.
Anything to avoid the issues raised in the OP, but that’s ok.
What argument relies on latching?
Actually there are rough analogs to Weasel in biology.
For example, the Russian biologist who bred tame foxes selected those most nearly tame. Natural selection doesn’t usually have a target, but breeders competing in show have targets. It’s still biology and still evolution.
I am unaware of any arguments against ID that rely on latching.
Latching also makes little difference to the time a Weasel takes to reach the target phrase. With or without latching, a thousand or so steps, compared to something on the order of 10^40 when there is no selection.
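A quick back-of-the-envelope check of that comparison, assuming the standard 28-character target and 27-symbol alphabet:

```python
# Expected scale of a blind search that samples whole 28-character phrases at
# random from a 27-symbol alphabet, with no selection at all.
blind_search_tries = 27 ** 28
print(f"{blind_search_tries:.2e}")   # roughly 1.2e+40, versus ~10^3 generations with selection
```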
Links or documentation, Mung?
Like these, for example.
weasels are weasely identified, but stoats are stoatally different.
What happens when Mung, the seagull commenter, is up against a Weasel:
Weasel vs. seagull
Patrick,
I’m not sure what you mean by “taking the overall fitness function into account.” Mung’s program has a fitness function — a bunch of them, in fact, since the character examined changes after each latched match — it’s just that they’re the wrong fitness functions. They don’t reward proximity to the target, instead just indicating that a particular mutated character does, or doesn’t, match. There’s no hill to climb in the fitness landscape, just a flat plain with a single spike at the location of the target character.
Mung’s program is just a bunch of independent random searches for single characters in fitness landscapes that have no slopes to climb. What Mung’s program does is what Dawkins refers to in The Blind Watchmaker as “single-step selection”. He contrasts its meager abilities with the power of cumulative selection, which is what Weasel demonstrates and what Mung’s program cannot.
No, because the latching of random search results is not the same as cumulative selection. During the Great Latching Kerfuffle, Kairosfocus’s mistake wasn’t that he objected to latching; that was legitimate. His mistake was to insist against evidence that Weasel latched, when it clearly did not and didn’t need to.
Mung’s program doesn’t have the right kind of fitness landscape, and it won’t work without latching. Weasel has neither of those defects.
Mung at one point suggested that I modify one of my Weasel programs so that the target could be changed mid-execution, as a way of investigating whether it truly demonstrated the power of cumulative selection. As anyone who understands Weasel could easily predict, Weasel would simply start tracking toward the new target in that case. I implemented the change, and that’s what happened.
Amusingly, Mung’s program would fail utterly in that case. Any characters that are latched are latched forever, so the program cannot track toward a new target. It fails his own suggested test.
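A sketch of that kind of test (my own toy version of the experiment described, not keiths’s modified program; the second target and the switch generation are arbitrary choices): a non-latching Weasel whose target is swapped partway through simply starts climbing toward the new target.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
OLD_TARGET = "METHINKS IT IS LIKE A WEASEL"
NEW_TARGET = "METHINKS IT IS LIKE A FERRET"   # arbitrary same-length replacement

def score(phrase, target):
    return sum(1 for a, b in zip(phrase, target) if a == b)

def moving_target_weasel(switch_at=100, n_offspring=100, mutation_rate=0.05):
    target = OLD_TARGET
    parent = "".join(random.choice(ALPHABET) for _ in target)
    for generation in range(1, 10001):
        if generation == switch_at:
            target = NEW_TARGET                       # the target changes mid-execution
        brood = ["".join(c if random.random() > mutation_rate else random.choice(ALPHABET)
                         for c in parent)
                 for _ in range(n_offspring)]
        parent = max(brood, key=lambda p: score(p, target))
        if target == NEW_TARGET and parent == target:
            return generation                         # the Weasel tracked the new target
    return None
```

A latched program cannot pass this test: any character latched to the old target before the switch can never change, so it can never reach the new target.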
Mairzy doats And dozy doats
And liddle lamzy divey
… a kiddly divey too, wouldn’t you?
Well, let’s take your example, but apply it in the way it is claimed evolution works.
What if instead of just selecting for one trait (because evolution doesn’t favor only one survival trait, it selects for all) the breeders selected for many? So besides selecting for the tamest, they also selected those that could jump highest, those with the strongest bite, those with the smallest paws, those that were tallest, those that could swim the best, and those which weighed the least. What do you think you would end up with? Would you still end up with the tamest? Would you have the best swimmers? The tallest ones? How would cumulative selection work then?
If the genes were selected for different reasons every time, wouldn’t you end up with a mess? Wouldn’t selecting for the tallest, whilst sometimes also selecting for the lightest, and sometimes selecting for the strongest bite, give you a combination of genes that was the worst combination of all? It would be like trying to mix the best cake recipe, with the best margarita recipe, with the best drain cleaner recipe.
That’s a problem, because there is no one way to best survive. Maybe being dumb sometimes is good. Maybe being fast is also good, and maybe having a slow metabolism is also good, and maybe being smart is useful too. You can’t continue to accumulate any good combination when the trait you call fittest changes each time. Is a big brain or small brain useful? A big heart or a small one? To be aggressive or passive? From one minute to the next, the trait that is most useful could be different. Building a car with parts from a plane, a rubber duck and an X-ray machine will not help you make a very good car.
So, as petrushka has already noted, we come down to definitions. What definition of “cumulative selection” are you using that distinguishes the two?
Mung: There’s an official weasel algorithm, is there?
Patrick: Yes.
But does it carry the “Approved By keiths” logo?
Nine months of Weasel.
It certainly appears to me that I knew enough about Weasel and cumulative selection to have you almost quote me word for word when you finally decided to come out and make an actual claim about cumulative selection.
You didn’t help the narrative you were trying to weave by agreeing with me, even if you didn’t intend to do so. You should have just stuck with your quote-mining.
Even if you change the target, it’s still a target, keiths.
We’re still waiting for a demonstration of “the power of cumulative selection” where there are no targets. And a way to do that with a fitness function that rewards proximity to a non-existent target.
Those are your own terms for cumulative selection. No one forced you to type those words, did they?
keiths:
What is the target in the travelling salesman scenario, mung?
Well, obviously, it can’t involve random search.
Else what’s the point of keiths claiming my program uses random search? So what? Weasel doesn’t?
Looks like the seagulls are coming home to roost.
Already asked and answered, petrushka.
How does the traveling salesman problem demonstrate the power of cumulative selection? “It just does” is not an answer.
Well, you could graph the mean travel lengths of the population. That would test whether the program is behaving as expected.
Remember the niche, phoodoo.
Mung,
Write a program that generates solutions to the TSP randomly, and write a program that uses cumulative selection to breed them, using their length as the measure of fitness for each solution. Then, after a while, compare the lengths of the best (shortest) solutions for each program.
Then you will have answered your own question. I have done exactly what I propose above as it happens, just out of interest.
I’m sure it’ll take a coder like you an hour or two maximum.
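For what it’s worth, here is a minimal sketch of the kind of comparison described above, under arbitrary assumptions of my own (30 random cities, a segment-reversal mutation, and comparable numbers of tour evaluations for both programs); it is not the actual code mentioned.

```python
import random

random.seed(1)                                    # fixed seed so the comparison is repeatable
CITIES = [(random.random(), random.random()) for _ in range(30)]   # assumed random city layout

def tour_length(tour):
    return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

def random_tour():
    tour = list(range(len(CITIES)))
    random.shuffle(tour)
    return tour

def mutate(tour):
    """Reverse a random segment of the tour (a simple 2-opt-style move)."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def blind_search(samples=5000):
    """Best tour found by generating tours purely at random."""
    return min(tour_length(random_tour()) for _ in range(samples))

def cumulative_selection(generations=50, pop_size=100, survivors=20):
    """Best tour found by breeding from the shortest tours, generation after generation."""
    population = [random_tour() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=tour_length)          # shorter tours are fitter
        parents = population[:survivors]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - survivors)]
    return min(tour_length(t) for t in population)

print("best random tour:", round(blind_search(), 3))
print("best bred tour:  ", round(cumulative_selection(), 3))
```

With comparable numbers of tour evaluations, the bred population should end up with noticeably shorter tours than pure random sampling, and it does this without any predetermined target route.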
phoodoo,
Hence the diversity of the biosphere. Any other questions?
I like the graph. It visualizes the cumulative aspect of the change in routes.
The point is there is no way to know the shortest route, once you get past 30 stops or so. There is no halting condition that says done.