A couple of recent comments here at TSZ by Patrick caught my eye.
First was the claim that arguments for the existence of God had been refuted.
I don’t agree that heaping a bunch of poor and refuted arguments together results in a strong argument.
Second was the claim that IDCists do not understand cumulative selection and Dawkins’ Weasel program.
The first time, I think, I was expecting more confusion on Ewert’s part about the power of cumulative selection (most IDCists still don’t understand Dawkins’ Weasel).
In this OP I’d like to concentrate on the second of these claims.
I’ve never doubted the power of cumulative selection. What I ask is, where does that power come from and how does the Dawkins Weasel program answer that question? I’m capable of coding a version of the Weasel program. But I’m also interested in the mathematics involved. I’d like to see if we can agree on what is involved in the coding and how that affects the mathematics involved. What is the power of cumulative selection?
METHINKS IT IS LIKE A WEASEL
A string 28 characters long, where each position can assume one of 27 different values (the 26 capital letters plus a space). 27^28 would be the size of the sequence space. Call it the search space.
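For concreteness, the size of that space is easy to compute (a quick sketch, assuming the usual 26 capital letters plus a space):

```ruby
ALPHABET_SIZE = 27   # 26 capital letters plus the space character
PHRASE = "METHINKS IT IS LIKE A WEASEL"

# Every position is independent, so the sequence space is 27^28.
search_space = ALPHABET_SIZE ** PHRASE.length
puts PHRASE.length   # 28
puts search_space    # about 1.2 x 10^40, a 41-digit number
```

A space that size is what makes unaided random search hopeless, which is the contrast the Weasel demonstration trades on.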
Many critics of ID claim that evolution is not a search. But “the power of cumulative selection” is dependent upon this view of evolution as a search. After all, the Weasel program is a search algorithm. My view is that the Weasel program incorporates and relies on information that is coded into the program to make the search tractable. The “power of cumulative selection” is an artifact of the design of the program. We lack justification to extend this “power” to biology.
So are there people here who can explain the mathematics and the probabilities and make the necessary connection to biological evolution? Is it a mistake to think that Dawkins’ Weasel program tells us anything about biological evolution?
I think all this is lost on IDists.
We have people posting here who doubt that genomes determine phenotypes. We see people posting at UD who are consumed with the thought that viable genotypes are isolated islands.
Survival of the adequate is a bit too nuanced.
petrushka,
It gets worse. At UD, Vincent Torley is trying politely to talk Michael Egnor out of the idea that when you look at a distant object, your perception of the object takes place, not in your brain, but at the distant object.
You have to admit though, that perception is the result of mind rays encountering the object, so in a sense, he is right.
Unless someone walks in between with a pair of shears, clipping furiously at the air. Then the mind rays are cut, and fall to the ground, and you can no longer see the distant object.
I think Weasel is able to demonstrate the power of cumulative selection because it is a guided search.
Why is that even relevant?
I’ve made no comment about whether or not Weasel uses “latching.”
What does this have to do with my argument in this thread, which says nothing about “latching”?
Joe,
It’s interesting how when you’re seeing something, the mind rays follow the exact path of the light rays, only in reverse, bouncing off mirrors and refracting through lenses. Yet when you’re hearing something, the mind rays are smart enough to follow the reverse path of sound waves instead.
That’s a reply, but it’s not an answer.
Why not limit the character set used when generating the candidate sequences to the same letters used in the target sequence? Would that be cheating?
Please take your off-topic discussions to another thread. Thank you.
If Weasel is not a genetic simulation, how is this relevant?
Mung,
Because your suggested experiments, including changing the target mid-stream and allowing mutations outside of the target phrase’s character set, demonstrate that you don’t understand the algorithm. A similar lack of understanding on KF’s part led to the kerfuffle over latching.
What is it about the algorithm that you think I don’t understand? Do you believe that changing the parameters you defined changes the actual algorithm?
You appear to be the only one willing to even attempt to address the topic.
Here’s what I wrote way back in the first post in the thread following the OP:
How is that any different from what you just said?
Mung,
How it works.
If you understood that, you wouldn’t be suggesting experiments like changing the target mid-stream. Anyone who understands the algorithm knows that if it tracks toward the original target phrase, it will also track toward any new target phrase taken from the program’s character set.
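A minimal sketch of what that claim amounts to (my own reconstruction, not Dawkins’ unpublished original; the population size, mutation rate, and second target phrase are all arbitrary choices): swap the target mid-run and the same hill-climbing dynamic simply redirects toward the new phrase.

```ruby
CHARSET = ('A'..'Z').to_a << ' '

# Count of positions where candidate and target agree.
def fitness(candidate, target)
  candidate.chars.zip(target.chars).count { |a, b| a == b }
end

# Copy a parent, flipping each character to a random one with probability rate.
def mutate(parent, rate)
  parent.chars.map { |c| rand < rate ? CHARSET.sample : c }.join
end

# Breed a population of mutants each generation and keep the fittest.
# The parent is retained, so fitness toward the current target never decreases.
def weasel(target, start, generations: 500, pop: 100, rate: 0.04)
  best = start
  generations.times do
    children = Array.new(pop) { mutate(best, rate) } << best
    best = children.max_by { |c| fitness(c, target) }
    break if best == target
  end
  best
end

first  = "METHINKS IT IS LIKE A WEASEL"
second = "THE QUICK BROWN FOX JUMPS ON"   # arbitrary replacement target, same length
random_start = Array.new(first.length) { CHARSET.sample }.join

midpoint = weasel(first, random_start)   # climbs to the first target
final    = weasel(second, midpoint)      # then re-climbs to the new one
puts midpoint
puts final
```

Because the best string so far is never discarded, a change of target just restarts the climb from wherever the population happens to be.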
Allan Miller,
It’s not a demonstration of the power of cumulative selection if the definition of cumulative selection means selecting things with no target in mind. If cumulative selection means selecting things which match the target you want to match, then it is as trivial as saying: if I want to find a blue car, and I only select parts of a blue car, look how fast I can find a blue car. Isn’t it amazing?
That’s meaningless.
phoodoo,
So you think I meant just change the target once? From where I sit it’s you who lacks understanding.
Let me highlight it for you:
As in a constantly moving target. Not a target that moves once.
Please try to do better, keiths. I know you can.
keiths,
So we can change the target after every mutation, and it will reach a target? I think not.
Edit note: I wrote this before reading Mung’s post, which says essentially the same thing.
Mung,
That’s not a problem, unless you let the target move frequently and with massive volatility.
What would you expect to happen?
Let’s ask an important question, for a change.
What do you think would happen if WEASEL were given a fitness function that included all the dictionary words in a language, and all the sub-strings of all the words? So that there is a large connected universe of targets?
Richardthughes,
What prevented it from producing Snooki from the Jersey Shore?
petrushka,
I think it would be more realistic if it also included markings in no known language at all. Just random symbols. Maybe just some blobs and fragments of jigsaw puzzle pieces.
We could test that hypothesis. We could change all sorts of parameters, including constantly changing the target during a given run, and observe what effect those changes have on “the power of cumulative selection.”
Perhaps the power of cumulative selection is just an artifact of the design of the program. Humans think they see something that’s not really there. Like the shape of a weasel in a cloud.
If Weasel does in fact demonstrate the power of cumulative selection I want to know how Weasel does that.
If Weasel fails to demonstrate the power of cumulative selection given a constantly changing target, then we might be justified in thinking that demonstrating the power of cumulative selection requires a non-moving target.
But maybe we just don’t understand the algorithm. =P
Richardthughes,
Really, it’s not a problem? I can start with having the program’s target be the entire works of Shakespeare, and then halfway through I can change it to the Bible, and then a thousand mutations later I can change it to Sun Tzu’s Art of War. That’s not too volatile, is it?
Or we could increase the mutation rate.
Pretty much the same thing as setting the mutation rate too high.
I’d expect that it would increase the time it takes to reach a condition sufficient to halt the program, making it more difficult if not impossible for the program to demonstrate the power of cumulative selection.
phoodoo,
The definition of cumulative selection makes no mention of targets. So we’re home free.
Selection means that one genotype is rewarded by more offspring (and/or others are penalised by fewer – it’s a yin/yang thang). ‘Cumulative’ means that successive steps can go further – the reward remains there to be reaped if another genotype can do even better.
In this case, genotypes that are closer to the target phrase get more offspring – although it’s so far from typical ‘wet biology’ situations that this is not quite so obvious. It is a particular example of a selective scenario, rather than the defining ‘type case’ for cumulative selection in toto. You don’t have to have a target. Nor are you barred from having one. It’s irrelevant precisely how the genotype-reward is implemented.
Richardthughes,
Cool! Interesting how it ‘pops’ from something pretty abstract once it gets a pair of eyes and a schnozz. We love to see a face, we humans.
Mung,
If the mutation rate is too high, it will swamp selection. If the target changes too rapidly (which is equivalent to the environment changing very rapidly) it will swamp selection. No-one suggests selection will work in conditions that are inimical to selection. But given a certain constancy to the environment, and a certain fidelity in copying genomes – that is where selection can occur. And when it does, it is powerful, as demonstrated by the selection-favourable environment provided by that nice Dr Dawkins.
You appear to have some programming skills. You could have a play with your own version, and see for yourself where selection broke down.
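The swamping point can be sketched directly (my own toy version, with illustrative parameter choices; the specific rates are guesses, not anything from Dawkins): a modest per-character mutation rate climbs to the target, while a very high rate scrambles nearly every offspring and selection has nothing stable to accumulate.

```ruby
CHARSET = ('A'..'Z').to_a << ' '
TARGET  = "METHINKS IT IS LIKE A WEASEL"

def fitness(s)
  s.chars.zip(TARGET.chars).count { |a, b| a == b }
end

# One Weasel run; returns the best fitness reached (28 = perfect match).
def run(rate, generations = 1000, pop = 100)
  best = Array.new(TARGET.length) { CHARSET.sample }.join
  generations.times do
    children = Array.new(pop) { best.chars.map { |c| rand < rate ? CHARSET.sample : c }.join }
    children << best   # keep the parent, so the best-so-far is never lost
    best = children.max_by { |c| fitness(c) }
    break if best == TARGET
  end
  fitness(best)
end

low  = run(0.04)  # modest per-character mutation rate
high = run(0.90)  # nearly every character scrambled each generation
puts low   # reaches 28
puts high  # stalls well short of 28: selection is swamped
```

At the high rate, offspring are close to fresh random strings, so a good parent’s advantage is destroyed faster than selection can reward it.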
Allan, to Mung:
Mung,
Implementing your own version might help you understand the algorithm and its performance under different conditions. (Don’t just crib from my version, though.)
I saw you say what selection means and what cumulative means, are we supposed to be able to add those two together to get the definition of cumulative selection?
Maybe I’ll look up definitions of “cumulative selection” online and see how well they fit the Weasel program. I mean, I was sort of taking Dawkins’ word that it demonstrates the power of cumulative selection, but maybe it doesn’t do any such thing.
It’s been done. Oddly enough, a version of that is included in the same book and same chapter where Dawkins introduced WEASEL.
But you knew that, right?
I know that changing the conditions changes the performance. That’s the whole point of the OP and my question about just how it is that the program demonstrates the power of cumulative selection.
Perhaps you can tell me which conditions in your own program are not relevant to the ability of the program to demonstrate the power of cumulative selection.
Mung,
If you understood the algorithm, it would help you see which conditions are relevant and which aren’t.
What does halting have to do with anything? WEASEL halts because it’s a toy demo, but there is nothing in the algorithm that requires it to halt, nor is there any requirement for it to have a specific or fixed target.
The selection could simply be a weighted preference for certain letters, and the weights could change at random intervals, and it would still accumulate changes.
Weasel is a limiting case of a genetic simulation in which the organism is haploid and asexual, the population size is 1, and selection is very strong. Also, the table of fitnesses happens to reflect the differences between the genotype and “Methinks It Is Like a Weasel”.
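That limiting case can be made explicit as a simple hill climb (my own sketch; “strong selection” here means a worse mutant never replaces its parent):

```ruby
CHARSET = ('A'..'Z').to_a << ' '
TARGET  = "METHINKS IT IS LIKE A WEASEL"

def fitness(s)
  s.chars.zip(TARGET.chars).count { |a, b| a == b }
end

# Population size 1, haploid and asexual: one "organism", one offspring
# per generation, and selection so strong that a worse mutant never survives.
current = Array.new(TARGET.length) { CHARSET.sample }.join
steps = 0
until current == TARGET
  child = current.dup
  child[rand(TARGET.length)] = CHARSET.sample   # mutate a single random position
  current = child if fitness(child) >= fitness(current)
  steps += 1
end
puts steps   # typically a few thousand steps, versus ~10^40 for blind search
```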
Keep in mind that Weasel is a teaching example to demonstrate that when a creationist debater says that evolutionary biologists are saying that adaptations come about simply “at random”, their audience is being misled.
It is good to see that you aren’t fooled when you hear that. And if you regard the point about the blue car as obvious, then you won’t be fooled. And Weasel will look obvious and unnecessary.
But numbers of creationist debaters really do say that. And their audiences applaud, and no one calls them on that.
Mung:
petrushka:
Mung,
Set FITNESS_THRESHOLD to 29 in my program. It won’t halt, but it will nevertheless converge on “METHINKS IT IS LIKE A WEASEL”.
Halting sounds to me like a throwback to the bad old days at UD, when evolution was asserted to be a search for a specified solution.
I think Dembski wasted most of his adult life arguing against a phantom, a straw man. The specification.
So I cribbed from this code:
http://rosettacode.org/wiki/Evolutionary_algorithm#Ruby
I converted the program to a version that could be run repeatedly and the number of iterations it took to complete a run could be totaled and averaged.
For 100 runs of the program the average number of iterations was 288.
Then I converted the program to use only the characters that actually appear in the target.
For 100 runs of the program the average number of iterations was 116.
The power of Mung! I’m obviously smarter than Dawkins was in 1984.
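Mung’s comparison can be reconstructed roughly as follows (my own sketch, not the Rosetta Code program, and the parameters are arbitrary, so the absolute numbers won’t match 288 and 116; the direction of the effect should hold):

```ruby
TARGET = "METHINKS IT IS LIKE A WEASEL"

def fitness(s)
  s.chars.zip(TARGET.chars).count { |a, b| a == b }
end

# Average generations to reach TARGET over several runs, drawing both the
# starting string and all mutations from the given character set.
def average_iterations(charset, runs = 20, pop = 100, rate = 0.05)
  total = 0
  runs.times do
    best = Array.new(TARGET.length) { charset.sample }.join
    iters = 0
    until best == TARGET
      children = Array.new(pop) { best.chars.map { |c| rand < rate ? charset.sample : c }.join }
      children << best   # elitism: the best-so-far is never discarded
      best = children.max_by { |c| fitness(c) }
      iters += 1
    end
    total += iters
  end
  total / runs.to_f
end

full       = ('A'..'Z').to_a << ' '   # the full 27-character alphabet
restricted = TARGET.chars.uniq        # only the 12 characters in the target

full_avg       = average_iterations(full)
restricted_avg = average_iterations(restricted)
puts full_avg
puts restricted_avg   # consistently lower than full_avg
```

Shrinking the alphabet raises the odds that any given mutation lands on a correct character, so convergence is faster; whether that counts as “cheating” is exactly the question at issue.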
As if the goal were to minimize the number of iterations.
The point of Weasel is to show that whatever the number of iterations, it is a zillion times smaller than 27^28, which is roughly how long it takes without selection.
Joe Felsenstein,
Or “tornado in a junkyard”, as some call it.
It certainly isn’t to increase the number of iterations. Increase them too much and the power of cumulative selection fails to make an appearance. It’s no different from swamping the selection effect.
If the program failed to halt in the lifetime of the observer, how exactly would it demonstrate such a thing? Weasel was designed to be “just right.” 🙂
That’s what gives it the appearance of being a demonstration of the power of cumulative selection.
Mung, you have some coding chops, I believe? You should play with this. I think you’ll find that increasing mutation rates is a double-edged sword.
Down to an average of 16 iterations per run. The power of cumulative Mung!
Beat that keiths.
I should probably change it to a coin tossing program so Salvador can understand it.
Don’t see why not. It wouldn’t get very far if it only had 1000 iterations between changes, but it would still follow the change.
You can see this happening in real populations – Jonathan Weiner gives some nice examples in The Beak of the Finch.
I’ve done it myself – set the environmental challenges randomly, then randomly changed them again. It’s a neat thing to do, actually, because you find that if you don’t have a high enough neutral mutation rate, the population gets “stuck” and can’t re-adapt to new conditions. But as long as you have lots of neutral mutations, the population readily adapts. It’s a nice illustration of how mutation rates themselves might have optimised, especially mechanisms that favour neutral mutations.
I’ve done a version of WEASEL that has as its target all the words of Shakespeare.
The selector just adds up the matches, position by position, to any word.
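A sketch of that selector (the word list here is a stand-in; petrushka’s version used all the words of Shakespeare): fitness is the best position-by-position score against any dictionary word, so improvement toward one word is never penalized for failing to match another.

```ruby
# Hypothetical miniature dictionary standing in for the full word list.
WORDS = %w[WEASEL EASEL SEA WE AS]

# Fitness is the best positional-match count against ANY word,
# giving a large connected universe of targets rather than one phrase.
def fitness(candidate)
  WORDS.map do |word|
    candidate.chars.zip(word.chars).count { |a, b| a == b }
  end.max
end

puts fitness("WEASEL")  # 6: a perfect match to one word
puts fitness("WEASEX")  # 5: partial matches are still rewarded
puts fitness("XEASEL")  # 5
```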
That’s right. Those were decisions made by the designer of the program, who wanted to design a program to demonstrate the power of cumulative selection.
I’ve already explained this. If the program failed to halt, then no one would know that it’s performing any better than random chance drawings. It would fail in its mission to demonstrate the power of cumulative selection.
That’s simply not true. I gave you a link to a program that accumulates selected changes without halting.
a) it is not necessary and b) we don’t have one.