Journal club time: paper by Sanford et al: The Waiting Time Problem in a Model Hominin Population. I’ve pasted the abstract below.
Have at it guys 🙂
Functional information is normally communicated using specific, context-dependent strings of symbolic characters. This is true within the human realm (texts and computer programs), and also within the biological realm (nucleic acids and proteins). In biology, strings of nucleotides encode much of the information within living cells. How do such information-bearing nucleotide strings arise and become established?
This paper uses comprehensive numerical simulation to understand what types of nucleotide strings can realistically be established via the mutation/selection process, given a reasonable timeframe. The program Mendel’s Accountant realistically simulates the mutation/selection process, and was modified so that a starting string of nucleotides could be specified, and a corresponding target string of nucleotides could be specified. We simulated a classic pre-human hominin population of at least 10,000 individuals, with a generation time of 20 years, and with very strong selection (50 % selective elimination). Random point mutations were generated within the starting string. Whenever an instance of the target string arose, all individuals carrying the target string were assigned a specified reproductive advantage. When natural selection had successfully amplified an instance of the target string to the point of fixation, the experiment was halted, and the waiting time statistics were tabulated. Using this methodology we tested the effect of mutation rate, string length, fitness benefit, and population size on waiting time to fixation.
Biologically realistic numerical simulations revealed that a population of this type required inordinately long waiting times to establish even the shortest nucleotide strings. To establish a string of two nucleotides required on average 84 million years. To establish a string of five nucleotides required on average 2 billion years. We found that waiting times were reduced by higher mutation rates, stronger fitness benefits, and larger population sizes. However, even using the most generous feasible parameter settings, the waiting time required to establish any specific nucleotide string within this type of population was consistently prohibitive.
We show that the waiting time problem is a significant constraint on the macroevolution of the classic hominin population. Routine establishment of specific beneficial strings of two or more nucleotides becomes very problematic.
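The experimental loop described in the abstract can be sketched in a few lines. This is a toy illustration only, with made-up parameter values; the actual Mendel's Accountant program tracks far more (diploid genetics, 50 % selective elimination, and so on):

```python
import random

# Toy sketch of the waiting-time experiment described in the abstract.
# All parameter values here are illustrative, not the paper's.
def waiting_time_to_fixation(pop_size=100, genome_len=8, target="AC",
                             mu=0.001, benefit=0.1, max_gens=100_000):
    """Generations until every individual carries `target` as a substring."""
    rng = random.Random(0)
    bases = "ACGT"
    pop = ["".join(rng.choice(bases) for _ in range(genome_len))
           for _ in range(pop_size)]
    for gen in range(1, max_gens + 1):
        # Carriers of the target string get a reproductive advantage.
        weights = [1.0 + benefit if target in g else 1.0 for g in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        # Random point mutations within the string.
        pop = ["".join(rng.choice(bases) if rng.random() < mu else c
                       for c in g) for g in parents]
        if all(target in g for g in pop):
            return gen  # target string has fixed
    return None  # no fixation within max_gens

print(waiting_time_to_fixation(target="AC"))  # a 2-nt target fixes quickly here
```

Note that in this toy the short target is usually already present in the standing variation of the initial population, which is one reason its waiting time bears no resemblance to the paper's figures.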
Sets a specific target beforehand, then tries to work towards it.
That’s not how evolution works. Dismissed!
Meantime, the population is evolving like buggery! (Regional UK expression, meaning ‘a great deal’.) It’s a good job evolution has more than 1 dimension to work in.
Another task for Mendel’s Accountant:
I’m not against Open Access publishing – and the costs have to come from the researchers if they are not from the readers.
The issue is the quality of the peer-review. I don’t see anything actually wrong with the paper, except for the repeated claims that it is “biologically realistic” (which seems solely to mean the population size and the generation time, the only sense in which it has any relationship to the “hominins” referred to in the title – that’s there just to be sciencey, it seems). It’s a perfectly good demonstration of why evolution can’t work in the way modelled.
But then we know that.
It’s not there just to be sciencey, for that they could have picked any of a million species. It’s actually primarily there to prove humans couldn’t have evolved from no damn ape.
But no rationale is given in the paper for that choice. Nor for “hominin” rather than “human”. A bit slippery. A good reviewer should have asked them to change that, as well as omit “biologically realistic”.
Actually re-reading I can see quite a few things that are a bit iffy.
This is weird:
What sort of fitness function does MA use? (will look up….)
It looks to me as though MA doesn’t actually assess phenotypic fitness at all, but merely allocates a selection coefficient to each mutation, regardless of context.
Well, that’s not “biologically realistic”. It’s not even an evolutionary simulation.
What is extraordinary is that the same people who call Mendel’s Accountant “biologically realistic” say that AVIDA isn’t. But AVIDA is actually a proper simulation – mutations can be neutral, or even deleterious at the time of first appearance, and beneficial later on – and the programmers do not know which will be which before the run starts. FUNCTIONS are rewarded, not specific sequences, as in life.
This thing is not a simulation at all.
Unfortunately, that’s how evolution would have to work in order to make systems with coordinated interlocking complexity (IC).
There are an infinite number of ways to make locks and keys, login and password systems, and Rube Goldberg machines in the world of human affairs. The fact that there are an infinite number of ways to do this doesn’t make it likely that such interlocking complexity (IC) will emerge through undirected variation; it requires foresight and aiming at premeditated targets of the designer’s choosing.
If there is foresight, it is rather easy to make the password characters match the login/password recognition system ahead of time when an account is set up. Monkeys on a keyboard won’t even know how to register an account, let alone create such a system requiring foresight.
Systems requiring coordinated complexity before conferring any advantage in differential reproductive success should have some associated waiting time and combinatorial improbability, since real selection in the wild can’t select for non-existent features. At best selection just freezes accidents, and all one can then argue is that biological systems are frozen accidents.
The problem of interlocking dependency is that it requires foresight to construct. Differential reproductive success doesn’t solve the problems posed by structures that require foresight to overcome the combinatorial barriers (such as with lock and key systems or password login systems or protein interactome cascades unique to each species).
The problem with advocates of Darwinism is that they conflate the “direction toward reproductive success” with the “direction toward a configuration away from statistical expectation”. Rube Goldberg complex relationships (such as those found in life) are away from statistical expectation. Advocates of Darwinism like Dawkins require the equivocation and conflation of these ideas in order to maintain belief in Darwin’s theory. Darwin’s theory does not actually work for systems that require designers to specify an arbitrary target in advance (requirements specification), and then actually hit that target (implementation).
Biological systems, because of their interlocking complexity, are structured like such targets, like Rube Goldberg machines.
No, it isn’t. See AVIDA, which does precisely that, and works the way evolution is postulated to work.
No, it’s not.
Here, the author sets it up as a syntactic problem. He is, in effect, using Shannon information, though he might deny that.
And that is where the author makes clear that he is looking at the cost of a specific syntactic string arising.
My contention is that the syntax is vastly underdetermined by the semantics. So this method will give a cost that is far too high.
I’ll try to fill in some details in a followup post.
Let me relate it to the earlier discussion on realism. My contention of underdetermination is why I am being called anti-realist. The realism from philosophy seems to be a claim that the syntax of scientific theories is real, whereas I am arguing that it is vastly underdetermined by reality.
yeah, stcordova’s first sentence is simply factually wrong.
We analyzed Mendel’s Accountant in 2009.
The calculation of “working fitness” was broken. From Mendel’s Accountant (Fortran):
We can test this by taking a series of fitnesses from 1.001 to 2 (Basic),
This is a typical result.
Note the min and max.
ETA: both random functions return values between 0 and 1. Dividing by a number between 0 and 1 is the same as multiplying by a number between 1 and infinity.
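The effect of that division is easy to demonstrate. The snippet below is not the actual Fortran from Mendel’s Accountant, just a generic illustration of dividing a fitness by a uniform random variate on (0, 1):

```python
import random

rng = random.Random(42)

# Each "working fitness" here is a nominal fitness of 1.0 divided by a
# uniform random number in (0, 1) -- i.e. multiplied by something between
# 1 and infinity. The result is unbounded above.
samples = [1.0 / rng.random() for _ in range(100_000)]
print(min(samples))  # always at least 1.0
print(max(samples))  # occasionally enormous
```

This is why the min and max in the table look the way they do: the quotient is bounded below by the nominal fitness but has no upper bound at all.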
(Hat tip Dr.GH)
You are confused.
No, it’s not.
No, that’s not the way evolution is postulated to work, and when you understand why, you will understand why the entire ID project is rubbish.
You are far too intelligent to keep putting your hand on this hot stove. Try learning something for a change.
Sal, there is no direction to evolution, there is no direction toward or away from anything.
It beats me how critics of Darwinian evolution over-complicate the theory. It’s dead simple: let things reproduce with minor variation, where that minor variation has implications for their chances of breeding successfully in the current environment.
And to simulate this, you need a population of reproducing virtual critters with genomes that affect how the virtual critter avoids the hazards and exploits the resources of its environment. You can let them reproduce sexually or asexually.
You don’t need to specify the selection coefficient of any mutations in advance, and you don’t need to specify the genomes that you want to evolve. What you DO specify is the environment that the critters must negotiate in order to breed.
And when you do this, populations adapt extremely rapidly and reliably, and “find” solutions to the problem of surviving and breeding in that environment that the designer didn’t know in advance.
If she’s done it right, they may do so in a way that helps her solve HER problem. Unfortunately sometimes they don’t! Like a population I got for distinguishing between patients and healthy volunteers that managed to utilise the subject IDs to do the categorisation! Crafty buggers.
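A minimal sketch of that kind of setup (a toy environment, not any particular published simulator; the “foraging” rule and all numbers are invented for illustration):

```python
import random

rng = random.Random(1)

# The programmer specifies only the environment (the scoring rule below),
# never a target genome. "Reward alternating bits" stands in for whatever
# challenge the environment poses; many different genomes can meet it.
def forage_score(genome):
    return sum(1 for a, b in zip(genome, genome[1:]) if a != b)

pop = [[rng.randint(0, 1) for _ in range(20)] for _ in range(50)]
for gen in range(200):
    survivors = sorted(pop, key=forage_score, reverse=True)[:25]
    # Each survivor leaves two imperfect copies (2 % per-bit mutation).
    pop = [[b ^ (rng.random() < 0.02) for b in parent]
           for parent in survivors for _ in range(2)]

best = max(pop, key=forage_score)
print(forage_score(best))  # climbs toward 19, the maximum for 20 bits
```

The population reliably “finds” high-scoring genomes without any particular genome being specified in advance, and different runs find different ones.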
When a population is faced with the necessity of evolving a new function, the nearly certain outcome is death and extinction of the population.
Sal, when you understand this, you will be on the road to recovery.
sez our boy cordova:
Bullshit. It’s not clear to me what that word “coordinated” is doing in there, unless Cordova is tryna smuggle an Intelligent Coordinator into his argument as an unspoken, implicit presupposition… but ordinary Interlocking Complexity without the “coordinated” bit, such as Behe went on about, is a thing that absolutely can be generated without any need to predefine a specific target. See Genetic Variability, Twin Hybrids and Constant Hybrids, in a Case of Balanced Lethal Factors (Hermann J. Muller, Genetics, Vol 3, No 5, Sept 1918, pp 422-499) for further details.
You are failing to take into consideration the power of painting the target after the arrow has landed.
“coordinated interlocking complexity (IC).”
There are few things more pathetic than fancy ID-proponent techno-jargon made up to sound profound and complicated.
Notice this fancy term here, which really just means “multiple parts that stick together”. But obviously just saying that isn’t design-engineering-ish-enough, so better throw in a few technical-sounding terms and an abbreviation to impress.
Well, Behe has two definitions of IC, and by both definitions, AVIDA organisms with IC evolve without the target sequence being set beforehand.
Indeed, on each run, different sequences produce the IC feature by a different IC path.
And for Behe, IC stands for “Irreducible Complexity” not “Interlocking Complexity”.
Looking at strings of nucleotides as syntax, we can consider the corresponding semantics in terms of the way that the world is divided into niches. That dividing comes from the morphology and behavior of the biological organisms.
To say that the syntax is underdetermined by the semantics is to say that, at least in principle, an organism with a different morphology and different behaviors could claim the same niche. The evidence of convergence seems to suggest this.
Not just convergence – invasion.
From the abstract as quoted by Lizzie in the OP:
Biologically realistic [???] numerical simulations revealed that a population of this type required inordinately long waiting times to establish even the shortest nucleotide strings. To establish a string of two nucleotides required on average 84 million years.
I’d say their model is wrong.
*reads rest of thread – sees point already made several times – Ah well!*
So essentially they’ve falsified a poor model that nobody actually proposes.
It seems to me that what they have done is assume that “the classic hominin population” is actually something that needs to be explained.
If on the other hand you assume that whatever happens is equally impressive, then their investigation is beside the point.
What about it do you think they needed to explain? And what is a “classic hominin population” in your view?
Not sure what you mean by this.
I think hominins are cool in general, especially the ones I know. I would not expect to see them appear at random, so I would think their entire existence needs explaining. That is just me; I’d expect folks here think otherwise.
I haven’t given it much thought I just cut and pasted it from the OP.
It really does not matter, though: you could replace “classic hominin population” with anything at all that you feel needs explaining and the point would stand.
Just that if you assume that any outcome is equally interesting then evolution works just fine as an explanation.
It’s only when you want to use evolution to explain something specific and interesting that difficulties like the one in the paper show up.
Like Rumraket says ” That’s not how evolution works.”
The unresolvable dispute is about whether humans were intended, or indeed, whether anything was intended.
Yes, it does seem that they have made this assumption, and they were mistaken to do so. And yes again, “whatever happens” could be equally impressive. Modern humans didn’t “have” to happen. Something else could have happened just as easily. And, barring extinction, that something else would have been unique and exquisite.
And just as unlikely as we are.
Well there you go. That is the rub between us. I think humans are more impressive than junk DNA for example and you don’t.
There is simply no way of bridging that gap. We inhabit different worlds.
Great. So you’re only 5-6 years out of date. Where is the latest source code?
The current paper references a 2007 description of how the program works. There’s nothing in the paper indicating that its principles have changed. For this paper it was given a target, much like Dawkins’ WEASEL.
The evolutionary alternative to humans is not “junk DNA”.
Quick synopsis of comments:
“This paper does not come to the conclusion we want. It must be wrong!”
Basically a microcosm of all of TSZ.
Suppose you have an evolutionary algorithm that generates code.
Suppose that the fitness test is whether the code is executable.
Suppose that you then look at the code, what will you see?
So you think evolution has pre-specified targets? Notice how actual criticisms have been leveled at the premises in the paper, not the conclusion. It is also not because we “don’t like” the conclusion, because given the premises the conclusion is correct and unavoidable. Rather, again, the problem is that the simulation is trying to produce a pre-specified target, instead of (as is what actually happens in nature) whatever works.
Given this, your synopsis of this discussion is completely contrary to demonstrable fact and just looks like mindless trolling borne out of butthurt. Actually on second thought it looks like projection.
So Bill Dembski would have something to say about “smuggling in information” in Sanford’s simulation, and presumably disagree that this is how evolution works.
So all you can conclude is that, as with WEASEL, the basic principle of selection works – the target sequence appears. You can’t conclude that there is a “waiting problem”.
It’s not, despite what they say, “biologically realistic”. AVIDA is much more “biologically realistic”, as it does not pre-specify a sequence, and it does not allocate “rewards” to sequences but to functions.
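For comparison, the target-rewarding scheme looks like this in miniature. This is a standard WEASEL-style toy, not Sanford’s actual code; the mutation rate and brood size are arbitrary:

```python
import random

rng = random.Random(0)
TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # Reward goes to resemblance to a pre-specified *sequence* --
    # exactly the property being criticised in the comments above.
    return sum(a == b for a, b in zip(s, TARGET))

parent = "".join(rng.choice(CHARS) for _ in TARGET)
gen = 0
while parent != TARGET and gen < 10_000:
    gen += 1
    brood = ["".join(rng.choice(CHARS) if rng.random() < 0.05 else c
                     for c in parent) for _ in range(100)]
    parent = max(brood, key=score)
print(gen)  # the target always appears; only the waiting time varies
```

Like WEASEL, the scheme demonstrates that cumulative selection beats blind search for a fixed string, and nothing more: the “waiting time” it measures is a property of the target-matching setup, not of evolution.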
What do you think the conclusion of the paper is?
The stated conclusion is that:
However, they do not define “macroevolution” and it is not at all clear how their conclusion relates to any standard definition of “macroevolution”.
What “conclusions” are acceptable at TSZ then? Do you have a list?
One problem with the stated conclusion of the paper is of course that it goes way beyond its remit. Essentially, the conclusion is that IF evolution worked as modeled by MA, and IF there was once a hominin population of 10,000, and IF “macroevolution” (left undefined) of that population required a specific sequence of nucleotides to be generated from an existing sequence that shared nothing in common with it, THEN it would take an unfeasibly long time to occur.
But, given that evolution DOESN’T work as modeled by MA, and given that “macroevolution” is undefined, although the unwritten subtext is that it is what would be required to convert mere “hominins” into “homo sapiens”, and given that whatever is required to make that undefined leap from “hominin” to “human” status is almost certainly not the transformation of a DNA sequence to one totally unlike it, THEN the conclusion is essentially meaningless.
So, phoodoo, it’s not that I don’t LIKE the conclusion – it’s just that it doesn’t say anything about how we did, or did not, evolve into or otherwise become human beings.
The subtext does seem to be that whatever was required to turn hominins into humans can’t have happened by Darwinian means. But the paper does not show this.
No, evolution (according to you all) has no pre-specified targets. So each time you quote a computer simulation that has pre-specified targets, it’s bullshit. But you only call the ones bullshit that don’t accomplish the theory you want them to, and praise the ones that supposedly do support your theory. That’s how stupid the whole science of your simulations is.
But you of course want it both ways.
Make a computer simulation where all it does is break copies aimlessly. See if anything worthwhile ever occurs.
Do you think that would be a good model of evolution? If so, you need to educate yourself more on the topic.
For even a simple model you need somewhat high fidelity but imperfect duplication combined with heritable variation in reproductive success.
Who do you think would expect that anything worthwhile would occur? Certainly not anybody you would dismissively call a “Darwinist”. Your scenario is not even RM, let alone RM+NS.
Why don’t you try and come up with a computer model. First objective is not to “break copies aimlessly”, but to “aimlessly make imperfect copies of a subset of existing (imperfect) copies”. That is the RM bit. Once you implement selection to establish how that subset is chosen, evolution is inevitable. If that selection is random, clearly the copies will get worse with every generation. If the selection is “intelligent” then it is clearly possible to generate a targeted improvement with every generation, as animal and plant breeders have done for millennia.
Once we are agreed on that much, finding a single “unintelligent” selection algorithm that “improves” future copies will demonstrate that Evolution does not require any “intelligence”. Though for those who still need their comfort blanket, it can never show that some otherwise scientifically undetectable Divine Intelligent Selection is additionally operating in real-world Evolution, particularly to hone the body of a semi-erect quadruped to the shape of a god.
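That two-step recipe (imperfect copying of a selected subset) fits in a dozen lines. Everything below is invented for illustration, with a deliberately simple “count the 1-bits” fitness; the only thing that differs between the two runs is how the breeding subset is chosen:

```python
import random

rng = random.Random(7)

# Aimlessly make imperfect copies of a *subset* of existing copies.
# `select` decides which half of the population gets copied.
def evolve(select, genome_len=30, pop_size=40, gens=150, mu=0.02):
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(gens):
        subset = select(pop)
        pop = [[b ^ (rng.random() < mu) for b in rng.choice(subset)]
               for _ in range(pop_size)]
    return max(sum(g) for g in pop)  # best fitness in the final population

random_pick = lambda pop: rng.sample(pop, len(pop) // 2)        # random subset
natural_sel = lambda pop: sorted(pop, key=sum)[len(pop) // 2:]  # fittest half

drift = evolve(random_pick)
selected = evolve(natural_sel)
print(drift, selected)  # random picking merely drifts; selection climbs
```

The “unintelligent” selection rule here knows nothing about any target genome; it only compares copies that already exist, yet the second run reliably approaches the maximum fitness while the first hovers near the starting average.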