At Uncommon Descent, poster gpuccio has expressed interest in what I think of his example of a safecracker trying to open one safe with a 150-digit combination, or 150 safes, each with its own 1-digit combination. It's actually a cute teaching example, which helps explain why natural selection cannot find a region of "function" in a sequence space in such a case. The implication is that there is some point of contention that I failed to address in my post, the one which led to the nearly 2,000-comment-long thread on his argument here at TSZ. He asks:
By the way, has Joe Felsestein answered my argument about the thief? Has he shown how complex functional information can increase gradually in a genome?
Gpuccio has repeated his call for me to comment on his 'thief' scenario a number of times, including here, and UD reader "jawa" has taken up the torch (here and here), asking whether I have yet answered the thief argument, at first dramatically asking
Does anybody else wonder why these professors ran away when the discussions got deep into real evidence territory?
and then supplying the answer definitively (here):
we all know why those distinguished professors ran away from the heat of a serious discussion with gpuccio, it’s obvious: lack of solid arguments.
I'll re-explain gpuccio's example below the fold, and then point out that I never contested the safe example itself. What I certainly do contest is gpuccio's method of showing that "Complex Functional Information" cannot be achieved by natural selection. gpuccio manages that by defining "complex functional information" differently from Szostak and Hazen's definition of functional information, in a way that makes his rule true by construction. But gpuccio never manages to show that, when Functional Information is defined as Hazen and Szostak defined it, 500 bits of it cannot be accumulated by natural selection.
The Thief Scenario
Here is gpuccio’s scenario, as most succinctly stated in comment #65 of his ‘Texas Sharpshooter post at UD (gpuccio gives there links to some earlier appearances of the scenario):
In essence, we compare two systems. One is made of one single object (a big safe), the other of 150 smaller safes.
The sum in the big safe is the same as the sums in the 150 smaller safes put together. That ensures that both systems, if solved, increase the fitness of the thief in the same measure.
Let’s say that our functional objects, in each system, are:
a) a single piece of card with the 150 figures of the key to the big safe;
b) 150 pieces of card, each containing the one-figure key to one of the small safes (correctly labeled, so that the thief can use them directly).
Now, if the thief owns the functional objects, he can easily get the sum, both in the big safe and in the small safes.
But our model is that the keys are not known to the thief, so we want to compute the probability of getting to them in the two different scenarios by a random search.
So, in the first scenario, the thief tries the 10^150 possible solutions, until he finds the right one.
In the second scenario, he tries the ten possible solutions for the first safe, opens it, then passes to the second, and so on.
and gpuccio’s challenge is:
Do you think that the two scenarios are equivalent?
What should the thief do, according to your views?
My answers, of course, are: to the first question, "No"; to the second question, "Cheat, or hope that in this particular case you have an instance of the second scenario".
My reaction: This is a nice teaching example showing why, in the first scenario, there is no hope of guessing the correct key, even once in the history of the universe.
In the first scenario there is no path to getting the safe open by successively opening parts of it. In the second scenario the thief needs an average of about 5 guesses to open each safe, and after doing this 150 times gets all the contents of the safes.
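The arithmetic behind the two scenarios can be sketched in a few lines (variable names are mine; a guess-distinct-combinations-at-random model is assumed, where the expected number of tries among N possibilities is (N + 1) / 2):

```python
DIGITS = 10      # possible values per digit (0-9)
POSITIONS = 150  # digits in the big safe's combination

# Scenario 1: one safe, one 150-digit key.  Trying distinct random
# combinations, the expected number of tries is (N + 1) / 2.
n_big = DIGITS ** POSITIONS
expected_big = (n_big + 1) // 2          # about 5 * 10^149 tries

# Scenario 2: 150 one-digit safes, cracked one at a time.
# Each safe takes (10 + 1) / 2 = 5.5 tries on average.
expected_small = POSITIONS * (DIGITS + 1) / 2

print(f"big safe:    about 10^{len(str(n_big)) - 1} combinations to search")
print(f"small safes: {expected_small} expected tries in total")
```

A few hundred tries in the second scenario, versus a number of tries in the first that dwarfs the number of atoms in the observable universe.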
Why did gpuccio think that I objected to gpuccio's logic for the case which has nonzero function only in a set of sequences which is a tiny fraction of all sequences? I was very clear in acknowledging that in such a case, natural selection cannot find the sequences that have nonzero function, if we start with a random sequence.
Repeating his argument does not help his case, because my point is not that this part of his argument is wrong. This was made very clear in my previous post on gpuccio's argument. But let me repeat it, for those who did not happen to read that post.
What gpuccio got wrong
Gpuccio had stated (here) that
… the idea is that if we observe any object that exhibits complex functional information (for example, more than 500 bits of functional information) for an explicitly defined function (whatever it is) we can safely infer design.
Functional information was defined by Jack Szostak (2003) and by Hazen, Griffin, Carothers, and Szostak (2007). It assumes that we have a set of molecular sequences (for example, the coding sequences for a protein, or the corresponding amino acid sequences), and that with each of them is associated a number, the function: for example, the sequence's ability to synthesize ATP if it is an ATP synthase.
In this original definition of FI, there is no assumption that almost all of the sequences have function zero. Given that, there is no way to rule out the possibility that there are paths from many parts of the sequence space that lead to higher and higher function. And given that, we cannot eliminate the possibility that natural selection and mutation could follow such a path to arrive at the function that we observe in nature.
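A minimal sketch of the Szostak/Hazen measure may help here: the functional information for a threshold of function is the negative base-2 logarithm of the fraction of sequences meeting that threshold. The toy sequence space and activity numbers below are invented purely for illustration:

```python
from math import log2

def functional_information(functions, threshold):
    """Szostak/Hazen functional information: -log2 of the fraction
    of sequences whose function meets or exceeds the threshold."""
    meeting = sum(1 for f in functions if f >= threshold)
    if meeting == 0:
        raise ValueError("no sequence meets the threshold")
    return -log2(meeting / len(functions))

# Toy sequence space of 16 sequences with made-up activity levels.
toy_functions = [0.0] * 12 + [0.2, 0.5, 0.8, 1.0]

# 4 of 16 sequences have function >= 0.2  ->  -log2(4/16) = 2 bits
print(functional_information(toy_functions, 0.2))
# Only 1 of 16 reaches 1.0                ->  -log2(1/16) = 4 bits
print(functional_information(toy_functions, 1.0))
```

Note that nothing in this definition says the sequences below the threshold must have *zero* function; they may have intermediate levels, which is exactly what leaves room for rising paths.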
How gpuccio eliminates natural selection
Simply by changing the definition of Functional Information into gpuccio's "Complex Functional Information", and applying that label only when the amount of function is zero for all sequences outside of the target set. That makes gpuccio's definition of Complex Functional Information very different from Szostak's and Hazen's. Gpuccio's restricted definition rules out all cases where there might be a path through sequence space leading to the target sequences, a path along which function rises continually.
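The difference those paths make can be seen in a small, scaled-down simulation (entirely mine, not gpuccio's model; 15 digits rather than 150, and function names invented). A mutation-plus-selection hill climber succeeds when function rises with each matching digit, and fails when function is zero everywhere except at the target:

```python
import random

random.seed(1)
DIGITS, POSITIONS = 10, 15   # scaled down from 150 digits for a fast demo
target = [random.randrange(DIGITS) for _ in range(POSITIONS)]

def smooth_fitness(seq):
    """Function rises with each matching digit: a path selection can climb."""
    return sum(s == t for s, t in zip(seq, target))

def all_or_nothing(seq):
    """gpuccio-style target: function is zero everywhere except the key itself."""
    return 1 if seq == target else 0

def hill_climb(fitness, steps=20_000):
    """Mutate one digit at a time, keeping neutral or beneficial mutants."""
    seq = [random.randrange(DIGITS) for _ in range(POSITIONS)]
    for _ in range(steps):
        mutant = list(seq)
        mutant[random.randrange(POSITIONS)] = random.randrange(DIGITS)
        if fitness(mutant) >= fitness(seq):
            seq = mutant
    return seq

reached_smooth = hill_climb(smooth_fitness) == target   # True: a rising path exists
reached_blind = hill_climb(all_or_nothing) == target    # virtually always False
print(reached_smooth, reached_blind)
```

With the smooth surface the climber reaches the target in a few hundred accepted mutations; with the all-or-nothing surface it just wanders at random in a space of 10^15 sequences.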
This was explained clearly many times in my post and in the 2,000-comment thread that it generated. Apparently all that gpuccio and jawa can do is repeat their argument, one which does not show that 500 bits of Functional Information, defined as Szostak and Hazen define it, cannot be achieved by normal evolutionary processes such as natural selection.
ETA: corrected “150 bits” to “500 bits” (which is roughly 150 digits).