At Uncommon Descent, poster gpuccio has expressed interest in what I think of his example of a safecracker trying to open a safe with a 150-digit combination, or open 150 safes, each with its own 1-digit combination. It’s actually a cute teaching example, which helps explain why natural selection cannot find a region of “function” in a sequence space in such a case. The implication is that there is some point of contention that I failed to address, in my post which led to the nearly 2,000-comment-long thread on his argument here at TSZ. He asks:
By the way, has Joe Felsenstein answered my argument about the thief? Has he shown how complex functional information can increase gradually in a genome?
Gpuccio has repeated his call for me to comment on his ‘thief’ scenario a number of times, including here, and UD reader “jawa” has taken up the torch (here and here), asking whether I have yet answered the thief argument, at first dramatically asking
Does anybody else wonder why these professors ran away when the discussions got deep into real evidence territory?
Any thoughts?
and then supplying the “thoughts” definitively (here)
we all know why those distinguished professors ran away from the heat of a serious discussion with gpuccio, it’s obvious: lack of solid arguments.
I’ll re-explain gpuccio’s example below the fold, and then point out that I never contested gpuccio’s safe example, but I certainly do contest gpuccio’s method of showing that “Complex Functional Information” cannot be achieved by natural selection. gpuccio manages to do that by defining “complex functional information” differently from Szostak and Hazen’s definition of functional information, in a way that makes his rule true. But gpuccio never manages to show that when Functional Information is defined as Hazen and Szostak defined it, that 500 bits of it cannot be accumulated by natural selection.
The Thief Scenario
Here is gpuccio’s scenario, as most succinctly stated in comment #65 of his ‘Texas Sharpshooter’ post at UD (gpuccio gives there links to some earlier appearances of the scenario):
In essence, we compare two systems. One is made of one single object (a big safe), the other of 150 smaller safes.
The sum in the big safe is the same as the sums in the 150 smaller safes put together. That ensures that both systems, if solved, increase the fitness of the thief in the same measure.
Let’s say that our functional objects, in each system, are:
a) a single piece of card with the 150 figures of the key to the big safe
b) 150 pieces of card, each containing the one-figure key to one of the small safes (correctly labeled, so that the thief can use them directly).
Now, if the thief owns the functional objects, he can easily get the sum, both in the big safe and in the small safes.
But our model is that the keys are not known to the thief, so we want to compute the probability of getting to them in the two different scenarios by a random search.
So, in the first scenario, the thief tries the 10^150 possible solutions, until he finds the right one.
In the second scenario, he tries the ten possible solutions for the first safe, opens it, then passes to the second, and so on.
and gpuccio’s challenge is:
Do you think that the two scenarios are equivalent?
What should the thief do, according to your views?
My answers of course are, to the first question, “No”. To the second question, “Cheat, or hope that in this particular case you have an instance of the second scenario”.
My reaction: This is a nice teaching example showing why, in the first scenario, there is no hope of guessing the correct key, even once in the history of the universe.
In the first scenario there is no path to getting the safe open by successively opening parts of it. In the second scenario one can open each safe after an average of 5.5 guesses, and when you have done this 150 times you get all the contents of the safes.
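The arithmetic of the second scenario is easy to check with a quick simulation (a sketch of my own, not from gpuccio's comment; it assumes the thief never repeats a digit on the same safe, which makes the average 5.5 guesses per safe):

```python
import random

# Simulate the second scenario: 150 safes, each with a one-digit (0-9)
# combination. The thief tries distinct digits in random order on each
# safe until it opens, then moves on to the next safe.

def guesses_to_open_all(num_safes=150, digits=10, rng=random):
    total = 0
    for _ in range(num_safes):
        combo = rng.randrange(digits)      # the safe's secret digit
        tries = list(range(digits))
        rng.shuffle(tries)                 # thief's random guessing order
        total += tries.index(combo) + 1    # guesses spent on this safe
    return total

rng = random.Random(0)
avg = sum(guesses_to_open_all(rng=rng) for _ in range(2000)) / 2000
print(avg)  # close to 150 * 5.5 = 825 guesses in total
```

Compare that with the first scenario, where the expected number of guesses is on the order of 10^150 / 2: the difference is not one of degree but of feasibility.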
My question
Why did gpuccio think that I objected to gpuccio’s logic for the case in which there is nonzero function only in a set of sequences that is an extremely small fraction of all sequences? I was very clear in acknowledging that in such a case natural selection cannot find the sequences that have nonzero function, if we start with a random sequence.
Repeating his argument does not help his case, because my point is not that this part of his argument is wrong. I made this very clear in my previous post on gpuccio’s argument. But let me repeat it, for those who did not happen to read that post.
What gpuccio got wrong
Gpuccio had stated (here) that
… the idea is that if we observe any object that exhibits complex functional information (for example, more than 500 bits of functional information ) for an explicitly defined function (whatever it is) we can safely infer design.
Functional information was defined by Jack Szostak (2003) and by Hazen, Griffin, Carothers, and Szostak (2007). It assumes that we have a set of molecular sequences (for example, the coding sequence of a single protein, or its amino acid sequence), and with each of them is associated a number, the function: for example, a sequence’s ability to synthesize ATP if it is an ATP synthase.
In this original definition of FI, there is no assumption that almost all of the sequences have function zero. Given that, there is no way to rule out the possibility that there are paths from many parts of the sequence space that lead to higher and higher function. And given that, we cannot eliminate the possibility that natural selection and mutation could follow such a path to arrive at the function that we observe in nature.
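To make the definition concrete, here is a toy computation of functional information (my own illustration, with a made-up alphabet, target, and function, not anything from Szostak or Hazen's papers): FI(E) = −log2 of the fraction of all sequences whose function is at least E.

```python
import math
from itertools import product

# Toy setting: "sequences" are all length-5 strings over a 4-letter
# alphabet, and the made-up "function" of a sequence is the number of
# positions at which it matches an arbitrary target sequence.

ALPHABET = "ACGU"
TARGET = "ACGUA"

def function(seq):
    return sum(a == b for a, b in zip(seq, TARGET))

def functional_information(threshold):
    # FI(E) = -log2( fraction of sequences with function >= E )
    n_total = len(ALPHABET) ** len(TARGET)
    n_functional = sum(
        1 for p in product(ALPHABET, repeat=len(TARGET))
        if function("".join(p)) >= threshold
    )
    return -math.log2(n_functional / n_total)

# Only the target itself scores 5, so FI(5) = -log2(1/4**5) = 10 bits.
print(functional_information(5))
```

Note that nothing in this definition forces sequences below the threshold to have exactly zero function; intermediate scores (1 through 4 matches here) exist, and that is exactly the kind of structure a selective path could climb.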
How gpuccio eliminates natural selection
Simply by redefining the term: gpuccio calls something “Complex Functional Information” only when the amount of function is zero for all sequences outside the target set. That makes gpuccio’s definition of Complex Functional Information very different from Szostak’s and Hazen’s. Gpuccio’s restricted definition rules out all cases where there might be a path among sequences leading to the target sequences, a path along which function rises continually.
This was explained clearly many times in my post and in the 2,000-comment thread that it generated. Apparently all that gpuccio and jawa can do is repeat their argument, which does not show that 500 bits of Functional Information, defined as Szostak and Hazen define it, cannot be achieved by normal evolutionary processes such as natural selection.
ETA: corrected “150 bits” to “500 bits” (which is roughly 150 digits).
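The arithmetic behind that correction is a one-liner: a 150-digit decimal combination carries 150 × log2(10) bits of information.

```python
import math

# 150 decimal digits of combination correspond to 10**150 possibilities,
# i.e. 150 * log2(10) bits, which is roughly the 500-bit threshold.
print(150 * math.log2(10))  # ≈ 498.3 bits
```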
This was the great hope of the 80’s : rational drug design.
As of 2000 (~ 20 years late…), drug developers DO use computer simulations to dock drug candidates with proteins (of known structure). Usefulness has been underwhelming, hence all the excitement around “click-chemistry” and related ways of generating large numbers of variations on a theme for elucidating Structure-Activity-Relationships; IOW trial and error is still King.
The problem with answering the “rarity question” is that crappy primary sequences (i.e. those that do not form a unique fold with high yield) can still be functional, if a detectable fraction of them form a sufficiently useful fold. Szostak’s initial ATP binders behaved like this.
Validating that your virtual folding program was accurately predicting this plethora of fold variants, including the correct minor variants, would be VERY difficult.
Likewise, knowing that a particular virtual fold is inactive is also problematic. As soon as your badly folded protein contacts *any* ligand, it can change shape… Hence the whole “induced fit” problem. I have difficulty imagining a way of making that problem tractable.
The solution to date has been to have molecular biology create large libraries of proteins and test empirically which ones work. Hence the Szostak/Phylos technology (now owned by BMS) and the Dyax phage display (now owned by Shire).
That’s a feature, not a bug. If, in reality, there is another upward path for evolution to work on, that supports the possibility of small improvements from just-above-zero functionality.
Have faith, brother! 🙂
Of course he doesn’t think there’s slopes, and he doesn’t think that other protein families could perform a function currently performed by one protein family.
He, gpuccio, has an “unanswered” challenge to show that there’s evidence that would at least suggest that it is even slightly possible for there to be “paths” towards a function (paraphrasing).
Of course, the supposed “challenge” was very easily answered, but, of course again, he won’t accept that it was answered. So, he moved the goalposts, and dismissed a chunk of the answer by claiming that “affinity has nothing to do with it” (he must imagine that proteins work by telepathy). Oh, but before my answer to the “challenge,” he, gpuccio, had quoted a bunch of abstracts, all of which contained the word “affinity” in them.
After that amount of dishon …. I mean … ahem … mistakes! I thought it wasn’t worth trying to have a conversation with the guy, I just told him that the challenge was answered from a reasonable person’s perspective, or something to that effect. I don’t know if he said anything after that.
DNA_Jock,
Thanks for the links. Science is always work in progress. The phage display technique looks effective.
Yes, he did!
I get my own response!
Just to clarify, gpuccio, I don’t think I have ever criticised “Intelligent Design Theory”. I have no idea what “Intelligent Design Theory” is and have asked for a scientific ID hypothesis many times at UD when I could post there. I seriously doubt any “Intelligent Design” theory or hypothesis exists.
I wish gpuccio would look at the thread himself rather than relying on edited highlights relayed by Joe G.
@ Mung
Don’t know if you are just joking but no contributions in that thread are from me under any pseudonym. Just sayin.
Absolutely, and as OMagain and petrushka have noted, everything has some function. The debate is around how well connected the lowlands are to the peaks. Hayashi reckoned that “most” random sequences had pathways to higher function available. Personally, I think he may have been over-concluding.
How about 10% ?
It wouldn’t surprise me. Has anyone suggested a percentage threshold above which evolutionary processes are inevitable?
gpuccio “defines” Intelligent Design theory as follows
I dunno, I’d hardly call that a theory. An untestable hypothesis, perhaps.
You’re waaaaaaaay too kind.
A claim with unsalvageable philosophical and scientific problems. IOW a nonsensical claim.
ETA: I had semantic problems there, but that’s part of the philosophical problems.
!
Based on gpuccio’s replies to my comments, I now realize that I was wrong. I had inferred that gpuccio would only call a function CFI if, outside the target region, the function was exactly zero everywhere, so that natural selection could not follow an uphill slope to the target.
But gpuccio did allow for cases where there was some function outside the target region. It is just that in all such cases gpuccio concludes that it is too little function for natural selection to follow it uphill. How gpuccio concludes that is a mystery, but gpuccio is very convinced of it. So I was wrong to say that gpuccio only calls a situation CFI if there is no function outside the target region. Instead gpuccio considers CFI to be present in any case where there are 500 bits of FI, but has somehow concluded that in all such cases natural selection cannot move the genotype from outside the target region to inside it.
A massive dismissal of natural selection which will not impress evolutionary biologists.
But it impresses his admirers, and for ID creationists that’s what counts.
If a hypothesis is untestable, isn’t it just an unsupported assertion?
Yes. Impressing people like Joe G, Mullings and Cunningham must be very life affirming. 😃
Puccio claims that there’s no religion in that bullshit of his, and…
Apparently there’s no limit to how complex something can be before design becomes an untenable explanation. Almost as if they were silently assuming an omnipotent Designer. All science, no religion! Lulz
There’s little philosophy there, that’s true, just an argument from ignorance/question begging plus an argument from personal incredulity. Keep your chin up, creotards! Take pride in your fallacies, hell yeah!
Yes, joking. 🙂
Joe Felsenstein,
What do you think is the best empirical case that natural selection can find what we are observing in nature? His case is made by observing the many information jumps along the road to life’s diversity.
The case you can make is that there were workable steps for natural selection which are not observable today. An example is the many extinct single-celled organisms between observed bacteria and yeast. A theory based on evidence that isn’t there cannot be very strong.
And yet you cannot topple it.
colewd,
I think you have the evidence right in front of you: gpuccio’s data. Highly conserved sequences are what negative selection produces, and very distant relatives still show significant sequence similarity to the highly conserved vertebrate sequences, which I think means they were under selective pressure, hence functional, in both lineages. Otherwise drift would have erased all sequence similarity long ago.
If Darwinism isn’t dead, why do we have all these people taking about the modern synthesis and the “third way” and emergent properties of autopoietic systems within far-from-equilibrium thermodynamics?
Darwinism is a collective term, usually used by people like you, to collect everything together that is not Intelligent Design Creationism. Actual science, in other words.
Unlike your religious books, science moves on.
But for simplicity’s sake, when you read “Darwinism” just think “Not-Intelligent-Design-Creationism”.
Also note that “Darwinism” was not mentioned in my comment or the comment I was responding to. That’s your projection.
You guys are gonna love this. At UD, user john_a_designer compares the evidence for natural selection with the evidence for Bigfoot (while badly missing the point about arguments from personal incredulity), and JoeG/Frankie/ET responds by defending the existence of Bigfoot.
Oh boy, what a laugh 😀
Holy crap.
Every time there’s a new commenter of sorts, gpuccio repeats the mantra that “and all of this doesn’t assume God!” or something like that. But a bit of thinking would reveal otherwise, even if they authentically were trying to avoid making it about gods (I doubt they’re even trying to avoid it though).
For one, if they claim that complex systems don’t develop naturally and we point at life, bum! “Fallacy! fallacy! That’s the thing in contention!” No kidding? Then why do you use life to try and explain life yourself? “I’m not!” Yes, yes you are: you’re using us, the so-called intelligent designers, to try and explain life, and, last time I checked, we were one of many life forms, so, if I cannot say: we see it everyday: check it around, life does everything by itself, naturally. No magical being in the s… I mean, no intelligent designers anywhere to be seen, if I cannot say that, then you cannot say that it’s intelligently designed.
So, how to try and avoid the philosophical conundrum here if not by imagining gods? What is it then if not religion masquerading as science?
No kidding. Right? Not about “God” at all. I’m sooooo convinced now.
I try and avoid calling them creatards, but then they show that fucking dishonesty, and I’m tempted to change my ways.