On Uncommon Descent, poster gpuccio has been discussing “functional information”. Most of gpuccio’s argument is a conventional “islands of function” argument. Not being very knowledgeable about biochemistry, I’ll happily leave that argument to others.
But I have been intrigued by gpuccio’s use of Functional Information, in particular gpuccio’s assertion that if we observe 500 bits of it, this is a reliable indicator of Design, as here, at about the 11th sentence of point (a):
… the idea is that if we observe any object that exhibits complex functional information (for example, more than 500 bits of functional information ) for an explicitly defined function (whatever it is) we can safely infer design.
I wonder how this general method works. As far as I can see, it doesn’t work. There would seem to be three possible ways of arguing for it, and in the end, two don’t work and one is just plain silly. Which of these is the basis for gpuccio’s statement? Let’s investigate …
A quick summary
Let me list the three ways, briefly.
(1) The first is the argument using William Dembski’s (2002) Law of Conservation of Complex Specified Information. I have argued (2007) that this is formulated in such a way as to compare apples to oranges, and thus is not able to reject normal evolutionary processes as explanations for the “complex” functional information. In any case, I see little sign that gpuccio is using the LCCSI.
(2) The second is the argument that the functional information indicates that only an extremely small fraction of genotypes have the desired function, and the rest are all alike in totally lacking any of this function. This would prevent natural selection from following any path of increasing fitness to the function, and the rareness of the genotypes that have nonzero function would prevent mutational processes from finding them. This is, as far as I can tell, gpuccio’s islands-of-function argument. If such cases can be found, then explaining them by natural evolutionary processes would indeed be difficult. That is gpuccio’s main argument, and I leave it to others to argue with its application in the cases where gpuccio uses it. I am concerned here, not with the islands-of-function argument itself, but with whether the design inference from 500 bits of functional information is generally valid.
We are asking here whether, in general, observation of more than 500 bits of functional information is “a reliable indicator of design”. And gpuccio’s definition of functional information is not confined to cases of islands of function, but also includes cases where there would be a path along which function increases. In such cases, seeing 500 bits of functional information, we cannot conclude that it is extremely unlikely to have arisen by normal evolutionary processes. So the general rule that gpuccio gives fails, as it is not reliable.
(3) The third possibility is an additional condition that is added to the design inference. It simply declares that unless the set of genotypes is effectively unreachable by normal evolutionary processes, we don’t call the pattern “complex functional information”. It does not simply define “complex functional information” as a case where we can define a level of function that makes the probability of the set less than 2^-500. That additional condition allows us to safely conclude that normal evolutionary forces can be dismissed — by definition. But it leaves the reader to do the heavy lifting, as the reader has to determine that the set of genotypes has an extremely low probability of being reached. And once they have done that, they will find that the additional step of concluding that the genotypes have “complex functional information” adds nothing to our knowledge. CFI becomes a useless add-on that sounds deep and mysterious but actually tells you nothing beyond what you already know. And there seems to be some indication that gpuccio does use this additional condition.
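For concreteness, the 500-bit figure can be unpacked with Szostak's definition of functional information, FI = −log2(fraction of sequences meeting the function threshold). A minimal sketch of the arithmetic (the 116-residue figure is my own illustration, not a number from gpuccio's posts):

```python
import math

def functional_information(fraction: float) -> float:
    """Szostak-style functional information: -log2 of the fraction of
    all possible sequences that meet or exceed the function threshold."""
    return -math.log2(fraction)

# 500 bits corresponds to a target region occupying 2^-500 of sequence space.
fraction_at_500_bits = 2.0 ** -500
print(functional_information(fraction_at_500_bits))  # 500.0

# For proteins (20 amino acids, log2(20) ~ 4.32 bits per fully specified
# residue), 500 bits is reached by roughly 116 fully constrained positions.
print(500 / math.log2(20))  # ~115.7
```

This makes clear that the quantity measures only how small the target region is, not whether evolutionary paths lead into it.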
Let us go over these three possibilities in some detail. First, what is the connection of gpuccio’s “functional information” to Jack Szostak’s quantity of the same name?
Is gpuccio’s Functional Information the same as Szostak’s Functional Information?
gpuccio acknowledges that gpuccio’s definition of Functional Information is closely connected to Jack Szostak’s definition of it. gpuccio notes here:
Please, not[e] the definition of functional information as:
“the fraction of all possible configurations of the system that possess a degree of function >=
which is identical to my definition, in particular my definition of functional information as the
upper tail of the observed function, that was so much criticized by DNA_Jock.
(I have corrected gpuccio’s typo of “not” to “note”, JF)
We shall see later that there may be some ways in which gpuccio’s definition is modified from Szostak’s. Jack Szostak and his co-authors never attempted any use of his definition to infer Design. Nor did Leslie Orgel, whose Specified Information (in his 1973 book The Origins of Life) preceded Szostak’s. So the part about design inference must come from somewhere else.
gpuccio seems to be making one of three possible arguments:
Possibility #1 That there is some mathematical theorem that proves that ordinary evolutionary processes cannot result in an adaptation that has 500 bits of Functional Information.
Use of such a theorem was attempted by William Dembski, his Law of Conservation of Complex Specified Information, explained in Dembski’s book No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002). But Dembski’s LCCSI theorem did not do what Dembski needed it to do. I have explained why in my own article on Dembski’s arguments (here). Dembski’s LCCSI changed the specification before and after evolutionary processes, and so he was comparing apples to oranges.
In any case, as far as I can see gpuccio has not attempted to derive gpuccio’s argument from Dembski’s, and gpuccio has not directly invoked the LCCSI, or provided a theorem to replace it. gpuccio said in a response to a comment of mine at TSZ,
Look, I will not enter the specifics of your criticism to Dembski. I agre with Dembski in most things, but not in all, and my arguments are however more focused on empirical science and in particular biology.
While thus disclaiming that the argument is Dembski’s, on the other hand gpuccio does associate the argument with Dembski here by saying that
Of course, Dembski, Abel, Durston and many others are the absolute references for any discussion about functional information. I think and hope that my ideas are absolutely derived from theirs. My only purpose is to detail some aspects of the problem.
and by saying elsewhere that
No generation of more than 500 bits has ever been observed to arise in a non design system (as you know, this is the fundamental idea in ID).
That figure being Dembski’s, this leaves it unclear whether gpuccio is or is not basing the argument on Dembski’s. But gpuccio does not directly invoke the LCCSI, or try to come up with some mathematical theorem that replaces it.
So possibility #1 can be safely ruled out.
Possibility #2. That the target region in the computation of Functional Information consists of all of the sequences that have nonzero function, while all other sequences have zero function. As there is no function elsewhere, natural selection for this function then cannot favor sequences closer and closer to the target region.
Such cases are possible, and usually gpuccio is talking about cases like this. But gpuccio does not require them in order to have Functional Information. gpuccio does not rule out that the region could be defined by a high level of function, with lower levels of function in sequences outside of the region, so that there could be paths allowing evolution to reach the target region of sequences.
An example in which gpuccio recognizes that lower levels of function can exist outside the target region is found here, where gpuccio is discussing natural and artificial selection:
Then you can ask: why have I spent a lot of time discussing how NS (and AS) can in some cases add some functional information to a sequence (see my posts #284, #285 and #287)
There is a very good reason for that, IMO.
I am arguing that:
1) It is possible for NS to add some functional information to a sequence, in a few very specific cases, but:
2) Those cases are extremely rare exceptions, with very specific features, and:
3) If we understand well what are the feature that allow, in those exceptional cases, those limited “successes” of NS, we can easily demonstrate that:
4) Because of those same features that allow the intervention of NS, those scenarios can never, never be steps to complex functional information.
Jack Szostak defined functional information by having us define a cutoff level of function to define a set of sequences that had function greater than that, without any condition that the other sequences had zero function. Neither did Durston. And as we’ve seen gpuccio associates his argument with theirs.
So this second possibility could not be the source of gpuccio’s general assertion about 500 bits of functional information being a reliable indicator of design, however much gpuccio concentrates on such cases.
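The point can be made concrete with a toy model (my own construction, not anything gpuccio or Szostak proposed): a 500-bit target where each matching bit increases fitness, so a path of increasing function exists. Simple mutation plus selection then reaches the target, which constitutes 500 bits of functional information even though a single random draw would hit it with probability 2^-500:

```python
import random

random.seed(1)
L = 500                 # target length in bits; 1 of 2^500 strings is the target
target = [1] * L

def fitness(genome):
    # Function increases smoothly with each matching bit, so a path exists.
    return sum(g == t for g, t in zip(genome, target))

genome = [random.randint(0, 1) for _ in range(L)]
steps = 0
while fitness(genome) < L:
    i = random.randrange(L)
    mutant = genome.copy()
    mutant[i] ^= 1      # flip one bit
    # Selection: keep the mutant only if it is at least as fit.
    if fitness(mutant) >= fitness(genome):
        genome = mutant
    steps += 1

print(fitness(genome))  # 500
```

The walk typically finishes in a few thousand mutations, not 2^500 trials, which is exactly why the mere observation of 500 bits cannot, by itself, rule out evolutionary processes when such paths exist.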
Possibility #3. That there is an additional condition in gpuccio’s Functional Information, one that does not allow us to declare it to be present if there is a way for evolutionary processes to achieve that high a level of function. In short, if we see 500 bits of Szostak’s functional information, and if it can be put into the genome by natural evolutionary processes such as natural selection then for that reason we declare that it is not really Functional Information. If gpuccio is doing this, then gpuccio’s Functional Information is really a very different animal than Szostak’s functional information.
Is gpuccio doing that? gpuccio does associate his argument with William Dembski’s, at least in some of his statements. And William Dembski has defined his Complex Specified Information in this way, adding the condition that it is not really CSI unless it is sufficiently improbable that it be achieved by natural evolutionary forces (see my discussion of this here, in the section on “Dembski’s revised CSI argument”, which refers to Dembski’s statements here). And Dembski’s added condition renders use of his CSI a useless afterthought to the design inference.
gpuccio does seem to be making a similar condition. Dembski’s added condition comes in via the calculation of the “probability” of each genotype. In Szostak’s definition, the probabilities of sequences are simply their frequencies among all possible sequences, with each being counted equally. In Dembski’s CSI calculation, we are instead supposed to compute the probability of the sequence given all evolutionary processes, including natural selection.
gpuccio has a similar condition in the requirements for concluding that complex functional information is present. We can see it at step (6) here:
If our conclusion is yes, we must still do one thing. We observe carefully the object and what we know of the system, and we ask if there is any known and credible algorithmic explanation of the sequence in that system. Usually, that is easily done by excluding regularity, which is easily done for functional specification. However, as in the particular case of functional proteins a special algorithm has been proposed, neo darwininism, which is intended to explain non regular functional sequences by a mix of chance and regularity, for this special case we must show that such an explanation is not credible, and that it is not supported by facts. That is a part which I have not yet discussed in detail here. The necessity part of the algorithm (NS) is not analyzed by dFSCI alone, but by other approaches and considerations. dFSCI is essential to evaluate the random part of the algorithm (RV). However, the short conclusion is that neo darwinism is not a known and credible algorithm which can explain the origin of even one protein superfamily. It is neither known nor credible. And I am not aware of any other algorithm ever proposed to explain (without design) the origin of functional, non regular sequences.
In other words, you, the user of the concept, are on your own. You have to rule out that natural selection (and other evolutionary processes) could reach the target sequences. And once you have ruled it out, you have no real need for the declaration that complex functional information is present.
I have gone on long enough. I conclude that there simply is no rule that observation of 500 bits of functional information allows us to conclude in favor of Design (or at any rate, to rule out normal evolutionary processes as the source of the adaptation). Or if such a rule does exist, it is a useless add-on to an argument that draws that conclusion for some other reason, leaving the really hard work to the user.
Let’s end by asking gpuccio some questions:
1. Is your “functional information” the same as Szostak’s?
2. Or does it add the requirement that there be no function in sequences that are outside of the target set?
3. Does it also require us to compute the probability that the sequence arises as a result of normal evolutionary processes?
True. You and fifth are giving Bill a run for his money.
What priors? Obviously, things you have never seen, must be nigh impossible!
Will you publish a rebuttal paper? Otherwise unless people check here they will continue to cite it unknowingly.
Having a stroke much?
I sense a “but” coming…
But spliceosomes have RNA as their core ingredient.
You’re not giving me that. That is a known fact.
Anyway, as Rumraket points out, lack of knowledge of a precise evolutionary pathway does not disprove an evolutionary pathway exists.
This is moot, anyway, so I’ll stay mute on spliceosomes. The topic at hand, the one you raised with me, was whether Axe’s 2004 paper has been oversold by the ID crowd and whether Arthur Hunt’s and Rumraket’s criticisms are fair. I’ve just got a copy of Axe’s paper and I’m happy to go through the issues with you, time permitting.
This we covered before: Even if everything you say is true and the spliceosomal machinery was lovingly crafted by the Designer in her super secret laboratory outside time and space, then this tells us nothing about whether all extant organisms are related by common descent. That is a distinct issue from the origin of novel characters.
The part about spliceosomal introns being derived from group II self-splicing introns has also been covered before (see what I mean about not examining your opponents’ arguments?). As for the PRPF8 protein: It looks like Rumraket scooped me: I found the Dlakić & Mushegian paper too. I have only given it a cursory reading, but I understand that there is some evidence that PRPF8 evolved from a retroelement-encoded reverse transcriptase (co-option, baby). Well, I guess that evolutionary biology is saved now (*whew*).
Looking forward to your detailed treatment of Rum’s argument.
If it was after the ark then it was common descent. If it was before the ark then it was special creation.
I think I should do an OP on the evidence for common descent within kinds. Then perhaps we can see where the evidence has to change to accommodate the separate kinds, which cannot possibly have also been related by common descent.
Are you a YEC now?
So you’re ok including a proposal with 10^60 per 100 AA’s as the working number 🙂
Sure. When ever you want to discuss it. I am ready to take the position that ID undersold it.
The whole discussion is about trying to get you to understand that none of the “problems” you “propose” for evolution work. The whole discussion is moot with respect to what evolution can or cannot do. We’re just trying to get you to understand a few things, but you are unable to consider that you might be quite incompetent at understanding anything of what we’ve explained so far. So, instead, you continue surprising us with things that, given our previous explanations, should already be no issues to you, had you just understood.
There’s no such thing as naturalism of the gaps Bill. I already explained this to you. Nature is all around and within you. You cannot deny nature’s existence by definition.
When confronted with things around us, if we didn’t understand how they came about (mind you, in this case we do understand how), we’d have to default towards what we know to exist, meaning nature, rather than default to something as non-sensical as some unknown, unidentifiable, cart-before-the-horse designers.
The main problem within this discussion, Bill, is that your emotional states won’t allow you to admit of your inadequacies for this discussion, and to notice the abysmally ridiculous double standard that you apply to get to your preferred conclusion.
ETA: fixed some misworded stuff
It indicates that the eukaryotic cell may be a separate origin event.
I have read it also and see there is some sequence similarity, mostly with group II introns. Once we get this one behind us we only have 199 proteins to go 🙂
How would you test his hypothesis on the cause of bacterial variation?
So let me get this right. Anyone who disagrees with you is wrong?
colewd, to Entropy:
I would say that Bill is as dumb as a rock, but that would be insulting to rocks. So let me say instead that Bill is dumber than a rock.
Nope. But anyone who disagrees by refusing to understand, and by applying a ridiculously polarized double standard, must have some issues other than disagreeing with me.
If only I knew what it is that you don’t get, but you prefer to pretend that you read, and understood, those explanations, yet you just don’t get any of it. Do you?
Let’s try a simple one. You claim that “the closest we get to humans, the most the conservation.” You say that as if it proves that god-did-it. Yet, it fits perfectly well with evolution. The closer we get to humans, obviously, the less time there’s been for divergence to occur. So sequences will look more similar between chimps and humans because they have been separated for less time than, say, humans and snakes. This is obvious. The conservation shows nothing else but divergence since separation. More time, more divergence (less “conservation”). This is painfully obvious. For good measure, Rum showed you a tree so that you’d get it. Yet, you seem to have missed a point that could not be clearer. What about you show me that I’m wrong, that you actually got it, and explain to me what the tree was meant to show you?
Do you understand why it’s not surprising at all that the closer we get to humans the more the “conservation”? Do you understand how that doesn’t mean that god-did-it?
Is there a sea change that I see in Spanish politics?
And the obvious place to do that is in Rumraket’s thread. I still can’t believe I hadn’t noticed it before.
Thank you for revealing once again that you don’t know what you’re talking about.
Axe’s conclusions as sold to the ID crowd would have meant the results of all the experiments I refer to in my OP on the topic, would have been practically impossible. No functions should have been found at all, because only an infinitesimal fraction of 10^77 sequences were tested.
The claim made to the ID crowd is that functions should be found at a rate of about 1 in 10^77 sequences, or even rarer than that. Yet experimentally, functions are found at a rate between 1 in 1 and 1 in 10^11. That’s why I write in my post that real world experiments put Axe’s number off by at least sixty orders of magnitude.
It is difficult to imagine how distorted must be the thinking of an individual who wishes to advance the proposition that the real numbers are even worse than Axe’s.
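The "sixty orders of magnitude" figure above follows directly from the quoted rates; a quick arithmetic check (a sketch using only the numbers given in the comment):

```python
import math

axe_rate = 1e-77        # claimed rate: ~1 functional sequence in 10^77
observed_worst = 1e-11  # least favorable experimental rate: 1 in 10^11

# Orders of magnitude between the claim and the worst-case observation.
gap = math.log10(observed_worst / axe_rate)
print(gap)  # ~66, i.e. at least sixty orders of magnitude
```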
Corneel, to colewd:
Stop dodging and do what you said you’d do:
You still haven’t shown there is a probabilistic barrier. And the kind of evidence we have shouldn’t exist if there was, so your assumed probabilistic barrier is contradicted by evidence. Let go of your assumptions. You keep coming to the table with assumptions and even when real world evidence contradict them, you just can’t let go. That tells me you’re not really psychologically interested in discovering what is really true, you’re here to advocate for a conclusion you desire to be true. For some reason you just can’t allow yourself to consider that your “probabilistic barrier” assumption could be wrong.
No as we have been over now multiple times, you’re just using simple gap-reasoning. At no point do you make an actual proper inference using valid syllogistic form. If you were to do that (who are we kidding?) you would discover where the errors in your reasoning lie, or at least you would have to make your assumptions explicit as premises in your argument, and it would become obvious how you are basically just assuming your conclusion before your argument even gets off the ground.
Bill has done the reset-switch thing again. Some times he just goes back to zero, and it’s like the last 5 years just don’t exist in his mind. He remembers nothing of what we have discussed before. RoboCreationDefender v 1.013 self-resetting model with the UnTeachable patch hotfix.
When it runs into something it can’t deal with, it says it’s “interesting” and then forgets all about it. What can you do when you’re up against this kind of attitude?
It’s even worse than that. He’s jumping from “an intelligence could do it, if it existed, if it had the right capabilities, and if it chose to do so” to “such an intelligence did exist, it did have the right capabilities, and it did choose to do so.”
He knows none of those things, of course, and he has no evidence for them. But he wants them to be true, and that’s good enough for Bill. He’s a lot like Trump in that regard.
?? Weren’t you the one that accepted universal common descent ?
Anyway, we covered some of that ground in John’s bird kinds thread, and it seemed at that time that there is not really any consensus on that among the resident TSZ creationists. Perhaps you could get Nonlin to articulate his/her position on that in the new OP (I doubt it).
Haha, nice try.
I think I’ll wait until you have presented your detailed treatment, OK?
Ok. I thought this was a simple question. E. coli has 60% sequence conservation to humans while other bacteria are 15% conserved. If this is just a time issue, then we should be able to change the sequence in E. coli, knock out the repair mechanism, and see if it survives. Perhaps it will not bond to the alpha chain because of the change, so we could also change the alpha chain to the same sequence as the bacteria that are 15% similar to the human sequence.
How would your conclusion change if the E. coli could not survive?
Your assumption here is that all functions are the same. You are ignoring the empirical data. Hunt set a range of 10^11 to 10^60 for 100 AA based on the literature.
You are arguing with yourself but at least you are arguing. I read your 2012 post last night and it took circular reasoning to a new level.
You should really look at the data that’s been generated since the 15-year-old paper you continue to reference.
This is an assertion you continue to make. You’re putting yourself as the judge and authority. A logical fallacy which renders the statement meaningless.
Not what I assumed at all. I am looking at different functional requirements as the possible cause.
There’s no logical fallacy. You continue to demonstrate my point each and every time. For example:
This shows that you stopped reading. Why would you skip the explanation, if not because you don’t understand it? I ask again:
What was Rum’s reason to show you the tree? Do you understand that the less time since separation, the less time for sequences to become “less similar”? Do you understand how that explains why the closer we get to humans the less differences there should be, thus making your claim meaningless in terms of “functional requirements”?
See? You don’t get it. Yet you pretend to answer, but, in doing so, you show that you missed the point entirely. It’s not me making mere assertions about your understanding and double standards. It’s you showing lack of knowledge and a ridiculously polarized double standard.
Wanna give it another try? An honest one for a change?
How would yours if it could? 🙂
Yes, I understand Rum’s argument. When you have an argument about the subject that is new, I am happy to respond.
It would tell me that bacteria have a wider tolerance to functional change than perhaps vertebrates do. I would want to do a similar experiment in mice and see how much change they could tolerate. If mice could also tolerate the same change, I would concede that this is a time issue. If not, then I would ask you to concede that functional requirements play a role.
colewd, to Entropy:
By Bill’s inane logic, no one can ever decide that someone else is wrong. That would be a logical fallacy. So when Bill criticizes evolutionary theory, he is, by his own standards, spouting fallacious nonsense.
Bill couldn’t pour water out of a boot with instructions on the heel.
I see you still have some work to do to understand my sense of humor. 🙂
The underlying theme is the evidence that leads people to base their belief in common descent within “kinds.” How do they decide where common descent is acceptable and where it is not?
I am assuming that Bill accepts at least SOME common descent. But why?
It is not surprising to me that you don’t recognize when someone is arguing in circles from authority. You have mastered this skill.
So you reject the greater divergence time as a viable explanation for the greater sequence variation among bacteria? I don’t understand why that is. You might want to explain.
BTW: Unless you have a secret lab in your house, we will not be able to perform those experiments, so I do not really see the point.
I’m arguing with you and as usual, you have no counterargument to offer.
Says the guy who just got through saying this to Entropy:
Show us the circularity. Make your case.
As if the new data invalidated my argument.
Present your counterargument, Bill, or admit that you can’t. Your bluffing isn’t fooling anyone.
I always first try to parse your comments as jokes, but I confess I missed this one.
Because it allows him to assimilate some of gpuccio’s arguments is my guess, but maybe that’s a bit unfair.
That’s because Mung’s conversion to creationism would actually be quite plausible. In the “Common design vs common descent” thread, he showed us that he lacks even a rudimentary understanding of why the evidence for common descent is so strong and persuasive. A person like that is subject to being swayed.
This is confused. Nobody is saying some of the mutations that separate species do not have functional consequences. Of course they do, that’s inevitable.
Mutations that are initially neutral in effect and therefore aren’t selected against, nevertheless open up paths for other mutations that would otherwise be deleterious, but are permitted because of compensatory epistasis. These unavoidably will accumulate over time. For individual modules in a protein that are dependent on each other, a neutral mutation in subunit 1, can open up for another mutation in subunit 2 (And plenty of work has been done on this general principle, even for ATP synthase subunits, by .. you guessed it, the Thornton Lab). But that now means the mutant subunit 2 is dependent on a mutant subunit 1.
That means we can’t necessarily expect to be able to replace mutationally diverged components of larger structures from different species and expect them to remain functional. There are examples known (also with ATP synthase) where we know it can be done even for proteins that are very dissimilar (and it still yields a functional system where the organism lives on just fine, or with slightly lower fitness), and there are examples known where such an exchange results in a nonfunctional system and lethality, or a very large reduction in fitness.
Either way, the fact of evolution is not contingent on it being possible to just pick any component from any species and remove it, or replace it with a similar one from any other arbitrary organism.
I thought this had already been agreed upon by both sides. I thought you yourself in fact agree there is a “probabilistic barrier” and have even shown why in some of your posts.
colewd, to Entropy:
Stop dodging and do what you said you’d do:
Yes, it will end badly for you, but you should have thought about that before you made the commitment.
I’m already a creationist. Do keep up keiths.
What keiths means, Corneel, was that keiths was either unable or unwilling to meet even the simplest of challenges. And that’s my fault, him being such a willing teacher and all.
If you support your claim with Mung I will respond and show you how Theobald’s cytochrome c data is not current and that new gene data contradicts common descent and supports common design. You can find all this out yourself by reading Cordova’s post which went over 5000 comments.
It fooled me.
If you think dodging is bad, you should consider supporting your claims.
You should also read the post for comprehension and see I have discussed this with Rum already.
Good point. I should have said: