On Uncommon Descent, poster gpuccio has been discussing “functional information”. Most of gpuccio’s argument is a conventional “islands of function” argument. Not being very knowledgeable about biochemistry, I’ll happily leave that argument to others.
But I have been intrigued by gpuccio’s use of Functional Information, in particular gpuccio’s assertion that if we observe 500 bits of it, this is a reliable indicator of Design, as here, at about the 11th sentence of point (a):
… the idea is that if we observe any object that exhibits complex functional information (for example, more than 500 bits of functional information) for an explicitly defined function (whatever it is) we can safely infer design.
I wonder how this general method works. As far as I can see, it doesn’t. There would seem to be three possible ways of arguing for it, and in the end, two don’t work and one is just plain silly. Which of these is the basis for gpuccio’s statement? Let’s investigate …
A quick summary
Let me list the three ways, briefly.
(1) The first is the argument using William Dembski’s (2002) Law of Conservation of Complex Specified Information. I have argued (2007) that this is formulated in such a way as to compare apples to oranges, and thus is not able to reject normal evolutionary processes as explanations for the “complex” functional information. In any case, I see little sign that gpuccio is using the LCCSI.
(2) The second is the argument that the functional information indicates that only an extremely small fraction of genotypes have the desired function, and the rest are all alike in totally lacking any of this function. This would prevent natural selection from following any path of increasing fitness to the function, and the rareness of the genotypes that have nonzero function would prevent mutational processes from finding them. This is, as far as I can tell, gpuccio’s islands-of-function argument. If such cases can be found, then explaining them by natural evolutionary processes would indeed be difficult. That is gpuccio’s main argument, and I leave it to others to argue with its application in the cases where gpuccio uses it. I am concerned here, not with the islands-of-function argument itself, but with whether the design inference from 500 bits of functional information is generally valid.
We are asking here whether, in general, observation of more than 500 bits of functional information is “a reliable indicator of design”. And gpuccio’s definition of functional information is not confined to cases of islands of function; it also includes cases where there is a path along which function increases. In such cases, seeing 500 bits of functional information, we cannot conclude that the adaptation is extremely unlikely to have arisen by normal evolutionary processes. So the general rule that gpuccio gives fails: it is not reliable.
(3) The third possibility is an additional condition that is added to the design inference. It simply declares that unless the set of genotypes is effectively unreachable by normal evolutionary processes, we don’t call the pattern “complex functional information”. It does not simply define “complex functional information” as a case where we can define a level of function that makes the probability of the set less than $2^{-500}$. That additional condition allows us to safely conclude that normal evolutionary forces can be dismissed — by definition. But it leaves the reader to do the heavy lifting: the reader has to determine that the set of genotypes has an extremely low probability of being reached by those processes. And once they have done that, the additional step of declaring that the genotypes have “complex functional information” adds nothing to our knowledge. CFI becomes a useless add-on that sounds deep and mysterious but tells you nothing you did not already know. And there seems to be some indication that gpuccio does use this additional condition.
Let us go over these three possibilities in some detail. First, what is the connection of gpuccio’s “functional information” to Jack Szostak’s quantity of the same name?
Is gpuccio’s Functional Information the same as Szostak’s Functional Information?
gpuccio acknowledges that gpuccio’s definition of Functional Information is closely connected to Jack Szostak’s definition of it. gpuccio notes here:
Please, not[e] the definition of functional information as:
“the fraction of all possible configurations of the system that possess a degree of function >= Ex.”
which is identical to my definition, in particular my definition of functional information as the upper tail of the observed function, that was so much criticized by DNA_Jock.
(I have corrected gpuccio’s typo of “not” to “note”, JF)
We shall see later that there may be some ways in which gpuccio’s definition is modified from Szostak’s. Jack Szostak and his co-authors never attempted any use of his definition to infer Design. Nor did Leslie Orgel, whose Specified Information (in his 1973 book The Origins of Life) preceded Szostak’s. So the part about design inference must come from somewhere else.
gpuccio seems to be making one of three possible arguments:
Possibility #1. That there is some mathematical theorem that proves that ordinary evolutionary processes cannot result in an adaptation that has 500 bits of Functional Information.
Use of such a theorem was attempted by William Dembski, in his Law of Conservation of Complex Specified Information, explained in Dembski’s book No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002). But Dembski’s LCCSI theorem did not do what Dembski needed it to do. I have explained why in my own article on Dembski’s arguments (here). Dembski’s LCCSI changed the specification before and after the evolutionary processes acted, and so he was comparing apples to oranges.
In any case, as far as I can see gpuccio has not attempted to derive gpuccio’s argument from Dembski’s, and gpuccio has not directly invoked the LCCSI, or provided a theorem to replace it. gpuccio said in a response to a comment of mine at TSZ,
Look, I will not enter the specifics of your criticism to Dembski. I agre[e] with Dembski in most things, but not in all, and my arguments are however more focused on empirical science and in particular biology.
While thus disclaiming that the argument is Dembski’s, gpuccio does nonetheless associate the argument with Dembski here by saying that
Of course, Dembski, Abel, Durston and many others are the absolute references for any discussion about functional information. I think and hope that my ideas are absolutely derived from theirs. My only purpose is to detail some aspects of the problem.
and by saying elsewhere that
No generation of more than 500 bits has ever been observed to arise in a non design system (as you know, this is the fundamental idea in ID).
That figure being Dembski’s, this leaves it unclear whether gpuccio is or is not basing the argument on Dembski’s. But gpuccio does not directly invoke the LCCSI, or try to come up with some mathematical theorem that replaces it.
So possibility #1 can be safely ruled out.
Possibility #2. That the target region in the computation of Functional Information consists of all of the sequences that have nonzero function, while all other sequences have zero function. As there is no function elsewhere, natural selection for this function then cannot favor sequences closer and closer to the target region.
Such cases are possible, and usually gpuccio is talking about cases like this. But gpuccio does not require them in order to have Functional Information. gpuccio does not rule out that the region could be defined by a high level of function, with lower levels of function in sequences outside of the region, so that there could be paths allowing evolution to reach the target region of sequences.
An example in which gpuccio recognizes that lower levels of function can exist outside the target region is found here, where gpuccio is discussing natural and artificial selection:
Then you can ask: why have I spent a lot of time discussing how NS (and AS) can in some cases add some functional information to a sequence (see my posts #284, #285 and #287)
There is a very good reason for that, IMO.
I am arguing that:
1) It is possible for NS to add some functional information to a sequence, in a few very specific cases, but:
2) Those cases are extremely rare exceptions, with very specific features, and:
3) If we understand well what are the feature[s] that allow, in those exceptional cases, those limited “successes” of NS, we can easily demonstrate that:
4) Because of those same features that allow the intervention of NS, those scenarios can never, never be steps to complex functional information.
Jack Szostak defined functional information by having us choose a cutoff level of function, which defines the set of sequences with function at least that great, without any condition that the other sequences have zero function. Neither did Durston impose such a condition. And as we’ve seen, gpuccio associates his argument with theirs.
So this second possibility could not be the source of gpuccio’s general assertion about 500 bits of functional information being a reliable indicator of design, however much gpuccio concentrates on such cases.
Possibility #3. That there is an additional condition in gpuccio’s Functional Information, one that does not allow us to declare it to be present if there is a way for evolutionary processes to achieve that high a level of function. In short, if we see 500 bits of Szostak’s functional information, and if it can be put into the genome by natural evolutionary processes such as natural selection then for that reason we declare that it is not really Functional Information. If gpuccio is doing this, then gpuccio’s Functional Information is really a very different animal than Szostak’s functional information.
Is gpuccio doing that? gpuccio does associate his argument with William Dembski’s, at least in some of his statements. And William Dembski has defined his Complex Specified Information in this way, adding the condition that it is not really CSI unless it is sufficiently improbable that it could be achieved by natural evolutionary forces (see my discussion of this here, in the section on “Dembski’s revised CSI argument”, which refers to Dembski’s statements here). And Dembski’s added condition renders use of his CSI a useless afterthought to the design inference.
gpuccio does seem to be making a similar condition. Dembski’s added condition comes in via the calculation of the “probability” of each genotype. In Szostak’s definition, the probabilities of sequences are simply their frequencies among all possible sequences, with each being counted equally. In Dembski’s CSI calculation, we are instead supposed to compute the probability of the sequence given all evolutionary processes, including natural selection.
gpuccio has a similar condition in the requirements for concluding that complex functional information is present. We can see it at step (6) here:
If our conclusion is yes, we must still do one thing. We observe carefully the object and what we know of the system, and we ask if there is any known and credible algorithmic explanation of the sequence in that system. Usually, that is easily done by excluding regularity, which is easily done for functional specification. However, as in the particular case of functional proteins a special algorithm has been proposed, neo darwininism, which is intended to explain non regular functional sequences by a mix of chance and regularity, for this special case we must show that such an explanation is not credible, and that it is not supported by facts. That is a part which I have not yet discussed in detail here. The necessity part of the algorithm (NS) is not analyzed by dFSCI alone, but by other approaches and considerations. dFSCI is essential to evaluate the random part of the algorithm (RV). However, the short conclusion is that neo darwinism is not a known and credible algorithm which can explain the origin of even one protein superfamily. It is neither known nor credible. And I am not aware of any other algorithm ever proposed to explain (without design) the origin of functional, non regular sequences.
In other words, you, the user of the concept, are on your own. You have to rule out that natural selection (and other evolutionary processes) could reach the target sequences. And once you have ruled it out, you have no real need for the declaration that complex functional information is present.
I have gone on long enough. I conclude that the rule that observing 500 bits of functional information allows us to conclude in favor of Design (or at any rate, to rule out normal evolutionary processes as the source of the adaptation) simply does not exist. Or if it does exist, it exists as a useless add-on to an argument that draws that conclusion for some other reason, leaving the really hard work to the user.
Let’s end by asking gpuccio some questions:
1. Is your “functional information” the same as Szostak’s?
2. Or does it add the requirement that there be no function in sequences that are outside of the target set?
3. Does it also require us to compute the probability that the sequence arises as a result of normal evolutionary processes?
Hi Joe,
I’m having a difficult time following. Could you give an example? An analogy?
How can there be an uphill path of function from outside the function? That makes no sense to me.
Are you in agreement then that such is out of reach of blind search, or is that also in dispute?
The argument doesn’t need to rule out natural selection. Natural selection is not an alternative to blind search, it is an addendum to blind search. Natural selection is dependent on blind search as the source of its own sampling mechanism.
What would be that paper, please? You posted two links a while back:
Hayashi Y et al. 2003. Can an arbitrary sequence evolve towards acquiring a biological function? J Mol Evol 56(2):162-168. [DOI: 10.1007/s00239-002-2389-y]
Hayashi Y et al. 2006. Experimental rugged fitness landscape in protein sequence space. PLoS One 1:e96. [DOI: 10.1371/journal.pone.0000096]
I see I misremembered the date. I’ve been referring to the later paper, though both papers deal with essentially the same thing. In the later paper they just expand the size of the population from which they select mutants to seed the next generation, compared to the first paper.
What is interesting here is that a random protein turns out to be able to assist infectivity, and single mutations have large effects on the function, showing that the random protein sits right on a slope that selection can climb towards higher levels of infectivity. If such functions were incredibly rare and isolated in sequence space, then you would pretty much have to believe a miracle occurred in this experiment. But why would you do that? Isn’t it more likely that the belief that such functions are incredibly rare and isolated in sequence space is just wrong?
Rumraket,
Is it your assumption that they are randomizing an entire enzyme sequence?
How many trials did Hayashi estimate were required to get to the wild type? How is it possible that we are observing this sequence in nature?
No, they state as much. And it’s not an enzyme. It’s an attachment protein. Its job is to bind a cellular transporter and assist infection of a bacterial cell by instigating intracellular transport of phage particles. If you read the first paper from 2003 you will also see that the initial question the authors raised and sought to address (the whole motivation for doing the experiment) was whether an arbitrary random sequence could evolve towards acquiring a biological function. It’s right there in the title.
An incredible number. Which is completely irrelevant. The point is the random protein has the same function, it just didn’t function as well as the wild-type.
The question is if such hills can be found, not whether they are all equally tall.
I don’t even know what you’re asking. HOW is it possible? Who knows? Why is reality the particular way it is? Do you think you know the answer to such a question?
I think you could technically start at nonfunction, then mutate iteratively until function is found. It is even possible to start at the base of a slope where there are single mutations that constitute uphill movement.
In Szostak’s 2003 paper he makes it clear that the function is a number (say the rate at which a given reaction is catalyzed by that protein). The number exists for every sequence, and is not necessarily zero. You seem to be assuming that there is a set of sequences which has the function, and the rest do not have it. That is the case that gpuccio is interested in, and such cases can exist. However there can also be cases where the number that is the function is nonzero outside of the target set. Szostak asks us to set a threshold value of the function — the set which is the target is then all sequences whose level of function is above that. So lower nonzero levels of function can exist outside of that set, and there can be uphill paths into the set.
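To make that concrete, here is a minimal sketch in Python of the computation Szostak describes (the numbers are invented toy data; only the formula, FI = -log2(fraction of sequences at or above the threshold), comes from Szostak):

```python
import math

def functional_information(function_levels, threshold):
    """Szostak-style FI: -log2 of the fraction of sequences whose
    measured function level is at or above the chosen threshold."""
    n_above = sum(1 for f in function_levels if f >= threshold)
    return -math.log2(n_above / len(function_levels))

# Toy data: most sequences have low but *nonzero* function;
# a few exceed the threshold. Nothing requires zero function outside the set.
levels = [0.001] * 1000 + [0.5, 0.8, 0.9, 1.0]
print(functional_information(levels, 0.5))  # about 8 bits
```

Note that the sequences below the threshold still have nonzero function here, which is exactly what leaves room for uphill paths into the target set.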
It is out of reach of processes like mutation. But if by “blind search” you mean to include natural selection, then no, it need not be out of reach of that.
So what you mean by “blind search” sort-of-includes natural selection and sort-of-doesn’t. Let’s leave aside the phrase “blind search”, as defining it would lead us into a semantic wrangle and away from the questions of the OP.
Within Szostak’s framework, one could have a smooth landscape of levels of function, and local moves uphill on that could increase function substantially. That is allowed within Szostak’s scheme. So the 500-Bits-Rule is wrong.
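As a toy illustration of that point (everything here is invented for illustration, not taken from gpuccio or Szostak): let a “genotype” be 600 binary sites and let the function be the count of 1s, a perfectly smooth landscape. The target set, just the all-ones sequence, then has FI = -log2(2^-600) = 600 bits, well over 500, yet one-mutation-at-a-time selection reaches it easily:

```python
import random

random.seed(1)
L = 600  # target set = {all-ones}; its FI is -log2(2**-600) = 600 bits

def function_level(s):          # smooth "function": number of 1s in the sequence
    return sum(s)

seq = [random.randint(0, 1) for _ in range(L)]   # random starting genotype
proposals = 0
while function_level(seq) < L:
    i = random.randrange(L)                      # one random point mutation
    trial = seq[:i] + [1 - seq[i]] + seq[i + 1:]
    if function_level(trial) > function_level(seq):
        seq = trial                              # selection keeps improvements
    proposals += 1

print(f"reached the 600-bit target set after {proposals} proposed mutations")
```

So a large FI value by itself says nothing about whether selection can get there; that depends on the shape of the landscape, not on the size of the number.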
It’s much worse than that. He claims the target region is reduced to the wild type sequence, as Rumraket noted. So pathetic that one can only point and laugh.
Rumraket,
Thanks Rum
Mung, leaving aside a semantic wrangle? Good luck with that, Joe.
While this is effectively what he claims, since he refuses to consider the effect of there being other peaks, he claims to be using the Hazen/Szostak definition of FI, which is (minus log2, whatever) of the proportion of the sequences that have function level X or above.
* He uses the observed level of function as the threshold (TSS)
* He ignores the existence of peaks other than the observed one.
* He demands that random, equiprobable sampling find this peak.
so your reaction is understandable…
Mung, here’s a way of thinking about it, if you can handle terrestrial landscapes (dimensions have given you problems in the past, IIRC):
Consider a landscape, say mainland UK.
Draw contours on it. The 100-bit contour is at that height where only 1 part in 2^100 of the UK is above that elevation. The 200-bit contour is at that height where only 1 part in 2^200 of the UK is above that elevation. Keep drawing.
Since the peak of Ben Macdui only has 24 bits of FI, we are in fact drawing contours around the top of Ben Nevis.
Joe’s point is that there is a path from just below the peak of Ben Nevis to the peak of Ben Nevis (which has over 500 bits of FI…)
FYI: at high elevations, Hayashi’s landscapes get spikier than Scotland.
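To unpack those numbers a little (my arithmetic, assuming mainland Britain is roughly 2 × 10^5 km²): 24 bits of FI means only 1 part in 2^24, about 6 × 10^-8 of the total area, lies at or above Ben Macdui’s summit height, which works out to something on the order of a single hectare. A walker can nonetheless reach it one uphill step at a time.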
Joe Felsenstein,
Assuming that mutation can consistently move you up the hill and not down the hill, off the island and into the infinite sea of non-function. If the latter is consistently the case then the 500-bit rule is solid. Sanford’s paper appears to support the case of a consistent move away from function.
This fits with your scenario 2 and what I believe is his argument. His test of this is highly conserved sequences such as the alpha chain of ATP synthase and the PRP8 spliceosome protein.
Bill, you are one confused dude.
Joe Felsenstein,
In the cases that gpuccio supplied, the proteins were part of a multi-protein complex. They bind to other proteins and support the function, or they don’t. Their sequence specificity is dependent on the proteins they bind with. If function here is either working or not working, how would you argue there is any hill to climb?
colewd,
You are arguing one case, in a Michael Behe style argument.
But the issue I am raising is whether there is some mathematical proof that all cases where we can have a set of sequences that have functional information greater than 500 bits cannot be reached by natural selection acting on less-functional sequences that are outside the set. Is there a mathematical proof? Something like William Dembski’s Law of Conservation of Complex Specified Information? (Like his, but not the same — his does not do the job).
Or has gpuccio restricted the 500-Bits-Rule somehow, such as requiring that all sequences outside of the target set have no function at all? Or has he dodged the whole issue by only defining CFI to be present if we already know that natural selection cannot reach the set?
Arguing one case, as you do, does not address the issue of whether the 500-Bits Rule is valid in all cases.
Joe Felsenstein,
I will post this at UD; I assume that is OK with you. You make a very interesting argument.
This is a completely and utterly false dichotomy.
Every protein-ligand interaction occurs some of the time. How much depends on their concentrations, and on the Kd for the interaction. Protein-protein interactions are particularly squishy.
Effectively, this is the door he has chosen. He refuses to consider the possibility that sequences outside the target might have any naturally selectable function, unless someone is able to demonstrate to him to his satisfaction that such sequences exist. Which is tricky, since any experiment one might do is designed, and therefore a demonstration of the power of “Intelligent Selection”, not “Natural Selection”. Keefe et al. included.
It is really that bad.
DNA_Jock,
How do you create a selectable path out of a protein that is only 10% of a protein complex’s overall function?
The spliceosome either works or it doesn’t. ATP synthase either creates ATP or it doesn’t.
It’s usually not that simple. These reactions and interactions can occur at extremely low levels, and the question is if there is enough of that function to be selectable.
Many chemical reactions occur spontaneously and enzymes “merely” speed them up. Many enzymes have promiscuous side-activities that occur at very low levels and can potentially be enhanced by selection should they turn out to have a positive effect on fitness. Binding activity between different proteins (and their substrates) is simply ubiquitous and occurs at weak strength, and again the question is whether such weak activity is enough for selection to work on.
Even water molecules in liquid water bind together at very weak levels, that’s why we see a phenomenon like surface tension in water droplets. Such weak interactions are pretty much unavoidable. The Hayashi papers basically indicate that yes, such weak but ubiquitous functions can be enhanced by positive selection.
Fine with me, but you should learn more about delimiting quotes at UD, to make clearer who said what.
Trying one more time.
If the Basener and Sanford model is correct in showing that mutations have such a distribution of effects that this will result in an inevitable decline in fitness, then the DNA sequences encoding the ATP synthase alpha subunit and PRPF8 could not possibly have remained conserved for millions of years, but must have arisen a short time ago. In that case, you are effectively accepting a YEC view of life.
If, OTOH, you accept that these sequences have remained functional over large periods of geological time, then the Basener and Sanford model is NOT describing a biologically plausible scenario, and paths of increasing fitness should be available to an evolving population. In that case, Joe’s criticism is valid and you need to come up with some other argument to salvage the 500-bit rule.
You cannot have it both ways, so please choose a side.
Ah it appears that gpuccio has answered.
Quoting the beginning of the comment:
I seem to be unable to find the requirement that the information be “new and original” anywhere in Szostak’s definition, so it looks like gpuccio has dreamt up his own definition. I am very curious how we can tell this new and exotic NOFCSI apart from your average FCSI.
I love the part where he says it doesn’t need any “mathematical proof”, but just works in all known cases (trust me, I am a doctor).
And here is the next comment.
Quoting again:
Heehee, that’s rich: No, I have not restricted anything, oh wait, yes, that is exactly what I did.
This is nothing but an assertion. It’s also not an explanation.
Why can’t “new and original” complex functions be reached through step by step increases of function?
It’s the quintessential fallback of IDcreationism, the “new” and “original” qualifier which is never defined. Yeah so the random protein from Hayashi et al 2006 had the infectivity function, but you see it isn’t “new” or “original”, so it doesn’t count!
What does new and original even mean? At what point does a chemical reaction become “new”? In what way must it be “original”?
He’s just saying shit. We are never told.
By definition, of course. Looks like Joe’s diagnosis was spot on:
The additional condition is the “new and original” qualifier, which natural selection cannot fulfill.
Question for the experts, please: does the beta chain of ATP synthase perform the same function in E. coli and in humans?
Yep. There can never be new genetic information, because new, never-before-seen genetic sequences which arise through mutation are just rearranged old nucleotides. Just like no one can ever write a new novel, because they’re just reusing already existing words.
And if it were “new and original,” clearly it would have been designed.
They demand the sort of evidence for evolution that is what they “define” as evidence for design.
Glen Davidson
Rumraket,
If a cell cannot remove introns reliably or produce enough ATP to function, it dies. It has to perform this way from day 1.
Joe Felsenstein,
After discussing this with gpuccio I think the 500 bit rule is fine.
We would both agree that a sequence of 150 AA with one solution could not be obtained with a random search.
If there is more than one solution then that would reduce the number of bits. Selectable steps are simply additional solutions, and they reduce the bit count when identified.
The challenge is accurately measuring the bit count.
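For scale, here is the back-of-envelope arithmetic behind such a bit count under Szostak’s formula (my sketch, not colewd’s or gpuccio’s calculation): a 150-AA sequence has 20^150 possibilities, so a single functional sequence gives FI = log2(20^150) ≈ 648 bits. With N equally weighted solutions the count is about 648 - log2(N), and it only drops below 500 bits once N exceeds 2^148 ≈ 3.6 × 10^44. The hard part, as noted, is that N, the number of selectable solutions and stepping-stones, is exactly the quantity nobody has measured.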
Sure. Why not believe just what you want to believe? Who can argue with that?
Evolution is designed. Obviously.
How about we believe what evidence keeps showing instead of coming up with fatuous excuses to avoid the implications?
What is a sequence of 150 AA with “one solution”?
No Joe.
Take the WEASEL program. It generates a set of random strings. That’s blind search (or sampling). It then selects from that set another set, a subset, that matches the target phrase.
There is a clear difference between the random sampling (blind search) portion of the algorithm and the non-random (biased) sampling portion of the algorithm.
You don’t get the biased sampling except as a subset of the random sampling. Natural selection doesn’t do away with the blind search aspect.
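For reference, here is a minimal sketch of a WEASEL-style program (the mutation rate and population size are arbitrary choices of mine, for illustration only). Note that each generation mutates copies of the current best string, so the sampling is cumulative across generations rather than a single one-shot blind draw:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(s):  # number of positions matching the target phrase
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):  # copy the parent, with per-character copying errors
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)  # the only blind draw
generation = 0
while parent != TARGET:
    candidates = [parent] + [mutate(parent) for _ in range(100)]
    parent = max(candidates, key=score)  # cumulative selection of the best copy
    generation += 1

print(f"matched the target in {generation} generations")
```

Whether one wants to call the mutational step “blind search” is the semantic wrangle Joe declined above; the point of the sketch is only that selection acts on variation around the current sequence rather than re-drawing everything from scratch.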
You want to extrapolate from a single study with a single 137-AA sequence that already had a “function” to all sets of AA sequences, regardless of whether they have any function at all. Don’t let me stand in your way.
But to me that looks exactly like you just choosing to believe what you want to believe.
The spliceosomal complex evolved from self-splicing introns.
In order to GET introns, some piece of RNA has to be able to cut itself out and get itself inserted elsewhere. If there isn’t a type of RNA molecule that can splice itself out to begin with, then there isn’t any way for a putative intron to spread.
You seem to think there were introns (as in large insertions in protein coding genes) to begin with, then there was a long period without splicing during which organisms magically survived having large insertions in protein coding genes, and then a splicing mechanism evolved. But it is the other way around.
Particular pieces of mRNA turned out to be able to cut themselves out and get inserted elsewhere, and that’s how introns came to exist in more and more places in the genome. If the splicing mechanism had not evolved, then introns would never have come to exist at all. And then there wouldn’t have been any need FOR a splicing mechanism.
And there could be other forms of energy currency possible besides ATP, and there are other ways of making ATP besides coupling it to the pumping of ions across a membrane. Cells today all use ATP, but that doesn’t mean they HAD to at the origin of cells. Substrate-level phosphorylation is another way of making ATP that cells use (including our own). Other energy-carrying molecules could include thioesters. There are many possibilities where small organic molecules other than ATP can serve as energy currency for chemical reactions.
You keep falling into this trap of thinking that things always were the way they are now.
So, Rumraket, can you calculate the FI, the complexity, or the “information content” for “METHINKS IT IS LIKE A WEASEL”?
No, some of the evidence actually comes from phylogenetic studies. It is simply implied by comparative genetics that such functions have arisen regularly over the history of life. How many times must something happen before it stops being a miracle?
And from the continued interaction of the immune system with diseases, exploited in the pharmaceutical and biotech industry. Biotech companies keep being able to inject foreign biological materials into living organisms, and their immune systems keep being able to evolve antibodies that can bind them. These antibodies can then be purified and sold to diagnostic medical and research laboratories for use in biochemical research. It’s amazing how there’s scarcely a protein or RNA or DNA sequence that you can’t buy a specific labeled antibody for, from dozens of different species (meaning the immune systems in dozens of species independently evolved antibodies against the injected materials), for use in some diagnostic or research assay. That immediately demonstrates that binding activity between molecules is ubiquitous and almost unavoidable.
And from the recurrent phenomenon that no matter what antibiotic we use, bacteria manage to evolve some way of becoming resistant to it. Transporters and receptors change shape to avoid uptake, or enzymes change conformations or active sites and turn out to be able to attack the antibiotic, and so on and so forth.
Viruses keep being able to evolve new ways of crossing species boundaries, even across rather wide taxonomic divergences. Influenza has been known to cross from various species of livestock (birds and other mammals). So it just keeps being able to find some way of tuning its capsid and attachment proteins to trick cellular receptors into taking it in.
Viruses are known to be able to recombine if several different viruses coexist in the same host. Meaning that sometimes even capsid and attachment proteins, polymerases, integrases, RNA-stabilizing proteins or what have you can cross viral species boundaries, become incorporated in foreign viral genomes and function to increase virulence. And even then, given time, the immune system can usually catch up and find a way to recognize the virus.
These things should all be practically miraculously rare if the IDcreationist vision of functional sequence space were a reality. The whole basis for an arms race between immune systems and infectious diseases should not be able to exist.
You guys just don’t know the slightest thing. Your collective position can only be maintained by a profound ignorance of molecular biology and chemistry in general.
Why should he believe things for which he has no evidence?
Mung, to Joe:
Damn, Mung. After all this time you still don’t understand how Weasel works, or what the word “cumulative” is doing in the phrase “cumulative selection”?
Rumraket,
I don’t think this is a trap at all. Empirical evidence supports this. You can speculate all you want but at some point you need to support your assertions.
Self-splicing introns and the spliceosome are very different animals. All we can observe is prokaryotic cells without the spliceosome and eukaryotic cells with one. If you knock out PRP8 from a eukaryotic cell it will not function. Where is the evidence for intermediate steps?
You got me man. Eyes evolved independently numerous times. Each and every one was a miracle.
Yeah. Blame us for demanding actual evidence. I see how it works.
Damn, keiths. After all this time you still think that blind search means starting over from scratch every iteration.
gpuccio has now responded in a comment at UD to this post, and to the questions I asked at the end of the post.
I am busy today but I am sure people will have thoughts on gpuccio’s responses.
Your statement is false.