# Aphrodite’s head: Eight questions for Douglas Axe

Over at Evolution News, Dr. Douglas Axe argues that merely by using very simple math, we can be absolutely certain that life was designed: it’s an inescapable conclusion. To illustrate his case, he uses the example of a rugged block of marble being transformed by natural weather processes into a statue of a human being. Everyone would agree that this simply can’t happen. And our conclusion wouldn’t change, even if we (i) generously allowed lots and lots of time for the statue to form; (ii) let each body part have a (discrete or continuous) range of permitted forms, or shapes, instead of just one permitted shape; (iii) relaxed the requirement that all body parts have to form simultaneously or in sync, and allowed the different parts of the statue to form at their own different rates; and (iv) removed the requirement that the different parts have to each form independently of one another, and allowed the formation of one part of the statue to influence that of another part.

In his post, Axe rhetorically asks: if we’re so sure that a rugged block of marble could never be transformed by the weather into a human statue, then aren’t we equally entitled to conclude that “blind natural causes” could never have “converted primitive bacterial life into oaks and ostriches and orangutans”? In each case, argues Axe, the underlying logic is the same: when calculating the probability of a scenario which requires many unlikely things to happen, small fractions multiplied by the dozens always result in exceedingly small fractions, and an event which is fantastically improbable can safely be regarded as physically impossible.

In an attempt to persuade Dr. Axe that his logic is faulty on several grounds, I’d like to put eight questions to him, and I sincerely hope that he will be gracious enough to reply.

My first question relates to the size and age of the universe. As I understand it, Dr. Axe, you define “fantastically improbable” as follows: something so improbable that its realization can only be expected to occur in a universe which is much bigger (or much older) than our own. Indeed, on page 282 of your book, Undeniable, you further stipulate that “fantastically improbable” refers to any probability that falls below 1 in 10^116, which you calculate to be the maximal number of atomic-scale physical events that could have occurred during the 14-billion-year history of the universe. Your calculation requires a knowledge of the age of the universe (14 billion years), the amount of time it takes for light to traverse the width of an atom, and the number of atoms in the universe. So here’s my first question for Dr. Axe: how is the design intuition supposed to work for an ordinary layperson who knows none of these things? Such a person will have no idea whether to set the bar at one in a million, one in a billion, one in 10^116, or even one in (10^116)^116. I should also point out that the figure you use for the number of atoms in the universe refers only to the observable universe. Astronomers still don’t know whether the size of the universe as a whole is finite or infinite. And it gets worse if we go back a few decades in the history of astronomy. Until the 1960s, the Steady State Theory of the universe was a viable option, and many astronomers believed the universe to be infinitely old. How would you have argued for the design intuition back then?
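For concreteness, here is a back-of-the-envelope sketch of the kind of calculation Axe describes. The constants are rounded, commonly cited values, not his exact figures:

```python
import math

# Rough upper bound on atomic-scale events in the observable universe:
# (number of atoms) x (events per atom over the universe's history),
# where one "event" lasts as long as light needs to cross one atom.
ATOMS = 1e80                  # commonly cited estimate, observable universe only
AGE_SECONDS = 14e9 * 3.15e7   # ~14 billion years, in seconds
ATOM_WIDTH = 1e-10            # ~1 angstrom, in metres
LIGHT_SPEED = 3e8             # m/s

tick = ATOM_WIDTH / LIGHT_SPEED          # ~3e-19 s per event
events_per_atom = AGE_SECONDS / tick
total_events = ATOMS * events_per_atom

print(f"roughly 10^{round(math.log10(total_events))} events")  # ~10^116
```

Note that even this rough version requires the universe’s age, an atomic length scale, and an atom count — precisely the knowledge the ordinary layperson is not assumed to have.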

My second question relates to functional coherence. You make a big deal of this in your book, Undeniable, where you managed to distill the case for Intelligent Design into a single sentence: “Functional coherence makes accidental invention fantastically improbable and hence physically impossible” (p. 160), where functional coherence is defined as a hierarchical arrangement of parts contributing in a coordinated way to the production of a high-level function (p. 144). The problem with your statue illustration should now be apparent. A statue has no functions. It just sits there. Consequently, whatever grounds we may have for rejecting the supposition that ordinary meteorological processes could transform a block of marble into a statue, they obviously have nothing to do with the argument you develop in your book, relating to functional coherence and whether living things could possibly be the product of unguided natural processes. So my question is: will you concede that the marble block is a bad illustration for your argument relating to functional coherence?

My third question relates to the identity of the object undergoing transformation. In your statue illustration, you ask whether “a rugged outcrop of marble would have to be altered by weather in only a few reasonably probable respects in order to convert it into a sculpted masterpiece.” Obviously, the answer is no: the number of steps would be extremely large, and the steps involved would be fantastically improbable. You then compare this case with the evolutionary claim that “blind natural causes converted primitive bacterial life into oaks and ostriches and orangutans.” But there is an obvious difference in the second case: the primordial bacterium itself is not being changed into an orangutan. Its very distant embryonic descendant, living about four billion years later, is developing into an orangutan. Its ancestors 20 million years ago were not yet orangutans. Self-replication, along with rare copying mistakes (mutations), is required in order for evolution to work. So I’d like to ask: why do you think it’s valid to infer from the fact that A’s changing into B is a fantastically improbable event, that A’s distant descendants gradually mutating into B is also fantastically improbable?

My fourth question relates to chemistry. Let me return to your original example of a block of marble being transformed by weather events into a human statue. I think we can all agree that’s a fantastically improbable event. However, the probability is not zero. I can think of another event whose probability is much, much lower: the likelihood of weather processes transforming a block of diamond, of adamantine hardness, into a human statue. What’s the moral of the story? Chemistry matters a lot, when you’re calculating probabilities. But the average layperson, whom you suppose to be capable of drawing a design inference when it comes to living things, knows nothing about the chemistry of living things, beyond the simple fact that they contain atoms of carbon and a few other elements, arranged in interesting structures. An ordinary person would be unable to describe the chemical properties of the DNA double helix, for instance, even if their life depended on it. So my question to you is: why do you think that a valid design inference can be made about living things, without knowing anything about their underlying chemistry?

My fifth question relates to thermodynamics. I’d like you to have a look at the head of Aphrodite, below (image courtesy of Eric Gaba), known as the Kaufmann head. It’s made of coarse-grained marble from Asia Minor, and it dates back to about 150 B.C.

You’ll notice that her face has worn away quite a bit, thanks to the natural processes of weathering and erosion. This is hardly surprising: indeed, one might see weathering and erosion as an everyday manifestation of the Second Law of Thermodynamics: in an isolated system, concentrated energy disperses over time. Living things possess an unusual ability to locally decrease entropy within their highly organized bodies as they continually build and maintain them, while at the same time increasing the entropy of their surroundings by expending energy, some of which is converted into heat. In so doing, they also increase the total entropy of the universe. But the point I want to make here is that a living thing’s highly useful ability to locally decrease entropy is one which a block of marble lacks: its thermodynamic properties are very different. So my question to you is: why would you even attempt to draw an inference about the transformations which living things are capable of over time, based on your observations of what happens to blocks of marble? And why would you encourage others to do the same?

My sixth question relates to your probability calculations. In your post, you explain the reasoning you employ in order to justify a design inference: “it takes only a modest list of modestly improbable requirements for success to be beyond the reach of chance.” You continue: “Once again, the reasoning here is that small fractions multiplied by the dozens always result in exceedingly small fractions.” Now, this kind of reasoning makes perfect sense if we are talking about dozens of improbable independent events: all you need to do is multiply the probability of each event, in order to obtain the probability of the combination of events. But if the events are not independent, then you cannot proceed in this fashion. Putting it mathematically: let us consider two events, A and B. If these events are independent, then P(AB) is equal to P(A) times P(B), and if both individual probabilities are low, then we can infer that P(AB) will be very low: one in a million times one in a million equals one in a trillion, for instance. But if A and B are inter-dependent, then all we can say about P(AB) is that it is equal to P(A) times P(B|A), and the latter probability may not be low at all. Consequently, in an inter-dependent system comprising dozens of events, we should not simply multiply the small probability of each event in order to compute the combined probability of all the events occurring together. That would be unduly pessimistic. And yet in your post, you attempt to do just that, despite your earlier statement: “Do I assume each aspect [of the statue] is strictly independent of the others in its formation? No.” So I’d like to ask: if you’re willing to grant that even the formation of one aspect of a statue may depend on the formation of other aspects, thereby invalidating the method of calculating the probability of forming the whole statue by multiplying dozens of “small fractions,” then why do you apply this invalid methodology to the formation of living things?
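The difference that dependence makes can be put in a few lines of code; the numbers below are purely illustrative assumptions:

```python
# Independent events: the probabilities multiply.
p_a = 1e-6
p_b = 1e-6
p_both_independent = p_a * p_b           # about one in a trillion

# Dependent events: P(A and B) = P(A) * P(B|A), and P(B|A) need not be small.
# Illustrative assumption: once A has occurred, B follows 9 times out of 10.
p_b_given_a = 0.9
p_both_dependent = p_a * p_b_given_a     # about one in a million: vastly larger

print(p_both_independent, p_both_dependent)
```

With dozens of inter-dependent steps instead of two, the gap between the naive product and the true conditional chain only widens.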

My seventh question relates to the vast number of possible pathways leading to the formation of a particular kind of living thing (such as an orangutan) from a primordial ancestor, and the even vaster number of possible pathways leading to the formation of some kind of living thing from the primordial ancestor. The point I want to make here is a simple one: this or that evolutionary pathway leading to an orangutan may be vanishingly improbable, yet if we consider the vast ensemble of possible pathways leading to an orangutan, the probability of at least one of them being traversed may not be so improbable. And even if we were to agree (for argument’s sake) that the likelihood of an orangutan evolving from the primordial ancestor is vanishingly low, when we consider the potentially infinite variety of all possible life-forms, the likelihood of evolutionary processes hitting on one or more of these life-forms may turn out to be quite high. It is this likelihood which one would need to calculate, in order to discredit the notion that all life on earth is the product of unguided evolutionary processes. Calculating this likelihood, however, is bound to be a very tricky process, and I doubt whether there’s a scientist alive today who’d have even the remotest idea of how to perform such a calculation. So my question is: what makes you think that an untutored layperson, with no training in probability theory, is up to the task? And if the average layperson isn’t up to it, then why should they trust their intuition that organisms were designed?
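A toy calculation shows how an ensemble of pathways changes the arithmetic; every number here is invented for illustration only:

```python
import math

# Each individual pathway is fantastically improbable...
p_single = 1e-30
# ...but suppose the ensemble of possible pathways is even vaster.
n_paths = 1e32

# P(at least one pathway succeeds) = 1 - (1 - p)^n, which for tiny p is
# well approximated by 1 - exp(-n * p).
p_at_least_one = 1 - math.exp(-n_paths * p_single)
print(p_at_least_one)  # effectively 1: near-certainty from "impossible" parts
```

The hard part, of course, is that nobody knows the real values of either number — which is the point of the question.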

My eighth and final question relates to algorithms. Scientific observation tells us that every living thing, without exception, is put together by some kind of biological algorithm: a sequence of steps leading to the formation of an individual of this or that species. The algorithm can thus be viewed as a kind of recipe. (Contrast this with your illustration of a statue being formed by blind meteorological processes, which bears little or no relevance to the way in which a living thing is generated: obviously, there’s no recipe in the wind and the rain; nor is there any in the block of marble.) In order for “blind natural processes” (as you call them) to transform a bacterial ancestor into an orangutan, the algorithm (or recipe) for making an ancient bacterial life-form needs to be modified, over the course of time, into a recipe for making an orangutan. Can that happen?

At first blush, it appears fantastically unlikely, for two reasons. First, one might argue that any significant alteration of a recipe would result in an unstable hodgepodge that’s “neither fish nor fowl” as the saying goes – in other words, a non-viable life-form. However, this intuition rests on a false equivalence between human recipes and biological recipes: while the former are composed of letters which need to be arranged into meaningful words, whose sequence of words has to conform to the rules of syntax, as well as making sense at the semantic level, so that it is able to express a meaningful proposition, the recipes found in living things aren’t put together in this fashion. Living things are made of molecules, not words. What bio-molecules have to do is fit together well and react in the appropriate way, under the appropriate circumstances. Living things don’t have to mean anything; they simply have to function. Consequently, the recipes which generate living things are capable of a high degree of modification, so long as the ensembles they produce are still able to function as organisms. (An additional reason why the recipes found in living things can withstand substantial modification is that the DNA found in living organisms contains a high degree of built-in redundancy.)

Second, it might be argued that since the number of steps required to transform a bacterial ancestor into an orangutan would be very large, the probability of nature successfully completing such a transformation would have to be fantastically low: something could easily go wrong along the way. But while the emergence of an orangutan would doubtless appear vanishingly improbable to a hypothetical observer from Alpha Centauri visiting Earth four billion years ago, it might not seem at all improbable, if the Alpha Centaurian also knew exactly what kinds of environmental changes would befall the Earth over the next four billion years. The probability of evolution traversing the path that leads to orangutans might then appear quite high, notwithstanding the billions of steps involved, given a suitably complete background knowledge of the transformations that the Earth itself would undergo during that period. In reality, however, such a computation will never be technically feasible: first, because we’d probably need a computer bigger than the cosmos to perform the calculation; and second, because we’ll never have the detailed knowledge of Earth’s geological history that would be required to do such a calculation. So my concluding question to you is: given that the probability of nature generating an orangutan from a bacterial ancestor over a four-billion-year time period is radically uncomputable, why should we trust any intuitive estimate of the probability which is based on nothing more than someone eyeballing a present-day bacterium and a present-day orangutan?

Over to you, Dr. Axe. Cheers.

## 311 thoughts on “Aphrodite’s head: Eight questions for Douglas Axe”

1. Heh. Bill the problem is you haven’t shown that 10^50 “trials” (whatever you even mean by that) is actually required.

You are right. I have estimated that 10^150 trials are required: 10^190 divided by 10^40 for contingencies. That is 100 orders of magnitude greater than the available evolutionary trials. This is an estimate. If you think the contingent number is light, make your case that it is at least 10^140, or let’s argue about something else.

2. It’s fantastic really. He keeps insisting evolution must be wrong and can’t happen and keeps giving fallacious arguments, so we point out his mistakes and explain how they don’t accomplish what he says they do, so he turns around and demands we prove evolution. So we link him articles that show that the things he insists can’t happen actually happened, and… not a peep about it. But we’re supposed to be the ones having a “mental block”.

From the paper I linked earlier Foldability of a natural de novo evolved protein:

As a step toward structural characterization of young de novo proteins, we present a case study of the yeast protein Bsc4. A serious issue with case studies of individual newborn genes is the difficulty in proving, in the absence of evolutionary conservation, that they are both protein-coding and functional, in addition to proving that they arose from non-coding sequences (McLysaght and Hurst, 2016). The yeast gene BSC4 is an exceptionally well-supported case of an entire functional protein-coding gene that recently evolved de novo from an ancestral noncoding sequence (Cai et al., 2008). The name BSC4 (‘bypass of stop codon’) derives from belonging to a set of Saccharomyces cerevisiae genes with 9–25% stop codon bypass efficiency (Namy et al., 2003). BSC4 is conserved in all strains of S. cerevisiae, but no homologous open reading frame is present in other fungal species, and the hypothetical Bsc4 protein sequence is not similar to any other known protein sequence. A thorough analysis of synteny and phylogeny among numerous fungal species demonstrated that BSC4 is homologous to, and evolved recently from, a region of noncoding DNA in the intergenic region between LYP1 and ALP1 (Cai et al., 2008). BSC4 is nonessential but has two synthetic lethal partners (RPN4 and DUN1) (Pan et al., 2006). Its sequence is >90% conserved across known S. cerevisiae strains (Figure 1) and shows a low dN/dS ratio indicating purifying selection. RT-PCR and mass spectrometry data demonstrate expression of BSC4 at the RNA and protein level, respectively, under normal culture conditions (Cai et al., 2008). Heightened expression of BSC4 is observed in stationary phase (Aragon et al., 2008; Gasch et al., 2000), and both synthetic lethal partners function in DNA damage repair pathways, suggesting that BSC4 plays a role in DNA damage repair during stationary phase (Cai et al., 2008).
Bsc4 is a functional, whole de novo protein-coding gene and, given its presence in only a single yeast species, a notably young one that can provide a window into de novo gene origin.

We predict that the Bsc4 protein has at least some folded structure despite the de novo origin and youth of the BSC4 gene. The Bsc4 protein from S. cerevisiae reference strain S288C has 131 amino-acid residues, easily long enough to form a domain. Its sequence is rich in positively charged residues, which disfavors folding, but also rich in hydrophobic residues, which favors folding (Uversky et al., 2000). Based on a weighting of these two factors, the program FoldIndex predicts that Bsc4 will fold (Prilusky et al., 2005). IUPRED (Dosztanyi et al., 2005) and JRONN (Troshin et al., 2011; Yang et al., 2005), also predict relatively low disorder except near the termini (Figure 1).

It’s ironic how this contradicts literally every excuse Bill Cole has made for why this can’t happen. It’s well conserved, could fold to some extent already to begin with, and apparently has some function in DNA repair.

3. colewd: You are right. I have estimated that 10^150 trials are required. 10^190 divided by 10^40 for contingencies. 100 orders of magnitude greater then the available evolutionary trials. This is an estimate. If you think the contingent number is light make your case it is at least 10^140 or lets argue about something else.

What I’m asking is how you got that idea in the first place. Why would that be needed?

If some primordial sequence starts at the beginning of an upward slope in sequence space, why are that many “trials” required? That makes no sense.

4. colewd:
This I agree with, but at the end of the day you have a combinatorial problem that can be reduced by substitutability to create the secondary fold. 2500 amino acids is a large mountain to climb.

How could it be a problem after I showed you, a million times, that your math was wrong, and that it’s very easy to predict that a 2500 aa-long protein will have a combination of alpha-helices, beta-strands, turns, loops? This is as easy as predicting that a random toss of coins will result in a combination of heads and tails.
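The coin-toss analogy in the comment above can be sketched directly: any one specific 2500-toss sequence has probability 2^-2500, yet the prediction “a combination of heads and tails” is virtually certain.

```python
import random

random.seed(0)  # fixed seed, purely for a reproducible illustration
n = 2500
tosses = [random.choice("HT") for _ in range(n)]
heads = tosses.count("H")
tails = n - heads

# The exact sequence observed had probability 2**-2500, but a mixture of
# the two outcomes was all but guaranteed in advance.
print(heads, tails)
```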

colewd:
You have not yet made the case that you can reduce it from the alpha helix estimate based on Jock’s numbers, as the other folds may be less probable than the alpha helix.

Of course I have. Many times over. You just refuse to understand something too simple. I think there are two reasons for your failure to understand:

1. You don’t understand that sequences of amino acids can have different tendencies towards different secondary structures, and that their probabilities should add to 1. So you don’t understand that I don’t need to limit my calculations to the protein being one long alpha-helix: I know that there are other secondary structures, and that if it’s not one, then it will be another. The natural conclusion from this is that, even before considering evolutionary processes, a 2500 aa-long protein will have a combination of alpha-helices, beta-strands, turns, and loops.

2. You fail to understand because, instead of your all-alpha-helix claim, you hold something different in your mind. Namely, you think that the protein could not have any sequence but one, and that evolutionary processes are nothing but the stitching together of 2500 randomly arranged amino acids, and thus could not have produced that one-and-only functional sequence. This despite you actually knowing that there are homologs, with different sequences, in many other organisms, and despite numerous attempts at explaining to you that evolution doesn’t consist of stitching together random strands of amino acids and then testing for an all-or-nothing function.

That you need to hold to such deep misconceptions to defend your claims shows the poverty of your position.

On that note, enjoy your holidays.

5. Shocked to discover it’s over 5 years*** since I discussed all this at length, including references to papers where both alpha helixes and functional proteins were generated by simple patterning algorithms – those things that can’t happen due to combinatorial explosion actually happen. Golly.

***[Holds head in hands] what have I done with my life? Lost in the weeds, I s’pose. 😁

6. Shocked to discover it’s over 5 years*** since I discussed all this at length, including references to papers where both alpha helixes and functional proteins were generated by simple patterning algorithms – those things that can’t happen due to combinatorial explosion actually happen. Golly.

So you guys have failed to defend your position, and make the claim the mechanism is true because the structure is there. The combinatorial explosion problem is telling you your theory is wrong. We can’t even come up with a clean explanation of how blind and unguided processes created the secondary folds of one protein. All you guys have are the mathematical innovations of Jock and Entropy trying to redefine how we calculate a sequence.
The RNA of this protein has introns that require the structure it is a small piece of (the spliceosome) to remove before it can even start the secondary fold, ouch :-). Now that I think of this, the 10^150 is looking way too conservative. Let’s add another 3500 orders of magnitude to solve this chicken-and-egg problem 🙂 If you think my estimate of 3500 orders of magnitude to solve the chicken-and-egg problem is too high, let me know how you think blind and unguided processes would be able to simultaneously create PRPF8 and the spliceosome, creating the ability to splice out the introns we are observing and create the secondary fold. Maybe Axe’s thinking is slightly ahead of you guys.

7. Regarding your demand for a “positive argument”, did you read the paper Rumraket referenced?
Foldability of a natural de novo evolved protein
Did you understand it?

Yes, and I think it helps your case for secondary structures, yet it also helps confirm Axe’s thesis. You have to be careful not to win the battle and lose the war. Be mindful that we are debating a very long sequence, so you may be comparing apples and oranges.

Protein folding is difficult and poses a potential roadblock to evolving protein structures from scratch. In classic textbook views, natural proteins such as myoglobin fold cooperatively into specific, stable, soluble, globular structures; these elegant, intricate native states then serve as scaffolds for biological functions such as oxygen binding. Such native structures are, however, rare among amino-acid sequences. Soluble proteins with significant secondary structure content have been recovered from unevolved random amino-acid sequence libraries, BUT THEY DO NOT HAVE SPECIFIC, WELL-DEFINED TERTIARY STRUCTURES

8. THAT is the exponentiation that is ludicrous. As you have demonstrated with your calculation that there are only 10^80 80mers that fold, when we know that there are ~ 10^93 of them that bind ATP. It’s frikking hilarious!

I may not have been wrong, in the discussion you are talking about, concerning the probability of 10^80 vs 10^93, but for argument’s sake I will admit I was wrong.

That does not make your argument about P(N)^Y right. You need a proof that this math is incorrect, or at least empirical examples of why it fails. In reality there is no difference between this and a normal sequence, as for total sequence space the values P and P(N) are virtually the same. There are 20 amino acids, and the probability of one coming from a random draw is 1/20.

Let’s see if you can prove this wrong without invoking logical fallacies.

9. I was wondering if colewd would ever cotton on to the difference between secondary and tertiary structure. I think he may be about to educate himself.
Just to be really clear about the impending goalpost shift, colewd has been arguing that random peptides cannot have function because they lack stable secondary structures (which is wrong on both counts — many (perhaps most) have sufficiently stable secondary structures, and S2S are not required for minimal selectable function.)
colewd is about to claim that extant proteins have stable tertiary structures and so, er, therefore Jesus.
He’s going to be claiming that stable tertiary structures are required for minimal selectable function. I predict he will try to obfuscate that particular aspect of his ‘argument’.
colewd learns a new word, enabling him to move from the merely wrong to the deeply wrong.

“we are debating a very long sequence”
LOL No, we are not. We know you would like to, but it just is not relevant, for the {facts, logic, math} reasons outlined previously. Have you heard of unequal cross-over?

10. My “P(N)^Y is wrong” argument rests solely on the fact that P(A&B) = P(A) x P(B|A)
You are assuming that P(B|A) = P(B), which is obviously wrong. Consider unequal cross-over.
QED

11. I think he may be about to educate himself.

Jock, are you claiming to be an authority here? If so I really want to pick your brain 🙂

12. My “P(N)^Y is wrong” argument rests solely on the fact that P(A&B) = P(A) x P(B|A)
You are assuming that P(B|A) = P(B), which is obviously wrong. Consider unequal cross-over.
QED

I understand this, as you have educated me on this point many times. That’s why I gave you a contingency for some interdependence.

13. DNA_Jock: colewd is about to claim that extant proteins have stable tertiary structures and so, er , therefore Jesus.

Your behavior here, in particular the reference to Jesus, comes across as trolling.

14. Allan Miller: It has to get going of course. But that is not a problem for evolution. It may be deemed impossible to get the system started, but once it’s started, probability calculations that ignore mechanism are worthless, and certainly no barrier to evolution.

Testify brother!

How do you know the probabilities are not a barrier, have you done the actual math?

15. What? You haven’t addressed the argument. Haven’t gone near it. You continue to declare like an automaton that the combinatorial issue is a problem. We explain why it isn’t, and you just say it again. You pretend that transposition and duplication aren’t even a thing; that your random-independent method – that does not even happen in the real world – is the only way to make protein, and all the other things don’t happen. You are quite astonishingly blinkered in your approach. So the extent to which we have ‘failed’ is equivalent to my failure to explain relativity to a rock.

16. Rumraket: The spliceosome is made mostly of duplicated proteins, with a core of RNA homologous to self-splicing group II introns, which are known to encode PRPF8-homologous proteins.

Wow! What are the odds of that!?

Rumraket: But in BOTH cases you would end up with some insanely low probability, yet you have no problem accepting that plate tectonics and erosion created the Mt Everest.

But Mt. Everest looks nothing at all like Aphrodite’s head.

17. Mung: Testify brother!

How do you know the probabilities are not a barrier, have you done the actual math?

Probability calculations that ignore mechanism are worthless, and therefore not a barrier. If someone does a calculation that’s bollocks, do I need to do any others to hold this position? It’s not my notion that probabilities need calculation; it’s you guys’. I’m just explaining why the simplistic independent approach is full of shit, is all. Do you think it OK to ignore duplication and transposition?

18. If you can make an alpha helix out of just two kinds of amino acid, why does 20^n matter?
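That point can be made quantitative with a sketch, under the purely illustrative assumption that 10 of the 20 amino acids are acceptable at each position of a binary (e.g. hydrophobic/polar) helix pattern:

```python
n = 80  # length of the stretch, chosen arbitrarily for illustration

# Sequences matching one fixed binary pattern: 10 acceptable residues
# per position, under the illustrative two-class assumption above.
matching = 10 ** n
total = 20 ** n

fraction = matching / total   # (1/2)**n, not (1/20)**n
print(fraction == 0.5 ** n)   # True
```

If a helix only requires a two-class pattern rather than one exact sequence, the target is exponentially larger than the 1 in 20^n figure suggests.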

19. You continue to think that the 20 amino acids are 20 completely different things. They aren’t.

I understand this, as you have educated me on this point many times. That’s why I gave you a contingency for some interdependence.

Haha. “You don’t like this number I pulled out my ass?” (Rummage, rummage) “OK, how about this?”. 😀

21. I am enjoying the way that, when colewd reckons his numbers are good for 10^190, he is willing to offer up a buffer of 10^40. But if I point out that he has been off by 10^540, he starts talking about 10^3500. I’m sure he’ll offer a generous ‘contingency’ of 10^100 on those estimates.
We actually have a word for this at work:
“numeroproctology”

22. Mung: Wow! What are the odds of that!?

1 in 1. Group II self-splicing introns really do encode PRPF8-like homologues. What are the odds that they do that? Well they do, so it’s 1 in 1.

Perhaps you meant to ask a different question, but I don’t see what else that might be.

23. Mung: How do you know the probabilities are not a barrier, have you done the actual math?

Observation. If the probabilities were a barrier, there would be no reason to have the kind of data we do. If de novo protein evolution was so improbable as to be practically impossible, why is there so much evidence that it happens?

Even worse for Bill Cole, he’s saying that when the sequences are conserved, this shows a barrier to change. Then we show him significantly more divergent, yet still clearly similar sequences, and then he turns around and says the magnitude of the difference is too improbable to have evolved. So the evidence is retro-fitted to his foregone conclusion every time, rather than letting the evidence guide his thinking. I concede that I can not convince a person of that mindset.

24. 1 in 1. Group II self-splicing introns really do encode PRPF8-like homologues. What are the odds that they do that? Well they do, so it’s 1 in 1.

Thanks Rum, I have been waiting for this 🙂 So your claim is that the spliceosome is not necessary to process PRPF8? If you think it evolved, why would it evolve if self-splicing introns do the trick?

25. I am enjoying the way that, when colewd reckons his numbers are good for 10^190, he is willing to offer up a buffer of 10^40. But if I point out that he has been off by 10^540, he starts talking about 10^3500. I’m sure he’ll offer a generous ‘contingency’ of 10^100 on those estimates.
We actually have a word for this at work:
“numeroproctology”

Spliceosome and PRPF8 is a tough chicken and egg problem unless RUM can save the day by showing self splicing introns can process PRPF8. If you need the spliceosome to process PRPF8 then all rational thought points to a deterministic mechanism.

26. Haha. “You don’t like this number I pulled out my ass?” (Rummage, rummage) “OK, how about this?”.

Pretty much but not completely 🙂 I am giving you 100% interdependence on 25% of the sequences.

27. Pretty much but not completely I am giving you 100% interdependence on 25% of the sequences.

Why?

28. Rumraket: If the probabilities were a barrier, there would be no reason to have the kind of data we do. If de novo protein evolution was so improbable as to be practically impossible, why is there so much evidence that it happens?

Post hoc, ergo propter hoc. Texas sharpshooter. etc.

Did you take into account all the proteins that did not evolve and their probabilities? I’m guessing that no, you didn’t.

ETA:

Rumraket: I concede that I can not convince a person of that mindset.

What, specifically, are you trying to convince Bill of? That if Everest looked like Aphrodite’s head we should chalk it up to “natural” mechanisms?

29. Mung,

To allow for anomalies like a long duplication event that might extend a secondary working structure like an alpha helix. Versus assuming that all mutations are independent point mutations. I was responding to the points Allan was making that I felt were legit.

30. colewd: If you think it evolved why would it evolve if self splicing introns do the trick?

Have you read any literature on that subject? I’m asking because rather than having me play the middle man as usual and bring you reference after reference and explain what it says in them, we can just skip that step and you can proceed to google scholar to find out for yourself. How does that sound?

Also, that’s just a textbook example of an argument from ignorance fallacy. Suppose nobody knew why or whether X evolved; does it follow that it didn’t? Even more pertinently, does it then follow that X must have been designed? Neither the conclusion “X did it” nor “Y didn’t do it” follows from our not knowing why or whether Y would do it.

31. Allan Miller: Do you think it OK to ignore duplication and transposition?

It depends, doesn’t it. Oh, and are they independent events?

Probability calculations that ignore mechanism are worthless, and therefore not a barrier.

That’s a non-sequitur. You frankly don’t know whether there are probabilistic barriers or not, because you have never done the calculations. You’re simply choosing to believe what you want to believe.

If someone does a calculation that’s bollocks, do I need to do any others to hold this position?

Your calculation is bollocks therefore probabilities are unimportant? Yeah, you need a reason to hold that position. Or just agree it’s irrational.

It’s not my notion that probabilities need calculation; it’s you guys’.

Well, I think it is your notion that evolutionary theory is at heart probabilistic, but that you can’t be bothered to show the actual probabilities involved. For good reason, I might add.

I’m just explaining why the simplistic independent approach is full of shit, is all.
It’s better than what your side has come up with. Your position is that there are no independent events at all in evolution?

What if we don’t know whether two events are independent? We just throw up our hands in disgust?

32. colewd: Spliceosome and PRPF8 is a tough chicken and egg problem unless RUM can save the day by showing self splicing introns can process PRPF8. If you need the spliceosome to process PRPF8 then all rational thought points to a deterministic mechanism.

Unnecessarily playing the middle man again, I did some work for you. Here you go:

https://biologydirect.biomedcentral.com/articles/10.1186/s13062-017-0201-6

The similarities between spliceosomal introns and group II self-splicing introns have been recognised for a long time. The latter are present in prokaryotes and in eukaryotic organelles. In mitochondria and plastids these introns are bona fide introns that lost their mobility potential, whereas in prokaryotes they are more properly regarded as retroelements [38, 39]. Group II introns (reviewed in e.g. [39, 40]) typically have a length of around 2–3 kb and consist of six RNA domains. The large domain I functions as a scaffold and recognises and positions the exons [41, 42], domains II and III enhance splicing catalysis [43] and domain VI contains the adenosine residue that functions as branch point [44]. Domain V is the most conserved domain and contains the catalytic triad, which binds the two catalytic divalent metal ions [43, 45, 46]. Domain IV is the largest, as it encodes a protein, aptly named intron-encoded protein (IEP). The maturase function of this versatile protein is required for the proper folding of group II introns, promoting RNA recognition and splicing [47, 48]. Moreover, its reverse transcriptase activity enables reverse splicing, which results in the proliferation of the introns in the host genome [47, 49].

There is an overwhelming amount of evidence supporting the homology between spliceosomal introns and group II self-splicing introns. The splice site recognition, branching mechanism, stereochemical course of the splicing reaction and the presence of similar RNA domain structures and a homologue of the IEP in the spliceosome (see below) demonstrate the similarities between the two intron types [39, 40, 50, 51]. Moreover, there is a known example of a group II intron that was transferred from mitochondria to the nucleus in a plant family and subsequently evolved into a spliceosomal intron [52], which underlines the evolutionary relationship between group II and spliceosomal introns.

(…)

As mentioned above a group II intron usually encodes an IEP. A homologous protein of IEP functions in the spliceosome, namely pre-mRNA processing protein 8 (Prp8), which is present in the U5 snRNP. Prp8 is present in the spliceosomal catalytic core and likely functions as an assembly platform [50, 69, 70]. It is the largest and most conserved spliceosomal protein and interacts with the U2 and U6 snRNPs and especially the helicase Brr2 and GTPase Snu114, which are present in the U5 snRNP as well [1, 2, 71, 72, 73, 74]. The first indication for the homology between IEP and Prp8 was the presence of a reverse transcriptase (RT)-like domain in Prp8, which is similar to the RT domain in IEP [75, 76, 77]. IEP did not only give rise to Prp8, but also to telomerase and the RT of non-long terminal repeat retrotransposons [76]. At some point Prp8 must have lost its RT activity [75, 78], thereby losing the ability for retromobility while maintaining its maturase function, which has occurred frequently for IEPs in organelles as well [39].

Group II introns can be classified based on RNA structures or phylogenetic groupings of IEP [39, 79, 80, 81]. The exon recognition in spliceosomal introns is more similar to the A subtype of group II introns [39]. It is not known how Prp8 and its paralogues relate to the different IEP groups, which could be informative for the source of the group II introns that evolved into the spliceosomal introns.

Turns out group II self splicing introns do make use of a PRPF8-like protein.

33. Mung,

Rumraket: I concede that I can not convince a person of that mindset.

Repeat customers for swamp land are very hard to find. 🙂

34. And Rumraket, just look at all the mountains. So the probability of Everest is actually quite high. Your use of it as an example of something of low probability is the Texas sharpshooter fallacy.

🙂

35. Turns out group II self splicing introns do make use of a PRPF8-like protein.

Thanks Rum, this makes sense. So why did we need a spliceosome? Higher speed and accuracy?

36. colewd: If you need the spliceosome to process PRPF8 then all rational thought points to a deterministic mechanism.

If the mechanism has to be deterministic, it can’t have been a mind with free will. Right?

Really? You can’t figure that out yourself? You don’t even need to calculate anything. It’s plenty obvious that if you have evidence that something happens relatively often, then the probability that it’s a highly improbable event is fairly low.

I replied to a post of yours a while back about the improbability of miracles, and how seeing lots of them happening would mean that miracles are not improbable at all. Did you miss that? Pretty straightforward stuff really.
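The claim above — that repeated observation of an event makes the "fantastically improbable" hypothesis untenable — is at bottom a Bayesian update. Here is a minimal sketch with made-up numbers (the hypotheses, probabilities, and counts are mine, not any commenter's): compare a rare hypothesis against a mundane one after seeing the event k times in n opportunities, starting from even prior odds.

```python
from math import comb

def posterior_rare(p_rare, p_common, n, k, prior_rare=0.5):
    """Posterior probability of the 'rare' hypothesis after observing
    k occurrences in n trials, via Bayes' rule with binomial likelihoods."""
    like_rare = comb(n, k) * p_rare**k * (1 - p_rare)**(n - k)
    like_common = comb(n, k) * p_common**k * (1 - p_common)**(n - k)
    num = like_rare * prior_rare
    return num / (num + like_common * (1 - prior_rare))

# Ten occurrences in twenty opportunities all but rule out p = 1e-6:
print(posterior_rare(1e-6, 0.1, 20, 10))  # vanishingly small
```

The point of the sketch: no elaborate calculation is needed, because the likelihood ratio between "per-trial probability 10^-6" and "per-trial probability 0.1" after ten observed occurrences is astronomically lopsided.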

38. DNA_Jock: We actually have a word for this at work:
“numeroproctology”

Well, you know how sometimes the universe is claimed to have been wished into existence ex nihilo? That assertion is itself created ex recto.

Here’s a little exercise. Let’s say we have evidence that a certain event has occurred 10 times during the whole existence of the universe. What’s the probability that we would see at least the same 10 occurrences of the event if we assume that its underlying probability is such that we should only expect to see it once every 4.5 billion years?
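For what it’s worth, the exercise can be worked out numerically if we model the occurrences as a Poisson process — an assumption the comment doesn’t state. Taking the document’s 14-billion-year age of the universe at one expected occurrence per 4.5 billion years gives an expected count of about 3.1, and the question becomes the upper tail P(X ≥ 10):

```python
import math

def poisson_tail(lam, k):
    """P(X >= k) for X ~ Poisson(lam), computed as 1 minus the lower tail."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

# ~14 billion years of universe, one expected occurrence per 4.5 billion years
lam = 14.0 / 4.5              # expected number of occurrences: about 3.1
print(poisson_tail(lam, 10))  # chance of seeing 10 or more
```

The tail probability is small but nowhere near "fantastically improbable" territory — which is presumably the point of the exercise.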

40. dazz: You don’t even need to calculate anything.

If you want me to believe your theory you need to show me the math. Until you do I’ll continue to believe that your own acceptance of it is based on wishful thinking. 🙂

It’s plenty obvious that if you have evidence that something happens relatively often, then the probability that it’s a highly improbable event is fairly low.

Do you know what a singularity is? Do you understand that evolutionary theory is chock full of dependence on singularities? And yet you’re willing to accept them as non-miraculous. It’s also chock full of events that have never been observed. Were you there?

So if we’re honest, the number of times something is observed to happen is rather irrelevant. Wouldn’t you say?

It’s plenty obvious that if you have evidence that something happens relatively often, then the probability that it’s a highly improbable event is fairly low.

Can you give me an example, from evolution? What is it that happens relatively often and is thus probable? Birth and death?

It’s plenty obvious that if you have evidence that something happens relatively often, then the probability that it’s a highly improbable event is fairly low.

You realize, don’t you, that we are not talking probable events, like mountains forming, we’re talking improbable events.

41. Mung: You realize, don’t you, that we are not talking probable events, like mountains forming, we’re talking improbable events.

No, you assume they must be improbable, contrary to the evidence, because IDists seem to be hardwired to misunderstand probabilities despite the fact that their whole shtick relies on probabilistic arguments. LOL

42. dazz: Here’s a little exercise. Let’s say we have evidence that a certain event has occurred 10 times during the whole existence of the universe.

ok. For example, finding ten righteous men in Sodom. Oops. Didn’t happen.

What’s the probability that we would see at least the same 10 occurrences of the event if we assume that its underlying probability is such that we should only expect to see it once every 4.5 billion years?

Huh?

You’re appealing to a frequentist interpretation of probability, in which case you don’t get to assume that we expect to see it x times in 4.5 billion years. I’d say your sample size is too small.

43. If the mechanism has to be deterministic, it can’t have been a mind with free will. Right?

Minds can be both.

It looks like the spliceosome is required to splice in the lower, regulated Mg2+ environment that exists in the eukaryotic cell nucleus. Performance may also be an issue as intron complexity grew. Bigger chicken-and-egg issue than I thought.

44. dazz: No, you assume they must be improbable, contrary to the evidence, because IDists seem to be hardwired to misunderstand probabilities despite the fact that their whole shtick relies on probabilistic arguments.

So you’ve actually seen purely natural processes bring about a sculpture that looks like the head of Aphrodite? Go on!

You guys crack me up. If we see it a lot, it means it’s probable. If we never see it, it is still probable. How does that logic work, exactly?
