Glancing at Uncommon Descent (I still do, as Denyse O’Leary often reports on interesting science articles, as here*, and the odd comment thread can still provide entertainment), I see an OP authored by gpuccio (an Italian medical doctor) entitled The Ubiquitin System: Functional Complexity and Semiosis joined together, telling the story of the ubiquitin protein and its central role in eukaryote biochemistry in some considerable detail. The subtext is that ubiquitin’s role is so widespread, diverse, and conserved across all (so far known) eukaryotes that it defies an evolutionary explanation. This appears to be yet another god-of-the-gaps argument. Who can explain ubiquitin? Take that, evolutionists! I’m not familiar with the ubiquitin system and thank gpuccio for his article (though I did note some similarities to the Wikipedia entry).
In the discussion that follows, gpuccio and others note the lack of response from ID skeptics. Gpuccio remarks:
OK, our interlocutors, as usual, are nowhere to be seen, but at least I have some true friends!
and later:
And contributions from the other side? OK, let’s me count them… Zero?
Well, I can think of a few reasons why the comment thread lacks representatives from “the other side” (presumably those who are in general agreement with mainstream evolutionary biology).
- In a sense, there’s little in gpuccio’s opening post to argue over. It’s a description of a biochemical system first elucidated in the late seventies and early eighties. The pioneering work was done by Aaron Ciechanover, Avram Hershko, and Irwin Rose (who later shared the Nobel Prize in Chemistry, credited with “the discovery of ubiquitin-mediated protein degradation”), all mainstream scientists.
- Gpuccio hints at the complexity of the system and its “semiotic” aspects. It seems like another god-of-the-gaps argument. Wow, look at the complexity! How could this possibly have evolved? Therefore ID! What might get the attention of science is a theory or hypothesis offering an alternative, testable explanation for the ubiquitin system. That is not to be found in gpuccio’s OP or the subsequent comments.
- Uncommon Descent has an unenviable history in its treatment of ID skeptics and their comments. Those still able to comment at UD risk having the hard work of preparing a substantive comment wasted: comments may never appear or may later be deleted, and accounts are arbitrarily closed.
I’m sure others can add to the list. So I’d like to suggest to gpuccio that he should bring his ideas here if he would like them challenged. If he likes, he can repost his article as an OP here. I guarantee that he (and any other UD regulars who’d like to join in) will be able to participate here without fear of material being deleted or comment privileges being arbitrarily suspended.
Come on, gpuccio. What have you got to lose?
“Full and complete exposure to neutral variation” assumes that when one base changes, it does not open up new possible pathways.
This is not a valid assumption.
What is constrained by selection, be it neutral, positive or purifying, changes each time a change occurs. The best analogy I can think of is a word ladder, in which the sequence of changes is as relevant as the number of changes.
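To make the word-ladder analogy concrete, here is a toy sketch (my own illustration, with a tiny hand-picked word list standing in for “viable” sequences; nothing here comes from the thread): the same four single-letter changes succeed or fail depending on the order in which they are applied, because every intermediate has to be viable.

```python
# Toy word-ladder illustration (hypothetical example): the order of changes matters,
# because every intermediate "sequence" must itself be viable (here, a real word).

VIABLE = {"COLD", "CORD", "CARD", "WARD", "WARM"}  # stands in for viable sequences

def ladder_ok(start, changes):
    """Apply (position, letter) changes in order; every intermediate must be viable."""
    seq = list(start)
    for pos, letter in changes:
        seq[pos] = letter
        word = "".join(seq)
        if word not in VIABLE:
            return False, word
    return True, "".join(seq)

# The same four substitutions, applied in two different orders:
good_order = [(2, "R"), (1, "A"), (0, "W"), (3, "M")]  # COLD -> CORD -> CARD -> WARD -> WARM
bad_order  = [(0, "W"), (1, "A"), (2, "R"), (3, "M")]  # COLD -> WOLD: not viable, path fails

print(ladder_ok("COLD", good_order))  # (True, 'WARM')
print(ladder_ok("COLD", bad_order))   # (False, 'WOLD')
```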
For how big a sequence?
Origenes,
Thanks for posting here.
Yep. I read it.
I don’t think the reasons are subtle; they seem rather necessary. He needs to justify why he thinks the conservation is functionally meaningful.
I know that’s the key remark. I have been trying to get Bill Cole to understand it and its implications for quite a while. So let’s check how gpuccio reaches that conclusion:
In order to guarantee full exposure to neutral variation, there must be an enormous number of mutations. Remember that mutations are mostly random. Thus, in order to touch every site, we need a number of mutations well above the length of the sequences. This is where saturation of synonymous sites comes into play: saturated synonymous sites would show that quite a number of mutations have occurred. So, if synonymous substitutions have reached saturation, that means that an enormous chunk of sequence space was “explored.”
If we were to accept gpuccio’s assumptions, the conclusion would be that evolution performs more-than-enough exploration of sequence space to explain “jumps” in “functional” information (or whatever wording gpuccio might like today).
Think of it as happening within a genome. Gene sequences don’t mutate “in isolation.” So, to touch every site in a gene, there must be a number of mutations well above the length of the genome.
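As a rough sanity check on the arithmetic (a standard back-of-the-envelope calculation, not taken from gpuccio’s OP or anyone’s comment): if N mutations land uniformly at random on a sequence of length L, the expected fraction of sites never touched is

(1 - 1/L)^N ≈ e^(-N/L)

so with N = L (one mutation per site on average) about e^-1 ≈ 37% of sites remain untouched, and even with N = 3L about e^-3 ≈ 5% do. “Touching every site” therefore really does require a number of mutations several times the sequence length.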
Unless Wagner is completely wrong, changing one site opens up the possibility that one or more changes at other sites become neutral or positive. There really isn’t any way to count the number of possible survivable changes.
I agree. That, however, doesn’t contradict my point: in order to touch every position there must be a number of mutations well above the sequence length.
To make sure that I understand you correctly, a question: here we are talking about “sequence space” in the context of neutral changes to a specific protein sequence, right? We are not speaking of “sequence space in its entirety”, which is much, much bigger. Right?
Neutral changes to a specific protein sequence, even over the course of 400 million years, amount to an exploration of only a tiny, minuscule part of the entire sequence space.
Right?
Yes.
I have a hard time seeing how the entirety of even the neutral space could be covered. At some point mutations that go into a fitness valley are going to happen, and those will be selected against. That’s where you get purifying selection. But could there be other neutral or even beneficial mutations on the other side of that valley? Well if crossing the valley is not allowed because it is strongly deleterious, we can’t claim to know. Then we can’t claim to know that all of the neutral areas in sequence space have been covered. We can’t even claim to know that all viable variation that exists on some particular hill has been sampled unless we are talking about a very small peptide.
Rumraket @
I take it, then, that you do not agree with Entropy, according to whom “if synonymous substitutions have reached saturation, that means that an enormous chunk of sequence space was ’explored’,” and who goes on to suggest that it follows “that evolution performs more-than-enough exploration of sequence space to explain ‘jumps’ in ‘functional’ information.”
400 million years can do that, so it seems. According to GPuccio the Ks ratio reaches ‘saturation’ after 400 million years; the Ks ratio is the “number of synonymous substitutions per synonymous site”.
Not a problem for synonymous substitutions, which are neutral (they do not alter the amino acid sequence) and thus invisible to selection.
If that were true, wouldn’t that mean the “probabilistic resources” would be enough to traverse the entire search space and produce tons of “bits of information”?
Right, to a point. Mutations are random; they happen regardless of whether they are neutral or not. That the surviving sequences carry mostly neutral mutations is an assumption. Whether that’s true or not, surviving mutations are not the same as total mutations. This is why gpuccio points to “saturation” of synonymous sites as an indication that enough mutations have happened to cover the whole thing.
Right.
Do you really think I didn’t think about that? Whether it’s a tiny minuscule fraction or a large fraction is a subjective evaluation. The important point is that if every position in a sequence has been “touched” by mutations to the point that every non-functionally-compromised position has mutated, then the sequence-space “exploration” is enough to explain those “jumps” in “functional” information, regardless of what proportion of the whole sequence space was explored. The mutation numbers must still be high enough that compatible changes (aka “neutral”) were found for the less compromised positions, and the synonymous sites saturated.
Also remember that changes in a protein sequence happen in the DNA sequence, which is embedded in a genome. So, if every position has been mutated in a coding gene, then that must have happened to the whole genome too.
Somebody seems to be getting it!
Origenes,
Rum and I are talking about different things. Rum is talking about whether gpuccio’s assumptions are correct. I’m just following the implications of those assumptions.
Entropy,
It is not subjective at all. Until you understand this you won’t understand the major obstacle you are facing.
Entropy & Dazz
You will receive an answer from the master himself.
https://uncommondescent.com/intelligent-design/defending-intelligent-design-theory-why-targets-are-real-targets-propabilities-real-probabilities-and-the-texas-sharp-shooter-fallacy-does-not-apply-at-all/#comment-656678
No.
Given that the “master” has a talent for focusing on semantics and for missing the point, I doubt that we will have an actual answer.
My prediction is that he will just tell us to read his OP on the “resources” “available” for natural selection (or whatever wording he might be liking today). Thus missing the point.
Thanks for the interaction anyway, Origenes. I appreciate your willingness to come and post here.
Now you’re changing your tune. Earlier you were perfectly content if materialists did not have to supply reasons. Even claiming that not doing so was logical and rational.
You still appear to not understand ID. ID is a theory about causes. More specifically, about the necessity for an intelligent cause.
What were you looking for, a blow by blow account of how the flagellum is manufactured?
No, you are simply mistaken. Chance could not do it, because chance is not a cause.
Is that what they taught you in physics, that chance is a physical force, like gravity?
Evolution isn’t a cause.
Evolution is not an explanation in any sense of the word.
This is false. Is it that Glen just doesn’t know any better?
Calling it minuscule is a subjective evaluation. However, that doesn’t matter. Call it whatever you want. What matters is the point I’m making, which you seem to miss yet again.
Either way, I’m not facing any obstacles. It’s not me who’s proposing a philosophically ridiculous “solution” to explaining the origin of life and its features.
My understanding allows me to think that we know of the most important pieces, and that there are things whose more detailed histories we will be able to trace, while others we will not. Neither makes me any more inclined to accept a ridiculous, religiously-motivated “answer” such as ID.
Also remember that I’m facing right now the “arguments” of a person (gpuccio) who’s determined to prove that a magical being in the sky did it, and, in doing so, is willing to be skeptical to the point of irrationality about any scientific evidence contrary to his position, while accepting the most dubious standards when it suits him. That makes my life even easier. On the one hand we have the irrationality of the ID “movement” as exemplified by gpuccio’s double standards. On the other we have scientific understanding. Which one do you think I’d choose?
I don’t care all that much to be brutally honest.
Why not pass him this recent paper posted by Dave Carlson here at TSZ?
Random sequences rapidly evolve into de novo promoters
Abstract
How new functions arise de novo is a fundamental question in evolution. We studied de novo evolution of promoters in Escherichia coli by replacing the lac promoter with various random sequences of the same size (~100 bp) and evolving the cells in the presence of lactose. We find that ~60% of random sequences can evolve expression comparable to the wild-type with only one mutation, and that ~10% of random sequences can serve as active promoters even without evolution. Such a short mutational distance between random sequences and active promoters may improve the evolvability, yet may also lead to accidental promoters inside genes that interfere with normal expression. Indeed, our bioinformatic analyses indicate that E. coli was under selection to reduce accidental promoters inside genes by avoiding promoter-like sequences. We suggest that a low threshold for functionality balanced by selection against undesired targets can increase the evolvability by making new beneficial features more accessible.
http://theskepticalzone.com/wp/sandbox-4/comment-page-8/#comment-219654
His squirming and wriggling while trying to brush that result aside should be a lot more entertaining.
Me neither. In the end, it’s not as if gpuccio was offering any insights, other than into how irrational a person can be just because they want to refuse anything coming from the “enemy.”
To reject anything we say, the guy is willing to pretend not to know what the word “arbitrary” means, after using the word to try and make his own point; or to claim that affinities have nothing to do with complex functions, after quoting that word a million times when trying to make his point about functional complexity. It’s ridiculous.
Oh look! gpuccio thinks we were actually arguing that evolution must traverse the entire search space! Isn’t that cute?
Is he finally getting why the size of the search space is completely irrelevant and how that totally debunks his stupid TIAJY negative argument against “neo-darwinism”?
Of course not, LOL
Entropy,
I don’t think you understand. Not because you are not a wicked smart guy, but because the problem is difficult to comprehend. Gpuccio gets it, perhaps because he has been working on understanding large sequence spaces for a long time, as shown by his ease in communicating with exponents and logarithms.
This problem of large sequence spaces is what got me interested in this theory in the first place. I stumbled into it in a conversation that included my youngest son. Prior to that I accepted evolution as a solid scientific theory and did not know anything about intelligent design other than hearing about it in passing conversations.
The problem of the origin of genetic information is a real one, and it is a real academic problem in my opinion. Whether it is scientific or not is simply a matter of where the definitions are drawn.
I understand the political implications of intelligent design being accepted but if it is based on solid academic reasoning it should be explored.
You are starting from the philosophy and I am starting from the evidence. Maybe this is what we are struggling with as we try to reach common ground.
Like word games, I take it? No, chance is not a cause. But some actual causative processes have inherently unpredictable characteristics or elements. If this unpredictability is essential to understanding the process, it makes sense to feature it. Nobody is fooled by your energetic efforts to feign confusion.
Similarly, evolution isn’t a cause, but rather more of a summary term for the pattern of outcomes of a set of related processes which generally have an observed statistical bias. Evolution doesn’t “cause” unequal rates of reproduction, but it does result from them.
The problem is that it’s the same unscientific bullshit that it’s been for eons.
Keep whining, makes no difference
Okay, given what he’s trying to explain, that’s a truly horrendous error.
Here’s a tip, folks.
At one mutation per site, 36% of the sites will be unchanged.
At three mutations per site, 5% of the sites will be unchanged.
Ironically, this was another thing I tried to explain to him in 2014.
It’s e^-n, where n is the mean number of mutations per site.
And he’s lecturing us on combinatorics.
ROFL
dazz,
No, it is not. Maybe Mung can educate you here.
I don’t understand this.
Just for kicks I coded a simulation to find out how many substitutions, on average, are necessary to get down to 25% similarity for a DNA sequence and 5% for protein sequences (20 AAs).
I’m getting about 4.7 times the sequence size for DNA and 5.5 times for proteins.
does that seem about right?
ETA: definitely not right
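For what it’s worth, here is a minimal Python sketch of the kind of simulation dazz describes (my own reconstruction; the details, such as each substitution hitting a uniformly random position and always changing the character to a different one, are assumptions, not dazz’s actual code). Note that expected identity only approaches the 25% (DNA) or 5% (protein) baseline asymptotically, so the first crossing of the threshold is driven by random fluctuation and the averages depend on sequence length.

```python
import random

def subs_until_threshold(alphabet, length, threshold, rng):
    """Count random substitutions until identity to the original drops to <= threshold."""
    original = [rng.choice(alphabet) for _ in range(length)]
    current = original[:]
    matches = length
    subs = 0
    while matches / length > threshold:
        pos = rng.randrange(length)
        old = current[pos]
        new = rng.choice([c for c in alphabet if c != old])  # always change the character
        if old == original[pos]:
            matches -= 1   # a matching site becomes a mismatch
        elif new == original[pos]:
            matches += 1   # a mismatched site happens to revert to the original
        current[pos] = new
        subs += 1
    return subs

rng = random.Random(1)
dna = "ACGT"
aas = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino acids

L, reps = 300, 20
dna_avg = sum(subs_until_threshold(dna, L, 0.25, rng) for _ in range(reps)) / reps
aa_avg = sum(subs_until_threshold(aas, L, 0.05, rng) for _ in range(reps)) / reps
print(f"DNA: ~{dna_avg / L:.1f} substitutions per site to reach <=25% identity")
print(f"Protein: ~{aa_avg / L:.1f} substitutions per site to reach <=5% identity")
```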
DNA_Jock,
Solid straw-man. 🙂
I would not be surprised if that’s what he thought. If so, I’d be very surprised if Origenes explained to him that such is not the issue.
Since I cannot see the bold in your quoting of his crappy example, I won’t comment on it, other than suspecting that he doesn’t know what random and saturation mean.
(I’m not in the mood to visit his site because there’s so much crap posted there that it gives me a headache.)
colewd,
Bill,
This is high school math, and CP at that. It does not matter whether it is DNA, or proteins, or letters. If you take a string of length L and randomly mutate it, then after L mutations (i.e. an average of one per site) 36% of the string has never been touched. After 3L mutations (i.e. an average of three per site), 5% of the string has never been touched.
It is Poisson(0) = e^-mu, the probability that a given site receives zero mutations when the mean number of mutations per site is mu.
This is getting awkward.
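For anyone who wants to check the Poisson(0) claim numerically rather than take it on faith, here is a minimal sketch (my own, assuming mutations land uniformly and independently across sites): it measures the fraction of sites that receive zero hits after L and 3L random mutations, which should come out close to e^-1 ≈ 0.368 and e^-3 ≈ 0.050.

```python
import math
import random

def untouched_fraction(length, n_mutations, rng):
    """Scatter n_mutations uniformly at random over a sequence; return the fraction of sites never hit."""
    hit = [False] * length
    for _ in range(n_mutations):
        hit[rng.randrange(length)] = True
    return hit.count(False) / length

rng = random.Random(42)
L = 10_000
for per_site in (1, 3):
    observed = untouched_fraction(L, per_site * L, rng)
    expected = math.exp(-per_site)  # Poisson(0) with a mean of per_site mutations per site
    print(f"{per_site} mutation(s) per site: observed {observed:.3f}, expected e^-{per_site} = {expected:.3f}")
```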
DNA_Jock,
Sure it is. Why don’t you explain why this is relevant to the point gpuccio was making?
Why the unnecessary diversion? Your correction of my point is duly noted, but do you think your point really changes the overall argument?
Ouch, I’m embarrassed at myself for not seeing that.
I understand all right. I’ve been working on sequence analyses my whole academic life.
What is that problem exactly? Make sure you explain why it is a problem too.
So you hadn’t heard of apologetics? I’d be surprised if so.
Of course the origin of genetic information is a real academic endeavour. What I don’t get is why you’d think that ID is the same thing as research into the origins of genetic information. It’s not. ID is bullshitting around about the origin of genetic information. Very different things.
That’s a huge “if” right there. So far all I’ve seen is misapplication and misunderstanding. Poor science when it’s science, poor philosophical backgrounds at the foundations.
In order to consider the evidence we have to start with solid philosophical foundations. You cannot make a proposal that doesn’t make sense and expect it to be accepted just because you manage to twist a few numbers while trying hard to hide even more philosophical convolutions. I think you’ve failed to grasp how far the poor philosophy extends and intertwines into ID “theory.” Think not just of the “designers-designed-designers” nonsense, but also of the irrationality of rejecting perfectly good results just because they happen to go against mistaken expectations. Expectations that show even more poor philosophy.
Think, for example, of the insistence that only one protein family would do, when there are plenty of examples of different protein families doing the same thing, and of members of a family doing different things. Think of how gene copies mean more complexity when it’s convenient to ID, but they add no information when it’s not convenient to ID. Think of the double standards where anything in evolution will be rejected if it doesn’t comply with mistaken creationist notions, while almost anything will be acceptable if it aligns with ID. Think of gpuccio forgetting the meaning of words he’s been using all the time just because, suddenly, I point out that those words imply something that makes evolution easier. Think of gpuccio claiming that affinity doesn’t have anything to do with complex functions just because I showed “ladders” of affinity available for selection, even though gpuccio himself had been quoting articles that talk about “affinity” in their descriptions of complex functions. That’s not just ignorance of an obvious factor in protein function (affinity). That’s something I’d rather not name.
So, if that kind of argument, and those kinds of people, convinced you that evolution is not a solid scientific theory, then you have very deep scientific and philosophical problems. Sorry to be the bearer of bad news.
I’m not sure I understand what it means to say synonymous substitutions have reached saturation, so I can’t comment on that in particular.
We have been talking past each other it seems. I thought we were talking about the neutral portion in amino acid sequence space.
But even non-synonymous mutations can be neutral, or very nearly so. I don’t believe it is possible to fully explore the neutral portion of sequence space for a protein of average length in 400 million years, with realistic mutation rates and population sizes.
Sure. Even so, no I don’t believe it is possible to fully cover the neutral sequence space of synonymous mutations for an average protein in 400 million years with realistic mutation rates and population sizes. Not even a good chunk of it.
Though I WOULD agree that IF synonymous substitutions have reached saturation, by which I now understand all possible combinations of synonymous substitutions to have been sampled (still completely unrealistic, but I’ll go with it), then in that (unrealistic) situation, yes, a huge portion of DNA sequence space has been covered.
Because of this:
Which is an enormous underestimation of the number of sequences that need to be visited before we reach saturation.
and more importantly this:
No offence Bill, but self awareness doesn’t seem very well developed at UD.
ETA: Just saw your comment:
Yes, here is one: I did the calculation myself and DNA_Jock is correct and you and gpuccio are wrong.
Straightforwardly false.
As usual you seem to lose track of what exactly was said in previous posts, and in response to what. I have not at any time stated that “materialists” did not have to supply reasons.
No.
What I was claiming to be logical and rational is that a probability can never cross over from merely very low to impossible. So it will technically never amount to a valid argument to say that, because the probability of X is low, X couldn’t possibly happen, so Y must have done it instead.
I never said that, because “because the probability of X is low, X couldn’t possibly happen so Y must have done it instead” is not a valid argument, we should therefore just relegate everything to chance, or that to believe chance did it is logical or rational.
Then you have conceded the point. Since it is true that a probability can never cross over from merely more unlikely to impossible, an intelligent cause can never be “necessary”.
Something like that, yes; I’d like to see something more than just “design did it”. I’d hate to see ID creationists keep yammering about how vacuous it is to say that “chance did it”, and then do the exact same thing using another, equally vacuous label, with “design” taking the place of “chance” in the same sentence.
Stop being hypocrites. Sort out your own position and come up with a logically valid, falsifiable hypothesis. Not this vacuous non-explanation we keep seeing. Particularly when it is based on a fallacy (that it is “necessary” because the odds of X by chance are low).
Corneel,
Isn’t it interesting that I’m supposed to be the one who doesn’t understand combinatorics, yet it’s gpuccio who changed that number systematically, rather than randomly?
Corneel,
Your whole argument is ad hominem.
Do you now think this says that Corneel is a completely unaware baboon? No, you’re just trying your best to support your position.
You guys make unsupported assertions and invoke logical fallacies all the time. Do you really think that TSZ is completely self-aware? More self-aware than UD?
We just have people with different philosophies arguing. Everyone makes mistakes.
No Mung, chance is here just a label for some blind physical phenomenon, like the tornado in a junkyard. We say the tornado throws things together “by chance”. That doesn’t mean there were no causes that explain why things ended up the way they did. For every piece of scrap picked up and thrown by the tornado, there is an explanation for how that happened. The wind came from some particular direction, and with enough force to pick up the object, and throw it onto that location, so it happened to land in the right way to stick together with that other piece of junk. To say it happened by chance is not to say that chance is itself a cause, like a force of nature.
lol, this is prime Mung output.
At this stage I have to wonder what the hell you think a cause, or an explanation even is.
Entropy,
Ok, we have some common ground. :-)
I will continue to think about your arguments and read your posts, but we’re probably moving in circles at this point.
Your comments on the ID movement are duly noted.
The number of substitutions required, on average, to change every site in a sequence will have to outnumber the length of the sequence, because sometimes the same site will mutate more than once. Unless you deliberately code the program so that an already-mutated site can’t change again, chances are you’re going to get several hits at the same position.
EDIT: Never mind, you realized that further down.
I never attack people on anything other than their arguments, but if gpuccio wants to pretend that he is a math genius he’d better live up to it.
TSZ is supposed to be a mixed bag, Bill. You are at TSZ as well remember? To be honest, I think the climate at UD is not really encouraging self reflection.
I can’t believe he did that. I guess he was thinking it required some orderly Design 🙂
Corneel,
So what? How is this relevant to the overall argument? Gpuccio underestimated the number of trials, and my initial take on Jock’s analysis was wrong. Does the real number make any difference to the argument?
Why is Jock making this diversion from the real discussion? It’s because gpuccio made an ad hominem attack by questioning Entropy’s reasoning and Jock came to his defense. Is Entropy a poor innocent victim here?
So we are in an ad hominem pissing contest. Do you want to be part of the problem or part of the solution?
Corneel,
Why do you think that so many TSZ members can no longer post at UD? Only a few are still there.