“The reason a bike lock works,” explains Meyer, “is that there are vastly more ways of arranging those numeric characters that will keep the lock closed than there are that will open the lock.”
Most bicycle locks have four dials with ten digits. So for a thief to steal the bike, he would have to guess correctly from among 10,000 possible combinations. No easy task.
But what about DNA? Well, in experiments Axe conducted at Cambridge, he found that for a DNA sequence generating a short protein just 150 amino acids in length, for every 1 workable arrangement of amino acids, there are 10 to the 77th possible unworkable amino acid arrangements. Using the bicycle lock analogy, that’s a lock with 77 dials containing 10 digits.
http://www.evolutionnews.org/2015/10/eric_metaxas_on_1100261.html
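The arithmetic behind the bike-lock analogy is easy to check directly. A minimal sketch (the `combinations` helper is mine, not from the quoted passage):

```python
# Sanity-checking the bike-lock analogy from the quoted passage.
# A lock with d dials of 10 digits each has 10**d combinations.
def combinations(dials, digits=10):
    return digits ** dials

# An ordinary 4-dial bike lock: 10,000 combinations.
assert combinations(4) == 10_000

# Axe's claimed odds for a 150-aa protein correspond to a 77-dial lock.
assert combinations(77) == 10 ** 77
```

Whether those odds describe anything biologically relevant is, of course, the whole point in dispute below.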
I believe this is what Mung has been talking about. I asked Mung:
How many goes do you get? How many bacteria in the earth’s soil?
Mung replies:
Not nearly enough.
I feel this is interesting enough for an OP as it seems to finally touch upon what IDers think the designer actually does that can be investigated scientifically.
For example, if we find in a population a protein that is different from the version in an ancestral population but which still works, then by (their) definition, that is prima facie evidence of the designer at work.
Perhaps we can then take the population with the original protein, enclose it in our most sensitive equipment, and attempt to detect the designer’s actions when it “solves the bike lock,” finds the new protein, and somehow makes the required adjustment?
If I were an ID supporter these are exactly the sorts of experiments I’d be proposing, and with money on the table (Templeton) I continue to be surprised at the lack of such endeavours. At the very least they can rule out some levels of possible designer interaction at the macroscopic level.
And Mung, I’d be interested in knowing how many would be enough?
Earlier during his direct testimony, Behe had argued that a computer simulation of evolution he performed with Snoke shows that evolution is not likely to produce certain complex biochemical systems. Under cross examination however, Behe was forced to agree that “the number of prokaryotes in 1 ton of soil are 7 orders of magnitude higher than the population [it would take] to produce the disulfide bond” and that “it’s entirely possible that something that couldn’t be produced in the lab in two years… could be produced over three and half billion years.”
http://www.talkorigins.org/faqs/dover/day12am.html
Mung and Colewd:
What matters is not the size of sequence space, but whether it can be explored one step at a time.
Any thoughts on something that actually matters?
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3979732/
Evolution of increased complexity in a molecular machine
And a classic…
Experimental Rugged Fitness Landscape in Protein Sequence Space
Allan Miller,
I am not sure what it brings to the table. It is an inference argument. The 64 million dollar question is: what is the mechanism that gets you from animal A to animal B? Intelligent design? OK, what’s the next step? Can the cell generate novel sequences by some as-yet-undiscovered mechanism? This is what James Shapiro is working on, called NGE. How much do you think we understand about human cells, down to the physics level? Can a discussion about the evidence of design in nature bear fruit? I am on the fence about whether this is worthwhile. I think the inference standard is of questionable value. Thoughts?
So long as you aren’t talking about getting from some current organism to some other current organism, the mechanisms of getting from ancestor A to descendant D seem pretty well understood: small, cumulative, single steps — sometimes beneficial, more often neutral (at the time they occur), and frequently deleterious but mildly enough to be tolerated for many generations.
The notion of design in nature is what precipitated the theory of evolution in the first place. What, Darwin wondered, caused so many features of so many organisms to be so well suited to their environment and lifestyle? He (and Wallace, and others) came up with the idea that a species is a population of unique individuals varying from one another in countless little ways, that some variations were more useful and advantageous, and that organisms lucky enough to enjoy these variations experienced better breeding success.
Of course, the notion of selective breeding was ancient even then, and people had been selectively breeding dogs and livestock since before recorded history. The key insight was that from the perspective of the organism, there was no “artificial selection”, there was only the environment which rewarded certain characteristics selectively. It was a known effective mechanism, needing only Deep Time to work to produce the biosphere we know today.
And today, the sources of novelty are largely identified – imperfect replication, sexual recombination, even radiation (from cosmic rays or naturally occurring radioactive materials). I’m not sure what you think is being missed. You SEEM to be saying that these mechanisms don’t strike your intuition as being sufficient.
The gist of our post is that, even if we allow DEM their definitions, their “conservation of information” (CoI) theorem does not apply to what they regard as “evolutionary search.” It’s highly relevant to this thread, why CoI doesn’t apply. CoI assumes that a target event is specified in advance of the “search” process. Scientists, however, decide what to investigate after seeing what nature tends to do. In my follow-up post, here at TSZ, I illustrated (animated) how randomly selected processes are biased. And I explained that there’s actually no limit to the quantity of active information (doubly misnamed) expected to arise by chance.
Ironically, Winston Ewert responded (“The GUC Bug”) by silently reverting to the definitions that Dembski and Marks gave in 2009, and by exploiting a defect in them (identified by me in 2009 — “Work Is Not Information”) to trump up the appearance that there was nothing to our model. Hyper-ironically, Dembski had criticized Joe for paying attention to the old definitions, but not those of DEM (2013). Joe and I agonized over the details of DEM, for obvious reasons. I contacted Dembski by email, to make sure that he would be sticking with the DEM (2013) definitions. So Ewert reverts to the D&M (2009) definitions, to criticize the GUC Bug?
There should be no controversy as to whether Ewert’s behavior was grossly inappropriate. You can’t turn it into “oopsies” without turning him into an idiot. I don’t believe he’s the brightest bulb, but an idiot he is not. And thus he’s accountable. But you just lost interest in the matter of sticking to definitions, didn’t you?
colewd,
Why invoke it then? If you can’t see any way that intelligence would solve your non-problem, how can you infer its necessity?
You tell me. Do we need to get from animal a to animal b?
Possibly. It doesn’t need to, though. The entire protein repertoire is actually composed of just a few hundred motifs. Clearly, one can get shorter motifs and combine them to make a 300-acid protein without having to land in a random part of 300-acid space. Clearly, that has happened. The proteins in organisms are evidently not random picks from the space delineated by their current length and alphabet size.
There are just 6 or 7 different enzyme-catalysed reactions, chemically speaking. And much of the electron-shuffling in those is actually done by metal ions.
The rest is reshuffling those modules. Your failure to appreciate the role of modularity is something of a sticking point here.
One wonders then what your excuse was before you met me.
Animal evolution is a tiny fraction of the history of evolution, one that, for all I know, barely involves de novo proteins (if any). It’s mostly regulatory changes.
And then there’s molecular clocks: Pick animal A and B so that we know they share a common ancestor by independent lines of evidence (any other A and B amounts to falsifying imaginary crocoducks). Calculate their divergence time based on known mechanisms, mutation rates, population sizes, generation times… Is the divergence time from a last common ancestor consistent with evolutionary history, the fossil record, etc..? Where can I claim my 64M prize if it is?
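The molecular-clock calculation mentioned above is straightforward in outline. A toy sketch with purely illustrative numbers (the function and all parameter values are my own, not drawn from any real A/B pair):

```python
# Toy neutral molecular-clock estimate (illustrative numbers only).
# Under neutrality, pairwise divergence d accumulates at roughly
# 2 * mu per site along the two lineages, so the time back to the
# common ancestor is about d / (2 * mu) generations.
def divergence_time(d, mu, generation_years):
    generations = d / (2 * mu)
    return generations * generation_years

# Hypothetical: 2% sequence divergence, 1e-8 mutations/site/generation,
# 20-year generation time -> about 20 million years.
t = divergence_time(0.02, 1e-8, 20)
```

The real test, as the comment says, is whether such estimates from known mechanisms agree with independent evidence like the fossil record.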
The simple fact is that if colewd pointed out solutions instead of problems he could step up and claim his Nobel prize in no time.
But they, as far as I know, don’t yet give out prizes for pointing out that we don’t know everything about everything, nor do we have a video of the origin of life.
Nor do they give prizes out for creating an impossible strawman and pointing out that it’s impossible.
It’s my specialty, and I’m rather proud of it. Took years of practice.
Then why did you say the following?
Does it not matter that “a significant percentage of random sequences have some function”?
Of course, if a “significant” percentage of random sequences has some function and by “significant” one means that it’s enough to explore the sequence space one step at a time, then the size of sequence space is completely irrelevant.
Is it really that hard?
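The point about stepwise exploration can be made concrete. A minimal sketch (my own framing, with a made-up functional fraction for illustration): what matters at each step of a walk is the expected number of viable one-mutation neighbours, not the size of the whole space.

```python
# If a fraction f of single-mutation neighbours is functional, each step
# of an evolutionary walk has on average f * n_neighbours viable moves --
# a quantity independent of the total size of sequence space.
def expected_viable_steps(length, alphabet, f):
    neighbours = length * (alphabet - 1)   # single-substitution neighbours
    return f * neighbours

# A 150-aa protein has 150 * 19 = 2850 one-step neighbours; if even 0.1%
# of those are functional, there are ~2.85 viable moves at every step.
steps = expected_viable_steps(150, 20, 0.001)
```

Whether the functional fraction among neighbours is actually that large is an empirical question — which is exactly why the clustering of function matters more than 20^150.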
I agree with you that:
Those of you who argue with ID proponents should put the tired old Weasel out to pasture, or wherever it is that old Weasels like to go, and give Felsenstein’s Killer Bug a try.
Perhaps folks should stop asking me to write a Weasel program and start asking me to write a Bug program. Maybe they should write a Bug program. They might learn something.
I probably haven’t understood anything from it. If you’re in a teaching mood why not start an OP? Teach, keiths.
Mung,
People were actually encouraging you to write a GA, just to get a handle on the issues involved therein, and render your critiques more informed. Weasel would do – Bug would do – but I’m sure you could do better yet.
Mung,
It matters that you can get from one to another. It’s more how they are clustered than how the whole space is patterned.
Wouldn’t it be amazing if they were clustered and there were viable paths? I’d be inclined to put that down to more than coincidence.
The GUC Bug was conceptual. Tom has a program, but one can see without any program that an evolutionary process modeled by a GUC Bug would increase fitness and do much better than a blind search. And that shows that Dembski, Ewert and Marks’s theorems do not establish that evolutionary processes cannot do better than blind search.
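For readers who want to see the conceptual point in code: this is not Tom’s program, just a minimal toy of my own on a deliberately smooth landscape, comparing a greedy uphill walker against blind sampling with the same number of fitness evaluations.

```python
import random

def fitness(s):
    return sum(s)  # toy fitness: count of 1-bits in the string

def uphill_bug(n=50, steps=1000):
    # Greedy bug: try one random bit-flip per step, keep the change
    # only if fitness does not decrease.
    s = [0] * n
    for _ in range(steps):
        i = random.randrange(n)
        t = s[:]
        t[i] ^= 1
        if fitness(t) >= fitness(s):
            s = t
    return fitness(s)

def blind_search(n=50, steps=1000):
    # Blind search: sample independent random strings, keep the best.
    return max(fitness([random.randint(0, 1) for _ in range(n)])
               for _ in range(steps))
```

On this landscape the bug reliably reaches much higher fitness than blind search — which is all the argument against the DEM theorems needs: an evolutionary process *can* beat blind search.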
Does Mung agree with our argument? Or does Mung have some counter-argument to ours? I am somehow not seeing any clear statement by Mung on that.
Did you even read any of Arrival of the Fittest?
Connectedness is an attribute of arbitrary complex phase spaces.
Did you read the section on the array of logic gates, and how one could change 80 percent of the gates (one at a time) without changing the output?
I think I get a perverse sort of pleasure from allowing people here to think that I am incapable of writing a GA. More than the pleasure I’d get if I actually wrote one to prove them wrong.
I feel like I need a more interesting problem than Weasel. It doesn’t even use crossover. But even more than that I want to know what it is that a GA is supposed to demonstrate and how we can objectively test one. I need more than “you might learn something” to motivate me.
I am certain I have more books on GA’s and have done more reading on the subject than most people here. Lack of understanding of the issues isn’t the problem.
For example, I asked folks here to write a generic GA that will solve any problem. Some people took that to mean I didn’t understand GA’s. I took it to mean they couldn’t do it. I am still waiting for someone to prove me wrong.
I think GA’s are problem-specific and need to be designed for the problem attempted to be solved. Am I wrong about that?
Are you aware of the capabilities of the simple NAND?
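For anyone unfamiliar with the point being gestured at: NAND is functionally complete, meaning every Boolean function can be built from it alone. A quick sketch:

```python
# NAND is functionally complete: NOT, AND and OR (and hence any Boolean
# function) can each be built from NAND gates alone.
def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))
```

This is the standard textbook construction; it is why a large array of identical NAND gates can compute anything at all.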
I do not have a counter-argument. I hope that is a clear enough statement. 🙂
Mung,
I don’t see why.
But … evolution is true, then? As generally accepted? Organisms are just wibbling their way round this wonderfully interconnected Designed space?
Mung,
I don’t think you’re incapable, I think you are in part bone idle, and in part don’t see the point. Yet you pontificate about what GAs are, what evolution is, and so on. You may get a perverse pleasure out of appearing ignorant, I dunno.
That too. Better if your opponent thinks you’re incompetent.
Mung,
Yes. All GAs do the same basic things with populations and organisms, a few wrinkles aside. Those ‘wrinkles’ obviously include the selection process, and that affects the fitness landscape, but you don’t even need one. You could write a generic GA and farm out the selection process to a subroutine. The selection process does not even need to be algorithmic.
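The “generic core with selection farmed out” idea can be sketched in a few lines. This is my own minimal illustration, not anyone’s production GA; the parameter values are arbitrary and the selection scheme (truncation) is just one of many that could be plugged in.

```python
import random

def generic_ga(fitness, length, pop_size=50, generations=100,
               mutation_rate=0.01):
    # Generic GA core: population, selection, crossover, mutation.
    # Only the fitness subroutine is problem-specific.
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)     # pick two parents
            cut = random.randrange(length)       # one-point crossover
            child = a[:cut] + b[cut:]
            # Per-bit mutation (bool XORs cleanly with a 0/1 gene).
            child = [g ^ (random.random() < mutation_rate) for g in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Plugging in "count the ones" as the fitness subroutine solves one-max;
# any other fitness function reuses the same core unchanged.
best = generic_ga(sum, length=20)
```

Swap `sum` for any other function of a bit-string and the same core applies — which is the sense in which the GA is generic even though no single GA solves every problem.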
The Biological Fine-Tuning argument. A designed fitness landscape. A designed algorithm for moving about on that landscape. An irreducibly complex interplay between organism and environment.
LMFAO
Mung,
Yes, because that way … er, no, that one’s beyond me I’m afraid.
Mung,
Yeah, I know, but modern evolutionary theory is pretty much completely correct as taught. Yes?
Then why were people claiming it could not be done and claiming I was a fool for suggesting it?
But the devil is in the details. A sufficiently abstract GA won’t solve any problem, much less every problem.
Right, I know. Stupid idea. Now I’ll pause and let you and Allan work things out.
Well, there we have it. Mung — perverse by his own admission.
Mung,
Sorry, I’m too thick.
Then you make a post about ID “science” and remove all doubt.
Mung,
That’s right, if you are looking to solve a problem, you need a selection routine. That does not mean there is no generic GA core that is applicable throughout. That is what was taken from nature. That’s why writing a GA (rather than reading a book on GAs, lol) is a routine part of an evolutionary education.
Mung again arguing over the meaning of stuff.
So let’s change the question. Can one implement a generic fitness function to solve any problem?
I do wish that law enforcement would take more seriously the probability that the Designer is stealing a whole lot of bicycles.
What else is it going to do now? Apparently it isn’t even designing P. falciparum any more, slacking off and letting normal evolution overcome the chloroquine threat.
Stealing bikes is about all that it has left to do.
Glen Davidson
Allan Miller,
Let’s explore your argument. Can you show me more specifically how modularity can build a multi-protein complex? How does co-option work? A protein that exists just happens to fit together in form and function with 30 other proteins? How do 15 nuclear proteins work together? How do you propose reshuffling is done? How does gene-expression timing evolve? We still don’t know the source of splicing codes. How does the nuclear pore complex know how to block proteins from entering the nucleus? What is the specific mechanism you think built all this?
That seems to be what some of the intelligent design creationists are retreating to.
Seems a solution for theists. God designs us and all other life on Earth by designing and providing the appropriate environments at the appropriate times. Fits the facts, no?
Plus it has the benefit, from a creationist standpoint, of being damn near untestable!
Yep. 🙂
The core that is taken from nature includes the selection routine. Are you saying that nature plugs in different selection routines in order to solve different problems?
Another core that is taken from nature is the concept of the chromosome. Are you saying that there is a generic chromosome that can be used to solve any problem in a GA?
Mung: A sufficiently abstract GA won’t solve any problem, much less every problem.
So now you’re confusing me.
I said: I think GA’s are problem-specific and need to be designed for the problem attempted to be solved.
I asked: Am I wrong about that?
You said yes, I am wrong about that. Now you appear to be saying the opposite.
Do you think there is a generic GA core that will solve any problem? Or are you just asserting that every GA has a generic GA core that is present in every GA (e.g., every GA includes a “population” of candidate solutions)?
Mung,
Nature may or may not select. If it does, you get change through adaptation; if you don’t, you get change through drift. But yes, one could look at ‘being a predator’ and ‘being prey’ as different selection routines plugged in by nature.
Now, I think, you are just yanking my chain.
colewd,
You ‘explore my argument’ by scooting straight past it. I am talking, in the first instance, of the role of modularity in generating a single protein of a given length. While a consistently folded configuration 300aa long may be deemed impossible due to the size of 300aa space, and the rarity of folding within it, the protein library is currently stuffed with sub-protein-length fragments that reliably fold. Plug a number of those together and you have a ‘novel’ 300 aa protein which consistently folds. Therefore, the number 20^300 is utterly irrelevant.
All arguments that attempt to scale up 20^300 according to the total length, in amino acids, of the peptides forming the system in question, are also utterly irrelevant.
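The combinatorial contrast here is easy to state explicitly. A back-of-the-envelope sketch (the library size of 500 is my own illustrative number, in the spirit of the “few hundred motifs” mentioned earlier, not a measured figure):

```python
# Illustrative combinatorics of the modularity point. Drawing a 300-aa
# protein at random samples a space of 20**300 sequences; combining
# 3 fragments from a (hypothetical) library of 500 reliably folding
# motifs samples only 500**3 arrangements.
random_space  = 20 ** 300
modular_space = 500 ** 3

# 500**3 is a mere 125 million; 20**300 exceeds 10**390.
assert modular_space == 125_000_000
assert random_space > 10 ** 390
```

That gap — hundreds of orders of magnitude — is why scaling arguments built on 20^300 miss the target if folding modules are reused.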
Do you understand this?
Mung,
You set up a false dichotomy: either one GA will work in all conceivable situations, or every GA has to be written from scratch in its entirety. There is another option, as any software jockey would know.
I don’t know why you are fixated on problem solving anyway; that’s not why students of evolution write them. GAs just take populations of strings and apply analogues of biological processes to them. This can be used to solve problems, but it also can be used to explore the behaviour of populations themselves, and to gain an understanding of the impact of the various parameters on operation — population size, mutation rate, crossover, strength of selection, etc. It might also equip one to make sensible statements in conversations about them.
I know you won’t. That would be one concession too far to a suggestion from the TSZ Crowd. You know everything you need to know about them and have even read a book. Your education is, of course, in your own hands.
colewd,
Evolution. Your turn.
As Allan has noted, you are changing the subject. As expected.
But, just for s&g, I’ll answer:
You have a protein (say a DNA-binding protein). It dimerizes. Duplicate the gene, swap in a different DNA-binding domain (or just mutate aa’s that contact the major groove) and you’ve got a heterodimer that recognizes a different DNA sequence. Oh, and hey presto, you also have a “multi-protein complex”.
That was easy.
🙂
Yeah, well, Allan tells me I’m wrong about there not being a generic GA that can solve any problem then begs me to change the subject from problem solving.