(Sorry this is so long – I am in a hurry)
Gpuccio challenged me and others to come up with examples of dFSCI which were not designed. Not surprisingly, the result was that I thought I had produced examples and he thought I hadn’t. At the risk of seeming obsessed with dFSCI, I want to assess what I (and hopefully others) learned from this exercise.
Lesson 1) dFSCI is not precisely defined.
This is for several reasons. Gpuccio defines dFSCI as:
“Any material object whose arrangement is such that a string of digital values can be read in it according to some code, and for which string of values a conscious observer can objectively define a function, objectively specifying a method to evaluate its presence or absence in any digital string of information, is said to be functionally specified (for that explicit function).
The complexity (in bits) of the target space (the set of digital strings of the same or similar length that can effectively convey that function according to the definition), divided by the complexity in bits of the search space (the total number of strings of that length) is said to be the functional complexity of that string for that function.
Any string that exhibits functional complexity higher than some conventional threshold, that can be defined according to the system we are considering (500 bits is an UPB; 150 bits is, IMO, a reliable Biological Probability Bound, for reasons that I have discussed) is said to exhibit dFSCI. It is required also that no deterministic explanation for that string is known.”
(In some other definitions Gpuccio has also included the condition that the string should not be compressible)
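The measure quoted above is usually read as the negative base-2 logarithm of the ratio of target space size to search space size. A minimal sketch in Python (the function name and the example numbers are mine, purely illustrative):

```python
import math

def functional_complexity_bits(target_space_size, search_space_size):
    """Functional complexity read as -log2(|target| / |search|)."""
    return -math.log2(target_space_size / search_space_size)

# Illustrative example: strings of length 20 over a 4-letter alphabet
# (search space 4^20), with 1024 strings assumed to perform the function.
bits = functional_complexity_bits(1024, 4 ** 20)
print(bits)  # 30.0 -- well below either proposed threshold (150 or 500 bits)
```

On this reading, the 500-bit UPB and the 150-bit "Biological Probability Bound" are just cut-offs applied to the returned value.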
These ambiguities emerged:
Some functions are not acceptable, but it is not clear which ones. In particular, I believe that functions have to be prespecified (although Gpuccio would dispute this). Also, functions which consist of identifying the content of “data strings” (a term which is itself not so clear) are not acceptable, because the string in question could have been created by copying the data string.
The phrase “no deterministic explanation for that string is known” is vague. It is not clear in how much detail, and with how much certainty, the deterministic processes have to be known. For example, it appears from the above that the mere possibility that the string in question might have been copied from the string defining the function, by some unknown method, is sufficient to count as a known deterministic explanation. This implies that it is really sufficient to be able to conceive of the vaguest outline of a deterministic process to remove dFSCI. I think this amounts to another implicit condition: no causal relationship between the function and the string.
Lesson 2) dFSCI is not a property of the string.
It is a relationship between a string, a function and an observer’s knowledge. Therefore, it may be that dFSCI applies for a string for one observer with a certain function but not for another observer with a different function. The rules for deciding which function are not clear.
Lesson 3) The process for establishing the 100% specificity of the dFSCI/design relationship is not commonly found outside examples created by people to test the process.
Gpuccio says this about the process:
“To assess the dFSCI procedure I have to “imagine” absolutely nothing. I have to assess dFSCI without knowing the origin, and then checking my assessment with the known origin.”
When challenged he was unable to name any instances of this happening outside the context of people creating or selecting strings to test the process, as in our discussions. This is important, as the dFSCI/design relationship is meant to be an empirical observation about the real world applicable to a broad range of circumstances (so that it can reasonably be extended to life). If it is only observed in the very special circumstances of people making up examples over the internet then the extension to life is not justifiable. To give a medical analogy: it might well be that a blood test for cancer gives 100% specificity for rats in laboratory conditions. This is not sufficient to have any faith in it working for rats in the wild, much less people in the wild. Below I discuss what is special about the examples created by people to test the process.
A Suggested Simplification for dFSCI
dFSCI says that given an observer and a digital string where:
1) The observer can identify a function for that string
2) The string is complex in the sense that if you just created strings “at random” the chances of it performing the function are negligible
3) The string is not compressible
4) The observer knows of no deterministic explanation for producing the string
Then in all such cases if the origin eventually becomes known it turns out to include design.
Given the rather lax conditions for “knowing of a deterministic mechanism” that emerged above, surely (2) and (3) are just special cases of (4): if (2) or (3) failed to hold, deterministic mechanisms for creating the string would be conceivable.
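Condition (3) can be operationalized, very loosely, with a general-purpose compressor as a proxy (the 0.9 ratio threshold below is an arbitrary choice of mine, not part of anyone’s definition):

```python
import os
import zlib

def looks_compressible(s: bytes, ratio_threshold: float = 0.9) -> bool:
    """Crude proxy for the 'compressible' condition: does zlib shrink the
    string well below its original length?  The threshold is illustrative."""
    return len(zlib.compress(s, 9)) < ratio_threshold * len(s)

print(looks_compressible(b"ab" * 500))       # highly ordered string: True
print(looks_compressible(os.urandom(1000)))  # random bytes: almost surely False
```

A highly ordered string admits an obvious deterministic generating rule (the decompressor plus the short compressed form), which is exactly why (3) collapses into (4).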
So the dFSCI argument could be restated:
Given an observer and a digital string where:
* The observer can identify a function for that string
* The observer cannot conceive of a deterministic explanation for producing the string
Then in all such cases if the origin eventually becomes known it turns out to include design.
Conclusion
There are two main objections to the ID argument:
A) There are deterministic explanations for life.
B) Even if there were no deterministic explanations, it would not follow that life was designed.
For the purposes of this discussion I will pretend (A) is false and focus on (B).
No one disputes that it is possible to detect design. The objectors to ID just believe that (B) is true. The correct way of detecting design is to compare a specific design hypothesis with alternatives and assess which provides the best explanation. This includes assessing the possibility of the designer existing and having the motivation and ability to implement the design. If no specific hypothesis is available then nothing can be inferred.
So is the dFSCI claim above true and if so does it provide a valid alternative way of detecting design?
The trouble is that there is a dearth of such situations. One of the reasons for this is that digital strings do not exist in nature above the molecular level. At any other level it is only a human interpretation that imposes a digital structure on analogue phenomena. The characters you are reading on this screen are analogue marks on the screen. It is you that is categorising them into characters. So all such strings are created by human processes. It follows that design is a very plausible explanation for any such string. People were involved in the creation and could easily have designed the string. If you add the conditions that the function must be prespecified and there should be no causal relationship between the function and the string, then design is going to be by far the best explanation. It goes further than that. It also means there are almost no real situations where someone is confronted with a digital string without knowing quite a bit about its origin – which is presumably why Gpuccio can only point to examples created/selected by bloggers.
What about the molecular level? Here there are digital strings that are not the result of human interpretation. Now human design is massively implausible (except for a few very exceptional cases). The problem now is that carbon chains are the only digital strings with any kind of complexity, and these are just the ones we are trying to evaluate. There are no digital strings at the molecular level with dFSCI except for those involved in life.
So actually the dFSCI argument only applies to a very limited set of circumstances where a Bayesian inference would come to the same conclusion.
gpuccio:
The added context just reinforces my point. You are saying that if we currently lack knowledge of selectable intermediates, then the only “realistic” way to model NS is to assume that there are none and that NS therefore plays no role at all.
That’s as ridiculous as saying “I haven’t measured the air resistance of my new car design yet. Therefore the only realistic model of my car’s performance must assume that air resistance plays no role at all.”
The phrase “the only model he can really build at present” does not mean the same thing as “the only realistic model of NS.” And as I pointed out, Joe did the right thing in his equations. Instead of foolishly assigning NS no role, as you recommend, he included fitness parameters in his equations. His equations therefore apply both in cases where selectable intermediates exist and in cases where they don’t. You just have to plug in appropriate values for the fitness parameters.
Joe’s definition of fitness covers both natural and artificial selection. Your airy dismissals might be more persuasive if you actually understood what you are airily dismissing.
Another airy dismissal sans counterargument. How do you justify the three wild assumptions you must make in order to force-fit your hypothesis to the evidence?
1. The assumption that selectable intermediates are absent.
2. The assumption that there is a designer who can bridge the gaps that you assume are there.
3. The assumption that out of trillions of possibilities, the designer just happens to behave in one of the few ways that produce an objective nested hierarchy and thus make it appear that unguided evolution is operating.
If you can’t justify those assumptions, you can’t justify ID, even in its “guided evolution” forms.
Not only that, but ‘random’ (with caveats***) strings can completely replace the activity of a deleted, and much longer, functional protein.
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0015364 (I may have mentioned this before!)
*** There is an element of ‘design’ – a rule is involved such that hydrophilic and hydrophobic acids are alternated in a 14-acid cycle, which is significant for folding (though by no means the only cycle that will fold). Nonetheless, the specific acid at each hydrophilic and hydrophobic site is entirely in the lap of the gods. Who see fit to provide several analogues for 4 different E. coli proteins from this otherwise totally random mishmash of acids, expressed in a library of a mere 1.5 million different sequences. Which neatly demonstrates that the ID assertion that function is ‘islanded’ universes apart is suspect, when you focus the empirical lens on the matter. Getting the basic peptide for this particular set requires a 14-acid module, hardly the biggest evolutionary task ever faced. The peptides synthesised in the study were longer iterations of that base module but still only 102 acids in length, or 306 nucleotides – and as functional as you like!
GP has no way of knowing how many evolutionary steps separate a barely functional “random” sequence from an optimized sequence.
Rather than 80 or 300 bases, we may have half a dozen.
gpuccio, the two sentences above mean exactly the same thing.
My understanding is that English is not your first language, and I believe you are rushing through the comments of many of us without fully understanding them.
Part of my sentence can be paraphrased to mean, “Above some appropriate threshold, some event is simply too improbable, and thus a random process is not a credible explanation”.
The part of my sentence that reads, “that something above a certain threshold needs design”, can be paraphrased to, “A bit configuration above some appropriate threshold requires a design explanation”.
The final point, “not a configuration below that threshold ” , simply means, “a bit configuration below a design threshold does not require a design explanation”.
The only comment I can make about that statement is that it is uncalled for.
From Allan’s paper that he may have mentioned before!:
So, it’s not just me then!
Gpuccio,
You’re welcome. Now if we could only get Joe G. to use the spell checker, we’d be set. I’m not sure he understands why the red squiggly line always appears under the word ‘obvioulsy’. Perhaps he thinks it’s a WordPress ‘auto-emphasis’ feature.
Click here for amusement.
Again, you are rushing to respond without understanding the point being made.
The threshold we choose, in all our debates between contributors on both sides, is the number of bits that we accept as the boundary between what is possible without the mechanism of intelligent design, i.e. via random processes, and those configurations that require the efforts of an intelligent designer.
By definition, any bit configuration under the threshold, is considered to be a possibility, in a random distribution.
As an example it is possible to find 7 bit patterns in a random distribution that will equal ‘A’.
Now please read the next part carefully before you respond as it may not be obvious why it is important.
If I design a 7 bit pattern to equal ‘A’, it in no way means that all other patterns that exist with that same value, are thus no longer random.
If random patterns can be found with the same configuration as a designed pattern, those random pre-existing patterns are still considered to be due to random processes, even though the pattern I configured with intentional design, exists in the universe alongside them.
The whole point of designing a bit configuration that can also be found occurring randomly, is to show the functionality of a string, that can also exist due to non-design processes.
By designing the string, I am demonstrating it, not forcing its existence.
Without my design efforts, the string could still exist in nature since it is below our design threshold.
So in conclusion, I will design a string, that is also within the capabilities of non-design processes to produce.
Since it is below the threshold, it already exists randomly and I don’t have to design it for any other purpose than to show what its capabilities are.
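The point about finding a “designed” 7-bit pattern in random data is easy to demonstrate: in a uniformly random bit stream, each aligned 7-bit block equals the ASCII code for ‘A’ with probability 1/128, design or no design. A quick sketch (the seed and stream length are arbitrary choices of mine):

```python
import random

random.seed(1)  # reproducible run

TARGET = format(ord('A'), '07b')  # '1000001', the 7-bit ASCII code for 'A'

# Generate a random bit stream and count aligned 7-bit blocks equal to TARGET.
bits = ''.join(random.choice('01') for _ in range(7 * 100_000))
blocks = [bits[i:i + 7] for i in range(0, len(bits), 7)]
hits = blocks.count(TARGET)
print(hits)  # expect on the order of 100_000 / 128, i.e. roughly 780
```

The designed copy of ‘A’ sits alongside hundreds of undesigned copies, which is exactly the commenter’s point.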
Yes.
The “dFSCI” string has to be due to non-design.
To clarify, if I “show” you an initial string with the information to self-replicate, whose length is below the UPB threshold, are we in agreement that the string can also exist randomly without my actually creating it?
So you’re saying that ID models based on mathematical improbabilities, like Dembski’s CSI, are not applicable as a tool for investigating the capabilities of biological processes?
keiths:
gpuccio:
Would you care to apply that same standard to your Designer? Substitute ‘Designer’ for ‘selectable intermediates’, with the appropriate grammatical changes:
Too funny.
And I have given you evidence that selectable intermediates exist. You didn’t deny that Lenski had found one — your only claim was that it was a case of ‘microevolution’, not ‘macroevolution’. Also, my objective nested hierarchy argument shows that the evidence supports the existence of selectable intermediates far, far better than it supports the existence of your designer.
But there is support for them. Even you accept the reality of natural selection. Natural selection operates via differential fitness, so of course there are fitness parameters in the equations. How could there not be?
You keep saying that you accept microevolution but deny macroevolution. Do you really think that microevolution never involves fitness differentials?
keiths:
gpuccio:
Your understanding isn’t even close to perfect if you can’t see that fitness applies to both natural and artificial selection, and both macro- and microevolution.
Your answer failed to justify the assumptions you must make. Let me pose the question again:
gpuccio:
By your own standard, then, we must reject the Designer unless you can find him and show him directly to us.
Gpuccio 722
Be prepared for some apparently pedantic stuff below but I truly believe it is this confusion over language that hides deep problems with your dFSCI assertion. There is a point to it.
Your English is better than most Englishmen – but here it has let you down. To “infer” is to draw a conclusion from … people infer things … an explanation cannot infer anything. For example, the wet umbrella implies that it is raining. I infer from the wet umbrella that it is raining. (This is a common grammatical error in English).
The key difference as far as we are concerned is whether a design explanation logically implies a design origin or empirically implies it.
You may wrongly infer that a protein is designed. That is not the point.
Yes, this is what I mean. And, assuming that you use “includes” in a reasonably normal sense (but this is getting into a real quagmire), it follows that a design explanation logically implies a design origin.
This is the same confusion. I am not saying that the pattern in the cloud logically implies that the pattern was designed, or anything about its origin. What I am saying is that if the pattern was designed, then it follows logically (not empirically) that the pattern had a design origin.
Yes – but the key thing is the nature of that impossibility. I just want you to agree that if the origin of the configuration of a digital string was a design process, then it follows logically (not empirically) that the configuration of the digital string was designed. I think you agree with this, but these things need absolutely nailing down given the nature of the debate. If you disagree, then it should be possible to describe a case where a digital string’s configuration had an origin in a design process and yet was not designed.
What is this all leading up to? I hope that at the end of this you will agree:
The configuration of a digital string is designed if and only if the origin was a design process.
That’s exactly the point. CSI and its variants are mathematical abstractions purporting to model whether things can just happen. GAs are an appropriate exercise of CSI models. If ID proponents do not believe mathematics is an appropriate way to argue, perhaps they should avoid using Dembski’s and Behe’s mathematical models.
But it is just silly to initiate an argument based on a mathematical abstraction and then deny that mathematics is an appropriate avenue for testing.
Gpuccio 733 (edited to try and make clearer)
I am glad you agree that:
The configuration of a digital string is designed if and only if the origin was a design process. (X)
And furthermore it follows from your definition of being designed. So this is logically true, not empirically true. Let’s hang on to this agreement and I will label it (X) for brevity.
I have trouble reconciling this with:
But maybe that is just a misunderstanding. All I am saying is that:
A) To give a design explanation of the configuration of a digital string is to assert (among other things) that the configuration of the digital string is designed.
Therefore
B) To give a design explanation of the configuration of a digital string is to assert (among other things) that the configuration of the digital string has a design origin.
(B) seems to me to be logically true as a result of your acceptance of (X) above.
What did I miss/fail to understand?
An additional thought added later
You wrote:
I am not saying that the hypothesis is logically true. I am saying that if the hypothesis is true then it logically implies a design origin. So, for example, it follows that if I don’t know of any plausible design explanation then I don’t know of any plausible design origin and if I don’t know of any reasonable design origin then I don’t know of any reasonable design explanation.
Perhaps it would be plainer to say that evolution, as demonstrated by Lenski, is the only observed process that generates biological information. It is the only candidate for an explanation. Denigrating it as microevolution is equivalent to saying a falling apple is just microgravity.
Gpuccio 739
That will do.
OK. I will rephrase that slightly.
Is that clearer?
Let me risk jumping ahead a bit. I will look for a couple more agreements (I am actually rather pleased with this establishing agreement bit by bit. I feel like it is avoiding a lot of misunderstandings. But it is taking a very long time and you must be getting bored).
Agreement 1:
Given :
The configuration of a digital string is designed if and only if the origin was a design process.
then it follows
The configuration of a digital string is not designed if and only if the origin was not a design process.
Agreement 2:
In the case of digital strings with a function which are complex enough to exclude a probabilistic explanation then the only explanations which are known to most of society at this time are design explanations or deterministic explanations.
I think this is logically true – but if you want to insist it is empirical it makes little difference.
Please, please, please, ……read more carefully.
I can show you step by step that non-design processes can result in strings which you, gpuccio, will agree have “dFSCI”.
However, there is nothing in my quoted statement above, that has anything to do with “dFSCI”, for the point I am trying to make for this step.
The answer you gave has no relationship to the question I actually asked.
“dFSCI” comes later in the process which is the whole point, that non-design processes will result in strings which you, gpuccio, will agree have “dFSCI”.
Why are you afraid to simply answer the questions that are actually asked?
Q1: Can random strings of any length exist without a “design mechanism” ?
Q2: At what length are “design mechanisms” necessary?
That is an assertion that no one on your side has even come close to providing evidence for.
For every mountain of articles that support “non-design biology”, there is one small hill of articles that are opposed to “non-design biology”.
There is not one single article written that supports “designed biology”.
Any such “designer” document would need to meet the same requirements as a “non-design” treatment, in that the “mechanisms” that support design would have to be described.
It’s one thing to say “Darwin can’t” as opposed to, “Here’s how the designer did it”.
Show me one article that describes the methodology of the designer.
That’s the type of statement that Behe used to lose Dover for the ID side and it won’t help ID’s cause in the future if ID continues to “hand-wave” away “non-design” articles.
Show me one pro-design document that describes the designer’s actual design methodology.
Was the “designer of life” himself alive?
That would require gpuccio to at least agree on some basic issues.
Here’s the first.
Issue 1: Below what “bit threshold level” do we consider that “non-design processes” can operate?
Popcorn: Check.
It would be interesting to see any design methodology at all that does not involve cut and try. “Intelligent” selection is not different from natural selection in probability. And based on the Lenski experiment, functionality resulting from natural selection involves steps that have no obvious function that could be selected.
The whole point of coming up with a term like Dembski’s UPB is to establish the demarcation line between bit configurations of information that can only be attributed to “design processes” and those smaller less complex bit configurations that might be attributable to any “non-design” mechanisms.
kairosfocus also uses a 500 bit threshold while other IDists have set thresholds as low as 150 bits.
The UPB and other defined thresholds are a requirement if you intend on making an ID improbability argument.
Please read up on this, and then come back with a threshold you consider appropriate.
KF is fond of referring to his straw-in-a-hay-bale-a-light-year-on-its-side probabilities. The authors of this paper mention that it would take a mole of universes – 6 x 10^23 spaces, each billions of light years across – to hold every possible 100-acid sequence built from an alphabet of 20 amino acids. And yet here we have a library of 1.6 million non-‘natural’ proteins that would together fit into about a tenth of a single E. coli cell, and we find function for not one but 4 nutritional knockouts of a possible 27 assayed. That’s some strike rate.
OK, they weighted the game in favour of a 4-helix fold. The patterning algorithm used yields a potential 5 x 10^52 sequences all told, so the algorithm obviously cuts down the space dramatically. I reckon the space for 5 x 10^52 would be a sphere of about 2.6 AU in radius – a mere 43 light minutes across, but still quite a haystack in which to hide this tenth-of-an-E coli needle. Suggesting that it’s not a needle at all. Or rather, the haystack is in no small part made of needles.
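As a sanity check on those magnitudes (the figures are the commenter’s; the arithmetic below is mine):

```python
import math

# Full sequence space for 100-residue proteins over the 20 amino acids:
full_space = 20 ** 100
print(math.log10(full_space))  # ~130.1, i.e. roughly 10^130 sequences

# The patterning algorithm's restricted space, versus the ~1.6 million-member
# library actually screened:
patterned_space = 5e52
library = 1.6e6
print(patterned_space / library)  # the library samples ~1 part in 3e46 of it
```

Finding four functional replacements in a sample covering roughly 3 parts in 10^47 of even the restricted space is what makes the “islands of function” picture look suspect.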
What would be impressive is if GP applied his algorithm and picked out the functional sequences. He would probably (rightly) infer design in every single case, functional or not. But we knew that. What we didn’t know, without trying, is which of these part-designed proteins would function. Nor do we know, for any functional peptide, how far away is its nearest functional neighbour. But I would not bet on any great distance, given the way these flashgun pops illuminate the space.
The designer that Dembski and Behe insist is a requirement for the diversity and existence of life itself.
It’s in ID 101! 🙂
Mung: “It just happened, that’s all” is hard to model, mathematically or otherwise.
History is hard to model, if you’re looking to explain a particular result. However it happened. Plot the positions and energies of every atom in the earth-atmosphere system a week last Tuesday and see if it comes up with today’s weather on any run. Or set up a virtual casino and see whether the run of numbers duplicates a real one. Therefore God?
ID enthusiasts seem to think that evolutionary theory should be able to ‘explain a giraffe’, in every detail, otherwise it, and any tool used in it, is wrong.
You can look at general behaviours and the interplay of forces in such stochastic models. And one of the most striking features of them is their tendency to produce ‘design-like’ results from essentially random (but biased) processing. Because a method of discard of poorer solutions and retention of better is almost indistinguishable from an intentional process of design. It simply doesn’t need the intent bit. Cold weather will weed out those genes whose phenotypes are less robust in cold weather; a prolonged period of cold will see the entire population stripped of those genes in favour of the more robust. The cold couldn’t care less.
A single round of selective fixation adds an increment of ‘designedness’ to the population; repeat rounds add further increments. This is a vital consideration for people who wish to declare that actual design is behind this or that string. If it came from a genetic process with bias, it comes from a known source of false positives.
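The cold-weather example can be simulated in a few lines: repeated rounds of discard-and-refill push a population toward “design-like” robustness with no intent anywhere in the loop (all parameters here are illustrative choices of mine):

```python
import random

random.seed(42)

# A population of 'genomes': each is just a cold-robustness value in [0, 1].
pop = [random.random() for _ in range(1000)]

# Repeated rounds of 'cold weather': discard the least robust half, then
# refill by copying survivors with small copying errors.  No intent anywhere.
for _ in range(20):
    pop.sort(reverse=True)
    survivors = pop[:500]
    offspring = [min(1.0, max(0.0, g + random.gauss(0, 0.02)))
                 for g in random.choices(survivors, k=500)]
    pop = survivors + offspring

print(sum(pop) / len(pop))  # mean robustness has climbed well above the 0.5 start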
Again, we are having language difficulties and these difficulties are yours, not ours.
The term “design mechanism” does not mean design is a mechanism.
If you point out a “red Ferrari” to me I would not correct you with the statement, “Red is not a Ferrari”.
I’d like to attempt to invalidate your “dFSCI” but I can’t if you don’t put it into a form that is falsifiable.
If you cannot come up with a falsifiable version of “dFSCI”, it is useless as a scientific tool.
If I can show a computer simulation of the growth of complex information without the guidance of a designer, would that invalidate “dFSCI”?
Ah, but you are really vague!
I want to invalidate “dFSCI”.
Can you come up with a falsifiable description for “dFSCI”?
This implies that there is enough information in your description to come up with an experiment that would invalidate “dFSCI”.
a) – yes.
b) – where is it?
c) – yes
d) – I don’t understand what you mean by remunerate
e) – yes
Then let’s agree on a test. I don’t want to do something on my own and have you say the experiment was not valid.
Do you agree on f) ?
f) The software in charge of the simulation is not a part of the simulation.
I’m confused Joe as to who you are in agreement with.
Joe does seem to exist on a special island all his own.
Gpuccio 746
Almost there but there is a significant hurdle …
I cannot understand how you can say we cannot conceive of origins. As long as you can picture something in your mind you can conceive of it, and you can certainly picture origins: you can conceive of someone designing something, or of a natural process creating a crystal which becomes the origin of something. I think you must be using some rather specialised meaning of the word “conceive”.
I want to ask you if you agree to this:
But clearly we need to be sure what we mean by “conceive” before answering this.
Joe 755
I am couldn’t resist this. I looked up design in the online dictionary. It can be a verb or a noun. “mechanism” is a noun. So if a design is a mechanism it must be the noun meaning of “design”. Here they are listed:
I don’t see the word “mechanism” there. I wonder which meaning you think implies mechanism?
Gpuccio,
Evolutionary biologists (and I) claim that selectable intermediates exist and that unguided evolution is responsible for the diversity of life. You claim that selectable intermediates don’t exist, or that they are so sparse as to be unbridgeable via a Darwinian process. Therefore you claim that an intelligence must have been responsible for bridging the gaps.
What would we expect to see if the evolutionists and I are correct? Well, unguided evolution operates via small genetic changes and primarily vertical inheritance, so we would expect it to produce a nested hierarchy. And not just any nested hierarchy, but an objective nested hierarchy, meaning that disparate lines of evidence — morphological and genetic, for example — will converge on the same tree, or very nearly so.
That is exactly what we find. As Theobald explains, if you infer a nested hierarchy for the 30 major taxa of his Figure 1, first using morphological data and then using genetic data, you get exactly the same tree to an accuracy of 38 decimal places. Out of trillions of alternative possibilities, unguided evolution via selectable intermediates gets it exactly right.
On the other hand, what would we expect to find if your designer hypothesis were correct? Well, you are hypothesizing an unknown designer with unspecified abilities working under unknown constraints with unknown goals. Therefore, your hypothesis makes no predictions at all. Any state of affairs would be compatible with the existence of your Designer — you could just shrug and say “I guess that’s how the Designer did it.”
The world looks exactly — to 38 decimal places — like we expect it to look if unguided evolution is operating via selectable intermediates. Against this stunning success, intelligent design can offer nothing. It makes no prediction at all. The evidence blows it away. It’s not even competitive as a theory.
Your argument is a futile attempt to bring ID back into the race by tacking extraneous, ad hoc assumptions onto it. But as I’ve explained, you can’t just tack on arbitrary assumptions — you’ve got to justify them.
You’ve invented a fictional barrier to evolution by assuming that selectable intermediates do not exist. Yet you’ve been shown that the evidence is literally trillions of times stronger for the existence of selectable intermediates than it is for their absence.
You try to rationalize this by claiming, suddenly and very conveniently, that indirect evidence, no matter how overwhelming, is no longer sufficient to establish the existence of selectable intermediates. Never mind that all of ID is based on indirect evidence. ID depends on indirect evidence, so claiming that it is insufficient when applied to evolution is a transparent double standard.
Having assumed (without justification) that there are no selectable intermediates, you now need a designer to get across the barren gaps. You assume the existence of a Designer who conveniently has the abilities needed to do the job.
In other words, you invent a Designer to surmount the invented obstacle.
As if that weren’t ridiculous enough, you need yet another assumption to make this rickety “theory” match the evidence: you have to assume that the Designer, for whatever reason, just happens to behave in a way that mimics unguided evolution and produces an objective nested hierarchy — despite the trillions of other possibilities.
I don’t see how you can suggest it with a straight face, especially given that the alternate hypothesis fits the evidence without the need for any ridiculous, ad hoc assumptions.
Tomorrow I’ll address some of the specific claims you make in your latest comment to me. For now, suffice it to say that you don’t succeed in justifying the ridiculous assumptions outlined above.
When arguing with IDists I think you need to be careful with the word selectable. Neutral or nearly neutral mutations pass through the sieve of purifying selection, but are not “selected for.”
As Lenski demonstrated, neutral mutations can provide the critical scaffolding for later adaptive changes. This is what makes possible molecular inventions that span Behe’s Edge.
The problem here is one of communication, not simply of taking a somewhat different position than someone on your own side.
Two doctors could not usefully debate a diagnosis if their usage of terms differed widely.
It’s one thing to disagree on a point, but a mismatch in definitions means someone is going to receive a message that is different from the one that was sent.
Everyone should agree on terminology.
Mung #730 says:
Ok, Joe, I’ll bite. What is a GA?
I’ll say what a GA is not. A GA is not a model of evolution.
Sorry for the delay in answering — I have been busy. (I will shortly also reply to gpuccio’s most recent response to me.)
A little online searching will disclose that there are various definitions, but that generally a GA is an algorithm with a representation of a finite-size population of genomes that reproduce and have multiple sites that mutate, recombine, and undergo natural selection. The fitnesses of the genotypes are chosen so that they are largest when some desired optimization problem is solved.
In other words, a GA is a standard multilocus Wright-Fisher model (or else Moran model) of evolution, except for the particular choice of fitness function.
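To make this concrete, here is a minimal toy GA of the kind described above: a finite population of binary genomes, fitness-proportional (Wright-Fisher-style) resampling, and per-site mutation. This is a hypothetical illustration of the general scheme, not anyone’s actual model; the fitness function (count of 1-bits) is an arbitrary toy optimization problem, and all parameter values are made up for the example.

```python
import random

def evolve(pop_size=100, genome_len=50, mut_rate=0.01, generations=200, seed=1):
    """Minimal Wright-Fisher-style GA: fitness-proportional reproduction
    over a finite population of binary genomes, with per-site mutation.
    Fitness here is simply the number of 1-bits (a toy optimization target)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]

    def fitness(genome):
        return sum(genome)  # toy problem: maximize the count of 1s

    for _ in range(generations):
        # Fitness-proportional resampling of the next generation
        # (+1 so that a zero-fitness genome can still reproduce).
        weights = [1 + fitness(g) for g in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        # Per-site mutation: each bit flips independently with prob mut_rate.
        pop = [[(1 - b) if rng.random() < mut_rate else b for b in g]
               for g in parents]

    return max(fitness(g) for g in pop)

best = evolve()
```

Note that nothing in the loop is specific to “optimization” rather than “evolution”: swap in a different fitness function and the same code is an ordinary multilocus population-genetic model, which is exactly the point being made above.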
Mung draws some very strict distinction between a GA and a “model of evolution”. I have no idea what it might be, or why it is important. Don’t ask me — I’ve only been modeling evolution for 50 years now, but apparently this is not enough experience to understand what Mung is talking about.
The effect of declaring that a GA is not a model of evolution is to rule out using it to see whether there is a general rule that CSI cannot be put into the genome by the ordinary population-genetic processes of evolution. Ruling that GAs are off-limits for this purpose is arbitrary and unjustified.
It speaks to Gpuccio’s understanding of what science is when he does not see a problem with multiple self-contradictory definitions of, in this case, dFSCI.
It should rather be called Gpuccio-dFSCI and Joe-dFSCI to show what particular version is referenced.
Nobody expects everybody in ID to agree on everything but if you can’t even agree on the basics then that shows there are as many different versions of ID as there are ID supporters.
And the fact that they never thrash out their internal disagreements and present a unified definition they can all agree on again shows their lack of understanding of how to progress in science. Unless all can agree on the terms in use, everybody is talking about different things.
I don’t understand what you mean by “number of states the software generates”.
Do you mean per “individual”, or for one generation of a “population”?
There is no final target but I don’t understand “transition from an unrelated state”.
The simulation must start with something within the constraints of the “environment” the software resides in.
For instance if we were modeling biology on Earth, there is no point in attempting to model a biological configuration that only works at -200 F.
For the same reason, a computer simulation should not bother attempting to run code that has no chance of working in its “environment”.
Secondly, a computer simulation has the added problem that certain “code” may crash the “virtual world” while biology has no such equivalent.
The world is not instantly destroyed by the birth of any single biological organism, but in a computer simulation, that is something that needs to be addressed.
g) I will use a random bit generator to come up with my replicator.
h) I will not run the code before I verify the “world will not end”.
Gpuccio 759
OK, this is very helpful, and I think it draws us close to the sense in which I find your argument circular.
First some comments about this scenario.
Now recall the statement in 651 which we agreed was an accurate statement of the dFSCI case.
“In the case of digital strings with a function, if the information linked to the function is complex enough to exclude empirically a probabilistic explanation, and if there is no known deterministic explanation why the string should happen to have a configuration that performs that function, then you can infer design”. (A)
How can (A) be falsified?
First consider the statement :
In the case of digital strings with a function, if the information linked to the function is complex enough to exclude empirically a probabilistic explanation, then the string is designed.
(this is (A) without the clause “no known deterministic explanation”)
If we come across a functional, digital string which is too complex to have a probabilistic explanation there are four options:
a) There is a known mechanism and the origin is designed. This is quite common.
b) There is a known mechanism and the origin is not designed. This is also quite common. The London temperate record would be an example.
c) There is no known mechanism and the origin is designed. This is very uncommon; in fact it is pretty much the definition of magic or a miracle.
d) There is no known mechanism and the origin is not designed. The type of scenario you describe. This is also very uncommon and a major scientific mystery.
All four options (a, b, c, d) would be relevant evidence: a and c would support (A); b and d would be evidence against it. However, c and d are bizarre and extraordinarily uncommon, because our current state of knowledge is such that we can almost always identify a mechanism linking an origin to an outcome, and, as discussed above, it would be hard to be sure that an origin is an origin and hard to tell c from d. They can effectively be ignored as sources of evidence. This leaves a and b.
But if you now add the “no known deterministic explanation” clause back in, to get back to statement (A), then you have said you will not consider cases of b. So you are left with only a as a realistic source of evidence for (A).
Joe 761
Joe, as you are so charming and always ready to listen to the other point of view, I am going to give you a free English lesson. The words of the English language are divided into parts of speech such as verb, noun, adjective, preposition, adverb, and conjunction. These parts of speech are mutually exclusive. An A cannot be a B if A and B are different parts of speech.
Now in the example you give the word design is a verb (the word “to” at the beginning of the definition is a handy tip). The word mechanism is a noun. So in your example design cannot be a mechanism. It may be an activity done according to a mechanism – but this is not what you asserted back in 755.
You already have evidence that gpuccio, myself and others in this debate, do not use “design” to mean mechanism in the sense that you do.
In this debate we have two groups of mechanisms, design and non-design.
e.g. design.mechanism_001, design.mechanism_002, etc.
For our side to claim “non-design” as a mechanism would leave you no “specific processes” you can investigate and falsify.
The same applies to your side in that we need “specific processes” of design that we can investigate and falsify.
Your side has provided no “specific process” supporting the design position.
This whole debate has centered on processes described by our side.
Where are yours?
Putting GAs off limits is neither arbitrary nor unjustified. If GAs are admitted as models of evolution, the ID hypothesis becomes superfluous.
But Joe, responding to the environment is an “evolutionary mechanism”, i.e. it’s ours!
What you have described as design processes are really more equivalent to what a mechanic does in a garage.
I want to know the processes used by the designer of life, not a human being discovering what the designer has done and altering the design.
An automotive engineer solves questions that apply to design issues, not to service technician issues.
How did the designer design?
How do you design biology for an unknown future without trial and error?
If the designer uses trial and error, how is that different from non-design trial and error?
Give me processes Joe.
What is the process used for foreseeing the future environment in order to even install responses to environmental cues?
I think you and Mung should clarify what you actually mean by a GA.
We again have different meanings for the same term.
Why don’t you and Mung flowchart a GA so we can have some understanding of what you mean by the term.
Joe 768
It is so kind of you to spend time preparing these amusing faux pas of logic and misuse of dictionaries – much appreciated. I think I have spotted all the mistakes – but do tell me if I have missed any.
Originally you wrote:
Then to back this up you wrote:
Equating a verb with a noun. (I got that one last time).
When challenged you switched to a different definition of design (that was easy to spot):
This turns out to be one of ten definitions of design as a noun (a bit of research required – but the letter b at the beginning was a clue).
It also turns out that plan has four definitions as a noun, one of which is “method for achieving an end” (a bit harder – no clue provided).
So a design can be many things other than a mechanism. Therefore it is false that:
Design is a mechanism BY DEFINITION
I think I got all the errors. But I didn’t understand the point of all the insults. Were they another game?
There are lots of different GAs modelling different aspects of evolution. What they all have in common is the ability to test hypotheses regarding the probability of functional change.
The problem they pose for ID is that they directly test the probability estimates of Dembski and Behe.
Hence the necessity of ruling them out by definition.
I simply find it amusing that ID advocates base their entire argument on abstracting chemistry to “information,” then estimating the probability of information being created by an evolutionary process, but ruling out any mathematical test of the probabilities.
Although mathematical models of complex phenomena are always incomplete, it is interesting that Dembski and Behe think their own models of probability are infallible.
My own thought is that if Joe or gpuccio or Dembski or Behe think that existing GAs are deficient, they should build their own and argue that their models capture something currently being missed.
Of course Behe once did something along those lines and was quizzed on it at Dover. His own model failed to support his claim of Edginess.
Gpuccio,
It’s a given that we are entitled to our opinions. The question is whether each of us can justify his opinions.
No, because guided evolution via common descent doesn’t guarantee the existence of an objective nested hierarchy. You have to make additional assumptions which amount to stipulating that the designer acts in a way that is indistinguishable from unguided evolution. How do you justify the assumption that the designer is an evolution mimic?
You create a gap by assuming, against the evidence, the absence of selectable intermediates. Having created the gap, you now need a Designer to fill it, so you invent one. The gap is your only reason for invoking a Designer, and it is an invented gap.
That’s the problem with ID: their models don’t accurately take physics into account, unless doing so makes something more improbable instead of less!
The designers of Stonehenge did not use the same processes as Lockheed engineers used to design the F-35.
The Lockheed designers could probably design Stonehenge, but being able to move stones doesn’t mean you have the capabilities and tools required to design a supersonic fighter.
The designer of life had a bigger challenge than either of them.
The first challenge of the designer of life was, “What will my design face in future environments when I can’t predict what that environment will look like?”
Well, this and recombination. Recombination has the feature that both parts have separately been through the ‘purifying’ sieve already. They’ve never been through it in harness, of course, but two individually benign elements are more likely to reach a successful combination than a pairing where one or both, or their immediate neighbourhood, is detrimental. A sequence that cannot be reached from either parent by point mutation, due to surrounding detriment, can be reached by recombination. Point mutation is probing at the same time, of course. Every change is.
Behe’s ‘Edge’ is based on two serial mutations, but there is a significant probability of the two mutations existing separately in the population and recombining, which needs to be added in. As with the ‘birthday paradox’, it takes surprisingly few instances to generate a significant probability of co-occurrence.
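To illustrate the birthday-paradox point numerically (this is the classic toy calculation, not a population-genetic model of the two-mutation case): the probability that at least two of n independent draws from 365 equally likely values coincide grows much faster than intuition suggests.

```python
def collision_prob(n, days=365):
    """Probability that at least two of n independent uniform draws
    from `days` possible values coincide (the birthday paradox)."""
    p_distinct = 1.0
    for i in range(n):
        # i values are already taken, so the next draw avoids them
        # with probability (days - i) / days.
        p_distinct *= (days - i) / days
    return 1.0 - p_distinct

# With only 23 draws from 365 possibilities, a collision is already
# more likely than not.
p23 = collision_prob(23)
```

The same counterintuitive growth applies to co-occurrence of rare variants: the number of pairwise chances for two lineages to meet scales roughly with the square of the number of carriers, so “surprisingly few instances” can suffice.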