What has Gpuccio’s challenge shown?

(Sorry this is so long – I am in a hurry)

Gpuccio challenged me and others to come up with examples of dFSCI which were not designed. Not surprisingly, the result was that I thought I had produced examples and he thought I hadn’t. At the risk of seeming obsessed with dFSCI, I want to assess what I (and hopefully others) learned from this exercise.

Lesson 1) dFSCI is not precisely defined.

This is for several reasons. Gpuccio defines dFSCI as:

“Any material object whose arrangement is such that a string of digital values can be read in it according to some code, and for which string of values a conscious observer can objectively define a function, objectively specifying a method to evaluate its presence or absence in any digital string of information, is said to be functionally specified (for that explicit function).

The complexity (in bits) of the target space (the set of digital strings of the same or similar length that can effectively convey that function according to the definition), divided by the complexity in bits of the search space (the total number of strings of that length) is said to be the functional complexity of that string for that function.

Any string that exhibits functional complexity higher than some conventional threshold, that can be defined according to the system we are considering (500 bits is an UPB; 150 bits is, IMO, a reliable Biological Probability Bound, for reasons that I have discussed) is said to exhibit dFSCI. It is required also that no deterministic explanation for that string is known.”

(In some other definitions Gpuccio has also included the condition that the string should not be compressible)
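To make the ratio concrete: read as the standard measure, functional complexity in bits is -log2(|target space| / |search space|). The sketch below is my own toy illustration (not gpuccio’s code), using a deliberately trivial “function”: a binary string of length 100 counts as functional if its first 8 bits match a fixed pattern.

```python
import math

def functional_complexity(target_count: int, search_count: int) -> float:
    """Functional complexity in bits: -log2(|target space| / |search space|)."""
    return -math.log2(target_count / search_count)

# Toy "function": binary strings of length 100 whose first 8 bits
# must equal a fixed pattern.  (Illustrative numbers only.)
length = 100
fixed_bits = 8
search_space = 2 ** length                  # all strings of that length
target_space = 2 ** (length - fixed_bits)   # strings matching the fixed prefix

fc = functional_complexity(target_space, search_space)
print(fc)  # 8.0 -- far below a 150- or 500-bit threshold
```

Under this reading, a string would exhibit dFSCI only when `fc` exceeds the chosen threshold (150 or 500 bits) and the other conditions hold.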

These ambiguities emerged:

Some functions are not acceptable, but it is not clear which ones. In particular, I believe that functions have to be prespecified (although Gpuccio would dispute this). Also, functions which consist of identifying the content of “data strings” (a term which is itself not so clear) are not acceptable, because the string in question could have been created by copying the data string.

The phrase “no deterministic explanation for that string is known” is vague. It is not clear in how much detail, and with what certainty, the deterministic processes have to be known. For example, it appears from the above that the mere possibility that the string in question might have been copied from the string defining the function, by some unknown method, is sufficient to count as a known deterministic explanation. This implies that it is really sufficient to be able to conceive of the very vague outlines of a deterministic process to remove dFSCI. I think this amounts to another implicit condition: no causal relationship between the function and the string.

Lesson 2)  dFSCI is not a property of the string.

It is a relationship between a string, a function and an observer’s knowledge. Therefore, it may be that dFSCI applies to a string for one observer with a certain function but not for another observer with a different function. The rules for deciding which function to use are not clear.

Lesson 3) The 100% specificity of the dFSCI/design relationship is not observed outside examples created by people to test the process.

“To assess the dFSCI procedure I have to “imagine” absolutely nothing. I have to assess dFSCI without knowing the origin, and then checking my assessment with the known origin.”

When challenged he was unable to name any instances of this happening outside the context of people creating or selecting strings to test the process, as in our discussions. This is important, as the dFSCI/design relationship is meant to be an empirical observation about the real world, applicable to a broad range of circumstances (so that it can reasonably be extended to life). If it is only observed in the very special circumstances of people making up examples over the internet, then the extension to life is not justifiable. To give a medical analogy: it might well be that a blood test for cancer gives 100% specificity for rats in laboratory conditions. This is not sufficient to have any faith in it working for rats in the wild, much less people in the wild. Below I discuss what is special about the examples created by people to test the process.

A Suggested Simplification for dFSCI

dFSCI says that given an observer and a digital string where:

1) The observer can identify a function for that string

2) The string is complex in the sense that if you just created strings “at random” the chances of it performing the function are negligible

3) The string is not compressible

4) The observer knows of no deterministic explanation for producing the string

Then in all such cases if the origin eventually becomes known it turns out to include design.

Given the rather lax conditions for “knowing of a deterministic mechanism” that emerged above, surely (2) and (3) are just special cases of (4): if (2) or (3) failed to hold, then a deterministic mechanism for creating the string would be conceivable.
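For condition (3) in particular, compressibility can be probed mechanically: a string that compresses well admits a short generating description, which is exactly the kind of conceivable deterministic mechanism that condition (4) excludes. A minimal sketch of that check (my illustration, not part of gpuccio’s definition), using Python’s `zlib`:

```python
import os
import zlib

def compression_ratio(s: bytes) -> float:
    """Compressed size / original size; well below 1.0 means highly compressible."""
    return len(zlib.compress(s, 9)) / len(s)

periodic = b"AB" * 500         # trivially produced by a short deterministic loop
random_ish = os.urandom(1000)  # incompressible with overwhelming probability

print(compression_ratio(periodic))    # well below 1.0
print(compression_ratio(random_ish))  # near (or slightly above) 1.0
```

A low ratio is evidence that a simple deterministic process could have produced the string, so conditions (3) and (4) fail together.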

So the dFSCI argument could be restated:

Given an observer and a digital string where:

* The observer can identify a function for that string

* The observer cannot conceive of a deterministic explanation for producing the string

Then in all such cases if the origin eventually becomes known it turns out to include design.

Conclusion

There are two main objections to the ID argument:

A) There are deterministic explanations for life.

B) Even if there were no deterministic explanations, it would not follow that life was designed.

For the purposes of this discussion I will pretend (A) is false and focus on (B).

No one disputes that it is possible to detect design. The objectors to ID just believe that (B) is true. The correct way of detecting design is to compare a specific design hypothesis with alternatives and assess which provides the best explanation. This includes assessing the possibility of the designer existing and having the motivation and ability to implement the design. If no specific hypothesis is available then nothing can be inferred.

So is the dFSCI claim above true and if so does it provide a valid alternative way of detecting design?

The trouble is that there is a dearth of such situations. One of the reasons for this is that digital strings do not exist in nature above the molecular level. At any other level it is only a human interpretation that imposes a digital structure on analogue phenomena. The characters you are reading on this screen are analogue marks on the screen; it is you that is categorising them into characters. So all such strings are created by human processes. It follows that design is a very plausible explanation for any such string: people were involved in the creation and could easily have designed the string. If you add the conditions that the function must be prespecified and that there should be no causal relationship between the function and the string, then design is going to be by far the best explanation. It goes further than that. It also means there are almost no real situations where someone is confronted with a digital string without knowing quite a bit about its origin – which is presumably why Gpuccio can only point to examples created/selected by bloggers.

What about the molecular level? Here there are digital strings that are not the result of human interpretation, and human design is massively implausible (except for a few very exceptional cases). The problem now is that carbon chains are the only digital strings with any kind of complexity, and these are just the ones we are trying to evaluate. There are no digital strings at the molecular level with dFSCI except for those involved in life.

So actually the dFSCI argument only applies to a very limited set of circumstances where a Bayesian inference would come to the same conclusion.

493 thoughts on “What has Gpuccio’s challenge shown?”

1. The use of language turns out to be an important tactic of the ID/creationists.

Those who can recall the heyday of the “scientific” creationism vs. evolution debates back in the 1970s and 80s will remember the most common settings and the atmospheres in which those debates took place.

They were often sponsored by local “Creation Science Associations” – these sprung up all over the country – and the audiences were bussed in from surrounding churches to support the creationist debater.  They hooted and cheered in all the right places when the creationist debater evoked his practiced, sneering jabs at evolution.  It was pure political theater.

The rules of engagement in these debates were usually written by the creationists, and they usually included a rule forbidding the “evolutionist” from bringing up the connection between “scientific” creationism and sectarian beliefs.

The other main feature of these debates was that the creationists insisted on the language to be used. Every taunt and every critique of science was couched in the language of the creationists. Their opponents were expected to answer using creationist concepts and language. Any attempts on the part of the opponent to correct the concepts and language would be met with some kind of implication that the person, even though an expert in the science, didn’t understand science and the methods of science. Gish and Brown used this tactic quite aggressively.

ID/creationists are still using these tactics today on their websites.  That UD website is a classic example, as are the AiG and ICR sites.  Over at UD they appear to have recruited the hecklers also.  There are at least two flying monkeys over there who are constantly flinging feces at anyone who takes issue with any of the big gurus.

All the language over there is made up ID/creationist language.  They have their own “information”, their own “function,” their own “entropy,” their own “second law of thermodynamics,” their own “kinds,” their own rules about atoms and molecules, their own “mathematics,” their own “fitness landscapes;” no matter what it is, they have their own version of it and expect their opponents to speak in their language.  They don’t acknowledge or accept any of the concepts and language developed around science.

So anyone who wants to debate them is expected to answer in their language and put up with their hecklers.  Answers that use concepts from science are not recognized or acknowledged.  Whenever any ID/creationist opponent makes a good point, they are immediately bombarded with the Gish Gallop in the form of huge dumps of copy/paste junk along with a condescending scolding.

Not only is ID a child of “scientific” creationism by political design in order to get around the courts, it contains all the same language, the socio/political tactics, and the same sets of fundamental misconceptions and misrepresentations.

And now they are trying to use the Gish Gallop to distance themselves from “scientific” creationism while claiming that they are intellectually legitimate, with no sectarian motives.

2. Mike Elzinga has pointed out that UD commenters have their own definitions of many terms. I’d add “model of evolution” as another one that they have assumed that they can redefine.

3. If gpuccio really is able to identify dFSCI, then I would ask him to consider the following experiment:

First, examine the dog genome for obvious examples of dFSCI. I should think that ID predicts, implicitly, that there ought to be at least one example unique to dogs.

Second, examine the genome of a species widely believed to be of another “form,” yet close to the dog lineage (at least by evolutionary thinking), like that of the domesticated house cat. Again, there ought to be at least one unique example.

Third, use the dFSCI data from the previous steps to determine the status of an animal that may or may not be an intermediate form, like that of true foxes — is Vulpes a product of “micro-evolution” or of the Intelligent Designer(s)?

Not only would this be a practical example of dFSCI’s ability to detect design, but it would also be the first step towards an ID version of Linnaean taxonomy — a map of the actual islands of functionality found in nature.

4. I’d add “model of evolution” as another one that they have assumed that they can redefine.

Yes indeed!

Henry Morris and Duane Gish hammered on this from the beginning. Evolution was defined as organisms getting better and better and better; progressing upward toward “higher” states, toward more perfection, and toward lower entropy.

Then Morris threw in the ultimate refutation of “evolution” – as defined by the creationists – by redefining entropy and the second law of thermodynamics and asserting that this “fundamental law of the universe” directly refutes “evolution” (as defined by creationists).

They blasted this at biology teachers and at the general public. I have in my files multipage newspaper clippings of the generous coverage given to creationists by local newspapers at the time. In these newspaper articles creationists – claiming to have doctorates of some sort – would be laying out all these definitions and carefully painting the contradictions with their caricatures of science.

They “uncovered the hidden skeletons in the closets of the science community;” and with copious use of innuendo, they labeled the scientific community as a bunch of deceivers. The books and booklets published by the ICR were the classic, paranoid diatribes against the “dark secrets” of evolutionists. Evolutionists were “exposed” admitting that they didn’t really believe there was any evidence for evolution.

Everything coming out of the creationist movement was fabricated. My impressions at the beginning of all that were that it was so stupid that nobody would take it seriously. I was wrong; as were many others in the science community. We didn’t know how extensive or organized it already was. We were simply very naive about those kinds of socio/political tactics.

I still look back and shake my head at the brazenness of Duane Gish harassing teachers in front of their students. Schools were much more open back then; and Gish would simply latch onto a fundamentalist student to “invite” him to visit their biology class.

5. Mung (#949) doesn’t “get it”:

[me:] Take a population of one million mosquitos. If allele A at one locus has a gene frequency of 0.0001, and allele B at another locus has a frequency of 0.0001 also, then if they are associated at random, the haplotype AB would basically not exist in the population, as it would have an expected frequency of 0.00000001. Now suppose that A and B are favorable. Each rises to a frequency of 0.01. Now recombination between these loci would create AB haplotypes at a frequency of 0.0001, which is high enough that they really would exist in the population.

[Mung:] Assume that as ‘A’ increases in frequency ‘a’ decreases in frequency. Assume that as ‘B’ increases in frequency ‘b’ decreases in frequency.

So while you have increased the probability of AB you have decreased the probability of ab.

And the probability of Ab and Ba?

I still don’t understand why you think this is some great “probability increaser.” It’s not.

However dumb I may be, even I realize that the four haplotype frequencies have to add up to 1. So you can’t increase all of them at the same time.

The issue was whether a new type could come into existence as the result of natural selection. These are haplotypes. In the original population there were 1,000,000 mosquitos and an expected frequency of the AB haplotype of 0.00000001, which means basically no AB’s at all. After gene frequencies of A and of B increase by natural selection, now there can very easily be AB haplotypes. So we have answered the question, and you are wrong: natural selection creates the conditions for AB to exist.
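The arithmetic in this example is easy to check. Under random association (linkage equilibrium) the expected AB haplotype frequency is simply freq(A) × freq(B), and the expected number of copies is that frequency times the population size:

```python
pop_size = 1_000_000  # mosquitos, as in the example above

def expected_ab(freq_a: float, freq_b: float, n: int) -> tuple[float, float]:
    """Expected AB haplotype frequency and count under random association."""
    freq_ab = freq_a * freq_b
    return freq_ab, freq_ab * n

# Before selection: A and B each at frequency 0.0001
freq, count = expected_ab(0.0001, 0.0001, pop_size)
print(freq, count)   # about 1e-08 and 0.01 -- essentially no AB haplotypes

# After selection raises each allele to 0.01
freq, count = expected_ab(0.01, 0.01, pop_size)
print(freq, count)   # about 1e-04 and 100 -- AB haplotypes now really exist
```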

(To anticipate the usual objections:) Yes, recombination is involved too, but without the natural selection it would not make ABs. The semantic quibble that natural selection doesn’t do it without other evolutionary forces such as recombination is just that, an irrelevant quibble.

And yes, the frequencies of some of the other haplotypes will decline. That is very sad for them, but this will always happen when the population’s mix of genotypes changes.

Finally, yes, it is merely a matter of probabilities. Population genetics kind of tends to be that way.  🙂

So Mung’s (#931) assertion that natural selection is not special because it both increases and decreases frequencies of genotypes, and by implication can do nothing, is wrong. When land vertebrate forelimbs evolved from fins, the frequencies of alleles that made fins decreased as the frequencies of alleles that made limbs increased, and that is not a problem for evolution.

6. Mung has a “gotcha” moment:

Me: A common feature of Creationist argumentation is an inability to distinguish the continuous and the discrete…

One could change one letter of a book every 1000 years.

Mung:That would be a discrete change. What’s your point?

Link two sentences somewhat separated in my original post. Point out that genetic change is digital at the lowest level and – bingo! – my point is neatly obfuscated by definology, or so Mung appears to believe, though he rather helps to illustrate it (and Mike’s, that word gaming is at the heart of much of the argumentation). I’m sure there is no such thing as a continuous sound generated from an MP3 file either, if you had responsive enough speakers and ears. And let’s not forget quanta!

My continuous/discrete distinction related to biological categories, of which I provided examples – species, ‘kinds’, or protein function, sets around which we can attempt to draw our wiggly Venn-diagram lines. Incremental genetic change can (on the evolutionary paradigm) move populations hither and thither. The wiggly lines aren’t fixed – we, categorisers, draw them. If we lived long enough, we’d have to keep redrawing them. The Creationist, however, sees these lines as indicative of some fundamental essence: ‘macro-discrete’ categories between which Mung’s pedantically ‘micro-discrete’ genetic change cannot travel.

7. I wanted to discuss one more detail which will end up being one more reason why this thread has run its course. It had been suggested (by patrick on December 4 at 10:29pm above) that the `tierra` system would be a simulation of evolution by natural selection, mutation, and some other genetic forces that might meet gpuccio’s conditions. Thus it could be used to see whether dFSCI would arise in such a simulation.

But there is a problem: dFSCI requires an assessment of “function”. tierra has no clearly definable function other than survival — whether a particular genotype persists. I cannot easily see how we could use that to assess “function” for dFSCI. Survival is assessed purely empirically, by whether the genotype comes to take over the tierra “world”, or whether it persists in it. That cannot be evaluated prospectively: one cannot compute fitness by examining the genotype and using a table of genotype fitnesses (as one can in more conventional population-genetics models of evolution). Thus there is no way to know whether the fitness is very high just by looking at the genotype.
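For contrast, here is what “computing fitness prospectively from the genotype” looks like in a conventional model: fitness is a table lookup, so selection can be applied without running the world forward. The sketch below (with hypothetical fitness values) applies one round of viability selection at a single locus:

```python
# Fitness is read from a table, so it can be evaluated prospectively
# from the genotype alone -- unlike tierra, where "fitness" is only
# whatever empirically persists.  Values are hypothetical.
fitness = {"AA": 1.0, "Aa": 0.95, "aa": 0.80}

def next_gen_freqs(freqs: dict) -> dict:
    """Genotype frequencies after one round of viability selection."""
    mean_fitness = sum(freqs[g] * fitness[g] for g in freqs)
    return {g: freqs[g] * fitness[g] / mean_fitness for g in freqs}

before = {"AA": 0.25, "Aa": 0.50, "aa": 0.25}  # Hardy-Weinberg at p = 0.5
after = next_gen_freqs(before)
print(after)  # AA and Aa rise relative to aa
```

Nothing like the `fitness` table exists for a tierra genotype, which is the crux of the problem.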

So this suggestion will probably not fly. gpuccio might (wearing the “Oracle of Naturalness” hat) declare it sufficiently “natural”, but functionality would probably prove impossible to assess in a way that gpuccio would approve.

gpuccio has seemingly also ruled out GA-type models (although keiths has pointed out contradictory statements gpuccio has made on this point, and there is as yet no clarification of the matter by gpuccio).

So we seem still to be in the situation that there is no tractable model that gpuccio would approve of that could be used to investigate whether gpuccio’s claims are plausible.

8. Mung (UD Jerad thread #987) has continued to try to (mis)characterize natural selection as ineffective. My comments were directed toward one issue: whether natural selection could bring about the appearance of new combinations of alleles, the example being a haplotype AB.

The example I gave did require recombination to be present, to put together the two alleles once they are present in high enough frequency to make this event probable (another possibility involves mutation at one locus once the allele at the other is frequent enough). Yes, of course it is possible for that event to occur when A and B are very rare — it is just very improbable for AB to exist in the population in the example I gave. Mung had made it sound as if natural selection had no effect on whether new combinations of alleles would occur. It would have a big effect in this example. So Mung was wrong about that.

As I have explained above, I made no assertion that this meant that all haplotypes increase as a result. I explain above that if some haplotypes become less frequent, that is no problem. I did not use the term “combinatorial probability generator” nor do I think that is a useful phrase.

The rest of Mung’s comment is word games. The issues are not those Mung imagines, and I have no interest in spending time on those.

9. Mung has corrected me (in UD Jerad thread #988) on my wording on two points: I should have said “survival and reproduction” instead of just “survival”. And yes, the “function” needed is the function of a digital organism within tierra, not the function of the whole simulation.

If there is some other “function” Mung has discerned for the genotypes in tierra, it would be interesting to know what it is — I don’t see it.

Perhaps when gpuccio has looked more closely at tierra, we can be told whether there is any other “function” available in tierra, and gpuccio can clarify whether or not a simulation like tierra can be used to assess dFSCI. Until that is clarified there is little more to say.

10. One of the more interesting features that arise in some tierra runs is parasitism followed by hyperparasitism.  I wonder if those qualify as functions.

As you say, though, any discussion is blocked pending clarification from gpuccio.

11. Actually Joe, I was thinking of Avida and not Tierra, but I got lucky, lol. After all, what’s an evolutionary simulation without reproduction!

I need to take a look at Tierra specifically. I haven’t seen gpuccio in a few days =p. I hope he’ll return.

Is there no scenario in Tierra according to which some organisms leave more offspring than others?

Do you consider Tierra to be an evolutionary simulation? If not, why not? Is it because you can see no implementation of natural selection?

12. Well, I believe I have answered Keiths’s comment in my post #941.

You still haven’t resolved the contradiction. I responded to your #941 here:

keiths on December 6, 2012 at 1:19 am said:

This is becoming really boring. I have given very clear definitions of what NS and IS are.

It’s a shame that you find consistency so boring, because science depends on it. If you want your argument to be taken seriously, it needs to be consistent, and that means correcting your inconsistencies when they are pointed out to you.

The correct concept is as follows: It is completely wrong to model NS using IS, because they have different form and power. [emphasis yours]

Now you have reversed yourself:

So, what about a GA that models NS? It is a model which implements parameters appropriate for what is being modeled. It will use intelligent selection, but giving it a mathematical form which mimics true natural selection as we can observe it in nature. In that sense, it can give useful information.

Those statements contradict each other. Which do you affirm, and which do you retract?

13. Gpuccio,

You also haven’t responded to this comment:

keiths on December 5, 2012 at 1:48 am said:

To be able to explain that by a designer (the only credible explanation available) I have to assume that the designer worked in a way that explains the nested hierarchies.

And in particular, the objective nested hierarchy. The problem is that you know nothing about the designer, so you can’t independently justify your assumptions. He’s an unknown designer (maybe more than one), with unknown abilities, unknown limitations, and unknown goals.

The only “justification” you offer for your assumptions is that you want to fit your hypothesis to the evidence of the objective nested hierarchy. But then you’re committing the Rain Fairy fallacy.

It’s the equivalent of this:

You are arguing against a Rain Fairy proponent. You point out that under the Rain Fairy hypothesis, there is no reason to expect low-pressure systems to rotate in opposite directions in the two hemispheres. Any wind pattern is compatible with the Rain Fairy hypothesis, so it is completely arbitrary to assume that the Rain Fairy just happens to choose (or is somehow limited to choosing) the counter-rotating scheme.

Your opponent responds, “I am making a simple and reasonable assumption. I assume that the Rain Fairy likes symmetry, and this is why low-pressure systems rotate in opposite directions in the two hemispheres.”

Are you persuaded by the Rain Fairy advocate? If not, then why should we be persuaded by your argument, which is logically identical?

To summarize, the problem is that if you know nothing about the designer, you can’t independently justify any assumptions you make about him. And if you try to force-fit your hypothesis to the evidence by simply tacking on arbitrary assumptions, then you are committing the Rain Fairy fallacy.

14. My statement was:

“The correct concept is as follows: It is completely wrong to model NS using IS, because they have different form and power.”

That is absolutely true for all the GAs you guys propose, which are based on IS and try in no way to realistically model NS.

In other words, you would like to change the meaning of your statement without admitting that you are changing anything.

The correct concept is as follows: It is completely wrong to model NS using IS, because they have different form and power.   [emphasis yours]

You want your statement to mean this:

It is wrong to model NS using IS unless the model is realistic.

Your refusal to admit your mistake is just silly, but can I at least get you to agree, for the sake of future argument, that the revised statement is correct?

15. I am really curious about the power of intelligent selection. I have asked for several years why gpuccio thinks IS is superior. I’m specifically interested in how IS would apply in the Lenski experiment.

What I want to know is how the intelligent selector  knows to favor neutral precursor mutations, those that confer no immediate reproductive advantage but which enable later mutations to become adaptive.

How does this work?

16. Keiths:

I will not answer your “argument” about the Rain Fairy. I find it simply stupid, with all respect.

Gpuccio,

You won’t answer it because you can’t answer it. Pretending it’s stupid is just a way of saving face.

You’re stuck between a rock and a hard place:

1a. Unguided evolution is far better than ID at explaining the evidence of the objective nested hierarchy.

1b. Meteorology is far better than the Rain Fairy hypothesis at explaining the weather.

2a. The Designer is an unknown being with unknown abilities, unknown limitations, and unknown goals. ID therefore predicts nothing, and can be fitted to any set of facts about life by simply saying “that’s how the Designer did it.”

2b. The Rain Fairy is an unknown being with unknown abilities, unknown limitations, and unknown goals. The Rain Fairy hypothesis therefore predicts nothing, and can be fitted to any set of facts about the weather by saying “that’s how the Rain Fairy does it.”

3a. To bring ID into alignment with the biological evidence, you have to make a bunch of assumptions about how the Designer operates.

3b. To bring the Rain Fairy hypothesis into alignment with the meteorological evidence, you have to make a bunch of assumptions about how the Rain Fairy operates.

4a. There’s no independent justification for the assumptions you add to the ID hypothesis. You’re just forcing ID to fit the evidence. All the work is being done by your arbitrary assumptions, not by the theory itself.

4b. There’s no independent justification for the assumptions you add to the Rain Fairy hypothesis. You’re just forcing the Rain Fairy hypothesis to fit the evidence. All the work is being done by your arbitrary assumptions, not by the theory itself.

Where does all of this leave you? You’re in the embarrassing position of either

a) supporting both ID and the ridiculous Rain Fairy hypothesis, or

b) admitting that unguided evolution fits the evidence far better than ID, just as meteorology fits the evidence far better than the Rain Fairy hypothesis.

Rather than facing up to this, you’ve chosen to avoid the dilemma by pretending that the argument is stupid — and hoping that someone will believe you.

17. keiths: Gpuccio,

You won’t answer it because you can’t answer it. Pretending it’s stupid is just a way of saving face.

You’ve stated the case very well.

ID doesn’t bring anything to the table in the way of mechanisms and is thus useless as an aid in understanding why life responds to the environment it finds itself in.

ID also doesn’t answer another important question and that is, “How does the designer know what will be required in the future?”

18. kairosfocus weighs in on the Rain Fairy argument. He predictably makes multiple references to strawmen, then just as predictably proceeds to set up and topple his own strawman.

(And, on the empirically observable sign FSCO/I in its various forms including dFSCI, it is abundantly confirmed to be reliable at that empirically with billions of cases in point. That is, there are no credible false positives for FSCO/I beyond 500 – 1,000 bits…

In other words, if you assume that evolution isn’t a credible explanation of dFSCI, you will conclude that evolution isn’t a credible explanation of dFSCI.  Take that, evolutionists!

So, while KS et al do not wish to accept it, we have a highly reliable sign that is best explained on the known and observed causal factor, design.

We know that design can produce functional complexity, but that doesn’t justify the conclusion you’re leaping to: that only design can produce functional complexity. Where’s your evidence?

I’ve shown that unguided evolution explains the evidence literally trillions of times better than intelligent design. I even challenged you to respond, placing absolutely no restrictions on the venue, format, or length of your response. You’ve been evading the challenge ever since. Why is that?

Now, the next thing is that a strawman demand is set up that a designer of life, to be acceptable to KS et al, must start from scratch every time, instead of using and even modifying a code base.

I’ve made no such demand. You would know this if you had bothered to familiarize yourself with my argument before presuming to criticize it.

I take issue with Gpuccio’s assumption about design reuse not because I think he should have assumed the opposite, but because I think he is not entitled to assume anything at all, unless he provides independent justification for his assumptions. Otherwise he is committing the Rain Fairy fallacy. See this comment.

ID can be made to fit design with reuse, or design from scratch. It can be made to fit modular design, or monolithic design. It can be made to fit anything at all, by simply stipulating that the Designer must have done it that way. ID is infinitely malleable. It fits anything, which means that it predicts and explains nothing. That is its great weakness.

Evolution is the better theory.

19. petrushka,

I am really curious about the power of intelligent selection. I have asked for several years why gpuccio thinks IS is superior.

Think WEASEL. Think Genetic Algorithms.

20. Toronto:

ID doesn’t bring anything to the table in the way of mechanisms…

And yet you managed to post on this site. No intelligent design required.

ID also doesn’t answer another important question and that is, “How does the designer know what will be required in the future?”

The people who designed this web site had no idea what you would post here on December 13, 2012. How on earth did they manage to design for what would be required in the future?

21. Mung: “The people who designed this web site had no idea what you would post here on December 13, 2012. How on earth did they manage to design for what would be required in the future?”

You seem to be on our side again with your analogy.

The website designers foresaw the need to host comments and designed a site that would accept a combination of ASCII characters to represent those comments, but they did not require any knowledge of what the actual configuration of those ASCII characters in my comment would be.

Tell me what the weather will be like in a hundred years.

There is no spec anyone could write that would give us enough foresight to modify the “information” in every single biological design that would require change to exist in that unknown future environment.

How does the designer know what to change for an unknown to him, future environment?

How many organisms would be affected by this change?

Who will be affected more, herbivores or carnivores?

Should he hedge his bet and make 70% of the organisms herbivores to balance out predator/prey relationships?

22. There is no spec anyone could write that would give us enough foresight to modify the “information” in every single biological design that would require change to exist in that unknown future environment.

So?

How does the designer know what to change for an unknown to him, future environment?

There’s no rule of design that demands that designers must plan for all possible future contingencies. Our universal experience with known designers indicates that there is no such requirement.

23. Explain what WEASEL is in your own words, and what it is intended to demonstrate.

WEASEL is a computer program written by Richard Dawkins as an exercise in demonstrating the “power of cumulative selection.” Strings of characters are copied and mutated, then compared to a target phrase; the strings that most closely resemble the target are selected to seed the next round, and the process repeats until a string exactly matching the target phrase is found.

There are various other versions of it in many different programming languages freely available on the internet.

It’s a fine example of the power of intelligent selection, though that’s hardly what Dawkins intended to demonstrate with it.

When your side comes up with a version of it that uses natural selection, rather than intelligent selection, we can compare the differences in “power” and answer petrushka’s question.
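The cumulative-selection loop described above can be sketched in Ruby (the thread's own language). The target phrase, population size, and mutation rate here are illustrative assumptions, not Dawkins' exact parameters:

```ruby
# WEASEL-style sketch: mutate copies of a string and keep whichever copy
# best matches a fixed target -- the "keep the closest" step is the
# intelligent selection under discussion.
TARGET  = "METHINKS IT IS LIKE A WEASEL"
CHARSET = ("A".."Z").to_a << " "

# Number of positions where s matches the target.
def score(s)
  s.chars.zip(TARGET.chars).count { |a, b| a == b }
end

# Copy s, randomly changing each character with the given probability.
def mutate(s, rate = 0.05)
  s.chars.map { |c| rand < rate ? CHARSET.sample : c }.join
end

current = Array.new(TARGET.length) { CHARSET.sample }.join
generation = 0
until current == TARGET
  offspring = Array.new(100) { mutate(current) }
  best = offspring.max_by { |s| score(s) }
  current = best if score(best) >= score(current)  # elitist selection
  generation += 1
end
puts "Matched the target in #{generation} generations"
```

Because selection is measured against a known target, the search converges in hundreds of generations rather than wandering the full space of 27^28 strings, which is the point Dawkins was making about cumulative versus single-step selection.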

24. Give us an operational definition of natural selection, so we will know what to shoot for.

25. Mung: “So?”

Design me a program.

Here’s what it has to do: ………………..

Your code is a critical component so let me know right away if you can’t meet any part of the spec. 🙂

26. Toronto:

Here’s what it has to do: ………………..

Program 1:

puts "……………….."

Program 2:

20.times { print '.' }; puts

Program 3:

def print_periods how_many
  how_many.times { print '.' }; puts
end

Program 4:

def print_char chr, how_many
  how_many.times { print chr }; puts
end

Your code is a critical component so let me know right away if you can’t meet any part of the spec.

First we’ll need to work together to write tests, that way we’ll know when the program has met your specification(s).
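As a sketch of what such a test might look like, here is a plain-assertion check against one assumed spec detail (twenty periods followed by a newline); the method is repeated so the file is self-contained, and the expected output is my assumption, not Toronto's actual spec:

```ruby
require "stringio"

# The method from the thread, repeated so this test file runs on its own.
def print_char chr, how_many
  how_many.times { print chr }; puts
end

# Capture whatever the block writes to standard output.
def capture_stdout
  old, $stdout = $stdout, StringIO.new
  yield
  $stdout.string
ensure
  $stdout = old
end

out = capture_stdout { print_char(".", 20) }
raise "spec not met" unless out == "." * 20 + "\n"
puts "print_char meets the assumed 20-period spec"
```

Writing the test first is the point: once the expected output is pinned down in code, "has the program met the specification?" stops being a matter of opinion.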

27. Mung: First we’ll need to work together to write tests, that way we’ll know when the program has met your specification(s).

🙂

28. The intelligent designer had better know what’s coming in the future and prepare for it.

If he doesn’t, he’ll be modifying our “code” 24/7.

In that case, science would be impossible since the intelligent designer will be interfering with natural processes on a daily basis.

Designers of fighter aircraft can’t ignore a future that’s decades away and neither can the intelligent designer.

29. Designers of fighter aircraft can’t ignore a future that’s decades away and neither can the intelligent designer.

What intelligent designer?

How about a designer of disposable razors or cotton swabs?

In that case, science would be impossible since the intelligent designer will be interfering with natural processes on a daily basis.

You don’t know that they aren’t. How would you know?

30. Toronto: In that case, science would be impossible since the intelligent designer will be interfering with natural processes on a daily basis.

Mung: You don’t know that they aren’t. How would you know?

That’s why ID is a non-scientific concept.

31. You don’t know that they aren’t. How would you know?

That, in a nutshell, is why ID is useless.

Once you have made the assumption that natural processes are capricious, you have abandoned any hope of doing science, because science is the business of finding regularities. The short answer is that you cannot be certain that nature is not capricious or that the fabric of reality is manipulated by the matrix masters.

But based on the last couple hundred years, that’s the way to bet. When theists get sick, most will go to a doctor who relies on reductionist scientific medicine.

EDIT:

All the sciency machinery of ID is engaged in the hunt for discontinuities. Gaps, if you will. Miracles by another name.

All the work and writings of Axe, Dembski, Behe, et al. are deployed to find events and phenomena that cannot currently be explained. Some argue that there is utility in that, because it sometimes suggests areas that need to be studied.

But it is highly inefficient, because science goes to those places anyway. And it is entirely ineffectual, because unexplored areas shrink.

32. Stumbled across the following post by gpuccio:

Intelligent selection is a powerful principle, as shown in bottom-up protein engineering.

There are two fundamental differences between IS and NS:

a) IS can select for any defined function, even if not immediately useful. NS can select only for those functions that give a reproductive advantage in a specific context (that is, an extremely tiny subset of all possible functions).

b) IS can select functions even at very low levels. IOWs, IS can recognize a function even in its raw manifestation, and then optimize it. NS requires that the function level be high enough to give a reproductive advantage at the phenotypic level.

Both points are extremely important, and both are consequences of the intervention of intelligence and purpose in the process. Moreover, bottom-up IS can well be integrated with top-down engineering in the design process.

All those possibilities are denied to non intelligent processes.
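gpuccio's claimed distinction can be put in code as a toy model. Everything here is an illustrative assumption (the candidate "function levels" and the advantage threshold are invented numbers, not biology): NS is modeled as blind to any function below a reproductive-advantage threshold, while IS can rank every candidate however weak.

```ruby
# Toy model of the claimed IS/NS asymmetry; all values are illustrative.
CANDIDATES   = [0.01, 0.05, 0.2, 0.6, 0.9] # raw "function levels"
NS_THRESHOLD = 0.5 # assumed level needed before any reproductive advantage

# NS, as modeled here, can only act on candidates above the threshold.
visible_to_ns = CANDIDATES.select { |level| level > NS_THRESHOLD }

# IS, as modeled here, ranks every candidate and picks the best raw
# manifestation to optimize further, however weak it is.
best_for_is = CANDIDATES.max

puts "NS can act on: #{visible_to_ns.inspect}"
puts "IS picks:      #{best_for_is}"
```

Whether real natural selection behaves like the thresholded version is exactly what the rest of this thread disputes; the sketch only makes the shape of gpuccio's claim concrete.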

33. petrushka:

Gpuccio is arguing that because no one has completed the painstaking research to find the history of protein domains, such histories cannot exist.

That is incorrect.

The painstaking research is in. That’s what allows us to identify the protein superfamilies. It is that same painstaking research which allows us to say that the intermediates are missing.

34. It is that same painstaking research which allows us to say that the intermediates are missing.

Someone has done research and concluded that billion-years-dead genomes are no longer available to sequence? I wonder if I could get a slice of that funding?

35. Allan:

Someone has done research and concluded that billion-years-dead genomes are no longer available to sequence? I wonder if I could get a slice of that funding?