What has Gpuccio’s challenge shown?

(Sorry this is so long – I am in a hurry)

Gpuccio challenged me and others to come up with examples of dFSCI which were not designed. Not surprisingly, the result was that I thought I had produced examples and he thought I hadn’t. At the risk of seeming obsessed with dFSCI, I want to assess what I (and hopefully others) learned from this exercise.

Lesson 1) dFSCI is not precisely defined.

This is for several reasons. Gpuccio defines dFSCI as:

“Any material object whose arrangement is such that a string of digital values can be read in it according to some code, and for which string of values a conscious observer can objectively define a function, objectively specifying a method to evaluate its presence or absence in any digital string of information, is said to be functionally specified (for that explicit function).

The complexity (in bits) of the target space (the set of digital strings of the same or similar length that can effectively convey that function according to the definition), divided by the complexity in bits of the search space (the total number of strings of that length) is said to be the functional complexity of that string for that function.

Any string that exhibits functional complexity higher than some conventional threshold, that can be defined according to the system we are considering (500 bits is an UPB; 150 bits is, IMO, a reliable Biological Probability Bound, for reasons that I have discussed) is said to exhibit dFSCI. It is required also that no deterministic explanation for that string is known.”

(In some other definitions Gpuccio has also included the condition that the string should not be compressible)
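As gpuccio applies it, the measure works out to the standard functional-information calculation: minus the base-2 log of the fraction of the search space occupied by the target space, compared against his thresholds. A minimal sketch; the space sizes below are illustrative assumptions, not measured values:

```python
import math

def functional_complexity_bits(target_size: float, search_size: float) -> float:
    """Functional complexity of a string for a function:
    -log2 of the fraction of same-length strings that convey the function."""
    return -math.log2(target_size / search_size)

# Illustrative (assumed) numbers: a 100-residue sequence over a 20-letter
# alphabet, of which 1e40 sequences are supposed to perform the function.
search_size = 20.0 ** 100
target_size = 1e40
bits = functional_complexity_bits(target_size, search_size)

# The thresholds from the quoted definition
exhibits_dfsci_biological = bits > 150   # proposed biological probability bound
exhibits_dfsci_upb = bits > 500          # universal probability bound
```

With these made-up sizes the string would clear the 150-bit biological bound but not the 500-bit UPB, showing how the choice of threshold changes the verdict.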

These ambiguities emerged:

Some functions are not acceptable, but it is not clear which ones. In particular, I believe that functions have to be prespecified (although Gpuccio would dispute this). Also, functions which consist of identifying the content of “data strings” (a term which is itself not so clear) are not acceptable, because the string in question could have been created by copying the data string.

The phrase “no deterministic explanation for that string is known” is vague. It is not clear in how much detail, and with what certainty, the deterministic processes have to be known. For example, it appears from the above that the mere possibility that the string in question might have been copied from the string defining the function by some unknown method is sufficient to count as a known deterministic explanation. This implies that it is really sufficient to be able to conceive of the vaguest outline of a deterministic process to remove dFSCI. I think this amounts to another implicit condition: no causal relationship between the function and the string.

Lesson 2)  dFSCI is not a property of the string.

It is a relationship between a string, a function and an observer’s knowledge. Therefore, a string may exhibit dFSCI for one observer with a certain function but not for another observer with a different function. The rules for deciding which function applies are not clear.

Lesson 3) The process for establishing the 100% specificity of the relationship between dFSCI and design is not commonly found outside examples created by people to test the process.

Gpuccio says this about the process:

“To assess the dFSCI procedure I have to “imagine” absolutely nothing. I have to assess dFSCI without knowing the origin, and then checking my assessment with the known origin.”

When challenged, he was unable to name any instances of this happening outside the context of people creating or selecting strings to test the process, as in our discussions. This is important, as the dFSCI/design relationship is meant to be an empirical observation about the real world, applicable to a broad range of circumstances (so that it can reasonably be extended to life). If it is only observed in the very special circumstances of people making up examples over the internet, then the extension to life is not justifiable. To give a medical analogy: it might well be that a blood test for cancer gives 100% specificity for rats in laboratory conditions. This is not sufficient to have any faith in it working for rats in the wild, much less people in the wild. Below I discuss what is special about the examples created by people to test the process.

A Suggested Simplification for dFSCI

dFSCI says that given an observer and a digital string where:

1) The observer can identify a function for that string

2) The string is complex in the sense that if you just created strings “at random” the chances of any of them performing the function are negligible

3) The string is not compressible

4) The observer knows of no deterministic explanation for producing the string

Then in all such cases if the origin eventually becomes known it turns out to include design.
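The four conditions can be read as a decision procedure. A minimal sketch in Python (the names, the example assessments, and the threshold handling are mine, for illustration only):

```python
from dataclasses import dataclass

@dataclass
class StringAssessment:
    function_identified: bool        # (1) observer can identify a function
    random_hit_probability: float    # (2) chance a random string performs it
    compressible: bool               # (3) string is compressible
    deterministic_explanation: bool  # (4) a deterministic explanation is known

def infers_design(a: StringAssessment, negligible: float = 2.0 ** -150) -> bool:
    """Design is inferred only when all four conditions hold;
    otherwise no inference is made either way."""
    return (a.function_identified
            and a.random_hit_probability < negligible
            and not a.compressible
            and not a.deterministic_explanation)

# Hypothetical assessments: a sonnet with no known deterministic origin,
# and a temperature record, for which a deterministic explanation is known.
shakespeare_sonnet = StringAssessment(True, 2.0 ** -300, False, False)
london_temperatures = StringAssessment(True, 2.0 ** -300, False, True)
```

Condition (4) is doing the real work here: flipping that one flag is enough to change the verdict, which is the point argued below.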

Given the rather lax conditions for “knowing of a deterministic mechanism” that emerged above, surely (2) and (3) are just special cases of (4). If (2) or (3) failed to hold, then deterministic mechanisms for creating the string would be conceivable.

So the dFSCI argument could be restated:

Given an observer and a digital string where:

* The observer can identify a function for that string

* The observer cannot conceive of a deterministic explanation for producing the string

Then in all such cases if the origin eventually becomes known it turns out to include design.

Conclusion

There are two main objections to the ID argument:

A) There are deterministic explanations for life.

B) Even if there were no deterministic explanations it would not follow that life was designed

For the purposes of this discussion I will pretend (A) is false and focus on (B)

No one disputes that it is possible to detect design. The objectors to ID just believe that (B) is true. The correct way of detecting design is to compare a specific design hypothesis with alternatives and assess which provides the best explanation. This includes assessing the plausibility of the designer existing and having the motivation and ability to implement the design. If no specific hypothesis is available then nothing can be inferred.

So is the dFSCI claim above true and if so does it provide a valid alternative way of detecting design?

The trouble is that there is a dearth of such situations. One of the reasons for this is that digital strings do not exist in nature above the molecular level. At any other level it is only a human interpretation that imposes a digital structure on analogue phenomena. The characters you are reading are analogue marks on the screen; it is you who are categorising them into characters. So all such strings are created by human processes. It follows that design is a very plausible explanation for any such string: people were involved in the creation and could easily have designed the string. If you add the conditions that the function must be prespecified and that there should be no causal relationship between the function and the string, then design is going to be by far the best explanation. It goes further than that. It also means there are almost no real situations where someone is confronted with a digital string without knowing quite a bit about its origin – which is presumably why Gpuccio can only point to examples created/selected by bloggers.

What about the molecular level? Here there are digital strings that are not the result of human interpretation, and human design is massively implausible (except for a few very exceptional cases). The problem now is that carbon chains are the only digital strings with any kind of complexity, and these are just the ones we are trying to evaluate. There are no digital strings at the molecular level with dFSCI except for those involved in life.

So actually the dFSCI argument only applies to a very limited set of circumstances where a Bayesian inference would come to the same conclusion.

493 thoughts on “What has Gpuccio’s challenge shown?”

  1. 771

    Mark Frank:

    So a design can be many things other than a mechanism.

    Yes, so what?

    Therefore it is false that

    Design is a mechanism BY DEFINITION

    That doesn’t follow from the first part. You are confused. Ya see Mark, all I need is ONE of the definitions to match in order for my claim to be true. And I have that.

    Very good – I never thought of that strategy – change the usual meaning of “true by definition”.

  2. gpuccio: I believe anyone can express his own ideas. Those who are interested to clarify their terminology can do that. But it is not imperative.

    How do you expect to express your idea if you use terms in a different way than the public that is trying to follow you?

    Is it your goal to make ambiguous statements?

    How do I invalidate a theory that its proponent cannot clarify?

    I told you I was going to make an attempt to simulate “dFSCI” but statements like this make me think I’ll never get a clear understanding from you.


  3. Perhaps Joe needs a thesaurus. Then he can equate any word with any other word, probably within six degrees.

  4. toronto: Your side has provided no “specific process” supporting the design position.

    Joe: Liar! Big FAT liar!

    Dr Spetner gave us “built-in responses to environmental cues”.

    So that’s a ‘specific process’, is it? Hmmmm.

    Joe has in the past pointed to Vitamin C as an example of something that is just waiting for the right ‘environmental cue’, and the built-in-response is that the gene will somehow reassemble itself and start producing the vitamin. A supposition for which there is not a shred of support. Many experiments have been inadvertently conducted in which small populations were kept in an environment deficient in Vitamin C – long sea voyages, for example. The built-in response to this environmental cue appears to be scurvy, followed by death. 

    He may be confusing it with D, which is only a vitamin when sunlight is inadequate. Its synthesis in response to the sunlight ‘cue’, however, relates to the fact that its synthesis is photochemical, so sunlight is rate limiting. It doesn’t turn genes on and off, still less does it rearrange them hopefully.


  5. gpuccio,

    g) I will use a random bit generator to come up with my replicator.

    I will run code that “kills” an “organism”, even multiple ones, but I will not run any code that crashes the computer, i.e. my “world”.

    It makes sense that we are testing lifeforms but not the environment they inhabit, so;

    h) I will not run any code before I verify the “world will not end”.


  6. gpuccio: The total number of individual states generated: the number of replications in the time span multiplied by the population number. That’s the measure of the probabilistic resources of the System in the Time Span, IOWs the number of “attempts”.

    If the two of us are still alive when the simulation ends, we won’t have come close to exhausting any probabilistic resources of the universe. 🙂

    gpuccio: Any information that is already in the starting state is not “generated”.

    Agreed.

    i) The simulation will be purely “informational” and not constrained by physics/chemistry.


  7. Gpuccio 775

    Goodness this is hard work. I try so hard to be clear and fail by miles! I will see if I can concentrate on the essentials. I think the confusing thing is that you were thinking of the mechanism as something that creates the origin. In my examples the mechanism is what links the origin to the configuration of the string.

    a) There is a known mechanism and the origin is designed. This is quite common.

    Again with the mechanism concept. I don’t understand what you mean. What are the quite common cases in a)? What is the mechanism?

    I mean there is a known mechanism, aka a causal link, between the origin and the configuration of the string. For example, Shakespeare writes a sonnet – the origin – and the sonnet gets copied by a mechanism (which may or may not be man-made); many such mechanisms are known.

    b) There is a known mechanism and the origin is not designed. This is also quite common. The London temperature record would be an example.

    Here, “mechanism” seems to mean a detailed explanation based on well known necessity laws: how to measure a temperature, for example. That does not seem the same meaning (whatever it may be) that the word has in a). Please, clarify.

    Exactly the same sense as in a): there is some known causal sequence which could reasonably link the origin (one temperature record, which is the origin) with the string (another nearby temperature record, which is the string).

    c) There is no known mechanism and the origin is designed. This is very uncommon; in fact it is pretty much the definition of magic or a miracle.

    Again, I am in the dark. Design is not a mechanism, IMO. It is a process where the conscious representations of a conscious intelligent agent are transferred to a material object. We have no idea of how the conscious representations originate, and we have no idea of how the consciousness-matter interface works. Therefore, any act of design is in a sense magic. Therefore, this case is not uncommon at all. We cannot “explain” design. We can explain designed objects as a result of a design process, but we cannot explain the design process. We just know it happens all the time.

    I hope this is now clear. I am not talking about how the designed origin arises. I am talking about the causal link between the designed origin and the configuration of the string. For example, suppose a magician shuffles a pack of cards, asks a member of the audience to name any five cards in order, and then reveals that the top five cards are in exactly that sequence. The designed origin is the string that the member of the audience calls out. The sequence on the top of the pack is the configured string. There is apparently no known mechanism linking what the member of the audience says to the cards, and if there really is no known mechanism then it is truly magic (although of course in reality magicians do use known mechanisms, but rather cleverly).

    If by “known mechanism” you mean that a design process involving a designer is observed as the origin of the object, then that is valid both for a) and for c). Indeed, I can see no difference. Please, explain. Possibly with examples.

    d) There is no known mechanism and the origin is not designed. The type of scenario you describe. This is also very uncommon and a major scientific mystery.

    D is the scenario you described, where some event which is apparently not designed (the ingredients in the laboratory) seems to be correlated with the configuration of a string but there is no known mechanism connecting the two events.

    I hope this clarifies the four cases. My point being that we can ignore C and D. They are very uncommon, amount to magic or undiscovered natural laws, and have the other problems I discussed. All explanations that we might reasonably consider for a digital functional complex string are either A or B. Based on all the work we did over the last few days we have agreed that B is equivalent to a deterministic explanation and therefore, by your definition, if it is B then it is not dFSCI. So by definition all strings with dFSCI are A and therefore designed.

  8. gpuccio,

    j) A string above 150 bits has “dFSCI”.

    k) A string below 150 bits is within the scope of random configuration.


  9. gpuccio: It is clear how A can be falsified. Just show that the neo darwinian algorithm, or any other smart algorithm, can create exactly that kind of string: a string that no one could, by mere observation, distinguish from any other string exhibiting dFSCI.

    For this I need clarification.

    How do you intend by mere observation of a string, to determine “dFSCI”?


  10. gpuccio (#707) reacted thusly to my link to my example of a GA putting Specified Information into the genome:

     Joe Felsenstein:

    I have read your old post at TSZ titled:

    “Natural selection can put Functional Information into the genome”.

    Well, my only reaction is: are you serious?

    What you have done:

    a) You have defined each increment of a 1 as a selectable trait.

    b) You have created a context where each one bit transition can be selected.

    c) There is nothing at all about how those one-bit transitions would confer any reproductive advantage in any real case.

    Now, a very simple question, and please answer: why do you call that a model of NS?!!!

    What is “natural” in your model? It is obviously an artificial mathematical model, that has nothing to do with NS.

    I’m sorry to hear that gpuccio is unhappy, and wants models to be fully realistic. But the question being addressed was whether there was some general proof that models of genomes that had mutation, selection, and recombination could not put SI (and CSI) into the genome. I guess that if gpuccio has such a proof, or if Dr. Dembski has such a proof, the genetic systems and selection regimes that their proof applies to do not include the model that I presented. Anyway I have not seen any proof from them.

    Moreover, the bits in your model are completely independent one from the other. You just assume, for your comfortable purposes, that the more the ones, the better. There is no function tied to a specific configuration of bits. There is no dFSCI there.

    True, but gpuccio failed to notice that the model counterexample is for SI and CSI, not for dFSCI. It was a refutation of Dembski’s arguments, which I guess are different from gpuccio’s.

    Indeed, your result could have been much more easily realized by a very simple necessity mechanism. Just imagine that the algorithm of “reproduction” allows a 0.9 probability for 1, and a 0.1 probability for 0. That could correspond to a true natural situation: the 1s are simply much more represented in your environments than the 0s. You would easily get the same result, and no artificial (not “natural”) selection would be needed.

    This represents a misunderstanding of my argument. Sure, if you start with gene frequencies of 0.9, you get gene frequencies of 0.9. But my example started with gene frequencies of 0.5, and then natural selection moved the gene frequencies to 0.9 and, as a result the content of 1 alleles in the population moves far out into the tail of the original distribution.

    That’s because your result is however highly compressible. It has none of the characteristics of dFSCI.

    As I just mentioned, it was addressing SI and CSI, and was not responding to gpuccio’s dFSCI. The compressibility issue is interesting: Dembski allowed high compressibility to be one of the possible signs of the presence of CSI (not a reason to reject possible CSI), so its presence was no problem.

    But the main objection remain the first one: what has all that to do with “natural selection”?

    I’m glad to hear that we are in the presence, at long last, of someone who can serve as an oracle to tell us what selection is or is not “natural”.

    The real question is “is there a mathematical theorem of some sort that shows that CSI cannot come to be in the genome by processes of natural selection?”  The answer so far is that there is not. But any attempt to investigate the issue using a mathematical or computational model seems to get criticized. The original criticism was that the CSI was already there, in the model, and that that (somehow) accounted for the presence of CSI in the genome at the end. I did and do reject that argument.
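The gene-frequency claim above (start at 0.5, let selection carry the population into the tail of the original distribution) can be checked with a toy model. The sketch below is my own illustration with made-up parameters, not Felsenstein's actual model:

```python
import random

def next_generation(pop, rng, mut_rate=0.005):
    """One generation: parents are sampled with probability proportional to
    fitness (multiplicative, 1.1 per 1-allele), offspring are produced by
    uniform crossover with a small symmetric per-bit mutation rate."""
    weights = [1.1 ** sum(ind) for ind in pop]
    children = []
    for _ in range(len(pop)):
        mom, dad = rng.choices(pop, weights=weights, k=2)
        child = [m if rng.random() < 0.5 else d for m, d in zip(mom, dad)]
        children.append([b if rng.random() >= mut_rate else 1 - b for b in child])
    return children

rng = random.Random(0)
L, N = 100, 200                                   # genome length, population size
pop = [[rng.randint(0, 1) for _ in range(L)] for _ in range(N)]  # freqs start ~0.5
start_mean = sum(sum(ind) for ind in pop) / N
for _ in range(80):
    pop = next_generation(pop, rng)
end_mean = sum(sum(ind) for ind in pop) / N       # far above the starting mean
```

Note the mutation is symmetric (0→1 and 1→0 equally likely), so the shift in the mean comes from the selection step, not from a built-in bias toward 1s.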

    But now the argument seems to have changed to the pronouncements of the Nature Oracle.

  11. Joe 780

    By Mark’s “logic” a person’s foot can never be a weapon because there are multiple definitions of each and not all say that a foot is a weapon.

    Jo – by your logic a foot is a weapon by definition. 

    I don’t know what all your other stuff is about. I don’t usually read anyone’s comments on UD other than Gpuccio (this is an exception which caught my eye).

  12. Gpuccio 779

    First consider the statement :

    In the case of digital strings with a function, if the information linked to the function is complex enough to exclude empirically a probabilistic explanation then the string is designed

    (this is (A) without the clause “no known deterministic explanation”)

    Now, you seem to readily forget this part, but it is in reality the most important part. It is the “positive” part of the procedure, and that’s why you probably pretend it does not exist.

    Now, let’s pretend for a moment that the “necessity clause” is not necessary. We just infer design because of the complexity of functional information. And we are always right. That is a very good empirical reason to use the complexity of functional information to detect design when we have no direct knowledge of the origin.

    Now, in this simplified case, any string with enough complexity is designed.

    Well, if it is true that only design can create functional complexity, then A is always true. Why? Because it is circular? No. Because only design can create functional complexity, and therefore if we see functional complexity we can infer design with certainty.

    This is not a circular statement. It is just a perfect empirical tool to detect design.

    If only X can cause Y, the observation of Y is perfect empirical evidence of X. That is not circular.

    In this case it is not circular. It is false. The London temperatures are one example. GAs are another. It becomes circular when you use the necessity clause (or, in the case of GAs, a simple refusal to allow them as evidence) to exclude anything that might count as evidence against it. What I did was review all the possible ways in which (A) might be falsified and note that you have (subconsciously) introduced a rule which says they are not allowed as evidence. I know I will not persuade you of this. But I hope you can at least see I am being serious and thoughtful in my response.

    I am going to drop this now as I think it has gone as far as it usefully can and I would not like it to become acrimonious. At least you have learned the meaning of “infer” 🙂

  13. keiths:

    No, because guided evolution via common descent doesn’t guarantee the existence of an objective nested hierarchy. You have to make additional assumptions which amount to stipulating that the designer acts in a way that is indistinguishable from unguided evolution. How do you justify the assumption that the designer is an evolution mimic?

    gpuccio:

    What additional assumptions? Please, specify.

    You have to assume that the designer always makes small changes (just like unguided evolution), that he uses vertical inheritance primarily (just like unguided evolution), and that he limits horizontal transfers to those that could be produced by unguided mechanisms (just like unguided evolution).  You have to assume that he arranges the morphological and genetic information in just the right way so that their associated inferred hierarchies are identical (just like unguided evolution). 

    How do you justify such wild, ad hoc assumptions?  Is your Designer determined to hide himself behind evolution?  Does his evolution mimicry serve some higher purpose that you are privy to?  Is your hypothetical Designer a weak Designer who is somehow limited to behaving in this way?  If so, how do you justify that assumption?

    Now note that unguided evolution predicts and explains the objective nested hierarchy naturally, without the need for awkward and arbitrary assumptions like those above.  The theory intrinsically fits the evidence, unlike ID.  It’s a far superior theory.

    keiths:

    You create a gap by assuming, against the evidence, the absence of selectable intermediates.

    gpuccio:

    Against the evidence? What evidence? I must have missed it!

    Or you’re pretending to.

    Your problem is stark. The designer is needed only if selectable intermediates don’t exist. If they do exist, your magical designer vanishes in a puff of smoke. And it’s not enough for them to be unknown or uncharted. You have to show that they don’t exist. You can’t do that, and you know it, so you resort to saying things like the following:

    There is only one way to prove the existence of selectable intermediates: find them. Nested hierarchies will not help.

    Suddenly (and very conveniently) nothing short of direct evidence is good enough for you, and when direct evidence is presented, you wave it away. And if direct evidence is not given, you claim that you’re entitled to assume that selectable intermediates don’t exist at all.

    Yet when asked for direct evidence of your hypothetical designer, you argue that indirect evidence is just fine in that case, and that no one is entitled to assume that your designer doesn’t exist just because you can’t supply direct evidence. It’s a hopeless (and obvious) double standard.

    You need to find a way to show, positively, that selectable intermediates do not (or cannot) exist. Unfortunately for you, the evidence of the objective nested hierarchy points overwhelmingly in the opposite direction.

  14. gpuccio: j) A biological string on our planet with more than 150 bits of functional information exhibits dFSCI (if we accept my proposed threshold for biological information on our planet). Therefore, we can make a safe design inference for it.

    k) A biological string on our planet with less than 150 bits of functional information does not exhibit dFSCI (if we accept my proposed threshold for biological information on our planet). Therefore, we cannot make a safe design inference for it.

    I think we had agreed to the following:

    i) The simulation will be purely “informational” and not constrained by physics/chemistry.

    So j) and k) should be defined as informational strings.

    j) A string with more than 150 bits of functional information exhibits dFSCI. Therefore, we can make a safe design inference for it.

    k) A string with less than 150 bits of functional information does not exhibit dFSCI. Therefore, we cannot make a safe design inference for it.

    Do you agree to the above?

    Toronto:How do you intend by mere observation of a string, to determine “dFSCI”?

    gpuccio: I mean all the conclusions we can reach by observing the string itself, and by what we know of the System and Time Span where it emerged, without knowing anything else about it (IOWs, without having any information about its origin).

    In order to pass the test, you are going to have to judge that a string generated by my process has “dFSCI”, which makes this a key point that needs to be very specific.

    l) I will accept that a string generated by a “non-design process” exhibits “dFSCI” if ………………????………………………


  15. The intermediates don’t even have to be selectable, of course. The version of evolution that most critique is the one that expects every last change to be a step up the fitness ladder. I read a bit of Joe G’s hero Spetner, and that’s the version he tilts at, rather cluelessly. But the fundamental process of evolution is one that generates, simultaneously, sequence common ancestry (of survivors) and extinction (of the descendants of all other sequences), irrespective of fitness differentials. It is that process that causes change, not the differential part.

    The intermediates must only be ‘not-overly-detrimental’. Being beneficial speeds things up and increases likelihood of fixation, but with or without a selective differential, generational distortion of frequency generates inexorable change given mutational input. Neutrality doesn’t add ‘dFSCI’, I suppose, but I don’t see anything in GP’s method that distinguishes those bits that got there through drift from those that got there through selection. Many bits in the modern protein contribute to modern function and would be detrimental if lost, because of coevolution of the parts of the whole – but many such bits may have got there in the first place by neutral means, only later being pinned in place by selection. 

  16. You have to admit, though, that in every case where selectable intermediates have been demonstrated — Lenski, Thornton et al — that it’s just been microevolution. No one has witnessed a frog evolving into a squirrel. Until you can do that it’s more reasonable to believe in sky fairies.

    Isn’t that the way Newton reasoned about gravity?

  17. toronto: g) I will use a random bit generator to come up with my replicator.

    Joe: That is only valid if any molecular arrangement can be a replicator. Ya see toronto replication is the very thing you need to account for so you just can’t start with it.

    When ID proposes an “informational/mathematical” criticism against “Darwinism”, then the response is going to be from an “informational/mathematical” perspective.

    If you are insisting that any argument, even those started by your side and based on mathematical models, must address chemistry and physics, then Dembski and other “engineering” ID proponents, are not qualified to take part in the debate.

    Your side will have to rely on people like Behe and our side will rely on people like Joe Felsenstein.

    You, I and Dembski would not be qualified to take part.


  18. Joe 784

    Jo – by your logic a foot is a weapon by definition.

    Yes, it is. And if someone uses it to harm, or tries to harm, someone, then the charge would be, for example, “assault with a deadly weapon – shod foot (or even unshod foot)”.

    Gosh you are productive. As the airport security guard will tell you, almost any object can be used as a weapon. So all of them: combs, toothpicks, automobiles, rubber bands, etc., etc. are weapons by definition! In fact your breakthrough is even better than that, because any object can be used for a vast range of tasks (many of which were never considered), e.g. a light bulb can be used as a storage container (cut off the top), a keyboard can be used as a doorstop. So any object is a vast range of other things by definition!

    Joe 788

    Jo – by your logic a foot is a weapon by definition.

    In what other way would it be a weapon if NOT by definition? Or are you just as clueless as your posts make you out to be?

    I was thinking of observing it being used as a weapon. But what do I know – I’m clueless.

  19. According to the online slang dictionary, Joe is a cigarette, and a cigarette is a bone, and a bone is a penis. So for anyone who knows how to use a dictionary, Joe is a penis, by definition.

  20. Allan,

    The intermediates don’t even have to be selectable, of course.

    I’m using ‘selectable intermediates’ to denote everything that “passes through the sieve”, to borrow Petrushka’s metaphor. Is there a less ambiguous but compact phrase that I can use in its place?

  21. Heh. Joe wants to make it clear that he did not take my advice about the spell checker:

    Earth to keiths- you don’t have any advice worth taking and I do not use “Word” to type my comments. Not only that I don’t care about typos. It gives imbeciles like you something to play with.

    Of course, even Joe knows that spell checkers are not unique to Word, though whether he knew that a few days ago is an open question.

    To ‘prove’ that he is not using a spell checker, Joe throws a couple of ‘typos’ into his next comment, including his trademark ‘obvioulsy’:

    Scioenec does not prove a negative. Obvioulsy you are scientifically illiterate

    ‘Scioenec’? Really, Joe?

  22. So for anyone who knows how to use a dictionary, Joe is a penis, by definition.

    Can’t argue with that.

  23. Oddly enough, the scioenec of ID is all about proving a negative.

    What are CSI and dFSCI if not attempts to prove that unguided evolution is not sufficient? 

  24. keiths:

    You have to assume that he arranges the morphological and genetic information in just the right way so that their associated inferred hierarchies are identical (just like unguided evolution).

    gpuccio:

    I don’t understand this. Could you please clarify better?

    Read this section of Theobald, including the quote from Zuckerkandl and Pauling. Unguided evolution predicts that phylogenies inferred from morphological and genetic evidence will be highly congruent, if not identical. (In fact, they are identical for the 30 taxa of Theobald’s Figure 1.) The prediction is intrinsic to the theory. The theory doesn’t require force-fitting, via arbitrary assumptions, in order to match the evidence. Given that there are 10^38 possible hierarchies involving 30 taxa, the exact match between the morphological and genetic hierarchies is a spectacular confirmation of evolutionary theory.

    Design, on the other hand, makes no prediction at all regarding the degree of congruence of these phylogenies. They could be identical to each other. They could be totally different. They could be anything in between. You can force-fit design to the data by adding assumptions, but then it is the assumptions, not the theory, that are doing the work. The assumptions are purely ad hoc. If the evidence had turned out differently, you would once again force-fit the theory to the evidence by making a set of assumptions — it would just be a different set of assumptions. Design “theory” is infinitely malleable, meaning that it makes no predictions, and this is one of its fatal weaknesses.

    Given that evolutionary theory makes a prediction that is confirmed to 1 part in 10^38, you can see that I am considerably understating my case when I say that unguided evolution is trillions of times better as a theory. In the face of such lopsided evidence, no rational, objective observer could choose ID over modern evolutionary theory.
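    For readers wondering where the 10^38 figure comes from: the number of distinct rooted bifurcating trees on n labeled taxa is the double factorial (2n − 3)!!, which for n = 30 is on the order of 10^38. A minimal sketch (my own illustration, not code from Theobald):

```python
# Number of distinct rooted bifurcating trees on n labeled taxa:
# the double factorial (2n - 3)!! = 3 * 5 * ... * (2n - 3).
def n_rooted_trees(n: int) -> int:
    count = 1
    for k in range(3, 2 * n - 2, 2):  # odd factors 3, 5, ..., 2n - 3
        count *= k
    return count

print(n_rooted_trees(4))    # 15
print(n_rooted_trees(30))   # roughly 5e38, i.e. on the order of 10^38
```

    So picking out the one hierarchy that matches the morphological tree, from roughly 5 × 10^38 candidates, is the agreement being referred to here.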

  25. Toronto:  l) I will accept that a string generated by a “non-design process” exhibits “dFSCI” if ………………????………………………

    gpuccio: If it exhibits functional information higher than an appropriate threshold (let’s start with 150 bits, then we will see). And if I can see nothing in the information in the string that can be explained by a specific necessity mechanism present in the System.

    If in multiple runs of the simulation, we produce strings with enough of a difference in configuration, I think it would be safe to say that no “necessity mechanism” was at work.

    However, generating strings that were very much alike should not rule out “dFSCI” either, since a computer simulation, due to the computer hardware, is not as random as most people think.

    This is why kairosfocus mentions noise through a Zener diode and flipping through a telephone book to actually generate a better “randomness” quality than computers do.

    As far as biology goes, that is simply not relevant when one is making a “mathematical/statistical” argument.

    Your claim is that “digital” FSCI is what is being tested against non-design mechanisms, not actual biology.

    I am not going to model biology for this test as this is not the argument you are making.

    If you insist on a biological aspect to the simulation, how can you claim “dFSCI” as a biological design detection tool when your test strings were not biological?

    You have to be able to recognize a string with “dFSCI” without regards to anything about its origin.

    gpuccio: If it exhibits functional information higher than an appropriate threshold (let’s start with 150 bits, then we will see).

    You have to think of a reasonable threshold and then stick to it.

    The string I will deliver to you must meet the specification that we agree to before the start of the test.

    l) I will accept that a string generated by a “non-design process” exhibits “dFSCI” if …………………?????……………
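    For concreteness, here is how the 150-bit threshold cashes out numerically. This is my own minimal sketch of the ratio defined in the OP (target space over search space, expressed in bits); the target-space size below is a made-up number for illustration, not a measured one:

```python
import math

# Functional complexity in bits, per the OP's definition:
# -log2(|target space| / |search space|).
def functional_complexity_bits(target_space: int, search_space: int) -> float:
    return -math.log2(target_space / search_space)

# Hypothetical example: strings of 35 symbols over a 20-symbol alphabet
# (protein-like), where 10^12 such strings are assumed to be functional.
search = 20 ** 35
target = 10 ** 12
bits = functional_complexity_bits(target, search)
print(f"{bits:.1f} bits")   # about 111 bits
```

    On that (assumed) target-space size, the string falls below the 150-bit bar, and so would not be attributed dFSCI.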

     

  26. keith,

    I’m using ‘selectable intermediates’ to denote everything that “passes through the sieve”, to borrow Petrushka’s metaphor. Is there a less ambiguous but compact phrase that I can use in its place?

    ‘viable intermediates’, I guess. One could of course read ‘selectable intermediates’ as ‘intermediates upon which selection could operate’, but I do think the ID fraternity would read it as a path with increasing fitness every single step of the way, and would demand to know the advantage of each step***. I wasn’t critiquing your usage, so much as making explicit that selection is but a part of a more general process: population sampling, a process that distorts allele frequencies towards fixation/extinction regardless of whether the sampling is with or without bias.

    *** they still would – of course – even ‘pure drift’ has a selective advantage: zero. To know the actual advantage would require knowledge of historic population sizes, the full environment, other alleles in the population now long dead…

  27. To put that 10^38 number into perspective, imagine that we are comparing two astronomical theories.

    The ID-like theory predicts only that a particular distance will fall somewhere in the range from 0 to 2.5 million times the diameter of the Milky Way.

    The evolution-like theory predicts that same distance down to a billionth of an inch.

    We do the measurement, and we find that the evolution-like theory is exactly right.

    That’s how stark the difference is. No rational person who is aware of this will continue to prefer ID over modern evolutionary theory.

  28. keiths’s statement applies to the preference for common ancestry over a version of ID that does not include common ancestry. Usually ID folks (1) say scornfully that criticism of ID fails to note that common ancestry is compatible with ID, but (2) fail, tellingly, to themselves endorse common ancestry.

    If someone (the only someone I can think of is Michael Behe, but there may be others) accepted common ancestry but still thought that ID played a role in the origin of adaptations, then this refutation of ID would not apply to that version of ID.

  29. Joe Felsenstein:

    keiths’s statement applies to the preference for common ancestry over a version of ID that does not include common ancestry…

    If someone (the only someone I can think of is Michael Behe, but there may be others) accepted common ancestry but still thought that ID played a role in the origin of adaptations, then this refutation of ID would not apply to that version of ID.

    Joe,

    My statement actually applies to all forms of ID, including those that accept common descent. The common-descent forms of ID (which include gpuccio’s, by the way) are just as bad at explaining the existence of the objective nested hierarchy as the separate creation forms.

    This surprises a lot of people — including many evolutionists — who have gotten into the habit of thinking that there is a symmetrical relationship between common descent and the objective nested hierarchy. They think that common descent implies an objective nested hierarchy, and that the objective nested hierarchy implies common descent. While this is true for unguided common descent (which of course is the kind that you, as an evolutionary biologist, spend most of your time thinking about), it is not true for guided common descent (i.e. ID).

    In the case of unguided common descent, the inferred hierarchies tend to match the actual hierarchy, because evolution proceeds by small changes, inheritance is primarily vertical, and horizontal transfers are limited in number and type. The fact that hierarchies inferred from different lines of evidence match each other so well (perfectly, in the case of Theobald’s 30 taxa) is what gives us confidence that the inferred hierarchies match the true hierarchy.

    In the case of guided common descent, those same conditions are not guaranteed.

    1. We have no reason to assume that the Designer always chooses to make small changes or is somehow limited to doing so. Human designers commonly make large changes, so why assume that the big-D Designer doesn’t, or can’t?

    2. We have no reason to assume that the Designer limits himself to primarily vertical inheritance. In fact, one of the hallmarks of human design is that it isn’t limited to vertical inheritance. Why assume that the designer is limited (or chooses to limit himself) in this way?

    3. We have no reason to assume that the Designer imposes strict limits on the number and type of horizontal transfers. A designer could take a large, complex subsystem and transplant it into ten separate lineages at once. Again, human designers do this all the time. Why should we assume that the Designer doesn’t, or can’t?

    Without these three assumptions, there is no reason to expect the inferred hierarchies to match each other or the true hierarchy. And no ID proponent I’ve encountered has ever been able to justify making these wild and arbitrary assumptions, which amount to claiming that the Designer is an evolution mimic.

    Under a “guided common descent” scenario, the inferred hierarchies could match each other perfectly, or mismatch perfectly, or anything in between. Thus ID, even in its “guided common descent” forms, fails to predict an objective nested hierarchy.

    Unguided evolution, on the other hand, actually predicts an objective nested hierarchy, and this prediction is confirmed to an accuracy of 1 in 10^38 in the case of Theobald’s 30 taxa.

    Evolution is by far the better theory, even when compared to the forms of ID that accept common descent.

  30. I grant you all of that, but there are variants of ID that only involve occasional intervention by a Designer, and then only in some characters, and those will be hard to tell from “unguided” common descent.

  31. @Keiths:

    “keiths on November 29, 2012 at 7:14 am said:

    Joe Felsenstein:

    keiths’s statement applies to the preference for common ancestry over a version of ID that does not include common ancestry…

    If someone (the only someone I can think of is Michael Behe, but there may be others) accepted common ancestry but still thought that ID played a role in the origin of adaptations, then this refutation of ID would not apply to that version of ID.

    Joe,

    My statement actually applies to all forms of ID, including those that accept common descent. The common-descent forms of ID (which include gpuccio’s, by the way) are just as bad at explaining the existence of the objective nested hierarchy as the separate creation forms.

    This surprises a lot of people — including many evolutionists — who have gotten into the habit of thinking that there is a symmetrical relationship between common descent and the objective nested hierarchy. They think that common descent implies an objective nested hierarchy, and that the objective nested hierarchy implies common descent. While this is true for unguided common descent (which of course is the kind that you, as an evolutionary biologist, spend most of your time thinking about), it is not true for guided common descent (i.e. ID).

    In the case of unguided common descent, the inferred hierarchies tend to match the actual hierarchy, because evolution proceeds by small changes, inheritance is primarily vertical, and horizontal transfers are limited in number and type. The fact that hierarchies inferred from different lines of evidence match each other so well (perfectly, in the case of Theobald’s 30 taxa) is what gives us confidence that the inferred hierarchies match the true hierarchy.

    In the case of guided common descent, those same conditions are not guaranteed.

    1. We have no reason to assume that the Designer always chooses to make small changes or is somehow limited to doing so. Human designers commonly make large changes, so why assume that the big-D Designer doesn’t, or can’t?

    2. We have no reason to assume that the Designer limits himself to primarily vertical inheritance. In fact, one of the hallmarks of human design is that it isn’t limited to vertical inheritance. Why assume that the designer is limited (or chooses to limit himself) in this way?

    3. We have no reason to assume that the Designer imposes strict limits on the number and type of horizontal transfers. A designer could take a large, complex subsystem and transplant it into ten separate lineages at once. Again, human designers do this all the time. Why should we assume that the Designer doesn’t, or can’t?   ”

    Take a modern ‘smartphone’, for example, which is a phone, internet device, computer, camera etc. all combined.

  32. Joe: BTW Richie, there isn’t anything pereventing unguided evolution from producing a mammal with gills or any number of characteristic combinations.

    You’re making “unguided evolution” sound powerful enough to alter body plans.

    Do you really want to say this?

     

  33. He’s just (knee) jerked because I’ve commented. For designed artifacts, we can sometimes see improvements each generation that are carried forward vertically. But the combining of so many ‘mature’ technologies and devices is akin to a dog and a cat and a rat and a bat all having one baby, which we don’t see in nature. Also, we can still trace back the precursors of the smartphone, but it produces a very different taxonomy than a naturally evolved one.

  34. Joe F.:

    I grant you all of that, but there are variants of ID that only involve occasional intervention by a Designer, and then only in some characters, and those will be hard to tell from “unguided” common descent.

    Those variants of ID are really hybrid theories: X% intelligent design, Y% unguided evolution. In effect, you’re noting that if X is small enough, then a hybrid theory is scientifically indistinguishable from unguided evolution. That’s true, but in that case the ID part isn’t doing anything. The fit between theory and evidence comes from the unguided evolution part, and the ID part is only getting in the way. There’s no reason to keep the ID part. Occam’s razor slices it right off.

    Besides, gpuccio and Behe are not content with a theory in which ID plays only a small role. They insist that ID is essential to the production of biological information.

    The evidence says otherwise.

  35. Toronto: You have to be able to recognize a string with “dFSCI” without regards to anything about its origin.

    gpuccio: Yes. I need to know, however, the System where it emerged, at least to some degree, and the functional definition.

    The functional definition we’ll both discover together at the end! 🙂

    It will be a string of software instructions that are actually executable.

    I get the impression you think this will be another test string I’m going to give you which may or may not be designed.

    That’s not the case since I intend to only hand you a string that has “dFSCI” but was not designed.

    That is the point of this test, that I will generate functionality to a degree that you don’t think is possible without conscious design.

    Firstly, it’s what the string “does” that gets the “dFSCI” assessment, not “the means by which it came to be”.

    Secondly, while you may attribute “positive” or “false” for design afterwards, you would be wrong if you assessed design.

    Since this is not a test case we’re talking about, why would you need to know anything other than complexity and functionality?

    This would be the real thing that tests the effectiveness of “dFSCI” in detecting design.

     

    gpuccio: “I will accept that a string generated by a “non-design process” exhibits “dFSCI” if it exhibits functional information higher than an appropriate threshold (let’s start with 150 bits, then we will see). And if I can see nothing in the information in the string that can be explained by a specific necessity mechanism present in the System.”

    We’ve agreed previously that “dFSCI” is a quality of a string, while design is an issue that is resolved afterwards.

    That is why you came up with terms like “false positive”.

    The “dFSCI” is always present regardless of whether you assert “design” or “non-design” afterwards.

    This is an experiment and specifications like, “(let’s start with 150 bits, then we will see)”, are just not specific enough.

    You should know by now what can be attributed to random chance in a purely mathematical sense, which is what “dFSCI”, by virtue of its “digital information”, measures.

    So I need from you, a description of a string that will invalidate “dFSCI”, if that string configuration is “not guided by an intelligence”.

     

     

  36. I confess to not following this discussion closely, finding keiths’ summaries of the numerous and severe problems with gpuccio’s argument more than enough reason not to waste my time further with it.  However, I do remember that his definition of dFSCI was that it exists if a string has a minimum amount of functional complexity AND if no non-design mechanism is known that could have created it.  Has he changed this in some way such that dFSCI is now solely a property of the string itself rather than being an argument from ignorance?

     

  37. I have had a hard time understanding gpuccio as he sometimes states “dFSCI” is an attribute of a string that once applied is never rescinded, yet it is difficult to follow why he sometimes will not attribute that label to a string, due solely to how the string was generated.

    I believe he sometimes is only thinking within the scope of testing “dFSCI” as a design detection tool on known samples, but the way he is doing that is impacting on the possible real-world use of “dFSCI”.

    Proper test cases would need to be blind to a larger extent than gpuccio has allowed.

    At the point where someone actually delivers a string to gpuccio that is of genuinely non-design origin, the extra information that he had counted on for purely test cases should no longer be available to him.

    Either “dFSCI” is refined enough to be usable or it isn’t.

  38. MTM? Joe makes 3 comments:

    “1- Baraminology predicts reproductive isolation. Your position can only try to explain it away”

    Ah, that conjecture based on a religious text that can’t explain where the water came from, where it went to, how all the animals got to the ark, how they all fit on the ark, how they all then dispersed, requires rapid speciation beyond evolutionary claims and it’s completely at odds with the fossil record (no recent global mass extinction event).

    “2- There isn’t anything in unguided evolution that prevents an organism from having a blend of dog, cat and rat characteristics ”

    Apart from heritability. Category error, Joe. Opposable thumbs aren’t a “human characteristic”, they are a characteristic shared by humans.

    “3- Richie pom-poms is also ignorant wrt nested hierarchies (hint- they are manmade constructs, Richie. WE set the categories. WE would categorize hybrid technology the same way you would categorize transitional forms and hybrids.)”

    So these hierarchies are human constructs. Can we put “smart phone” in the clade for “Car”, or “Washing machine”? If not, why not?

  39. Joe: “Unfortunately Creationists have said where the water came from and where it went. Your position can’t explain water, so stuff it. They have also said how the animals got to the Ark, how they all fit on the Ark and even how they dispersed. “

    Joe, are you saying you believe the story of Noah’s Ark to be literally true?

    If you believe the Bible is literally true, what is the point of debating “dFSCI”?

    If however, it turns out that “non-guided by an intelligent agent evolution” over billions of years is true, what does that do to a literal interpretation of the Ark story?

     

  40. Toronto: Joe, are you saying you believe the story of Noah’s Ark to be literally true?

    Joe: That doesn’t follow from what I posted. So why, other than being a total loser, would you ask such a thing?

    Of course it follows, both from what you said and how you respond to claims.

    When our side claims something you respond with, “Where’s the evidence”?

    When it comes to the Ark story though, you say, “They have also said how the animals got to the Ark, how they all fit on the Ark and even how they dispersed. “

    There is no evidence for the Ark story and the story itself does a poor job of explaining the world we see if it were actually true.

    You are treating the Ark story with respect and “non-guided by an intelligent agent evolution” with a great deal of disrespect.

    So when I see “skepticism” for our side but not for yours, I have to conclude that in some way, you believe in the Ark story.

    Dembski didn’t believe in the Ark on a scientific basis, but was forced to accept it might be true or risk losing his job.

    Again, so that I can understand more about the ID side, do you believe in a literal Noah’s Ark?

  41. Joe: ” And I will say there is more evidence for the Ark story than there is for your position. So stuff it, already.”

    If your side was not trying to get ID into schools we would not be having a conversation of any type.

    As it is, this debate, in all its forms, including “dFSCI”, is between a theistic origins story versus a non-theistic origins story and therefore, what students end up studying.

    So Joe, why can you simply not say, “I believe in the literal Ark story”, or “I don’t believe in the literal Ark story”?

     

  42. Joe:

    “Nice cowardly non-sequitur, there Richie. Unfortunately Creationists have said where the water came from and where it went. Your position can’t explain water, so stuff it. They have also said how the animals got to the Ark, how they all fit on the Ark and even how they dispersed. ”

    Yes they have, all of those have the same answer: God.

    Nice to see that on an ID blog. Also “requires rapid speciation beyond evolutionary claims and it’s completely at odds with the fossil record (no recent global mass extinction event)”

    is ignored. Shock!   

  43. Gpuccio,

    Your recent comments reveal some confusion over how hierarchies are inferred and what is meant by “small changes” in that context.

    Later today I’ll post a Zachriel-style example of phylogenetic inference (Link, Link) and show why it depends on small changes.
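    In the meantime, a toy illustration of the small-changes point (the sequences are made up, and simple pairwise distance counting stands in for a real inference method): taxa that diverged most recently differ at the fewest sites, so distances recover the grouping.

```python
# Made-up 10-site sequences for four taxa; B was derived from A by one
# change, and D from C by one change.
seqs = {
    "A": "ACGTACGTAC",
    "B": "ACGTACGTAT",
    "C": "ACGAACCTAT",
    "D": "ACGAACCTGT",
}

def hamming(x: str, y: str) -> int:
    """Count the sites at which two equal-length sequences differ."""
    return sum(a != b for a, b in zip(x, y))

# All pairwise distances; the true sister pairs (A,B) and (C,D) are closest.
pairs = {(i, j): hamming(seqs[i], seqs[j])
         for i in seqs for j in seqs if i < j}
for pair in sorted(pairs, key=pairs.get):
    print(pair, pairs[pair])
```

    Because each step was small, the closest pairs are exactly the true sister taxa. Large, unconstrained changes of the kind a designer could make would destroy that signal, which is why the inference depends on small changes.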

  44. It is fairly easy, using only high school level physics, to debunk the Flood narrative.

    Mt. Everest is 8848 meters high. The mass of water needed to cover the entire Earth to that depth is 4.5 × 10^21 kg.

    If all that water came down from outer space, it would have to start out in the form of ice. The change in potential energy of all that ice raining down on the Earth works out to be 2.8 × 10^29 joules.

    If it came down evenly over the entire surface of the Earth during a period of 40 days, that is equivalent to 383 megatons of TNT going off every second for each and every square meter of the Earth’s surface; and that includes the surface of the Ark as well.

    On the other hand, if the water came up from inside the Earth’s lithosphere, it would have to have come from depths considerably greater than 8848 meters. The depths from which it came would be at temperatures of at least 250 degrees Celsius, and up to over 1000 degrees Celsius if the water was in contact with the Earth’s mantle. This would result in superheated steam boiling up out of the earth to release its heat and condense into a liquid that covered the entire Earth. Where did all that heat go?

    How did the Ark survive either of those scenarios?

    This is how we know that the vast majority of the followers of ID/creationism don’t have even a high school level of understanding of science. They cannot grasp the implications of such a simple calculation. Not one of them can even begin the calculation let alone understand it. Just watch them try to word-game it away without ever doing the calculation.
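    The two headline numbers in the comment above are easy to check. A back-of-envelope sketch (standard textbook constants; the scenario is the comment's, including water arriving from effectively infinite distance):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24  # mass of Earth, kg
R_earth = 6.371e6   # radius of Earth, m
rho_water = 1000.0  # density of water, kg/m^3
depth = 8848.0      # height of Mt. Everest, m

# Mass of a shell of water covering the Earth to Everest height.
area = 4 * math.pi * R_earth ** 2
mass = rho_water * area * depth
print(f"mass of water: {mass:.1e} kg")      # about 4.5e21 kg

# Energy released if that mass falls in from far away:
# gravitational potential energy G*M*m/R at the surface.
energy = G * M_earth * mass / R_earth
print(f"energy released: {energy:.1e} J")   # about 2.8e29 J
```

    Both values reproduce the figures quoted above, so the "ice from space" scenario really does imply an energy release of that magnitude.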

  45. Preserving their ability to believe in a global flood is just one of the reasons why it is so vital that they misunderstand thermodynamics.
     
