An Invitation to G Puccio

gpuccio addressed a comment to me at Uncommon Descent. Onlooker, a commenter now unable to post there,

(Added in edit 27/09/2012 – just to clarify, onlooker was banned from threads hosted by “kairosfocus” and can still post at Uncommon Descent in threads not authored by “kairosfocus”)

has expressed an interest in continuing a dialogue with gpuccio, and petrushka comments:

By all means let’s have a gpuccio thread.

There are things I’d like to know about his position.

He claims that a non-material designer could insert changes into coding sequences. I’d like to know how that works. How does an entity having no matter or energy interact with matter and energy? Sounds to me like he is saying that A can sometimes equal not A.

He claims that variation is non-stochastic and that adaptations are the result of algorithmic, directed mutations. Is that in addition to intervention by non-material designers? How does that work?

What is the evidence that non-stochastic variation exists or that it is even necessary, given the Lenski experiment? Could he cite some evidence from the Lenski experiment that suggests directed mutations? Could he explain why gpuccio sees this and Lenski doesn’t?

It’s been a long time since gpuccio abandoned the discussion at the Mark Frank blog. I’d like to see that continued.

So I copy gpuccio’s comment here and add a few remarks hoping it may stimulate some interesting dialogue.

To Alan Fox: I am afraid you miss the points I made, and misrepresent other points.

I confess to many faults, among them reading too fast and typing too slowly. I also don’t have a good memory and don’t recall recently addressing any remarks to you other than this:

But, gpuccio, do you not see that Lenski was only manipulating the environment? The environment in this case, as in life in general, is the designer. Lenski provided the empty niche. Eventually a lucky mutant turned up and flourished in that niche. Selection is not random.

so I am not sure how I can be misrepresenting you.

a) The environment is no designer by definition. And it is not even the first cause of the adaptation. The adaptation starts in the bacteria themselves, in the information that allows them to replicate, to have a metabolism, and to exploit the environment for their purposes. Obviously, changes in the environment, especially extreme changes such as those Lenski implemented, stimulate adaptation.

The environment designs by selecting from what is available. New genotypes arise by mutation, duplication, recombination, and so on. Adaptation is the result of environmental selection. I think you mean to say that there is some suggestion that stress conditions can stimulate hyper-mutation in bacteria. This creates more material for environmental selection to sift through.

b) That adaptation is a tweaking of the existing regulation of existing function. No new biochemical function is created. We can discuss if the adaptation is only the result of RV + NS (possible), or if it exploits adaptive deterministic mechanisms inherent in the bacterial genome (more likely). However, no new complex information is created.

If you are referring to the Lenski experiment, you are flat wrong here.

c) NS is not random, obviously.

Obviously! Glad we agree!

It is a deterministic consequence of the properties of the replicator (replication itself, metabolism, and so on) interacting with environmental properties. The environmental changes are usually random in regard to the replicator functions (because they are in no way aware of the replicators, except in the case of competition with other replicators). Anyway, the environment has no idea of what functions can or should be developed, so it is random in that sense. The environmental changes made by Lenski are not really random (he certainly had some specific idea of the possible implications), but I can accept that they are practically random for our purposes. What we observe in Lenski’s experiment is true RV + NS OR true adaptation. I don’t think we can really distinguish, at present. Anyway, it is not design. And indeed, the result does not have the characteristics of new design.

Whilst I find your prose a bit dense here and somewhat anthropomorphic (awareness in replicators, environments having ideas), I can’t see much to argue with.

d) NS is different from IS (intelligent selection), but only in one sense, and in power:

d1) Intelligent selection (IS) is any form of selection where a conscious intelligent designer defines a function, wants to develop it, and arranges the system to that purpose. RV is used to create new arrangements, where the desired function is measured, with the maximum possible sensitivity, and artificial selection is implemented on the basis of the measured function. Intelligent selection is very powerful and flexible (whatever Petrushka may think). It can select for any measurable function, and develop it in relatively short times.

d2) NS is selection based only on the fitness/survival advantage of the replicator. The selected function is one and only one, and it cannot be any other. Moreover, the advantage (or disadvantage, in negative selection) must be big enough to result in true expansion of the mutated clone and in true fixation of the acquired variation. IOWs, NS is not flexible (it selects only for a very tiny subset of possible useful functions) and is not powerful at all (it cannot measure its target function if it is too weak).

Those are the differences. And believe me, they are big differences indeed.

I think I have to accuse you of reification here. What is “intelligent selection” with regard to evolutionary processes?

By the way, just to tease Petrushka a little, it is perfectly possible to implement IS that works exactly like NS: we only need to measure reproductive fitness as our desired function. That’s exactly what Lenski did. Lenski’s experiment is, technically, an example of intelligent selection that aims to imitate, as much as possible, NS (which is perfectly fine).

My response would depend on whether and how you can define or identify the process you call “intelligent selection”. I suspect Petrushka can speak for herself!

_________________________________________________________________________

Any interested party is invited to comment. In Lizzie’s absence, I can only approve new commenters in threads I author. I am sure matters will be regularised soon. I hope gpuccio will find time to visit, as unfortunately not many of us are able to comment at Uncommon Descent. Remember Lizzie’s rules and assume others are posting in good faith.

(Added in edit 22nd September 2012)

Gpuccio replies:

To Alan Fox (on TSZ):

Thank you for your kind invitation to take part in The Skeptical Zone. My past experiences in similar contexts have been very satisfying, so I would certainly like to do that.

Unfortunately, I am also aware of how exacting such a task is on my time, and I don’t believe that at present I can do that while still posting here at UD.

So, for the moment I will try to answer the main questions you raise there about my statements here, so that they are also visible to UD commenters, who after all are those to whom I feel most committed. I hope you understand. And anyway, it seems that you guys at TSZ are reading UD quite regularly!

I would just point out to Petrushka that it is not a question of “courage”, but of time: I have already done that (discussing things on a darwinist friendly forum) twice, and my courage has not diminished, I believe, since then.

No problem, gpuccio. I’ll paste any comments from you that I see that are directed at TSZ as I get the chance.

88 thoughts on “An Invitation to G Puccio”

  1. My questions are fairly simple:

    I would like to know how the designer overcomes the problem of emergence–the inability of chemists to predict the properties of molecules from the properties of constituent atoms. GP could illustrate overcoming this barrier with a simple molecule like water.

    Of course, biological molecules are a bit larger, and I would like to know how the designer acquired his knowledge of the properties of coding strings and how he indexes them by function and by need in changing environmental niches.

    Some of my other questions are listed in the OP.

    I’m particularly interested in how a designer fixes mutations that have no obvious somatic effect, but which become important later.

  2. gpuccio,

    I’m glad you’re responding at UD and hope you will choose to join us here. I understand your point about time constraints, but you should also consider the fact that this is a more open forum where neither you nor other participants will be arbitrarily banned. To provide a little context, here’s what I was responding to in the UD thread that began our conversation (your original words are nested twice, my response once):

    Any string that exhibits functional complexity higher than some conventional threshold, which can be defined according to the system we are considering (500 bits is a UPB; 150 bits is, IMO, a reliable Biological Probability Bound, for reasons that I have discussed) is said to exhibit dFSCI.

    Okay, so dFSCI is a true/false value based on the calculated functional complexity.

    It is required also that no deterministic explanation for that string is known.

    Now this is problematic. You seem to be defining dFSCI as a measure of ignorance. If you calculate the functional complexity of a string of unknown provenance and conclude that it meets the threshold for dFSCI, why would that calculation suddenly be moot if you learn how the string was created? Further, if a person designs an object deterministically, does it not have dFSCI? Maybe I need to understand what you mean by “deterministic” better.

    dFSCI cannot be created by unguided evolution.

    Well, depending on what you mean by “deterministic”, that may be true by definition. That wouldn’t be particularly interesting, though.

    I’ll respond to your latest comment to me in my next comment on this thread.

  3. gpuccio,

    From your most recent comment to me at UD:

    In that case, I think you have a fundamental problem because you are defining dFSCI such that only “non-deterministic” mechanisms can create it. Just so I’m clear, do you consider evolution (random mutations of various types, differential reproductive success, neutral drift, etc.) to be deterministic? If so, dFSCI doesn’t distinguish between “designed” and “non-designed” but between “known to be designed”, “known not to be designed”, and “unknown”. And just to be further painfully clear, would you agree that deterministic mechanisms can create functional complexity of more than 500 bits, by your definition?

    I will try to be more clear. In the definition of dFSCI, the exclusion of deterministic mechanisms is meant to exclude those cases of apparent order or function that can be explained as the result of known physical laws.

    This still sounds like you are explicitly defining dFSCI such that it cannot be created by any known process. This makes it impossible to use as an identifier of intelligent design because you would first have to eliminate all possible other causes.

    It also still remains an argument from ignorance. If you observe an object that has the necessary level of functional complexity and conclude that it has dFSCI, finding out more about its provenance later could change the conclusion from “dFSCI is present” to “dFSCI is absent” without any change in the measured functional complexity. That means that all dFSCI indicates is “we know how this was made” or “we don’t know how (or if) this was made”. This is problematic because of what I asked before:

    Are your concepts of functional complexity and dFSCI intended to be used to identify design where it is not known to have taken place or merely to tag design where it is known to have happened?

    Obviously the first option.

    dFSCI as you define it cannot be used to identify design where it is not already known to have taken place.

    a) The concept of dFSCI applies only to the RV part. What I mean is that dFSCI tells us if some step that should happen only by RV is in the range of the probabilistic resources of the system. As I have said, 150 bits (35 AAs) are more than enough to ensure that a single step of that magnitude will never happen. Empirically, as shown by Behe, anything above 3 AAs is already in the field of the exceptional.

    b) NS instead is a special form of deterministic effect, mainly due to the properties of replication itself, and partly to environmental factors that interact with replication. dFSCI has nothing to say about NS. The modeling of the NS effect must be made separately.

    What you seem to be saying here is that dFSCI is present if a single mutation generates more than 150 bits of functional complexity. Is that the case? Would you consider a gene duplication event of more than 75 bases to be such a mutation?

    If you do consider such (observed) duplication mutations to constitute dFSCI, then it is obvious that evolution can generate it. If you, for whatever reason, exclude such mutations, it suggests to me that dFSCI is defined deliberately to exclude any evolutionary process because evolution overwhelmingly tends to only explore regions of genotype space very close to points known to be viable.

    Before I make way too many assumptions about your position, let me close with a question: Does dFSCI only exist when a single change of more than 150 bits of functional complexity takes place or does it exist if that change takes place in multiple steps?
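    As an aside, the “150 bits (35 AAs)” equivalence quoted above presumably comes from counting roughly log2(20) bits per amino-acid position; a quick check of that arithmetic (my reconstruction, not gpuccio’s calculation):

    ```python
    import math

    bits_per_residue = math.log2(20)   # 20 possible amino acids per position
    print(bits_per_residue)            # ~4.32 bits
    print(150 / bits_per_residue)      # ~34.7, i.e. about 35 residues
    ```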

     

  4. Pasting the part of gpuccio’s comment addressed to me at Uncommon Descent:

    Alan Fox:

    I confess to many faults, among them reading too fast and typing too slowly. I also don’t have a good memory and don’t recall recently addressing any remarks to you other than this:…

    I did not intend to criticize you in any way. I like the way you express things. I only meant that your comment, IMO, seemed not pertinent to what I had said about Lenski. You say: “But, gpuccio, do you not see that Lenski was only manipulating the environment?” But I had never denied that. So I wrote: “I am afraid you miss the points I made”, in the sense that I had said exactly what you were inviting me to “see”. And I apologize for the “misrepresent” word: “and misrepresent other points” is probably not a brilliant way to express it, but I was not referring to you misrepresenting me, but to you misrepresenting the role of the environment as designer in the second part of your phrase: “The environment in this case, as in life in general, is the designer”, which indeed I commented upon. So, I apologize if I gave the impression that I was saying that you were misrepresenting me.

    The environment designs by selecting from what is available.

    Well, in my use of words that is not design. I have given an explicit definition of design, to avoid confusion. And anyway, the environment just interacts with the replicators. My point is that NS is the result of an interaction between replicators, with their biological information, and the environment. I don’t think that this point is really questionable.

    If you are referring to the Lenski experiment, you are flat wrong here.

    Yes, I am referring to the Lenski experiment there. Why am I wrong?

    I think I have to accuse you of reification here. What is “intelligent selection” with regard to evolutionary processes?

    I would invite you to reread what I wrote: “d1) Intelligent selection (IS) is any form of selection where a conscious intelligent designer defines a function, wants to develop it, and arranges the system to that purpose. RV is used to create new arrangements, where the desired function is measured, with the maximum possible sensitivity, and artificial selection is implemented on the basis of the measured function.” I would think that is a clear definition. And, obviously, it has nothing to do with unguided evolutionary processes. It is a form of intelligent design. “d2) NS is selection based only on the fitness/survival advantage of the replicator. The selected function is one and only one, and it cannot be any other. Moreover, the advantage (or disadvantage, in negative selection) must be big enough to result in true expansion of the mutated clone and in true fixation of the acquired variation.” NS has to do with unguided evolutionary processes. What is wrong with that? “Those are the differences. And believe me, they are big differences indeed.” I maintain that. Where is the reification?

    My response would depend on whether and how you can define or identify the process you call “intelligent selection”.

    I believe I had done exactly that.

  5. Thanks for the response, gpuccio and no offence taken on misrepresentation. I’d like to expand on what constitutes design. A problem that often arises in discussion is miscommunication. I accept that saying the environment acts as the designer in evolutionary processes may not coincide with your idea of “design” as applied to the diversity of life. I say:

    The environment designs by selecting from what is available.

    because that is exactly how I visualise the process of evolution. Variation arises in the gene pool by mutations, duplications, recombination and so forth, and variations that are not immediately deleterious are then available for the environment to filter. The environment is dynamic and multi-dimensional. It is climate, weather, diurnal, seasonal, catastrophic and infinitesimal, plate tectonics, black smokers and reproductive isolation. It’s intra-species competition, inter-species competition, predators, prey, parasites, hosts, symbionts and symbiogenesis. I see it mainly as a passive process, organisms inexorably being honed to better fit the niches they haphazardly tumble into. This seems very noticeable watching plants recolonising cleared land, say after a forest fire, or weeding my garden. The most convincing evidence that this is what happens is that we always find organisms making a living in places to which they are supremely well adapted and dead in places to which they are not. If this isn’t design, I don’t know what is! There is also the element of parsimony here: nothing beyond natural selection is needed to account for the amazing adaptations we find among living and extinct species; they are a predicted result of the theory.

    But “design” by the environment is only a word I am using as a shorthand for the process that is natural selection. Maybe when you use design you are not thinking about the lockstep between organism and niche. By the way, though myself an atheist and not finding the need of a religious explanation for life, the universe and everything, I wonder why the idea that God could create through natural processes is anathema to (for instance) Biblical literalists.

    Anyway to your definition of “design”. “Design” (plus designer etc) appear nearly 1,000 times in the UD thread comments and I did not spot where you define “design”, so I would be most grateful if you could point me to your definition. 

    Unless you mean this

     Intelligent selection (IS) is any form of selection where a conscious intelligent designer defines a function, wants to develop it, and arranges the system to that purpose.

    in which case, I’m even more in the dark! ;)

  6. Continuing:

    gpuccio wrote:

    b) That adaptaion is a tweaking of the existing regulation of existing function. No new biochemical function is created. We can discuss if the adaptation is only the result of RV + NS (possible), or if it exploits adaptive deterministic mechanisms inherent in the bacterial genome (more likely). However, no new complex information is created.

    and I said

    If you are referring to the Lenski experiment, you are flat wrong here.

    gpuccio:

    Yes, I am referring to the Lenski experiment there. Why am I wrong?

    Because a new strain of E. coli arose by variation that was able to digest citrate. That strain bloomed in the niche provided by Lenski’s flasks. How on Earth can the novel ability to digest citrate not be a new biochemical function?

    Where is the reification?

    “Intelligent selection” does not seem to have been in general use as a phrase linked to ID before I came across it in your comment. Using it so confidently, you seem convinced such a concept is meaningful. I.e., it seems real to you. Notwithstanding your definition – Intelligent selection (IS) is any form of selection where a conscious intelligent designer defines a function, wants to develop it, and arranges the system to that purpose. – I am unconvinced such a process exists.

    My response would depend on whether and how you can define or identify the process you call “intelligent selection”.

    I believe I had done exactly that.

    Well, no. Selection in an evolutionary context is the same process. There is no distinction between artificial and natural selection. In plant and animal husbandry, the plant or animal breeder is a very important part of the environment; the selection process is not different in kind. I don’t see what you are driving at with “intelligent” selection, unless you are bringing in imaginary forces or actions. If so, fine, and we have to agree to disagree on the existence of imaginary intelligent designers.

  7. gpuccio

    To Alan Fox (on TSZ):

    No, that was my definition of Intelligent Selection. My definition is in post #5. I paste it here, but if you read the original post you will find other brief, useful definitions of mine:

    a) Design is the act by which conscious intelligent beings, such as humans, represent some intelligent form and purposefully output that form into some material system. We call the conscious intelligent being “designer”, and the act by which the conscious representation “models” the material system “design”. We call the material system, after the design, a “designed object”.

  8. Thanks for the clarification, gpuccio.

    I don’t find your definition very much help, as it throws us back onto what is meant by “intelligent” and you include the phrase “such as humans”. I first came across “intelligent design” as a phrase around mid-2005 on encountering an ID proponent in a general discussion forum. I was intrigued enough to follow the links provided and ended up at Uncommon Descent (at the time still operated and moderated by Bill Dembski). I registered and submitted a comment asking for a definition of “Intelligent Design”. No comment appeared and my registration wouldn’t work. Thinking it was a glitch, I attempted to register several more times before suspecting I was not going to get an answer.

    Sorry for the digression but I am still doubtful a clear and consistent explanation of “Intelligent Design” exists in a scientific context. There is certainly no clear scientific definition of “intelligence” other than just an ad hoc comparative. You seem to suggest by saying “material output” that an intelligent designer’s input is immaterial. Science works by observing and postulating regularities, conservation of mass/energy, action and reaction equal and opposite. Could not ID scientists look for a material output with no material input? Our designer could tinker by loading the dice on variation events, I guess, but how would we tell?

  9. @ mung

    I agree that “Natural Selection” is a poor descriptive for differential survival of alleles. I propose “Environmental Design”. 

  10. The point of the word “natural” is that populations get shaped regardless of whether humans are managing the breeding or not. What GP fails to realize is that “natural” selection integrates hundreds of dimensions simultaneously. Something humans have never mastered.

    When humans try to manage selection they tend to focus on one or two traits and wind up with weak (inbred) populations that cannot survive without continuous human intervention. This happens over and over in our pets and in our crops.

    It’s why we have potato famines, why our sweet bananas are likely to go extinct in the near future, and why many purebred animals are sickly. Intelligent selection simply isn’t as clever as selection that sees all dimensions of fitness simultaneously.

  11. petrushka: “Intelligent selection simply isn’t as clever as selection that sees all dimensions of fitness simultaneously. “

    Yes, and it’s a point that should be pressed.

    ID never seems to argue multiple on-going threads of “intelligent design”.

    Any “designer” that can “fine-tune” the universe should be able to multi-task.

     

  12. Another thing everyone should have learned by eighth grade is the purpose of limiting and controlling variables in an experiment (such as Lenski’s). Gpuccio’s sneering at such limitations indicates a profound ignorance of method. Perhaps he never participated in a science fair.

    At any rate, limiting the variables allows checking for things like non-stochastic mutation. It also allows replication of the experiment.

  13. Gpuccio: in the first 20,000 generations of the Lenski experiment, what were the desired properties, and how was the selection of them directed?

  14. gpuccio,

    To onlooker (on TSZ):

    I appreciate your willingness to continue this discussion around kairosfocus’ censorship, but wouldn’t it be easier for you to come here? Everyone at UD, with the exception of one person banned for vulgarity, can comment here. Almost no one here is welcome at UD.

    In any case, to the discussion.

    No, dFSCI can be used to infer design.

    Okay, in that sense your dFSCI is similar to Dembski’s CSI. It differs in that Dembski claims to be able to detect design without knowing anything about the provenance of the artifact under consideration.

    Our empirical experience says that dFSCI is always associated to design.

    We haven’t got to empirical observations yet. I’m still trying to repeat back to you your definition of dFSCI in a form you agree matches your intended meaning. Once we have an agreed definition, we can look for empirical evidence. This is my understanding thus far:

    Functional Complexity: The negative base-2 logarithm of the ratio of the number of digital strings that encode a particular function to the total number of digital strings of the same length, expressed in bits.

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    Is this accurate?
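    To make that reading concrete, here is a minimal sketch in Python of the two definitions as restated above (the function names, the 150-bit default and the example numbers are mine, purely for illustration, not gpuccio’s notation):

    ```python
    import math

    def functional_complexity_bits(n_functional, length_bits):
        """Bits of functional complexity: -log2 of the fraction of all strings
        of the given length that perform the defined function."""
        return length_bits - math.log2(n_functional)

    def dFSCI(n_functional, length_bits, threshold_bits=150,
              deterministic_explanation_known=False):
        """Boolean indicator per the definition restated above: functional
        complexity above the threshold and no known deterministic explanation."""
        if deterministic_explanation_known:
            return False
        return functional_complexity_bits(n_functional, length_bits) > threshold_bits

    # Toy example: if 2**100 of the 2**500 possible 500-bit strings are functional,
    # the functional complexity is 500 - 100 = 400 bits.
    print(functional_complexity_bits(2 ** 100, 500))  # 400.0
    print(dFSCI(2 ** 100, 500))                       # True
    print(dFSCI(2 ** 100, 500, deterministic_explanation_known=True))  # False
    ```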

    Eliminating known necessity explanations is simply dutiful: if someone can provide a credible necessity explanation, the concept itself of dFSCI in that object is falsified, and the necessity explanation becomes by default the “best explanation”. But if no credible necessity explanation is available, design becomes the best explanation, not by any logical principle, but because of the known empirical association between dFSCI and design.

    If you’re talking about artifacts that could conceivably have been created by humans, the only known source of dFSCI you have mentioned, then you may be able to make an argument. If you are talking about artifacts that humans could not possibly have created, then your dFSCI boils down to “We don’t know how it came about.”

    That is not an “argument from ignorance”

    I didn’t call it an argument from ignorance, I said it was only a measure of ignorance, not of design.

    The argument is empirical

    I don’t think that word means what you think it means. You can’t go from observations of humans to extrapolations of unknown, unevidenced designers and claim to have empirical support.

    Before I make way too many assumptions about your position, let me close with a question: Does dFSCI only exist when a single change of more than 150 bits of functional complexity takes place or does it exist if that change takes place in multiple steps?

    Definitely, as I have already said, it can take place in as many steps you like.

    Thanks for the clarification. I would like to apply your proposed design indicator to the CSI experiment that Lizzie organized here.

    The basic idea is to find a string of 500 bits representing coin flips such that the product of the lengths of the substrings containing consecutive heads is greater than 10^60.

    There are two approaches used to solve this problem. The first was for a human being to sit down and think about it, do a little math, and figure out that the optimal solution consists of a certain number of repeating sequences of heads broken up by single tails. The second is to use a genetic algorithm to model the application of simple evolutionary techniques to the problem. Both approaches led to a solution.
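    For concreteness, a small sketch of the target function and of one candidate of the “repeating heads broken up by single tails” form described above (the 'HHHHT' block pattern is my reconstruction for illustration, not necessarily the exact string either solver produced):

    ```python
    from itertools import groupby
    from functools import reduce

    def head_run_product(flips):
        """Product of the lengths of all runs of consecutive heads ('H')."""
        runs = [len(list(group)) for value, group in groupby(flips) if value == "H"]
        return reduce(lambda a, b: a * b, runs, 1)

    # 100 blocks of four heads, each followed by a single tail: exactly 500 flips.
    candidate = "HHHHT" * 100
    print(len(candidate))               # 500
    print(head_run_product(candidate))  # 4**100, roughly 1.6e60, above the 1e60 target
    ```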

    I have three questions about this in relation to dFSCI:

    1) Do you agree that a string representing a solution has functional complexity in excess of 150 bits?

    2) Do you consider the GA approach to be deterministic, even though it includes a random component?

    3) If the human and the GA come up with the same string, does the string generated by the human have dFSCI and the string generated by the GA not have dFSCI, according to your definition?

     

  15. As I understand it, gpuccio has stated that:

    a) The concept of dFSCI applies only to the RV part. What I mean is that dFSCI tells us if some step that should happen only by RV is in the range of the probabilistic resources of the system. As I have said, 150 bits (35 AAs) are more than enough to ensure that a single step of that magnitude will never happen. Empirically, as shown by Behe, anything above 3 AAs is already in the field of the exceptional.

    b) NS instead is a special form of deterministic effect, mainly due to the properties of replication itself, and partly to environmental factors that interact with replication. dFSCI has nothing to say about NS. The modeling of the NS effect must be made separately.

     

    OK, so if I understand correctly, whenever we see FCSI which is so great that a pure mutational process could not plausibly explain it, even once in the history of the Universe, we cannot yet say that RV+NS cannot explain it. We can only say that the RV part, acting alone, cannot explain it.

    So gpuccio still has the task before him of showing that differences of fitness (together with the RV) cannot explain the evolution of the FCSI.

    gpuccio, are you relying on William Dembski’s Law of Conservation of Complex Specified Information to establish that? Or something else?

     

  16. Eine kleine Physik Lektion (at a high school level) for ID/creationists:

    The ratio between the electrical potential energy and the gravitational potential energy between two protons is 1.24 × 10^36.

    The charge-to-mass ratio for a proton is 9.58 × 10^7 C/kg.

    If 10-gram-sized objects like dice and coins had the same charge-to-mass ratio as protons, the potential energies of interaction among them at something like 10 cm separations would be on the order of 10^23 J. This energy is on the order of 10^7 megatons of TNT.
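    Those figures are easy to check from standard constants; a rough back-of-the-envelope sketch (constant values rounded, variable names mine):

    ```python
    k  = 8.988e9    # Coulomb constant, N*m^2/C^2
    G  = 6.674e-11  # gravitational constant, N*m^2/kg^2
    e  = 1.602e-19  # proton charge, C
    mp = 1.673e-27  # proton mass, kg

    print(k * e**2 / (G * mp**2))   # ~1.24e36, electric-to-gravitational ratio
    print(e / mp)                   # ~9.58e7 C/kg

    # Two 10 g objects carrying the proton's charge-to-mass ratio, 10 cm apart:
    q, r = (e / mp) * 0.010, 0.10
    U = k * q**2 / r                # ~8e22 J
    print(U, U / 4.184e15)          # ~10^23 J, i.e. roughly 2e7 megatons of TNT
    ```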

    Why do ID/creationists use dice, coins, and alphabets – objects which don’t interact among themselves with these kinds of energies – to calculate probabilities that “prove” that proteins and complex molecules have such a low probability of forming that intelligence is required to assemble them?

    How do probabilities associated with non-interacting objects prove that random variation in the presence of natural selection cannot do the job of evolution?

    Even high school students would be puzzled by ID/creationist pre-eighth grade “proofs”.

  17. Joe: “Mr Mike, the fact that there is that electrical potential energy and gravitational potential energy between two protons is evidence for ID. The same for the charge-to-mass ratio for a proton. “

    This is without doubt the single most powerful Creationist argument for God I have ever heard.

    It’s clean, testable and the conclusion stands on its own.

    After all, if God designed the “Energizer Bunny”, he must have created the real ones too! :)

     

  18. gpuccio’s reply (at least, the first few paragraphs of it, the whole thing is comment 497 in this thread at Uncommon Descent) was:

    That is naturally true. NS, or whatever we want to call it, is only a side effect of biological reproduction, so it needs complex biological replicators just to exist.

    And anyway, even considering the huge amount of dFSCI already existing, for example in LUCA, that was probably nothing more and nothing less than a prokaryote similar to those we can observe now, that cannot explain the successive emergence of new information throughout the whole natural history.

    That’s why I focus my arguments on the emergence of basic protein domains, just as Axe does. I am perfectly aware that they are not all: a lot of other cases for dFSCI and design in biological beings could be made and will be made: regulation systems, body plans, irreducibly complex molecular machines, and so on.

    OK, so yours is an argument about the improbability of the Origin Of Life and about the emergence of things like protein domains.  And these are based on the improbability of the mutations arising that are minimally needed for those.

    So I gather that the mere presence of FCSI (which would be fitness so high that a pure mutation process could not plausibly bring it about) does not indicate that evolution could not accomplish this.  As long as the individual mutations could arise and then successively have their gene frequencies changed owing to their improved fitness, you could get arbitrarily high amounts of SI.

    Given that, I suggest you not phrase your argument in terms of the amount of FCSI.  The amount isn’t the issue; it is whether the increments of change can occur as changes of gene frequency or whether one would have to wait far too long for the required mutations.  In short, in spite of the terminology, yours is a Michael-Behe-style argument rather than a William-Dembski-style argument.

  19. Joe,

    kairosfocus: “As I have said, Jerad can seek whatever help he wants to compose that 6,000 or so word essay.”

    Does that mean KF is prepared to write an essay on the abilities of the designer to actually do what ID claims he can, and that is design for “unseen future functionality”?

     

  20. As near as I can tell, none of them has any significant understanding beyond 8th grade science, if even that.  Nor do they seem to read even their own gurus like Dembski and Abel who actually make such ludicrous probability calculations and assertions.  Discussions with them are pointless and go nowhere.

  21. We bother engaging with them because if no one does, one day our kids will come home with an A grade in Intelligent Design.

    It’s a serious issue that won’t go away.

    I don’t want to see a religious fundamentalist group deciding what gets taught outside of their churches.

    I also believe we’re slowly losing.

     

  22. Oleg said: This makes me wonder why people here bother engaging the ID fans at all.

    I agree. I’m not sure what these threads are trying to accomplish. On the other hand, I am not really familiar with the history of the interactions between the individuals here and those over at UD. I started looking more closely at some of the UD people only within the last year. I’ve been traveling and immersed in other things; and they don’t strike me as very interesting or novel. They are very strange, however.

    The people over at UD don’t respond in any way that suggests any understanding of a scientific point or of scientific evidence. All that copy/paste stuff is a dead giveaway. I get the impression that the people over there haven’t had even the early childhood experiences with things like magnetic marbles and beads.

    Experiences like that provide a background upon which physics and chemistry can be taught. One would not try to calculate the probabilities of molecular assemblies the way Dembski and Abel do if they had the vaguest hint of what chemistry and physics teaches us about atomic and molecular assemblies.

    As it is, their camp followers have no idea what anyone is talking about when someone points to the fields of condensed matter and chemistry as counterexamples to Dembski’s or Abel’s probability calculations. Their responses are complete, uncomprehending non-sequiturs.

    Toronto said: I also believe we’re slowly losing.

    Some of what is happening in public education can be blamed on the ID/creationists, but not all. There is a broader ideology that totally rejects the social contracts that large societies must maintain among their members in order to coordinate their activities and survive. With the support of extreme sectarians, these ideologues have managed to disrupt much of what needs to be maintained in our society; and that includes public education.

    Rejecting the social contracts and tearing down the structures that maintain fairness and justice also contributes to poverty, which in turn puts more stress on public education.

    But that in no way excuses ID/creationist disruptions of education. They are serious participants in this ideology that rejects secular society and social contracts.

  23. olegt: “This makes me wonder why people here bother engaging the ID fans at all”

    gpuccio: “That’s exactly what I too wonder about.”

    We do it because it shows how “religion-based” your arguments are.

    Whenever someone exposes your faulty logic and complete misunderstanding of evolution over on UD, they get banned.

     

     

  24. gpuccio: “Ah, now I understand! It’s a moral issue (we are the bad guys), and nobly motivated by your deep love for innocent children.”

    No, you just have fears you can’t handle.

    If there is no God, who will tell you “right” from “wrong”?

    If there is no God, then you’re “just an animal” with no purpose.

    So you fight to create a “social order” with a very big emphasis on “order”.

    What you need to accept is that as an adult, you make all the decisions and you’ll be held accountable for all of them.

    You can’t hide behind religious dogma.

    Ask Dembski why he had to “re-evaluate” the Noah’s Ark story.

    It wasn’t his idea.

     

  25. gpuccio,

    Mung: “ID critics at TSZ are known and established liars. “

    As long as your side has Mungs and Joes, we stand a chance! :)

  26. Upright BiPed: “GP meets the physicist Mike Elzinga”

    Elizabeth Liddle’s very own Benito Mussolini…

    UB shows class in his second straight loss with his “Semiotic Theory Of Genesis”.

     

  27. Sheesh! :-)

    The calculations and the point of the calculations went right over their heads; yet it appears to have made them really mad.

    It certainly demonstrates how these kinds of sectarians can nurse hatreds until they are completely blind.  They even hate people who can understand high school level science and can do math.  How much more they must hate PhDs who actually do science.

    A number of years ago there were some street thugs in East Rochester, NY who ambushed and beat up kids coming home from school just because the kids were carrying books.  Apparently we are seeing the sectarian equivalent over at UD.

  28. Don’t get me wrong, I can see the entertainment value. It’s hilarious when these guys try to parse the word salad of that fearsome retired veterinarian and renowned ID scholar David L. Abel. Or when they learn physics from ba77.

    But a threat to public schools? After Dover? You’ve got to be kidding. 

  29. Folks, if Elizabeth were active here she would send the lot of you to the Sandbox.  Your material on people’s personal and political motivations is irrelevant to the gpuccio discussion.  The same thing is happening at UD where everyone here is being called liars.

    I am trying to have a discussion with gpuccio but the noise level here looks to be a problem. 

  30. Over at UD mung has posted a link to some sort of response to the points I made here.

    Alas, not only is the link non-working, but a peek at the HTML source of that comment shows the link text to be entirely absent. (I had hoped that I would find something like a missing quotation mark and be able to copy the link out from the HTML source, but no such luck.)

  31. gpuccio commented, in comment 509 at the UD thread:

    (quoting me:) Given that, I suggest you not phrase your argument in terms of the amount of FCSI. The amount isn’t the issue; it is whether the increments of change can occur as changes of gene frequency or whether one would have to wait far too long for the required mutations. In short, in spite of the terminology, yours is a Michael-Behe-style argument rather than a William-Dembski-style argument.

    (gpuccio’s response:) More or less, that is correct. But, style apart, the essence is similar.

    I certainly share with Behe the biology centered approach, but Dembski’s concepts are fundamental both for Behe and for me.

    But you are right, Dembski is pursuing more of a pure logico-mathematical approach (that is obviously natural for him, and certainly very interesting). His approach can be extremely stimulating to get to some universal formal theory of CSI, and his analysis of GAs is very useful to debunk many evolutionists’ myths.

    What Behe, Axe and others are trying to do is to apply Dembski’s simple original concepts to an empirical analysis of the biological issue. For that, the original explanatory filter is practically enough, but a lot of detailed analysis of the neo-darwinian theory is necessary to show that it is in no way a credible random + deterministic explanation. My personal approach is similar, probably more centered on the empirical concept of “conscious intelligent beings”.

     

    The original explanatory filter stated that if a certain level of high adaptation (in effect) was seen, this could not be explained by RV+NS.  You have dropped the NS part entirely.    You seem not to be using the Law of Conservation of Complex Specified Information that was the essential part of Dembski’s argument.

    If the requisite mutations occur, and one by one are fixed by natural selection owing to their fitnesses, then your dFCSI can in fact be achieved.  So just calculating it gets you nowhere.  Dembski’s argument had his LCCSI which he argued prevented both RV and NS from achieving the required level of Specified Information.

    Basically you aren’t using the LCCSI. Which is a wise step because it turned out to be wrong. You instead add on Behe’s argument about Irreducible Complexity. I don’t see what any calculation of dFCSI accomplishes beyond that.

  32. I second this.  The moderation policy at UD has meant that this blog has become a sort of parallel discussion. It would be easier if it were one blog but it works in a fashion. 

    Let’s make sure this blog does not turn into another AtBC. I am not saying that there isn’t a place for AtBC as well – but it’s a different place. UD combines genuine criticism with more personal stuff  – but it works better if they are separate.

     

  33. Joe Felsenstein,

    You can have zero noise if you like if you can convince gpuccio to come here on the condition that everyone else stays out of it.

    I’m willing to do that just to see a fair discussion.

    The rest of us, and that includes those from UD, will simply refrain from commenting and allow you two to have an unmoderated debate.

    Ask gpuccio if this is acceptable.

     

  34. gpuccio,

    We haven’t got to empirical observations yet. I’m still trying to repeat back to you your definition of dFSCI in a form you agree matches your intended meaning. Once we have an agreed definition, we can look for empirical evidence. This is my understanding thus far:

    OK, but we will soon need empirical evidence in our discussion. Please remember that the connection between dFSCI and design is purely empirical.

    I would be delighted to discuss empirical observations. That does require clarity on the terms we’re using to describe those observations, though.

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    That’s fine, but please remember that the threshold of 150 bits is my personal proposal for realistic biological systems on our planet.

    Understood. What’s important at this point is that I understand that functional complexity is measured in bits while dFSCI is a boolean indicator. The question that remains is: An indicator of what?

    The reasoning goes this way: we define design starting from what we observe in human design.

    This does not appear to be the case. You define dFSCI as a boolean indicating whether or not an artifact was designed. You base this on the functional complexity, measured as the minimal number of bits required to describe some function of the artifact plus the lack of a known deterministic mechanism responsible for the artifact.

    You are still not measuring “design” in any objective sense. You are measuring functional complexity, by your definition, and in addition measuring your knowledge about the provenance of the artifact.

    Now, let’s remain in the field of possible human artifacts. If we apply the design inference to any material object that could be a human artifact, but of which at present we don’t know the origin, we will see that, in all cases, if we use the correct method (for example dFSCI with 500 bits of threshold, just to be sure for the whole universe as a system) and if we can afterwards ascertain the origin of the object, we will easily confirm that the method has no false positives, and that all objects exhibiting design are human artifacts, as independently ascertained after the inference.

    I don’t believe you can do this with the 100% accuracy you claim. Another poster here has already pointed out the difficulty with determining whether or not a particular piece of stone or bone found in an archaeological dig is natural or a tool.

    And what about natural objects that are not human artifacts? It is simple. They all can be classified into two classes:
    a) Biological objects, which often (but not always) clearly exhibit dFSCI: genes, proteins.
    b) Everything else, which never exhibits it.

    Not true. There are many non-biological phenomena that exhibit functional complexity of more than 500 bits. I believe Lizzie referred to a beach somewhere in the UK where all the rocks are sorted by size. The Giant’s Causeway is another example.

    Further, you are assuming your conclusion in (a). You can only conclude dFSCI, by your own definition, if you know of no deterministic explanation. Assuming that you consider the modern evolutionary synthesis to be deterministic by your definition, the most you can say is that there are biological artifacts with functional complexity in excess of 150 (or 500) bits. You don’t know that those artifacts are designed because you have no evidence for any intelligent agent that was present when they came into existence.

    dFSCI remains a measure of ignorance, not of design.

     

  35. Joe F. writes:

    If the requisite mutations occur, and one by one are fixed by natural selection owing to their fitnesses, then your dFCSI can in fact be achieved. So just calculating it gets you nowhere.

    I discussed ‘dFSCI’ with gpuccio earlier this year and reached the same conclusion. There’s no point in calculating dFSCI, since it can only rule out ‘tornado in a junkyard’ scenarios which no one believes anyway.

    I wrote:

    gpuccio, By your own admission, dFSCI is useless for ruling out the evolution of a biological feature and inferring design. Earlier in the thread you stressed that dFSCI applies only to purely random processes:

    As repeatedly said, I use dFSCI only to model the probabilities of getting a result in a purely random way, and for nothing else. All the rest is considered in its own context, and separately.

    But evolution is not a purely random process, as you yourself noted:

    b) dFSCI, or CSI, shows me that it could not have come out as the result of pure RV.

    c) So, some people have proposed an explanation based on a mixed algorithm: RV + NS.

    And since no one in the world claims that the eye, the ribosome, the flagellum, the blood clotting cascade, or the brain came about by “pure RV”, dFSCI tells us nothing about whether these evolved or were designed. It answers a question that no one is stupid enough to ask. ["Could these have arisen through pure chance?"]

    Yet elsewhere you claim that dFSCI is actually an indicator of design:

    Indeed, I have shown two kinds of function for dFSCI: being an empirical marker of design, and helping to evaluate the structure function relationship of specific proteins.

    That statement is wildly inconsistent with the other two. I feel exactly like eigenstate:

    That’s frankly outrageous — dFSCI hardly even rises to the level of ‘prank’ if this is the essence of dFSCI. I feel like asking for all the time back I wasted in trying to figure your posts out…

    You have an obligation to make it clear in future discussions that dFSCI is utterly irrelevant to the “designed or evolved” question. In fact, since dFSCI is useless, why bring it up at all? The only function it seems to serve is as pseudo-scientific window dressing.

  36. gpuccio,

    1) Do you agree that a string representing a solution has functional complexity in excess of 150 bits?

    Maybe. Apparently, I should calculate how many strings of 500 bits have the defined property. I suppose it can be done mathematically, but please don’t ask me to do that. Let’s say, just for the sake of discussion, that only 2^100 strings have the defined property. Then apparently, the dFSCI of the output is 400 bits, which is enough to affirm dFSCI according to my threshold (which, indeed, is a threshold for biological systems, but that is not important here).

    Why do I say “apparently”? Because you are telling me (and I believe it) that the output can be given by a GA. So, the GA is a deterministic way to produce the output (even if it uses some random steps). That does not mean that there is no dFSCI in the output. It just means that the true dFSCI of the output is the functional complexity of the GA. About which I know nothing.

    So, to sum up:

    a) if the GA is more functionally complex than the output, we can simply consider the dFSCI of the output, 400 bits, and safely infer design for it. It can be designed directly or through the GA, that makes no difference.

    b) If the GA is less functionally complex than the output, then the functional complexity of the GA is the true functional complexity of the output, its Kolmogorov complexity. That’s what we must consider, because it is the minimum functional complexity that can explain what we observe. IOW, the GA could have arisen randomly with a probability given by its functional complexity, and then the output would come as a necessity consequence.

    The GA engine itself has nothing to do with the complexity of the string describing the solution, by your own definitions of “functional complexity” and “dFSCI”. Based on those definitions, and the calculations provided in the thread about the coin flipping problem, it seems we agree that the solution string does have functional complexity well in excess of 150 bits, approaching 500 bits.

    3) If the human and the GA come up with the same string, does the string generated by the human have dFSCI and the string generated by the GA not have dFSCI, according to your definition?

    I have already answered that in my answer to the first point.

    You did in part. You did not directly state whether or not the human generated string exhibits dFSCI while the GA generated string does not. Assuming for the sake of this example that the string does have the minimal amount of functional complexity required, do either or both have dFSCI by your definition?

  37. gpuccio September 24, 2012 at 2:31 pm:

    If you look at my older definitions of dFSCI, you will see that I used to say that the complexity we have to measure is the Kolmogorov complexity, given known deterministic explanations that can compress the complexity of the string. It is the same thing as saying that we must exclude deterministic explanations, or just take them into consideration if they are known and credible. Now I avoid using the concept of Kolmogorov complexity, just for simplicity. Anyway, the concept in itself is simple: dFSCI measures the probability of coming into existence by random variation. It implies, therefore, a separate evaluation of the influence of known deterministic effects. As I have done for NS.

    This guy needs to switch to decaf, pronto. If “dFSCI” boils down to Kolmogorov complexity (a term he now avoids “just for simplicity”) then any random string is guaranteed to be highly complex according to Kolmogorov’s definition as its minimal description is the string itself. An object can be highly complex in this sense of the word, but there is no reason to pin its origin on a designer.

  38. What does “coming into existence by random variation” mean? It sounds like tornado in a junkyard.

  39. What does “coming into existence by random variation” mean? It sounds like tornado in a junkyard.

    That’s right, which is why dFSCI is useless as an indicator of design.

    Gpuccio seems to be coming (finally!) to realize this. He attempts to redefine dFSCI in comment 553 at UD:

    To be clear:

    a) If I am not aware of the GA, I would only compute the target space and then the dFSCI.

    b) If the GA exists, and I know it, I would compute the dFSI of the GA too.

    The lower value between a) and b) is the dFSI of the string, independently of whether it was directly designed by a human, or indirectly through a GA.

    So the computed dFSCI is always an upper bound on the real dFSCI. Any computed dFSCI that exceeds the threshold will be a false positive if we later discover a GA with sufficiently lower dFSCI that can produce the string.

    In other words: dFSCI, even under gpuccio’s new definition, gives false positives. That makes it useless as an indicator of design.
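    Purely for illustration, a minimal sketch of that “take the lower value” rule and of how a later-discovered GA flips a positive call (the function name, threshold and bit values are mine, chosen only to show the behaviour):

    ```python
    def dFSI_bits(direct_bits, known_generator_bits=None):
        """The rule as quoted above: the string's dFSI is the lower of its directly
        computed functional complexity and that of any known generator (e.g. a GA)."""
        if known_generator_bits is None:
            return direct_bits                      # no GA known: upper bound only
        return min(direct_bits, known_generator_bits)

    THRESHOLD = 500

    before = dFSI_bits(520)         # GA not yet known: 520 bits, "design" inferred
    after  = dFSI_bits(520, 300)    # a 300-bit GA is later found: drops to 300 bits
    print(before > THRESHOLD, after > THRESHOLD)   # True, False -> a false positive
    ```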

    There are some other problems with gpuccio’s new formulation, but I’ll hold off on those. No point in bringing them up unless gpuccio can address the fatal problem of false positives.

  40. Actually, looking at GP’s response, it appears that he’s calculating the probability of a specific string arising. In other words, a specific target. Like weasel.

  41. There are some rather glaring sources of false positives.

    One is the existence of alleles, which demonstrates that there are functionally equivalent sequences nearby and that RMNS found them. This pretty much proves that evolution operates by exhaustively trying all nearby sequences.

    Another problem is the assumption that what is extant is something that was pre-specified rather than the current bottom of the pond. The fact that most species that have ever lived are extinct argues that there are no pre-specified forms.

  42. Henry Morris and Duane Gish would be very pleased to see how their memes are being expressed in their intellectual descendants over at UD. Every misconception and misrepresentation about science, from thermodynamics to atoms and molecules to the origins of life and onward to evolution, is expressed as clearly as a deadly genetic disease in Morris’s and Gish’s intellectual progeny over there.

    Even funnier, those UD characters – like abandoned street urchins – don’t know anything about their own intellectual ancestry.  They don’t know or understand the genetic and memetic markers that identify them; and they actually believe they are solving the “scientific” problems that Morris and Gish set out for them by introducing them into their intellectual memes and genes.  ID is their attempted “solution” to the pseudo-scientific memes that reside in their heads.

    Who says ID is not a morph of “scientific” creationism?  The paternity test screams otherwise; right over there at UD.  It’s in their intellectual DNA.

  43. Let me answer gpuccio’s replies to me out of order:

    In comment #537 in the UD thread you say (in its first part):

    I have just read your post, and I believe I have already answered in some detail in my previous post. I would only add that I don’t think that the explanatory filter did not take into consideration NS. NS is a deterministic explanation (or at least, an attempt at an explanation), so it must be considered. And falsified. Now I don’t want to speak for Dembski, I am not interested in who said what, but IMO the concept is clear. If you can explain the observed result by a credible non-design theory, be it purely random, purely deterministic, or a mix of the two, you have done the trick: the design inference is no longer warranted.

    So in other words, if we just observe 500 bits of dFCSI, that by itself does not establish that Design is responsible. Glad to see we are in agreement on that. Dembski had his Law Of Conservation of Complex Specified Information (LCCSI) which was supposed to be able to establish that natural processes (deterministic and/or random) could not generate that much SI.   If that theorem did the job for which it was designed, one could just invoke the presence of CSI and then conclude for Design. Alas, his theorem does not work (it is not proven, and also it is not formulated so as to be able to do the job even if it were proven).  However this has not stopped numerous Design advocates from citing the presence of CSI as proof by itself that Design operated. Where have they done this? All over UD many, many times.
     

    In the earlier reply, comment #534, gpuccio argued that in the case of the origin of new protein domains Design must be responsible, instead of natural selection, for which “The problem is, it does not work because those intermediates don’t exist”. In short, the Behe argument. Just observing 500 bits of SI does not do the job of establishing Design, because there could be cases with that much SI where it was put into the genome bit by bit by random variation and differential fitness.

    So the 500-bit-ness is not the issue at all. If there were not the required intermediates, a much smaller gain of SI would be inconceivable, while if there were enough intermediates, a much larger one would be conceivable. So why present this as if the amount of SI is somehow crucial?

    In another long comment of gpuccio’s, #548, he argues that one has to add into the calculation all the information needed to set up a replicating system in the first place, one which has metabolism.

    (gpuccio:) we are again in a situation where the active information already included in a designed system allows some, very limited, output of new information through random search couple[d] to the properties of the existing complex information (reproduction and metabolism), and some information derived from the environment. 

    In that situation you are in effect acknowledging that information, wherever it originally comes from, can get into the genome as a result of this process.  However there is no proof from you that the replication system degrades as this happens or that its capacity for putting more information into the genome in the future is diminished in some predictable way. You do indicate that you feel that the capacity to do so is “very limited” but again, that is basically Behe’s argument and does not derive simply from the concept of Specified Information. So there is no basis to consider that the future information content of the genome is limited by the amount of information in the replicating system.  Any such assertion would need to be proven.

  44. I should add that the replies to gpuccio that I just posted are very parallel to the arguments about GAs given just above by keiths and petrushka, and involve the same issues.

  45. So all this sophistication boils down to the old creationist argument: [something] looks complicated, so it must have been made by someone. Unless you figure out a natural explanation for [something], we’ll assume it was created.

    The bit-counting cargo cult was not invented by Dembski. Henry Morris of the Institute for Creation Research explained its origin:

    Dembski uses the term “specified complexity” as the main criterion for recognizing design. This has essentially the same meaning as “organized complexity,” which is more meaningful and which I have often used myself. He refers to the Borel number (1 in 10^50) as what he calls a “universal probability bound,” below which chance is precluded. He himself calculates the total conceivable number of specified events throughout cosmic history to be 10^150 with one chance out of that number as being the limit of chance. In a book written a quarter of a century ago, I had estimated this number to be 10^110, and had also referred to the Borel number for comparison. His treatment did add the term “universal probability bound” to the rhetoric.

    Been there, done that, got the T-shirt.

  46. olegt, I think Dembski’s main addition to the argument was his Law of Conservation of Complex Specified Information. If valid, it would rule out natural selection as a means of getting organisms far enough out on a fitness scale for there to be CSI.

    And it is self-evident that organisms are that high in adaptation — if all that was available was mutation (with no natural selection) there is no hope, even once in the whole history of the Universe, of producing a fish or a bird.

    But could it happen if you also have natural selection? The LCCSI was intended to rule that out. This was a pretty gutsy thing to put forward. It proposed to invalidate 100 years of work in theoretical population genetics. If the LCCSI had been valid, it would have been the greatest advance in thinking about evolution since Darwin, or maybe even greater than Darwin. I would have written the letter of recommendation for the Nobel Prize myself.

    Unfortunately …

    Dembski’s other two additions to the ID corpus are his use of the No Free Lunch argument and (with Robert Marks) his Search For a Search argument.

  47. gpuccio,

    So, in clear English, dFSCI is a property objectively observable in objects. Now, listen with attention, please: The connection between dFSCI and designed objects is purely empirical. We observe that all objects that exhibit dFSCI, and of which we can know the origin, are designed objects. No object of which we can know the origin, and which exhibits dFSCI, has a non-designed origin.

    That’s because you define dFSCI such that it does not exist if a deterministic cause is known:

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    There are non-designed objects that have functional complexity, by your definition, in excess of that required to demonstrate dFSCI, but because a deterministic cause is known, their dFSCI value is “false”. That’s fine, but you can’t pretend that you’ve identified any empirical observations when the result is simply a consequence of your definitions.

    You are still not measuring “design” in any objective sense.

    I have no intention of “measuring design”. I infer design. What I measure is FSI, and then I categorize it as a boolean, dFSCI.

    You are measuring functional complexity, by your definition,

    QED.

    and in addition measuring your knowledge about the provenance of the artifact.

    No. I just test known possible deterministic explanations.

    You do no testing. You assert that the value of dFSCI is “true” when you don’t know of a deterministic mechanism. This is inherent in your definition:

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    All you are identifying is your ignorance about an artifact’s provenance.

     

  48. gpuccio,

    The GA engine itself has nothing to do with the complexity of the string describing the solution, by your own definitions of “functional complexity” and “dFSCI”.

    Wrong. It has all to do with it. The GA is a deterministic explanation for the string, and therefore a compression of its complexity. If the GA is simpler than the string, it can more likely emerge randomly, and then the string comes automatically. So, in that case, the dFSI of the system is the dFSI of the GA. Is it so difficult to understand?

    I’m just going by your definitions:

    Functional Complexity: The ratio of the number of digital strings that encode a particular function to the number of digital strings of the same length.

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    The provenance of the string has nothing to do with its functional complexity, according to your definition. The fact that it was generated by a GA, which is a deterministic mechanism again by your definition, means that its dFSCI value is “false”, but that’s different from the number of bits of functional complexity it contains.
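
    For what it’s worth, here is how those two quoted definitions look when written out literally (a Python sketch; I am assuming, as the 150-bit threshold implies, that functional complexity is expressed in bits as minus the log2 of the quoted ratio):

      import math

      def functional_complexity_bits(target_space, search_space):
          # "The ratio of the number of digital strings that encode a particular
          # function to the number of digital strings of the same length",
          # expressed in bits.
          return -math.log2(target_space / search_space)

      def dfsci(fc_bits, deterministic_explanation_known):
          # "A boolean indicator of the existence of functional complexity of more
          # than 150 bits for which no deterministic explanation is known."
          return fc_bits > 150 and not deterministic_explanation_known

      fc = functional_complexity_bits(target_space=2**300, search_space=2**500)  # 200 bits
      print(dfsci(fc, deterministic_explanation_known=False))  # True
      print(dfsci(fc, deterministic_explanation_known=True))   # False; same string, same 200 bits

    The first function depends only on the string; the second flips with our state of knowledge. That is the distinction being drawn above.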

  49. gpuccio,

    I forgot your last question:

    Assuming for the sake of this example that the string does have the minimal amount of functional complexity required, do either or both have dFSCI by your definition?

    The string is the same. Its dFSCI is the same. If I am not aware of the GA, I will compute it in the direct way. If I become aware of the GA, I will refine my judgement by computing the dFSCI for the GA, and then correcting my measurement only if that FC is lower than the direct FC of the string. Anyway, the string always has the same FC. My judgement can be different depending on my awareness of the GA.

    I think I understand what you are saying, but could you please answer the questions directly? Assuming for the sake of this example that the string does have the minimal amount of functional complexity required, does the string generated by the human have a dFSCI value of “true”? Does the string generated by the GA have a dFSCI value of “false”?

  50. It’s always struck me as just a tiny bit circular to argue that dFSCI exists only if we can’t cite a “natural” cause, and there is no possible natural cause because there’s just so doggone much dFSCI.

    That seems to lie at the heart of all ID arguments, including Behe’s.

    How does water find the contour of the pond bottom, anyway? Perhaps the pond bottom was designed so that water could find it.

  51. gpuccio: “So, observing 500 bits, or even less, of dFSCI in a biological system “does the job” perfectly, because there is no known artificial GA working in that system, only the biochemical laws, that have no power to generate those results.”

    There is no “artificial” GA because biology is the “real” GA.

    This is precisely what the debate is about, the “real” GA.

    You have just asserted that “real” GA doesn’t work because it can’t.

    That’s not very scientific.

    Evolution is like a simple electronic oscillator.

    Take random resistors and capacitors and insert them in the feedback loop of the oscillator.

    Some will work and some won’t.

    If you measure the frequencies of those that actually manage to work, you will find frequencies all over the map, none “specified”, but all “functional”.
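
    A toy simulation makes the oscillator point explicit (Python; the component ranges, the pass/fail band and the idealised frequency formula f = 1/(2*pi*R*C) are all assumptions of mine, chosen purely for illustration):

      import math, random

      frequencies = []
      for _ in range(1000):
          R = 10 ** random.uniform(3, 6)     # resistance in ohms, picked at random
          C = 10 ** random.uniform(-9, -6)   # capacitance in farads, picked at random
          tau = R * C
          if not (1e-5 < tau < 1e-3):        # pretend the circuit only oscillates for this band of time constants
              continue
          frequencies.append(1.0 / (2 * math.pi * tau))

      # Roughly half the random combinations "work", and those that do oscillate
      # at frequencies scattered across two decades: none specified in advance,
      # all of them functional.
      print(len(frequencies), round(min(frequencies)), round(max(frequencies)))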

     

  52. gpuccio September 25, 2012 at 10:00 am

    I said (and you quote it): “I just test known possible deterministic explanations.”.

    So, I do test. I test known deterministic explanations. Obviously I do not test deterministic mechanisms of which I know nothing. I have tested NS, for example.

    So, that is the good ole argument from ignorance. Intelligent design by default. Why are people here interested in discussing this? 

  53. So, that is the good ole argument from ignorance. Intelligent design by default. Why are people here interested in discussing this?

    It’s rather that we are trying not to prejudge the possibility that there may be an argument for intelligent design. Just because we see only default arguments, we cannot preclude the idea that there is an argument for “intelligent design” that does not follow this pattern.

  54. Gpuccio at UD

    To Alan Fox (at TSZ):

    Because a new strain of E. coli arose by variation that was able to digest citrate. That strain bloomed in the niche provided by Lenski’s flasks. How on Earth can the novel ability to digest citrate not be a new biochemical function?

    From Carl Zimmer’s article, which sums up Lenski’s results:

    “When E. coli finds itself in the absence of oxygen, it switches on a gene called citT. Like other species (including us), E. coli turns genes on and off by attaching proteins to short stretches of DNA nearby. When E. coli senses a lack of oxygen, proteins clamp onto one of these genetic switches near citT. Once they turn the gene on, it produces proteins that get delivered to the surface of the cell. There they poke one end out into the environment and pull in citrate, while also pumping out succinate. After the citrate gets inside the microbe, the bacteria can chop it up to harvest its energy.”

    So, as Joe already pointed out, the citT gene, and the metabolism to digest citrate, were already there, and are not a consequence of Lenski’s experiment. Only the switching on of the citT gene is the main factor here. What we are seeing is not very different in principle from what happens in the most classical example of gene regulation, the lac operon. So, as I said, there is no new biochemical function, only “a tweaking of the existing regulation of an existing function”. QED.

    I recommend The Loom article to anyone interested in current state-of-play on the Lenski experiment. Note, in the comments, Zachary Blount says:

    Aerobic citrate utilization is a novel trait for E. coli, and it is one that evolved spontaneously in the Ara-3 population that I study. (By this I mean that it did not come from the acquisition of foreign DNA like a plasmid carrying a citrate transporter into the cell line from outside.) Moreover, the actualization stage that produced the qualitative switch to Cit+ did not involve any loss of gene function! As Carl explained so well, actualization involved a duplication mutation that produced two copies of a segment of DNA that is 2933 bp long. These two copies are in a tandem, head-to-tail orientation, and placed a copy of the citT gene under the control of a copy of the promoter element that normally controls when the rnk gene is turned on – this is what we call the “rnk-citT module”. (I know, I know, incredibly catchy!) As rnk is turned on when oxygen is present, the copy of its promoter in the new rnk-citT module likewise turns on citT when oxygen is present. Voila!…

     

     …Is the material that went into the new element fundamentally new? No. But to deny that the new module is new is like saying that a new word is nothing new merely because it was formulated from pre-existing letters. Now something that I found to be really interesting about this mode of innovation is that it wouldn’t eliminate any pre-existing function. The duplication generates the new rnk-citT module, but there ends up remaining in the genome complete copies of rnk under the control of the rnk promoter, as well as of the cit operon as it existed before the duplication took place. The duplication thus added a new regulatory element while not actually disrupting any pre-existing regulatory modules.

    As I remarked before, the environment designs by selecting from what is available, so it is no surprise that this novel trait is but a short step from, and builds on, material that is already there. But – for heaven’s sake! – anaerobic to aerobic is just tweaking!

  55. Why should we have to defend the “size” of the function created? The simple fact is that the Lenski experiment confirms many of the mechanisms that were heretofore conjectured.

    It started with a series of drift mutations that turned out to be necessary, but which had no observable effect on function or adaptation. This isn’t much commented on, but it destroys any notion that an intelligent selector could have spotted them as incipient adaptations and selected for them. They are poster children for neutral drift theory.

    It validates the theory that new function is often accomplished by gene duplication followed by variation in the duplicate. This has now been observed in real time under controlled conditions.

    By the time the experiment was “concluded”, we had exceeded Behe’s Edge in the number of required mutations, and all this happened in a few flasks in a laboratory in about a decade.

    It completely validates the ability of random mutation to exhaustively explore the nearby functional landscape in a finite amount of time. And it validates the assertion that useful new adaptations are often within reach of such a “braille” search.

    It totally destroys the ID assertion that new function comes at the expense of losing old function. I hope we never hear that argument again. One consequence of this observation is that there is no barrier to continuing this accretion of function. New function does not have to come at the loss of the old. And there was no evidence of “genetic entropy,” so we can expect to stop hearing about that.
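
    A rough back-of-envelope figure illustrates the “braille” search point above. The numbers below are my own assumed round values of plausible magnitude for a Lenski-style flask population, not figures taken from the papers:

      # Assumed round numbers, for illustration only:
      population    = 3e7      # cells contributing to each generation in one flask
      per_base_rate = 1e-10    # point-mutation rate per base per generation
      generations   = 50000    # generations elapsed in the experiment

      # Expected number of independent times one *specific* single-base change
      # (one of the three possible substitutions at a given site) arises:
      hits = population * (per_base_rate / 3) * generations
      print(hits)   # about 50 on these assumptions

    On assumptions of that order, every possible point mutation in the genome is expected to have arisen dozens of times over the course of the experiment, which is why the nearby sequence neighbourhood gets sampled essentially exhaustively.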

  56. Gpuccio:

    I have redefined nothing. My definition of dFSCI has been the same for, I believe, years, at UD. You can check in the archives, if you want.

    I hardly need to go to the archives. Your usage of dFSCI isn’t even consistent within the current thread at UD.

    First you tell us that dFSCI is a numerical quantity measured in bits:

    a) if the GA is more functionally complex than the output, we can simply consider the dFSCI of the output, 400 bits, and safely infer design for it.

    Then you tell us that dFSCI is a boolean:

    What I measure is FSI, and then I categorize it as a boolean, dFSCI.

    Then you tell us that dFSCI is not a boolean:

    A value of dFSCI is neither true nor false.

    Here (and many times previously) you tell us that dFSCI is a reflection of pure RV:

    a) The concept of dFSCI applies only to the RV part.

    Then you tell us that no, dFSCI also depends on any GA capable of producing the “output”:

    So, the GA is a deterministic way to produce the output (even if it uses some random steps). That does not mean that there is no dFSCI in the output. It just means that the true dFSCI of the output is the functional complexity of the GA.

    How do you expect others to follow your argument if you don’t use your terms consistently? Onlooker is doing you a considerable service by holding you to your earlier statements and asking you to state your argument consistently. You should be grateful, not petulant.

  57. Alan Fox said: It’s rather that we are trying not to prejudge the possibility that there may be an argument for intelligent design. Just because we see only default arguments, we cannot preclude the idea that there is an argument for “intelligent design” that does not follow this pattern.

    While that certainly can be of some interest – after all, we do need to know what ID/creationists are currently thinking – default arguments that are derived from abject ignorance about the natural world are probably better addressed in a way that actually compares that ignorance with reality.

    For example, that KF character over at UD – as well as the others over there – still can’t figure out how to scale up the ratio of the electrical potential energy to gravitational potential energy to objects the size of dice, coins, and alphabets and having separations on the order of centimeters. He therefore dismisses the calculation as an “atmosphere-poisoning” smear tactic.

    All this mucking around with logarithms of probabilities goes nowhere if one doesn’t understand the interactions among the things for which one is trying to calculate the probabilities of assembly. Even a high school level calculation provides a big hint that something is wrong with ID/creationist assertions; and that kind of hint also provides some knowledge about the real world.

    Within the science community, the calculations for atom and molecular assemblies are already being done on supercomputers using what is already known about the way atoms and molecules interact. And THOSE calculations are NOT the simplistic logarithms of probabilities about non-interacting objects with the label “information” attached to the logarithms in order to give the impression that such naive calculations are “sophisticated” calculations that automatically default to a “designer.”

    ID/creationists start with all the characteristic misconceptions they inherited from “scientific” creationism and then just make up things as they go while ignoring all of physics, chemistry, and biology.

    After something like 40+ years of observing this stuff, I stand by my assertion that the leaders and the followers of ID/creationism do not understand science beyond the eighth grade level if even that. What they possess instead are a bunch of misconceptions that consist of bent and broken science concepts designed to support a narrow set of sectarian beliefs. They got these from Morris and Gish who, in turn, may have learned them from A.E. Wilder-Smith. It is also obvious that people like Ken Ham, a protégé of Henry Morris, direct these misconceptions at children and middle school students; disrupting their science education at or before the eighth grade level.

    But these misconceptions don’t work in the real world. Every “alternative” they try to construct is therefore a straw man argument addressing their own misconceptions; and if they are exposed to real science, they think it is a straw man. That is abject ignorance at its worst.

    Not one of those characters over at UD – not one – can do a calculation that a high school physics or chemistry student can do very easily. Even when someone does it for them, they can’t comprehend its implications because they don’t even know what phenomena actually occur in the real world. So they simply assert “smear tactics” and airily dismiss the hint that such a calculation provides; a simple but serious hint that their own “calculations” are meaningless.

  58. I have always said that, when tested empirically, dFSCI can be shown to have 100% specificity, no false positives.

    Can you understand that? Empirically, not logically.

    So, show me in reality that example, the string and the algorithm. That will be my first empirical false positive.

    So biologists are required to provide the pathetic level of detail, and not just the system capable of generating it? I think you have the burden of proof backwards, GP. Once we have demonstrated a dynamic system capable of accumulating functional changes, the burden is yours to demonstrate the existence of a comparable competing system.

    Astronomers are under no obligation to wait for a full orbit of Pluto to declare that it does orbit. There is no branch of science in which regular processes are not extrapolated.

    I do believe this particular argument qualifies as “ID is the default scenario if every evolutionary gap is not filled.” How else could you possibly ignore the lab evidence that well established evolutionary mechanisms have actually been observed inserting information into the genome?

    This is a dirt simple argument. Once you have demonstrated a regular mechanism capable of inserting information into the genome in a time span consistent with the difference observed between related species, this is the default mechanism until such time as a competing mechanism is demonstrated.

  59. gpuccio: “What can I say to Toronto? Better say nothing. You know the old saying, “if you have nothing nice…” “

    That’s too bad because I do have a begrudging respect for the way you handle yourself.

    That doesn’t mean I agree with your arguments in any way.

    One question that has always bothered me and I think Petrushka, is how does the designer know in advance what semiotic codes to use for yet uncreated biological functionality?

     

     

  60. That bothers me. But I am also left wondering what purpose the designer serves. Evolution always uses minor variations of existing sequences, and Lenski has demonstrated that the entire nearby landscape of sequences can be tried in a short time.

  61. Joe: “How does a programmer know in advance what code to use for a yet uncreated program? “

    Human programmers get the “specified functionality” before we start the design, while your designer doesn’t have a specification before the fact that you can show evidence for.

    There are many possible “codes” that lead to specific functionality, meaning our possible targets aren’t limited to one particular “string of information”.

     

    gpuccio has responded (in comment #596 here) at some length. Basically, gpuccio’s argument is that dFSCI tests whether random variation can produce an adaptation — that it is only computed when it has been ruled out that natural selection can produce such an adaptation:

    “the judgement about dFSCI implies a careful evaluation of available deterministic explanations, as said many times.”

    Let me make three points:

    1. The notion of dFSCI is not the same as Dembski’s CSI. Dembski’s argument involves a scale (the relevant one is fitness, one of the possibilities he mentions), and CSI exists when the organism is so far out on the scale that it is in the top 10^-150 of the original mutational distribution (say one of equal frequencies of the two alleles at all loci). Dembski then invoked his Law of Conservation of Complex Specified Information (LCCSI) as showing that natural selection (or any other natural process) cannot have moved the system from not showing CSI to showing it. He does not say that one must first have ruled out all possible explanations by random variation and differences of fitness, and only then compute the amount of SI. In fact he argues that finding that the genotype is that far out on the fitness scale accomplishes that without considering all those explanations. For example, in The Design Revolution he says this repeatedly in chapter 12. On page 96, for example, he says that

    “If something genuinely exhibits specified complexity, then one can’t explain it in terms of all material mechanisms (not only those that are known but all of them, thanks to the universal probability bound of 1 in 10^150; see chapter ten).”

    So it is very clear that Dembski intends the observation of CSI to establish that there has been Design even when we do not know all natural processes that could have been at work. He bases his argument for that on his conservation law. A law that, as I have argued, has been shown neither to be proven nor to do the job.

    2. gpuccio is, by contrast, using dFSCI only after ruling out natural selection (unlike Dembski). gpuccio’s argument thus ends up as equivalent to Michael Behe’s in Darwin’s Black Box: the mutations needed to allow natural selection to begin to operate are too rare, and would not have occurred even once in the history of the Universe. The 500-bit (or 150-bit) criterion is just a way of saying that they are too rare for that, and all the information calculations are only for that purpose.

    3. A technical point: gpuccio describes natural selection (the effect of differences of fitness) as a deterministic process. When I teach theoretical population genetics, I describe it as such and model it deterministically. But that is only possible if one assumes that the population is infinitely large. A purely deterministic model of multiple-site selection requires that all possible genotypes exist initially, albeit at very low frequencies. Of course this cannot happen: for all combinations of a 200-base DNA sequence to exist in a population would require that we have at least 4^200 individuals, more than there are particles in the Universe. So to model multiple-site selection reasonably, one needs a stochastic process model; one must treat mutation as introducing new bases randomly. Such a process can explore nearby genotypes by mutation, and also produce more of them by combining mutations by recombination. If gpuccio wants to model the effect of fitness differences purely deterministically, gpuccio is implicitly assuming that all of the relevant possible sequences are already there.

    Thus what we have in gpuccio’s argument is a mathematically-more-explicit calculation of rareness of the required mutations in a Behe-style argument, and not a version of Dembski’s entirely different argument. 
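
    Point 3 is easy to see in a few lines of Python. This is a generic textbook sketch of one-locus haploid selection, not a model anyone in the thread has proposed: the deterministic recursion is exact only for an effectively infinite population, while the finite-population (Wright-Fisher) version resamples each generation and therefore drifts.

      import random

      def deterministic_step(p, s):
          # Infinite-population recursion: allele A has fitness 1+s, allele a has fitness 1.
          return p * (1 + s) / (p * (1 + s) + (1 - p))

      def wright_fisher_step(p, s, N):
          # Finite population of size N: the next generation is a binomial sample
          # around the deterministic expectation.
          expected = deterministic_step(p, s)
          return sum(random.random() < expected for _ in range(N)) / N

      p_det = p_sto = 0.01
      for _ in range(200):
          p_det = deterministic_step(p_det, s=0.05)
          p_sto = wright_fisher_step(p_sto, s=0.05, N=1000)

      # The deterministic trajectory is identical every time; the stochastic one
      # differs from run to run, and the favoured allele can even be lost early on.
      print(p_det, p_sto)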

  63. When I teach theoretical population genetics, I describe it as such and model it deterministically. But that is only possible if one assumes that the population is infinitely large.

    … and that scaling a population has no impact on the ‘random assortment’ assumption underlying many mathematical treatments. Essentially, the population is treated almost like an ‘ideal gas’, with no dimension or locality for the entities of which it is composed. Even infinite populations on a geometric surface would suffer local stochastic fluctuation. Further sources of stochasticity arise when we consider inevitable variation in selective advantage with time and location. It isn’t only sampling error that creeps in when real populations are considered and we depart from ‘large numbers’ expectations. ‘Real’ populations probe accessible ‘function’ from a position of current ‘function’, which is a different kind of algorithm.

    GP:

    So, darwinists, including you, have the duty to model some real life example of, say basic protein domain, and explain how it emerged at some time in natural history, and what RV did, and what NS did. And they can’t. For the simple reason that RV and NS cannot do it.

    No – for the even simpler reason that NS/Drift is by its very nature an eliminatory process! Unless steps were ‘frozen’ (a la Lenski) by speciation and subsequent lineage survival, and the signal in them has not become overly scrambled, we are stuck with the fact that the very essence of the evolutionary process is erasure of history! Selective advantage is only with respect to current alleles in the population. Once fixation has occurred, the ‘true’ advantage cannot be recovered. That unavoidable obscuration does not justify the assertion “RV and NS cannot do it.”

  64. GP’s position is even more ridiculous than the no intermediate fossil argument. GP requires living intermediates.

  65. gpuccio: “It’s the things you say that leave me amazed. Maybe cognitive incompatibility? “

    It’s the things that IDists can’t say that amaze me!

    I see our side using our “cognitive” abilities searching for any and all truths, but to a man, every single IDist refuses to test the most profound mystery.

    Why?

    Why can you not critically examine the very mechanism you claim is responsible for life as we see it, to see whether design holds up as a theory?

    You will test “Darwinism”, but you won’t scientifically test the designer to see if he’s capable of what you claim.

     

  66. Mung: “And some programmers go straight to programming and let the design evolve as needed. “

    This is the problem gpuccio faces when people like Mung help the anti-ID side by giving examples of “design via unspecified evolution”.

    A programmer, without pre-specified direction, can achieve acceptable functionality, simply by feedback from his testing.

    Thanks Mung.

  67. gpuccio,

    So, in clear English, dFSCI is a property objectively observable in objects. Now, listen with attention, please:

    The connection between dFSCI and designed objects is purely empirical. We observe that all objects that exhibit dFSCI, and of which we can know the origin, are designed objects. No object of which we can know the origin, and which exhibits dFSCI, has a non-designed origin.

    That’s because you define dFSCI such that it does not exist if a deterministic cause is known:

    No. It’s because dFSCI is present in designed objects and in nothing else. Empirical fact.

    There is nothing empirical about it. Here, again, are your definitions:

    Functional Complexity: The ratio of the number of digital strings that encode a particular function to the number of digital strings of the same length.

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    It is not possible by your definitions for dFSCI to exist if a deterministic cause is known. You explicitly define it that way.

    You do no testing. You assert that the value of dFSCI is “true” when you don’t know of a deterministic mechanism. This is inherent in your definition:

    Have you problems with the English language? I said (and you quote it): “I just test known possible deterministic explanations.”. So, I do test. I test known deterministic explanations. Obviously I do not test deterministic mechanisms of which I know nothing. I have tested NS, for example.

    You have tested natural selection and proven that it could not possibly produce functional complexity greater than the threshold you require for dFSCI? That’s quite an accomplishment. Please provide a reference to your work.

    All you are identifying is your ignorance about an artifact’s provenance.

    Is that some form of catechism?

    It’s a direct consequence of your definitions. Consider an artifact described by a digital string with functional complexity of greater than 150 bits. To determine if this artifact exhibits dFSCI we need to apply the two criteria of your definition:

    1) Is the functional complexity greater than 150 bits?

    2) Is a deterministic explanation for the artifact known?

    Look at that second criterion carefully. All that is required for a determination that dFSCI exists is an assessment of our knowledge of an explanation. There isn’t any positive evidence for design, merely a lack of knowledge of a deterministic mechanism. By your own definitions, dFSCI just means “I don’t know the provenance of this artifact.”

     

  68. gpuccio,

    A value of dFSCI is neither true nor false.

    Yes, it is, by your definition:

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    Are you changing your definition mid-conversation?

    I think I understand what you are saying, but could you please answer the questions directly? Assuming for the sake of this example that the string does have the minimal amount of functional complexity required, does the string generated by the human have a dFSCI value of “true”? Does the string generated by the GA have a dFSCI value of “false”?.

    No. Assuming for the sake of example something that cannot be, let’s say that the string in itself has 180 bits of FC, and the algorithm that generates it only 100 bits.

    The situation is simple: how the string was generated is of no importance to the calculation.

    a) If I don’t know that the algorithm exists, I will evaluate the dFSCI at 180 bits, and make a design inference.

    b) If I know that the algorithm exists, I will evaluate the dFSCI at 100 bits, and make no design inference.

    I’m still not understanding your criteria for determining whether or not dFSCI is present. Let me restate my questions.

    First, assuming for the sake of this discussion that a string encoding a solution to Lizzie’s problem as a series of 1s and 0s has functional complexity of more than 150 bits, consider the case where a human being sits down, thinks about the problem for a few moments, uses his knowledge of math and his creativity, and generates the solution string. Does this string exhibit dFSCI? If not, why not?

    Second, assuming for the sake of this discussion that a string encoding a solution to Lizzie’s problem as a series of 1s and 0s has functional complexity of more than 150 bits, consider the case where a GA is configured with a fitness function that ranks strings according to the “product of head run lengths” criterion and is allowed to run until a solution is found. Does this string exhibit dFSCI? Because you have stated that you consider the GA model of evolutionary mechanisms to be deterministic, my understanding is that the answer here is “No.”

     

  69. I don’t agree. We find a lot of information in genomes and proteomes about what happened in distant times. It seems strange that just the thousands of functional, expanded intermediates for protein domains did not happen to leave any trace! That is just an easy ad hoc excuse for a theory that has no scientific validity.

    The entire structure of GP’s argument rests on a god of the gaps foundation. With a side order of incredulity.

    I thought that the point of the UD thread is that ID is not the default position. But what GP is doing is arguing that since living intermediates of protein domains are missing, and there are no fossil domains, they never existed. This is the history of science vs creationism in a nutshell. This line of debate has characterized every branch of science from astronomy to geology. The fact that it is taking place in biology simply reflects the relative newness of genomics as a science.

    And of course there is interest in reconstructing the evolutionary history of proteins. It’s hard, so GP isn’t interested. Heaven forbid that an ID advocate take an interest in actually doing research.

    http://pages.uoregon.edu/joet/PDF/dean_thornton-NRG2007.pdf

    The most devastating critique of ID is not that it is wrong, but that it is useless for formulating research proposals. When it does research, it is carefully targeted to fail. As in the work of Axe.

    It is remarkably easy to design research to give negative results. Somewhat harder to do what Thornton is doing.

    I find it amusing that GP and other IDists find the actions of a non-material, non-existent entity more compelling than the extrapolation of a demonstrated mechanism. But just because they ignore proven mechanisms in favor of a completely imaginary non-mechanism doesn’t mean that ID is a default position where biology has gaps in its account of history.

  70. onlooker writes:

    gpuccio,

    A value of dFSCI is neither true nor false.

    Yes, it is, by your definition:

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    Are you changing your definition mid-conversation?

    I brought this to gpuccio’s attention here, along with several other contradictory statements he’s made about dFSCI. His response? By pointing out his contradictions, I wasn’t reading him “intelligently” and “in good faith”.

    Evidently, if gpuccio contradicts himself, that’s fine. It’s your responsibility as his reader to fix all of his mistakes for him. And to assert that he contradicts himself just because he, well, contradicts himself? That’s not reading “in good faith”.

    Migration between folds is much more probable than previously believed, and certain folds may even be intersections between multiple folds. Many of the folds that scientists have been able to “switch”, I suspect, are connected to yet other folds via short mutational paths. This is an active area of research these days.

    http://www.ncbi.nlm.nih.gov/pubmed/20591649

    Evolution is often underestimated.

  72. This is exactly what happened when I discussed this with gpuccio, in my Mathgrrl persona, on Mark Frank’s blog. That discussion spanned four separate threads during which gpuccio kept redefining his terms while refusing to admit that he was doing so. I finally gave up in frustration.

    While he might be one of the more polite UD regulars, his lack of intellectual honesty is as bad as any of the others.

  73. Joe: “Umm it can be measured in bits and still be Boolean. How do you think we know if it is true/ present or not? By counting the number of bits. “

    But then;

    Joe: “The VALUE, as in the number of bits, ie it is what it is. A car is neither true nor false. “

    So Joe, why must a value exceeding 1 be considered boolean in one case but not another?

    Car = 2000;

    dFSCI = 3;

    if( Car || dFSCI ) { f(x);} else { f(y);}

     

     

     

  74. It seems strange that just the thousands of functional, expanded intermediates for protein domains did not happen to leave any trace! 

    I struggle to see why that would be remotely strange. If a series of amendments have successive selective advantages of sufficient magnitude to fix them, what on earth is going to happen to all the ‘intermediates’? Of course, there are techniques for probing in the darkness – even behind LUCA, since the genome itself can be analysed phylogenetically, thanks to gene duplication events. But there are limits, inevitably – this is not simply some made-up excuse, but a fact of nature.

    The issue boils down to coalescence. The further back you go, the fewer individuals from that time leave descendants, in a finite world. Therefore, greater and greater proportions of surviving genomes derive from fewer and fewer ancestors as you go back. You end up sampling individuals – not even populations. So the loss of the history of the other individuals is unsurprising and predictable. You are looking askance at the absence of a mutational history of individuals, and the selective history of particular populations, a few hundred million years on? It’s pretty remarkable we have anything to go on. New domains will arise in individuals, and typically either end up in all descendants, or be lost without trace. The losers and ancestral states are not generally recorded, anywhere.

    That is just an easy ad hoc excuse for a theory that has no scientific validity.

    Is it heck! It is an incontrovertible fact. Poo-pooing this fact, which can even be given a mathematical proof – the same basic iterative sampling that drives neutral drift, the evolutionary ‘baseline’ – as “an easy ad hoc excuse for a theory that has no scientific validity” is just posturing. It is akin to dismissing linguistics if it fails to establish the order in which the eye, the eagle-headed thingy and the person in the classic “walk like an Egyptian” pose became established in hieroglyphics, or what they replaced, or how they were pronounced. Must they therefore have been devised in the Tower of Babel?

    We can only work with the material we have, plus models. It would be great if we had more, but the limitations and absences do not force ad hoc attribution to spirit causes. 
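
    The coalescence point above can be demonstrated with a toy Wright-Fisher genealogy (Python; the population size and number of generations are arbitrary choices of mine):

      import random

      N = 1000                       # constant population size (arbitrary)
      founder = list(range(N))       # founder[i] = which generation-0 individual i descends from

      for generation in range(5000):
          # Each individual in the next generation picks a parent at random
          # and inherits that parent's founder label.
          founder = [founder[random.randrange(N)] for _ in range(N)]

      # Typically a single founder (occasionally a handful) still has descendants;
      # every other founder's history has been erased.
      print(len(set(founder)))

    Scaled up, that is exactly why the losers and the ancestral states “are not generally recorded, anywhere”.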

  75. Joe: “The CONTEXT is all important. Ya see in order to know if you have dFSCI you need to get a value, ie the number. “

    I agree and wish gpuccio would stick to one use of dFSCI.

    What it really implies is :

    if( dFSCI > UPB ){}

    But gpuccio doesn’t always use the term in that sense and it makes understanding difficult.

    If dFSCI was *always* a value that must be tested against a known boundary condition, we could start talking about how to calculate it, but by saying something *has* dFSCI in the next sentence, you’ve implicitly converted dFSCI from a value to a boolean.

    Using one term in two different ways is not helpful when promoting an idea.

    gpuccio’s own statements don’t clarify his use or why he insists on one term to represent two different concepts.

     

  76. gpuccio,

    To onlooker (at TSZ):

    Already answered: post #488 and post #629.

    I hadn’t seen your 629, but after reading it I do not find that you have addressed the issues I’ve raised in either of those comments.  Your dFSCI is still clearly an indicator of our knowledge (or, conversely, ignorance) of the provenance of an artifact, not of the involvement of an intelligent agent, by your own definitions.

    You have also still failed to directly answer my question about whether or not dFSCI is present in a human generated solution to Lizzie’s head-tail sequence problem.  Here it is again, pared down to its essence:

    Assume that a string encoding a solution to Lizzie’s problem has functional complexity of more than 150 bits.  If a human uses his knowledge of math and his creativity to generate the solution string, does this string exhibit dFSCI?  If not, why not?

    Thank you in advance for a clear and unambiguous response.

    I believe that gpuccio’s most recent reply to me (at the UD thread it is comment #656) agreed that his argument is not directly related to William Dembski’s argument; it is a Michael-Behe-style argument in spite of the name dFSCI.

    Furthermore, in computing gpuccio’s dFSCI you have to first rule out that it could have gotten into the genome by any natural cause, including differences of fitness. So you are supposed to rule out that Elizabeth’s string was produced by a human before you call it dFSCI. That answers your question: it does not exhibit dFSCI. It also makes the dFSCI argument uninteresting, as natural (or human) causation is already ruled out.

    In Dembski’s argument the amount of SI is computed without yet having eliminated all natural causes. But that is not what gpuccio is doing.  

  78. Joe,

    Furthermore, in computing gpuccio’s dFSCI you have to first rule out that it could have gotten into the genome by any natural cause, including differences of fitness. So you are supposed to rule out that Elizabeth’s string was produced by a human before you call it dFSCI. That answers your question: it does not exhibit dFSCI. It also makes the dFSCI argument uninteresting, as natural (or human) causation is already ruled out.

    To gpuccio, human intelligence is not a ‘natural cause’:

    And science has found no way to explain the emergence of dFSCI in a “naturalistic” (that is, consciousness-independent) way.

    And he clearly thinks that humans can produce dFSCI:

    …from human design and from the properties of human artifacts, including language and software, wonderful examples of extremely abundant dFSCI.

  79. I stand by my analysis. To call it dFSCI you have to rule out “natural” causes, and you can rule out natural causes because there’s just so doggone much dFSCI. He makes this explicit in his discussion of the origin of protein domains. He takes the absence of living cousins as proof of poof. No living cousins means none ever existed.

  80. To gpuccio, human intelligence is not a ‘natural cause’:

    Well yes. According to GP, any five year old is capable of seeing that the brain is not the seat of human consciousness.

    I’m not sure where the consciousness of crows resides.

    But GP’s commitment to magical thinking is at least self-consistent and seamless and thoroughly dualistic. Once you understand that he is a metaphysical dualist who believes in a non-physical world that interacts continuously with the physical, his views are understandable. Most of his compatriots at UD are also dualists.

  81. Joe: “Umm it can be measured in bits and still be Boolean. How do you think we know if it is true/ present or not? By counting the number of bits. “

    It’s dFSCI only if it’s not the result of natural causes, and we know it’s not the result of natural causes because there are so many bits. Any five year old can see that.

  82. Yes, I was kind-of aware that most UD types don’t think humans are natural, so human causation counts as non-natural in their arguments.

    That makes it hard to study models of natural systems, since the models are necessarily constructed by the researcher and that means that the models are declared by the ID types to be irrelevant because they are not natural processes. Even if the models consist of nonintelligent processes such as random mutation and Brownian motion.

    This insistence on using coin flips to compute whatever this “dFSI” is supposed to be makes no sense if one is presuming to extend any of this to atomic and molecular assemblies in the real world. If dFSI is supposed to be the logarithm of the ratio of the “target space” to the “sample space,” how does one determine what a “target space” and a “sample space” is for real, atomic and molecular systems?

    For example: what are the sample spaces and the target spaces for the formation of H2O and H2O2 when equal numbers of hydrogen atoms and oxygen atoms are brought together? How does any of this account for temperature and concentration?

    In order to take a logarithm of the ratios of those “spaces,” one has to know implicitly the probabilities of formation of those two compounds at various temperatures and concentrations. Where does one get those probabilities? One cannot extrapolate from the sample spaces and target spaces of non-interacting objects such as coins or dice where all one has to do is count the sizes of these spaces using permutations and combinations.
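
    To be explicit about what that coin-and-dice bookkeeping amounts to, here is the counting exercise written out (Python; the 500-flip example and the “at least 400 heads” target are mine, purely for illustration):

      import math

      n = 500                                   # coin flips
      sample_space = 2 ** n                     # all equally likely sequences of flips
      target_space = sum(math.comb(n, k) for k in range(400, n + 1))  # sequences with at least 400 heads

      bits = math.log2(sample_space / target_space)
      print(bits)   # about 143 bits of "information" by this style of accounting

    That count is possible only because coin flips do not interact and every sequence is equally likely. There is no analogous enumeration for hydrogen and oxygen atoms, whose “spaces” are set by the physics and chemistry of their interactions, which is the point being made here.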

    Ignoring the warnings of simple high school physics and chemistry calculations is not going to make these kinds of calculations meaningful in any kind of atomic/molecular system, even when such systems are as simple as water/hydrogen peroxide systems. How can such calculations have any meaning whatsoever when applied to more complicated systems such as large organic molecules immersed in larger systems at a constant temperature? How can they have any meaning when one is dealing with systems forming in energy cascades under non-equilibrium conditions?

    There is a huge difference between phenomenological calculations using empirical data from experiments and ab initio calculations using accurate, detailed knowledge of the physical processes actually involved in the interactions of atoms and molecules.

    The reason ab initio calculations are difficult is because of the rapidly emerging phenomena that occur when systems start becoming complex as a result of the interactions of their constituents. But ab initio calculations are extremely important for checking the detailed consequences of physical models; and accurate predictions help confirm the details of our understanding.

    Naive calculations using logarithms of the ratios of target spaces to sample spaces tell us nothing significant when we ignore the physics and chemistry. Simply calling these logarithms “information” of some sort is deceptive. It gives the illusion of having knowledge one does not have; and this illusory knowledge is the essence of the “theories” of people like Dembski and Abel.

    Those airy dismissals of the dramatic warnings from simple high school physics and chemistry calculations simply compound the ignorance being showcased by these attempts to replace science with ID pseudo-science. And those simple calculations don’t even include the further complications added by the redistribution of charge and quantum mechanical rules when atoms and molecules interact with each other.

  84. Joe Felsenstein said: That makes it hard to study models of natural systems, since the models are necessarily constructed by the researcher and that means that the models are declared by the ID types to be irrelevant because they are not natural processes. Even if the models consist of nonintelligent processes such as random mutation and Brownian motion.

    It is now beginning to appear that they also think all of chemistry and physics are irrelevant as well.

    Apparently they are asserting that only models constructed by ID/creationists are relevant. That would certainly explain why they stopped learning science before completing middle school. Even Ken Ham knows to get the kids when they are young.

  85. Joe Felsenstein and gpuccio,

    Furthermore, in computing gpuccio’s dFSCI you have to first rule out that it could have gotten into the genome by any natural cause, including differences of fitness. So you are supposed to rule out that Elizabeth’s string was produced by a human before you call it dFSCI. That answers your question: it does not exhibit dFSCI.

    Interesting, I read it exactly the opposite way, based on this from the material quoted by Alan:

    8b) What if the operator inputted the string directly?

    A: Then the string is designed by definition (a conscious intelligent being produced it). If we inferred design, our inference is a true positive. If we did not infer design, our inference is a false negative.

    gpuccio, could you please directly answer the question I posed? Here it is again:

    Assume that a string encoding a solution to Lizzie’s problem has functional complexity of more than 150 bits. If a human uses his knowledge of math and his creativity to generate the solution string, does this string exhibit dFSCI? If not, why not?
