An Invitation to G Puccio

gpuccio addressed a comment to me at Uncommon Descent. Onlooker, a commenter now unable to post there,

(Added in edit 27/09/2012 – just to clarify, onlooker was banned from threads hosted by “kairosfocus” and can still post at Uncommon Descent in threads not authored by “kairosfocus”)

has expressed an interest in continuing a dialogue with gpuccio, and petrushka comments:

By all means let’s have a gpuccio thread.

There are things I’d like to know about his position.

He claims that a non-material designer could insert changes into coding sequences. I’d like to know how that works. How does an entity having no matter or energy interact with matter and energy? Sounds to me like he is saying that A can sometimes equal not A.

He claims that variation is non-stochastic and that adaptations are the result of algorithmic directed mutations. Is that in addition to intervention by non-material designers? How does that work?

What is the evidence that non-stochastic variation exists or that it is even necessary, given the Lenski experiment? Could he cite some evidence from the Lenski experiment that suggests directed mutations? Could he explain why gpuccio sees this and Lenski doesn’t?

It’s been a long time since gpuccio abandoned the discussion at the Mark Frank blog. I’d like to see that continued.

So I copy gpuccio’s comment here and add a few remarks hoping it may stimulate some interesting dialogue.

To Alan Fox: I am afraid you miss the points I made, and misrepresent other points.

I confess to many faults, among them reading too fast and typing too slowly. I also don’t have a good memory and don’t recall recently addressing any remarks to you other than this:

But, gpuccio, do you not see that Lenski was only manipulating the environment? The environment in this case, as in life in general, is the designer. Lenski provided the empty niche. Eventually a lucky mutant ended up in that niche and flourished. Selection is not random.

so I am not sure how I can be misrepresenting you.

a) The environment is no designer by definition. And it is not even the first cause of the adaptation. The adaptation starts in the bacteria themselves, in the information that allows them to replicate, to have a metabolism, and to exploit the environment for their purposes. Obviously, changes in the environment, especially extreme changes such as those Lenski implemented, stimulate adaptation.

The environment designs by selecting from what is available. New genotypes arise by mutation, duplication, recombination, etc. Adaptation is the result of environmental selection. I think you mean to say that there is some suggestion that stress conditions can stimulate hyper-mutation in bacteria. This creates more material for environmental selection to sift through.

b) That adaptation is a tweaking of the existing regulation of existing function. No new biochemical function is created. We can discuss if the adaptation is only the result of RV + NS (possible), or if it exploits adaptive deterministic mechanisms inherent in the bacterial genome (more likely). However, no new complex information is created.

If you are referring to the Lenski experiment, you are flat wrong here.

c) NS is not random, obviously.

Obviously! Glad we agree!

It is a deterministic consequence of the properties of the replicator (replication itself, metabolism, and so on) interacting with environmental properties. The environmental changes are usually random with regard to the replicator functions (because they are in no way aware of the replicators, except in the case of competition with other replicators). Anyway, the environment has no idea of what functions can or should be developed, so it is random in that sense. The environmental changes made by Lenski are not really random (he certainly had some specific idea of the possible implications), but I can accept that they are practically random for our purposes. What we observe in Lenski’s experiment is true RV + NS OR true adaptation. I don’t think we can really distinguish, at present. Anyway, it is not design. And indeed, the result does not have the character of new design.

Whilst I find your prose a bit dense here and somewhat anthropomorphic (awareness in replicators, environments having ideas), I can’t see much to argue with.

d) NS is different from IS (intelligent selection), but only in one sense, and in power:

d1) Intelligent selection (IS) is any form of selection where a conscious intelligent designer defines a function, wants to develop it, and arranges the system to that purpose. RV is used to create new arrangements, where the desired function is measured, with the maximum possible sensitivity, and artificial selection is implemented on the basis of the measured function. Intelligent selection is very powerful and flexible (whatever Petrushka may think). It can select for any measurable function, and develop it in relatively short times.

d2) NS is selection based only on fitness/survival advantage of the replicator. The selected function is one and only one, and it cannot be any other. Moreover, the advantage (or disadvantage, in negative selection) must be big enough to result in true expansion of the mutated clone and in true fixation of the acquired variation. IOWs, NS is not flexible (it selects only for a very tiny subset of possible useful functions) and is not powerful at all (it cannot measure its target function if it is too weak).

Those are the differences. And believe me, they are big differences indeed.

I think I have to accuse you of reification here. What is “intelligent selection” with regard to evolutionary processes?

By the way, just to tease Petrushka a little, it is perfectly possible to implement IS that works exactly like NS: we only need to measure reproductive fitness as our desired function. That’s exactly what Lenski did. Lenski’s experiment is, technically, an example of intelligent selection that aims to imitate, as much as possible, NS (which is perfectly fine).

My response would depend on whether and how you can define or identify the process you call “intelligent selection”. I suspect Petrushka can speak for herself!

_________________________________________________________________________

Any interested party is invited to comment. In Lizzie’s absence, I can only approve new commenters in threads I author. I am sure matters will be regularised soon. I hope gpuccio will find time to visit, as unfortunately not many of us are able to comment at Uncommon Descent. Remember Lizzie’s rules and assume others are posting in good faith.

(Added in edit 22nd September 2012)

Gpuccio replies:

To Alan Fox (on TSZ):

Thank you for your kind invitation to take part in The Skeptical Zone. My past experiences in similar contexts have been very satisfying, so I would certainly like to do that.

Unfortunately, I am also aware of how exacting such a task is on my time, and I don’t believe that at present I can do that while still posting here at UD.

So, for the moment I will try to answer the main questions you raise there about my statements here, so that they are also visible to UD commenters, who after all are those to whom I feel most committed. I hope you understand. And anyway, it seems that you guys at TSZ are reading UD quite regularly!

I would just point out to Petrushka that it is not a question of “courage”, but of time: I have already done that (discussing things on a darwinist friendly forum) twice, and my courage has not diminished, I believe, since then.

No problem, gpuccio. I’ll paste any comments from you that I see that are directed at TSZ as I get the chance.

88 thoughts on “An Invitation to G Puccio”

  1. My questions are fairly simple:

    I would like to know how the designer overcomes the problem of emergence–the inability of chemists to predict the properties of molecules from the properties of constituent atoms. GP could illustrate overcoming this barrier with a simple molecule like water.

    Of course biological molecules are a bit larger, and I would like to know how the designer acquired his knowledge of the properties of coding strings and how he indexes them by function and by need in changing environmental niches.

    Some of my other questions are listed in the OP.

    I’m particularly interested in how a designer fixes mutations that have no obvious somatic effect, but which become important later.

  2. gpuccio,

    I’m glad you’re responding at UD and hope you will choose to join us here. I understand your point about time constraints, but you should also consider the fact that this is a more open forum where neither you nor other participants will be arbitrarily banned. To provide a little context, here’s what I was responding to in the UD thread that began our conversation (your original words are nested twice, my response once):

    Any string that exhibits functional complexity higher than some conventional threshold, which can be defined according to the system we are considering (500 bits is a UPB; 150 bits is, IMO, a reliable Biological Probability Bound, for reasons that I have discussed) is said to exhibit dFSCI.

    Okay, so dFSCI is a true/false value based on the calculated functional complexity.

    It is required also that no deterministic explanation for that string is known.

    Now this is problematic. You seem to be defining dFSCI as a measure of ignorance. If you calculate the functional complexity of a string of unknown provenance and conclude that it meets the threshold for dFSCI, why would that calculation suddenly be moot if you learn how the string was created? Further, if a person designs an object deterministically, does it not have dFSCI? Maybe I need to understand what you mean by “deterministic” better.

    dFSCI cannot be created by unguided evolution.

    Well, depending on what you mean by “deterministic”, that may be true by definition. That wouldn’t be particularly interesting, though.

    I’ll respond to your latest comment to me in my next comment on this thread.

  3. gpuccio,

    From your most recent comment to me at UD:

    In that case, I think you have a fundamental problem because you are defining dFSCI such that only “non-deterministic” mechanisms can create it. Just so I’m clear, do you consider evolution (random mutations of various types, differential reproductive success, neutral drift, etc.) to be deterministic? If so, dFSCI doesn’t distinguish between “designed” and “non-designed” but between “known to be designed”, “known not to be designed”, and “unknown”. And just to be further painfully clear, would you agree that deterministic mechanisms can create functional complexity of more than 500 bits, by your definition?

    I will try to be more clear. In the definition of dFSCI, the exclusion of deterministic mechanisms is meant to exclude those cases of apparent order or function that can be explained as the result of known physical laws.

    This still sounds like you are explicitly defining dFSCI such that it cannot be created by any known process. This makes it impossible to use as an identifier of intelligent design because you would first have to eliminate all possible other causes.

    It also still remains an argument from ignorance. If you observe an object that has the necessary level of functional complexity and conclude that it has dFSCI, finding out more about its provenance later could change the conclusion from “dFSCI is present” to “dFSCI is absent” without any change in the measured functional complexity. That means that all dFSCI indicates is “we know how this was made” or “we don’t know how (or if) this was made”. This is problematic because of what I asked before:

    Are your concepts of functional complexity and dFSCI intended to be used to identify design where it is not known to have taken place or merely to tag design where it is known to have happened?

    Obviously the first option.

    dFSCI as you define it cannot be used to identify design where it is not already known to have taken place.

    a) The concept of dFSCI applies only to the RV part. What I mean is that dFSCI tells us if some step that should happen only by RV is in the range of the probabilistic resources of the system. As I have said, 150 bits (35 AAs) are more than enough to ensure that a single step of that magnitude will never happen. Empirically, as shown by Behe, anything above 3 AAs is already in the field of the exceptional.

    b) NS instead is a special form of deterministic effect, mainly due to the properties of replication itself, and partly to environmental factors that interact with replication. dFSCI has nothing to say about NS. The modeling of the NS effect must be made separately.

    What you seem to be saying here is that dFSCI is present if a single mutation generates more than 150 bits of functional complexity. Is that the case? Would you consider a gene duplication event of more than 75 bases to be such a mutation?

    If you do consider such (observed) duplication mutations to constitute dFSCI, then it is obvious that evolution can generate it. If you, for whatever reason, exclude such mutations, it suggests to me that dFSCI is defined deliberately to exclude any evolutionary process because evolution overwhelmingly tends to only explore regions of genotype space very close to points known to be viable.

    Before I make way too many assumptions about your position, let me close with a question: Does dFSCI only exist when a single change of more than 150 bits of functional complexity takes place or does it exist if that change takes place in multiple steps?

     

  4. Pasting the part of gpuccio’s comment addressed to me at Uncommon Descent:

    Alan Fox:

    I confess to many faults, among them reading too fast and typing too slowly. I also don’t have a good memory and don’t recall recently addressing any remarks to you other than this:…

    I did not intend to criticize you in any way. I like the way you express things. I only meant that your comment, IMO, seemed not pertinent to what I had said about Lenski. You say: “But, gpuccio, do you not see that Lenski was only manipulating the environment?” But I had never denied that. So I wrote: “I am afraid you miss the points I made”, in the sense that I had said exactly what you were inviting me to “see”. And I apologize for the “misrepresent” word: “and misrepresent other points” is probably not a brilliant way to express it, but I was not referring to you misrepresenting me, but to you misrepresenting the role of the environment as designer in the second part of your phrase: “The environment in this case, as in life in general, is the designer.” Which indeed I commented upon. So, I apologize if I gave the impression that I was saying that you were misrepresenting me.

    The environment designs by selecting from what is available.

    Well, in my use of words that is not design. I have given an explicit definition of design, to avoid confusion. And anyway, the environment just interacts with the replicators. My point is that NS is the result of an interaction between replicators, with their biological information, and the environment. I don’t think that this point is really questionable.

    If you are referring to the Lenski experiment, you are flat wrong here.

    Yes, I am referring to the Lenski experiment there. Why am I wrong?

    I think I have to accuse you of reification here. What is “intelligent selection” with regard to evolutionary processes?

    I would invite you to reread what I wrote: “d1) Intelligent selection (IS) is any form of selection where a conscious intelligent designer defines a function, wants to develop it, and arranges the system to that purpose. RV is used to create new arrangements, where the desired function is measured, with the maximum possible sensitivity, and artificial selection is implemented on the basis of the measured function.” I would think that is a clear definition. And, obviously, it has nothing to do with unguided evolutionary processes. It is a form of intelligent design. “d2) NS is selection based only on fitness/survival advantage of the replicator. The selected function is one and only one, and it cannot be any other. Moreover, the advantage (or disadvantage, in negative selection) must be big enough to result in true expansion of the mutated clone and in true fixation of the acquired variation.” NS has to do with unguided evolutionary processes. What is wrong with that? “Those are the differences. And believe me, they are big differences indeed.” I maintain that. Where is the reification?

    My response would depend on whether and how you can define or identify the process you call “intelligent selection”.

    I believe I had done exactly that.

  5. Thanks for the response, gpuccio, and no offence taken on misrepresentation. I’d like to expand on what constitutes design. A problem that often arises in discussion is miscommunication. I accept that saying the environment acts as the designer in evolutionary processes may not coincide with your idea of “design” as applied to the diversity of life. I say:

    The environment designs by selecting from what is available.

    because that is exactly how I visualise the process of evolution. Variation arises in the gene pool by mutations, duplications, recombination and so forth, and variations that are not immediately deleterious are then available for the environment to filter. The environment is dynamic and multi-dimensional. It is climate, weather, diurnal, seasonal, catastrophic and infinitesimal, plate tectonics, black smokers and reproductive isolation. It’s intra-species competition, extra-species competition, predators, prey, parasites, hosts, symbionts and symbiogenesis. I see it mainly as a passive process, organisms inexorably being honed to better fit the niches they haphazardly tumble into. This seems very noticeable watching plants recolonising cleared land, say after a forest fire, or when weeding my garden. The most convincing evidence that this is what happens is that we always find organisms making a living in places to which they are supremely well adapted and dead in places to which they are not. If this isn’t design, I don’t know what is! There is also the element of parsimony here. Natural selection does not need to account for the amazing adaptations we find among living and extinct species; they are a predicted result of the theory.

    But “design” by the environment is only a word I am using as a shorthand for the process that is natural selection. Maybe when you use design you are not thinking about the lockstep between organism and niche. By the way, though myself an atheist and not finding the need of a religious explanation for life, the universe and everything, I wonder why the idea that God could create through natural processes is anathema to (for instance) Biblical literalists.

    Anyway, to your definition of “design”. “Design” (plus “designer” etc.) appears nearly 1,000 times in the UD thread comments and I did not spot where you define “design”, so I would be most grateful if you could point me to your definition.

    Unless you mean this

     Intelligent selection (IS) is any form of selection where a conscious intelligent designer defines a function, wants to develop it, and arranges the system to that purpose.

    in which case, I’m even more in the dark! 😉

  6. Continuing:

    gpuccio wrote:

    b) That adaptation is a tweaking of the existing regulation of existing function. No new biochemical function is created. We can discuss if the adaptation is only the result of RV + NS (possible), or if it exploits adaptive deterministic mechanisms inherent in the bacterial genome (more likely). However, no new complex information is created.

    and I said

    If you are referring to the Lenski experiment, you are flat wrong here.

    gpuccio:

    Yes, I am referring to the Lenski experiment there. Why am I wrong?

    Because a new strain of E. coli arose by variation that was able to digest citrate. That strain bloomed in the niche provided by Lenski’s flasks. How on Earth can the novel ability to digest citrate not be a new biochemical function?

    Where is the reification?

    “Intelligent selection” does not seem to have been in general use as a phrase linked to ID before I came across it in your comment. Using it so confidently, you seem convinced such a concept is meaningful. I.e., it seems real to you. Notwithstanding your definition – Intelligent selection (IS) is any form of selection where a conscious intelligent designer defines a function, wants to develop it, and arranges the system to that purpose. – I am unconvinced such a process exists.

    My response would depend on whether and how you can define or identify the process you call “intelligent selection”.

    I believe I had done exactly that.

    Well, no. Selection in an evolutionary context is the same process. There is no distinction between artificial and natural selection. In plant and animal husbandry, the plant or animal breeder is a very important part of the environment; the selection process is not different in kind. I don’t see what you are driving at with “intelligent” selection, unless you are bringing in imaginary forces or actions. If so, fine, and we have to agree to disagree on the existence of imaginary intelligent designers.

  7. gpuccio

    To Alan Fox (on TSZ):

    No, that was my definition of Intelligent Selection. My definition is in post #5. I paste it here, but if you read the original post you will find other brief, useful definitions of mine:

    a) Design is the act by which conscious intelligent beings, such as humans, represent some intelligent form and purposefully output that form into some material system. We call the conscious intelligent being “designer”, and the act by which the conscious representation “models” the material system “design”. We call the material system, after the design, a “designed object”.

  8. Thanks for the clarification, gpuccio.

    I don’t find your definition very much help as it throws us back on to what is meant by “intelligent” and you include the phrase “such as humans”. I first came across “intelligent design” as a phrase round mid 2005 on encountering an ID proponent in a general discussion forum. I was intrigued enough to follow the links provided and ended up at Uncommon Descent (at the time still operated and moderated by Bill Dembski). I registered and submitted a comment asking for a definition of “Intelligent Design”. No comment appeared and my registration wouldn’t work. Thinking it was a glitch, I attempted to register several more times before suspecting I was not going to get an answer.

    Sorry for the digression but I am still doubtful a clear and consistent explanation of “Intelligent Design” exists in a scientific context. There is certainly no clear scientific definition of “intelligence” other than just an ad hoc comparative. You seem to suggest by saying “material output” that an intelligent designer’s input is immaterial. Science works by observing and postulating regularities, conservation of mass/energy, action and reaction equal and opposite. Could not ID scientists look for a material output with no material input? Our designer could tinker by loading the dice on variation events, I guess, but how would we tell?

  9. @ mung

    I agree that “Natural Selection” is a poor descriptor for differential survival of alleles. I propose “Environmental Design”.

  10. The point of the word “natural” is that populations get shaped regardless of whether humans are managing the breeding or not. What GP fails to realize is that “natural” selection integrates hundreds of dimensions simultaneously. Something humans have never mastered.

    When humans try to manage selection they tend to focus on one or two traits and wind up with weak (inbred) populations that cannot survive without continuous human intervention. This happens over and over in our pets and in our crops.

    It’s why we have potato famines, why our sweet bananas are likely to go extinct in the near future, and why many purebred animals are sickly. Intelligent selection simply isn’t as clever as selection that sees all dimensions of fitness simultaneously.

  11. petrushka: “Intelligent selection simply isn’t as clever as selection that sees all dimensions of fitness simultaneously.”

    Yes, and it’s a point that should be pressed.

    ID never seems to argue multiple on-going threads of “intelligent design”.

    Any “designer” that can “fine-tune” the universe should be able to multi-task.

     

  12. Another thing everyone should have learned by eighth grade is the purpose of limiting and controlling variables in an experiment (such as Lenski’s). Gpuccio’s sneering at such limitations indicates a profound ignorance of method. Perhaps he never participated in a science fair.

    At any rate, limiting the variables allows checking for things like non-stochastic mutation. It also allows replication of the experiment.

  13. Gpuccio: in the first 20,000 generations of the Lenski experiment, what were the desired properties, and how was the selection of them directed?

  14. gpuccio,

    To onlooker (on TSZ):

    I appreciate your willingness to continue this discussion around kairosfocus’ censorship, but wouldn’t it be easier for you to come here? Everyone at UD, with the exception of one person banned for vulgarity, can comment here. Almost no one here is welcome at UD.

    In any case, to the discussion.

    No, dFSCI can be used to infer design.

    Okay, in that sense your dFSCI is similar to Dembski’s CSI. It differs in that Dembski claims to be able to detect design without knowing anything about the provenance of the artifact under consideration.

    Our empirical experience says that dFSCI is always associated with design.

    We haven’t got to empirical observations yet. I’m still trying to repeat back to you your definition of dFSCI in a form you agree matches your intended meaning. Once we have an agreed definition, we can look for empirical evidence. This is my understanding thus far:

    Functional Complexity: The ratio of the number of digital strings that encode a particular function to the number of digital strings of the same length.

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    Is this accurate?
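
    Just to make sure I am reading those two definitions the way you intend, here is a minimal sketch in Python of how I would compute them. The function and the numbers are invented purely for illustration; this is my reading of your terms, not your own procedure:

        from math import log2

        def functional_complexity_bits(n_functional, string_length_bits):
            # -log2 of the fraction of equal-length strings that perform the
            # function, i.e. the size of the target space relative to the
            # whole search space.
            return string_length_bits - log2(n_functional)

        def dFSCI(n_functional, string_length_bits,
                  deterministic_explanation_known, threshold_bits=150):
            # Boolean: functional complexity above the threshold AND no known
            # deterministic explanation. 150 bits is gpuccio's proposed
            # biological threshold (roughly 35 amino acids, since
            # 35 * log2(20) is about 151 bits); 500 bits is the UPB.
            if deterministic_explanation_known:
                return False
            return functional_complexity_bits(n_functional, string_length_bits) > threshold_bits

        # Invented example: if 2**100 of the 2**500 possible 500-bit strings
        # perform the function, the functional complexity is 500 - 100 = 400 bits.
        print(functional_complexity_bits(2 ** 100, 500))                    # 400.0
        print(dFSCI(2 ** 100, 500, deterministic_explanation_known=False))  # True
        print(dFSCI(2 ** 100, 500, deterministic_explanation_known=True))   # False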

    Eliminating known necessity explanations is simply dutiful: if someone can provide a credible necessity explanation, the concept itself of dFSCI in that object is falsified, and the necessity explanation becomes by default the “best explanation”. But if no credible necessity explanation is available, design becomes the best explanation, not by any logical principle, but because of the known empirical association between dFSCI and design.

    If you’re talking about artifacts that could conceivably have been created by humans, the only known source of dFSCI you have mentioned, then you may be able to make an argument. If you are talking about artifacts that humans could not possibly have created, then your dFSCI boils down to “We don’t know how it came about.”

    That is not an “argument from ignorance”

    I didn’t call it an argument from ignorance, I said it was only a measure of ignorance, not of design.

    The argument is empirical

    I don’t think that word means what you think it means. You can’t go from observations of humans to extrapolations of unknown, unevidenced designers and claim to have empirical support.

    Before I make way too many assumptions about your position, let me close with a question: Does dFSCI only exist when a single change of more than 150 bits of functional complexity takes place or does it exist if that change takes place in multiple steps?

    Definitely, as I have already said, it can take place in as many steps as you like.

    Thanks for the clarification. I would like to apply your proposed design indicator to the CSI experiment that Lizzie organized here.

    The basic idea is to find a string of 500 bits representing coin flips such that the product of the lengths of the substrings containing consecutive heads is greater than 10^60.

    There are two approaches used to solve this problem. The first was for a human being to sit down and think about it, do a little math, and figure out that the optimal solution consists of a certain number of repeating sequences of heads broken up by single tails. The second is to use a genetic algorithm to model the application of simple evolutionary techniques to the problem. Both approaches led to a solution.
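
    For concreteness, here is a bare-bones sketch (in Python) of the kind of GA I have in mind. It is not the code actually used in Lizzie’s thread, and a hill-climber this crude may or may not clear the 10^60 bar in a reasonable number of generations, but it shows the structure: random variation plus selection on a defined fitness function.

        import random
        from itertools import groupby
        from math import log10

        def fitness(bits):
            # Product of the lengths of the runs of consecutive heads (1s).
            product = 1
            for value, run in groupby(bits):
                if value == 1:
                    product *= sum(1 for _ in run)
            return product

        def evolve(length=500, pop_size=100, mutation_rate=0.005,
                   generations=5000, target=10 ** 60):
            population = [[random.randint(0, 1) for _ in range(length)]
                          for _ in range(pop_size)]
            for gen in range(generations):
                population.sort(key=fitness, reverse=True)
                if fitness(population[0]) > target:
                    break
                # Keep the fitter half; refill with mutated copies of the survivors.
                survivors = population[: pop_size // 2]
                children = [[1 - b if random.random() < mutation_rate else b
                             for b in parent] for parent in survivors]
                population = survivors + children
            return gen, population[0]

        gen, best = evolve()
        print(f"after {gen} generations, best fitness is about 10^{log10(fitness(best)):.0f}")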

    I have three questions about this in relation to dFSCI:

    1) Do you agree that a string representing a solution has functional complexity in excess of 150 bits?

    2) Do you consider the GA approach to be deterministic, even though it includes a random component?

    3) If the human and the GA come up with the same string, does the string generated by the human have dFSCI and the string generated by the GA not have dFSCI, according to your definition?

     

  15. As I understand it, gpuccio has stated that:

    a) The concept of dFSCI applies only to the RV part. What I mean is that dFSCI tells us if some step that should happen only by RV is in the range of the probabilistic resources of the system. As I have said, 150 bits (35 AAs) are more than enough to ensure that a single step of that magnitude will never happen. Empirically, as shown by Behe, anything above 3 AAs is already in the field of the exceptional.

    b) NS instead is a special form of deterministic effect, mainly due to the properties of replication itself, and partly to environmental factors that interact with replication. dFSCI has nothing to say about NS. The modeling of the NS effect must be made separately.

     

    OK, so if I understand correctly, whenever we see FCSI which is so great that a pure mutational process could not plausibly explain it, even once in the history of the Universe, we cannot yet say that RV+NS cannot explain it. We can only say that the RV part, acting alone, cannot explain it.

    So gpuccio still has the task before him of showing that differences of fitness (together with the RV) cannot explain the evolution of the FCSI.

    gpuccio, are you relying on William Dembski’s Law of Conservation of Complex Specified Information to establish that? Or something else?

     

  16. A little physics lesson (at a high school level) for ID/creationists:

    The ratio between the electrical potential energy and the gravitational potential energy between two protons is 1.24 x 10^36.

    The charge-to-mass ratio for a proton is 9.58 x 10^7 C/kg.

    If something on the order of 10-gram-sized objects like dice and coins had the same charge-to-mass ratios as protons, the potential energies of interaction among them for something like 10 cm separations would be on the order of 10^23 J. This energy is on the order of 10^7 megatons of TNT.
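
    Anyone who doubts those numbers can check the arithmetic with a few lines of Python and the standard constants (order-of-magnitude only):

        # Standard SI constants (approximate values).
        k_e = 8.99e9      # Coulomb constant, N m^2 / C^2
        G   = 6.674e-11   # gravitational constant, N m^2 / kg^2
        e   = 1.602e-19   # proton charge, C
        m_p = 1.673e-27   # proton mass, kg

        print(k_e * e**2 / (G * m_p**2))   # ~1.24e36: electric / gravitational PE ratio
        print(e / m_p)                     # ~9.58e7 C/kg: proton charge-to-mass ratio

        q = (e / m_p) * 0.010              # charge on a 10 g object with a proton's q/m
        U = k_e * q**2 / 0.10              # PE of two such objects 10 cm apart, in joules
        print(U)                           # ~8e22 J, i.e. on the order of 10^23 J
        print(U / 4.184e15)                # ~2e7 megatons of TNT (1 Mt = 4.184e15 J)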

    Why do ID/creationists use dice, coins, and alphabets – objects which don’t interact among themselves with these kinds of energies – to calculate probabilities that “prove” that proteins and complex molecules have such a low probability of forming that intelligence is required to assemble them?

    How do probabilities associated with non-interacting objects prove that random variation in the presence of natural selection cannot do the job of evolution?

    Even high school students would be puzzled by ID/creationist pre-eighth grade “proofs”.

  17. Joe: “Mr Mike, the fact that there is that electrical potential energy and gravitational potential energy between two protons is evidence for ID. The same for the charge-to-mass ratio for a proton.”

    This is without doubt the single most powerful Creationist argument for God I have ever heard.

    It’s clean, testable and the conclusion stands on its own.

    After all, if God designed the “Energizer Bunny”, he must have created the real ones too! 🙂

     

  18. gpuccio’s reply (at least, the first few paragraphs of it, the whole thing is comment 497 in this thread at Uncommon Descent) was:

    That is naturally true. NS, or whatever we want to call it, is only a side effect of biological reproduction, so it needs complex biological reproducers just to exist.

    And anyway, even considering the huge amount of dFSCI already existing, for example in LUCA, that was probably nothing more and nothing less than a prokaryote similar to those we can observe now, that cannot explain the successive emergence of new information throughout the whole natural history.

    That’s why I focus my arguments on the emergence of basic protein domains, just as Axe does. I am perfectly aware that they are not all: a lot of other cases for dFSCI and design in biological beings could be made and will be made: regulation systems, body plans, irreducibly complex molecular machines, and so on.

    OK, so yours is an argument about the improbability of the Origin Of Life and about the emergence of things like protein domains.  And these are based on the improbability of the mutations arising that are minimally needed for those.

    So I gather that the mere presence of FCSI (which would be fitness so high that a pure mutation process could not plausibly bring it about) does not indicate that evolution could not accomplish this.  As long as the individual mutations could arise and then successively have their gene frequencies changed owing to their improved fitness, you could get arbitrarily high amounts of SI.

    Given that, I suggest you not phrase your argument in terms of the amount of FCSI.  The amount isn’t the issue; it is whether the increments of change can occur as changes of gene frequency or whether one would have to wait far too long for the required mutations.  In short, in spite of the terminology, yours is a Michael-Behe-style argument rather than a William-Dembski-style argument.

  19. Joe,

    kairosfocus: “As I have said, Jerad can seek whatever help he wants to compose that 6,000 or so word essay.”

    Does that mean KF is prepared to write an essay on the abilities of the designer to actually do what ID claims he can, and that is design for “unseen future functionality”?

     

  20. As near as I can tell, none of them has any significant understanding beyond 8th grade science, if even that.  Nor do they seem to read even their own gurus like Dembski and Abel who actually make such ludicrous probability calculations and assertions.  Discussions with them are pointless and go nowhere.

  21. We bother engaging with them because if no one does, one day our kids will come home with an A grade in Intelligent Design.

    It’s a serious issue that won’t go away.

    I don’t want to see a religious fundamentalist group deciding what gets taught outside of their churches.

    I also believe we’re slowly losing.

     

  22. Oleg said: This makes me wonder why people here bother engaging the ID fans at all.

    I agree. I’m not sure what these threads are trying to accomplish. On the other hand, I am not really familiar with the history of the interactions between the individuals here and those over at UD. I started looking more closely at some of the UD people only within the last year. I’ve been traveling and immersed in other things; and they don’t strike me as very interesting or novel. They are very strange, however.

    The people over at UD don’t respond in any way that suggests any understanding of a scientific point or of scientific evidence. All that copy/paste stuff is a dead giveaway. I get the impression that the people over there haven’t had even the early childhood experiences with things like magnetic marbles and beads.

    Experiences like that provide a background upon which physics and chemistry can be taught. One would not try to calculate the probabilities of molecular assemblies the way Dembski and Abel do if they had the vaguest hint of what chemistry and physics teaches us about atomic and molecular assemblies.

    As it is, their camp followers have no idea what anyone is talking about when someone points to the fields of condensed matter and chemistry as counterexamples to Dembski’s or Abel’s probability calculations. Their responses are complete, uncomprehending non-sequiturs.

    Toronto said: I also believe we’re slowly losing.

    Some of what is happening in public education can be blamed on the ID/creationists, but not all. There is a broader ideology that totally rejects the social contracts that large societies must maintain among their members in order to coordinate their activities and survive. With the support of extreme sectarians, these ideologues have managed to disrupt much of what needs to be maintained in our society; and that includes public education.

    Rejecting the social contracts and tearing down the structures that maintain fairness and justice also contributes to poverty, which in turn puts more stress on public education.

    But that in no way excuses ID/creationist disruptions of education. They are serious participants in this ideology that rejects secular society and social contracts.

  23. olegt: “This makes me wonder why people here bother engaging the ID fans at all.”

    gpuccio: “That’s exactly what I too wonder about.”

    We do it because it shows how “religion-based” your arguments are.

    Whenever someone exposes your faulty logic and complete misunderstanding of evolution over on UD, they get banned.

     

     

  24. gpuccio: “Ah, now I understand! It’s a moral issue (we are the bad guys), and nobly motivated by your deep love for innocent children.”

    No, you just have fears you can’t handle.

    If there is no God, who will tell you “right” from “wrong”?

    If there is no God, then you’re “just an animal” with no purpose.

    So you fight to create a “social order” with a very big emphasis on “order”.

    What you need to accept is that as an adult, you make all the decisions and you’ll be held accountable for all of them.

    You can’t hide behind religious dogma.

    Ask Dembski why he had to “re-evaluate” the Noah’s Ark story.

    It wasn’t his idea.

     

  25. gpuccio,

    Mung: “ID critics at TSZ are known and established liars.”

    As long as your side has Mungs and Joes, we stand a chance! 🙂

  26. Upright BiPed: “GP meets the physicist Mike Elzinga

    Elizabeth Liddle’s very own Benito Mussolini…”

    UB shows class in his second straight loss with his “Semiotic Theory Of Genesis”.

     

  27. Sheesh! 🙂

    The calculations and the point of the calculations went right over their heads; yet it appears to have made them really mad.

    It certainly demonstrates how these kinds of sectarians can nurse hatreds until they are completely blind.  They even hate people who can understand high school level science and can do math.  How much more they must hate PhDs who actually do science.

    A number of years ago there were some street thugs in East Rochester, NY who ambushed and beat up kids coming home from school just because the kids were carrying books.  Apparently we are seeing the sectarian equivalent over at UD.

  28. Don’t get me wrong, I can see the entertainment value. It’s hilarious when these guys try to parse the word salad of that fearsome retired veterinarian and renowned ID scholar David L. Abel. Or when they learn physics from ba77.

    But a threat to public schools? After Dover? You’ve got to be kidding. 

  29. Folks, if Elizabeth were active here she would send the lot of you to the Sandbox.  Your material on people’s personal and political motivations is irrelevant to the gpuccio discussion.  The same thing is happening at UD where everyone here is being called liars.

    I am trying to have a discussion with gpuccio but the noise level here looks to be a problem. 

  30. Over at UD mung has posted a link to some sort of response to the points I made here.

    Alas, the link is not only nonworking; a peek at the HTML source of that comment shows the link text to be entirely absent.  (I had hoped that I would find something like a missing quotation mark and be able to copy the link out from the HTML source, but no such luck.)

  31. gpuccio commented, in comment 509 at the UD thread:

    (quoting me:) Given that, I suggest you not phrase your argument in terms of the amount of FCSI. The amount isn’t the issue; it is whether the increments of change can occur as changes of gene frequency or whether one would have to wait far too long for the required mutations. In short, in spite of the terminology, yours is a Michael-Behe-style argument rather than a William-Dembski-style argument.

    (gpuccio’s response:) More or less, that is correct. But, style apart, the essence is similar.

    I certainly share with Behe the biology centered approach, but Dembski’s concepts are fundamental both for Behe and for me.

    But you are right, Dembski is pursuing more a pure logico-mathematical approach (that is obviously natural for him, and certainly very interesting). His approach can be extremely stimulating to get to some universal formal theory of CSI, and his analysis of GAs is very useful to debunk many evolutionists’ myths.

    What Behe, Axe and others are trying to do is to apply Dembski’s simple original concepts to an empirical analysis of the biological issue. For that, the original explanatory filter is practically enough, but a lot of detailed analysis of the neo-darwinian theory is necessary to show that in no way is it a credible random + deterministic explanation. My personal approach is similar, probably more centered on the empirical concept of “conscious intelligent beings”.

     

    The original explanatory filter stated that if a certain level of high adaptation (in effect) was seen, this could not be explained by RV+NS.  You have dropped the NS part entirely.    You seem not to be using the Law of Conservation of Complex Specified Information that was the essential part of Dembski’s argument.

    If the requisite mutations occur, and one by one are fixed by natural selection owing to their fitnesses, then your dFSCI can in fact be achieved.  So just calculating it gets you nowhere.  Dembski’s argument had his LCCSI, which he argued prevented both RV and NS from achieving the required level of Specified Information.

    Basically you aren’t using the LCCSI, which is a wise step because it turned out to be wrong. You instead add on Behe’s argument about Irreducible Complexity.   I don’t see what any calculation of dFSCI accomplishes beyond that.

  32. I second this.  The moderation policy at UD has meant that this blog has become a sort of parallel discussion. It would be easier if it were one blog but it works in a fashion. 

    Let’s make sure this blog does not turn into another AtBC. I am not saying that there isn’t a place for AtBC as well – but it’s a different place. UD combines genuine criticism with more personal stuff  – but it works better if they are separate.

     

  33. Joe Felsenstein,

    You can have zero noise if you like if you can convince gpuccio to come here on the condition that everyone else stays out of it.

    I’m willing to do that just to see a fair discussion.

    The rest of us, and that includes those from UD, will simply refrain from commenting and allow you two to have an unmoderated debate.

    Ask gpuccio if this is acceptable.

     

  34. gpuccio,

    We haven’t got to empirical observations yet. I’m still trying to repeat back to you your definition of dFSCI in a form you agree matches your intended meaning. Once we have an agreed definition, we can look for empirical evidence. This is my understanding thus far:

    OK, but we will soon need empirical evidence in our discussion. Please remember that the connection between dFSCI and design is purely empirical.

    I would be delighted to discuss empirical observations. That does require clarity on the terms we’re using to describe those observations, though.

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    That’s fine, but please remember that the threshold of 150 bits is my personal proposal for realistic biological systems on our planet.

    Understood. What’s important at this point is that I understand that functional complexity is measured in bits while dFSCI is a boolean indicator. The question that remains is: An indicator of what?

    The reasoning goes this way: we define design starting from what we observe in human design.

    This does not appear to be the case. You define dFSCI as a boolean indicating whether or not an artifact was designed. You base this on the functional complexity, measured as the minimal number of bits required to describe some function of the artifact plus the lack of a known deterministic mechanism responsible for the artifact.

    You are still not measuring “design” in any objective sense. You are measuring functional complexity, by your definition, and in addition measuring your knowledge about the provenance of the artifact.

    Now, let’s remain in the field of possible human artifacts. If we apply the design inference to any material object that could be a human artifact, but of which at present we don’t know the origin, we will see that, in all cases, if we use the correct method (for example dFSCI with 500 bits of threshold, just to be sure for the whole universe as a system) and if we can afterwards ascertain the origin of the object, we will easily confirm that the method has no false positives, and that all objects exhibiting design are human artifacts, as independently ascertained after the inference.

    I don’t believe you can do this with the 100% accuracy you claim. Another poster here has already pointed out the difficulty with determining whether or not a particular piece of stone or bone found in an archaeological dig is natural or a tool.

    And what about natural objects that are not human artifacts? It is simple. They all can be classified into two classes:
    a) Biological objects, which often (but not always) exhibit clearly dFSCI: genes, proteins.
    b) Everything else, that never exhibits it.

    Not true. There are many non-biological phenomena that exhibit functional complexity of more than 500 bits. I believe Lizzie referred to a beach somewhere in the UK where all the rocks are sorted by size. The Giant’s Causeway is another example.

    Further, you are assuming your conclusion in (a). You can only conclude dFSCI, by your own definition, if you know of no deterministic explanation. Assuming that you consider the modern evolutionary synthesis to be deterministic by your definition, the most you can say is that there are biological artifacts with functional complexity in excess of 150 (or 500) bits. You don’t know that those artifacts are designed because you have no evidence for any intelligent agent that was present when they came into existence.

    dFSCI remains a measure of ignorance, not of design.

     

  35. Joe F. writes:

    If the requisite mutations occur, and one by one are fixed by natural selection owing to their fitnesses, then your dFSCI can in fact be achieved. So just calculating it gets you nowhere.

    I discussed ‘dFSCI’ with gpuccio earlier this year and reached the same conclusion. There’s no point in calculating dFSCI, since it can only rule out ‘tornado in a junkyard’ scenarios which no one believes anyway.

    I wrote:

    gpuccio, By your own admission, dFSCI is useless for ruling out the evolution of a biological feature and inferring design. Earlier in the thread you stressed that dFSCI applies only to purely random processes:

    As repeatedly said, I use dFSCI only to model the probabilities of getting a result in a purely random way, and for nothing else. All the rest is considered in its own context, and separately.

    But evolution is not a purely random process, as you yourself noted:

    b) dFSCI, or CSI, shows me that it could not have come out as the result of pure RV.

    c) So, some people have proposed an explanation based on a mixed algorithm: RV + NS.

    And since no one in the world claims that the eye, the ribosome, the flagellum, the blood clotting cascade, or the brain came about by “pure RV”, dFSCI tells us nothing about whether these evolved or were designed. It answers a question that no one is stupid enough to ask. [“Could these have arisen through pure chance?”]

    Yet elsewhere you claim that dFSCI is actually an indicator of design:

    Indeed, I have shown two kinds of function for dFSCI: being an empirical marker of design, and helping to evaluate the structure function relationship of specific proteins.

    That statement is wildly inconsistent with the other two. I feel exactly like eigenstate:

    That’s frankly outrageous — dFSCI hardly even rises to the level of ‘prank’ if this is the essence of dFSCI. I feel like asking for all the time back I wasted in trying to figure your posts out…

    You have an obligation to make it clear in future discussions that dFSCI is utterly irrelevant to the “designed or evolved” question. In fact, since dFSCI is useless, why bring it up at all? The only function it seems to serve is as pseudo-scientific window dressing.

  36. gpuccio,

    1) Do you agree that a string representing a solution has functional complexity in excess of 150 bits?

    Maybe. Apparently, I should calculate how many strings of 500 bits have the defined property. I suppose it can be done mathematically, but please don’t ask me to do that. Let’s say, just for the sake of discussion, that only 2^100 strings have the defined property. Then apparently, the dFSCI of the output is 400 bits, which is enough to affirm dFSCI according to my threshold (which, indeed, is a threshold for biological systems, but that is not important here).

    Why do I say “apparently”? Because you are telling me (and I believe it) that the output can be given by a GA. So, the GA is a deterministic way to produce the output (even if it uses some random steps). That does not mean that there is no dFSCI in the output. It just means that the true dFSCI of the output is the functional complexity of the GA. About which I know nothing.

    So, to sum up:

    a) if the GA is more functionally complex than the output, we can simply consider the dFSCI of the output, 400 bits, and safely infer design for it. It can be designed directly or through the GA; that makes no difference.

    b) If the GA is less functionally complex than the output, then the functional complexity of the GA is the true functional complexity of the output, its Kolmogorov complexity. That’s what we must consider, because it is the minimum functional complexity that can explain what we observe. IOW, the GA could have arisen randomly with a probability given by its functional complexity, and then the output would come as a necessity consequence.

    The GA engine itself has nothing to do with the complexity of the string describing the solution, by your own definitions of “functional complexity” and “dFSCI”. Based on those definitions, and the calculations provided in the thread about the coin flipping problem, it seems we agree that the solution string does have functional complexity well in excess of 150 bits, approaching 500 bits.

    3) If the human and the GA come up with the same string, does the string generated by the human have dFSCI and the string generated by the GA not have dFSCI, according to your definition?

    I have already answered that in my answer to the first point.

    You did in part. You did not directly state whether or not the human generated string exhibits dFSCI while the GA generated string does not. Assuming for the sake of this example that the string does have the minimal amount of functional complexity required, do either or both have dFSCI by your definition?

  37. gpuccio September 24, 2012 at 2:31 pm:

    If you look at my older definitions of dFSCI, you will see that I used to say that the complexity we have to measure is the Kolmogorov complexity, given known deterministic explanations that can compress the complexity of the string. It is the same thing as saying that we must exclude deterministic explanations, or just take them into consideration if they are known and credible. Now I avoid using the concept of Kolmogorov complexity, just for simplicity. Anyway, the concept in itself is simple: dFSCI measures the probability of coming into existence by random variation. It implies, therefore, a separate evaluation of the influence of known deterministic effects. As I have done for NS.

    This guy needs to switch to decaf, pronto. If “dFSCI” boils down to Kolmogorov complexity (a term he now avoids “just for simplicity”) then any random string is guaranteed to be highly complex according to Kolmogorov’s definition as its minimal description is the string itself. An object can be highly complex in this sense of the word, but there is no reason to pin its origin on a designer.

  38. What does “coming into existence by random variation” mean? It sounds like tornado in a junkyard.

  39. What does “coming into existence by random variation” mean? It sounds like tornado in a junkyard.

    That’s right, which is why dFSCI is useless as an indicator of design.

    Gpuccio seems to be coming (finally!) to realize this. He attempts to redefine dFSCI in comment 553 at UD:

    To be clear:

    a) If I am not aware of the GA, I would only compute the target space and then the dFSCI.

    b) If the GA exists, and I know it, I would compute the dFSI of the GA too.

    The lower value between a) and b) is the dFSI of the string, independently of whether it was directly designed by a human, or indirectly through a GA.

    So the computed dFSCI is always an upper bound on the real dFSCI. Any computed dFSCI that exceeds the threshold will be a false positive if we later discover a GA with sufficiently lower dFSCI that can produce the string.

    In other words: dFSCI, even under gpuccio’s new definition, gives false positives. That makes it useless as an indicator of design.
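
    To spell out the logic, here is the revised rule as I read comment 553, in a few lines of Python. The 400-bit and 100-bit figures are purely illustrative; the point is only that discovering a sufficiently simple GA retroactively turns a positive into a false positive.

        from math import inf

        THRESHOLD = 150  # gpuccio's proposed biological threshold, in bits

        def computed_dFSCI_bits(output_bits, known_ga_bits=None):
            # Comment 553, as I read it: take the lower of the functional
            # complexity of the string and of any GA known to produce it.
            return min(output_bits, inf if known_ga_bits is None else known_ga_bits)

        # Before any GA is known, the string scores 400 bits and design is inferred.
        print(computed_dFSCI_bits(400) > THRESHOLD)                     # True
        # Once a 100-bit GA turns up, the very same string drops below the threshold,
        # so the earlier positive was a false positive.
        print(computed_dFSCI_bits(400, known_ga_bits=100) > THRESHOLD)  # False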

    There are some other problems with gpuccio’s new formulation, but I’ll hold off on those. No point in bringing them up unless gpuccio can address the fatal problem of false positives.

  40. Actually, looking at GP’s response, it appears that he’s calculating the probability of a specific string arising. In other words, a specific target. Like weasel.

  41. There are some rather glaring sources of false positives.

    One is the existence of alleles, which demonstrates that there are functionally equivalent sequences nearby and that RMNS found them. This pretty much proves that evolution operates by exhaustively trying all nearby sequences.

    Another problem is the assumption that what is extant is something that was pre-specified rather than the current bottom of the pond. The fact that most species that have ever lived are extinct argues that there are no pre-specified forms.

  42. Henry Morris and Duane Gish would be very pleased to see how their memes are being expressed in their intellectual descendants over at UD. Every misconception and misrepresentation about science, from thermodynamics to atoms and molecules to the origins of life and onward to evolution, is expressed as clearly as a deadly genetic disease in Morris’s and Gish’s intellectual progeny over there.

    Even funnier, those UD characters – like abandoned street urchins – don’t know anything about their own intellectual ancestry.  They don’t know or understand the genetic and memetic markers that identify them; and they actually believe they are solving the “scientific” problems that Morris and Gish set out for them by introducing them into their intellectual memes and genes.  ID is their attempted “solution” to the pseudo-scientific memes that reside in their heads.

    Who says ID is not a morph of “scientific” creationism?  The paternity test screams otherwise; right over there at UD.  It’s in their intellectual DNA.

  43. Let me answer gpuccio’s replies to me out of order:

    In comment #537 in the UD thread you say (in its first part):

    I have just read your post, and I believe I have already answered in some detail in my previous post. I would only add that I don’t think that the explanatory filter did not take into consideration NS. NS is a deterministic explanation (or at least, an attempt at an explanation), so it must be considered. And falsified. Now I don’t want to speak for Dembski, I am not interested in who said what, but IMO the concept is clear. If you can explain the observed result by a credible non-design theory, be it purely random, purely deterministic, or a mix of the two, you have done the trick: the design inference is no longer warranted.

    So in other words, if we just observe 500 bits of dFSCI, that by itself does not establish that Design is responsible. Glad to see we are in agreement on that. Dembski had his Law of Conservation of Complex Specified Information (LCCSI), which was supposed to be able to establish that natural processes (deterministic and/or random) could not generate that much SI. If that theorem did the job for which it was designed, one could just invoke the presence of CSI and then conclude for Design. Alas, his theorem does not work (it is not proven, and also it is not formulated so as to be able to do the job even if it were proven). However this has not stopped numerous Design advocates from citing the presence of CSI as proof by itself that Design operated. Where have they done this? All over UD, many, many times.

    In the earlier reply, comment #534, gpuccio argued that in the case of the origin of new protein domains Design must be responsible rather than natural selection, of which he says “The problem is, it does not work because those intermediates don’t exist”. In short, the Behe argument. Just observing 500 bits of SI does not do the job of establishing Design, because there could be cases with that much SI where it was put into the genome bit by bit by random variation and differential fitness.

    So the 500-bit-ness is not the issue at all. If the required intermediates were absent, even a much smaller gain of SI would be inconceivable, while if enough intermediates were present, a much larger gain would be conceivable. So why present this as if the amount of SI were somehow crucial?

    In another long comment of gpuccio’s, #548, he argues that one has to add into the calculation all the information needed to set up a replicating system in the first place, one which has metabolism.

    (gpuccio:) we are again in a situation where the active information already included in a designed system allows some, very limited, output of new information through random search couple[d] to the properties of the existing complex information (reproduction and metabolism), and some information derived from the environment. 

    In that situation you are in effect acknowledging that information, wherever it originally comes from, can get into the genome as a result of this process.  However there is no proof from you that the replication system degrades as this happens or that its capacity for putting more information into the genome in the future is diminished in some predictable way. You do indicate that you feel that the capacity to do so is “very limited” but again, that is basically Behe’s argument and does not derive simply from the concept of Specified Information. So there is no basis to consider that the future information content of the genome is limited by the amount of information in the replicating system.  Any such assertion would need to be proven.
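
    To illustrate why the intermediates, and not the raw bit count, carry the weight, here is a toy in the spirit of Dawkins’s Weasel (mentioned in comment 40 above). It is a cartoon with an explicit target, not a model of any real genome, but it shows cumulative selection reaching a string whose blind-search odds are far beyond the thresholds being discussed.

        import math, random, string

        TARGET = "METHINKS IT IS LIKE A WEASEL"
        ALPHABET = string.ascii_uppercase + " "
        MUTATION_RATE = 0.05      # per-character chance of change
        OFFSPRING = 100           # offspring per generation

        def matches(s):
            return sum(a == b for a, b in zip(s, TARGET))

        def mutate(s):
            return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                           for c in s)

        parent = "".join(random.choice(ALPHABET) for _ in TARGET)
        generation = 0
        while parent != TARGET:
            # keep the parent in the pool so fitness never decreases
            parent = max([parent] + [mutate(parent) for _ in range(OFFSPRING)], key=matches)
            generation += 1

        bits = len(TARGET) * math.log2(len(ALPHABET))   # about 133 bits for this toy target
        print(f"hit the target in {generation} generations; "
              f"blind sampling faces odds of roughly 1 in 2**{bits:.0f}")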

  44. I should add that the replies to gpuccio that I just posted are very parallel to the arguments about GAs given just above by keiths and petrushka, and involve the same issues.

  45. So all this sophistication boils down to the old creationist argument: [something] looks complicated, so it must have been made by someone. Unless you figure out a natural explanation for [something], we’ll assume it was created.

    The bit-counting cargo cult was not invented by Dembski. Henry Morris of the Institute for Creation Research explained its origin:

    Dembski uses the term “specified complexity” as the main criterion for recognizing design. This has essentially the same meaning as “organized complexity,” which is more meaningful and which I have often used myself. He refers to the Borel number (1 in 10^50) as what he calls a “universal probability bound,” below which chance is precluded. He himself calculates the total conceivable number of specified events throughout cosmic history to be 10^150 with one chance out of that number as being the limit of chance. In a book written a quarter of a century ago, I had estimated this number to be 10^110, and had also referred to the Borel number for comparison. His treatment did add the term “universal probability bound” to the rhetoric.

    Been there, done that, got the T-shirt.
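
    For reference, a quick sketch converting those odds into the bit currency used in this thread; Dembski’s 1 in 10^150 bound is where the familiar 500-bit threshold comes from.

        import math

        # The odds quoted above, converted to bits (log base 2):
        for label, exp10 in [("Borel's bound", 50),
                             ("Dembski's universal probability bound", 150),
                             ("Morris's earlier estimate", 110)]:
            bits = exp10 * math.log2(10)
            print(f"{label}: 1 in 10^{exp10} is roughly {bits:.0f} bits")
        # prints about 166, 498 and 365 bits respectively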

  46. olegt, I think Dembski’s main addition to the argument was his Law of Conservation of Complex Specified Information. If valid, it would rule out natural selection as a means of getting organisms far enough out on a fitness scale for there to be CSI.

    And it is self-evident that organisms are that high in adaptation — if all that was available was mutation (with no natural selection) there is no hope, even once in the whole history of the Universe, of producing a fish or a bird.

    But could it happen if you also have natural selection? The LCCSI was intended to rule that out. This was a pretty gutsy thing to put forward. It proposed to invalidate 100 years of work in theoretical population genetics. If the LCCSI had been valid, it would have been the greatest advance in thinking about evolution since Darwin, or maybe even greater than Darwin. I would have written the letter of recommendation for the Nobel Prize myself.

    Unfortunately …

    Dembski’s other two additions to the ID corpus are his use of the No Free Lunch argument and (with Robert Marks) his Search For a Search argument.

  47. gpuccio,

    So, in clear English, dFSCI is a property objectively observable in objects. Now, listen with attention, please: The connection between dFSCI and designed objects is purely empirical. We observe that all objects that exhibit dFSCI, and of which we can know the origin, are designed objects. No object of which we can know the origin, and which exhibits dFSCI, has a non-designed origin.

    That’s because you define dFSCI such that it does not exist if a deterministic cause is known:

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    There are non-designed objects that have functional complexity, by your definition, in excess of that required to demonstrate dFSCI, but because a deterministic cause is known, their dFSCI value is “false”. That’s fine, but you can’t pretend that you’ve identified any empirical observations when the result is simply a consequence of your definitions.

    You are still not measuring “design” in any objective sense.

    I have no intention of “measuring design”. I infer design. What I measure is FSI, and then I categorize it as a boolean, dFSCI.

    You are measuring functional complexity, by your definition,

    QED.

    and in addition measuring your knowledge about the provenance of the artifact.

    No. I just test known possible deterministic explanations.

    You do no testing. You assert that the value of dFSCI is “true” when you don’t know of a deterministic mechanism. This is inherent in your definition:

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    All you are identifying is your ignorance about an artifact’s provenance.
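
    Your definition can be transcribed almost literally; note that one of its inputs is the observer’s state of knowledge, not any property of the object. A minimal sketch:

        def dfsci(functional_bits, deterministic_explanation_known):
            # Direct transcription of the quoted definition: the output depends
            # partly on the object (its functional bits) and partly on the
            # observer (whether an explanation happens to be known).
            return functional_bits > 150 and not deterministic_explanation_known

        # One and the same object, before and after a mechanism is discovered:
        print(dfsci(200, deterministic_explanation_known=False))   # True
        print(dfsci(200, deterministic_explanation_known=True))    # False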


  48. gpuccio,

    The GA engine itself has nothing to do with the complexity of the string describing the solution, by your own definitions of “functional complexity” and “dFSCI”.

    Wrong. It has all to do with it. The GA is a deterministic explanation for the string, and therefore a compression of its complexity. If the GA is simpler than the string, it can more likely emerge randomly, and then the string comes automatically. So, in that case, the dFSI of the system is the dFSI of the GA. Is it so difficult to understand?

    I’m just going by your definitions:

    Functional Complexity: The ratio of the number of digital strings that encode a particular function to the number of digital strings of the same length.

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    The provenance of the string has nothing to do with its functional complexity, according to your definition. The fact that it was generated by a GA, which is a deterministic mechanism again by your definition, means that its dFSCI value is “false”, but that’s different from the number of bits of functional complexity it contains.
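
    Taking your ratio definition at face value, the calculation looks something like this (the target-space size is assumed purely for illustration), and the string’s provenance never enters into it.

        import math

        def functional_complexity_bits(n_functional, n_total):
            # The quoted ratio, expressed in bits; nothing about how the
            # string was produced appears anywhere in the calculation.
            return -math.log2(n_functional / n_total)

        n_total = 4 ** 100        # all strings of length 100 over a 4-letter alphabet
        n_functional = 2 ** 120   # assumed size of the target space
        print(functional_complexity_bits(n_functional, n_total))   # 80.0 bits, however the string arose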

  49. gpuccio,

    I forgot your last question:

    Assuming for the sake of this example that the string does have the minimal amount of functional complexity required, do either or both have dFSCI by your definition?

    The string is the same. Its dFSCI is the same. If I am not aware of the GA, I will compute it in the direct way. If I become aware of the GA, I will refine my judgement by computing the dFSCI for the GA, and then correcting my measurement only if that FC is lower than the direct FC of the string. Anyway, the string always has the same FC. My judgement can be different depending on my awareness of the GA.

    I think I understand what you are saying, but could you please answer the questions directly? Assuming for the sake of this example that the string does have the minimal amount of functional complexity required, does the string generated by the human have a dFSCI value of “true”? Does the string generated by the GA have a dFSCI value of “false”?
