An Invitation to G Puccio

gpuccio addressed a comment to me at Uncommon Descent. Onlooker, a commenter now unable to post there,

(Added in edit 27/09/2012 – just to clarify, onlooker was banned from threads hosted by “kairosfocus” and can still post at Uncommon Descent in threads not authored by “kairosfocus”)

has expressed an interest in continuing a dialogue with gpuccio, and Petrushka comments:

By all means let’s have a gpuccio thread.

There are things I’d like to know about his position.

He claims that a non-material designer could insert changes into coding sequences. I’d like to know how that works. How does an entity having no matter or energy interact with matter and energy? Sounds to me like he is saying that A can sometimes equal not A.

He claims that variation is non-stochastic and that adaptations are the result of algorithmically directed mutations. Is that in addition to intervention by non-material designers? How does that work?

What is the evidence that non-stochastic variation exists or that it is even necessary, given the Lenski experiment? Could he cite some evidence from the Lenski experiment that suggests directed mutations? Could he explain why gpuccio sees this and Lenski doesn’t?

It’s been a long time since gpuccio abandoned the discussion at the Mark Frank blog. I’d like to see that continued.

So I copy gpuccio’s comment here and add a few remarks hoping it may stimulate some interesting dialogue.

To Alan Fox: I am afraid you miss the points I made, and misrepresent other points.

I confess to many faults, among them reading too fast and typing too slowly. I also don’t have a good memory and don’t recall recently addressing any remarks to you other than this:

But, gpuccio, do you not see that Lenski was only manipulating the environment? The environment in this case, as in life in general, is the designer. Lenski provided the empty niche. Eventually a lucky mutant ended up in that niche and flourished. Selection is not random.

so I am not sure how I can be misrepresenting you.

a) The environment is no designer by definition. And it is not even the first cause of the adaptation. The adaptation starts in the bacteria themselves, in the information that allows them to replicate, to have a metabolism, and to exploit the environment for their purposes. Obviously, changes in the environment, especially extreme changes such as those Lenski implemented, stimulate adaptation.

The environment designs by selecting from what is available. New genotypes arise by mutation, duplication, recombination and so on. Adaptation is the result of environmental selection. I think you mean to say that there is some suggestion that stress conditions can stimulate hyper-mutation in bacteria. This creates more material for environmental selection to sift through.

b) That adaptation is a tweaking of the existing regulation of an existing function. No new biochemical function is created. We can discuss whether the adaptation is only the result of RV + NS (possible), or whether it exploits adaptive deterministic mechanisms inherent in the bacterial genome (more likely). However, no new complex information is created.

If you are referring to the Lenski experiment, you are flat wrong here.

c) NS is not random, obviously.

Obviously! Glad we agree!

It is a deterministic consequence of the properties of the replicator (replication itself, metabolism, and so on) interacting with environmental properties. The environmental changes are usually random in regard to the replicator functions (because they are in no way aware of the replicators, except in the case of competition with other replicators). Anyway, the environment has no idea of what functions can or should be developed, so it is random in that sense. The environmental changes made by Lenski are not really random (he certainly had some specific idea of the possible implications), but I can accept that they are practically random for our purposes. What we observe in Lenski’s experiment is true RV + NS OR true adaptation. I don’t think we can really distinguish, at present. Anyway, it is not design. And indeed, the result does not have the characteristics of new design.

Whilst I find your prose a bit dense here and somewhat anthropomorphic (awareness in replicators, environments having ideas) I can’t see much to argue with.

d) NS is different from IS (intelligent selection), but only in one sense, and in power: d1) Intelligent selection (IS) is any form of selection where a conscious intelligent designer defines a function, wants to develop it, and arranges the system to that purpose. RV is used to create new arrangements, the desired function is measured with the maximum possible sensitivity, and artificial selection is implemented on the basis of the measured function. Intelligent selection is very powerful and flexible (whatever Petrushka may think). It can select for any measurable function, and develop it in relatively short times. d2) NS is selection based only on the fitness/survival advantage of the replicator. The selected function is one and only one, and it cannot be any other. Moreover, the advantage (or disadvantage, in negative selection) must be big enough to result in true expansion of the mutated clone and in true fixation of the acquired variation. IOWs, NS is not flexible (it selects only for a very tiny subset of possible useful functions) and is not powerful at all (it cannot measure its target function if it is too weak). Those are the differences. And believe me, they are big differences indeed.

I think I have to accuse you of reification here. What is “intelligent selection” with regard to evolutionary processes?

By the way, just to tease Petrushka a little, it is perfectly possible to implement IS that works exactly like NS: we only need to measure reproductive fitness as our desired function. That’s exactly what Lenski did. Lenski’s experiment is, technically, an example of intelligent selection that aims to imitate, as much as possible, NS (which is perfectly fine).

My response would depend on whether and how you can define or identify the process you call “intelligent selection”. I suspect Petrushka can speak for herself!

_________________________________________________________________________

Any interested party is invited to comment. In Lizzie’s absence, I can only approve new commenters in threads I author. I am sure matters will be regularised soon. I hope gpuccio will find time to visit, as unfortunately not many of us are able to comment at Uncommon Descent. Remember Lizzie’s rules and assume others are posting in good faith.

(Added in edit 22nd September 2012)

Gpuccio replies:

To Alan Fox (on TSZ):

Thank you for your kind invitation to take part in The Skeptical Zone. My past experiences in similar contexts have been very satisfying, so I would certainly like to do that.

Unfortunately, I am also aware of how exacting such a task is on my time, and I don’t believe that at present I can do that while still posting here at UD.

So, for the moment I will try to answer the main questions you raise there about my statements here, so that they are also visible to UD commenters, who after all are those to whom I feel more committed. I hope you understand. And anyway, it seems that you guys at TSZ are reading UD quite regularly!

I would just point out to Petrushka that it is not a question of “courage”, but of time: I have already done that (discussing things on a darwinist friendly forum) twice, and my courage has not diminished, I believe, since then.

No problem, gpuccio. I’ll paste any comments from you that I see that are directed at TSZ as I get chance.

88 thoughts on “An Invitation to G Puccio”

  1. It’s always struck me as just a tiny bit circular to argue that dFSCI exists only if we can’t cite a “natural” cause, and there is no possible natural cause because there’s just so doggone much dFSCI.

    That seems to lie at the heart of all ID arguments, including Behe’s.

    How does water find the contour of the pond bottom, anyway? Perhaps the pond bottom was designed so that water could find it.

  2. gpuccio: “So, observing 500 bits, or even less, of dFSCI in a biological system “does the job” perfectly, because there is no known artificial GA working in that system, only the biochemical laws, that have no power to generate those results.”

    There is no “artificial” GA because biology is the “real” GA.

    This is precisely what the debate is about, the “real” GA.

    You have just asserted that “real” GA doesn’t work because it can’t.

    That’s not very scientific.

    Evolution is like a simple electronic oscillator.

    Take random resistors and capacitors and insert them in the feedback loop of the oscillator.

    Some will work and some won’t.

    If you measure the frequencies of those that actually manage to work, you will find frequencies all over the map, none “specified”, but all “functional”.
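    A minimal sketch of that analogy in code (the simple RC relationship f ≈ 1/(2πRC) and the component values are illustrative assumptions, not anything taken from the comment above):

    import math
    import random

    # Draw random resistor/capacitor pairs and compute the rough oscillation
    # frequency f = 1/(2*pi*R*C) for each combination that "works".
    def random_rc_frequencies(n=10, seed=1):
        rng = random.Random(seed)
        freqs = []
        for _ in range(n):
            r = rng.choice([1e3, 4.7e3, 10e3, 47e3, 100e3])  # resistance, ohms
            c = rng.choice([1e-9, 10e-9, 100e-9, 1e-6])      # capacitance, farads
            freqs.append(1.0 / (2 * math.pi * r * c))        # frequency, hertz
        return freqs

    # Every frequency printed is "functional" (the circuit oscillates) without
    # having been "specified" in advance, which is the point of the analogy.
    print(sorted(round(f, 1) for f in random_rc_frequencies()))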


  3. gpuccio, September 25, 2012 at 10:00 am:

    I said (and you quote it): “I just test known possible deterministic explanations.”

    So, I do test. I test known deterministic explanations. Obviously I do not test deterministic mechanisms of which I know nothing. I have tested NS, for example.

    So, that is the good ole argument from ignorance. Intelligent design by default. Why are people here interested in discussing this? 

  4. So, that is the good ole argument from ignorance. Intelligent design by default. Why are people here interested in discussing this?

    It’s rather that we are trying not to prejudge the possibility that there may be an argument for intelligent design. Just because we see only default arguments, we cannot preclude the idea that there is an argument for “intelligent design” that does not follow this pattern.

  5. Gpuccio at UD

    To Alan Fox (at TSZ):

    Because a new strain of E. coli arose by variation that was able to digest citrate. That strain bloomed in the niche provided by Lenski’s flasks. How on Earth can the novel ability to digest citrate not be a new biochemical function?

    From Carl Zimmer’s article, which sums up Lenski’s results:

    “When E. coli finds itself in the absence of oxygen, it switches on a gene called citT. Like other species (including us), E. coli turns genes on and off by attaching proteins to short stretches of DNA nearby. When E. coli senses a lack of oxygen, proteins clamp onto one of these genetic switches near citT. Once they turn the gene on, it produces proteins that get delivered to the surface of the cell. There they poke one end out into the environment and pull in citrate, while also pumping out succinate. After the citrate gets inside the microbe, the bacteria can chop it up to harvest its energy.”

    So, as Joe already pointed out, the citT gene, and the metabolism to digest citrate, were already there, and are not a consequence of Lenski’s experiment. The switching on of the citT gene is the main factor here. What we are seeing is not very different in principle from what happens in the most classical example of gene regulation, the lac operon. So, as I said, there is no new biochemical function, only “a tweaking of the existing regulation of an existing function”. QED.

    I recommend The Loom article to anyone interested in current state-of-play on the Lenski experiment. Note, in the comments, Zachary Blount says:

    Aerobic citrate utilization is a novel trait for E. coli, and it is one that evolved spontaneously in the Ara-3 population that I study. (By this I mean that it did not come from the acquisition of foreign DNA like a plasmid carrying a citrate transporter into the cell line from outside.) Moreover, the actualization stage that produced the qualitative switch to Cit+ did not involve any loss of gene function! As Carl explained so well, actualization involved a duplication mutation that produced two copies of a segment of DNA that is 2933 bp long. These two copies are in a tandem, head-to-tail orientation, and placed a copy of the citT gene under the control of a copy of the promoter element that normally controls when the rnk gene is turned on – this is what we call the “rnk-citT module”. (I know, I know, incredibly catchy!) As rnk is turned on when oxygen is present, the copy of its promoter in the new rnk-citT module likewise turns on citT when oxygen is present. Voila!…


     …Is the material that went into the new element fundamentally new? No. But to deny that the new module is new is like saying that a new word is nothing new merely because it was formulated from pre-existing letters. Now something that I found to be really interesting about this mode of innovation is that it wouldn’t eliminate any pre-existing function. The duplication generates the new rnk-citT module, but there ends up remaining in the genome complete copies of rnk under the control of the rnk promoter, as well as of the cit operon as it existed before the duplication took place. The duplication thus added a new regulatory element while not actually disrupting any pre-existing regulatory modules.

    As I remarked before, the environment designs by selecting from what is available, so it is no surprise that this novel trait is but a short step from, and builds on, material that is already there. But – for heaven’s sake! – anaerobic to aerobic is just tweaking!

  6. Why should we have to defend the “size” of the function created? The simple fact is that the Lenski experiment confirms many of the mechanisms that were heretofore conjectured.

    It started with a series of drift mutations that turned out to be necessary, but which had no observable effect on function or adaptation. This isn’t much commented on, but it destroys any notion that an intelligent selector could have spotted them as incipient adaptations and selected for them. They are poster children for neutral drift theory.

    It validates the theory that new function is often accomplished by gene duplication followed by variation in the duplicate. This has now been observed in real time under controlled conditions.

    By the time the experiment was “concluded”, the number of required mutations had exceeded Behe’s Edge, and all this happened in a few flasks in a laboratory in about a decade.

    It completely validates the ability of random mutation to exhaustively explore the nearby functional landscape in a finite amount of time. And it validates the assertion that useful new adaptations are often within reach of such a “braille” search.

    It totally destroys the ID assertion that new function comes at the expense of losing old function. I hope we never hear that argument again. One consequence of this observation is that there is no barrier to continuing this accretion of function. New function does not have to come at the loss of the old. And there was no evidence of “genetic entropy,” so we can expect to stop hearing about that.

  7. Gpuccio:

    I have redefined nothing. My definition of dFSCI has been the same for, I believe, years, at UD. You can check in the archives, if you want.

    I hardly need to go to the archives. Your usage of dFSCI isn’t even consistent within the current thread at UD.

    First you tell us that dFSCI is a numerical quantity measured in bits:

    a) if the GA is more functionally complex than the output, we can simply consider the dFSCI of the output, 400 bits, and safely infer design for it.

    Then you tell us that dFSCI is a boolean:

    What I measure is FSI, and then I categorize it as a boolean, dFSCI.

    Then you tell us that dFSCI is not a boolean:

    A value of dFSCI is neither true nor false.

    Here (and many times previously) you tell us that dFSCI is a reflection of pure RV:

    a) The concept of dFSCI applies only to the RV part.

    Then you tell us that no, dFSCI also depends on any GA capable of producing the “output”:

    So, the GA is a deterministic way to produce the output (even if it uses some random steps). That does not mean that there is no dFSCI in the output. It just means that the true dFSCI of the output is the functional complexity of the GA.

    How do you expect others to follow your argument if you don’t use your terms consistently? Onlooker is doing you a considerable service by holding you to your earlier statements and asking you to state your argument consistently. You should be grateful, not petulant.

  8. Alan Fox said: It’s rather that we are trying not to prejudge the possibility that there may be an argument for intelligent design. Just because we see only default arguments, we cannot preclude the idea that there is an argument for “intelligent design” that does not follow this pattern.

    While that certainly can be of some interest, – after all, we do need to know what ID/creationists are currently thinking – default arguments that are derived from abject ignorance about the natural world are probably better addressed in a way that actually compares that ignorance with reality.

    For example, that KF character over at UD – as well as the others over there – still can’t figure out how to scale up the ratio of electrical potential energy to gravitational potential energy for objects the size of dice, coins, and letters of the alphabet, with separations on the order of centimeters. He therefore dismisses the calculation as an “atmosphere-poisoning” smear tactic.

    All this mucking around with logarithms of probabilities goes nowhere if one doesn’t understand the interactions among the things for which one is trying to calculate the probabilities of assembly. Even a high school level calculation provides a big hint that something is wrong with ID/creationist assertions; and that kind of hint also provides some knowledge about the real world.
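    For scale – this is just the standard atomic-level benchmark behind that kind of calculation, not KF’s specific dice-and-coins figures – the Coulomb attraction between a proton and an electron exceeds their gravitational attraction by a factor of roughly

    $$\frac{F_{\text{Coulomb}}}{F_{\text{grav}}} = \frac{e^{2}}{4\pi\varepsilon_{0}\,G\,m_{p}m_{e}} \approx 2\times10^{39},$$

    which is why electromagnetic interactions, not gravity or coin-flip combinatorics, dominate the assembly of atoms and molecules.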

    Within the science community, the calculations for atomic and molecular assemblies are already being done on supercomputers using what is already known about the way atoms and molecules interact. And THOSE calculations are NOT the simplistic logarithms of probabilities about non-interacting objects with the label “information” attached to the logarithms in order to give the impression that such naive calculations are “sophisticated” calculations that automatically default to a “designer.”

    ID/creationists start with all the characteristic misconceptions they inherited from “scientific” creationism and then just make up things as they go while ignoring all of physics, chemistry, and biology.

    After something like 40+ years of observing this stuff, I stand by my assertion that the leaders and the followers of ID/creationism do not understand science beyond the eighth grade level if even that. What they possess instead are a bunch of misconceptions that consist of bent and broken science concepts designed to support a narrow set of sectarian beliefs. They got these from Morris and Gish who, in turn, may have learned them from A.E. Wilder-Smith. It is also obvious that people like Ken Ham, a protégé of Henry Morris, direct these misconceptions at children and middle school students; disrupting their science education at or before the eighth grade level.

    But these misconceptions don’t work in the real world. Every “alternative” they try to construct is therefore a straw man argument addressing their own misconceptions; and if they are exposed to real science, they think it is a straw man. That is abject ignorance at its worst.

    Not one of those characters over at UD – not one – can do a calculation that a high school physics or chemistry student can do very easily. Even when someone does it for them, they can’t comprehend its implications because they don’t even know what phenomena actually occur in the real world. So they simply assert “smear tactics” and airily dismiss the hint that such a calculation provides; a simple but serious hint that their own “calculations” are meaningless.

  9. I have always said that, when tested empirically, dFSCI can be shown to have 100% specificity, no false positives.

    Can you understand that? Empirically, not logically.

    So, show me in reality that example, the string and the algorithm. That will be my first empirical false positive.

    So biologists are required to provide the pathetic level of detail, and not just the system capable of generating it? I think you have the burden of proof backwards, GP. Once we have demonstrated a dynamic system capable of accumulating functional changes, the burden is yours to demonstrate the existence of a comparable competing system.

    Astronomers are under no obligation to wait for a full orbit of Pluto to declare that it does orbit. There is no branch of science in which regular processes are not extrapolated.

    I do believe this particular argument qualifies as “ID is the default scenario if every evolutionary gap is not filled.” How else could you possibly ignore the lab evidence that well established evolutionary mechanisms have actually been observed inserting information into the genome?

    This is a dirt simple argument. Once you have demonstrated a regular mechanism capable of inserting information into the genome in a time span consistent with the difference observed between related species, this is the default mechanism until such time as a competing mechanism is demonstrated.

  10. gpuccio: “What can I say to Toronto? Better say nothing. You know the old saying, ‘if you have nothing nice…’”

    That’s too bad because I do have a begrudging respect for the way you handle yourself.

    That doesn’t mean I agree with your arguments in any way.

    One question that has always bothered me, and I think Petrushka too, is this: how does the designer know in advance what semiotic codes to use for yet-uncreated biological functionality?


  11. That bothers me. But I am also left wondering what purpose the designer serves. Evolution always uses minor variations of existing sequences, and Lenski has demonstrated that the entire nearby landscape of sequences can be tried in a short time.

  12. Joe: “How does a programmer know in advance what code to use for a yet uncreated program?”

    Human programmers get the “specified functionality” before we start the design, while your designer doesn’t have a specification before the fact that you can show evidence for.

    There are many possible “codes” that lead to specific functionality, meaning our possible targets aren’t limited to one particular “string of information”.


  13. gpuccio has responded (in comment #596 here) at some length. Basically, gpuccio’s argument is that dFSCI tests whether random variation can produce an adaptation — that it is only computed when it has been ruled out that natural selection can produce such an adaptation:

    “the judgement about dFSCI implies a careful evaluation of available deterministic explanations, as said many times.”

    Let me make three points:

    1. The notion of dFSCI is not the same as Dembski’s CSI. Dembski’s argument involves a scale (the relevant one is fitness, one of the possibilities he mentions), and CSI exists when the organism is so far out on the scale that it is in the top 10^-150 of the original mutational distribution (say, one of equal frequencies of the two alleles at all loci). Dembski then invoked his Law of Conservation of Complex Specified Information (LCCSI) as showing that natural selection (or any other natural process) cannot have moved the system from not showing CSI to showing it. He does not say that first one must have ruled out all possible explanations by random variation and differences of fitness, and that one should only compute the amount of SI after that. In fact he argues that finding that the genotype is that far out on the fitness scale accomplishes that without considering all those explanations. He says this repeatedly in chapter 12 of The Design Revolution. On page 96, for example, he says that

    “If something genuinely exhibits specified complexity, then one can’t explain it in terms of all material mechanisms (not only those that are known but all of them, thanks to the universal probability bound of 1 in 10^150; see chapter ten).”

    So it is very clear that Dembski intends the observation of CSI to establish that there has been Design even when we do not know all natural processes that could have been at work. He bases his argument for that on his conservation law – a law that, as I have argued, has neither been proven nor been shown to do the job.

    2. gpuccio is, by contrast, using dFSCI only after ruling out natural selection (unlike Dembski). gpuccio’s argument thus ends up as equivalent to Michael Behe’s in Darwin’s Black Box: the mutations needed to allow natural selection to begin to operate are too rare, and would not have occurred even once in the history of the Universe. The 500-bit (or 150-bit) criterion is just a way of saying that they are too rare for that, and all the information calculations are only for that purpose.

    3. A technical point: gpuccio describes natural selection (the effect of differences of fitness) as a deterministic process. When I teach theoretical population genetics, I describe it as such and model it deterministically. But that is only possible if one assumes that the population is infinitely large. A purely deterministic model of multiple-site selection requires that all possible genotypes exist initially, albeit at very low frequencies. Of course this cannot happen: for all combinations of a 200-base DNA sequence to exist in a population would require that we have at least 4^200 individuals, more than there are particles in the Universe. So to model multiple-site selection reasonably, one needs a stochastic process model; one must treat mutation as introducing new bases randomly. Such a process can explore nearby genotypes by mutation, and also produce more of them by combining mutations by recombination. If gpuccio wants to model the effect of fitness differences purely deterministically, gpuccio is implicitly assuming that all of the relevant possible sequences are already there.
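    Spelling out the arithmetic in that point, against the usual rough estimate of about 10^80 particles in the observable universe:

    $$4^{200} = 2^{400} \approx 2.6\times10^{120} \gg 10^{80},$$

    so even this modest 200-base example is far beyond any physically possible population.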

    Thus what we have in gpuccio’s argument is a mathematically-more-explicit calculation of rareness of the required mutations in a Behe-style argument, and not a version of Dembski’s entirely different argument. 

  14. When I teach theoretical population genetics, I describe it as such and model it deterministically. But that is only possible if one assumes that the population is infinitely large.

    … and that scaling a population has no impact on the ‘random assortment’ assumption underlying many mathematical treatments. Essentially, the population is treated almost like an ‘ideal gas’, with no dimension or locality for the entities of which it is composed. Even infinite populations on a geometric surface would suffer local stochastic fluctuation. Further sources of stochasticity arise when we consider inevitable variation in selective advantage with time and location. It isn’t only sampling error that creeps in when real populations are considered and we depart from ‘large numbers’ expectations. ‘Real’ populations probe accessible ‘function’ from a position of current ‘function’, which is a different kind of algorithm.

    GP:

    So, darwinists, including you, have the duty to model some real life example of, say, a basic protein domain, and explain how it emerged at some time in natural history, and what RV did, and what NS did. And they can’t. For the simple reason that RV and NS cannot do it.

    No – for the even simpler reason that NS/Drift is by its very nature an eliminatory process! Unless steps were ‘frozen’ (a la Lenski) by speciation and subsequent lineage survival, and the signal in them has not become overly scrambled, we are stuck with the fact that the very essence of the evolutionary process is erasure of history! Selective advantage is only with respect to current alleles in the population. Once fixation has occurred, the ‘true’ advantage cannot be recovered. That unavoidable obscuration does not justify the assertion “RV and NS cannot do it.”
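    A minimal sketch of the stochastic treatment both of the above comments point to – Wright-Fisher binomial sampling with selection; all parameter values here are illustrative, not taken from Lenski or anyone else:

    import random

    def wright_fisher(pop_size=1000, s=0.02, p0=0.001, generations=2000, seed=0):
        """Frequency of one beneficial allele under selection plus drift.

        Each generation the next population is resampled from the
        selection-weighted frequency, so rare alleles in finite populations
        behave stochastically rather than deterministically.
        """
        rng = random.Random(seed)
        p = p0
        for _ in range(generations):
            if p in (0.0, 1.0):                                          # lost or fixed
                return p
            p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))                # selection step
            copies = sum(rng.random() < p_sel for _ in range(pop_size))  # drift step
            p = copies / pop_size
        return p

    # Starting from a single copy, most runs lose the beneficial allele to
    # drift; only a minority ever fix it, roughly in line with the classical
    # 2s fixation probability.
    fixed = sum(wright_fisher(p0=1 / 1000, seed=i) == 1.0 for i in range(200))
    print(f"{fixed} of 200 runs fixed the beneficial allele")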

  15. GP’s position is even more ridiculous than the no intermediate fossil argument. GP requires living intermediates.

  16. gpuccio: “It’s the things you say that leave me amazed. Maybe cognitive incompatibility? “

    It’s the things that IDists can’t say that amaze me!

    I see our side using our “cognitive” abilities searching for any and all truths, but to a man, every single IDist refuses to test the most profound mystery.

    Why?

    Why can you not criticize the very mechanism you claim is responsible for life as we see it, to see if design as a theory holds up?

    You will test “Darwinism”, but you won’t scientifically test the designer to see if he’s capable of what you claim.


  17. Mung: “And some programmers go straight to programming and let the design evolve as needed.”

    This is the problem gpuccio faces when people like Mung help the anti-ID side by giving examples of “design via unspecified evolution”.

    A programmer, without pre-specified direction, can achieve acceptable functionality, simply by feedback from his testing.

    Thanks Mung.

  18. gpuccio,

    So, in clear English, dFSCI is a property objectively observable in objects. Now, listen with attention, please:

    The connection between dFSCI and designed objects is purely empirical. We observe that all objects that exhibit dFSCI, and of which we can know the origin, are designed objects. No object of which we can know the origin, and which exhibits dFSCI, has a non-designed origin.

    That’s because you define dFSCI such that it does not exist if a deterministic cause is known:

    No. It’s because dFSCI is present in designed objects and in nothing else. Empirical fact.

    There is nothing empirical about it. Here, again, are your definitions:

    Functional Complexity: The ratio of the number of digital strings that encode a particular function to the number of digital strings of the same length.

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    It is not possible by your definitions for dFSCI to exist if a deterministic cause is known. You explicitly define it that way.
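    On the usual reading of those definitions – this is a paraphrase of how the numbers are being used in this thread, not a quotation – the bit value comes from taking the negative log of that ratio:

    $$\text{FSI} = -\log_{2}\!\left(\frac{\#\{\text{strings of the given length that encode the function}\}}{\#\{\text{all strings of that length}\}}\right),$$

    and dFSCI is then declared present exactly when FSI exceeds 150 bits and no deterministic explanation is known.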

    You do no testing. You assert that the value of dFSCI is “true” when you don’t know of a deterministic mechanism. This is inherent in your definition:

    Have you problems with the English language? I said (and you quote it): “I just test known possible deterministic explanations.” So, I do test. I test known deterministic explanations. Obviously I do not test deterministic mechanisms of which I know nothing. I have tested NS, for example.

    You have tested natural selection and proven that it could not possibly demonstrate functional complexity greater than that required to meet your threshold that indicates dFSCI? That’s quite an accomplishment. Please provide a reference to your work.

    All you are identifying is your ignorance about an artifact’s provenance.

    Is that some form of catechism?

    It’s a direct consequence of your definitions. Consider an artifact described by a digital string with functional complexity of greater than 150 bits. To determine if this artifact exhibits dFSCI we need to apply the two criteria of your definition:

    1) Is the functional complexity greater than 150 bits?

    2) Is a deterministic explanation for the artifact known?

    Look at that second criterion carefully. All that is required for a determination that dFSCI exists is an assessment of our knowledge of an explanation. There isn’t any positive evidence for design, merely a lack of knowledge of a deterministic mechanism. dFSCI just means “I don’t know the provenance of this artifact”, by your own definitions.
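    A minimal sketch of that decision procedure as the two criteria describe it (the function and variable names, and the numbers in the example, are mine for illustration – nothing here is gpuccio’s own code):

    import math

    def fsi_bits(functional_count, total_count):
        """Functional information in bits: -log2(#functional / #total)."""
        return -math.log2(functional_count / total_count)

    def exhibits_dfsci(functional_count, total_count, deterministic_explanation_known):
        """The two criteria, written out.  Note that the second test is a
        statement about our current knowledge, not about the artifact."""
        return (fsi_bits(functional_count, total_count) > 150
                and not deterministic_explanation_known)

    # Hypothetical numbers: the same 180-bit string flips from "dFSCI" to
    # "not dFSCI" the moment an explanation becomes known, with no change
    # to the string itself.
    print(exhibits_dfsci(2**20, 2**200, deterministic_explanation_known=False))  # True
    print(exhibits_dfsci(2**20, 2**200, deterministic_explanation_known=True))   # False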


  19. gpuccio,

    A value of dFSCI is neither true nor false.

    Yes, it is, by your definition:

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    Are you changing your definition mid-conversation?

    I think I understand what you are saying, but could you please answer the questions directly? Assuming for the sake of this example that the string does have the minimal amount of functional complexity required, does the string generated by the human have a dFSCI value of “true”? Does the string generated by the GA have a dFSCI value of “false”?

    No. Assuming for the sake of example something that cannot be, let’s say that the string in itself has 180 bits of FC, and the algorithm that generates it only 100 bits.

    The situation is simple: how the string was generated is of no importance to the calculation.

    a) If I don’t know that the algorithm exists, I will evaluate the dFSCI at 180 bits, and make a design inference.

    b) If I know that the algorithm exists, I will evaluate the dFSCI at 100 bits, and make no design inference.

    I’m still not understanding your criteria for determining whether or not dFSCI is present. Let me restate my questions.

    First, assuming for the sake of this discussion that a string encoding a solution to Lizzie’s problem as a series of 1s and 0s has functional complexity of more than 150 bits, consider the case where a human being sits down, thinks about the problem for a few moments, uses his knowledge of math and his creativity, and generates the solution string. Does this string exhibit dFSCI? If not, why not?

    Second, assuming for the sake of this discussion that a string encoding a solution to Lizzie’s problem as a series of 1s and 0s has functional complexity of more than 150 bits, consider the case where a GA is configured with a fitness function that ranks strings according to the “product of head run lengths” criterion and allowed to run until a solution is found. Does this string exhibit dFSCI? Because you have stated that you consider the GA model of evolutionary mechanisms to be deterministic, my understanding is that the answer here is “No.”


  20. I don’t agree. We find a lot of information in genomes and proteomes about what happened in distant times. It seems strange that just the thousands of functional, expanded intermediates for protein domains did not happen to leave any trace! That is just an easy ad hoc excuse for a theory that has no scientific validity.

    The entire structure of GP’s argument rests on a god of the gaps foundation. With a side order of incredulity.

    I thought that the point of the UD thread is that ID is not the default position. But what GP is doing is arguing that since living intermediates of protein domains are missing, and there are no fossil domains, they never existed. This is the history of science vs creationism in a nutshell. This line of debate has characterized every branch of science from astronomy to geology. The fact that it is taking place in biology simply reflects the relative newness of genomics as a science.

    And of course there is interest in reconstructing the evolutionary history of proteins. It’s hard, so GP isn’t interested. Heaven forbid that an ID advocate take an interest in actually doing research.

    http://pages.uoregon.edu/joet/PDF/dean_thornton-NRG2007.pdf

    The most devastating critique of ID is not that it is wrong, but that it is useless for formulating research proposals. When it does research, it is carefully targeted to fail. As in the work of Axe.

    It is remarkably easy to design research to give negative results. Somewhat harder to do what Thornton is doing.

    I find it amusing that GP and other IDists find the actions of a non-material, non-existent entity more compelling than the extrapolation of a demonstrated mechanism. But just because they ignore proven mechanisms in favor of a completely imaginary non-mechanism doesn’t mean that ID is a default position where biology has gaps in its account of history.

  21. onlooker writes:

    gpuccio,

    A value of dFSCI is neither true nor false.

    Yes, it is, by your definition:

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    Are you changing your definition mid-conversation?

    I brought this to gpuccio’s attention here, along with several other contradictory statements he’s made about dFSCI. His response? That by pointing out his contradictions, I wasn’t reading him “intelligently” and “in good faith”.

    Evidently, if gpuccio contradicts himself, that’s fine. It’s your responsibility as his reader to fix all of his mistakes for him. And to assert that he contradicts himself just because he, well, contradicts himself? That’s not reading “in good faith”.

  22. Migration between folds is much more probable than previously believed, and certain folds may even be intersections between multiple folds. Many of the folds that scientists have been able to “switch”, I suspect, are connected to yet other folds via short mutational paths. This is an active area of research these days.

    http://www.ncbi.nlm.nih.gov/pubmed/20591649

    Evolution is often underestimated.

  23. This is exactly what happened when I discussed this with gpuccio, in my Mathgrrl persona, on Mark Frank’s blog. That discussion spanned four separate threads during which gpuccio kept redefining his terms while refusing to admit that he was doing so. I finally gave up in frustration.

    While he might be one of the more polite UD regulars, his lack of intellectual honesty is as bad as any of the others.

  24. Joe: “Umm it can be measured in bits and still be Boolean. How do you think we know if it is true/present or not? By counting the number of bits.”

    But then;

    Joe: “The VALUE, as in the number of bits, i.e. it is what it is. A car is neither true nor false.”

    So Joe, why must a value exceeding 1 be considered boolean in one case but not another?

    Car = 2000;

    dFSCI = 3;

    if( Car || dFSCI ) { f(x);} else { f(y);}


  25. It seems strange that just the thousands of functional, expanded intermediates for protein domains did not happen to leave any trace! 

    I struggle to see why that would be remotely strange. If a series of amendments have successive selective advantages of sufficient magnitude to fix them, what on earth is going to happen to all the ‘intermediates’? Of course, there are techniques for probing in the darkness – even behind LUCA, since the genome itself can be analysed phylogenetically, thanks to gene duplication events. But there are limits, inevitably – this is not simply some made-up excuse, but a fact of nature.

    The issue boils down to coalescence. The further back you go, the fewer individuals from that time leave descendants, in a finite world. Therefore, greater and greater proportions of surviving genomes derive from fewer and fewer ancestors as you go back. You end up sampling individuals – not even populations. So the loss of the history of the other individuals is unsurprising and predictable. You are looking askance at the absence of a mutational history of individuals, and the selective history of particular populations, a few hundred million years on? It’s pretty remarkable we have anything to go on. New domains will arise in individuals, and typically either end up in all descendants, or be lost without trace. The losers and ancestral states are not generally recorded, anywhere.

    That is just an easy ad hoc excuse for a theory that has no scientific validity.

    Is it heck! It is an incontrovertible fact. Pooh-poohing this fact, which can even be given a mathematical proof – the same basic iterative sampling that drives neutral drift, the evolutionary ‘baseline’ – as “an easy ad hoc excuse for a theory that has no scientific validity” is just posturing. It is akin to dismissing linguistics if it fails to establish the order in which the eye, the eagle-headed thingy and the person in the classic “walk like an Egyptian” pose became established in hieroglyphics, or what they replaced, or how they were pronounced. Must they therefore have been devised in the Tower of Babel?

    We can only work with the material we have, plus models. It would be great if we had more, but the limitations and absences do not force ad hoc attribution to spirit causes. 

  26. Joe: “The CONTEXT is all important. Ya see, in order to know if you have dFSCI you need to get a value, i.e. the number.”

    I agree and wish gpuccio would stick to one use of dFSCI.

    What it really implies is :

    if( dFSCI > UPB ){}

    But gpuccio doesn’t always use the term in that sense and it makes understanding difficult.

    If dFSCI was *always* a value that must be tested against a known boundary condition, we could start talking about how to calculate it, but by saying something *has* dFSCI in the next sentence, you’ve implicitly converted dFSCI from a value to a boolean.

    Using one term in two different ways is not helpful when promoting an idea.

    gpuccio’s own statements don’t clarify his use or why he insists on one term to represent two different concepts.


  27. gpuccio,

    To onlooker (at TSZ):

    Already answered: post #488 and post #629.

    I hadn’t seen your 629, but after reading it I do not find that you have addressed the issues I’ve raised in either of those comments.  Your dFSCI is still clearly an indicator of our knowledge (or, conversely, ignorance) of the provenance of an artifact, not of the involvement of an intelligent agent, by your own definitions.

    You have also still failed to directly answer my question about whether or not dFSCI is present in a human generated solution to Lizzie’s head-tail sequence problem.  Here it is again, pared down to its essence:

    Assume that a string encoding a solution to Lizzie’s problem has functional complexity of more than 150 bits.  If a human uses his knowledge of math and his creativity to generate the solution string, does this string exhibit dFSCI?  If not, why not?

    Thank you in advance for a clear and unambiguous response.

  28. I believe that gpuccio’s most recent reply to me (at the UD thread it is comment #656) agreed that his argument is not directly related to William Dembski’s argument; it is a Michael-Behe-style argument in spite of the name dFSCI.

    Furthermore, in computing gpuccio’s dFSCI you have to first rule out that it could have gotten into the genome by any natural cause, including differences of fitness. So you are supposed to rule out that Elizabeth’s string was produced by a human before you call it dFSCI. That answers your question: it does not exhibit dFSCI. It also makes the dFSCI argument uninteresting, as natural (or human) causation is already ruled out.

    In Dembski’s argument the amount of SI is computed without yet having eliminated all natural causes. But that is not what gpuccio is doing.  

  29. Joe,

    Furthermore, in computing gpuccio’s dFSCI you have to first rule out that it could have gotten into the genome by any natural cause, including differences of fitness. So you are supposed to rule out that Elizabeth’s string was produced by a human before you call it dFSCI. That answers your question: it does not exhibit dFSCI. It also makes the dFSCI argument uninteresting, as natural (or human) causation is already ruled out.

    To gpuccio, human intelligence is not a ‘natural cause’:

    And science has found no way to explain the emergence of dFSCI in a “naturalistic” (that is, consciousness-independent) way.

    And he clearly thinks that humans can produce dFSCI:

    …from human design and from the properties of human artifacts, including language and software, wonderful examples of extremely abundant dFSCI.

  30. I stand by my analysis. To call it dFSCI you have to rule out “natural” causes, and you can rule out natural causes because there’s just so doggone much dFSCI. He makes this explicit in his discussion of the origin of protein domains. He takes the absence of living cousins as proof of poof. No living cousins means none ever existed.

  31. To gpuccio, human intelligence is not a ‘natural cause’:

    Well yes. According to GP, any five year old is capable of seeing that the brain is not the seat of human consciousness.

    I’m not sure where the consciousness of crows resides.

    But GP’s commitment to magical thinking is at least self-consistent and seamless and thoroughly dualistic. Once you understand that he is a metaphysical dualist who believes in a non-physical world that interacts continuously with the physical, his views are understandable. Most of his compatriots at UD are also dualists.

  32. Joe: “Umm it can be measured in bits and still be Boolean. How do you think we know if it is true/present or not? By counting the number of bits.”

    It’s dFSCI only if it’s not the result of natural causes, and we know it’s not the result of natural causes because there are so many bits. Any five year old can see that.

  33. Yes, I was kind-of aware that most UD types don’t think humans are natural, so human causation counts as non-natural in their arguments.

    That makes it hard to study models of natural systems, since the models are necessarily constructed by the researcher and that means that the models are declared by the ID types to be irrelevant because they are not natural processes. Even if the models consist of nonintelligent processes such as random mutation and Brownian motion.

  34. This insistence on using coin flips to compute whatever this “dFSI” is supposed to be makes no sense if one is presuming to extend any of this to atomic and molecular assemblies in the real world. If dFSI is supposed to be the logarithm of the ratio of the “target space” to the “sample space”, how does one determine what a “target space” and a “sample space” are for real atomic and molecular systems?

    For example; what are the sample spaces and the target spaces for the formation of H2O and H2O2 when equal numbers of hydrogen atoms and oxygen atoms are brought together? How does any of this account for temperature and concentration?

    In order to take a logarithm of the ratios of those “spaces,” one has to know implicitly the probabilities of formation of those two compounds at various temperatures and concentrations. Where does one get those probabilities? One cannot extrapolate from the sample spaces and target spaces of non-interacting objects such as coins or dice where all one has to do is count the sizes of these spaces using permutations and combinations.
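    A hedged aside on where such probabilities actually come from in chemistry: for reacting species, relative abundances at equilibrium follow the standard thermodynamic relation

    $$K_{\text{eq}} = e^{-\Delta G^{\circ}/RT},$$

    which depends on free energies, temperature and concentrations – not on counting arrangements of non-interacting tokens.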

    Ignoring the warnings of simple high school physics and chemistry calculations is not going to make these kinds of calculations meaningful in any kind of atomic/molecular system, even when such systems are as simple as water/hydrogen peroxide systems. How can such calculations have any meaning whatsoever when applied to more complicated systems such as large organic molecules immersed in larger systems at a constant temperature? How can they have any meaning when one is dealing with systems forming in energy cascades under non-equilibrium conditions?

    There is a huge difference between phenomenological calculations using empirical data from experiments and ab initio calculations using accurate, detailed knowledge of the physical processes actually involved in the interactions of atoms and molecules.

    The reason ab initio calculations are difficult is the rapidly emerging phenomena that occur when systems start becoming complex as a result of the interactions of their constituents. But ab initio calculations are extremely important for checking the detailed consequences of physical models; and accurate predictions help confirm the details of our understanding.

    Naive calculations using logarithms of the ratios of target spaces to sample spaces tell us nothing significant when we ignore the physics and chemistry. Simply calling these logarithms “information” of some sort is deceptive. It gives the illusion of having knowledge one does not have; and this illusory knowledge is the essence of the “theories” of people like Dembski and Abel.

    Those airy dismissals of the dramatic warnings from simple high school physics and chemistry calculations simply compound the ignorance being showcased by these attempts to replace science with ID pseudo-science. And those simple calculations don’t even include the further complications added by the redistribution of charge and quantum mechanical rules when atoms and molecules interact with each other.

  35. Joe Felsenstein said: That makes it hard to study models of natural systems, since the models are necessarily constructed by the researcher and that means that the models are declared by the ID types to be irrelevant because they are not natural processes. Even if the models consist of nonintelligent processes such as random mutation and Brownian motion.

    It is now beginning to appear that they think all of chemistry and physics are irrelevant as well.

    Apparently they are asserting that only models constructed by ID/creationists are relevant. That would certainly explain why they stopped learning science before completing middle school. Even Ken Ham knows to get the kids when they are young.

  36. Joe Felsenstein and gpuccio,

    Furthermore, in computing gpuccio’s dFSCI you have to first rule out that it could have gotten into the genome by any natural cause, including differences of fitness. So you are supposed to rule out that Elizabeth’s string was produced by a human before you call it dFSCI. That answers your question: it does not exhibit dFSCI.

    Interesting, I read it exactly the opposite way, based on this from the material quoted by Alan:

    8b) What if the operator inputted the string directly?

    A: Then the string is designed by definition (a conscious intelligent being produced it). If we inferred design, our inference is a true positive. If we did not infer design, our inference is a false negative.

    gpuccio, could you please directly answer the question I posed? Here it is again:

    Assume that a string encoding a solution to Lizzie’s problem has functional complexity of more than 150 bits. If a human uses his knowledge of math and his creativity to generate the solution string, does this string exhibit dFSCI? If not, why not?
