Gpuccio’s Theory of Intelligent Design

Gpuccio has made a series of comments at Uncommon Descent and I thought they could form the basis of an opening post. The comments following were copied and pasted from Gpuccio’s comments starting here


To onlooker and to all those who have followed this discussion:

I will try to express again the procedure to evaluate dFSCI and infer design, referring specifically to Lizzie’s “experiment”. I will try also to clarify, while I do that, some side aspects that are probably not obvious to all.

Moreover, I will do that a step at a time, in as many posts as necessary.

So, let’s start with Lizzie’s “experiment”:

Creating CSI with NS
Posted on March 14, 2012 by Elizabeth
Imagine a coin-tossing game. On each turn, players toss a fair coin 500 times. As they do so, they record all runs of heads, so that if they toss H T T H H H T H T T H H H H T T T, they will record: 1, 3, 1, 4, representing the number of heads in each run.

At the end of each round, each player computes the product of their runs-of-heads. The person with the highest product wins.
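The scoring rule can be sketched in a few lines (a reconstruction in Python rather than Elizabeth’s MatLab; the function names are mine):

```python
def runs_of_heads(series):
    """Return the lengths of each maximal run of heads (1s) in the series."""
    runs, current = [], 0
    for toss in series:
        if toss == 1:          # 1 = heads, 0 = tails
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return runs

def product_of_runs(series):
    """The score: the product of the runs-of-heads lengths."""
    product = 1
    for r in runs_of_heads(series):
        product *= r
    return product

# The example from the text: H T T H H H T H T T H H H H T T T -> runs 1, 3, 1, 4
example = [1,0,0,1,1,1,0,1,0,0,1,1,1,1,0,0,0]
assert runs_of_heads(example) == [1, 3, 1, 4]
assert product_of_runs(example) == 12
```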

In addition, there is a House jackpot. Any person whose product exceeds 10^60 wins the House jackpot.

There are 2^500 possible runs of coin-tosses. However, I’m not sure exactly how many of that vast number of possible series would give a product exceeding 10^60. However, if some bright mathematician can work it out for me, we can work out whether a series whose product exceeds 10^60 has CSI. My ballpark estimate says it has.

That means, clearly, that if we randomly generate many series of 500 coin-tosses, it is exceedingly unlikely, in the history of the universe, that we will get a product that exceeds 10^60.

However, starting with a randomly generated population of, say 100 series, I propose to subject them to random point mutations and natural selection, whereby I will cull the 50 series with the lowest products, and produce “offspring”, with random point mutations from each of the survivors, and repeat this over many generations.
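The procedure just described can be sketched as follows (again a Python reconstruction, not Elizabeth’s actual script; the population of 100 and the cull of 50 come from the text, while one point mutation per offspring and 200 generations are assumptions):

```python
import random

random.seed(0)                                # for reproducibility
POP, KEEP, LENGTH = 100, 50, 500              # sizes from the text
MUTATIONS_PER_OFFSPRING = 1                   # assumed; the post gives no rate

def product_of_runs(series):
    # Fitness: product of the lengths of runs of heads (1s)
    product, run = 1, 0
    for toss in series + [0]:                 # sentinel tail flushes the last run
        if toss:
            run += 1
        elif run:
            product *= run
            run = 0
    return product

def mutate(series):
    child = series[:]
    for _ in range(MUTATIONS_PER_OFFSPRING):
        child[random.randrange(LENGTH)] ^= 1  # random point mutation
    return child

def next_generation(population):
    # Cull the 50 series with the lowest products; each survivor leaves
    # one mutated offspring, restoring the population to 100.
    survivors = sorted(population, key=product_of_runs)[KEEP:]
    return survivors + [mutate(s) for s in survivors]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(200):
    population = next_generation(population)
best = max(product_of_runs(s) for s in population)
```

Since survivors are carried over unchanged, the best product never decreases from one generation to the next.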

I’ve already reliably got to products exceeding 10^58, but it’s possible that I may have got stuck in a local maximum.

However, before I go further: would an ID proponent like to tell me whether, if I succeed in hitting the jackpot, I have satisfactorily refuted Dembski’s case? And would a mathematician like to check the jackpot?

I’ve done it in MatLab, and will post the script below. Sorry I don’t speak anything more geek-friendly than MatLab (well, a little Java, but MatLab is way easier for this)


Now, some premises:

a) dFSI is a very clear concept, but it can be expressed in two different ways: as a numeric value (the ratio between target space and search space, expressed in bits a la Shannon); let’s call that simply dFSI; or as a categorical value (present or absent), derived by comparing the value obtained that way with some predefined threshold; let’s call that simply dFSCI. I will be specially careful to use the correct acronyms in the following discussion, to avoid confusion.

b) To be able to discuss Lizzie’s example, let’s suppose that we know the ratio of the target space to the search space in this case, and let’s say that the ratio is 2^-180, and therefore the functional complexity for the string as it is would be 180 bits.

c) Let’s say that an algorithm exists that can compute a string whose product exceeds 10^60 in a reasonable time.

If these premises are clear, we can go on.

Now, a very important point. To go on with a realistic process of design inference based on the concept of functionally specified information, we need a few things clearly defined in any particular example:

1) The System

This is very important. We must clearly define the system for which we are making the evaluation. There are different kinds of systems. The whole universe. Our planet. A lab flask. They are different, and we must tailor our reasoning to the system we are considering.

For Lizzie’s experiment, I propose to define the system as a computer or informational system of any kind that can produce random 500-bit strings at a certain rate. For the experiment to be valid to test a design inference, some further properties are needed:

1a) The starting system must be completely “blind” to the specific experiment we will make. IOWs, we must be sure that no added information is present in the system in relation to the specific experiment. That is easily realized by having the system assembled by someone who does not know what kind of experiment we are going to make. IOWs, the programmer of the informational system just needs to know that we need random 500-bit strings, but he must be completely blind to why we need them. So, we are sure that the system generates truly random outputs.

1b) Obviously, an operator must be able to interact with the system, and must be able to do two different things:

– To input his personal solution, derived from his personal intelligent computations, so that it appears to us observers exactly like any other string randomly generated by the system.

– To input in the system any string that works as an executable program, whose existence will not be known to us observers.


2) The Time Span:

That is very important too. There are different Time Spans in different contexts. The whole life of the universe. The life of our planet. The years in Lenski’s experiment.

I will define the Time Span very simply, as the time from Time 0, which is when the System comes into existence, to Time X, which is the time at which we observe for the first time the candidate designed object.

For Lizzie’s experiment, it is the time from Time 0 when the specific informational system is assembled, or started, to time X, when it outputs a valid solution. Let’s say, for instance, that it is 10 days.


3) The specified function

That is easy. It can be any function objectively defined, and objectively assessable in a digital string. For Lizzie’s experiment, the specified function will be:

Any string of 500 bits where the product calculated as described exceeds 10^60


4) The target space / search space ratio, expressed in bits a la Shannon. Here, the search space is 500 bits. I have no idea how big the target space is, and apparently neither does Elizabeth. But we both have faith that a good mathematician can compute it. In the meantime, I am assuming, just for discussion, that the target space is 320 bits big, so that the ratio is 180 bits, as proposed in the premises.

Be careful: this is not yet the final dFSI for the observed string, but it is a first evaluation of its upper bound. Indeed, a purely random System can generate such a specified string with a probability of 1:2^180. Other considerations can certainly lower that value, but not increase it. IOWs, a string with that specification cannot have more than 180 bits of functional complexity.
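The arithmetic behind the 180-bit figure is just a logarithm of the assumed target/search-space ratio (the 320-bit target space is the stipulated figure from premise b), not a computed one):

```python
import math

search_space_bits = 500   # 2^500 possible 500-bit strings
target_space_bits = 320   # stipulated size of the target space (premise b)

# dFSI a la Shannon: -log2 of the target/search-space ratio
ratio = 2.0 ** (target_space_bits - search_space_bits)   # 2^-180
dFSI_upper_bound = -math.log2(ratio)
assert dFSI_upper_bound == 180.0
```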


5) The Observed Object, candidate for a design inference

We must observe, in the System, an Object at time X that was not present, at least in its present arrangement, at time 0.

The Observed Object must comply with the Specified Function. In our experiment, it will be a string with the defined property, that is outputted by the System at time X.

Therefore, we have already assessed that the Observed Object is specified for the function we defined.


6) The Appropriate Threshold

That is necessary to transform our numeric measure of dFSI into a categorical value (present / absent) of dFSCI.

In what sense does the threshold have to be “appropriate”? That will be clear if we consider the purpose of dFSCI, which is to reject the null hypothesis of a random generation of the Observed Object in the System.

As a preliminary, we have to evaluate the Probabilistic Resources of the system, which can be easily defined as the number of random states generated by the System in the Time Span. So, if our System generates 10^20 random strings per day, in 10 days it will generate 10^21 random strings, that is about 70 bits.
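The probabilistic-resources calculation can be checked directly (the 10^20 strings per day and the 10-day Time Span are the figures assumed in the text):

```python
import math

strings_per_day = 10 ** 20
days = 10
total_states = strings_per_day * days      # 10^21 random strings in the Time Span

# Probabilistic resources expressed in bits: log2 of the number of states
resources_bits = math.log2(total_states)
assert abs(resources_bits - 69.76) < 0.01  # about 70 bits, as stated
```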

The Threshold, to be appropriate, must be many orders of magnitude higher than the probabilistic resources of the System, so that the null hypothesis may be safely rejected. In this particular case, let’s go on with a threshold of 150 bits, certainly too big, just to be on the safe side.

7) The evaluation of known deterministic explanations

That is where most people (on the other side, at TSZ) seem to become “confused”.

First of all, let’s clarify that we have the duty to evaluate any possible deterministic mechanism that is known or proposed.

As a first hypothesis, let’s consider the case in which the mechanism is part of the System, from the start. IOWs the mechanism must be in the System at time 0. If it comes into existence after that time because of the deterministic evolution of the system itself, then we can treat the whole process as a deterministic mechanism present in the System at time 0, and nothing changes.

I will treat separately the case where the mechanism appears in the system as a random result in the System itself.

Now, first of all, have we any reason here to think that a deterministic explanation of the Observed Object can exist? Yes, we have indeed, because the very nature of the specified function is mathematical and algorithmic (the product of the sequences of heads must exceed 10^60). That is exactly the kind of result that can usually be obtained by a deterministic computation.

But, as we said, our System at time 0 was completely blind to the specific problem and definition posed by Lizzie. Therefore, we can be safely certain that the system in itself contains no special algorithm to compute that specific solution. Arguing that the solution could be generated by the basic laws of physics is not a valid alternative (I know, some darwinist at TSZ will probably argue exactly that, but out of respect for my intelligence I will not discuss that possibility).

So, we can more than reasonably exclude a deterministic explanation of that kind for our Observed Object in our System.

7) The evaluation of known deterministic explanations (part two)

But there is another possibility that we have the duty to evaluate. What if a very simple algorithm arose in the System by random variation? What if that very simple algorithm can output the correct solution deterministically?

That is a possibility, although a very unlikely one. So, let’s consider it.

First of all, let’s find some real algorithm that can compute a solution in reasonable time (let’s say less than the Time Span).

I don’t know if such an algorithm exists. In my premise c) at post #682 I assumed that it exists. Therefore, let’s imagine that we have the algorithm, and that we have done our best to ensure that it is the simplest algorithm that can do the job (it is not important to prove that mathematically: it’s enough that it is the best result of the work of all our mathematician friends or enemies; IOWs, the best empirically known algorithm at present).

Now we have the algorithm, and the algorithm must obviously be in the form of a string of bits that, if present in the System, will compute the solution. IOWs, it must be the string corresponding to an executable program appropriate for the System, and that does the job.

We can obviously compute the dFSI for that string. Why do we do that?

It’s simple. We have now two different scenarios where the Observed Object could have been generated by RV:

7a) The Observed Object was generated by the random variation in the System directly.

7b) The Observed Object was computed deterministically by the algorithm, which was generated by the random variation in the System.

We have no idea of which of the two is true, just as we have no idea if the string was designed. But we can compute probabilities.

So, we compute the dFSI of the algorithm string. Now there are two possibilities:

– The dFSI for the algorithm string is higher than the tentative dFSI we already computed for the solution string (higher than 180 bits). That is by far the most likely scenario, probably the only possible one. In this case, the tentative value of dFSI for the solution string, 180 bits, is also the final dFSI for it. As our threshold is 150 bits, we infer design for the string.

– The dFSI for the algorithm string is lower than the tentative dFSI we already computed for the solution string (lower than 180 bits). There are again two possibilities. If it is however higher than 150 bits, we infer design just the same. If it is lower than 150 bits, we state that it is not possible to infer design for the solution string.

Why? Because a purely random pathway exists (through the random generation of the algorithm) that leads deterministically to the generation of the solution string, with a total probability for the whole process that is higher than our threshold probability (IOWs, fewer than 150 bits).
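As I read sections 7a and 7b, the decision rule reduces to comparing the smaller of the two dFSI values against the threshold. A minimal sketch (the function name and signature are mine, not gpuccio’s):

```python
def infer_design(dfsi_solution, dfsi_algorithm, threshold=150):
    # The effective dFSI of the solution string is capped by the cheapest
    # random route to it: direct random generation of the string itself, or
    # random generation of an algorithm that then computes it deterministically.
    effective_dfsi = min(dfsi_solution, dfsi_algorithm)
    return effective_dfsi > threshold

assert infer_design(180, 500) is True      # algorithm more complex: infer design
assert infer_design(180, 160) is True      # simpler algorithm, still above threshold
assert infer_design(180, 120) is False     # cheap algorithm exists: no inference
```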


8) Final considerations

So, some simple answers to possible questions:

8a) Was the string designed?

A: We infer design for it, or we infer it not. In science, we never know the final truth.

8b) What if the operator inputted the string directly?

A: Then the string is designed by definition (a conscious intelligent being produced it). If we inferred design, our inference is a true positive. If we did not infer design, our inference is a false negative.

8c) What if the operator inputted the algorithm string, and not the solution string?

A: Nothing changes. The string is still designed, because it is the result of the input of a conscious intelligent operator, although an indirect input. Again, if we inferred design, our inference is a true positive. If we did not infer design, our inference is a false negative. IOWs, our inference is completely independent from how the designer designed the string (directly or indirectly).

8d) What if we do not realize that an algorithm exists, and the algorithm exists and is less complex than the string, and less complex than the threshold?

A: As already said, we would infer design, at least until we are made aware of the existence of such an algorithm. If the string really originated randomly through a random emergence of the algorithm, that would be a false positive.

But, for that to really happen, many things must become true, and not only “possible”:

a) We must not recognize the obvious algorithmic nature of that particular specified function.

b) An algorithm must really exist that computes the solution and that, when expressed as an executable program for the System, has a complexity lower than 150 bits.

I am absolutely confident that such a scenario can never be real, and so I believe that our empirical specificity of 100% will always be confirmed.

Anyways, the moment that anyone shows the algorithm with those properties, the design inference for that Object is falsified, and we have to assert that we cannot infer design for it. This new assertion can be either a false negative or a true negative, depending on whether the solution string was really designed (directly or indirectly) or not (randomly generated).

That’s all, for the moment.

AF adds “This was done in haste. Any comments regarding errors and omissions will be appreciated.”




263 thoughts on “Gpuccio’s Theory of Intelligent Design”

  1. I only require that you guys admit that the only reason why it seems so implausible to you is that you are already committed to a materialistic, reductionist view of reality.

    Oh, this annoys the hell out of me! I am interested in what is. If what is includes a God, brilliant. If not, equally brilliant. I’m sick to death of people trying to rationalise my opposition to their viewpoint on the basis of some imaginary ‘prior commitment’! It is a close relation of “you don’t get it because you are stupid”, and displays a complete lack of respect for one’s opponent’s intellectual honesty.

    eta, deep breath: but even reacting so is taken as evidence of the depth of one’s prior commitment! Hey ho.

  2. Gpuccio,

    This is an important point, so I’m really going to hammer it home:

    The model is distinct from the thing being modeled.

    The formation of black holes does not involve design, yet we can model it on computers using designed algorithms.

    The development of hurricanes does not involve design, yet we can model it on computers using designed algorithms.

    Evolution does not involve design, yet we can model it on computers using designed algorithms.

    Is this clear? Do you see now why your earlier statements don’t make sense?

    Take another look at what you wrote:

    You are completely wrong here. The algorithm is a designed algorithm. You and your friends may believe that it is a “model of the modern synthesis”, or of part of it. Unfortunately, that is a completely unwarranted statement.

    GAs are designed, and they are never a model of NS, which is the only relevant mechanism in the modern synthesis. Your algorithms for Lizzie’s example are no exception. They are based on controlled random variation and intelligent selection. They are forms of intelligent design.

    The fact that a model is designed does not disqualify it from modeling a non-design process.

    Please think about that until it sinks in.

  3. Flint: “The next bus operation is not entirely predictable, because there are sometimes race conditions, and in complex systems bus arbitration is not entirely deterministic. “

    I could agree with you if Joe hadn’t said, “known program”.

    In my known program, even if my platform is NOT an MCU, interrupts will be disabled as will co-processors, DMA controllers, etc.

    In this case, only the MCU containing the “known program” can access the bus and there is no arbitration at all, only external accesses to one bus controller and one memory segment, i.e., one CS.

    The point Joe was trying to make was related to randomness, and it fails with this analogy since with a single MPU environment we can *easily* know with certainty, what the next bus activity will be.

    When bringing up a board for the first time, you do exactly this, you write a routine that will exercise the bus so the engineer has a known pattern he is expecting to see.

    If those exact signals aren’t there, you look for a reason why, which may be as silly as mixing 5V and 3.3V logic.

  4. Joe: “…and yes, if you write a code that walks through 1s and 0s then of course you should know what to expect given a good board. “




    “Do you ask what the probability is for an object subject to the gravitational field of the earth to fall down if it is not sustained by anything? Are you denying deterministic reasoning in science?”

    This question is the kind of mockery typically used by ID/creationists that reveals how little they know about how matter interacts with matter.

    An object under the mutual gravitational influence of another object behaves according to the relative magnitudes of the kinetic energies of the objects, the depth of the gravitational potential well formed by their interaction, the angular momentum in the system, and how much energy is radiated out of the system during the interaction. The concepts of gravitational interaction also similarly apply to the electromagnetic interactions among particles. Star and galaxy formations involve all these processes and more.

    When modeling stochastic collections of particles interacting among themselves through electromagnetic, gravitational, strong and weak force interactions, one does indeed often use statistical mechanics ideas if kinetic theory becomes too difficult for a supercomputer to handle; which can happen very quickly.

    I would suggest right here that not one of the people over at UD would have the slightest idea of how to incorporate the physical laws into a computer program so that the computer program will replicate the evolution of such systems of particles into increasingly condensed states of bound particles that continue to interact and produce emergent properties as a result of stronger and stronger interactions.

    This kind of modeling has become routine in physics and chemistry; and it is nothing like taking logarithms of the ratios of “target spaces” to “sample spaces” and labeling the logarithm of this ratio “information.”

    That quote above is simply expressing cargo-cult notions about how physics and chemistry actually work. It will not make the Nobel Prize land on anyone’s doorstep. That Prize will go to the people who already did the real calculations that led to the discovery of the Higgs as well as other particles.

    What do ID/creationists imagine happens when two neutral hydrogen atoms come into close proximity and bind into a hydrogen molecule? Lay out the sample and target spaces for us, calculate the “information,” and explain why diatomic hydrogen molecules can’t exist. What do you think is required for a computer program to model this phenomenon?

    How do water molecules form snowflakes? Can you model it? Physicists can. How are designer drugs modeled? Can you do it? Chemists can.

    Why is blunt directness on the part of a scientist so offensive to ID/creationists who continue to make these crass assertions about science that are so often dead wrong? Why do you think you deserve respectful handling with kid gloves?

  6. What is sad about all this is that it turns out that there is no hope of having people here construct a computer model of the operation of random mutation together with natural selection, and then analyze whether it could produce gpuccio’s dFCSI, and see why gpuccio thinks that there is a hard limit such that dFCSI is an indicator of Design.

    Any attempt to do so gets portrayed as an effort to sneak Design in via the program.

    At my university there is a group of planetary scientists who use computers to simulate the random aggregation of rocks and dust into planets and solar systems. When they alter the parameters of their program, they usually do not know in advance what kind of solar system will result. Presumably, though, gpuccio and his friends would tell them that they are not properly modeling the aggregation of a solar system, and that they have intelligently designed it instead!

  7. These guys are hopeless. They do not understand what a model is and how important it is in science. I am going to quote from a book by P. W. Anderson, a Nobel laureate in physics. 

    This process of “model-building”, essentially that of discarding all but essentials and focusing on a model simple enough to do the job but not too hard to see all the way through, is possibly the least understood — and often the most dangerous — of all the functions of a theoretical physicist. I suppose all laymen and most of my scientific colleagues see our function as essentially one of calculating the unknown, B, from the most accurate approximation possible to the known, A, by some combination of techniques and hard work. In fact, I can barely use an electronic calculator — almost all of my experimental colleagues being far more skilled at that than I — and I very seldom produce actual numerical results — and, if so, make some graduate student or other junior colleague do the actual work. Actually, in almost every case where I have been really successful it has been by dint of discarding almost all of the apparently relevant features of reality in order to create a “model” which has the two almost incompatible features:

    (1) enough simplicity to be solvable, or at least understandable;

    (2) enough complexity left to be interesting, in the sense that the remaining complexity actually contains some essential features which mimic the actual behavior of the real world, preferably in one of its as yet unexplained aspects.

    P. W. Anderson, More and Different, World Scientific 2011.

    All of this will go over the heads of the UD peanut gallery.

  8. Well, the GA model is inadequate because it doesn’t accurately model chemistry, and the Designer model succeeds because it doesn’t require that pathetic level of detail.

  9. Mung provides this cogent critique of Elizabeth’s simulation. 

    If only she were actually using coin tosses, or even simulated coin tosses. But alas.

    The claim that she is taking subsets of sequences of coin tosses is a flat out lie. Oh, I have no doubt she’s sincere. She really does think the claim is true. By using the symbols T and H she’s done a fine job of fooling herself and apparently many other very bright people over at TSZ.

    I don’t know whether to laugh or cry.

  10. Yes, it’s a systemic problem. Some of you might recall the hilarious story of Gil Dodgen: 

    All computational evolutionary algorithms artificially isolate the effects of random mutation on the underlying machinery: the CPU instruction set, operating system, and algorithmic processes responsible for the replication process.

    If the blind-watchmaker thesis is correct for biological evolution, all of these artificial constraints must be eliminated. Every aspect of the simulation, both hardware and software, must be subject to random errors.

    Of course, this would result in immediate disaster and the extinction of the CPU, OS, simulation program, and the programmer, who would never get funding for further realistic simulation experiments.

    A Realistic Computational Simulation of Random Mutation Filtered by Natural Selection in Biology

    Further reading: Gil Has Never Grasped the Nature of a Simulation Model.

  11. I think the Mung and Joe G characters over there are just a couple of blackguards hooting and throwing feces; it’s their ultimate achievement in life.

    I have been going under the assumption that they have no clue and that their behavior is the only thing left in their lives that gives them any pleasure. Imagine what their brain stems must look like.

  12. Just noticed this from Joe:

    That is a misuse of the word “default” and it is false. Not only do we have to eliminate other causes but the design criteria must also be met.

    But what do the design criteria turn out to be?  CSI, FSCI, or FCSI or some such. And how do you determine whether you meet these criteria?  By measuring the low probability of a non-design solution meeting the target. i.e. the only evidence for design is lack of evidence for non-design.  That is why design is the default.  The EF makes this extremely clear and explicit.

  13. Mark,

    But what do the design criteria turn out to be? CSI, FSCI, or FCSI or some such. And how do you determine whether you meet these criteria? By measuring the low probability of a non-design solution meeting the target. i.e. the only evidence for design is lack of evidence for non-design. That is why design is the default. The EF makes this extremely clear and explicit.

    Indeed, that’s one of the reasons that IDers tend to love the “negative log base 2” gambit. By taking the log of a low probability, flipping the sign, and speaking of “bits”, they disguise the fact that what they are really talking about, in the end, is just a probability — and not even a probability of design. It is, as you point out, a probability of non-design. Design is the default, invoked only when the probability of a non-design origin is deemed to be too low. (Don’t expect Joe to admit that, though.)

    By speaking of “bits of CSI”, IDers also invite the unwary to conclude that CSI is an intrinsic property of an object. They reinforce this notion when they speak of objects “containing” a certain number of bits of CSI. In reality, CSI is not intrinsic and can only be determined relative to a specified function. An object with n functions has n CSI values, one for each target space.

    CSI is just probability in a cheap tuxedo, to borrow a metaphor. And it’s a probability of non-design, with design as the default.

  14. Experiments, too, are models. They are ‘real’ ones, in that they incorporate the behaviour of actual physical objects, but they still involve abstracting the essence of an assumed phenomenon, limiting the variables and creating your own mini-version and prodding it with a metaphorical stick. It’s impossible to do without designing it (though experimental scientists are less frequently accused of ‘sneaking the results in’. I think there are rules about that). And (something Joe G thinks it clever to ‘call’ me on) without using your imagination. 

    Many phenomena, for temporal, spatial, probabilistic or health-and-safety reasons, cannot of course be observed in a lab or a lifetime. Which is the perpetual ‘out’ of the committed Creationist with respect to evolutionary phenomena.

  15. Gpuccio

    Are you really saying that there are scientific theories which are “reality”? Maybe your Bayesian probabilistic approach?

    I absolutely agree that no scientific theories are completely accurate – especially when asked to explain an event like the development of life. That is my point.  Reality is always more complex than the model. That is one reason why there is always an element of uncertainty i.e. chance! (The other reason being that scientific theories increasingly include a stochastic element in the theory.)

    I am only claiming that, in my map of reality, a designer with the appropriate power, and the appropriate knowledge and will, is a full “deterministic” explanation of a designed object, and that no probability analysis is necessary beyond that.

    I am sorry I don’t understand what you mean by “my map of reality”. Is it simply that you believe there are designers who have the ability and knowledge to be 100% certain of obtaining the outcome?  If you believe that, then you have a perfect explanation of everything.  But many people don’t believe this.  They want to know how likely it is that a designer exists and how likely it is that the designer would create the target. Is this unreasonable? After all we are surrounded by designers who have less than certain probability of existing in the first place and who are very far from being certain of achieving their ends.

    Or perhaps you are just saying that if such a designer existed that would be a deterministic explanation.  Fair enough, but the question of whether such a designer exists is still open and controversial.

    Or maybe you are saying something more subtle?

    I have added many times that, for purely empirical considerations I tend to believe that the best hypothesis for the biological designer or designers is that he (they) are not physical beings (that is, with a physical body similar to ours), and that he or they may interact with biological matter in the same way as our consciousness interacts with our brains. That is my position, but it is not necessarily a universal requirement for ID. Others can believe differently, and still have a perfectly valid position in the ID scenario. For instance, the aliens scenario remains, as I have said many times, a perfectly valid scenario for the explanation of biological information on our planet. But personally I don’t find it really convincing. Again, map choices.

    Not sure why purely empirical considerations is in bold.  Do you deduce design from something other than empirical considerations? Anyhow you have begun to take the step I have asked for.  You have sketched out the beginnings of a specific design hypothesis (albeit extremely vague) which in time might be given the same kind of incredibly detailed scrutiny which you apply to non-design hypotheses. Just as you ask evolutionary biologists to explain exactly what mutations lead to certain characteristics and their selective advantages I would ask you to explain exactly how and when the non-human force acted on DNA.  After all the evolutionary biologists are busy putting forward hypotheses – some more credible than others.  Where are the corresponding design hypotheses?

    I actually think that as soon as that scrutiny begins your hypothesis would collapse – but that’s another issue.  My point is that until you go to some level of detail then the only evidence of design is perceived failures of non-design methods and in that sense design is the default.

  16. Gpuccio

    You always seem to forget that the association between dFSCI and design is empirical, and that dFSCI has 100% specificity in distinguishing between human designed objects and non designed objects.

    We have been round this before.  I don’t forget it. I disagree with it.  You effectively define dFSCI as “not created by a non-design solution” (you can see this because if you find a non-design solution it automatically loses its dFSCI status). Therefore it is always associated with design by definition.

  17. In comments 843 and 844 gpuccio wrote:

    [843:] To Joe Felsenstein:

    [me:] Any attempt to do so gets portrayed as an effort to sneak Design in via the program.

    [GP:] No. The simple fact is, if you sneak IS in the algorithm, you have sneaked design in the algorithm, because IS is a form of design.

    Please, read my previous answer to Keiths. I apologize to you too if you have been misdirected in evaluating my thoughts because of my unhappy phrasing of it.

    Elizabeth’s model is a standard one of some genotypes (the bit string) that have some fitnesses (the fitness function she uses). Yet it is said to involve IS (Intelligent Selection) and so to be an inappropriate model of natural selection.

    Here is a simpler case, the kind I might assign my theoretical population genetics class to simulate:  We have three (diploid) genotypes AA, Aa, and aa whose fitnesses are 0.99, 0.99, and 1.  In a random mating population with 1000 individuals, simulate the outcome and tell me what fraction of the time the A allele wins out.  

    OK, where did I sneak in IS?  Elizabeth’s model is just a bigger (and more haploid) model like that. She had an assignment of fitnesses to genotypes. Is there some assignment of fitnesses to genotypes that you would say isn’t IS and thus for which you would be willing to say whether it has dFSCI?

    As far as I can see all models that have fitnesses associated with genotypes are said by you to show IS. Including that one-locus diploid class exercise model I just gave.  Right?
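    The one-locus class exercise Felsenstein describes is easy to make concrete. Below is a minimal Wright-Fisher sketch of it, assuming standard viability selection followed by binomial sampling (drift); the function names, seed, and the small population size used in the demo run (chosen for speed rather than the 1000 individuals in the text) are my own.

    ```python
    import random

    def wright_fisher(n=1000, w_AA=0.99, w_Aa=0.99, w_aa=1.0, p0=0.5, rng=None):
        """One run of a Wright-Fisher model with viability selection.

        Returns True if allele A is eventually fixed, False if it is lost."""
        rng = rng or random.Random()
        p = p0                                        # frequency of allele A
        while 0.0 < p < 1.0:
            q = 1.0 - p
            # Mean fitness under Hardy-Weinberg genotype proportions
            w_bar = p*p*w_AA + 2*p*q*w_Aa + q*q*w_aa
            # Expected frequency of A after selection
            p_sel = (p*p*w_AA + p*q*w_Aa) / w_bar
            # Binomial sampling of 2N gametes: genetic drift
            p = sum(rng.random() < p_sel for _ in range(2 * n)) / (2 * n)
        return p == 1.0

    rng = random.Random(42)
    runs = 50
    wins = sum(wright_fisher(n=50, rng=rng) for _ in range(runs))
    print(f"A fixed in {wins} of {runs} runs")
    ```

    Note that no "target" appears anywhere: the assigned fitnesses plus drift decide the outcome, which is exactly the question at issue.
    
    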


    [Me:] Presumably, though, gpuccio and his friends would tell them that they are not properly modeling the aggregation of a solar system, and that they have intelligently designed it instead!

    [GP:] No, I don’t say that. I hope that what I think is more clear now.

    [844:] To Joe Felsenstein (at TSZ):

    I hope you will agree that, if those scientists introduced in the simulation factors and principles that are not appropriate to simulate what they say they are simulating, their results would be of no value.

    My point was that these models of rocks and dust aggregating into a solar system were intelligently designed models of unintelligent processes. If models in which different genotypes have fitnesses, and in which there is mutation of a quite ordinary sort, show IS, then the planetary scientists’ models must show Intelligent Collisioning. Let’s not go off sideways into the details of whether their models are fully adequate. No model ever is. The issue is whether you can evaluate dFSCI for a simple, boring model of the ordinary sort.

  18. Gpuccio

    You seem to be defending your case with some rather obscure philosophy about maps and ontological categories.  Can you answer this question:

    Do you believe that solutions involving designers are certain to produce the target or is there an element of uncertainty?

    You say it is all about “choosing maps” whatever that means.  So does the design inference only work for people that have chosen your map?

  19. gpuccio: “Could you please explain why and where have you concluded that I don’t understand the concept of “feedback”?  “

    If you understood how “feedback” works in a system, you would have understood Elizabeth’s original experiment.

    You would also understand what evos mean when they use the term “evolution”.

    You would also have realized that with “feedback”, the “search space” that IDists claim is infinite is actually very small when considered from parent to child.

    You would have known all these things without our having to tell you.

    gpuccio: “What I don’t understand is the sudden emergence of this vague “feedback” in a discussion that was on something else. “

    If you think this discussion wasn’t always about “feedback”, then you don’t understand what is meant by the term, “evolution”.

  20. Gpuccio: Looking at your post 850 (which is becoming increasingly difficult to follow on my tablet)

    You mention data you expect to come from mainstream research that will support design and not rmns. What kind of data would that be?

    It’s interesting how feedback, and discard of the suboptimal, play a role in both learning and intelligence. Our capacities for these things are sophisticated versions of the more diffuse process by which populations evaluate ‘information’ about how to survive in an environment … by actually surviving in it (or not)!

    It would be possible to derive the fitness function for a GA in a ‘random’ way, rather than just ‘intelligently’ choosing it. Take the depth of snow in your yard on December 14th last, add the license number of the next vehicle to pass, the number of times the word “obvioulsy” appears in UD comments … then simply evaluate strings according to their approach to this peak, and don’t copy (breed from) the worst. But still you’ve smuggled something in? Well, it can’t be the solution, since you didn’t choose it, so it must be the method – the method of ‘picking’ the solution ‘at random’, and of evaluating against it.
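    The “randomly derived target” idea is trivial to implement. In this sketch (all parameter values are my own choices) the target string is itself generated at random, yet cull-and-copy with blind mutation still homes in on it:

    ```python
    import random

    rng = random.Random(0)
    TARGET_LEN = 40
    # The 'target' is itself drawn at random -- nobody chose it intelligently
    target = [rng.randint(0, 1) for _ in range(TARGET_LEN)]

    def fitness(genome):
        """Closeness to the arbitrary target (higher is better)."""
        return sum(g == t for g, t in zip(genome, target))

    def mutate(genome, rate=0.02):
        """Blind point mutation: each bit may flip with small probability."""
        return [1 - g if rng.random() < rate else g for g in genome]

    pop = [[rng.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(50)]
    for _ in range(200):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:25]                    # cull the worst half
        pop = survivors + [mutate(s) for s in survivors]

    best = max(pop, key=fitness)
    print("best match:", fitness(best), "of", TARGET_LEN)
    ```

    Whatever one wants to call the smuggling here, it is not the choice of solution, since the solution was never chosen.
    
    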

    Stochastic but biased differential survival of variant copies wrt a particular metric. That’s ‘intelligent selection’, is it? 

    In the Blind Watchmaker Dawkins toyed with the idea of allowing something in Nature to do the selection part – digital flowers that attracted insects or something. But still he’s smuggled that in, because he chose the choosers!

    So basically the only legitimate GA would lack the ‘genetic’ part and the ‘selection’ part … that would be an algorithm, then!

  22. 842 gpuccio September 29, 2012 at 2:31 am

    The problem is not that you are using an algorithm (designed by definition). The problem is that you are using an algorithm that measures a specific function, and rewards it. That is IS, and not NS. IS has formal and logical properties and empirical powers that are completely different from those of NS. That’s why you cannot derive any credible conclusion from a model that uses IS to model NS. I hope that is more clear now.

    Emphasis mine. The “formal and logical properties” are defined by you, gpuccio. You obviously draw a line in the sand between natural selection and “intelligent” selection (people have always called it artificial). That’s fine, although it is not always helpful: you will see a ghost in the machine and fixate on it. This is neither right nor wrong. It’s your choice.

    However, you are dead wrong to claim that the “empirical powers” of natural and intelligent selection are completely different. They are not. Both lead to incremental changes in the genetic makeup of a population. The only difference is that one occurs in nature and selects for whatever happens to be best for the current environment and the other happens in Elizabeth’s computer and selects for whatever she prescribes. That difference, however, is not essential to Elizabeth’s argument. What she set out to demonstrate was the possibility of starting with a population whose fitness is bad in a given environment and observing how it gets to the top of the fitness peak without any intelligent input. She did not have to input the solution (she did not even know which sequences would satisfy it). Instead, that task was left to the environmental feedback. Better sequences had a better chance than worse ones of leaving offspring.

    That is the crux of both natural and “intelligent” selection. There is no difference in their “empirical powers.” Each acts in a short-sighted way, merely pushing the population (mostly) up the fitness slope. Nature does not compute fitness, but neither does it compute the Moon’s trajectory.

  23. It is interesting how vigorously people argue against GAs, given that ID claims to be happy with NS as a basic principle.

    I wonder how GP would evaluate an algorithm that interacted with the ‘real world’ in some way. ie, you take the ‘selection’ out of the computer, (horror of horrors, evaluation/discard methods are written by programmers!) but leave the ‘genes’ inside. These genes are a recipe for dog-food, say. Quantities of meat, salt, fish, bananas, water … instead of evaluation against a fitness function, you make up the dog food, and give it to a population of dogs, with the genetic recipe in a little capsule buried deep in the mush. They will evidently lap up some mixes, turn their nose up at others, and be too stuffed to care about the rest. Your canine focus group eats the ‘genome’ along with the grub, and you fish around in what they left for the genotype and mirror the ‘real’ consumption in your cyber-genotypes. Then you make copies, with a bit of ‘blind’ mutation and repeat. I’d be willing to bet the food becomes more distasteful as time goes on.  

    Would that be ‘Intelligent’ or ‘Natural’ selection? What would we object to next? The genotype? The mutation process? The ‘breeding’ method?

  24. gpuccio has a new comment in which he explains the difference between natural and “intelligent” selection. It’s long-winded, but, as far as I can tell, it boils down to this.

    • The fitness is computed.
    • There are no actual fights.
    • And there is no sex.

    I kid you not.

    gpuccio, let me give you an example from physics and see what you think about it. Let’s take an elastic string and stretch it between a finger and a thumb. The string will be straight.

    Physicists have a theory about the shape of the string. We think that a string stretching from point A to point B has the smallest possible length. If there are no obstacles between A and B, this minimization principle yields a simple solution: the straight line connecting the two points. (Do the experiment.)

    Next, keep the finger and thumb stretched and, with your free hand, deform the string. Now release it. The string will oscillate a bit and eventually go back to the old state of being straight. In the process, it gradually reduces its length until it reaches the minimum (straight line). A natural process. 

    Meanwhile, I can do a simulation. I will start with exactly the same shape in my computer as your deformed string. I will then instruct the computer to do the following:

    • Make a slight change in the shape. 
    • Measure the length. 
    • If the length has decreased, accept the new shape; otherwise go back to the previous shape. 
    • Repeat. 

    I claim that this process will yield the same straight string as your experiment. Your experiment and my simulation yield the same result. Nature does not compute the length of the string, whereas my simulation does. Do you see a problem with that? 
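    The four-step loop described above can be written down directly. Here is a minimal sketch, assuming the string is modelled as a chain of points with the two endpoints held fixed; the step sizes, point count, and iteration count are arbitrary choices of mine:

    ```python
    import math
    import random

    rng = random.Random(1)

    def length(points):
        """Total length of the polyline through the given points."""
        return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

    # Fixed endpoints (finger and thumb) with a sine-wave deformation between
    n = 12
    pts = [(i * (10.0 / (n - 1)), math.sin(i * (10.0 / (n - 1))) * 2.0)
           for i in range(n)]
    pts[0], pts[-1] = (0.0, 0.0), (10.0, 0.0)

    for _ in range(60000):
        i = rng.randrange(1, n - 1)              # pick a free (interior) point
        old = pts[i]
        trial = (old[0] + rng.uniform(-0.05, 0.05),   # slight change in shape
                 old[1] + rng.uniform(-0.05, 0.05))
        before = length(pts)                     # measure the length
        pts[i] = trial
        if length(pts) >= before:                # keep only decreases
            pts[i] = old                         # otherwise revert

    straight = math.dist(pts[0], pts[-1])        # 10.0, the true minimum
    print(f"final length {length(pts):.4f} vs straight-line {straight:.4f}")
    ```

    The loop computes lengths explicitly; the physical string computes nothing. Both end up straight.
    
    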

  25. One should add that the mathematical theory of natural selection is also used to analyze artificial selection and vice versa. And this is not some intellectual dogmatism on the part of animal and plant breeders. They have big money at stake and are very pragmatic.

  26. Gpuccio

    I have already answered that:

    “I am only claiming that, in my map of reality, a designer with the appropriate power, and the appropriate knowledge and will, is a full “deterministic” explanation of a designed object, and that no probability analysis is necessary beyond that.”

    I know that is what you wrote but I found it hard to understand what you meant. So I tried rephrasing the question in a simpler manner that could be answered yes or no.

    IOWs, in that situation there is no more uncertainty, for all useful purposes, than in calculating the trajectory of a body in a known gravitational system.

    I am still a bit unclear what you mean but I think you are saying that any solution involving design is very close to certain to succeed?  This seems to me clearly false and slightly nuts.

    Let us take a real example where we are trying to decide if something was designed.  Say someone is found with their neck broken at the bottom of the stairs – was it accident or murder?  The dead person is a fit, cautious young man not inclined to drink or drugs, and the stairs are wide and easy. The probability of an accident is extremely small.  Should we therefore deduce design? No. It depends on there being someone with the motive and means.  A real detective would review the chances of there being someone with the motive present at the scene, and also the chances of them successfully pushing the victim down the stairs. The detective would be dealing in probabilities and comparing them.

    It appears that in your world view that if there is someone who wants to push the young man downstairs then there is no more uncertainty in them succeeding, for all useful purposes, than in calculating the trajectory of a body in a known gravitational system.

    You may argue that I am ignoring the phrase “with appropriate power and knowledge”. But what is appropriate? Do you mean sufficient power and knowledge that they are bound to succeed? That seems circular. And what about all the possible solutions involving design that do not have the appropriate power and knowledge?  Do they not count as design solutions?

    No, the design inference, like all scientific theories, works for all, but there are maps of reality that will look at it with more sympathy, and others that will resist it.

    My “map” allows for uncertainty in whether designers succeed – so please can you explain how the design inference works for me (as it works for all).

  27. What, instead, you cannot do is to predefine some functions that are then actively measured by your designed system, actively expand the results that are in line with your definition of function, and then say that you are modeling NS.

    You are simply implementing an algorithm of intelligent selection, and modeling nothing.


    You are still confusing the model with the thing being modeled.

    Imagine we are developing an atmospheric model. I want to create a function that determines the barometric pressure at certain times and places so that we can use that information to predict future changes in the system. You argue against my function, saying

    What you cannot do is to predefine a function that actively measures the atmospheric pressure. That’s Intelligent Pressure Measurement, and nature doesn’t have built-in barometers.

    You are simply implementing an algorithm of Intelligent Pressure Measurement, and modeling nothing.

    Do you see why that argument is wrong?

  28. gpuccio,

    You are conflating the concept of an algorithm with the data the algorithm processes.

    Example 1:  Algol_Selection( WeaselPopulation, “Methinks it is like a weasel”, 100, 10 );

    Example 2:  Algol_Selection( PrimePopulation, “3.141”, 20000, 200 );

    The “algorithm” knows nothing about the “target” and doesn’t change.

    It is the user, (environment), that “supplies” the “search” parameters.
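    `Algol_Selection` is this commenter’s hypothetical routine, but the point it makes – that the loop itself never changes, only the user-supplied data does – is easy to demonstrate. Here is one possible sketch (names, alphabet, and parameters are all my own) of such a target-agnostic routine:

    ```python
    import random

    rng = random.Random(7)

    def select_evolve(population, target, generations, pop_size, mutation_rate=0.05):
        """A generic cull-and-mutate loop. It never inspects what `target`
        means; it only scores candidate strings character by character."""
        alphabet = sorted(set(target + " abcdefghijklmnopqrstuvwxyz0123456789."))
        def score(s):
            return sum(a == b for a, b in zip(s, target))
        def mutate(s):
            return "".join(rng.choice(alphabet) if rng.random() < mutation_rate
                           else c for c in s)
        pop = list(population)[:pop_size]
        for _ in range(generations):
            pop.sort(key=score, reverse=True)
            survivors = pop[: pop_size // 2]          # cull the worst half
            pop = survivors + [mutate(s) for s in survivors]
        best = max(pop, key=score)
        return best, score(best)

    # The same unchanged algorithm run against two different user-supplied targets:
    best1, score1 = select_evolve(["x" * 28] * 20,
                                  "methinks it is like a weasel", 1000, 20)
    best2, score2 = select_evolve(["00000"] * 20, "3.141", 1000, 20)
    print(score1, score2)
    ```

    Nothing inside `select_evolve` changes between the two calls; only the supplied data does, which is the commenter’s point.
    
    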



  29. Mark Frank: “A real detective would review the chances of there being someone with the motive present at the scene and also the chances of them successfully pushing the victim down the stairs. The detective would be dealing in probabilities and comparing them.”


    ID refuses to investigate the abilities of their “designer” and because of that, ID will never be able to provide a proper theory.


  30. I don’t think there is anything wrong with limiting the scope. Newton’s theory described how gravity works but did not explain where it came from. Likewise, the theory of elasticity describes the properties of elastic solids but says nothing about the interactions giving rise to elasticity.

  31. In ID, the designer is responsible for the “mechanism” as Joe likes to assert.

    The designer’s capabilities directly impact the probability of design as an explanation.

    ID’s main argument has always been that “Darwinism” is not capable of doing what it claims,  which is exactly where their own explanations stop, at their designer’s capabilities.


  32. Yes. And likewise, molecular and atomic forces on the microscopic scale are responsible for the elastic forces on the macroscopic scale. Nevertheless, the theory of elasticity is its own, separate body of knowledge that deals with macroscopic properties of solids without delving into the question of the mechanism. 

    Have a look at the table of contents of Theory of Elasticity by Landau and Lifshitz. There is no mention of van der Waals forces or covalent bonding in it. 

  33. Here is an algorithm of which I have intimate knowledge.

    You want to bring into focus defects and internal features inside an object; the object could be metal or the body of a living organism.

    You capture an array of digitized ultrasonic echoes and store the array in the memory of a computer.

    The algorithm for focusing is derived from a well-known property of nature that sound or light travels from point A in medium 1 to point B in medium 2 on a path that takes minimum time.

    So the program does the following:

    (1) It selects a digitized waveform from the raw data file in memory to become the waveform at the center of an aperture of a chosen size which is about to be synthesized as though it were a lens.

    (2) It then selects each of the nearby digitized waveforms encompassed by that chosen aperture and shifts each one in time in the way that minimizes its travel time from a chosen point inside the medium, below the center of the aperture, to that waveform’s particular position within the aperture.

    (3) The computer then adds each of these appropriately shifted digitized waveforms to the digitized waveform at the center of the aperture and stores the result at the position of the center of the aperture in a separate file.

    (4) The computer then retrieves from memory the next digitized waveform to become the center of the aperture, repeats steps (1) through (3) for every digitized waveform in the raw data file, and outputs the result to every corresponding position in the processed data file.

    This process is done for every point within the volume of the medium that was scanned.

    The result is a processed file of digitized waveforms that contains focused results for every point within the volume that was scanned. If there are any reflectors in the medium, they will appear in focus in the processed file even though they can’t be seen in the raw data file. The result can be displayed on a computer screen.

    The computer program doesn’t know what is in – or even IF there is anything in – the medium being scanned. It simply behaves like a lens scanning over the digitized data bringing whatever is there into focus. The computer doesn’t invent the images produced. It doesn’t intelligently design the images. It doesn’t put them there as a target. It has been programmed simply to behave like a fundamental law of nature in how it handles the digitized waveforms.

    We have taken a fundamental law of nature and used it to construct a Synthetic Aperture Focusing Algorithm within a computer that does exactly the same job a lens would do.

    This example contains the essence of programming computers to model nature. Either you program them to handle external objects or data in the way natural law would handle them; or you program them to behave the way natural law does, and also program them with the objects on which those laws will act.
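    As a toy illustration of the delay-and-sum idea described above (not the author’s actual code; the geometry, wave speed, and sampling values are invented for the example), here is a 1-D synthetic-aperture sketch with a single hidden point reflector:

    ```python
    import math

    C = 1500.0                    # wave speed, e.g. m/s in water (illustrative)
    DT = 1e-6                     # sample interval in seconds
    N_SAMPLES = 400
    sensors = [(x * 0.01, 0.0) for x in range(-8, 9)]  # linear array on surface
    reflector = (0.02, 0.15)      # hidden point reflector in the medium

    def make_trace(sensor):
        """Synthesize a raw A-scan: one echo at the round-trip travel time."""
        trace = [0.0] * N_SAMPLES
        idx = round(2 * math.dist(sensor, reflector) / C / DT)
        if 0 <= idx < N_SAMPLES:
            trace[idx] = 1.0
        return trace

    raw = [make_trace(s) for s in sensors]

    def focus(point):
        """Delay-and-sum: shift each trace by the minimum-travel-time delay
        to the candidate point and add the samples coherently."""
        total = 0.0
        for s, trace in zip(sensors, raw):
            idx = round(2 * math.dist(s, point) / C / DT)
            if 0 <= idx < N_SAMPLES:
                total += trace[idx]
        return total

    on_target = focus(reflector)       # all echoes align -> large sum
    off_target = focus((-0.05, 0.05))  # echoes misalign -> small sum
    print(on_target, off_target)
    ```

    The focusing loop knows nothing about whether a reflector exists; it simply applies the minimum-time law at every candidate point, and whatever is there lights up.
    
    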

  34. Gpuccio

    So, let’s say that our investigators find the person dead, at the bottom of the stairs, with a big knife deeply stuck in his back. Is it accident or murder?

    The probability of an accident is extremely small. But not exactly in the same way as it was in your original example.

    You may say that it is not exactly zero, because you like to think, being a serious Bayesian scientist, that the knife could have been flying in the air for some reason just when the man fell from the stairs.

    At the same time, the probability of an intentional killer would not be exactly one for you, because, as you say, your “map” allows for uncertainty in whether killers succeed.

    In this formulation murder seems obvious because it is very much more likely to produce the outcome (knife in back) than accident.  We know about murderers and what they are capable of, and we subconsciously (or possibly consciously) compare that with the chances of it happening by accident. That we are making such a comparison becomes clear if you change the account so that the probability of a murderer succeeding is also dramatically lowered.

    Suppose that the stairs are on a spaceship and the dead person is the only person on it.  Now the probability of murder has also dropped very low. Whatever the explanation, it was bizarre.  Maybe the dead person was using the knife and accidentally fell on it.  Maybe a murderer rigged up a system that allowed him/her to propel the knife remotely.  They are both extremely unlikely, but I would say the accident is marginally the better explanation. But logically the investigators should compare the two and look for other explanations that are more probable (suicide?)

    The point is that comparing probabilities is the essence of the rational process.  As far as I can see this is nothing to do with worldviews and maps.  The ID argument fails in all cases.

  35. Mung: “It’s not “selection” of any kind or sort. Selection involves intelligent conscious choice. “

    I’m going to agree with you here that IDists believe it to be an intelligent conscious choice.

    Evos use the term as a metaphor but IDists tend to fixate on the term as being a literal equivalent to selection as practiced by humans.

    We see the same thing when the term “search space” is used. IDists literally believe that a “search” for a particular “target” has been undertaken.

    The “mechanism” of evolution has no target.


  36. Gpuccio,

    Guys, I have no intention to go on endlessly discussing the differences between NS and IS. I have clearly stated what I think. I have discussed this topic many times, and I am frankly tired of doing it, also because, once the relative positions are clarified, there is nothing more to say.

    That’s a cop-out. There is plenty more to say. We’ve shown why your central claim about GAs is wrong:

    Any fitness function in any GA is intelligent selection, and in no way does it model NS.

    Can you defend your claim against our arguments, or do you concede that it is wrong?

    You keep your convictions, and I will keep mine.

    I have a better idea. Let’s place a premium on the truth, even when it is uncomfortable to do so.

    Here’s yet another example showing that your “intelligent selection” complaint is incoherent.

    Imagine we are modeling avalanches. I write a function that predicts whether a given patch of snow will begin to slide by looking at the temperature history, the type of snow, the slope angle, and whether adjacent patches are already sliding. In other words, my function selects which patches will begin to slide at a given time.

    According to your logic, this is “intelligent selection.” After all, our “designed system” contains a “predefined function” that selects which snow patches will begin to slide by “actively measuring” the critical parameters. Our model is invalid. We are modeling Intelligent Sliding, not avalanches.

    I hope you can see that this conclusion is absurd. Yet it is reached by the same logic you employ when you argue that any GA with a fitness function is invalid as a model of natural selection.
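    To make the avalanche analogy concrete, here is a minimal sketch (the thresholds and slope values are invented for illustration) of just such a “predefined function” that selects which patches begin to slide:

    ```python
    def will_slide(angle, neighbor_sliding, critical=38.0, triggered=30.0):
        """The 'predefined function': a steep patch fails on its own;
        a moderately steep one fails when an adjacent patch is moving."""
        return angle >= critical or (neighbor_sliding and angle >= triggered)

    def simulate(angles):
        """Propagate sliding along a row of patches until nothing changes."""
        sliding = [False] * len(angles)
        changed = True
        while changed:
            changed = False
            for i, a in enumerate(angles):
                if sliding[i]:
                    continue
                neighbor = (i > 0 and sliding[i - 1]) or \
                           (i + 1 < len(angles) and sliding[i + 1])
                if will_slide(a, neighbor):
                    sliding[i] = True
                    changed = True
        return sliding

    slope = [25, 33, 35, 40, 34, 31, 28, 36]   # slope angles in degrees
    print(simulate(slope))
    ```

    By the “intelligent selection” logic, this designed function that “actively measures” slope angles would disqualify the model; yet it is an entirely ordinary way to model avalanches.
    
    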

  37. A point hugely relevant to biological modelling is whether what GP calls IS (the writing or execution of the evaluation/discard procedure in a computer program that copies digital strings) is a fair model of NS (differential survival and reproduction of ‘real’ organisms). No-one is saying that they are precisely the same thing.   

    GP, with unanimous support from the interested ID community, thinks that the coding of an ‘IS’ process “in no way” models NS. Which is fair enough, I suppose, as an opinion. People who work with these things on a daily basis think otherwise.

    But … I still wonder why it’s such a big deal. “ID is OK with differential reproduction”, we are told, and significant change within ‘kinds’ off the back of it. But you can’t model it. Oh goodness, no. You might think you are but you aren’t, because … well, you aren’t.

  38. gpuccio: “To know if RV can give differential reproduction in replicators, and if such an effect can build dFSCI, you need exactly that: true replicators, true differential reproduction of replicators, and, obviously, RV.”

    I always get the feeling that IDists just don’t understand the whole concept of computer simulations.

    gpuccio, if I write a simulator for a piston engine in a car, I don’t need to use “true spark plugs”.

    My “virtual spark plugs” don’t need to be 100% “true” either, as they don’t need to have any threads machined on them for most of the tests I would need to do.

    I think it was Gil Dodgen who wanted to model evolution on a computer by randomly altering code in the OS.

    That is not a proper view of computer simulation.


  39. Oh, this annoys the hell out of me!

    GP has apologised, saying his remark was not directed at me, but I did not take it personally in the first place. My rant was a general frustration at a common ID attitude – if critics don’t accept their crystal-clear argumentation, they must be somehow ideologically hobbled. Reminds me of the Baptist on my doorstep who said that I wasn’t accepting her evangelical stance because Satan had closed my mind. How do you answer that, other than with an expletive? ***

    *** (I didn’t, of course – that’s just rhetoric. I begin to wonder whether there should be some kind of markup for rhetorical flourishes!).

  40. Joe: “And just because a computer language can call something a “default” doesn’t mean it actually is.”

    The whole point of R0bb’s response was that you used the term “default” in a non-standard way.

    “Programming” languages use default the way English does, i.e., “IF NOT (in set X), THEN default”.

  41. Joe: “Ya see you cannot simulate what you do not understand. “

    Your side is still at the point of learning what “simulation” means, never mind a practical application.

    Look at Gil Dodgen and others who wanted to model evolution by random changes to code in the OS.


  42. Gpuccio,

    Your arguments are, at best, ridiculous.

    I’ve noticed that you tend to lash out at people who have the audacity to point out your errors. Perhaps internet debate is not the best pastime for someone of your temperament.

    You model avalanches with what is known about avalanches. You should model NS with what is known about NS.

    We do. We know that mutations happen. We know that mutations can affect traits. We know that traits can affect survival and reproductive success. It’s not conjecture; all of this has been established experimentally. So why would a GA that models mutations, links mutations to traits, and favors some traits over others be an invalid model?

    To know if RV can give differential reproduction in replicators, and if such an effect can build dFSCI, you need exactly that: true replicators, true differential reproduction of replicators, and, obviously, RV.

    All of those are present in GAs, although I suppose you will argue that they are not “true” somehow. If so, then specify exactly why and be prepared to defend your claims.  And remember:  the model is distinct from the thing being modeled.
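    A bare-bones GA really does contain replicators, random variation, and differential reproduction, with nothing else smuggled in. A minimal sketch (all names, the trait mapping, and the parameter values are my own choices):

    ```python
    import random

    rng = random.Random(3)
    GENOME_LEN = 30

    def trait(genome):
        """A simple genotype-to-trait map: here, the number of 1-bits."""
        return sum(genome)

    def reproduce(genome, mutation_rate=0.01):
        """Replication with random variation: each bit may copy with error."""
        return [1 - b if rng.random() < mutation_rate else b for b in genome]

    # A population of digital replicators with random initial genomes
    pop = [[rng.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(60)]
    for _ in range(300):
        # Differential reproduction: higher-trait replicators are more
        # likely (not guaranteed) to leave offspring
        weights = [1 + trait(g) for g in pop]
        parents = rng.choices(pop, weights=weights, k=len(pop))
        pop = [reproduce(p) for p in parents]

    mean_trait = sum(trait(g) for g in pop) / len(pop)
    print(f"mean trait after selection: {mean_trait:.1f} of {GENOME_LEN}")
    ```

    The selection step is probabilistic and short-sighted, exactly like the biological process it abstracts; no target sequence appears anywhere in the loop.
    
    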

    So you say: I take IS in my algorithm, and I generate as much dFSI as I like (after all, it simply depends on how much dFSI I put in my algorithm).

    No, I don’t say that, and neither does any ‘Darwinist’ I’m aware of. We all know that the choice of fitness function matters. That’s why no one, for example (including Dawkins himself), thinks that Weasel is a valid model of natural selection. It isn’t, and it was never intended to be. It’s a model of cumulative selection, yes, but not of natural selection.

    And if gpuccio objects, well, we are ready to attack him on all sides. Let him waste time in saying his things, we just don’t listen…

    We do listen, and that’s what makes you unhappy, because when we hear you saying something that’s incorrect, we point it out. Why wouldn’t we? Again, if this bothers you, perhaps internet debate is not really your cup of tea.

    Ah, your premium on the truth: you can keep it, if you like. I am not really a fan of premiums.

    It would improve your credibility if you were more concerned with the truth and less concerned with saving face.

    Where is your summary of my argument?

    I lost interest. People have already found serious flaws in your position. Until you address those flaws, there’s no point in putting effort into a summary.

  43. gpuccio wrote in comment #868: Just don’t come and tell me that NS can generate dFSCI, because GAs have demonstrated it. You already know that I don’t believe that this is true.

    and gpuccio wrote in comment #878 in the UD thread: By the way, still complete silence about my posts, those that were, after all, the starting point of their thread. What a disappointment! I was starting to feel that I was achieving something in life (a whole thread dedicated to my ideas on TSZ!), and then complete indifference…

    I think I speak for many here when we say that, as we find aspects of your arguments unclear, we were trying to do what would make them very clear, namely to take a GA example and see how your dFSCI concept applied to it.  The thread was devoted to considering your ideas in a concrete (model) situation.  Alas, that seems not to be possible, for some reason.

  44. As near as I can tell, gpuccio’s distinction between “intelligent” selections and “natural” selections hinges entirely on the existence of some “function” that is supposed to be part of the final state of evolution.

    That is a complete misconception of the process of evolution and of the role of selection.

    Properties and functions emerge in the process of evolution; they are not necessarily targets of evolution. The selection takes place on many things; the examples, such as Elizabeth’s program, simplify the illustration of selection to a single property.

    However, in the process of selection on the basis of some particular property of the system, the system evolves; and it may very well be the case that as the system evolves, new properties and functions arise. These may or may not have a significant effect on subsequent evolution; the property on which natural selection is operating may simply be more important than those other emergent properties.

    On the other hand, emergent properties can, and often do, change the course of the evolution of a system. For example, if gravitational effects become significant as a plant or animal increases in size, the structures that support its weight become thicker, and those thickened structures may themselves become properties on which other forces in the environment begin to act. If such an emergent property becomes one of the dominant targets of environmental forces, the evolution of the system may change course. The net direction of evolution is determined by multiple forces acting on multiple properties of the system.

    Gpuccio is confusing simple illustrations that pick out a single property on which natural selection can act with the more general cases in which many properties or functions are grist for selection. As new properties and functions emerge, the course of evolution can indeed change. It happens all the time with even the simplest systems.

    Natural selection and artificial selection are distinguished only by the fact that some person or other animal sets up the conditions that act on a particular feature or function. Just because an intelligent being can do it doesn’t mean that it cannot, or does not, also happen within the environment of the evolving system.

    In fact, as far as the evolving system is concerned, the intelligent being is simply another change in the environment of that system. For example, a bacterium and its descendants may already be under stress, with their evolutionary path moving in some direction. Then the environment changes and the evolutionary path is altered. How would a bacterium know if a human suddenly started making its life miserable by ramping up the forces on some previously unstressed feature of the bacterium?

    A dam could have been destroyed by an earthquake, dumping new compounds into the bacterium’s environment. Or it could just as well have been a human spilling their coffee. If the human accidentally spills the coffee, is it natural selection? If the human spills the coffee deliberately, does it suddenly become “artificial” selection?

  45. Gpuccio,

    You cannot be serious. In less than 48 hours, you’ve taken three mutually inconsistent positions.

    First you told us that GAs are invalid as models of NS because they’re designed:

    You are completely wrong here. The algorithm is a designed algorithm. You and your friends may believe that it is a “model of the modern synthesis”, or of part of it. Unfortunately, that is a completely unwarranted statement.

    GAs are designed, and they are never a model of NS, which is the only relevant mechanism in the modern synthesis. Your algorithms for Lizzie’s example are no exception.

    Then you retracted that statement, asserting instead that it’s okay that GAs are designed, as long as they don’t have fitness functions:

    OK, I see from your last quote that you are right about one thing: in that passage I express myself in a wrong way…

    So, while it is true that “GAs are designed,” and it is true that “they are never a model of NS, which is the only relevant mechanism in the modern synthesis”, the second part is not a consequence of the first…

    Any fitness function in any GA is intelligent selection, and in no way it models NS.

    Now you’re conceding that fitness functions are just fine as long as they have the gpuccio stamp of approval:

    I will show first why Lizzie’s GA is an example of IS, and not of NS. I will then show how it should be modified to drop IS and generically “model” NS…

    The important point now is: the algorithm can measure a product for each new string, but it must not be able to select for that product unless and until it is higher than 10^60…

    Therefore, point 4 becomes:

    4) The ability to positively select and expand each new string according to the calculated product if, and only if, the product is higher than 10^60.

    This is ridiculous. I’m beginning to think that this whole thread is actually a GA (implemented using human beings) with the goal of producing “Gpuccio’s Theory of ID”. You are randomly mutating your theory (without admitting that you are changing anything). Your critics are selecting the variations that survive (and ironically, in this case the process really is “intelligent selection”). Unfortunately, all of the mutants have died so far. I think you need to pick up the pace of mutation.

    Or better still, don’t. Instead, why don’t you take some time off, think things through, and come back when you have actually settled on a stable argument that you believe can withstand critical scrutiny. We’ll still be here.

    It’s inconsiderate to ask us to keep up with an argument that changes every few hours.

  46. GP’s #909 attempts to clarify his distinction between IS and NS, and the extent to which the ‘coins’ model is invalid because it is not ‘NS’ but ‘IS’.

    The problem appears to start with the ‘target’ – >10^60. But I think he may be mixing Lizzie’s term ‘house jackpot’ with ‘target’. You have to stop somewhere, in a model, whereas reproduction and mutation carry on as long as there are replicators to replicate. The house jackpot merely stops the simulation, and plays no part in the ‘selection’ process – it just announces ‘dFSCI’ according to some threshold.
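    To make the jackpot’s role concrete, here is a minimal sketch (my own illustrative code, not Lizzie’s actual program) of the run-product score and the threshold check. Note that the threshold function is entirely separate from scoring: it can announce a winner and stop the simulation without playing any part in selection.

```python
from itertools import groupby

JACKPOT = 10 ** 60  # Lizzie's 'house jackpot' threshold

def runs_product(series):
    """Product of the lengths of all runs of heads ('H') in a series of tosses."""
    product = 1
    for toss, run in groupby(series):
        if toss == 'H':
            product *= sum(1 for _ in run)
    return product

def hits_jackpot(series):
    """Only stops the simulation / announces 'dFSCI'; not used for selection."""
    return runs_product(series) > JACKPOT

# Example from the opening post: H T T H H H T H T T H H H H T T T
# has runs of heads of lengths 1, 3, 1, 4, so the product is 1*3*1*4 = 12.
assert runs_product(list("HTTHHHTHTTHHHHTTT")) == 12
```

    (For scale: the 500-toss series ("HHHH" + "T") * 100 has product 4^100 ≈ 1.6×10^60, just over the threshold.)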

    GP has chosen an inactivated, duplicated gene as the ‘real’ system he would like to analogise, and considers a couple of scenarios.

    1) Where a ‘new biochemical function’ is achieved only when the ‘jackpot’ is reached. An island in the ‘search space’, effectively. Lizzie’s model cannot find this, because essentially all strings not on ‘dry land’ should be equally fit, and random strings are overwhelmingly of this kind. They will drift upon the ocean.

    2) He drains the ocean a little, and creates a somewhat larger landmass at 10^58. The same applies – assuming all ‘ocean’ strings are equally fit, it will sail this sea till Kingdom Come.

    And you can keep going, removing all this ‘fitness-flattening’ water until, surrounded by gasping fish and shipwrecks, you expose Lizzie’s landscape – one where the ‘fitness-flattening’ is confined to muddy pools of the worst strings, which are perhaps as unlikely to arise as the best. Everything else is a mixture of peaks of varying heights, with the ‘target’ sitting atop a particularly lofty one.

    So yes, Lizzie is clearly dealing with a navigable landscape, unrealistically populated with viability throughout. BUT … fitness of strings is not evaluated with respect to the ‘target’. It is simply evaluated with respect to the current ‘gene pool’. Some members of the gene pool (by virtue of having a higher product than others around at the same time) are more likely to have digital babies than others – ie, differential reproduction causally linked to their genotype: NS

    This is an analogue of NS. It is totally unrealistic if you try and equate these particular strings to functional genes, but it serves its intended purpose of illustrating how apparent ‘dFSCI’ – an ‘unlikely’ target – can be located by stumbling around a fitness landscape guided only by differential reproduction of the members of any current population – ie, by NS. Even the IS in the model does not directly compare strings against 10^60. It compares those available against each other, analogising the greater likelihood that fitter genes will persist more frequently than the less fit. Its ‘selection module’ could be a bit more stochastic, to be a bit more realistic, but that’s all.
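    The selection step described here can be sketched in a few lines. This is my own illustrative code, not Lizzie’s actual program; the population size (100) and cull-half rule come from the opening post, while the per-toss mutation rate is an assumption. The key point it illustrates: fitness is compared only within the current population, and 10^60 is never consulted.

```python
import random
from itertools import groupby

SERIES_LEN = 500
POP_SIZE = 100

def runs_product(series):
    """Product of the lengths of the runs of heads in a series."""
    product = 1
    for toss, run in groupby(series):
        if toss == 'H':
            product *= sum(1 for _ in run)
    return product

def mutate(series, rate=1 / SERIES_LEN):
    """Flip each toss independently with a small probability (point mutation)."""
    return [('T' if t == 'H' else 'H') if random.random() < rate else t
            for t in series]

def next_generation(population):
    """Cull the less fit half *relative to the current population*, then
    refill with a mutated offspring of each survivor.  No absolute target
    (such as 10**60) is ever consulted."""
    ranked = sorted(population, key=runs_product, reverse=True)
    survivors = ranked[:POP_SIZE // 2]
    offspring = [mutate(s) for s in survivors]
    return survivors + offspring

# Randomly generated starting population, as in the opening post.
population = [[random.choice('HT') for _ in range(SERIES_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(50):
    population = next_generation(population)
best = max(runs_product(s) for s in population)
# 'best' tends to climb from generation to generation, driven only by
# differential reproduction within each population.
```

    Because the fitter half is retained each generation, the best product in the population can never decrease – an analogue of fitter genotypes persisting more frequently than less fit ones.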

    It is also incorrect to say that it fails to include negative selection. It does not include lethality, but positive and negative selection are always relative to each other. Even an inviable genotype can be seen as simply ‘positive’ selection on all the ones that weren’t inviable, just as much as ‘negative’ selection on the one that was. Lost is lost, whether your genotype died instantly, a moment before reproducing, or after a long but tragically sex-impoverished life. Lethality still involves a reproductive differential, albeit an extreme one. Either there is NO difference between fitnesses – s=0, and all is Drift – or there IS a difference, in which case those with higher fitness are positively selected and those with lower are negatively selected.

    As a population moves towards a fitness peak, more and more mutations will be in the ‘negative’ direction – including strings which, in an ancestral population, were ‘selected’ positively, because they were fitter at the time.

    And I will point again to this paper as direct empirical evidence contradicting the assumption that protein function must be islanded (based more upon intuition than investigation of actual protein spaces). It’s not an evolutionary experiment, but it contains massive mutational steps sampling protein space, and a ‘live’ selective environment for the results. They sample a tiny portion of protein space, with proteins designed for structure (foldability) but not function, and find a surprising array of different functions within it (well, four, but I’m amazed they found any).

  47. I have been sarcastic and flippant at times, but I think my observation is true, that when it is convenient for the ID argument, genomes are information and can be modeled as code, but when genetics is modeled as information, suddenly such metaphors are invalid because they aren’t “real”. Suddenly you need real chemistry.

    Conversely, when real evolution is done by people like Lenski or Thornton, and the principles involved in bridging multi-step adaptations are laid out in pathetic detail, ID reverts back to information models and claims there’s just too doggone much dFSCI in some structures.

    GP has all the posting freedom he needs to respond to my questions about Lenski and Thornton. He’s had versions of these questions since the discussion at Mark Frank’s blog.

    The questions are: how does a designer know the emergent properties of biological molecules? How does he know what properties are required, and will be required, in a dynamic environment? If evolution is capable of testing, and does in fact test, exhaustively, all possible variations adjacent to current sequences, what value is added by a hypothetical designer? In the Lenski scenario, how does a designer intelligently select the neutral mutations observed in the first 20,000 generations?

    What future findings are you expecting that will positively support a design inference and be incompatible with incrementalism?

  48. The fact that GP thinks intelligent selection striving toward a pre-specified function is more powerful than natural selection, which can simultaneously monitor thousands of dimensions of fitness and which is ecumenical with regard to targets, suggests he hasn’t really understood what evolution is.

    He keeps bringing up specifications as if they are goals.

    Only in laboratory conditions do we encounter such narrow kinds of selection. It’s actually rather remarkable that the Lenski bacterium invented its adaptation. In the real environment, species confronted with such a sudden constraint are more likely to go extinct. As most have.
