Gpuccio’s Theory of Intelligent Design

Gpuccio has made a series of comments at Uncommon Descent and I thought they could form the basis of an opening post. The following comments were copied and pasted from Gpuccio's comments, starting here

 

To onlooker and to all those who have followed this discussion:

I will try to express again the procedure to evaluate dFSCI and infer design, referring specifically to Lizzie's "experiment". I will try also to clarify, while I do that, some side aspects that are probably not obvious to all.

Moreover, I will do that a step at a time, in as many posts as necessary.

So, let’s start with Lizzie’s “experiment”:

Creating CSI with NS
Posted on March 14, 2012 by Elizabeth
Imagine a coin-tossing game. On each turn, players toss a fair coin 500 times. As they do so, they record all runs of heads, so that if they toss H T T H H H T H T T H H H H T T T, they will record: 1, 3, 1, 4, representing the number of heads in each run.

At the end of each round, each player computes the product of their runs-of-heads. The person with the highest product wins.

In addition, there is a House jackpot. Any person whose product exceeds 10^60 wins the House jackpot.

There are 2^500 possible runs of coin-tosses. However, I'm not sure exactly how many of that vast number of possible series would give a product exceeding 10^60. However, if some bright mathematician can work it out for me, we can work out whether a series whose product exceeds 10^60 has CSI. My ballpark estimate says it has.

That means, clearly, that if we randomly generate many series of 500 coin-tosses, it is exceedingly unlikely, in the history of the universe, that we will get a product that exceeds 10^60.

However, starting with a randomly generated population of, say 100 series, I propose to subject them to random point mutations and natural selection, whereby I will cull the 50 series with the lowest products, and produce “offspring”, with random point mutations from each of the survivors, and repeat this over many generations.

I've already reliably got to products exceeding 10^58, but it's possible that I may have got stuck in a local maximum.

However, before I go further: would an ID proponent like to tell me whether, if I succeed in hitting the jackpot, I have satisfactorily refuted Dembski’s case? And would a mathematician like to check the jackpot?

I’ve done it in MatLab, and will post the script below. Sorry I don’t speak anything more geek-friendly than MatLab (well, a little Java, but MatLab is way easier for this)
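Lizzie's MatLab script is not reproduced in this excerpt. For readers who want to try it, here is a minimal Python sketch of the procedure she describes; the population size, mutation rate and generation count are illustrative assumptions, not her parameters.

    import random

    N_BITS, POP_SIZE, GENERATIONS, MUT_RATE = 500, 100, 3000, 1.0 / 500

    def product_of_runs(bits):
        """Score a series: the product of the lengths of all runs of heads (1s)."""
        product, run = 1, 0
        for b in bits:
            if b:
                run += 1
            elif run:
                product, run = product * run, 0
        return product * run if run else product

    def mutate(bits):
        """Copy a series, flipping each bit independently with probability MUT_RATE."""
        return [b ^ (random.random() < MUT_RATE) for b in bits]

    population = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        population.sort(key=product_of_runs, reverse=True)
        survivors = population[:POP_SIZE // 2]                    # cull the 50 lowest products
        population = survivors + [mutate(s) for s in survivors]   # one mutated offspring per survivor

    print(product_of_runs(max(population, key=product_of_runs)))
    # the best product typically climbs far beyond anything random sampling reaches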

 

Now, some premises:

a) dFSI is a very clear concept, but it can be expressed in two different ways: as a numeric value (the ratio between target space and search space, expressed in bits a la Shannon); let's call that simply dFSI. Or as a categorical value (present or absent), derived by comparing the value obtained that way with some pre-defined threshold; let's call that simply dFSCI. I will be especially careful to use the correct acronyms in the following discussion, to avoid confusion.

b) To be able to discuss Lizzie’s example, let’s suppose that we know the ratio of the target space to the search space in this case, and let’s say that the ratio is 2^-180, and therefore the functional complexity for the string as it is would be 180 bits.

c) Let’s say that an algorithm exists that can compute a string whose product exceeds 10^60 in a reasonable time.
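As a small numeric sketch of premise a), using the ratio assumed in premise b) (and, purely for illustration, the 150-bit threshold that appears later at point 6):

    from math import log2

    target_search_ratio = 2 ** -180          # the ratio assumed in premise b)
    dFSI = -log2(target_search_ratio)        # 180.0 bits of functional complexity

    def dFSCI(dfsi_bits, threshold_bits):
        """Categorical value: functional complexity is 'present' only above the threshold."""
        return dfsi_bits > threshold_bits

    print(dFSI, dFSCI(dFSI, 150))            # 180.0 True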

If these premises are clear, we can go on.

Now, a very important point. To go on with a realistic process of design inference based on the concept of functionally specified information, we need a few things clearly defined in any particular example:

1) The System

This is very important. We must clearly define the system for which we are making the evaluation. There are different kinds of systems. The whole universe. Our planet. A lab flask. They are different, and we must tailor our reasoning to the system we are considering.

For Lizzie's experiment, I propose to define the system as a computer or informational system of any kind that can produce random 500-bit strings at a certain rate. For the experiment to be valid to test a design inference, some further properties are needed:

1a) The starting system must be completely "blind" to the specific experiment we will make. IOWs, we must be sure that no added information is present in the system in relation to the specific experiment. That is easily realized by having the system assembled by someone who does not know what kind of experiment we are going to make. IOWs, the programmer of the informational system just needs to know that we need random 500-bit strings, but he must be completely blind to why we need them. So, we are sure that the system generates truly random outputs.

1b) Obviously, an operator must be able to interact with the system, and must be able to do two different things:

– To input his personal solution, derived from his personal intelligent computations, so that it appears to us observers exactly like any other string randomly generated by the system.

– To input in the system any string that works as an executable program, whose existence will not be known to us observers.

OK?

2) The Time Span:

That is very important too. There are different Time Spans in different contexts. The whole life of the universe. The life of our planet. The years in Lenski’s experiment.

I will define the Time Span very simply, as the time from Time 0, which is when the System comes into existence, to Time X, which is the time at which we observe for the first time the candidate designed object.

For Lizzie’s experiment, it is the time from Time 0 when the specific informational system is assembled, or started, to time X, when it outputs a valid solution. Let’s say, for instance, that it is 10 days.

OK?

3) The specified function

That is easy. It can be any function objectively defined, and objectively assessable in a digital string. For Lizzie's experiment, the specified function will be:

Any string of 500 bits where the product calculated as described exceeds 10^60

OK?

4) The target space / search space ratio, expressed in bits a la Shannon. Here, the search space is 500 bits. I have no idea how big the target space is, and apparently neither does Elizabeth. But we both have faith that a good mathematician can compute it. In the meantime, I am assuming, just for discussion, that the target space is 320 bits big, so that the ratio is 180 bits, as proposed in the premises.

Be careful: this is not yet the final dFSI for the observed string, but it is a first evaluation of its upper bound. Indeed, a purely random System can generate such a specified string with a probability of 1:2^180. Other considerations can certainly lower that value, but not increase it. IOWs, a string with that specification cannot have more than 180 bits of functional complexity.

OK?

5) The Observed Object, candidate for a design inference

We must observe, in the System, an Object at time X that was not present, at least in its present arrangement, at time 0.

The Observed Object must comply with the Specified Function. In our experiment, it will be a string with the defined property, that is outputted by the System at time X.

Therefore, we have already assessed that the Observed Object is specified for the function we defined.

OK?

6) The Appropriate Threshold

That is necessary to transform our numeric measure of dFSI into a categorical value (present / absent) of dFSCI.

In what sense does the threshold have to be "appropriate"? That will be clear, if we consider the purpose of dFSCI, which is to reject the null hypothesis of a random generation of the Observed Object in the System.

As a preliminary, we have to evaluate the Probabilistic Resources of the system, which can be easily defined as the number of random states generated by the System in the Time Span. So, if our System generates 10^20 random strings per day, in 10 days it will generate 10^21 random strings, that is about 70 bits.

The Threshold, to be appropriate, must be many orders of magnitude higher than the probabilistic resources of the System, so that the null hypothesis may be safely rejected. In this particular case, let's go on with a threshold of 150 bits, certainly larger than needed, just to be on the safe side.
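The arithmetic of this step, with the generation rate and Time Span assumed above, can be sketched in a few lines (illustrative only):

    from math import log2

    strings_per_day = 1e20                     # assumed generation rate of the System
    time_span_days = 10                        # the assumed Time Span
    resources_bits = log2(strings_per_day * time_span_days)   # ~69.8 bits

    threshold_bits = 150
    print(resources_bits, threshold_bits - resources_bits)
    # ~69.8 bits of probabilistic resources, leaving ~80 bits of margin below the threshold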

7) The evaluation of known deterministic explanations

That is where most people (on the other side, at TSZ) seem to become “confused”.

First of all, let’s clarify that we have the duty to evaluate any possible deterministic mechanism that is known or proposed.

As a first hypothesis, let’s consider the case in which the mechanism is part of the System, from the start. IOWs the mechanism must be in the System at time 0. If it comes into existence after that time because of the deterministic evolution of the system itself, then we can treat the whole process as a deterministic mechanism present in the System at time 0, and nothing changes.

I will treat separately the case where the mechanism appears in the system as a random result in the System itself.

Now, first of all, have we any reason here to think that a deterministic explanation of the Observed Object can exist? Yes, we have indeed, because the very nature of the specified function is mathematical and algorithmic (the product of the runs of heads must exceed 10^60). That is exactly the kind of result that can usually be obtained by a deterministic computation.

But, as we said, our System at time 0 was completely blind to the specific problem and definition posed by Lizzie. Therefore, we can be safely certain that the system in itself contains no special algorithm to compute that specific solution. Arguing that the solution could be generated by the basic laws of physics is not a valid alternative (I know, some darwinist at TSZ will probably argue exactly that, but out of respect for my intelligence I will not discuss that possibility).

So, we can more than reasonably exclude a deterministic explanation of that kind for our Observed Object in our System.

7) The evaluation of known deterministic explanations (part two)

But there is another possibility that we have the duty to evaluate. What if a very simple algorithm arose in the System by random variation? What if that very simple algorithm can output the correct solution deterministically?

That is a possibility, although a very unlikely one. So, let's consider it.

First of all, let’s find some real algorithm that can compute a solution in reasonable time (let’s say less than the Time Span).

I don't know if such an algorithm exists. In my premise c) at post #682 I assumed that it exists. Therefore, let's imagine that we have the algorithm, and that we have done our best to ensure that it is the simplest algorithm that can do the job (it is not important to prove that mathematically: it's enough that it is the best result of the work of all our mathematician friends or enemies; IOWs, the best empirically known algorithm at present).

Now we have the algorithm, and the algorithm must obviously be in the form of a string of bits that, if present in the System, will compute the solution. IOWs, it must be the string corresponding to an executable program appropriate for the System, and that does the job.

We can obviously compute the dFSI for that string. Why do we do that?

It's simple. We now have two different scenarios where the Observed Object could have been generated by RV:

7a) The Observed Object was generated by the random variation in the System directly.

7b) The Observed Object was computed deterministically by the algorithm, which was generated by the random variation in the System.

We have no idea of which of the two is true, just as we have no idea if the string was designed. But we can compute probabilities.

So, we compute the dFSI of the algorithm string. Now there are two possibilities:

– The dFSI for the algorithm string is higher than the tentative dFSI we already computed for the solution string (higher than 180 bits). That is by far the most likely scenario, probably the only possible one. In this case, the tentative value of dFSI for the solution string, 180 bits, is also the final dFSI for it. As our threshold is 150 bits, we infer design for the string.

– The dFSI for the algorithm string is lower than the tentative dFSI we already computed for the solution string (lower than 180 bits). There are again two possibilities. If it is nevertheless higher than 150 bits, we infer design just the same. If it is lower than 150 bits, we state that it is not possible to infer design for the solution string.

Why? Because a purely random pathway exists (through the random generation of the algorithm) that will lead deterministically to the generation of the solution string, with a total probability for the whole process that is higher than our threshold allows (i.e., a complexity lower than 150 bits).

OK?
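Stated as a procedure, the decision rule of point 7 (as described above; the numbers are the assumed ones) amounts to capping the solution string's dFSI by the dFSI of the simplest known algorithm that computes it:

    def infer_design(dfsi_solution, dfsi_algorithm, threshold=150):
        """Return (design inferred?, final dFSI), per the rule described above."""
        final_dfsi = min(dfsi_solution, dfsi_algorithm)
        return final_dfsi > threshold, final_dfsi

    print(infer_design(180, 400))   # (True, 180): algorithm more complex than the string
    print(infer_design(180, 160))   # (True, 160): algorithm simpler, but still above threshold
    print(infer_design(180, 120))   # (False, 120): a sub-threshold random pathway exists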

8) Final considerations

So, some simple answers to possible questions:

8a) Was the string designed?

A: We either infer design for it, or we do not. In science, we never know the final truth.

8b) What if the operator inputted the string directly?

A: Then the string is designed by definition (a conscious intelligent being produced it). If we inferred design, our inference is a true positive. If we did not infer design, our inference is a false negative.

8c) What if the operator inputted the algorithm string, and not the solution string?

A: Nothing changes. The string is nonetheless designed, because it is the result of the input of a conscious intelligent operator, although an indirect input. Again, if we inferred design, our inference is a true positive. If we did not infer design, our inference is a false negative. IOWs, our inference is completely independent of how the designer designed the string (directly or indirectly).

8d) What if we do not realize that an algorithm exists, but an algorithm does exist that is less complex than the string, and less complex than the threshold?

A: As already said, we would infer design, at least until we are made aware of the existence of such an algorithm. If the string really originated randomly through a random emergence of the algorithm, that would be a false positive.

But, for that to really happen, many things must become true, and not only “possible”:

a) We must not recognize the obvious algorithmic nature of that particular specified function.

b) An algorithm must really exist that computes the solution and that, when expressed as an executable program for the System, has a complexity lower than 150 bits.

I am absolutely confident that such a scenario can never be real, and so I believe that our empirical specificity of 100% will always be confirmed.

Anyway, the moment anyone shows an algorithm with those properties, the design inference for that Object is falsified, and we have to assert that we cannot infer design for it. This new assertion can be either a false negative or a true negative, depending on whether the solution string was really designed (directly or indirectly) or not (randomly generated).

That’s all, for the moment.

AF adds "This was done in haste. Any comments regarding errors and omissions will be appreciated."

 

 

 

263 thoughts on "Gpuccio's Theory of Intelligent Design"

  1. It sounds like you would consider an evolvable Corewars style system to be a real model of natural selection, essentially a simulation where, like in the real world, the only goal is survival and reproduction.

    What he wants is something like Thomas Ray's Tierra, as we discussed on Mark Frank's blog lo these many months ago.

    I’m curious, how would you measure functional complexity in such an environment?

    Excellent question. I wish you luck in getting a straight answer. Intelligent Design Creationists in general and UD denizens in particular are not known for their willingness to make testable claims.

  2. Joe: IOW it demonstrates the severe limits of natural selection.

    Just so we’re clear, you agree that there are selectable intermediaries between wolves and toy poodles? 

    Joe: If the source of variation is planned then it cannot be natural selection, by definition.

    That is false. Again, you are confusing two different processes; the sources of variation and natural selection. For instance, if a genetically modified organism escapes into the natural environment, it will be subject to natural selection just like any other phenotype. 

    Joe: Darwin always referred to variation by chance.

    That is also false. Darwin proposed a non-random source of variation called Pangenesis, a speculative theory which included Lamarckian inheritance of acquired traits. 
     

  3. Once again, if anyone has ever produced a creationist critique of evolution as understood by science rather than as misrepresented by creationists, nobody here has ever found it. ALL critiques are of straw men.

    And so conversations with creationists consist often of “this is false, here’s the correction” repeated until the creationist drops the subject, often with insults. 

  4. Mung: GA’s do not use only differentials in birth and death, mutation and (optionally) recombination. 

    Differential refers to differences due to relative fitness, usually defined by a fitness function or map. 
     

  5. It’s slightly surprising how many people are willing to judge the efficacy of GA’s without being able to write one, even at the specification or pseudocode level. This really ought to be a prerequisite to discussing evolution.

  6. Mung is under the impression that a GA has to be seeded with potential solutions:

    For example, potential solutions must be encoded into a “chromosome.”

    No, Mung, potential solutions do not have to be encoded into a “chromosome”. That is optional. You can start with a purely random “genome”, as Lizzie does in her program.

  7. Mung writes:

    For example, potential solutions must be encoded into a “chromosome.”

    …and tries to support his statement by saying:

    There is at least the possibility that a solution will be found among the first 100 randomly generated genomes [in Lizzie’s program], though she doesn’t actually check to see if that is the case.

    Suppose Lizzie’s program initialized the genomes to all 0’s. Then there would be no potential solutions among the initial genomes, yet the program would still converge to a solution.

    Mung’s statement is incorrect. It is not mandatory to encode potential solutions into the genome.

    A few minutes after posting his initial comment, Mung seems to realize that he’s overstepped, and softens his claim. This time, instead of claiming that potential solutions must be encoded into the genome, he backtracks and links to a site that merely explains that you have to pick the encoding scheme:

    The most critical problem in applying a genetic algorithm is in finding a suitable encoding of the examples in the problem domain to a chromosome.   [emphasis is Mung’s]

    Well, duh. Of course you have to have an encoding. It’s a computer program! Complaining about that is as silly as complaining about this: You give Johnny a fiendishly difficult 12th degree polynomial equation to solve. He asks if the solutions are numbers. Before you can stop her, your friend tells Johnny that yes, the solutions are numbers. You complain bitterly, saying that she has given the answer(s) away.

    Telling Johnny that the answers are numbers doesn’t give away the solutions. Selecting an encoding for a GA doesn’t give away the solutions, either.

    Maybe someone over there at TSZ will be kind to you before you put your foot in it any more than you already have.

    I’m sorry, Mung, could you repeat that? I think you have your foot in your mouth.

    P.S. Please remind your buddy Upright Biped that Reciprocating Bill has some questions for him, and that Allan and I have refuted the latest version of his “semiotic argument for ID”.

  8. Also natural selection is supposed to be blind and mindless. And that cannot be with directed mutations.

    God, Joe – Variation and Selection are two different things! How long have you been discussing evolution, now? Explicitly chosen mutations can still be filtered by the blind and mindless process (which could not be otherwise, unless there is also an Intelligent Selector with a population-wide overview) of one type leaving more or fewer offspring than another.

  9. Zachriel: Some novel protein domains are available to completely random processes.

    Mung: Novel. Would that be like, new?

    Maybe you and Allan can talk (link to this):

    I do wonder what GP has in mind when he says “new protein domain” or “new biochemical function”? Does he have one that he considers ‘new’, and definitively inaccessible by the probabilistic resources available to any ancestors – something that can be investigated, rather than his personal, very general assumptions about the structure of protein space and its distribution of function? 

    The only proteins we need to concern ourselves with are the ones that exist, and their accessibility from other points in the space that (on the evolutionary assumption) were occupied by ancestral sequences with similar or other functions. So it would help to have a concrete example, rather than this ‘function is universally restricted to tiny, widely-separated islands’ nonsense, which is empirically demonstrated to be untrue.

    Note that I am asking GP what he considers ‘new’, not denying that anything in biology can ever be considered such. It’s a word we can choose to apply if we wish, and since GP does, I’d like to know what specific example he has in mind, rather than casting the word across the entirety of life because it must apply to something, somewhere.

    My point about accessibility in ‘space’ was that, barring a very few bases, every DNA base in existence today has apparently been template-copied from another. This is the mechanism that probes protein space, randomly, and detects ‘novelty’ within it if you wish to call it that. There appears to be no significant mechanism to introduce new DNA sequence other than through template copying and fragment shifting, outwith ID. There is no separate process that assembles new base sequences out of thin air, rather than from the various mutational processes acting upon existing sequence. 

    There is no doubt that ‘new’ folds, and ‘new’ function, must be capable of arising. But that ‘newness’ is something that we make a call on, not evolution. There is not a different mechanism dealing with ‘newness’ vs that dealing with ‘oldness’.

  10. GP, responding to my point on Behe’s omission of recombination.

    I think you are wrong here. […]

    The real point is that, while your discourse about recombination can make some sense in the recombination of functional elements, it is of no importance in the case of individual mutations that have no function until they conflate in a more complex output. The important point is: a recombination can certainly join two mutations, but it can join any set of two mutations with the same probability, unless we can show that some mutations, and in particular those that are necessary for the future function, recombine more frequently than others. IOWs recombination in this case does not alter the probabilistic scenario.

    This is an error often made by many darwinists. […]

    This is an error often made by Creationists! Without a better grasp of population genetics, and an apparent confusion about the separate roles of phenotype and genotype in evolution, you simply hand-wave away the probabilistic relevance of recombination. It’s only important when genes recombine, but not when subunits do? How come there are so many common modules, then? Not design, surely? The first, maybe, but what about all the children?

    On the CCC, it's still a probabilistic case for A-B, but you have to include all the contributors to the probability of that combination, otherwise you're just weighting the game to win a point. Behe makes an argument based upon serial probability of double mutation. The independent mutations A and B have an individual probability of arising, and the second mutation has a similarly small probability of arising within the A- or B-bearing subset of the population. The result is a very small probabilistic product of the two. But given the nonzero probability of A-only and B-only existing (which must be the case in order to have a nonzero serial probability), there is a further probability to be taken into consideration – you don't have to wait for A to add the B mutation, or B to add the A; there can be recombination between members of the A- and B- populations. This probability is multiplicative, akin to the counter-intuitive 'birthday' probability, and gives a substantial boost to the probability that A-B will arise in the population.
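    A back-of-the-envelope illustration of that last point, with toy numbers (not Behe's or anyone else's figures, and ignoring selection, drift and population dynamics entirely):

        freq_A_only = 1e-4    # assumed frequency of A-only carriers in the population
        freq_B_only = 1e-4    # assumed frequency of B-only carriers
        mu = 1e-8             # assumed per-site mutation probability per offspring
        r = 0.5               # assumed chance a cross between carriers yields an A-B offspring

        p_serial = freq_A_only * mu + freq_B_only * mu   # add B to an A carrier, or A to a B carrier
        p_recomb = freq_A_only * freq_B_only * r         # mate an A carrier with a B carrier

        print(p_serial, p_recomb, p_recomb / p_serial)
        # with these toy numbers the recombination route is a few thousand times more likely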

    Crossover and other recombinational mechanisms massively increase the power and reduce the search time of GAs. Even on ‘single-gene’ models – ‘subgene’ swapping has a real effect. This is a fact, which you could establish for yourself by writing a GA.

    You seem to think that gene disruption – a phenotypic effect –  puts a damper on ‘real’ evolution. And of course it does. But to what extent? And is that extent greater than the disruption caused by new combinations of separate genes? You can’t just say “yes!”. If genes were just evaluated as ‘raw’ DNA, their evolution would be closer to that exhibited in the simplest GAs. Now, you could model that phenotypic constraint. You could introduce constraints such that recombinational products were typically less fit, particularly where involving subunits. Then, unsurprisingly, the power of recombination on that GA would be diminished. And if recombination were under genetic control, it too would be selected against. So the question is: how biologically realistic is the extreme version of that constraint? Has it just been imported because you don’t like the implications of the less-constrained model?

    I say again: within-protein and between-protein recombinations happen, as demonstrated by sequence data. So while some dampening is introduced by phenotype, it is not so extreme as to forbid recombination (else there would be selection against ALL recombination, since the mechanisms don't 'know' where the genes start and finish). And domains can be deconstructed. Four amino acids will make a turn of a helix. Duplicate that 'proto-domain' a few times and you have an extended helix, 50, 100 bases long … and the ID-er comes along and declares that the domain is irreducibly complex – for, if you remove it from the modern protein, or even chop it back to 4 bases, it ceases to work!

  11. Yeah, he’s pretty confused on that point. 

    Mung: There is at least the possibility that a solution will be found among the first 100 randomly generated genomes, though she doesn’t actually check to see if that is the case.

    Yes, that's the nature of randomness and its fit to a landscape. Think about it. Some random sequences will inevitably fit better than others.

  12. gpuccio: It is possible that certain recombinations are favoured versus others by the structure itself of the genome, for instance whole gene or whole exon recombinations could be favoured versus purely random ones. There can be genomic sites that make recombination more likely. All that could increment the power of recombination, but should be explained as an adaptive mechanism already present in the existing genome.

    That isn’t necessary to show that recombination is a powerful mechanism for generating novelty. It depends on the fitness landscape, of course. In word-space, the “ing” in “king” can recombine to create gerunds or participles, such as “saying”. Word-space is full of such simple motifs that readily combine to create new words. Similarly, in protein-space, simple motifs are often repeated, and recombination between sequences that exhibit such motifs are much more likely to generate workable proteins. 

    gpuccio: This is an error often made by many darwinists. A random effect does not change the probabilities of a specific output, unless we can demonstrate some explicit connection between the effect and the output.

    That’s the error of IDists. They assume that evolutionary processes are no better than random assembly—but that’s simply not the case. If you recombine workable protein sequences, you are much more likely to find a new workable protein sequence than random assembly alone.

    gpuccio: NS is selection based only on fitness/survival advantage of the replicator. The selected function is one and only one, and it cannot be any other. Moreover, the advantage (or disadvantage, in negative selection) must be big enough to result in true expansion of the mutated clone and in true fixation of the acquired variation. IOWs, NS is not flexible (it selects only for a very tiny subset of possible useful functions) and is not powerful at all (it cannot measure its target function if it is too weak).

    Natural selection is based on the reproductive fitness of the replicator. There can be many functions that accomplish this aim, so if longer legs provide an advantage, then it can be subject to natural selection. In the abstract, this is done with a fitness landscape, but more detailed simulations are possible. As for the size of the advantage, that is also easily simulated.

     

  13. potential solutions must be encoded into a “chromosome.”

    All ‘chromosomes’ – start, intermediate and solution – must exist in the space-of-all-possible-strings available to the GA! Kind of axiomatic. In the case of evolution, the space-of-all-possible-chromosomes contains all-real-chromosomes. It’s the space of all AT/CG/GC and TA pair strings. So you couldn’t not start with a ‘potential solution’ (‘solution’ in evolution being a fitter genome for now).    

    In a GA you could start with a string of length zero, since (unlike biology), the copy method is not part of the heritable string. One would, of course, need a method which could add bits to such an empty string, and a fitness function that did not regard such strings as inviable. In a sense, an evolutionary GA could be regarded as examining the behaviour of ‘extra’ DNA, tacked onto a taken-for-granted replicative core. I know ID-ers don’t like taking the OoL for granted, but it is simply not part of evolution.

    The start string does not have to be a solution, but a path needs to exist that allows it to become one according to the variational methods incorporated and the probabilistic resources available. In any GA you have no idea if such a path exists, nor what the actual solution will be – kind of the point of doing it, to see.

  14. Zachriel: Some novel protein domains are available to completely random processes. However, the natural history is not well-documented.

    gpuccio: What do you mean? To what are you referring here?

    Random sequences can form active proteins (Keefe & Szostak, 2001). The origin of the original protein domains is still largely conjectural. However, random sequences are fairly rich in active proteins. By the way, if you want to find a needle in a haystack, try sitting on it.

    Zachriel: Keep in mind that your “don’t think” encompasses all evolutionary algorithms. Evolutionary algorithms, such as Word Mutagenation, can show you how and why recombination is such a powerful force for novelty.

    gpuccio: Can you give us the code? Can we discuss the oracles in it?

    The algorithm is very simple. The landscape is the dictionary of valid words. The population is composed of sequences of letters that form words. The algorithm randomly mutates and recombines these sequences of letters. If they form a word, they enter the population. If they do not form a word, they do not enter the population. So, if the population includes “king” and “say”, they might evolve in the next generation to form “hay” (mutation) and “saying” (recombination). 

    A couple of insights: It is possible to evolve long words much faster than random assembly. Recombination is essential to this process.  
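    For readers who want to see the idea concretely, here is a minimal Python sketch of that kind of word-evolution algorithm (not Zachriel's actual Word Mutagenation code); a tiny hard-coded word list stands in for the full dictionary "oracle":

        import random, string

        DICTIONARY = {"o", "on", "in", "kin", "king", "sing", "ring", "singing"}   # toy stand-in

        def mutate(word):
            if random.random() < 0.5:                      # substitute one letter
                i = random.randrange(len(word))
                return word[:i] + random.choice(string.ascii_lowercase) + word[i + 1:]
            i = random.randrange(len(word) + 1)            # or insert one letter
            return word[:i] + random.choice(string.ascii_lowercase) + word[i:]

        def recombine(a, b):
            i, j = random.randrange(len(a) + 1), random.randrange(len(b) + 1)
            return a[:i] + b[j:]

        population = {"o"}                                  # Word Mutagenation reportedly starts from "o"
        for _ in range(50000):
            parents = random.sample(sorted(population), min(2, len(population)))
            if len(parents) == 2 and random.random() < 0.5:
                child = recombine(*parents)
            else:
                child = mutate(parents[0])
            if child in DICTIONARY:                         # the dictionary acts as the landscape
                population.add(child)

        print(sorted(population, key=len))
        # in this toy set "singing" can only be reached by recombining shorter words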

  15. There can be genomic sites that make recombination more likely. All that could increment the power of recombination, but should be explained as an adaptive mechanism already present in the existing genome.

    This is simply contradictory. Shuffling bits and pieces of protein is an adaptive mechanism because it increases the power of module shuffling, which is a disruptive mechanism and has limited power of evolutionary exploration? Make your mind up!

    The bottom line point to bear in mind is that recombination (distinct from exon shuffling) is blind to gene expression. Totally. So it has nothing to ‘go on’ to establish what would be a legitimate swap and what would not. It is variable across genome length, for sure, for many reasons both ‘active’ and ‘passive’, but it is not attracted by regions that could do with a bit of a shake-up so much as repelled by those which would be better without.

    There are many different kinds of recombination, and I don’t know how much benefit there is in lumping them all together as ‘adaptive’ – which must involve direct genetic control with a fitness effect on the genes mediating that control to be meaningful. Recombination due to viruses, transposons, damage misrepair, ectopic misalignment in meiosis – these are no more obviously adaptive in themselves than point mutation. But, nonetheless, all recombinations, whether adaptive or not, still promote much wider exploration of protein space than you started off allowing for – but this is not always a good thing. Such exploration is not to the benefit of any individual organism, or most genes. It’s just something that happens, and organisms adapt if that-which-happens throws up a beneficial combination – one more source of the spectrum-of-variation on which NS works both positively and negatively.

  16. Joe: In what way is Lamarckian inheritance non-random?

    Because the posited source of variation is not random with respect to fitness; it consists of advantages acquired by the parent through use that are passed down to the children. So, if the parent uses a certain muscle a lot, then the child will be born with a larger muscle.

    Joe: For example, if a man loses his arm in an accident, an acquired trait, that would be random.

    A mouse losing a tail is not a heritable trait. (Weismann, 1899).

    Joe: GAs are a DESIGN mechanism, period.

    So are weather simulations and calculations of planetary orbits.

    Joe: Natural selection requires the fitness be due to heritable random variation(s)

    We already pointed to a simple counterexample. If a genetically modified organism enters the natural environment, it would be subject to natural selection. For that matter, so would a domestic dog entering the wild, à la The Call of the Wild.

     

  18. Joe: Natural selection requires the fitness be due to heritable random variation(s)

    Natural selection doesn’t give a damn how the variations were generated, nor what people variously mean when they stick ‘random’ in a sentence. 

    It's also debatable whether the variation needs strictly to be heritable, although obviously you aren't going to get any evolutionary change if it isn't. Heritability isn't a boolean; it's a continuum (unless you argue that 0 and non-0 are boolean, which is true but less informative). Heritability influences the coupling between the drive of NS and the wheels of evolutionary change. Anything over 0% means that phenotypic sorting has the power to influence genotype frequencies.

    gpuccio: But he never analyzed the original random sequences, which were selected for a mere very weak ability to bind ATP, and then intelligently engineered into the final protein.

    That’s right. They form compact three-dimensional structures, i.e. folds, capable of enzymatic activity. That’s what we were talking about.

    gpuccio: Oh, yes. The algorithm is very simple. And it has a whole dictionary as an oracle! Simple indeed.

    That’s right. The algorithm is very simple. The landscape, however, is highly complex and specified. Yet, the simple algorithm can navigate the complex landscape billions of times faster than random trial.

    gpuccio: And how does the algorithm know that a word was formed? Ah, I forgot! The dictionary.

    That’s right. It’s no different than comparing a sequence to a vast library of possible proteins.

    gpuccio: And I suppose that the dictionary is essential to appreciate the successes of recombination.

    Not at all. It’s just a single example. You “didn’t think” recombination would produce different results than mutation or even random trial, and your objection was very general. Hence, a general example is sufficient for you to see why recombination is an essential evolutionary mechanism.

    Again, because words share many of the same motifs, recombining words has a much higher likelihood of producing a new word than random assembly. Similarly with proteins, which also exhibit motifs.

     

  20. Joe: Differential reproduction due to heritable random variation (mutation)= natural selection

    The “random” is extraneous. Nor is mutation the only source of variation. Natural selection can occur when there are existing variations in a population, regardless of whether there is a source for novel variations. 

  21. Mung tries to change the subject:

    I said randomly generated genomes. I'd say the chance that Lizzie's program generated 100 strings of all 0's at random is about the same as her generating CSI.

    Nice try, Mung, but your own words betray you:

    There is more to a GA exploring certain kinds of digital space than differences due to relative fitness (usually defined by a fitness function or map), mutation and (optionally) recombination.

    For example, potential solutions must be encoded into a “chromosome.”

    Encoding potential solutions into a chromosome implies there is a problem to be solved.

    Information about which potential solutions are more likely to solve the problem must be implemented. [emphasis mine]

    The bolded statements are both wrong, for the reason I already gave:

    Suppose Lizzie's program initialized the genomes to all 0's. Then there would be no potential solutions among the initial genomes, yet the program would still converge to a solution.

    This shows that you don’t need to encode potential solutions into the chromosome, and you don’t need to implement “information about which potential solutions are more likely to solve the problem.”

    P.S. I think you’re being unnecessarily modest by refusing to nominate yourself for your own Junk for Brains Award. You’ve earned it, Mung. 

    P.P.S. Please remind your buddy Upright Biped that Reciprocating Bill has some questions for him, and that Allan and I have refuted the latest version of his “semiotic argument for ID”.  Upright seems to be afraid of defending his argument, as usual.

  22. Mung: Today's Junk for Brains winner is Zachriel, who chides ID'ers for "assuming that evolutionary processes are no better than random assembly" while appealing to random assembly by a random process such as recombination.

    I guess Zachriel can speak for himself, but randomness (as stochasticity) is of course part of the evolutionary process – recombination and mutation for sure, but stochastic influences on allele frequency changes come into it as well. The mistake would be to assume that this means that evolutionary 'search' is simply a matter of pulling sequences out of a metaphorical bag until, with a probability of 1 in n (the number of possible sequences) the desired sequence is hit. A random walk including random recombinations of existing sequence isn't the same as random 'assembly' repeatedly from scratch.

    If you don’t make that mistake, well done you, but many people, from Fred Hoyle onwards, have.

  23. How many times do you need to point out that evolution doesn’t test the universe of sequences, but just the next door neighborhood?

    How many times do you need to point out that the mere existence of alleles refutes the vertical cliff caricature of sequence landscape?

    Recombination is a larger scale mutation than base point change, but it is still a walk in the neighborhood.

  24.  Zachriel: The “random” is extraneous.

    Joe: Only to equivocators, like yourself.

    Hmm. You provided this definition on your own blog: “Natural selection is the result of differences in survival and reproduction among individuals of a population that vary in one or more heritable traits.”

    Natural selection occurs on existing variation. Consider a population of moths, some of which are white, and others are black. Natural selection might occur if white moths are preferentially eaten by birds, leaving the black moths to leave offspring. This is true regardless of the original source of the variation. 

    Zachriel: Natural selection can occur when there are existing variations in a population.

    Joe: Yes, it can. 

    There you are then. 

  25. Mung: Zachriel, who chides ID'ers for "assuming that evolutionary processes are no better than random assembly" while appealing to random assembly by a random process such as recombination.

    We admit, our language was poorly chosen. Thought it was clear in context, but you may not have followed the entire thread. By random assembly, we meant where each sequence is completely randomized with respect to previous sequences. Now that the point has been clarified, perhaps you would like to respond to our actual point. 

    Z: That’s the error of IDists. They assume that evolutionary processes are no better than searching completely randomized sequences—but that’s simply not the case. If you recombine workable protein sequences, you are much more likely to find a new workable protein sequence than searching completely randomized sequences. 

     

  26. Mung: There is at least the possibility that a solution will be found among the first 100 randomly generated genomes, though she doesn’t actually check to see if that is the case.

    Zachriel: Yes, that's the nature of randomness and its fit to a landscape.

    Mung: iow, I’m right. you know it. But you don’t have the guts to tell keiths.

    There doesn’t seem to be any disagreement between our position and keiths’. There’s apparently some confusion on your use of the word “solution”. 

    Mung: For example, potential solutions must be encoded into a “chromosome.”

    Your statement appears to imply that someone is “encoding” solutions into the “chromosome”. If the sequences are randomized, assuming we are fitting solutions to a landscape of some sort, then some may naturally fit better, albeit probably poorly, than others. 

    Perhaps you are referring to the nature of the landscape. Some landscapes are such that evolutionary algorithms don’t work well, or don’t work at all. That depends on the specifics, of course, but nearly all the objections raised by kairosfocus, gpuccio, Mung and others are so general as to apply to all landscapes. For instance, recombination is a very powerful mechanism across many rugged landscapes, and evolutionary algorithms work millions of times faster for such landscapes than simply choosing random sequences one after another. 

    Mung: Every chromosome generated by the GA is a potential solution. Else what is the point of generating them?

    Of course they are *potential* solutions, though solution may or may not be a single entity, but a best fit, for instance. 

    Allan Miller: In a GA you could start with a string of length zero

    Mung: What would a string of length zero consist of?

    ∅. You sound like someone who was just told about the existence of zero. Word Mutagenation usually starts with the single-letter word “O”. 
    http://www.zachriel.com/mutagenation/Sea.asp

  27. gpuccio,

    It is completely wrong to model NS using IS, because they have different form and power.

    As I said, you help me to refine my concepts, and I appreciate that.

    Before someone states that I am changing arguments, I would suggest that you read again my original definitions of IS and NS, from which this statement can very clearly be derived:

    “d) NS is different from IS (intelligent selection), but only in one sense, and in power:

    d1) Intelligent selection (IS) is any form of selection where a conscious intelligent designer defines a function, wants to develop it, and arranges the system to that purpose. RV is used to create new arrangements, where the desired function is measured, with the maximum possible sensitivity, and artificial selection is implemented on the base of the measured function. Intelligent selection is very powerful and flexible (whatever Petruska may think). It can select for any measurable function, and develop it in relatively short times.

    d2) NS is selection based only on fitness/survival advantage of the replicator. The selected function is one and only one, and it cannot be any other. Moreover, the advantage (or disadvantage, in negative selection) must be big enough to result in true expansion of the mutated clone and in true fixation of the acquired variation. IOWs, NS is not flexible (it selects only for a very tiny subset of possible useful functions) and is not powerful at all (it cannot measure its target function if it is too weak).

    This seems to be getting to the essence of our disagreement, especially when combined with your following comment:

    A distinction without a difference. The model shows that the mechanisms of the modern synthesis are quite capable of generating functional complexity in excess of that required by your dFSCI.

    This is exactly the type of wrong statement that has prompted me to analyze in detail this issue. Have you read my post #910 in the old thread? Please, refer to it for any following discussion on this.

    Yes, I re-read your 910 where you discuss what level of functionality is selectable. I find your thresholds to be arbitrarily selected, but that's not relevant to the essential difference I think we're finding.

    What I see is you focusing on the details of how a model is implemented rather than on the concepts being modeled. Yes, in many GAs the environment is modeled via a fitness function that is designed to accomplish some goal and the threshold for terminating the simulation is set (independently of the fitness function and other components of the GA) to recognize when that goal has been reached. None of that changes the fact that the GA is a model of an observed, natural process.

    It doesn’t matter if you label the model “intelligent selection” to somehow distinguish it from “natural selection”, what matters is that the pertinent mechanisms of the model are the same as those we observe in real biological systems.

    Heritable variation with differential reproductive success does, demonstrably, generate large amounts of functional complexity, according to your own definition. The only reason not to consider the results of these mechanisms of the modern synthesis to have dFSCI is because you define dFSCI in terms of knowledge about the provenance of the results and you define those mechanisms as “deterministic”.

    If you disagree, and I suspect you do, please explain why your distinction between “intelligent” and “natural” selection has any bearing on what is being modeled rather than the details of how the model is implemented.

  28. gpuccio,

    I spent some time yesterday looking through the Tierra project and it does appear to meet your criteria for what you think is a proper model of natural selection. Before I go further with it, I would like a clarification from you.

    I’m curious, how would you measure functional complexity in such an environment? Would it simply be the length in bits of the digital organisms? If an organism with sufficient functional complexity to meet your dFSCI threshold were to appear, would you consider it to have dFSCI or would the fact that it arose through evolutionary mechanisms, which might even be tracked mutation by mutation, mean that the dFSCI medal could never be earned?

    It's easy. I would proceed like Lenski. I would "freeze" (copy) the virus periodically to examine its code. If and when any functional string of code expresses a new function that helps the virus to reproduce, and therefore partially or totally replaces the simpler version, then it will be easy enough to evaluate the functional complexity of that new string of code, with the usual methods detailed at the beginning of your thread at TSZ.

    What “usual methods”? How, exactly, would you compute the functional complexity of a digital organism in Tierra? Patrick noted in the threads he referenced on Mark Frank’s blog that Tierra results in a number of different reproductive strategies, including parasitism and hyper-parasitism. What is the functional complexity of those organisms?

  29. During the first 20,000 generations in the Lenski experiment, mutations occurred that were neutral with regard to citrate metabolism, but which turned out to be crucial after subsequent mutations.

    How does the intelligent selector identify  and promote precursor changes?

  30. gpuccio: d1) Intelligent selection (IS) is any form of selection where a conscious intelligent designer defines a function, wants to develop it, and arranges the system to that purpose. 

    A clarification please. What do you mean by “system”? If you mean the entire simulation, then obviously that would preclude any and all simulations. 
     

  31. Mung

    What would a string of length zero consist of?

    I presume you mean ‘what’s the real-world equivalent?’ rather than ‘how would you code it?’, but just in case, an example of a string of length zero in VBA would be one which returned zero to the VBA LEN function. In COBOL, a group with a next level OCCURS DEPENDING ON X where X is set to zero. I’m sure many languages offer the same kind of thing. The null, the empty set, the nothing. While the ‘replication’ function of biological replicators is a vital part of the string, that role is taken by the copy method in a GA, so the strings themselves don’t actually need to consist of anything at the start. The point of bringing them up is to point out that such strings are not likely to be ‘solutions’ to any worthwhile GA, so you aren’t necessarily ‘pre-seeding’ the population with anything. 

    So consider the zero-length digital organism as the absolute minimal replicator common to all GAs. As long as a method exists that occasionally adds random bits to a string, something will soon emerge, and variations between these ‘non-null’ bit-strings can be evaluated by the selection module. A set of strings of length zero evidently cannot vary, but they can still ‘compete’ via drift. You can still replicate and remove strings of length zero from a population.

    But you could learn something about evolution by observing the behaviour of such populations, particularly inevitable coalescence of ancestry, before building up to something more elaborate. That’s the whole point of modelling, to reduce to essentials then reconstruct. The behaviour of finite replicating populations with no selection, mutation or recombination tells you a lot about the role of replication.

    (I do know that such organisms do not actually exist…).

    Lizzie could easily have started from a string of length zero. If no possibility of extension existed, it would not work. But given a proportion of mutations that add bases (just like reality), the nature of her fitness function would mean that strings would simply extend forever. But if a 'length cap' were also part of the fitness function – the longer a string, the more likely to break and die, say – then provided it was not so punitive as to disallow 500-bit strings, a 500-bit string with product > 10^60 could easily be generated, even from a null string. More generally, the GA would be expected to converge on the highest product available to strings of a length at or just below a 'breakiness threshold' – an 'optimal' length where the reward for higher products is counterbalanced by the penalty for length.

    Try it. 
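    A rough Python sketch of that "try it" suggestion (illustrative assumptions and parameters only, not the commenter's own code): every lineage starts at length zero, mutation can substitute, insert or delete bits, and "breakiness" is modelled as a simple length cap inside the fitness function.

        import random

        def product_of_runs(bits):
            """Lizzie's scoring rule: the product of the lengths of all runs of 1s."""
            product, run = 1, 0
            for b in bits:
                if b:
                    run += 1
                elif run:
                    product, run = product * run, 0
            return product * run if run else product

        def fitness(bits, cap=500):
            # strings longer than the cap count as "broken"; a smoother
            # probabilistic penalty would work just as well
            return 0 if len(bits) > cap else product_of_runs(bits)

        def mutate(bits):
            bits = [b ^ (random.random() < 0.01) for b in bits]      # point substitutions
            if random.random() < 0.5:                                 # insertions slightly outweigh
                bits.insert(random.randrange(len(bits) + 1), random.randint(0, 1))
            if bits and random.random() < 0.1:                        # ...deletions
                del bits[random.randrange(len(bits))]
            return bits

        population = [[] for _ in range(100)]                         # all lineages start empty
        for generation in range(4000):
            population.sort(key=fitness, reverse=True)
            survivors = population[:50]
            population = survivors + [mutate(list(s)) for s in survivors]

        best = max(population, key=fitness)
        print(len(best), product_of_runs(best))
        # length climbs toward the cap, and the run-length product climbs with it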

  32. Gpuccio:

    To onlooker (at TSZ):

    Any fitness function in any GA is intelligent selection, and in no way it models NS.

    Please, do not consider any more that statement. Keiths is right, it was a wrong generalization.

    Thank you for saying that. I appreciate your willingness to admit error.

    d2) NS is selection based only on fitness/survival advantage of the replicator. The selected function is one and only one, and it cannot be any other… IOWs, NS is not flexible (it selects only for a very tiny subset of possible useful functions)…

    Fitness functions measure and reward fitness, by definition. But there are many, many, fitness functions, not just one. We have to select the right fitness function for the application.

    A fitness function that measures and rewards whiteness is fine if we are modeling a scenario in which whiteness contributes to survival and reproduction, as it does in the evolution of arctic hares. Not so much if we are modeling the evolution of tortoises, or running a GA that optimizes antenna designs.

    A fitness function that rewards shell strength is fine if we are modeling a scenario in which shell strength contributes to survival and reproduction, as it does in the evolution of tortoises. Not so much if we are modeling the evolution of arctic hares.

    A fitness function that rewards “product of run lengths in a sequence of 1’s and 0’s”, as in Lizzie’s example, is perfectly acceptable if we are modeling a hypothetical world in which “product of run lengths” contributes to survival and reproduction. Not so much if “primeness of run lengths” is the true criterion.

    These are all Darwinian processes. The point of Lizzie’s example is to show how a Darwinian process can solve a problem (and generate dFSCI) without any information from the fitness function other than “better” or “worse”. Real world Darwinian evolution can also solve problems (and generate dFSCI) without any information from the environment other than “better” (you survived and produced lots of viable offspring) and “worse” (you died early or failed to reproduce for some other reason).

    Your claim that there can be “one and only one [fitness function], and it cannot be any other” is false. There are many possible fitness functions. Some of them lead to the production of dFSCI, others don’t. The question is not whether it is legitimate to use other fitness functions. The question is whether a particular fitness function is legitimate for the scenario being modeled.
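    To make the same point in code (an illustrative sketch, not anyone's actual program): the selection machinery is indifferent to which criterion is plugged in; only the scenario being modeled decides which fitness function is legitimate.

        def run_lengths(bits):
            """Lengths of all runs of 1s in a sequence."""
            runs, run = [], 0
            for b in list(bits) + [0]:
                if b:
                    run += 1
                elif run:
                    runs.append(run)
                    run = 0
            return runs

        def product_of_runs(bits):        # Lizzie's criterion
            product = 1
            for r in run_lengths(bits):
                product *= r
            return product

        def primeness_of_runs(bits):      # the alternative criterion mentioned above
            return sum(1 for r in run_lengths(bits)
                       if r > 1 and all(r % d for d in range(2, r)))

        # the same cull-and-mutate loop can take either function as its fitness
        # argument; neither tells the population anything beyond "better" or "worse"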

  33. keiths: Suppose Lizzie's program initialized the genomes to all 0's. Then there would be no potential solutions among the initial genomes

    Mung: That is false. Again, you demonstrate that you don't understand what is being discussed. They would still be a potential solution. Just not a good solution. Just not an actual solution. A string of 500 0's is still in the search space.

    But I was reminded of a challenge I had issued. That challenge consisted in setting all strings to the same initial value, rather than having them randomly generated.

    So please, have Lizzie initialize all her starting population of strings to all 0's. By all means. Let's see how well it performs then.

    Mung – I thought you’d written your own version that took 10 seconds? Surely you could try the amendment yourself.

    Here’s my prediction: a uniform starting population of all-0′s will be little different from a completely variable one in its ability to search the space. Initially, all the strings will be the same and all products will be zero, so the fitness function will have nothing to select on, and the population will simply ‘drift’, replicating the same monotonous point in space. But once mutation occurs, variation will arise, the fitness function will gain traction, and the ‘random (stochastic) walk’ will have taken its first baby steps.

    As I have said elsewhere, you could start with one digital ‘organism’ of bit-length zero and still find the peak, provided the mutation method includes something that can increase the number of bits in a string – a biologically ‘real’ amendment.  

    This is the relevance of ‘being able to write a GA’ to evolution – or, better still, of running one and playing with it, to see what happens when you fiddle with the various subroutines – the selection, mutation and recombination methods – and their parameters. They are all at least intended to duplicate the ‘real’ processes of the evolutionary synthesis. If they don’t, you need to be able to explain to the profs who use them why they are barking up the wrong tree – and you can’t do that if you don’t even know what you are talking about. Give it a whirl – twiddle the knobs; turn them up, down or completely off – it won’t bite you.
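
    For anyone who wants to try the all-zeros start, a rough Ruby sketch along these lines would do (my own illustration, not Lizzie’s MatLab script; the population size, cull rule and generation count are arbitrary):

    POP_SIZE   = 100
    GENOME_LEN = 500

    # Product of the lengths of runs of 1s (zero if there are no 1s at all).
    def fitness(chromosome)
      runs = chromosome.scan(/1+/).map(&:length)
      runs.empty? ? 0 : runs.reduce(:*)
    end

    # Flip one randomly chosen bit in either direction (no latching).
    def mutate(chromosome)
      c = chromosome.dup
      i = rand(GENOME_LEN)
      c[i] = (c[i] == '0' ? '1' : '0')
      c
    end

    # Uniform all-zeros start: nothing for selection to work on at first.
    population = Array.new(POP_SIZE) { '0' * GENOME_LEN }

    500.times do
      # Cull the worse half, refill with mutated copies of the survivors.
      survivors  = population.sort_by { |c| -fitness(c) }.first(POP_SIZE / 2)
      population = survivors + survivors.map { |c| mutate(c) }
    end

    puts population.map { |c| fitness(c) }.max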

  34. So please, have Lizzie initialize all her starting population of strings to all 0′s. By all means. Let’s see how well it performs then.

    Mung – I thought you’d written your own version that took 10 seconds? Surely you could try the amendment yourself. Here’s my prediction: a uniform starting population of all-0′s will be little different from a completely variable one in its ability to search the space. They will initially all be the same, and all products will be zero, therefore the fitness function will have nothing to select on, therefore the population will simply ‘drift’, replicating the same monotonous point in space. But if mutation occurs, variation will arise, and the fitness function gains traction.

    On the off chance that Mung doesn’t want to try this with his own code for some reason, I just ran the test. If you get him to make a prediction, I’ll share my results and we’ll see which of the two of you is closest.
     

  35. Mung:

    Again, you demonstrate that you don’t understand what is being discussed. They would still be a potential solution. Just not a good solution. Just not an actual solution.

    A string of 500 0’s cannot be a solution. Something that cannot be a solution is not a “potential solution.”

    A string of 500 0’s can be mutated over time into a solution, but we know without a doubt that a string of 500 0’s is not a solution. It’s not a “good solution.” It’s not an “actual solution.” It’s not a “potential solution”. The only kind of solution it is is a “non-solution.”

    A string of 500 0′s is still in the search space.

    Obvioulsy [heh], and if that’s all you meant by “potential solution” then I would have no objection. However, you clearly think that information has to be smuggled into the initial genomes, as your full statement reveals:

    There is more to a GA exploring certain kinds of digital space than differences due to relative fitness (usually defined by a fitness function or map), mutation and (optionally) recombination.

    For example, potential solutions must be encoded into a “chromosome.”

    Encoding potential solutions into a chromosome implies there is a problem to be solved.

    Information about which potential solutions are more likely to solve the problem must be implemented. [emphasis mine]

    This is reinforced by your challenge:

    But I was reminded of a challenge I had issued. That challenge consisted in setting all strings to the same initial value, rather than having them randomly generated.

    So please, have Lizzie initialize all her starting population of strings to all 0′s. By all means. Let’s see how well it performs then.

    I’m very curious. Why do you think it would be a problem if the initial genomes were set to all 0’s?

    Have you thought this through?

  36. Mung: Darwinian evolution does not need “the right fitness landscape” to work. (What would a “wrong” fitness landscape look like?)

    The vast majority of conceivable landscapes – highly chaotic or random ones, for instance – are not amenable to evolutionary algorithms. Landscapes that are amenable to evolutionary algorithms usually exhibit local structure. Indeed, some IDers argue that protein fitness landscapes are too rugged for evolution to be effective. 

  37. Mung: GAs work with a coding of the parameter set, not the parameters themselves.

    Well, yes. That’s the genetic part of genetic algorithms, which are a subset of evolutionary algorithms. So? What did you think it meant? 

  38. Mung,

    You issued this challenge:

    But I was reminded of a challenge I had issued. That challenge consisted in setting all strings to the same initial value, rather than having them randomly generated.

    So please, have Lizzie initialize all her starting population of strings to all 0′s. By all means. Let’s see how well it performs then.

    If, as you claim, you don’t think that Lizzie smuggled information into the genomes by initializing them randomly, then what is the point of your challenge?

    After you answer, go ahead and take your version of Lizzie’s program, set the initial genomes to all 0’s, and let us know what happens. We’ll see how it compares to Patrick’s results.

    It should be amusing.

    P.S. I see you’re also confused about fitness landscapes. Here’s a thought: Wouldn’t it make sense to learn about evolution and GAs before condescending to people who actually understand them?

  39. Mung: “However I freely admit it is not an exact duplicate of Lizzie’s program just written in another language, I rather attempted to capture the “spirit” of what she built.”

    You don’t understand what Elizabeth is trying to do, do you?

    Instead of coding right away, why don’t you just describe, in English, what you think you are attempting to do with your exercise.

     

  40. Welcome, DrBot

    Apologies. Your comment, like everyone’s first comment, was held in moderation and I only just spotted it. Where’s Lizzie! 

  41. Mung quotes Allan Miller:

    While the ‘replication’ function of biological replicators is a vital part of the string, that role is taken by the copy method in a GA, so the strings themselves don’t actually need to consist of anything at the start. The point of bringing them up is to point out that such strings are not likely to be ‘solutions’ to any worthwhile GA, so you aren’t necessarily ‘pre-seeding’ the population with anything.

    So consider the zero-length digital organism as the absolute minimal replicator common to all GAs. As long as a method exists that occasionally adds random bits to a string, something will soon emerge, and variations between these ‘non-null’ bit-strings can be evaluated by the selection module. A set of strings of length zero evidently cannot vary, but they can still ‘compete’ via drift. You can still replicate and remove strings of length zero from a population.     [Emphasis Mung’s]

    Mung, the GA expert, then presumes to ‘correct’ Allan by quoting from a book:

    Introduction to Evolutionary Computing:

    The choice of representation forms an important distinguishing feature between different streams of evolutionary computing. From this perspective GAs and ES can be distinguished from (historical) EP and GP according to the data structure used to represent individuals. In the first group this data structure is linear, and its length is fixed, that is, it does not change during a run of the algorithm.    [Emphasis Mung’s]

    Mung apparently hopes that none of us have heard of Google. Google ‘GA variable length chromosome’ and you get page after page of links to authoritative descriptions and discussions of GAs that fit the bill.

    Oops.
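
    For what it’s worth, here is a minimal sketch (my own illustration, not from any of the programs under discussion) of what a variable-length chromosome looks like in practice: mutation can flip, insert or delete a bit, so even a zero-length string can grow into something the selection module can evaluate.

    # Mutation on a variable-length 0/1 string: flip, insert or delete one bit.
    def mutate_variable(chromosome)
      c = chromosome.dup
      case rand(3)
      when 0
        c.insert(rand(c.length + 1), rand(2).to_s)   # insert a random bit
      when 1
        c.slice!(rand(c.length)) unless c.empty?     # delete a bit, if any
      else
        unless c.empty?
          i = rand(c.length)
          c[i] = (c[i] == '0' ? '1' : '0')           # flip a bit in place
        end
      end
      c
    end

    # Starting from the minimal replicator: a zero-length string.
    genome = ''
    20.times { genome = mutate_variable(genome) }
    puts genome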

  42. You’re right, Toronto. Mung has no clue.

    Two days ago, he announced his version of Lizzie’s program with typical Mungian bombast, even grandly labeling his program ‘Mung World’:

    Mung World

    ok, so I created my own version of Lizzie’s program.

    took less than 10 seconds
    1522 generations

    What’s the big deal?

    Allan and Patrick called Mung’s bluff, asking him to respond to his own programming challenge. Suddenly the bluster turned into excuses, apologies and requests for advice:

    Mung World

    To my admirers at TSZ.

    I tossed my program together in a short evening. I am actually rather pleased with it, I even managed to make it object-oriented (for the most part).

    However I freely admit it is not an exact duplicate of Lizzie’s program just written in another language, I rather attempted to capture the “spirit” of what she built.

    It’s a bit rough around some of the edges, but I would like suggestions on how it can be improved.

    I call my digital organisms LiddleLizzards, in honor of Elizabeth.

    Here’s my LiddleLizzard class. I think the first thing that can use improvement is the mutate method; it’s pretty rough.

    # mutates this chromosome
    def mutate
      chromosome[rand(500)-1] = '1'
    end

    All I do here is set one position in the chromosome to a '1'. If it’s a zero it gets changed; if it’s a '1' it’s like a neutral mutation. I don’t know what that cashes out to in terms of a mutation rate, if someone wants to tell me.

    Some potential modifications:

    1. Set the chosen locus to either a zero or a one, that would not be too difficult to code.

    2. Explicitly set the mutation rate.

    3. Create a Mutation object, passed in when the digital organism is created, that encapsulates its mutation parameters.

    4. Pass in the length of the string to generate rather than hard-coding it in a constant.

    Honest evaluation, criticism, and suggestions for improvement are welcomed. You can leave comments at that link as well.

    I had a look at the code for Mung’s LiddleLizzard class. It’s atrocious, and it bears no resemblance to Lizzie’s program.

    His ‘mutate’ method sets a random bit in the chromosome to 1. It never sets bits to 0. That’s right — Mung’s program latches! KF will be apoplectic.

    Mung’s fitness function looks for the longest sequence of consecutive 1’s in the chromosome. The length of that longest sequence is the fitness value. That’s it. No kidding.

    That’s just the class definition. I’d hate to see the rest of the code.

    Either Mung has absolutely no idea what Lizzie’s program does, or he doesn’t know how to code. Or both.

  43. Mung, 4:10 pm:

    Darwinian evolution does not need “the right fitness landscape” to work. (What would a “wrong” fitness landscape look like?)

    Your problem, keiths (and apparently the problem of a few others over there at TSZ), is that you don’t know what a fitness landscape represents.

    Mung, 8:17 pm:

    We’ll need to clarify what is meant by “landscape.” To me a fitness landscape isn’t something that is there waiting to be discovered (or climbed, ala Mount Improbable), it’s something that is created as populations evolve.

    keiths:

    I see you’re also confused about fitness landscapes. Here’s a thought: Wouldn’t it make sense to learn about evolution and GAs before condescending to people who actually understand them?

    Mung, 9:47 pm:

    It wasn’t entirely clear what sort of landscape(s) he was talking about, so I decided to wait and find out. You, otoh, plow ahead unabated.

  44. Mung:

    Think about what a fitness landscape for a 64-bit encryption key would look like – you have 2^64 (18,446,744,073,709,551,616) possible key values and only one of them is right.

    This gives a binary fitness result of either 0 or 1. All a GA will do with a landscape like this is drift – any mutation that doesn’t produce the correct key value will result in a fitness of zero. Until the result is found (and the program terminates), all population members will have the same zero fitness value and all will reproduce with equal probability, so there will be no differential reproduction. All you get is random drift – you might as well use a random search, because in a search space like this a GA will do no better than random sampling.
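
    In code, such a landscape is nothing more than an all-or-nothing test – a minimal sketch, using a 10-bit key for brevity (the names are mine):

    KEY_BITS = 10
    SECRET   = rand(2 ** KEY_BITS)        # the single correct key

    # All-or-nothing fitness: no 'warmer/colder' signal for selection to use.
    def key_fitness(candidate)
      candidate == SECRET ? 1 : 0
    end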

  45. Mung – here is a fitness landscape for a 10 bit key (1024 possible values) – visualised in two dimensions.

    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000100000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000
    00000000000000000000000000000000

    What is the most effective method of navigating this fitness landscape?

  46. I think we should all encourage Mung in his endeavor. He is at least trying to understand what Lizzie and a few others here did. That’s pretty rare at UD. 

    Mung, you do need to modify the mutation function to switch a bit randomly  in either direction. 

    I am also not sure how your fitness is defined. Not knowing Ruby, I can’t parse this piece of code

    # calculates the "fitness" of this chromosome
    def fitness
      score = 0
      chromosome.scan(/1*/).each do |str|
        score = str.length if str.length > score
      end
      score
    end
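
    Something along these lines would do for the mutation fix (a sketch only, not Mung’s actual code): pick a random locus and set it to a random bit, so the change can go in either direction instead of latching at '1'.

    # mutates this chromosome without latching
    def mutate
      chromosome[rand(500)] = rand(2).to_s
    end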

  47. Mung: We’ll need to clarify what is meant by “landscape.”

    A fitness landscape represents relative reproductive fitness. It can represent a real-life situation, such as protein function, or be an abstraction. 

    Mung: To me a fitness landscape isn’t something that is there waiting to be discovered (or climbed, ala Mount Improbable), it’s something that is created as populations evolve.

    Certainly, real biological landscapes change, but static landscapes are often sufficient for particular studies, such as protein evolution. Keep in mind that even though the fitness landscape may be static in a simulation, the total environment changes due to competition with neighbors. In any case, it makes sense to start with static landscapes, but the basic behavior is often similar with dynamic landscapes.

    Mung: The way I understand Zachriel’s argument is that he is appealing to a bag or assortment of pre-existing components (aka protein domains) that can be used in proteins and that their availability for use somehow lends less of a random character to the process (making a functional protein more likely) even though the main proposed mechanism for this shuffling is recombination, itself a random process and the protein domains themselves also arose largely as a result of a random process (perhaps “guided” by “natural selection”).

    Not quite. The assumption is that splice and insertion points on existing replicators are random, not along functional divisions. Even then, it’s easy to show that the chance of successful results can be millions or billions of times greater than using randomized sequences. 

    Mung: Personally I have no conflict with regular repeated processes going on inside living organisms because to me that smacks of teleology.

    There are many types of recombination (e.g. sexual); however, we’re concerned here with random processes. 

    Mung: “In the first group this data structure is linear, and its length is fixed, that is, it does not change during a run of the algorithm.”

    Yes, that’s a common type of genetic algorithm, but certainly not the only one; otherwise, you couldn’t simulate genomes that change size during evolution. 
