Mutual Algorithmic Information, Information Non-growth, and Allele Frequency

Due to popular demand I will take a quick stab at explaining the applicability of mutual algorithmic information and the information non-growth law to an allele frequency scenario.

First, I’ll outline the allele frequency scenario.

The alleles are 1s and 0s, and the gene G is a bitstring of N bits.  A gene’s fitness is based on how many 1s it has, so fitness(G) = sum(G).  The population consists of a single gene, and evolution proceeds by randomly flipping one bit; if fitness improves, the mutated gene is kept, otherwise the original is kept.  Once fitness(G) = N, the evolutionary algorithm stops and outputs G, which consists of N 1s.  The bitstring of N 1s will be denoted Y.  We denote the evolutionary algorithm E; it is prefixed to an input bitstring X of length N, which it turns into the bitstring of N 1s, so executing the pair on a universal Turing machine U outputs the bitstring of 1s: U(E,X) = Y.
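
For concreteness, here is a minimal Python sketch of the scenario just described; the name evolve and the particular N and X below are my own illustration, not part of the formal setup.

```python
import random

def evolve(x, rng=random.Random(0)):
    """One-bit hill climbing toward the all-1s string, as described above.

    x is the starting bitstring (a list of 0s and 1s).  Each step flips one
    randomly chosen bit and keeps the change only if fitness, the number of
    1s, improves.  The loop halts once fitness(G) = N, i.e. G is all 1s.
    """
    g = list(x)
    n = len(g)
    while sum(g) < n:                 # fitness(G) = sum(G); stop at N
        i = rng.randrange(n)          # pick one allele (bit) at random
        candidate = g[:]
        candidate[i] ^= 1             # flip it
        if sum(candidate) > sum(g):   # keep the flip only if fitness improves
            g = candidate
    return g                          # Y: the bitstring of N 1s

# For any starting X the output is always Y = [1]*N, which is what lets us
# say below that {E, X} needs no further information to produce Y.
N = 20
X = [random.getrandbits(1) for _ in range(N)]
assert evolve(X) == [1] * N
```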

Second, I’ll briefly state the required background knowledge on algorithmic mutual information.

Kolmogorov complexity K(X) (also called algorithmic information) is the length of the shortest program that generates bitstring X.  The standard form is prefix-free, so no program used to measure Kolmogorov complexity is a prefix of any other such program.  The shortest program itself will be denoted X*, so K(X) = |X*|.  Conditional Kolmogorov complexity is the length of the shortest program P* that, given input I, will generate X, so K(X|I) = |P*|.  Joint Kolmogorov complexity of X and Y is the length of the minimal program XY* necessary to generate the pair {X,Y}, so K(X,Y) = |XY*|.  Mutual algorithmic information I(X:Y) is a symmetrical measurement (within a constant error) of two bitstrings: I(X:Y) = I(Y:X) = K(Y) – K(Y|X*) = K(X) – K(X|Y*) = K(X) + K(Y) – K(X,Y).
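
Kolmogorov complexity itself is uncomputable, so nothing exact can be computed here, but as a rough intuition pump one can plug an ordinary compressor in as a stand-in for K and use the last identity above to estimate mutual information. The sketch below is my own illustration only; the helper names K and I, the use of zlib as the stand-in, and the test strings a, a2 and c are not part of the formal argument.

```python
import os
import zlib

def K(b: bytes) -> int:
    """Crude stand-in for Kolmogorov complexity: the zlib-compressed length.
    Real K is uncomputable; a compressor only gives an upper-bound flavour."""
    return len(zlib.compress(b, 9))

def I(x: bytes, y: bytes) -> int:
    """Estimate I(X:Y) via the identity K(X) + K(Y) - K(X,Y),
    using simple concatenation to stand in for the pair {X,Y}."""
    return K(x) + K(y) - K(x + y)

a = os.urandom(4096)      # a typical incompressible bitstring
a2 = a[:-1] + b"\x00"     # a near-copy of a (only the last byte changed)
c = os.urandom(4096)      # an independent random bitstring

print(K(a))      # close to 4096: random data does not compress
print(I(a, a2))  # close to K(a): a near-copy shares almost everything with a
print(I(a, c))   # near zero (a handful of bytes of compressor overhead):
                 # independent random strings share essentially nothing
```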

Third, I’ll state the non-growth theorem.

The law of information non-growth states that deterministic processing cannot increase the mutual algorithmic information between X and Y, and that processing with randomly generated bitstrings (drawn from a computable probability distribution) cannot be expected to increase it.  Formally: E[I(U(R,X):Y)] <= I(X:Y), where R is a randomly generated bitstring and U(R,X) denotes executing the concatenated pair of bitstrings {R,X} on a universal Turing machine.

Finally, I’ll apply the theorem to the scenario.

We will say Y is the target, which according to our scenario is the bitstring of length N consisting of all 1s.  X is a randomly generated bitstring of length N.  The typical random bitstring provides no information regarding Y, so K(Y|X) = K(Y).  Consequently, X has zero mutual algorithmic information with Y, since I(X:Y) = K(Y) – K(Y|X) = 0.  The non-growth theorem says we cannot expect to increase the mutual algorithmic information by generating another random bitstring R and executing the pair on a universal Turing machine, so E[I(U(R,X):Y)] <= I(X:Y) = 0.

Next, we will bring in the evolutionary algorithm E.  As stated at the beginning, when E is prefixed to a bitstring X of length N and the pair is executed on a universal Turing machine, the result is N 1s, denoted Y.  Consequently, the pair {E, X} requires no further information to generate Y, and K(Y|E,X) = 0.  This means the algorithmic mutual information between {E, X} and Y is maximal: I(Y:{E, X}) = K(Y) – K(Y|E,X) = K(Y) – 0 = K(Y).

Thus, since the combination of the evolutionary algorithm E with random input string X contains all the relevant information needed to generate Y, the information non-growth theorem states that generating another random bitstring R and executing the triplet {R, E, X} cannot be expected to increase, and may decrease, the information regarding Y: E[I(U(R,E,X):Y)] <= I(E,X:Y).

133 thoughts on “Mutual Algorithmic Information, Information Non-growth, and Allele Frequency”

  1. Hi Eric:
    Do I understand you correctly in saying that the

    1. evolutionary function E maps any bitstring into the bitstring consisting solely of 1’s. Is E supposed to represent the mechanism of natural selection?

    2. Fitness is a function of bitstrings which reaches a single maximum when its input is a bitstring consisting solely of 1’s.

    3. I am not sure what the role of R is supposed to represent in biology.

  2. BruceS: 1. evolutionary function E maps any bitstring into the bitstring consisting solely of 1’s. Is E supposed to represent the mechanism of natural selection?

    Yes, random variation and natural selection.

    BruceS: 2. Fitness is a function of bitstrings which reaches a single maximum when its input is a bitstring consisting solely of 1’s.

    Correct.

    BruceS: 3. I am not sure what the role of R is supposed to represent in biology.

    It shows that randomness on its own doesn’t help achieve the goal. Without E, randomness contributes nothing. With E, randomness at best does nothing and at worst destroys information.

  3. So assuming my understanding in the above post is close enough, here are further comments:

    Your definition of the E function appears to grant that NS can maximize fitness, presumably because the target Y is hard-coded into the function. I don’t see how this differs from a GA model of biological evolution, in which case the pros and cons have already been debated, including whether it assumes the target must be built in by intelligence. Is your formulation adding to that discussion somehow?

    I read the discussion involving R as saying that once NS-based evolution has inevitably maximized fitness, any further mutations must decrease it (or at least cannot increase it?). Presumably in the next generations E would increase it back to the max.

    So this seems to be to say that in a fixed fitness landscape where NS has maximized fitness, random mutations must decrease it, at least temporarily.

    Is that fair? If so, what do you think that implies about real world environments which shape and re-shape fitness surfaces over time?

  4. Thanks for laying this out. First, a comment on the model. The “flip one bit” scheme is mutation, and the “if fitness is improved” is extremely strong selection. This is similar to, but not identical to, a Wright-Fisher model of mutation and selection in a population of size 1. It is one of the versions of Dawkins’s Weasel. (See my post on that in 2016 here). So it isn’t my biological model, which had an infinite population size and selection with fitnesses at each locus (bit) of 1.01 : 1 for bits 1 and 0.

    But let that pass for the moment. It’s sort-of a model of a very small biological population with very strong selection and mutation rate 1/L, where L is the initial string length.

    Second, on what the proof has proven. Like BruceS I do not see what role the random bitstring R plays in the biological model. Perhaps it is a draw from the initial population in my model, where all gene frequencies started at 0.5, so that a random initial haplotype (haploid genotype) would be a random string R of length 100. But in your model, X is the initial genotype, and R is prefixed to it as some sort of extension of the genome. It is not stated how long R is; there is no requirement I can see that it be the same length as X.

    In fact, since R lengthens X, the outcome of the flip-and-select process is a bitstring of length Length(R)+Length(X). Different lengths of R generate different outcomes, contrary to your assertion that R does not change the outcome. Unless the Turing Machine is allowed to scan the input string and determine its length, and then clip off Length(R) bits.

    In having the Turing Machine work until it outputs a string of 1’s, you are not concerning yourself with how long it takes to get there (as long as it is a finite time, which it will be). No matter what initial string you start with, the Turing Machine reaches all 1’s ultimately in your model. It takes longer with some starting points than others. And, as I just mentioned, how long the resulting string is varies with the length of R. I think.

    Finally, I don’t see anything in the model that shows that selection needs some outside information to work. So the model shows no ineffectiveness of selection in finding strings of all 1’s.

  5. BruceS: Your definition of the E function appears to grant that NS can maximize fitness, presumably because the target Y is hard-coded into the function. I don’t see how this differs from a GA model of biological evolution, in which case the pros and cons have already been debated, including whether it assumes the target must be built in by intelligence. Is your formulation adding to that discussion somehow?

    My thinking with this example was to illustrate two ends of the spectrum. Then from there we can add to or remove from the scenario and see how that changes things.

    For example, the stopping criterion is hard-coded, and that’s part of what makes E succeed. But what about the rest of the algorithm, such as random variation? How much does that contribute to success? If we replace the stopping criterion of N with a random number, then we can find out.

    We’ll denote the modified E as E’, and so now we have a triple {E’, X, R}, which evolves X until the fitness reaches R (see the sketch after this comment). Ignoring R for now, K(Y|E’,X) is the size of the input necessary to halt on Y, which is the bitstring representation of N, whose length is log2 N. So, K(Y|E’,X) = log2 N.

    Thus, I(Y:E’,X) = K(Y) – K(Y|E’,X) = K(Y) – log2 N. However, since Y is so simple to describe, K(Y) = log2 N. Thus, I(Y:E’,X) = log2 N – log2 N = 0.

    Now, according to the non-growth theorem, sticking in randomness in the stopping criterion gets us no closer to Y:

    E[I(Y:U(E’,X,R))] <= I(Y:E’,X) = 0

    This means just about all the information that helped us get to the target was in the hardcoded selection function, i.e. the intelligently designed bit of the algorithm.

    BruceS: Is that fair? If so, what do you think that implies about real world environments which shape and re-shape fitness surfaces over time?

    Since the stopping criterion is no longer hardcoded, then as before, replacing it with randomness results in information loss.
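
A minimal sketch of the modified algorithm E’ discussed in the comment above, purely as an illustration; how the random bitstring R is decoded into a stopping fitness is my own assumption, not something specified in the comment.

```python
import random

def evolve_prime(x, r_bits, rng=random.Random(0)):
    """E': the same one-bit hill climber, but the stopping fitness is decoded
    from the random bitstring R rather than being hard-coded to N.

    Decoding R as an integer reduced mod (N+1) is an assumption made here
    only so the sketch runs; the comment above leaves this unspecified."""
    n = len(x)
    stop = int("".join(str(b) for b in r_bits), 2) % (n + 1)
    g = list(x)
    while sum(g) < stop:
        i = rng.randrange(n)
        candidate = g[:]
        candidate[i] ^= 1
        if sum(candidate) > sum(g):
            g = candidate
    return g

N = 20
X = [random.getrandbits(1) for _ in range(N)]
R = [random.getrandbits(1) for _ in range(8)]
out = evolve_prime(X, R)
print(sum(out), N)   # halts at the R-derived fitness (or at sum(X) if that is
                     # already higher), so the all-1s Y appears only when R
                     # happens to decode to N
```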

  6. EricMH
    Since the stopping criterion is no longer hardcoded, then as before, replacing it with randomness results in information loss.

    OK. And I guess if the environment changed randomly and very often, it would make sense that NS would not help. But I don’t think that random, constant changes to the environment are a good model for evolution either. (Well, maybe for asteroid impacts!)

    What about changes in the middle of the spectrum? Environment changes in a small, incremental way over a long time period but still at random, with a probability distribution that reflects real world physics and its potential impacts on the fitness function topology.

    Wouldn’t E increase fitness? I understand R might decrease it. Or at least not increase it. But both situations are also part of biological evolution and can be modeled without thereby excluding NS.

  7. Joe Felsenstein: In fact, since R lengthens X,

    R is prefixed to {E, X}, so doesn’t lengthen X.

    Joe Felsenstein: So the model shows no ineffectiveness of selection in finding strings of all 1’s.

    Yes, the model is a starting point. See my response to BruceS above, which shows how we can modify the model and learn about the impact of the halting criterion. We could likewise modify the selection function, but I’ll leave that for another time.

  8. BruceS: What about changes in the middle of the spectrum? Environment changes in a small, incremental way over a long time period but still at random, with a probability distribution that reflects real world physics and its potential impacts on the fitness function topology.

    Wouldn’t E increase fitness? I understand R might decrease it. Or at least not increase it. But both situations are also part of biological evolution and can be modeled without thereby excluding NS.

    Yes, these are good questions, but I’ll have to address them at another time. Or, if you are feeling up for it, take a stab at it yourself and I can provide feedback.

  9. OK, so R is prefixed to the program E. If the Turing Machine halts before it can explore R, then R’s only effect is its presence there.

    So what does R model? At this stage all we seem to have is a Weaseloid model of evolution, plus an irrelevant prefix string.

  10. EricMH: Yes, these are good questions, but I’ll have to address them at another time. Or, if you are feeling up for it, take a stab at it yourself and I can provide feedback.

    Neat challenge: be charitable by making the best argument you can from the viewpoint you are questioning. I’ll do that if you do the same when you return.

    I hope you and your family have a good holiday.

  11. I think the rules of the game should be that Eric is allowed to change his argument, in search of one that shows a Kolmogorov complexity limit of some sort on the effectiveness of natural selection. As long as he then makes clear that he has changed it.

  12. EricMH: This means just about all the information that helped us get to the target was in the hardcoded selection function, i.e. the intelligently designed bit of the algorithm.

    This statement caught my attention, since I was once told off by Tom English for focusing on the algorithm instead of the modeled system.

    Of course it is true that the evolutionary algorithm E was designed by you, but the reason it was designed was to model an evolutionary process; mutation followed by a bout of selection. Did you mean to suggest that the simuland, the evolutionary process, is itself intelligently designed?

    In addition, I don’t see why you want to introduce R to represent “randomness”. The mutation part already introduces a random component, since it occasionally flips a 1 to 0, resulting in a deleterious mutation. It is the intense selection in your algorithm that prevents them from ever making it into the population.

  13. Mung: So, not biologically realistic then.

    https://arxiv.org/abs/1509.07946

    The question is whether a model is sufficiently realistic to justify some conclusion — some claim about reality.

    Edit to add: Your link leads to a discussion of optimization algorithms. Please, let’s not conflate those with models.

  14. BruceS: But I don’t think that random, constant changes to the environment are a good model for evolution either.

    But the environment is constantly changing, and changes in the environment are random with respect to fitness. We just choose to ignore that fact in our models.

  15. Freelurker: Your link leads to a discussion of optimization algorithms. Please, let’s not conflate those with models.

    An optimization problem is just what is being solved for in the OP. And that’s the way fitness is often presented here at TSZ. Take for example the many long discussions over FI and gpuccio’s posts at UD.

  16. Mung: But the environment is constantly changing, and changes in the environment are random with respect to fitness

    Sure, and the rate of change can exceed the rate at which adaptation can proceed. Result: extinction. A worry now with the current rate of climate change, sea level rise, melting of permafrost and so on.

  17. Joe Felsenstein:
    I think the rules of the game should be that Eric is allowed to change his argument, in search of one that shows a Kolmogorov complexity limit of some sort on the effectiveness of natural selection. As long as he then makes clear that he has changed it.

    If you had ever set the boundaries of where the limits of natural selection are, maybe Eric wouldn’t have to adjust his math?
    Unfortunately, there are no such limits, and he knows neither that nor what he is up against…

  18. Joe Felsenstein,

    So what does R model? At this stage all we seem to have is a Weaseloid model of evolution, plus an irrelevant prefix string.

    It shows that to find the sequence the algorithm must be designed with the target in mind, where the change moves consistently toward the target. The information (target sequence) was therefore provided by the programmer. The natural selection algorithm was simply able to recover the original information in this case.

    If God designed evolution based on mutation and selection, this hypothesis says that natural selection, if it works, was based on the overall algorithm designed in the universe that allowed the original information to be recovered.

    Pretty clever exercise.

  19. colewd:
    Joe Felsenstein,

    It shows that to find the sequence the algorithm must be designed with the target in mind, where the change moves consistently toward the target. The information (target sequence) was therefore provided by the programmer. The natural selection algorithm was simply able to recover the original information in this case.

    If God designed evolution based on mutation and selection, this hypothesis says that natural selection, if it works, was based on the overall algorithm designed in the universe that allowed the original information to be recovered.

    Pretty clever exercise.

    Does adding a methyl group CH3 to a DNA sequence constitute an increase of information, in your view?

  20. colewd:
    J-Mac,

    Added how?

    What do you mean how? The methyl group is added to the chemical structure of DNA by methylating cytosine… Get it?
    The DNA sequence is not altered but the gene transcription is, which often leads to cancer.
    Some of my colleagues believe this is an example of information increase in evolution. What do you think?

    ETA: This may be interesting to some: the mechanism of adding the methyl group to DNA is via epigenetics… 🙂

  21. colewd:
    Joe Felsenstein,

    It shows that to find the sequence the algorithm must be designed with the target in mind, where the change moves consistently toward the target. The information (target sequence) was therefore provided by the programmer. The natural selection algorithm was simply able to recover the original information in this case.

    If God designed evolution based on mutation and selection, this hypothesis says that natural selection, if it works, was based on the overall algorithm designed in the universe that allowed the original information to be recovered.

    Pretty clever exercise.

    This “clever” argument is the same old absurd one that if a computer program simulates a process, that proves that the process in nature is intelligently designed. You’ve proved too much. Erosion? Mendelian segregation? Brownian motion? I doubt that Eric Holloway means to make an argument like that!

  22. Alan Fox: That seems rather crucial!

    Remember that the next time you think about appealing to “the niche” as the designer. Selection doesn’t need any outside information to work.

    And so much for all the talk about how information gets into the genome from the environment. It seems that all that’s required now is mere osmosis.

  23. Joe Felsenstein: This “clever” argument is the same old absurd one that if a computer program simulates a process, that proves that the process in nature is intelligently designed.

    I’m not familiar with that argument. Is that one of Dembski’s arguments? Behe? Axe?

  24. How about it Joe.

    Are not your arguments simply exercises in optimization? Given an optimization function, FI can increase, etc. Such an optimization function exists. That optimization function is natural selection. Therefore no design required for large increases in FI.

    Is that not the argument?

  25. Mung: Selection doesn’t need any outside information to work.

    Doesn’t make sense to me. Evolution needs bias in the process to produce adaptation. Models that don’t include an element of bias, which their proposers then use to claim evolution doesn’t work, can be rejected out of hand.

  26. Mung,

    My argument is that the situation in my model has genotypes with different fitnesses. A very simple and easily analyzed one. In that situation (as often occurs in more complicated situations too) the distribution of fitnesses in the population shifts toward higher fitnesses.

    Measuring the functional information, we see the FI increase as a result of reproduction with those different fitnesses.

    I am testing assertions by Holloway and others that ordinary evolutionary processes cannot increase FI unless there is an outside source of information.

    Their argument is supposed to be general, so it is supposed to apply to all cases, and of course would then apply to this one. It fails there. The last I heard, generalizations have to apply always.

    Now whether that means that we have to stop everything to discuss whether evolution, more generally, is an optimization process, this I doubt. It seems like a side issue. At best. (By the way mean fitness does not always go up in population genetic models — we can come up with ones where it goes down, even ones in which the population evolves its way to extinction).

  27. Mung:

    Joe Felsenstein: This “clever” argument is the same old absurd one that if a computer program simulates a process, that proves that the process in nature is intelligently designed.

    I’m not familiar with that argument. Is that one of Dembski’s arguments? Behe? Axe?

    No, they do not make that argument, but numerous commenters here have done so. The more trollish their behavior, the more likely that they use it.

    I don’t have the energy to go hunt down examples and list citations of its use. But I’m very surprised that you have never noticed the argument here. Perhaps some other commenters could tell us whether they ever saw the argument used here.

  28. Joe Felsenstein,

    Joe,
    With all due respect, calling something absurd is not a counter argument. Eric has made a very interesting claim based on the law of information non-growth and the halting problem. You have your work cut out for you here.

  29. Joe Felsenstein: And if it could be shown that evolution is optimization, what, pray tell, does that show?

    Don’t know, don’t care. Because surely “evolution” has to mean more than that. So I’d be more concerned about modeling evolution as if it were an optimization algorithm. Not that anyone actually does that.

  30. Joe Felsenstein: I am testing assertions by Holloway and others that ordinary evolutionary processes cannot increase FI unless there is an outside source of information.

    What would that source be?

    Doesn’t the embryo development process, i.e. cell differentiation, imply an outside source of information already? Why ignore that and continue the nonsense of FI?

  31. BruceS: Neat challenge: be charitable by making the best argument you can from the viewpoint you are questioning. I’ll do that if you do the same when you return.

    I hope you and your family have a good holiday.

    Certainly. Don’t think that I have any fundamental commitment to ID, and never try to think up counter arguments. I’m just interested in the best argument, and just about all the anti-ID arguments I’ve ever seen are pretty bad, although the surrounding rhetoric tends to “feel” persuasive if I don’t understand the underlying math/logic.

    I’ll try my best to come up with a good counter to my simulation argument, although I’ll avoid the normal epistemic/philosophical sorts of arguments I generally see around here.

    Best wishes for you and your family’s holidays (holy days!) too 🙂

  32. EricMH: Certainly. Don’t think that I have any fundamental commitment to ID, and never try to think up counter arguments. I’m just interested in the best argument, and just about all the anti-ID arguments I’ve ever seen are pretty bad, although the surrounding rhetoric tends to “feel” persuasive if I don’t understand the underlying math/logic.

    I’ll try my best to come up with a good counter to my simulation argument, although I’ll avoid the normal epistemic/philosophical sorts of arguments I generally see around here.

    Best wishes for you and your family’s holidays (holy days!) too 🙂

    It seems Eric you are like Antony Flew who changed his mind about atheism because he believed in an ancient scientific rule: “We must follow the argument wherever it leads…” If that is so about you, you are a real seeker of truth…

  33. colewd:
    Joe Felsenstein,

    Joe,
    With all due respect, calling something absurd is not a counter argument. Eric has made a very interesting claim based on the law of information non-growth and the halting problem. You have your work cut out for you here.

    Yes, you’re right about that. Eric has made the claim that his argument somehow shows that information needs to be put into the evolving population by being preloaded into some sort of algorithm. The example we are both analyzing is of 100 loci (genes, say), each with two alleles, one having fitness 1.01 and the other fitness 1. The result of a large population evolving with those fitnesses is that gene frequencies change: the more fit alleles rise in frequency at each locus, the overall fitness of the population rises, and as it does the Functional Information embodied in those genomes rises (see the sketch after this comment).

    Eric imagines an algorithm that changes the gene frequencies in a Weasel-like small population, setting them to 1. So embodied in his algorithm is an assessment of the two alleles when one of them arises by mutation, and setting the one of higher fitness to gene frequency 1. That would be equivalent to the result of a great many generations of natural selection.

    Now here is where I have my work cut out for me: I have to figure out why making an algorithm that does what natural selection does means that, in the natural selection model I suggested, there is Design being supplied to the genome.

    How? By natural selection? If so Eric is accepting that the result of these fitness differences will change gene frequencies and increase FI. So whatever that is, it is not an argument saying that without some Design, simply having differences of fitness will not raise FI. It accepts that differences of fitness can raise FI.

    Or is there some other way that information is supplied by Design in the 100-locus selection model? If so, I don’t see it. Figuring out whether information is being supplied, and if so, how, seems daunting. Yes, I have my work cut out for me.

    Perhaps it’s clear to everyone else and they can tell me how this works.
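
For reference, a minimal numerical sketch of the selection model Joe describes above: the standard deterministic recursion for the frequency of the fitter allele at a single locus, with fitnesses 1.01 and 1 and a starting frequency of 0.5 (the values given in his earlier comments). Assuming multiplicative fitnesses across loci, each of the 100 loci can be treated separately in this way. The function name next_freq is just for this sketch.

```python
def next_freq(p: float, w1: float = 1.01, w0: float = 1.0) -> float:
    """One generation of deterministic haploid selection at a single locus:
    p' = p*w1 / (p*w1 + (1 - p)*w0)."""
    return p * w1 / (p * w1 + (1 - p) * w0)

p = 0.5   # starting frequency of the fitter allele, as in the model above
for generation in range(1001):
    if generation % 100 == 0:
        mean_fitness = p * 1.01 + (1 - p) * 1.0
        print(f"gen {generation:4d}  p = {p:.4f}  mean fitness = {mean_fitness:.5f}")
    p = next_freq(p)
# p climbs steadily toward 1, and mean fitness toward 1.01.
```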

  34. I’m confused. Eric, you said that your argument doesn’t apply to scenarios where the fitness landscape is a mountain with a single peak. Now you propose an algorithm that uses that exact fitness function and you conclude that your argument works to infer design based on conservation of information. Am I missing something obvious here?

  35. Joe Felsenstein: Figuring out whether information is being supplied, and if so, how, seems daunting.

    I thought I proved and stated it plainly. The information in my algorithm is being supplied in the halting criterion.

    At any rate, I believe I’ve provided all the essential ingredients if you wish to try to apply or refute my argument. Being mindful of my time use, the next place my material will appear will have to be a Bio-C article.

    dazz: Eric, you said that your argument doesn’t apply to scenarios where the fitness landscape is a mountain with a single peak.

    You are right. I was wrong. My argument does apply to a smooth landscape.

    I thank everyone for their time and feedback.

  36. J-Mac: It seems Eric you are like Antony Flew who changed his mind about atheism because he believed in an ancient scientific rule: “We must follow the argument wherever it leads…” If that is so about you, you are a real seeker of truth…

    To some degree. I started out somewhat of a skeptic at Biola around the time of the Dover trial, although even then the treatment of ID seemed unfair. Considered becoming an atheist after leaving college due to discontent with standard Christian apologetics. Turns out the arguments for atheism are even worse, so didn’t become an atheist, and a big turning point was ID. Partly because the ID argument made a lot of sense to my computer science background, but also because of how bad the counter arguments were and still are.

  37. EricMH: To some degree. I started out somewhat of a skeptic at Biola around the time of the Dover trial, although even then the treatment of ID seemed unfair. Considered becoming an atheist after leaving college due to discontent with standard Christian apologetics. Turns out the arguments for atheism are even worse, so didn’t become an atheist, and a big turning point was ID. Partly because the ID argument made a lot of sense to my computer science background, but also because of how bad the counter arguments were and still are.

    I hear you, Eric.
    The hypocrisy in religion and the nonsensical argumentation against ID just prove that people have free will to choose… That’s why I spend limited time or none arguing against nonsense… I admire your patience and your love for math. I used to love math because it was easy for me, but I reached a point when it didn’t do it for me anymore… I may regret it one day… I chose a different career and have many hobbies, like QM, but math is not one of them… unfortunately…
    ETA: atheism is just a denial of what’s obvious, but the hypocrisy in religion makes that denial much easier…

  38. EricMH: To some degree. I started out somewhat of a skeptic at Biola around the time of the Dover trial, although even then the treatment of ID seemed unfair. Considered becoming an atheist after leaving college due to discontent with standard Christian apologetics. Turns out the arguments for atheism are even worse, so didn’t become an atheist, and a big turning point was ID. Partly because the ID argument made a lot of sense to my computer science background, but also because of how bad the counter arguments were and still are.

    Good one Eric, good one. 🙂

  39. “Turns out the arguments for atheism are even worse”

    As far as I can tell, the only argument for atheism is lack of evidence for anything else, UNLESS we start with unsupportable conclusions and simply ASSUME everything we see obviously supports them.

    “atheism is just a denial of what’s obvious”

    And here’s a perfect example.

  40. Flint: As far as I can tell, the only argument for atheism is lack of evidence for anything else

    Not exactly.
    There are many arguments for atheism which do not merely rest on insisting that theism hasn’t met its burden of proof.
    Just off the top of my head there is the argument from divine hiddenness/argument from nonbelief, the argument from evil, the argument from the biological role of pain and pleasure, the argument from biological evolution, and then a lot of more specific arguments against particular Christian versions of theism.

    Here’s some more for atheism/against theism:
    https://www.patheos.com/blogs/secularoutpost/2016/06/26/pererz1-25-evidences-against-theism/

  41. Mung: But the environment is constantly changing, and changes in the environment are random with respect to fitness. We just choose to ignore that fact in our models.

    Yes, you are right that scientific modeling idealizes and abstracts. What you missed here and in your armchair example posted elsewhere was that models still must be subject to scientific evaluation, which includes empirical testing as well as the application by the scientific community of other norms of scientific theorizing.

    My thinking in making this point had nothing to do with the independence (so-called ‘randomness’) of mutation and fitness benefit. But I should have been more careful in specifying the probability distribution of those random changes to the environment.

    Basics
    Philosophical discussion applied to biology

  42. EricMH:

    Joe Felsenstein: Figuring out whether information is being supplied, and if so, how, seems daunting.

    I thought I proved and stated it plainly. The information in my algorithm is being supplied in the halting criterion.

    And where does that show up in the simple population genetics model? Or even in your Weasel algorithm? In both of them the fitness of the population goes up and up, and in neither does it go down, so stopping time is hardly critical.

    [EricMH:]
    At any rate, I believe I’ve provided all the essential ingredients if you wish to try to apply or refute my argument. Being mindful of my time use, the next place my material will appear will have to be a Bio-C article.

    Well, I hope that when you do so, your argument goes farther than it did here. Far enough to show what has been missing here — how the theorem about ASC has a counterpart in my simple population genetics model. And whether the information is being supplied within the population genetics model. And whether without it natural selection somehow does not change the gene frequencies. Or whether the information somehow is being supplied by the differences in fitness (as in the “active information” arguments of Dembski, Ewert, and Marks). Not knowing those makes your claim to show that FI increases only when information is being supplied to the population a mystery to me.

    I thank everyone for their time and feedback.

    Thanks to you for the responses you did give. They do leave me mystified, but I suppose they satisfied everyone else here that you had achieved what you claimed. Because they stopped talking about the issue.

  43. EricMH: I thought I proved and stated it plainly. The information in my algorithm is being supplied in the halting criterion.

    Why is the halting criterion important? What happens when we remove that part from the algorithm:

    Once fitness(G) = N, the evolutionary algorithm stops and outputs G, which consists of N 1s.

    I am pretty sure adaptive evolution hasn’t stopped or reached any target, so I don’t see why that part of your simulation is relevant to the thing we are simulating at all. It is just a convenience thing for the programmer.
    Perhaps I am missing something?
