250 thoughts on “What was the most significant scientific development in Intelligent Design in 2018?”

  1. Mung: What about that one that created an antenna out of nothing? The clock one. The cars one.

    I guess what Joe is saying is that if we don’t know ahead of time which solution the computer will choose as the best, then there is no target.

    Is that funny, if you stop and think about it? Certainly.

    Is it funny, even if you don’t stop and think about it? Of course.

  2. Mung: And yet you just said that they do, Joe.

    Where did I say that? And where is the information about the desired solution hiding in programs like BoxCar2D?

    I’ll join the laughing if you can persuade me that it’s in there. Till then I’m Mr. No-Sense-Of-Humor.

  3. It would be most instructive if someone could lead us through the development of a genetic algorithm that does not have any information about the target genotype front-loaded into it.

    Joe Felsenstein: I’ll join the laughing if you can persuade me that it’s in there. Till then I’m Mr. No-Sense-Of-Humor.

    🙂

    According to you a target exists only if the one single target with the highest fitness is predefined. If it’s not predefined, then, you say, there is no information about the desired solution hiding in the program.

    And yet there is, else the objective function would not be able to evaluate which genotype is better than any other. And there would be no way to progress towards genotypes of ever higher “fitness.” And you acknowledge this in your post.

    You just want to quibble over what constitutes a target, when your claim was about what is known about the genotype. You simply cannot evaluate these “genotypes” without programming in information about the desired sequence. Whether there is a single genotype that has the highest fitness of them all is a red herring.

  4. Mung: What about that one that created an antenna out of nothing? The clock one. The cars one.

    No targets.

  5. Mung: And yet there is, else the objective function would not be able to evaluate which genotype is better than any other. And there would be no way to progress towards genotypes of ever higher “fitness.”

    A fitness function is not a target. And in any case, there’s no fitness function in Avida, yet there are still differences in fitness.

  6. Mung: You simply cannot evaluate these “genotypes” without programming in information about the desired sequence.

    Sure you can. You just make a simple evaluation after recording the performance of several different entities. Is A bigger than B or C? Copy the biggest, allow some chance of mutation.
    There is no information about what particular solution, or even class of solutions, will emerge from this.

    Where is there information about the outcomes in this?
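    The recipe in this comment — record performance, copy the biggest, allow some chance of mutation — fits in a dozen lines. Here is a minimal sketch; the bit-sum “performance” measure is a placeholder standing in for any recorded behaviour (antenna gain, how far a BoxCar2D car travels, and so on):

```python
import random

GENOME_LEN = 20
POP_SIZE = 3
MUT_RATE = 0.05

def performance(genome):
    # Placeholder measurement: in a real application this would be a
    # recorded behaviour, not anything built into the algorithm itself.
    return sum(genome)

def mutate(genome):
    # Copy with some chance of mutation at each position.
    return [g ^ 1 if random.random() < MUT_RATE else g for g in genome]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]

for _ in range(200):
    # "Is A bigger than B or C?" -- compare recorded performances.
    best = max(pop, key=performance)
    # "Copy biggest, allow some chance of mutation" (one copy kept intact).
    pop = [list(best)] + [mutate(best) for _ in range(POP_SIZE - 1)]

print(performance(max(pop, key=performance)))
```

    Nothing in the loop names a destination; the comparison alone drives the scores upward.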

  7. Allan Miller:
    Freelurker,

    I’m as guilty as anyone of using ‘GA’ as shorthand for a model of evolution. Though, I do find it striking that something entirely inspired by simple genetic mechanisms in populations – something deemed not to actually work ‘out there’ in any significant sense – finds widespread application in the bottom-line-driven world of commercial data processing.

    Something to note about the application of GA’s: they are used to analyze static situations, not dynamic situations like biological populations changing over time.

    The parameters of a GA are not time-dependent. The “generations” are just steps in an iterative mathematical procedure; they don’t represent changes in the real world.

  8. If I understand the ‘target’ objection correctly, it is that only a subset of the space of all possible genotypes will be returned by the process. It favours fitter genotypes, if they are accessible, so yeah, bound to be a subset.

    But isn’t that what happens in reality, too?

  9. Mung: And it’s not uncommon to see evolution presented in just that way, as a procedure for problem solving (and as a search).

    Yes, I’ve seen this too in the past but I don’t now have any copies of such presentations to comment on.

  10. Freelurker: Yes, I’ve seen this too in the past but I don’t now have any copies of such presentations to comment on.

    Problem solving in cases where solutions are not known and where traditional methods are ineffective or slow.

    Cases where incremental improvement is desirable.

  11. Mung: You simply cannot evaluate these “genotypes” without programming in information about the desired sequence

    To me, this objection has real force when applied to many of the simple EA models which are claimed to be models of biological evolution (at least, biological evolution constrained to NS mechanism).

    I think the root of the problem is that the fitness evaluation function is based on an external standard, not something intrinsic to the simulated entity and its needs.

    I suggest that the Avida simulations of artificial life competing for CPU resources address this issue, since there fitness is measure of something that plausibly can be considered intrinsic to the needs of the simulated organism and its survival.

    It would be fair to point out that those interests, and the standard for meeting them, have been built into the simulation. But the counter would be that abiogenesis is not addressed by biological evolution, so we are allowed to do that and still claim to be producing a fair model of biological evolution.

    ETA: There are EA’s of cars which measure fitness by mileage per unit resource. Possibly that could be interpreted as intrinsic need. It seems forced to me. Some kind of simulated last-car-running demolition derby might be better! Perhaps that comes down to one’s intuitive judgement of intrinsic.

  12. BruceS: Mung: You simply cannot evaluate these “genotypes” without programming in information about the desired sequence

    To me, this objection has real force when applied to many of the simple EA models which are claimed to be models of biological evolution (at least, biological evolution constrained to NS mechanism).

    You surprise me. To me, this objection has no force at all.

    I think the root of the problem is that the fitness evaluation function is based on an external standard, not something intrinsic to the simulated entity and its needs.

    Granted, it is important to be clear about what aspects of evolution are being modeled, and what conclusions are drawn from the results of the modeling.
    But having said that, the perceived “externality” of the fitness function just does not strike me as the show-stopper that IDists seem to think it is. There has to be some sort of genotype->phenotype mapping, and Mung’s claim that the existence of a genotype->phenotype mapping function constitutes “programming in information about the desired sequence” is bogus.
    Given that Mung never seems willing to explain what he means, perhaps you could help me out here: why does the degree to which one views the fitness function as “forced” affect conclusions regarding the ability of GA’s to find optima that are rare beyond the UPB?
    Thanks.

  13. Rumraket: Voldemort?

    Because of self-deception… Do you know what that means?
    ETA: I don’t really care if you are doing it to yourself because of the cancer thing… It can be difficult… I get it…

  14. DNA_Jock:perhaps you could help me out here: why does the degree to which one views the fitness function as “forced” affect conclusions regarding the ability of GA’s to find optima that are rare beyond the UPB?

    It does not.

    My concern is with it being an appropriate model for biological evolution. I am saying that it misses something important about biological organisms. Namely, they have intrinsic interests. The degree to which they are successful in meeting those interests is related to their fitness.

    I do think that the force of the objection resides in whether one agrees this intrinsic/extrinsic divide is important and, given that, how one judges intrinsic versus extrinsic.

    Just to be clear, I’m not saying this has anything to do with the concept of “information being smuggled in”, which is too vague for me.

    Perhaps here is a useful analogy. We do not program deep learning AIs; they learn on their own.

    The analogy is this: I think we want to be able to say we did not program the fitness into the organism, rather it is inherent in its staying alive and reproducing successfully (as simulated). The simulated niche then has to challenge that capability directly.

    ETA: In the Avida example I gave (which I have not studied deeply), these ideas are met by having organisms compete for CPU time. If program execution time is taken as simulated living, then it seems reasonable to me to say that is an intrinsic interest. The niche limits it. I did not check on how the niche fitness landscape is structured and changes; perhaps it could change the allocation algorithm for CPU?

  15. BruceS,

    Just to completethe thought in my previous post:

    My understanding is that certain ID luminaries start with the assumption that “evolution is just a search”. Under that assumption, my concern would not apply: GA searches are effectively agreed to as appropriate simulations. By definition a search involves something imposed as a parameter of the search algorithm.

    For that case, the issue is about making sure the topology of the searched/fitness landscape meets biological constraints (eg is not white noise).

  16. BruceS,

    Thank you for the explanation. I see what you mean. There is a dimension to GA’s, whereby the more confected the fitness function is, the less well it captures the ‘survival is intrinsic to the organism’ aspect of evolution.
    I have a prejudice, no more, that the ‘the metric must be survival and fertility’ limitation is not really a significant constraint. I am far more concerned about the topology and navigability of the fitness surface.
    If you are looking for highly ‘intrinsic’ applications, there’s always Core War; perhaps an interesting model of predator-prey relationships.

  17. The problem with GAs that evolve due to intrinsic fitness is that their self interest may not coincide with ours, unless our interest is building a didactic model.

    Commercial GAs solve problems that have monitary value. I view them as analogous to power tools. They amplify our abilities.

    Gregory hates this, but there is a broad definition of evolution that encompasses incremental change guided by some kind of “reward” mechanism.

    No one designed an iPhone from first principles. Phones evolved incrementally. The details do not resemble biology, but it is, broadly defined, evolution.

  18. Rumraket: A fitness function is not a target.

    No one said that the GA is trying to find the fitness function. That would be silly. That’s like telling someone they are getting hotter or getting colder without knowing with respect to what it is that they are getting closer to or farther away from.

    The fitness function is a guiding hand. It doesn’t know what it’s purpose is. That’s defined by the intelligent designer(s) of the GA.

  19. Rumraket: Where is there information about the outcomes in this?

    Are you serious? Let’s say you define a genotype with only two characters that can only take one of two values. 00, 01, 10, 11. Right there you have provided information that the outcome must be within that set. That’s information about the outcome.

  20. Mung,

    Are you serious?
    Rumraket was objecting to your statement “You simply cannot evaluate these “genotypes” without programming in information about the desired sequence. ”
    which remains wrong and (by you) undefended, since your example is off-topic.

  21. Mung: Are you serious? Let’s say you define a genotype with only two characters that can only take one of two values. 00, 01, 10, 11. Right there you have provided information that the outcome must be within that set. That’s information about the outcome.

    And if I ask you to guess which of my hands holds a prize, have I given you front-loaded information? (Namely that it’s in one or the other).

    Usually when we compute information we start there, not with the assumption that prize could be anywhere in the universe.

  22. Mung: No one said that the GA is trying to find the fitness function. That would be silly.

    I didn’t take you to be insinuated that the fitness function itself is a target, but rather that the fitness function provides a target. But it doesn’t, it just makes a comparative evaluation between individuals. It doesn’t say where anything will end up, it has no idea.

  23. Mung: Are you serious?

    Deadly.

    Let’s say you define a genotype with only two characters that can only take one of two values. 00, 01, 10, 11. Right there you have provided information that the outcome must be within that set. That’s information about the outcome.

    That’s like saying information about the outcome of the chance roll of 10 dice is contained in the fact that the dice can each only take one out of six values. If that’s really what you mean by information being present “about the outcome”, it is a trivial and irrelevant statement because that does not imply there is any sort of bias in the program that makes functional, or complex, or information-rich outcomes more likely than not.

    Can you even conceive of a program without some boundary conditions in this way? The computer, by necessity of it’s physical architecture, can only operate with ones and zeroes, so by extension whatever program you make to simulate a GA will have results represented in ones and zeroes, and any finite string of ones and zeroes will have a limited number of permutations, within which that outcome will lie. How is this smuggling in information about the outcomes?

    To take your analogy to real evolution it’s sort of like saying that because organisms are made of atoms from the periodic table of elements, which obey the laws of physics, then information about the outcomes of evolution are smuggled into the process: They’re going to be made of atoms.

    Holy shit, really?

  24. Whatever argument Mung makes about information being supplied by the simple fact that we are told that there are a bunch of possible genotypes, this does not make any argument about front-loading. We can see that because calculations of CSI consider a set of possible genotypes. Say, S sites with 4 possible bases at each, so that there are 4^S possible genotypes.

    To calculate SI and see if it is big enough for there to be CSI, we define a target set of genotypes, such as those that have firness greater than or equal to a threshold value. Then we calculate what fraction of all sequences achieve that threshold. Then we take -\log_2 of that fraction.

    So notice. We started with all possible genotypes, not everything in the entire universe, nor everything in all possible universes. Whatever amount of information might be calculated in narrowing down from the whole universe to the set of all possible DNA sequences S sites long, it is not used in the calculation of SI. Only the further narrowing down to sequences whose fitness exceeds the threshold is used. When ID advocates argue for font-loading, they cannot be arguing about narrowing down to the set of all 4^S DNA sequences, for their CSI assumes that for a start, at, before any amount of specified information is calculated.

  25. Somehow I’m reminded of blackjack in Vegas. “At random”, the odds favor the house, where “at random” means that all subsequent cards to be dealt have an equal probability of coming up.

    But some folks figured out that we’re not dealing entirely with random distributions after some visible cards have already been dealt. These cards can no longer be in the deck, which alters the probabilities of getting subsequent cards. That is, the cards already dealt represent “front loading” of subsequent probabilities. And this front-loaded information allowed really good card counters to alter the odds to favor themselves over the house. But this is a special situation not applicable to evolution.

    (And the house devised three methods to combat it — shuffling very often, using multiple decks in the shoe, and refusing to allow card counters to play at all. The first two methods slow play down, which slows down the rate the house wins money. So the third is widely adopted.)

  26. Mung:

    The fitness function is a guiding hand. It doesn’t know what it’s purpose is. That’s defined by the intelligent designer(s) of the GA.

    Can you be more specific about why this is a problem? Here is what I mean:

    1. GA examples in this context are intended to teach/illustrate how biological NS can optimize fitness.

    2. The nature of biological fitness is front loaded by abiogenesis. By definition, the living organisms that result from any viable model of abiogenesis must be able to continue to live long enough to reproduce.

    3. To model/illustrate this biology of evolution based on the given biological starting point of living organisms, the GA must include a definition of fitness consistent with optimizing in the landscape it is based on. That is not an issue for what GA’s aim to do in this context, ie illustrate the generic effectiveness of the concept fitness to optimize.

    So where is your issue? If you think it is in 3, can you be specific about why?

  27. Rumraket: How is this smuggling in information about the outcomes?

    I don’t believe I said anything about “smuggling in information.”

    You realize, don’t you, that by determining the constraints you determine the size of the search space, right? So in essence you are saying that if there is a target or targets, look for it within this space of possibilities. Information about where the target or targets can be found.

    You don’t think that’s important to the success or failure of a GA to find a particular solution to the problem?

    Oh, and “genotypes” don’t always have to be represented as 0’s and 1’s. And even if they are selection can be done at the level of a “phenotype.”

    The idea is if you don’t give the program some idea of what to look for and where to look for it what’s the point of writing the program?

  28. Mung,

    You are being too kind. Determining that those which consume the most resources are the most successful IS setting a target.

  29. Mung: You realize, don’t you, that by determining the constraints you determine the size of the search space, right? So in essence you are saying that if there is a target or targets, look for it within this space of possibilities. Information about where the target or targets can be found.

    You don’t think that’s important to the success or failure of a GA to find a particular solution to the problem?

    Off-topic, and (additionally) wrong.
    It’s a two-fer 😀

  30. I have important information about how to win at chess. Any move you make should be to one of the 64 squares on the board and not to anywhere else.

    I await your grateful applause.

  31. Apparently the Designer carefully crafted the Earth (through another gradual process) and the environment (the fitness function) so that evolution (the GA) could proceed and eventually produce the Mungs and phoodoos of the world by frontloading all the “information” in nature, but then unfortunately at some point it got stuck and still needed intervention to get it rolling again (oops). We’re unsure if intervention by the designer was required only to get the bacterial flagellum going, or if He was such an incredibly incompetent designer that the process failed to produce any new “kinds”, but we know something for sure, GA’s are intelligent design at work. Only that GA’s usually work without external intervention… ok, nevermind, nothing to see here

    And that’s ID for y’all

  32. Mung: applause

    Why have an increased search space where nothing that is “more fit” will be found? Just for grins?

    That would be a good question if the real world had been designed like a GA, don’t you think?

    From an evolutionary perspective, the search space gets larger as sequences get longer, which in fact helps explore a lot more sequence space (and find fitter solutions) than having it limited to a few nucleotide long sequences

  33. Joe Felsenstein:
    I have important information about how to win at chess.Any move you make should be to one of the 64 squares on the board and not to anywhere else.

    I await your grateful applause.

    Clearly your winning strategy was front-loaded to succeed by this remarkable hidden insight. I will immediately inform the World Chess Federation of this fraud!

  34. Mung: Why have an increased search space where nothing that is “more fit” will be found? Just for grins?

    You’ve stopped making sense I’m afraid.

  35. BruceS: Can you be more specific about why this is a problem?

    It’s only a problem when people deny it. I don’t know why they think it’s necessary to deny something so obvious. Maybe they think that GAs tell us something interesting about how evolution works.

    1. GA examples in this context are intended to teach/illustrate how biological NS can optimize fitness.

    If biological NS has a predefined goal or target (or targets) and something to tell it whether it is getting hotter or colder with respect to that goal or target (or targets), yes.

    …the GA must include a definition of fitness consistent with optimizing in the landscape it is based on.

    Something other than who leaves the most offspring then?

    Evolution itself is supposed to be so generic that it can solve any optimization problem. Not only that, but it can find target sequences not before known and then optimize them. GAs just don’t work like evolution is supposed to work.

  36. Mung: If biological NS has a predefined goal or target (or targets)

    It doesn’t, and neither does a GA. It has boundary conditions, limitations, often times set up by the limitations of the computer that runs the GA.

  37. Mung: Evolution itself is supposed to be so generic that it can solve any optimization problem.

    Who ordained that it is supposed to be able to do that? Where?

    Not only that, but it can find target sequences not before known and then optimize them.

    Just like a GA.

  38. Mung: Why have an increased search space where nothing that is “more fit” will be found? Just for grins?

    That’s what we say when you talk about restricting ourselves to one of the set of all possible genotypes as if that’s front-loading information.

  39. Mung:

    Evolution itself is supposed to be so generic that it can solve any optimization problem. Not only that, but it can find target sequences not before known and then optimize them. GAs just don’t work like evolution is supposed to work.

    As best I can tell, your concern is similar to the one I expressed in upthread post on intrinsic needs of organisms versus the extrinsic goals inserted by the GA designer. Does that post capture and address your concerns? Here are the two links:

    What was the most significant scientific development in Intelligent Design in 2018?

    What was the most significant scientific development in Intelligent Design in 2018?

    I also think it is imprecise to say biological evolution solves optimization problems. The scientific model called biological evolution, specifically the mechanisms of NS, can explain the appearance of design in organisms. Biological evolution need not optimize in the sense of finding the globally best solution to a single problem.

    Science is our third party view of the world, It is not the viewpoint of the organisms. Organisms have an intrinsic interest to live and reproduce. Not to optimize.

    NS is a mechanism that science has found to successfully model changes in population genetics which lead to the appearance of adapting to the organisms changing niche. But that is the scientist’s view/model of what is happening, not the organism’s. That is the point I am trying to make.

    As per upthread post, my understanding of Avida models is that they do better capture an organism’s viewpoint/intrinsic needs. They model the organism as an artificial life in a virtual environment to try to do so.

    Your points of GA not capturing salient aspects of biological evolution are correct, IMHO. Such simplifications are part of all scientific models.: models abstract and idealize. Nonetheless, they still may be useful. The scientific community evaluates that as part of doing science.

    It seems to me that GAs as models of biological evolution are not used by biologists for doing science. They are used for populatizations and for teaching, not research. If any of the biologists are paying attention, I welcome feedback on that.

    GAs as algorithms for optimization in general serve a different purpose.

  40. Joe Felsenstein: That’s what we say when you talk about restricting ourselves to one of the set of all possible genotypes as if that’s front-loading information.

    So why do we do it then Joe? Why do our “genotypes” in a GA fit the problem we are trying to solve?

    For example, why doesn’t Dawkins include more than 27 characters in his WEASEL program and why did people object when I restricted the set of characters even further, to only those that were actually needed?

  41. Mung: So why do we do it then Joe? Why do our “genotypes” in a GA fit the problem we are trying to solve?

    For example, why doesn’t Dawkins include more than 27 characters in his WEASEL program and why did people object when I restricted the set of characters even further, to only those that were actually needed?

    Dawkins devised a model of cumulative selection in breeding. He called it the monkey/Shakespeare model. He wrote a computer program for simulation of the model. He executed the program in order to generate examples of cumulative selection. He did not execute the program in order to obtain a solution to a problem. That is why there is no statement of a problem in The Blind Watchmaker.

    The formal similarity of Dawkins’s simulation of a monkey at a typewriter to an evolutionary algorithm for solving a problem does not make it an evolutionary algorithm. Given that there is no problem, there is no search for a solution to a problem. What Dawkins implemented was a biased sampling process, along with a monitor of the process, not a search.

    (The umbrella term evolutionary computation, covering genetic algorithms, evolutionary algorithms, and genetic programming, has been in widespread use for 25 years. The two main journals in the field are Evolutionary Computation and the IEEE Transactions on Evolutionary Computation. Why people arguing about ID insist on referring to all forms of evolutionary computation as genetic algorithms is a mystery to me. To be honest, it strikes me as a sign that they don’t know what they’re talking about. Dawkins’s algorithm more closely resembles a conventional evolutionary algorithm than any conventional genetic algorithm.)

  42. Tom English: Dawkins’s algorithm more closely resembles a conventional evolutionary algorithm than any conventional genetic algorithm.

    You mean because it gives a target, and says search until you find that target? And each time you get close we will tell you.

    Ok.

Leave a Reply

This site uses Akismet to reduce spam. Learn how your comment data is processed.