Functional information and the emergence of biocomplexity

Journal club time again 🙂

I like this paper: Functional information and the emergence of biocomplexity by Hazen et al., PNAS, 2007, which I hadn’t been aware of until now.

I’ve only had time to skim it so far, but as it seems to be an interesting treatment of the concepts variously referred to by ID proponents as CSI, dFCSI, etc, I thought it might be useful.  It is also written with reference to AVIDA.  Here is the abstract:

Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define “functional information,” I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA–GTP binding energy), I(Ex) = −log2[F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function ≥ Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of function.
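A quick way to get a feel for the definition is to compute I(Ex) exhaustively for a toy letter-sequence system (the paper illustrates the idea with letter sequences, but the alphabet, target word, and degree-of-function measure below are my own choices, purely for illustration):

```python
from itertools import product
from math import log2

ALPHABET = "ABCD"  # toy 4-letter alphabet (my assumption, not the paper's)
TARGET = "CAB"     # degree of function = number of letters matching this word

def degree(seq):
    """Degree of function Ex: count of positions matching TARGET."""
    return sum(a == b for a, b in zip(seq, TARGET))

# Enumerate every possible configuration of the system.
configs = ["".join(p) for p in product(ALPHABET, repeat=len(TARGET))]

for ex in range(len(TARGET) + 1):
    # F(Ex): fraction of configurations with degree of function >= Ex
    f = sum(degree(s) >= ex for s in configs) / len(configs)
    i = -log2(f) if f < 1 else 0.0
    print(f"Ex = {ex}: F(Ex) = {f:.4f}, I(Ex) = {i:.2f} bits")
```

With a 4-letter alphabet and a 3-letter target, an exact match has F(Ex) = 1/64, giving I(Ex) = 6 bits (2 bits per letter); lower degrees of function cost correspondingly fewer bits.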

I thought it would be interesting to look at, following the thread on Abel’s paper.  I’d certainly be interested in hearing what our ID contributors make of it 🙂

 

155 thoughts on “Functional information and the emergence of biocomplexity”

  1. Wow, how can you say you haven’t been made aware of that paper when it has been linked to many times over on UD in discussions you were having?

    I linked to it here on this blog a few times already.

    I love the paper’s calculations- I tell evos to use their equations to determine the CSI of the sequence they are investigating.

    But anyway- stuff just “emerges”- that they do not provide evidence for. And AVIDA has been laid bare by reality.

    1- Avida “organisms” are far too simple to be considered anything like a biological organism

    2- Avida organisms “evolve” via unreasonable parameters:

    The effects of low-impact mutations in digital organisms

    Chase W. Nelson and John C. Sanford

    Theoretical Biology and Medical Modelling, 2011, 8:9 | doi:10.1186/1742-4682-8-9

    Abstract:

    Background: Avida is a computer program that performs evolution experiments with digital organisms. Previous work has used the program to study the evolutionary origin of complex features, namely logic operations, but has consistently used extremely large mutational fitness effects. The present study uses Avida to better understand the role of low-impact mutations in evolution.

    Results:

    When mutational fitness effects were approximately 0.075 or less, no new logic operations evolved, and those that had previously evolved were lost. When fitness effects were approximately 0.2, only half of the operations evolved, reflecting a threshold for selection breakdown. In contrast, when Avida’s default fitness effects were used, all operations routinely evolved to high frequencies and fitness increased by an average of 20 million in only 10,000 generations.

    Conclusions:

    Avidian organisms evolve new logic operations only when mutations producing them are assigned high-impact fitness effects. Furthermore, purifying selection cannot protect operations with low-impact benefits from mutational deterioration. These results suggest that selection breaks down for low-impact mutations below a certain fitness effect, the selection threshold. Experiments using biologically relevant parameter settings show the tendency for increasing genetic load to lead to loss of biological functionality. An understanding of such genetic deterioration is relevant to human disease, and may be applicable to the control of pathogens by use of lethal mutagenesis.

  2. It makes you wonder how long Neo-Darwinists are going to keep denying that functional complexity is a measurable commodity which has serious implications about the capacity of non-ID forces and interactions to sufficiently explain evolution.

  3. Joe G: But anyway- stuff just “emerges”- that they do not provide evidence for. And AVIDA has been laid bare by reality.

    1- Avida “organisms” are far too simple to be considered anything like a biological organism

    2- Avida organisms “evolve” via unreasonable parameters:

    Joe, I am sure that you are aware of this paper. But have you actually read it? Avida organisms aren’t the only example that is considered. From the paper:

    Three examples (letter sequences, the artificial life platform Avida, and RNA aptamers) serve to illustrate the concept of functional information.

  4. William J. Murray,

    William,

    Why don’t you read the paper and join the discussion? You stayed away from the discussion of Abel’s paper. Don’t miss this one. It has some actual content.

  5. They slapped the emergence assertion in sentence 1. Looks like similar Dembskiesque −log calculations. I’ll read this today.

  6. olegt: Joe, I am sure that you are aware of this paper. But have you actually read it? Avida organisms aren’t the only example that is considered. From the paper:

    AVIDA was the only one that was supposed to represent Darwinian/neo-Darwinian evolution.

  7. Joe G: AVIDA was the only one that was supposed to represent Darwinian/neo-Darwinian evolution.

    That’s clearly not so. Another excerpt:

    The in vitro evolution of RNA aptamers (e.g., refs. 47 and 48) provides a dramatic illustration of the evolution and selection of systems with high functional complexity. Aptamer evolution experiments begin with large populations (up to 10^16 randomly generated RNA sequences), which are subjected to a selective environment, a test tube coated with a target molecule, for example. A small fraction of the random RNA population will selectively bind to the target molecules. Those RNA strands are recovered, amplified with mutations (through reverse transcription, PCR, and transcription), and the process is repeated several times. Each cycle yields a more restricted RNA population with improved binding specificity (i.e., a higher degree of function, Ex).

    Read the paper, Joe, try to understand its contents, and then make your arguments.

  8. olegt: Well, neither can Avida organisms. That refutes your own argument.

    Seriously, both cases represent Darwinian evolution. There is random variation and differential reproduction. These are the two main ingredients of Darwinian evolution. Both numerical experiments with Avida and in vitro experiments with RNA aptamers have that.

    AVIDA organisms don’t reproduce?

  9. Joe G: AVIDA organisms don’t reproduce?

    Not “on their own.” They are reproduced by the program, in silico. Just like the RNA aptamers are reproduced artificially in vitro.

  10. Hazen et al.: For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA–GTP binding energy), I(Ex) = −log2[F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function ≥ Ex.

    It’s important to note that the functional information is calculated for a given function and degree of function. In other words, the measure changes depending on how you define how functional the configuration is within the system. Change the system, the function or the degree of function, and the measure of functional information changes.
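The point is easy to demonstrate: hold the configuration space fixed, swap the function, and I(Ex) changes. A sketch (both “functions” here are invented for illustration, not taken from the paper):

```python
from math import log2

# All configurations of a toy system: every bitstring of length 8.
N = 8
configs = [format(i, f"0{N}b") for i in range(2 ** N)]

def functional_information(degree_fn, ex):
    """I(Ex) = -log2 of the fraction of configurations with degree >= ex."""
    f = sum(degree_fn(c) >= ex for c in configs) / len(configs)
    return -log2(f)

# Two different (made-up) functions defined over the *same* system:
def count_ones(c):
    """Function A: degree = number of 1s in the string."""
    return c.count("1")

def longest_run(c):
    """Function B: degree = length of the longest run of consecutive 1s."""
    return max(len(r) for r in c.split("0"))

# Same system, same threshold, different function, different I(Ex):
print(functional_information(count_ones, 4))   # 163/256 qualify: ~0.65 bits
print(functional_information(longest_run, 4))  # 48/256 qualify: ~2.42 bits
```

With the threshold held at 4, the count-of-ones function qualifies 163 of the 256 configurations while the longest-run function qualifies only 48, so the measured functional information depends on the chosen function and degree, not just the system.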

  11. William J. Murray:
    It makes you wonder how long Neo-Darwinists are going to keep denying that functional complexity is a measurable commodity which has serious implications about the capacity of non-ID forces and interactions to sufficiently explain evolution.

    Who is denying that “functional complexity is a measurable commodity”?

    You can measure any quantitative construct! But first you have to define your construct, and figure out how to measure it.

    The whole reason I posted this paper (having belatedly become aware of it) is that I hoped it might help us actually discuss the very issue you raise – whether there is some informational construct that can be measured, and that can be shown, as ID proponents claim, to be impossible to create without an Intelligent Designer.

    Have you read the paper?

  12. olegt:
    William J. Murray,

    William,

    Why don’t you read the paper and join the discussion? You stayed away from the discussion of Abel’s paper. Don’t miss this one. It has some actual content.

    And it’s much more readable. The equations are in much bigger print, for a start!

  13. OK, I’m down to the end of “Functional Information as a Measure of System Complexity”. In many ways it seems simply a much more elegant version of Dembski’s CSI, but instead of “compressibility” for the S part (“as or more compressible”) they have degree of functionality (“performs function as well as, or better than”). So I guess that would make it CFI :p

    Better than adding the F in post hoc, I think.

  14. Zachriel: It’s important to note that the functional information is calculated for a given function and degree of function. In other words, the measure changes depending on how you define how functional the configuration is within the system. Change the system, the function or the degree of function, and the measure of functional information changes.

    What Zachriel said. Functions are not inherent properties of things. They are descriptions that depend on the context and the specified goals the observer chooses to associate to the system (does that make them subjective? I think so, but I missed that thread, sorry!). They can be useful to help us explain what is happening, but not necessarily to explain how things came to be the way they are.

  15. Geoxus: What Zachriel said. Functions are not inherent properties of things. They are descriptions that depend on the context and the specified goals the observer chooses to associate to the system (does that make them subjective? I think so, but I missed that thread, sorry!). They can be useful to help us explain what is happening, but not necessarily to explain how things came to be the way they are.

    I quite like the definition of function given in the paper:

    All complex systems alter their environments in one or more ways, which we refer to as functions.

  16. F(Ex) is the fraction of all possible configurations of the system that possess a degree of function ≥ Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree.

    It is interesting to unpack the assumptions here:

    * It is possible to enumerate all possible configurations of the system in an unambiguous way
    * There is a clearly understood concept of an arbitrary configuration and in this case all configurations are equally probable

    There is also, I suspect, a hidden assumption that F(Ex) is in some way relevant to the probability of achieving a functional configuration in the real world.

  17. Since I became interested in evolutionary computing I have often thought of evolution (both biological and technological) as a process of function induction.
    “Function” is a very important concept, and despite what many people say, not at all unique to biology. I once read a philosopher state that the concept of “function” was uniquely biological (and technological).
    I went over to my bookshelf, to test my memory, and opened Joos’ hoary tome on “Theoretical Physics” and found “functions” on virtually every page. (Some of those functions took many pages to describe!)
    Contrary to that philosopher, I concluded that “function” is part of the causal vocabulary of all sciences, and not in fact uniquely biological or technological.
    As EL told me, a function is a cause.
    Does that make any difference in these discussions? I can’t see that it does.

    I’m just asking.

  18. Elizabeth: I quite like the definition of function given in the paper:

    All complex systems alter their environments in one or more ways, which we refer to as functions.

    I think it’s not that simple. I’d argue that following that definition, every process and interaction in a system can be considered as a function, and that a function is not just something that happens, but a “proper” way for things to happen (from this would follow the notion of “degree of function”?).

    Anyway, I have yet to read the paper in detail and I won’t be able to do that until next week, so I’d better shut up for now.

  19. Rock,

    John Wilkins addresses biological function from a more general perspective, including a parallel with physics, in this unpublished manuscript. If I read him correctly, my views are very close to what he wrote there (how convenient for me! :P).

  20. Please can we keep this thread for focussed discussion of the paper linked in the OP, thanks. For general chatter about GAs please go here where I have moved the somewhat tangential posts.

  21. William J. Murray: It makes you wonder how long Neo-Darwinists are going to keep denying that functional complexity is a measurable commodity which has serious implications about the capacity of non-ID forces and interactions to sufficiently explain evolution.

    It’s fairly easy to get the general idea from the first example in the paper.

    (1) Pick some function you would like a complex assembly to perform (e.g., catalyze a reaction at some rate)

    (2) Find the number of configurations of this assembly that will do this job at that rate or better.

    (3) Divide the total number of assemblies which get that specific job done at that rate or better by the total number of different ways one can configure the assembly.

    (4) Take the logarithm to base 2 (in order to express in number of bits of “information”) and put a minus sign in front of it. (Why call it “information?”)

    Ok, so that establishes a relationship between something you want your assembly to do with the fraction of all assemblies that can do it at some minimum or better.

    However, it doesn’t say anything about whether the “function” you specify is in any way viable in an environment that is also selecting system configurations that can do that function and many others as well. What someone thinks may be an important “function” may not have any long-term relevance in an environment that is changing and is selecting other arrangements of the system.

  23. Defining “functional information” and showing that a pure random mutational process cannot achieve the observed value of it gets you only so far. You have to also show that natural selection cannot achieve that. As far as I know this criterion, just like Dembski’s CSI, is useful for arguing that a monkey with an ATGC DNA typewriter could not, in any reasonable time, achieve organisms as well-adapted as those we see in real life. I don’t think that will be a great revelation to most biologists.

    To rule out that natural selection could do the job, you need something else, like Dembski’s Law of Conservation of Complex Specified Information (LCCSI). It would do the job, and would be the most important result in evolutionary biology since Darwin (and maybe before), if

    * … it were provable. Unfortunately Elsberry and Shallit in 2003 found a hole in Dembski’s sketch of the proof, and the hole has not since been plugged. And

    * … it would also have to be formulated so as to do the job. Unfortunately I pointed out in my article on Dembski’s arguments (Google: “Dembski Felsenstein”) that Dembski used a different specification before and after the evolutionary process works. If he were to have required that we measure using the same specification before and after, then it can be seen immediately that the LCCSI theorem is not true — there are lots of easy counterexamples. This criticism has also not been refuted.

    ID types who argue that no natural processes exist that could create Complex Specified Information, or Functional Information, are implicitly basing themselves on Dembski’s theorems. They seem unaware that this use of the theorem has been disproven: it is not proven and, even if proved, it is not formulated in a way that would make it achieve the desired purpose. Natural selection can be shown (for an example see my paper) to be able to put Specified Information (or Functional Information) into the genome.

    The calls for critics of ID to explain Specified Information or Functional Information in the genome are not, er, functional.

  24. sez wjm: “It makes you wonder how long Neo-Darwinists are going to keep denying that functional complexity is a measurable commodity which has serious implications about the capacity of non-ID forces and interactions to sufficiently explain evolution.”
    I am not aware of anyone, “Neo-Darwinist” or otherwise, who denies that functional complexity can be a measurable quantity. I am also not aware of any ID-pusher who, having invoked the concept of ‘functional complexity’, ever got around to defining the term with sufficient clarity/details that it would be possible to measure the quantity to which that ID-pusher has applied the label ‘functional complexity’.
    Of course, I could simply be ignorant of all those ID-pushers who, having invoked the concept of ‘functional complexity’, genuinely have defined said concept to a level of clarity/detail sufficient that it’s possible to measure the concept to which the label ‘functional complexity’ was applied. If I am indeed ignorant of this particular matter, perhaps you might care to help me remedy my ignorance, wjm?
    Please explain what you mean when you use the term ‘functional complexity’, and also please explain how I would go about measuring the stuff. If you don’t know all the details yourself, but you know of a particular ID essay/book/paper which you believe does define ‘functional complexity’ to a level of clarity/detail sufficient that the stuff can be measured, please tell me which essay/book/paper it is, so that I might be able to read it and see for myself whether you’ve understood that essay/book/paper.

  25. Joe Felsenstein:
    Defining “functional information” and showing that a pure random mutational process cannot achieve the observed value of it gets you only so far. You have to also show that natural selection cannot achieve that. As far as I know this criterion, just like Dembski’s CSI, is useful for arguing that a monkey with an ATGC DNA typewriter could not, in any reasonable time, achieve organisms as well-adapted as those we see in real life. I don’t think that will be a great revelation to most biologists.

    To rule out that natural selection could do the job, you need something else, like Dembski’s Law of Conservation of Complex Specified Information (LCCSI). It would do the job, and would be the most important result in evolutionary biology since Darwin (and maybe before), if

    * … it were provable. Unfortunately Elsberry and Shallit in 2003 found a hole in Dembski’s sketch of the proof, and the hole has not since been plugged. And

    * … it would also have to be formulated so as to do the job. Unfortunately I pointed out in my article on Dembski’s arguments (Google: “Dembski Felsenstein”) that Dembski used a different specification before and after the evolutionary process works. If he were to have required that we measure using the same specification before and after, then it can be seen immediately that the LCCSI theorem is not true — there are lots of easy counterexamples. This criticism has also not been refuted.

    ID types who argue that no natural processes exist that could create Complex Specified Information, or Functional Information, are implicitly basing themselves on Dembski’s theorems. They seem unaware that this use of the theorem has been disproven: it is not proven and, even if proved, it is not formulated in a way that would make it achieve the desired purpose. Natural selection can be shown (for an example see my paper) to be able to put Specified Information (or Functional Information) into the genome.

    The calls for critics of ID to explain Specified Information or Functional Information in the genome are not, er, functional.

    Natural selection is merely a result- if you have differential reproduction due to heritable random variation, you have natural selection as the result. It doesn’t do anything and in the end whatever is “good enough” is what survives.

    That said if it is ever demonstrated that NS or some similar blind and undirected process could produce what IDists call CSI, then you would have refuted ID.

    I eagerly await your paper.

  26. Defining “functional information” and showing that a pure random mutational process cannot achieve the observed value of it gets you only so far. You have to also show that natural selection cannot achieve that.

    Theoretically, adding natural selection adds nothing to the capacity of chance mutation to achieve a target island of function. Simple logic. All natural selection does is weed out information. It doesn’t create anything new. Natural selection decreases the odds of chance mutation in acquiring significant complex functional islands because it limits pathways of aggregate variance to that which can pass the natural selection filter.

    If there was no natural selection – IOW, if nothing died or limited reproduction – all biological pathways would be explored much faster and much more thoroughly. So, in a simulation, if such targets cannot be expected to be acquired in our universe via pure, non-selection chance (infinite monkey theory), adding natural selection doesn’t help, because it only removes and trashes papers coming out of most of the monkeys’ typewriters.

  27. Cubist:
    sez wjm: “It makes you wonder how long Neo-Darwinists are going to keep denying that functional complexity is a measurable commodity which has serious implications about the capacity of non-ID forces and interactions to sufficiently explain evolution.”
    I am not aware of anyone, “Neo-Darwinist” or otherwise, who denies that functional complexity can be a measurable quantity. I am also not aware of any ID-pusher who, having invoked the concept of ‘functional complexity’, ever got around to defining the term with sufficient clarity/details that it would be possible to measure the quantity to which that ID-pusher has applied the label ‘functional complexity’.
    Of course, I could simply be ignorant of all those ID-pushers who, having invoked the concept of ‘functional complexity’, genuinely have defined said concept to a level of clarity/detail sufficient that it’s possible to measure the concept to which the label ‘functional complexity’ was applied. If I am indeed ignorant of this particular matter, perhaps you might care to help me remedy my ignorance, wjm?
    Please explain what you mean when you use the term ‘functional complexity’, and also please explain how I would go about measuring the stuff. If you don’t know all the details yourself, but you know of a particular ID essay/book/paper which you believe does define ‘functional complexity’ to a level of clarity/detail sufficient that the stuff can be measured, please tell me which essay/book/paper it is, so that I might be able to read it and see for myself whether you’ve understood that essay/book/paper.

    Hellooooo?! Just read the paper in the OP….

  28. William J. Murray: Theoretically, adding natural selection adds nothing to the capacity of chance mutation to achieve a target island of function. Simple logic. All natural selection does is weed out information. It doesn’t create anything new. Natural selection decreases the odds of chance mutation in acquiring significant complex functional islands because it limits pathways of aggregate variance to that which can pass the natural selection filter.

    Fundamental misconception.

    There is no “target” in evolution. Just look at the billions of different life forms that exist and have existed. Consider all the other forms of condensed matter in the universe and the process by which these came about. If there is a niche, physics and chemistry find it.

    “Information” does not push atoms and molecules around. It is not all “spontaneous molecular chaos” down there.

  30. Joe G: Natural selection is merely a result- if you have differential reproduction due to heritable random variation, you have natural selection as the result. It doesn’t do anything and in the end whatever is “good enough” is what survives.

    That said if it is ever demonstrated that NS or some similar blind and undirected process could produce what IDists call CSI, then you would have refuted ID.

    I eagerly await your paper.

    Look at it on the web. It can easily be found using Dembski+Felsenstein in a search engine. Note particularly the section on “Generating Specified Information” which gives a simple gene frequency example of natural selection getting you out further on a fitness scale — exactly the sort of thing that is also being called Functional Information. The example generates about 2 bits of it in 84 generations, and that can be repeated elsewhere in the genome.

  31. William J. Murray: Theoretically, adding natural selection adds nothing to the capacity of chance mutation to achieve a target island of function. Simple logic. All natural selection does is weed out information. It doesn’t create anything new. Natural selection decreases the odds of chance mutation in acquiring significant complex functional islands because it limits pathways of aggregate variance to that which can pass the natural selection filter.

    If there was no natural selection – IOW, if nothing died or limited reproduction – all biological pathways would be explored much faster and much more thoroughly. So, in a simulation, if such targets cannot be expected to be acquired in our universe via pure, non-selection chance (infinite monkey theory), adding natural selection doesn’t help, because it only removes and trashes papers coming out of most of the monkeys’ typewriters.

    This is a misconception, I am afraid. The elimination of members of some genotypes shifts the distribution of genotypes, and it shifts it in the direction of increasing the frequencies of the more fit genotypes. And it is these frequencies (not the numbers) that affect the genotypes that show up in the next generation (after Mendelian genetics has done its work).

    The result is that in almost all cases the distribution of genotypes found at the start of the next generation is shifted towards more fit genotypes. If we have a scale which is fitness (and use that as the scale in calculating Functional Information), we find that natural selection plus Mendelian genetics is moving the distribution further and further out on that scale.

    If this is unclear I can explain a numerical example in detail.
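A minimal sketch of the kind of numerical example on offer (my own toy numbers, not the example from Felsenstein's paper): one haploid locus, two alleles, deterministic selection with no mutation or drift.

```python
# Toy haploid model: allele A (fitness 1.05) vs allele a (fitness 1.00).
# Each generation the frequencies are reweighted by fitness; selection only
# removes individuals, yet the *distribution* shifts toward the fitter allele.
wA, wa = 1.05, 1.00
p = 0.01  # starting frequency of the fitter allele A

for generation in range(300):
    mean_fitness = p * wA + (1 - p) * wa
    p = p * wA / mean_fitness  # frequency of A after selection

print(f"frequency of fitter allele after 300 generations: {p:.4f}")
```

Selection here only reweights by removal, yet the frequency of the fitter allele climbs from 1% to near fixation; measured on a fitness scale, the distribution of genotypes has moved further out, which is exactly the shift being described.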

  32. As Mike E states in his example (3), Hazen takes the number of equal-or-better solutions, divides by the total number of possible solutions, and takes the negative base-2 log, which represents the functional information. At its root, there appear to be similarities with the equation u = −log(p(x)), where u is uncertainty. It appears that the core ID arguments and Hazen’s argument are, in some sense, beating the same drum. In Shannon’s surprisal argument, p is the probability of the set of possible outcomes of the elements in a source population, which, as far as I understand, Dr.^2 Dembski uses in determining the initial premise for CSI; that is argued by measuring the compressibility of a string output from a population of elements with equiprobable frequencies, i.e., high in Shannon entropy. CSI, simply put, is high complexity, compressibility, and function identified in a string (where there is no conflation of Shannon and K-complexity arguments). As such, if complexity is established by calculating the frequency with which the source elements are likely to arrive on the scene, it then follows that the string can be isolated from the population and compressed.
    As I understand it, Hazen determines the weight of the function by considering that it is dynamic. This seems to be in no way at odds with the concept of CSI, as ID folks will argue that simply a function is needed.

    I would advance an assertion and say that Hazen’s I(Ex) shares a relationship to some degree with Shannon’s u, and the case might be argued that, in some sense, I(Ex) = u.
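For what it's worth, the suggested link to Shannon surprisal does hold in one narrow sense: under the paper's assumption of uniformly sampled configurations, I(Ex) is exactly the surprisal, −log2 p, of the event "a random configuration has degree of function ≥ Ex". A sketch (the system and function are my own toy choices):

```python
import random
from math import comb, log2

random.seed(0)

# Toy system: bitstrings of length 12; degree of function = number of 1s
# (chosen purely for illustration).
N, EX = 12, 9

# Functional information straight from the definition: F(Ex) counted exactly.
f_exact = sum(comb(N, k) for k in range(EX, N + 1)) / 2 ** N
i_exact = -log2(f_exact)

# Shannon surprisal of the event "a uniformly drawn configuration has
# degree >= Ex", estimated from the observed frequency in random samples.
trials = 200_000
hits = sum(bin(random.getrandbits(N)).count("1") >= EX for _ in range(trials))
i_sampled = -log2(hits / trials)

print(f"I(Ex) from the definition: {i_exact:.3f} bits")
print(f"surprisal from uniform sampling: {i_sampled:.3f} bits")
```

The two numbers agree up to sampling noise, but only because the configurations are drawn uniformly; the identity says nothing about the probability of reaching such configurations by an evolutionary process.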

  33. Joe G: That said if it is ever demonstrated that NS or some similar blind and undirected process could produce what IDists call CSI, then you would have refuted ID.

    Joe,
    I’d like to take you up on that challenge. I’d suggest gene duplication as a possible starting point. However, you’ve already noted that gene duplication is not necessarily a non-telic process. So can you provide an example we can use?

  34. WJM: adding natural selection doesn’t help, because it only removes and trashes papers coming out of most of the monkeys’ typewriters.

    This is a fundamental misconception. DNA sequences aren’t being typed de novo by any genetic equivalent of a monkey at a typewriter. They are being copied. All DNA sequence comes from a prior copy that must have been viable. NS trashes bad copy. In doing so, it retains ‘good copy’. But there is no reason to suppose that the present copy is the best possible for current circumstances. Non-trashed copy remains – the accurate, and the non-detrimentally mutated. This allows ‘monkey-type’ to probe local regions of space from the comfort of a working genome, NOT to generate a near-infinity of shit. NS removes the ‘relatively worse’, as well as the unequivocally bad. A previously successful sequence becomes ‘relatively worse’ in the presence of one that is ‘relatively better’.

    ‘Islands of function’ is also hogwash. What (apart from reading a bit too much kairosfocus) makes you think that function invariably exists on islands within the space of all possibilities? Where it does, evolution may not get there. That leaves all the places it can access.
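
    Allan’s copying-versus-typing point can be made concrete with Dawkins’s old “weasel” toy – not a model of real genetics, and it has an explicit target that evolution lacks, but it isolates the difference between de novo monkey-typing and copy-with-errors plus selection (all names and parameter values below are mine):

```python
import random

random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def copy_with_errors(parent, rate=0.04):
    """Copies come from a prior sequence; nothing is typed de novo."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def matches(s):
    return sum(a == b for a, b in zip(s, TARGET))

current = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while matches(current) < len(TARGET) and generations < 5000:
    offspring = [copy_with_errors(current) for _ in range(100)]
    current = max(offspring, key=matches)  # selection retains the best copy
    generations += 1

print(generations)  # typically a few hundred generations, vs ~27**28 blind draws
```

    The point is not the target but the mechanism: each generation probes only the local neighbourhood of a working sequence, and selection discards the relatively worse copies, so the search never has to wade through the near-infinity of garbage that pure monkey-typing would produce.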

  35. ‘Islands of function’ is also hogwash.

    I await your reference to the rebuttal paper.

    Fundamental misconception. There is no “target” in evolution.

    Fundamental misconception. I’m talking about “target” in the general, post hoc sense of any significant island of novel function.

  36. Allan: ‘Islands of function’ is also hogwash.

    WJM: I await your reference to the rebuttal paper.

    That was the rebuttal paper. Someone on the internet (kairosfocus) created a lengthy and non-peer-reviewed piece of guff, with no knowledge of genetics or function and no clear understanding of the evolutionary process.

    Someone else on the internet (me) just produced a non-peer-reviewed response, with fewer words but approximately the same content.

    But if you want a little more … KF’s post was based upon analogising a system that does involve ‘islands’ – the subset of the space of all possible alphabetic combinations that contains meaningful English – with a system that has absolutely bog all to do with that – the subset of viable organisms in the space of all possible combinations of DNA strings. The only relationship between these two systems is the fact that they can be represented as symbolic, digital strings. But DNA and protein are not symbolic (even though AUG results in methionine being attached to a protein, the physical base sequence AUG is not a symbol for methionine).

    I could easily devise a rule whereby there are no Islands of Function in the space of all alphabetic combinations. It is the ‘rule’ (“meaningful English”) that creates the granularity, or not, and not simply string length or the number of combinations. There is nothing in the rule “meaningful English” that obliges us to consider the rule “viable organism” to have the same properties.

    Indeed, we could combine English and genomes within the same space of all alphabetic characters, if we alphabetise the genetic makeup of all organisms – ACCATTACCATTTTACC etc. We would have two different rules that determine “well-formed strings” in this space: “meaningful English” and “viable organism”. Although shorter strings might be viable under both rules – CAT, TAG – at anything over a few bits there would be no instances where a well-formed string under one rule would be a well-formed string under the other. Given that there is no overlap, why on earth would the granularity observed under one of those rules have anything to do with that expected under the other? Hogwash, in short.

  37. William J. Murray: Theoretically, adding natural selection adds nothing to the capacity of chance mutation to achieve a target island of function. Simple logic. All natural selection does is weed out information. It doesn’t create anything new. Natural selection decreases the odds of chance mutation in acquiring significant complex functional islands because it limits pathways of aggregate variance to that which can pass the natural selection filter.

    If there was no natural selection – IOW, if nothing died or limited reproduction – all biological pathways would be explored much faster and much more thoroughly. So, in a simulation, if such targets cannot be expected to be acquired in our universe via pure, non-selection chance (infinite monkey theory), adding natural selection doesn’t help, because it only removes and trashes papers coming out of most of the monkeys’ typewriters.

    The issue is not whether all possible solutions can be explored, but whether those solutions that are explored result in an increase of fitness – that is, result in the distribution of genotypes moving farther out on the scale (in this case, the fitness scale). If that happens, there is an increase in Functional Information (in Hazen’s sense).

    The presence of natural selection removes some of the individuals, and these tend to be the less fit ones. That changes the frequencies of the more fit ones. And it is the frequencies that matter, not the numbers, because that distribution affects the distribution of genotypes at the start of the next generation.

    I can give a numerical example if you need. You will see that something happens that Dembski’s theorem declares to be impossible.

  38. Elizabeth — I have tried three times to reply to William J Murray’s response to my earlier comment. Why is the system now not taking these replies? If you see all three replies in moderation, please just allow one through; they are very similar.

  39. OK, maybe posting a comment rather than using the Quote in Reply will work. Let’s see:

    William J. Murray said:
    Theoretically, adding natural selection adds nothing to the capacity of chance mutation to achieve a target island of function. Simple logic. All natural selection does is weed out information. It doesn’t create anything new. Natural selection decreases the odds of chance mutation in acquiring significant complex functional islands because it limits pathways of aggregate variance to that which can pass the natural selection filter.

    If there was no natural selection – IOW, if nothing died or limited reproduction – all biological pathways would be explored much faster and much more thoroughly. So, in a simulation, if such targets cannot be expected to be acquired in our universe via pure, non-selection chance (infinite monkey theory), adding natural selection doesn’t help, because it only removes and trashes papers coming out of most of the monkeys’ typewriters.

    I take it that you are saying that natural selection, by removing a portion of the unfit genotypes, does not get you further out on the scale (in our case, the scale of fitness). It does change the distribution of fitnesses. And it is the frequencies of the genotypes, not the numbers, that affect the distribution of genotypes in the next generation. If necessary, I can provide a detailed example, one that falls within the cases treated by Dembski’s conservation law. And you will see that fitness does increase in that case – that there comes to be more Functional Information (or Specified Information) in the genome.

    (To myself: OK, it seems to have worked. Not sure why Quote in Reply does not work).
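
    The kind of numerical example Joe Felsenstein is offering is standard population genetics. A minimal haploid one-locus sketch (the fitness values and starting frequency are my own illustrative numbers, not his):

```python
# Two genotypes, A and a, with relative fitnesses 1.1 and 1.0.
# Only relative frequencies matter, not absolute numbers.
p = 0.01              # initial frequency of the fitter genotype A
w_A, w_a = 1.1, 1.0

for generation in range(200):
    mean_fitness = p * w_A + (1 - p) * w_a
    p = p * w_A / mean_fitness   # standard selection recursion

print(p)  # > 0.999: A has gone from rare to essentially fixed
```

    Each round, selection removes a disproportionate share of the less fit genotype, so the frequency distribution shifts and mean fitness rises generation after generation – the fitter sequence goes from vanishingly rare to ubiquitous without anything being “typed” anew.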

  40. Allan Miller,

    …. with apologies for the drift OT – that is, we aren’t discussing kairosfocus’s Islands of Function post on UD! It was that to which I referred in saying “Islands of Function = hogwash” – ie that the genetic space is so comprehensively subdivided into islands that evolution – through any portion of space that isn’t so constrained – is impossible.

    ‘Islands’, as the paper says, is a misleading visualisation – multidimensional spaces have far more opportunity for interconnectedness than the simple 2D map that ‘Islands’ conjures up. We also need to add in the fact that the landscape is continually shifting – it is not just a question of organisms moving about a static space of eternally distinct ‘well-formed’ and ‘ill-formed’ strings. AND – final point – to call anything in genetic space an ‘island’ demands a consideration of probabilities. It is an island to evolution because the genetic ‘gap’ is asserted to be too wide to leap across by ‘random’ means in a realistic number of ‘tries’, n. Yet – for example – once-in-a-million-year events will occur roughly once every million years. Too wide for n attempts does not mean too wide for xn attempts.
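
    The arithmetic behind that last point is just the complement rule: the chance of at least one occurrence of a per-year probability p over n years is 1 − (1 − p)^n. A quick illustration (the numbers are mine):

```python
# A 'once in a million years' event: probability of at least one
# occurrence of a per-year chance p over n years is 1 - (1 - p)**n.
p = 1e-6

for years in (1_000, 1_000_000, 5_000_000):
    at_least_once = 1 - (1 - p) ** years
    print(f"{years:>9} years: {at_least_once:.3f}")
# prints 0.001, 0.632 and 0.993 respectively
```

    Over evolutionary timescales n is enormous, so a per-trial probability that looks prohibitive in isolation can be close to certain in aggregate.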

  41. Joe Felsenstein:

    (To myself: OK, it seems to have worked. Not sure why Quote in Reply does not work).

    I think number of links triggers the spam filter. There are two, and I am having trouble tuning one of them. Will keep trying.

  42. Joe Felsenstein:
    I take it that you are saying that natural selection, by removing a portion of the unfit genotypes, does not get you further out on the scale (in our case, the scale of fitness). It does change the distribution of fitnesses. And it is the frequencies of the genotypes, not the numbers, that affect the distribution of genotypes in the next generation. If necessary I can provide a detailed example, one that falls within the cases treated by Dembski’s conservation law. And you will see that fitness does increase in that case, that there comes to be more Functional Information (or Specified Information) in the genome.

    Please do. Would you like to post an OP?

  43. I take it that you are saying that natural selection, by removing a portion of the unfit genotypes,

    Unfit for what? Competitive progeny? While they could be less fit in terms of competitive progeny, they could be more fit in terms of reaching any eventual island of functional high complexity, like a circulatory system or a nervous system. Natural selection isn’t removing that which is less likely to reach highly complex islands of functionality; it’s just removing whatever happens to not be more fecund right now, or whatever happens to be in the way of some non-survivable natural disaster.

    There’s no relationship between “what is more fecund right now” (what survives better in any given 5-6 generational span) and “reaching highly complex islands of interdependent, organized, systemic functionality”. Those are two entirely different things. Natural selection doesn’t know that a circulatory system (or something like it) will make future organisms more survivable, so it is not sorting variations to accomplish that.

    Therefore, all it can realistically do is decrease the chances of unfettered variation in reaching any such island.

    It does change the distribution of fitnesses.

    No, it eliminates the “fitness” filter altogether.

    And it is the frequencies of the genotypes, not the numbers, that affect the distribution of genotypes in the next generation.

    And such parameters on distributions of genotypes in a NS system can only reduce the information being explored by a purely random, full and unfettered exploration of sequences towards any and all islands of highly complex function.

    If necessary I can provide a detailed example, one that falls within the cases treated by Dembski’s conservation law. And you will see that fitness does increase in that case, that there comes to be more Functional Information (or Specified Information) in the genome.

    It’s logically impossible (improbable beyond reasonable expectation) that any “fitness filter” other than one that specifically targets (in some way) such an island can outperform a full and unfettered, purely random blind search, because all such a filter will do is immediately eliminate pathways that might otherwise lead to the island at some point in the future.

    Let’s look at the fecundity factor. The nature of proposed Darwinian evolution is that “fitness” = “fecundity over several generations”, more or less. Survivability of progeny and fecundity are the only real “algorithms” in play when it comes to computing the continuation of mutations that occur.

    Yet, over billions of years, what do we see arising – supposedly – from this evolutionary process? Less survivable and less fecund organisms than, as the evidence indicates, we basically started with. Bacteria are extremely survivable and extremely fecund. They might be the most hardy living organisms on earth; yet that is – presumably – the kind of organism we started with.

    What has evolved over hundreds of millions of years or maybe a billion years since such organisms were all that was on Earth are less fecund, less hardy, less “fit” organisms, by any non-tautological definition of the term. Organisms that are far more complex as organized systems, and as such can live in far fewer environments, and reproduce less and with much greater difficulty, and die much more quickly and easily due to environment. What we call “higher” life forms are, in the Darwinian sense, actually nothing more than what “fell through the cracks” of the NS and fitness function algorithms; higher life forms didn’t rise “because of” Darwinistic processes, but in spite of them.

    The claim that a “fecund progeny” algorithm can produce a highly complex, interdependent, organized system of function by happy coincidence is magical thinking, especially when one considers that in perhaps 4.5 billion years of evolution it hasn’t produced anything more fecund, or with greater “progeny survivability”, than existed very early on in that process, and all it has generated since are less fecund, less “progeny survivable” organisms – less fit by any non-tautological definition of the term.

  44. Joe G: Natural selection is merely a result – if you have differential reproduction due to heritable random variation, you have natural selection as the result. It doesn’t do anything, and in the end whatever is “good enough” is what survives.

    That said, if it is ever demonstrated that NS or some similar blind and undirected process could produce what IDists call CSI, then you would have refuted ID.

    I eagerly await your paper.

    http://ncse.com/rncse/27/3-4/has-natural-selection-been-refuted-arguments-william-dembski

    I’d forgotten just how good that paper is 🙂

  45. William, have you read Joe Felsenstein’s paper, linked in my response to Joe G above, and which he has referenced a couple of times in this thread? It is beautifully and clearly written, and I would be interested to read your rebuttal if you have one.

    Your concession would be an acceptable alternative 🙂
