61 thoughts on “A New View of Irreducible Complexity”

  1. Allan Miller:
    Joe Felsenstein,

    The apparent analogy with reciprocal recombination is interesting, though I may be mistaken in thinking it a valid one.

    The analogy is not quite right. For that, we would have to have a diploid life stage with corresponding subtrees swapped.

    This (Koza’s “genetic programming”) is more analogous to chromosome rearrangement, in a species where the phenotype is strongly dependent on gene order.

  2. Joe Felsenstein: This (Koza’s “genetic programming”) is more analogous to chromosome rearrangement, in a species where the phenotype is strongly dependent on gene order.

    Considering what Bartlett is trying to do in his talk, I have to question the value of the analogy. In a LISP expression, a right parenthesis may be separated from its matching left parenthesis by an arbitrary number of characters. A machine with a (theoretically) unbounded stack is required for translation of LISP expressions. Similarly abstracted, translation of coding regions of a chromosome to amino-acid sequences requires only a finite-state machine.

    Bartlett is making a big deal of the processing of codes. I am saying, from the perspective of theoretical computer science, that we are at the bottom of the Chomsky hierarchy of languages/machines — not at the top, where Bartlett wants to put us. It takes only the least powerful of machines to translate DNA into proteins. What happens after that is chemistry, not computation. Bartlett is blurring the distinction between functions calculated by machines and functions served by molecules. Those are very different senses of function.
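    The distinction can be sketched in a few lines (my own illustration, not from the thread; `balanced`, `translate`, and the tiny `TABLE` are hypothetical names):

    ```python
    # Matching parentheses needs unbounded memory: a depth counter
    # (equivalently, a stack) that can grow with the input.
    def balanced(s: str) -> bool:
        depth = 0
        for ch in s:
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth < 0:
                    return False
        return depth == 0

    # Codon translation is finite-state: read 3 bases, emit one
    # amino acid, forget everything.  No unbounded memory required.
    def translate(dna: str, table: dict) -> str:
        return "".join(table[dna[i:i+3]] for i in range(0, len(dna) - 2, 3))

    TABLE = {"ATG": "M", "AAA": "K", "TGA": "*"}  # tiny illustrative table
    print(balanced("(f (g x))"), translate("ATGAAATGA", TABLE))
    ```

    The parenthesis checker can be forced to remember arbitrarily much; the translator never remembers more than the current three bases. That is the gap between a pushdown automaton and a finite-state machine.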

  3. Tom English: Similarly abstracted, translation of coding regions of a chromosome to amino-acid sequences requires only a finite-state machine.

    Bartlett is making a big deal of the processing of codes. I am saying, from the perspective of theoretical computer science, that we are at the bottom of the Chomsky hierarchy of languages/machines — not at the top, where Bartlett wants to put us.

    In fact, not much more than a 64-state machine. But maybe even less than that.
    People have also inferred from the code table that originally the code was a 16-amino-acid code with 2-base codons. The third codon position is a later add-on that enabled 4 more amino acids.
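    This is easy to check against the modern standard code table. A sketch (my own, with NCBI translation table 1 hard-coded as a flat 64-character string):

    ```python
    from itertools import product

    # Standard genetic code, codons in T/C/A/G order, third base fastest.
    BASES = "TCAG"
    AA = ("FFLLSSSSYY**CC*W"   # TTT .. TGG
          "LLLLPPPPHHQQRRRR"   # CTT .. CGG
          "IIIMTTTTNNKKSSRR"   # ATT .. AGG
          "VVVVAAAADDEEGGGG")  # GTT .. GGG
    CODE = {"".join(c): AA[16*BASES.index(c[0]) + 4*BASES.index(c[1]) + BASES.index(c[2])]
            for c in product(BASES, repeat=3)}

    # How many of the 16 doublets already fix the amino acid,
    # whatever the third base is?
    fourfold = [a + b for a in BASES for b in BASES
                if len({CODE[a + b + b3] for b3 in BASES}) == 1]
    print(len(fourfold))
    ```

    Half of the doublets turn out to be fully third-position degenerate, which is the residue of the proposed 2-base code that people read off the table.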

  4. One problem with the code analogy is the lack of a code ladder. Any code system that purports to be analogous to biology must support equivalent sequences one step away from any functional sequence.

    Haven’t been to UD in quite a while, but they used to point out that a single character change is likely to crash a program. That makes computer code useless as a model or analogy for biology.
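    The contrast is easy to demonstrate. A minimal sketch (my own illustration, not from the thread):

    ```python
    # One character changed in program text vs. one base changed in DNA.
    src = "print(1 + 2)"
    mutant = src.replace("(", "[", 1)       # single-character "point mutation"
    try:
        compile(mutant, "<mutant>", "exec")
        survived = True
    except SyntaxError:
        survived = False                    # the program simply breaks

    # In the genetic code, a third-position change is often synonymous:
    LEU = {"CTT": "L", "CTC": "L", "CTA": "L", "CTG": "L"}  # leucine family
    silent = len(set(LEU.values())) == 1    # four sequences, one protein
    print(survived, silent)
    ```

    The mutated program doesn't even compile, while any of the four leucine codons is one neutral step from the others. That neighbourhood of equivalent sequences is the "code ladder" that ordinary program text lacks.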

  5. Joe Felsenstein,

    OK. Nonetheless, it’s interesting that rearranging/recombining sequences ‘tested’ in the population independently makes a great difference to the evolution available when compared to small-change mutational punts into novel sequence space. IDist intuitions seem restricted to point mutation (and only ladders of advantage within that subset of methods). Much, maybe most, evolution of ‘novelty’ is by rearrangement.

  6. Joe Felsenstein,

    People have also inferred from the code table that originally the code was a 16-amino-acid code with 2-base codons. The third codon position is a later add-on that enabled 4 more amino acids.

    I think the code was always triplet – there are energetic and stereochemical reasons why a triplet is favoured over a doublet or quadruplet for docking tRNA. But not all positions had to be identified down to the individual base (many still aren’t). What seems to have happened is that in some positions 4-fold degeneracy (any base) has become 2-fold (purine or pyrimidine), then 1-fold in some cases (exact base), subdividing the primitive code. 16 was an ‘invisible limit’ – the system had the capacity to get past it from the off.
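    The proposed subdivision can be tallied directly from the modern standard code table. A sketch (my own; it hard-codes NCBI translation table 1 and classifies each of the 16 doublets by what the third base must specify):

    ```python
    from collections import Counter
    from itertools import product

    BASES = "TCAG"
    AA = ("FFLLSSSSYY**CC*W"   # TTT .. TGG
          "LLLLPPPPHHQQRRRR"   # CTT .. CGG
          "IIIMTTTTNNKKSSRR"   # ATT .. AGG
          "VVVVAAAADDEEGGGG")  # GTT .. GGG
    CODE = {"".join(c): AA[16*BASES.index(c[0]) + 4*BASES.index(c[1]) + BASES.index(c[2])]
            for c in product(BASES, repeat=3)}

    def degeneracy(doublet):
        """Classify what the third base must specify for this doublet."""
        by_pyr = {CODE[doublet + b] for b in "TC"}   # pyrimidine third base
        by_pur = {CODE[doublet + b] for b in "AG"}   # purine third base
        if by_pyr == by_pur and len(by_pyr) == 1:
            return "fourfold"   # any third base works
        if len(by_pyr) == 1 and len(by_pur) == 1:
            return "twofold"    # purine vs pyrimidine suffices
        return "exact"          # the exact base matters

    tally = Counter(degeneracy(a + b) for a in BASES for b in BASES)
    print(dict(tally))
    ```

    The tally shows exactly the mixture described: most doublets remain 4-fold degenerate, a handful have been split at purine/pyrimidine resolution, and only a couple (the isoleucine/methionine and cysteine/tryptophan blocks) require the exact third base.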

  7. Allan Miller:
    Joe Felsenstein,
    OK. Nonetheless, it’s interesting that rearranging/recombining sequences ‘tested’ in the population independently makes a great difference to the evolution available when compared to small-change mutational punts into novel sequence space. IDist intuitions seem restricted to point mutation (and only ladders of advantage within that subset of methods). Much, maybe most, evolution of ‘novelty’ is by rearrangement.

    I am completely ignorant of these details, but my reading is that most vertebrate evolution proceeds by changes in regulation, and that would seem to be an ideal place for point mutations to have large effects.

  8. Bartlett has been commenting in Torley’s thread instead of his own. Here’s part of a comment I made in the other thread.
    _____________________________
    It was clear by the fifth sentence of the OP that Vincent Torley was misrepresenting Bartlett’s presentation:

    I would also like to commend JohnnyB on his mathematical rigor, which has helped illuminate the key issues.

    Jonathan Bartlett’s talk is mathematical-istic word salad, devoid of rigor. In all sincerity, I would judge Bartlett mentally ill, were he not a young-earth creationist. The locus of the madness is YEC culture, not the individual YEC.

    I know that most folks were not sure what to make of the last third of the presentation. I’m telling you that there is nothing to make of it (nor of Bartlett’s paper). I’ve engaged a lot of ID math. I’ve also taught the theory of computation several times at the graduate level. I’d gladly respond if there were anything to which I might respond.

  9. petrushka,

    I am completely ignorant of these details, but my reading is that most vertebrate evolution proceeds by changes in regulation, and that would seem to be an ideal place for point mutations to have large effects.

    They can have large effects for sure, because of the nature of the extensive cascade that sits below an early developmental change. Still, I don’t see retooling the vertebrate developmental pathways as being all that ‘novel’ (a semantic convenience that keeps my statement intact!).

    I was thinking more of protein evolution, which proceeds as much by reshuffling modules as by point-mutational ladders from A to B. Point mutation tends to be an optimisation process rather than a generator of ‘novelty’ at that level. And even in developmental regulation, since we are often talking about binding, shuffling entire motifs can transfer a particular binding capability from one place to another.

  10. My point would be that despite the apparently quite remarkable evolution of body plans since the Cambrian – the poster child for ID – the rate of molecular evolution hasn’t changed. But regulation amplifies visible changes to phenotype.

    Regardless of mutation type.

    JohnnyB’s argument contains many hidden assumptions, but they all boil down to: “You can’t get there from here by small incremental changes.”

    The analogy to computer code is useless unless you have a computing environment that allows code ladders, one in which copy errors don’t crash the system.

    At UD it was common to argue that the inflexibility of code was proof that evolution can’t proceed by accumulation of copy errors.

    But that is actually proof that the analogy, the model, is flawed. If your model doesn’t model chemistry, it can’t be used to deduce limitation of chemistry.

  11. I might add that the failure to model chemistry is also relevant to arguments about the origin of life. Abstractions do not impose their limitations on the process being abstracted.

    Maps do not impose their errors and loss of detail on the territory.
