My thoughts on JohnnyB’s new view of Irreducible Complexity

Jonathan Bartlett, known here as JohnnyB, has written a very thought-provoking post titled, A New View of Irreducible Complexity. I was going to respond in a comment on his post, but I soon realized that I would be able to express my thoughts much more clearly if I composed a post of my own, discussing the points which he raises.

Before I continue, I would like to say that while I find JohnnyB’s argument problematic on several counts, I greatly appreciate the intellectual effort that went into the making of his slide presentation. I would also like to commend JohnnyB on his mathematical rigor, which has helped illuminate the key issues.

Without further ado, I’d like to focus on some of the key slides in JohnnyB’s talk (shown in blue), and offer my comments on each of them. By the time I’m done, readers will be able to form their own assessment of the merits of JohnnyB’s argument.

Part One: What is JohnnyB trying to show?


Goals

Create a definition of Irreducible Complexity which shows that Darwinism is logically impossible over a wide range of assumptions.

Show the conditions for which evolution may or may not be possible…
[snip]

My Comments:

1. JohnnyB has set the bar very high here. He aims to show that a Darwinistic explanation of Irreducible Complexity is not merely vanishingly improbable, but logically impossible, like the term “married bachelor.” If he can do that, I’ll be very impressed. Not even Michael Behe claimed to be able to demonstrate this.

2. Right at the outset, JohnnyB assumes that the only good naturalistic explanation of Irreducible Complexity is a Darwinian one. That is Professor Richard Dawkins’ view, as JohnnyB points out later on in his talk. However, not all biologists agree with Dawkins.

Professor Larry Moran is a biochemist and a long-standing advocate of random genetic drift as the dominant mechanism of evolution. In a post titled, Constructive Neutral Evolution (CNE) (September 6, 2015), Moran goes further. Drawing on the work of Arlin Stoltzfus, Michael Gray, Ford Doolittle, Michael Lynch, and Julius Lukes et al., Moran argues that non-adaptive mechanisms can account for the evolution of irreducibly complex systems. He illustrates his point with a simple hypothetical scenario (see here for a diagram):

Imagine an enzyme “A” that catalyzes a biochemical reaction as a single polypeptide chain. This enzyme binds protein “B” by accident in one particular species. That is, there is an interaction between A and B through fortuitous mutations on the surface of the two proteins. (Such interactions are common as confirmed by protein interaction databases.) The new heterodimer (two different subunits) doesn’t affect the activity of enzyme A. Since this interaction is neutral with respect to survival and reproduction, it could spread through the population by chance.

Over time, enzyme A might acquire additional mutations such that if the subunits were now separated the enzyme would no longer function (red dots). These mutations would be deleterious if there was no A + B complex but in the presence of such a complex the mutations are neutral and they could spread in the population by random genetic drift. Now protein B is necessary to suppress these new mutations making the heterodimer (A + B) irreducibly complex. Note that there was no selection for complexity — it happened by chance.

Further mutations might make the interaction more essential and make the two subunits more dependent on one another. This is a perfectly reasonable scenario for the evolution of irreducible complexity. Anyone who claims that the very existence of irreducible complexity means that a structure could not have evolved is wrong. (Emphases mine – VJT.)

Throughout his talk, JohnnyB assumes that the evolution of irreducibly complex systems by chance processes is fantastically improbable. Perhaps this assumption is false. If it is, then his proof of the impossibility of irreducibly complex systems arising via unguided processes fails.
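Moran's scenario leans on a standard population-genetics result: a neutral mutation fixes with probability equal to its initial frequency. A minimal Wright-Fisher simulation (my own illustration, not part of Moran's post; the function name and parameters are hypothetical) bears this out:

```python
import random

def wright_fisher_fixation(pop_size, trials, seed=1):
    """Simulate neutral drift: a single new allele (like Moran's fortuitous
    A-B binding mutation) appears in a haploid population of `pop_size`.
    Returns the fraction of trials in which the allele drifts to fixation."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        count = 1  # one mutant copy to start with
        while 0 < count < pop_size:
            p = count / pop_size
            # Binomial resampling: each offspring picks a parent at random,
            # so the allele's frequency performs an unbiased random walk
            count = sum(1 for _ in range(pop_size) if rng.random() < p)
        if count == pop_size:
            fixed += 1
    return fixed / trials

# Theory says a neutral allele fixes with probability equal to its
# initial frequency, here 1/50 = 0.02.
print(wright_fisher_fixation(pop_size=50, trials=2000))
```

With no selection at all, roughly two percent of such mutations reach fixation purely by drift in a population of fifty, so the neutral spread of the A + B interaction that Moran describes is not at all far-fetched in small populations.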

************************************************************************************************

Part Two: Evolution and computation


A Universal Turing machine U. U consists of a set of instructions in the table that can “execute” the correctly-formulated “code number” of any arbitrary Turing machine M on its tape. In some models, the head shuttles back and forth between various regions on the tape. In other models the head shuttles the tape back and forth.

The following three slides from JohnnyB’s talk explain how he links Darwinian evolution to computation theory.


Why Computability Theory?

Evolution is, at its core, a statement about mapping changes in genotypes to changes in phenotypes.

In other words, there is a code which performs a function, and the change in code produces a change in function.

The mathematics developed to understand the relationship between codes and functions at a fundamental level is computability theory.

—————————————————————————————–

Turing’s Theory of Computation

All known paradigms of computation are reducible to Turing machines.

[snip]

—————————————————————————————–

Universal vs. Special Machines

A Turing machine is said to be a Universal machine if it can compute any computable function just by changing its tape.

Every Universal machine is equally powerful.

A non-universal Machine will only be able to implement a subset of computable functions.

If the set of needed functions is not known ahead of time, one must use a Universal machine.

Therefore, if biology is to evolve to environments it isn’t aware of ahead-of-time, then the proper mathematical model is the Universal machine.

My Comments:

1. It is very important to understand what the foregoing argument shows. It doesn’t show that evolution itself is a kind of computation. Nor does it imply that the biosphere is some sort of Universal Turing machine, which generated the dazzling variety of life-forms existing on Earth today.

Rather, what the argument purports to show is that if scientists want to model how evolution works – and in particular, how it can generate new functions without knowing in advance which ones it might be called upon to produce – then they will have to construct a Universal Turing machine, in order to do the job.

2. Does the argument even prove this much? I think not. All it shows is that if you want to explain how natural selection can generate any function, without knowing in advance which one it will be required to create in a given environment, then you will need a Universal Turing machine. But biologists don’t believe that natural selection can generate any function. What they believe is that it can generate some functions, where “some” might well mean: a vanishingly small fraction of the range of all possible functions that could enhance an organism’s fitness in some situations. I made the same point in my review of Dr. Douglas Axe’s book, Undeniable, where I wrote:

Finally, even if Axe’s argument purporting to show that accidental inventions are fantastically improbable were valid, it would still only apply to accidental inventions in general. A much stronger argument is needed to show that each and every accidental invention is fantastically improbable. By definition, the inventions generated by a blind evolutionary process will tend to be the ones whose emergence is most likely: the creme de la creme, which make up only a tiny proportion of all evolutionary targets. For these targets, the likelihood of success may be very low, but not fantastically improbable.

Next, JohnnyB, drawing upon the work of mathematician Stephen Wolfram, introduces a few useful definitions, which distinguish between four different classes of Universal Turing machines:


Stephen Wolfram’s Complexity Classes

Stephen Wolfram classified Turing machines into the following four classes:

Class 1 [Turing] machines [are machines that] tended to converge on a single result, no matter what the initial conditions.

Class 2 [Turing] machines [are machines that] give relatively simple and predictable results.

Class 3 [Turing] machines [are machines that give] results that are individually unpredictable, but statistically predictable.

Class 4 [Turing machines are machines that give] results that are not predictable either individually or statistically….

Class 4 systems are the only systems in which Universal computation can occur.

(N.B. Words in brackets were added by me, as a paraphrase of what JohnnyB was saying. Words in blue appear on JohnnyB’s slide, at 15:36 – VJT.)

My Comments:


UPDATE: In a comment below, Tom English has pointed out a serious mistake in the slide above. Readers will note that JohnnyB states that Wolfram classified Turing machines into four classes. This is factually incorrect. Wolfram’s classification is of cellular automata, not Turing machines. Readers can confirm this by consulting this article on cellular automata in the Stanford Encyclopedia of Philosophy – something I should have done myself. I would like to thank Tom English for his correction.
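For readers unfamiliar with the systems Wolfram actually classified, here is a minimal elementary-cellular-automaton sketch (my own illustration; it assumes the standard Wolfram rule-number encoding). Rule 30 is Wolfram's stock example of statistically random Class 3 behavior, and Rule 110 is the rule that Matthew Cook proved capable of Universal computation:

```python
def step(cells, rule):
    """One update of an elementary cellular automaton. `rule` is the
    Wolfram rule number (0-255); each cell's next state is the bit of
    `rule` indexed by its (left, self, right) neighbourhood."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=31, steps=15):
    """Evolve from a single live cell; return all rows of the history."""
    cells = [0] * width
    cells[width // 2] = 1  # single live cell in the middle
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

# Rule 250 settles almost immediately into a simple repetitive pattern
# (Class 1/2 behaviour); Rule 30 (Class 3) looks statistically random;
# Rule 110 (Class 4) produces the localized interacting structures that
# Cook used to prove it computationally universal.
for rule in (250, 30, 110):
    print(f"Rule {rule}:")
    for row in run(rule, steps=8):
        print("".join(".#"[c] for c in row))
```

Printing the three histories side by side makes the qualitative differences between the classes visible at a glance, which is all the classification itself amounts to.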

I think the perceptive reader will be able to see where JohnnyB is going here. He’s going to argue that if scientists want to model evolution by natural selection, they’ll have to rely on the most chaotic kind of Universal Turing machines: Class 4 machines, whose results are radically unpredictable.

And now, at last, we come to the nub of JohnnyB’s argument. The numbering below is mine, not JohnnyB’s.

************************************************************************************************

Part Three: JohnnyB’s proof that natural selection is incapable of accounting for Irreducible Complexity


Visualization of a population evolving in a static fitness landscape. Image courtesy of Randy Olson and Bjørn Østman, and Wikipedia.


Universality and Natural Selection

1. Increasing the class [of a Turing machine complexity system – VJT] yields more degrees of freedom, but also makes the relationship more chaotic between changes in input and the resulting output.

2. Class 4 systems are the only systems in which Universal computation can occur.

3. Hidden premise identified by VJT: evolution requires Universal computation.
Proof: see the above slide on Universal vs. Special machines.


4. Therefore, if evolution were to occur, it would need a Class 4 complexity system.

5. For natural selection to operate, there has to be a smooth pathway of increasing function.

[N.B. “Smooth” is defined by JohnnyB as: moving in one direction, without any giant chasms – VJT.]


6. Class 4 systems, since they are chaotic (mappings of input changes to behavior changes are chaotic), cannot in principle supply such a smooth pathway.

7. Thus, the two requirements for evolution – evolution across Universal computation and a selectable pathway – are mutually incompatible.

My Comments:

1. When I looked at this slide, I realized that there was an unstated premise, which I inserted (premise 3). The wording is very important here: in premise 4, JohnnyB states that if evolution were to occur, it would need a Class 4 complexity system. This statement only makes sense if evolution itself is viewed as a computation, and a universal one at that. But as I argued above in Part Two (comment 1), JohnnyB hasn’t shown that. All he’s shown is that if scientists want to model how evolution could give rise to any kind of function, they’ll need a Class 4 Universal Turing machine for the job. That’s what premise 4 should say.

2. Premise 5 simply restates a point commonly made by Intelligent Design advocates: that evolution by natural selection won’t work unless we have a smooth fitness landscape. This requirement sounds very ad hoc, given that we can readily conceive of countless ways in which a fitness landscape might be so rugged as to render evolution by natural selection impossible. So are evolutionists begging the question by assuming that fitness landscapes are smooth, in the real world? Not at all. Professor Joe Felsenstein explains why in a widely quoted post critiquing a talk given by Dr. William Dembski on August 14, 2014, at the Computations in Science Seminar at the University of Chicago, titled, “Conservation of Information in Evolutionary Search.” Felsenstein writes:

Given that there is a random association of genotypes and fitnesses, Dembski is right to assert that it is very hard to make much progress in evolution. The fitness surface is a “white noise” surface that has a vast number of very sharp peaks. Evolution will make progress only until it climbs the nearest peak, and then it will stall. But…
That is a very bad model for real biology, because in that case one mutation is as bad for you as changing all sites in your genome at the same time!
Also, in such a model all parts of the genome interact extremely strongly, much more than they do in real organisms…
…I argue that the ordinary laws of physics actually imply a surface a lot smoother than a random map of sequences to fitnesses. In particular if gene expression is separated in time and space, the genes are much less likely to interact strongly, and the fitness surface will be much smoother than the “white noise” surface.

3. Another point I’d like to make is that evolution doesn’t occur in just two dimensions, but along hundreds of different dimensions at once. For this reason, the likelihood of evolution “hitting a wall” beyond which no further improvements can be made is greatly reduced, as computer scientist Mark Chu-Carroll noted in a book review published several years ago:

A fitness landscape with two variables forms a three dimensional graph – and in three dimensions, we do frequently see things like hills and valleys. But that’s because a local minimum is the result of an interaction between *only two* variables. In a landscape with 100 dimensions, you *don’t* expect to see such uniformity. You may reach a local maximum in one dimension – but by switching direction, you can find another uphill slope to climb; and when that reaches a maximum, you can find an uphill slope in some *other* direction. High dimensionality means that there are *numerous* directions that you can move within the landscape; and a maximum means that there’s no level or uphill slope in *any* direction.
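Chu-Carroll's point can be checked numerically. In the sketch below (my own illustration; the function names are hypothetical), each genotype on an n-dimensional hypercube gets an independent random fitness, the maximally rugged "white noise" case, and we estimate how often a random genotype is a local maximum, i.e. how often no single mutation leads uphill. The theoretical answer is 1/(n+1), so trapping becomes rarer as dimensionality grows:

```python
import random

def is_local_max(point, fitness):
    """True if no single-bit flip of `point` improves fitness."""
    f0 = fitness(point)
    for i in range(len(point)):
        neighbour = point[:i] + (1 - point[i],) + point[i + 1:]
        if fitness(neighbour) > f0:
            return False
    return True

def fraction_local_maxima(n_dims, samples, seed=0):
    """Estimate the chance that a random genotype on a fresh random
    ('white noise') landscape is a local maximum."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        cache = {}  # a fresh random landscape for each trial

        def fitness(p):
            if p not in cache:
                cache[p] = rng.random()
            return cache[p]

        point = tuple(rng.randrange(2) for _ in range(n_dims))
        hits += is_local_max(point, fitness)
    return hits / samples

# Even on a maximally rugged landscape, the chance of being trapped at
# a local maximum falls as 1/(n+1): more dimensions, more ways uphill.
for n in (2, 10, 50):
    print(n, fraction_local_maxima(n, samples=2000))
```

In two dimensions about a third of genotypes are trapped; in fifty dimensions, only about two percent are, and real fitness landscapes are smoother than this worst case, as Felsenstein argues above.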

4. I would also object to Premise 6 in JohnnyB’s argument above. He writes: “Class 4 systems, since they are chaotic (mappings of input changes to behavior changes are chaotic), cannot in principle supply such a smooth pathway.” What he should have said is that Class 4 systems cannot guarantee the existence of a smooth pathway for the evolution of any given function. But all that shows is that some functions (probably the vast majority) will be incapable of evolving. The argument doesn’t show that no smooth pathway exists for any function. I made a similar point in my review of Dr. Douglas Axe’s book, Undeniable:

Axe is perfectly correct in saying that for any given functional hierarchy that we can imagine, most of its components would have been of no benefit earlier on, before the hierarchy had been put together in its present form. But all that proves is that the vast majority of the fantastically large set of possible functional hierarchies never get built in the first place: they are beyond the reach of evolution. If a functional hierarchy was built by evolution, in a series of steps, then by definition, its components must have performed some biologically useful function when the hierarchy had fewer levels than it does now. The functional hierarchies built by evolution are atypical. But that doesn’t make them impossible.

Hence JohnnyB is incorrect when he infers that “the two requirements for evolution – evolution across Universal computation and a selectable pathway – are mutually incompatible.” All his argument shows is that for a large number of possible functions, these two requirements will be incompatible – which means that these functions will never evolve in the first place. But what about the rest?

However, JohnnyB has another ace up his sleeve. As we’ll see, he argues that whenever there is a smooth, non-chaotic pathway which allows a function to evolve, that function can’t be called irreducibly complex, anyway. So it’s still true that irreducibly complex functions could only evolve via a highly unpredictable, chaotic process, making their emergence a practical impossibility.

************************************************************************************************

Part Four: JohnnyB’s Redefinition of Irreducible Complexity, and his Argument for Design


Orson Welles performs a card trick for Carl Sandburg (August 1942). Image courtesy of Wikipedia.

In the argument below, JohnnyB endeavors to show that irreducibly complex systems, properly defined, require an intelligent designer. The numbering below is mine, not JohnnyB’s.


Redefinition of Irreducible Complexity

1. To implement the arbitrary complexity within biology, biological systems must be Class 4 systems.

2. A “hard” problem is a problem for which a solution only exists utilizing the chaotic space of a Class 4 system.

3. If a Class 4 system needs to solve a “hard” problem, it cannot do so by a process of selection, because the chaotic nature of the system will prevent selection from pointing in any specific direction.

4. Therefore, the chance of hitting a correct solution to a “hard” problem is equivalent to that of chance, since selection cannot canalize the results.

[Here’s a short explanation of “canalize,” from the slide titled, “Multilevel Complexity Classes”:

(i) A Class 4 system can be used to create a non-chaotic Class 1 or Class 2 system, for which changes can lead to smoother searches.

(ii) However, this limits the scope of selectable parameters to those which the implemented Class 1 and Class 2 systems operate.

(iii) Thus, we can say that to the extent that evolution occurs, it is parametrized – or “canalized” – VJT.]


5. Information Theory tells us that this will have a difficulty that increases exponentially with the size of the shortest solution.

6. Solutions to “hard” problems can be achieved only if the solution has prior programming which guides either the mutation or the selection through.

7. An Irreducibly Complex system is a system which utilizes (and utilizes necessarily) the chaotic space of a Class 4 system to implement a function.

8. The existence of an Irreducibly Complex system is evidence of design, because design is the only known cause which can navigate the complexity of a Class 4 system to implement functionality.

My Comments:

1. I would criticize the wording of premise 1: “To implement the arbitrary complexity within biology, biological systems must be Class 4 systems.” It should read as follows: “To model the evolution of any kind of function, of any arbitrary level of complexity, scientists must use Class 4 systems.” Once again, JohnnyB is implicitly assuming that the biosphere is a gigantic natural computer, and that evolutionary changes are computations. This only makes sense on a hyper-computationalist view of the world, satirized by the philosopher John Searle in a memorable essay titled, Is the Brain a Digital Computer?: “Thus for example the wall behind my back is right now implementing the Wordstar program, because there is some pattern of molecule movements which is isomorphic with the formal structure of Wordstar.” Against this view, Searle argues that computation is something which is inherently mind-relative:

There is no way you could discover that something is intrinsically a digital computer because the characterization of it as a digital computer is always relative to an observer who assigns a syntactical interpretation to the purely physical features of the system… [T]o say that something is functioning as a computational process is to say something more than that a pattern of physical events is occurring. It requires the assignment of a computational interpretation by some agent.

2. Premise 2 is the critical one in JohnnyB’s argument. He defines a “hard” problem as one that can only be solved within the chaotic space of a Class 4 system, and he goes on to argue in premise 7 that an irreducibly complex system is one whose generation requires the solution of a “hard” problem: “An Irreducibly Complex system is a system which utilizes (and utilizes necessarily) the chaotic space of a Class 4 system to implement a function.” As we’ll see, JohnnyB thinks that only a designer is capable of finding such a solution.

The problem with this approach is that while it may (if the reasoning proves to be correct) establish that intelligent design is required in order to generate irreducibly complex systems, what it does not show is that any such systems actually exist in Nature. Perhaps the problem of generating the bacterial flagellum, for instance, can be solved within a more restricted, non-chaotic space. JohnnyB cannot rule out such a possibility, for he writes that “to the extent that evolution occurs, it is parametrized” – i.e. “canalized.” At least some evolution occurs in Nature. Who is to say, then, that “canalized” evolution could not possibly give rise to a bacterial flagellum, over a period of aeons?

3. Premises 4 and 5 completely undermine the goal of JohnnyB’s presentation. Premise 4 states that “the chance of hitting a correct solution to a ‘hard’ problem is equivalent to that of chance,” and premise 5 adds that the degree of difficulty for a “hard” problem “increases exponentially with the size of the shortest solution.” But in the Goals section at the beginning of his talk, JohnnyB declared that he was trying to “create a definition of Irreducible Complexity which shows that Darwinism is logically impossible over a wide range of assumptions.” There’s a vast philosophical difference between logically impossible and exponentially improbable.

4. I might also add that “chance” and “chaos” are two entirely different concepts. A chaotic system is radically unpredictable; a system whose processes are governed by chance may still be statistically predictable. The reason why I mention this here is that I argued above that random genetic drift might be able to account for the evolution of some irreducibly complex systems. But this chance process, if it took place, would not have been a totally chaotic one. If it had been, then the systems would almost certainly never have evolved in the first place.
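The distinction drawn here can be made concrete (my own illustration, not part of JohnnyB's talk): a chaotic deterministic system, such as the logistic map at r = 4, is individually unpredictable under tiny perturbations, while a chance process, such as a sequence of coin flips, is individually unpredictable but statistically predictable. A minimal sketch:

```python
import random

# Chaos: the logistic map at r=4 is fully deterministic, yet two
# trajectories starting a hair apart diverge (sensitive dependence).
def logistic(x, steps, r=4.0):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic(0.3, 50)
b = logistic(0.3 + 1e-10, 50)
print(abs(a - b))  # typically order-1 divergence despite a 1e-10 perturbation

# Chance: individual coin flips are unpredictable, but the mean of many
# flips is statistically predictable (law of large numbers).
rng = random.Random(0)
mean = sum(rng.random() < 0.5 for _ in range(100_000)) / 100_000
print(mean)  # close to 0.5
```

Drift is a process of the second kind: each individual outcome is a matter of chance, but fixation probabilities and timescales are statistically well-behaved, which is precisely why a drift-based route to irreducible complexity is not a chaotic one.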

5. Premise 6, which says that “hard” problems can only be solved by “prior programming,” does not warrant the conclusion that an Irreducibly Complex system is evidence of design. The fact that design is the only known cause of some irreducibly complex systems (with which we are familiar) does not imply that design is able to create any irreducibly complex system, of an arbitrarily high degree of complexity. For all we know, there might be systems which are beyond the reach of any designer, because they’re too complex for anyone to model. This is important, for as the eminent chemist Professor James Tour points out, in his 2016 talk, The Origin of Life: An Inside Story, life itself is fiendishly complex. And what makes the puzzle of life’s origin all the more baffling is that even if you had a “Dream Team” of brilliant chemists and gave them all the ingredients they wanted, they would still have no idea how to assemble a simple cell. In Tour’s words:

All right, now let’s assemble the Dream Team. We’ve got good professors here, so let’s assemble the Dream Team. Let’s further assume that the world’s top 100 synthetic chemists, top 100 biochemists and top 100 evolutionary biologists combined forces into a limitlessly funded Dream Team. The Dream Team has all the carbohydrates, lipids, amino acids and nucleic acids stored in freezers in their laboratories… All of them are in 100% enantiomer purity. [Let’s] even give the team all the reagents they wish, the most advanced laboratories, and the analytical facilities, and complete scientific literature, and synthetic and natural non-living coupling agents. Mobilize the Dream Team to assemble the building blocks into a living system – nothing complex, just a single cell. The members scratch their heads and walk away, frustrated…

So let’s help the Dream Team out by providing the polymerized forms: polypeptides, all the enzymes they desire, the polysaccharides, DNA and RNA in any sequence they desire, cleanly assembled. The level of sophistication in even the simplest of possible living cells is so chemically complex that we are even more clueless now than with anything discussed regarding prebiotic chemistry or macroevolution. The Dream Team will not know where to start. Moving all this off Earth does not solve the problem, because our physical laws are universal.

You see the problem for the chemists? Welcome to my world. This is what I’m confronted with, every day.

So it seems that we have a Mexican standoff. It seems that JohnnyB would have to agree that the first living cell must have contained one or more irreducibly complex systems. That being the case, its evolution via unguided processes would have been extraordinarily unlikely, if JohnnyB’s argument is successful. But as Professor James Tour points out, our top designers are incapable of creating such a cell, either. So what produced it? We are left without an answer.

************************************************************************************************

Part Five: JohnnyB’s criticisms of Avida


Avida Checklist

Is the programming language of Avida a Class 4 system? Yes.

Do any of the evolved Avida programs require the use of chaotic spaces in the Class 4 system? No.

Therefore, the evolved Avida programs are not Irreducibly complex.

My Comments:

I know very little about Avida, so I’ll keep my comments brief. What I will say is that JohnnyB’s reasoning above sounds quite similar in thought and tone to a piece that Winston Ewert wrote on the subject of Avida, back in 2014:

What Avida demonstrates is that given a gradual slope, Darwinian processes are capable of climbing it. What irreducible complexity claims is that irreducible complex systems are surrounded on all sides by cliffs. Notice the distinction. Avida says that evolution can climb gradual slopes. Irreducible complexity claims that there are no gradual slopes. Avida is about what we can do with the gradual slopes, and irreducible complexity is about whether or not the slopes exist. Avida provides no evidence that gradual slopes exist, it just assumes that they do. What Avida demonstrates is simply beside the point of the claim of irreducible complexity. (Emphasis mine – VJT.)

Winston Ewert’s statement that “Irreducible complexity claims that there are no gradual slopes” is basically equivalent to JohnnyB’s claim that irreducibly complex systems require the use of chaotic spaces in a Class 4 system. In Part Three (comment 2) above, I critiqued the use of this claim (that evolution requires a smooth fitness landscape) as an argument for design.

Conclusion

As I see it, JohnnyB’s presentation on irreducible complexity, while far more mathematically rigorous than anything I have seen previously in the Intelligent Design literature, suffers from several major problems:

(i) it completely overlooks non-Darwinian naturalistic mechanisms for the evolution of irreducibly complex systems;

(ii) it commits the “pan-computationalist” fallacy, by treating the biosphere itself as if it were one vast computational system, and equates this system with a Class 4 Universal Turing machine, whose outputs (or results) are radically unpredictable (i.e. chaotic);

(iii) all it shows is that scientists would require such a machine, if they wanted to demonstrate that evolution was capable of generating any kind of function, no matter how complex;

(iv) smooth fitness landscapes are a fairly straightforward consequence of physics in our universe, at any rate – and what’s more, since evolution proceeds in not two but hundreds of different directions at once, the likelihood of evolution getting stuck at some local maximum is very low;

(v) in any case, Class 4 systems are not always chaotic in their behavior – which means that there may well be some complex structures in living things that could have evolved as a result of processes occurring outside the “chaotic space” of radical unpredictability;

(vi) arbitrarily redefining “irreducibly complex systems” as those systems whose evolution would have had to take place within the “chaotic space” of a Class 4 system is no way to safeguard the Argument from Design, because one still needs to show that there are any such systems in Nature, and that the systems which ID advocates consider to be designed could only have evolved in a chaotic fashion, if they evolved at all;

(vii) finally, it has not been shown that there exists an Intelligent Designer Who is capable of creating irreducibly complex systems of an arbitrarily high level of complexity – such as we find in even the simplest living cell. As we’ve seen, not even all the world’s scientists working together would have any idea how to create such a cell. As far as we know, then, intelligence is inadequate for the task of creating life. There may, of course, be some Super-Agent that was capable of creating the first life. But the argument from irreducible complexity, taken by itself, gives us no reason to think that.

I would like to conclude by saying that on an intuitive level, I feel the force of the design argument as much as anyone, and I am quite sure that the first cell was in fact designed – as well as many other complex structures we find in Nature. (How they were designed is another question entirely, on which I try to keep an open mind.) But if I am asked whether it has been rigorously demonstrated that the molecular machines we find within the cell were intelligently designed, I would have to answer in the negative. At the present time, I am inclined to think that the best argument for design is a multi-pronged one, which makes use of several converging lines of evidence.

What do readers think of JohnnyB’s argument? Over to you – and JohnnyB!

118 thoughts on “My thoughts on JohnnyB’s new view of Irreducible Complexity”

  1. VJT –

    Thank you for the in-depth commentary! I hope to provide a detailed response later this weekend, but let me leave you for the moment with a few items that I think are important for the discussion and I can write about quickly:

    (1) I agree that perhaps the words “logical impossibility” were too strong, but I don’t know of another term for the situation I am trying to describe. If the existence of X makes the production of Y turn into an exponentiating probability, I don’t know how much further you can get. It does, in fact, make it a logical impossibility as a dependable mechanism, which it would have to be for it to work as a “mechanism”. So, you’ll have to forgive my terminology – to show that the nature of the problem implied that the probabilities decrease exponentially is certainly hitting the target in my book.

    (2) I think you have misunderstood some of what I was saying, because you say I don’t “account for” non-Darwinian evolution. I PROPOSED NON-DARWINIAN EVOLUTION!!!! That is precisely the proposition of the “prior programming” model. If you have prior programming that allows you to get around the issue, it is possible. That is a model of non-Darwinian evolution! You also might be interested in my video on Evolutionary Teleonomy that I did for the AM-Nat Biology conference. I proposed that part of this can be reformatting the state space to be class 1 or class 2, which is a way of providing a space for non-Darwinian evolution.

    In fact, in the “Using IC in the Lab” I show how this view of IC can be beneficial for investigations of evolutionary biology where evolution is proceeding upon non-Darwinian lines.

    (3) With organisms, you seem to think that it is possible for organisms to stay away from Class 4 functionality, but part of my presentation shows that organisms (utilizing negative feedback loops) do in fact step right in the middle of irreducible complexity. In fact, it is not hard to see how organisms require Class 4 functionality. Dynamic recursion is the hallmark of Class 4 functionality, which is precisely what organisms do!

    (4) With Avida, you misunderstood the argument. I don’t disagree with Ewert, but my argument has ZERO to do with Winston’s argument. Winston’s is about selective steps and whether or not there is design in the rewarding system. I don’t disagree with Winston, but it has no impact on my analysis of Avida. My analysis is based on the *instructions* that are used. Avida did not and to my knowledge has never evolved an open-ended loop that contributed to function. Ewert’s analysis has nothing to do with open-ended loops.

    (5) You missed the big part of the Avida discussion – that there is SPECIFIC PROGRAMMING in Avida which both *IS* designed specifically and which design *IS* detectable by my methodology. I find it odd that you didn’t even find that worthy of comment. There is a specific set of instructions within every Avida organism that bear the mark of their designer, and this marking is detectable through this methodology.

  2. The basic methodology of ID hasn’t changed since Paley:

    Make a model of evolution, mathematical or otherwise.
    Demonstrate that evolution is impossible in the model.
    Assert that the failure of the model must reflect the thing being modelled.
    Ignore the fact that reality isn’t modelled correctly.

    Bumblebeeshit.

  3. On Mr Moran’s point (and so VJT’s) about how chance mutations can bypass the seemingly impossible organization of complexity, where mutations need each other to make complex stuff:
    To oppose JohnnyB, it is being said that a mutation can come into being in a population, linger and spread without usefulness or selection going on, until it bumps into another mutation, also lingering, and BOOM, the combination has created complexity, which can be a positively selected attribute in a new population.
    This agrees with ID thinkers that complexity cannot come from separate elements that are not working together toward a new organization. They are not working for a union of cause.
    However, they conceive of mutations just lingering around with no agenda.
    Hmm.
    What evolutionists ask of chance in creating complexity seems impossible. Yes, it is impossible.
    Yet is there a logic in this unlikely scenario?
    It requires mutations to be created and stored for later use.
    Well, one could say there are millions of mutations lingering around waiting to be useful. Very unlikely.
    The whole point of mutations is that, without use, they are not preserved in reproduction.
    IC still works, I think, in saying random mutations would not be lingering around.
    They can’t. They can’t diffuse through a population and linger.

  4. In the long-term evolution experiment with E. coli, an irreducibly complex function evolved without the involvement of natural selection. Only when the function had emerged did natural selection set in to preserve and enhance it.

    I’m speaking about the function of aerobic citrate transport. This function requires three criteria to be met, and if any one of them is missing, the function fails completely. As such, it is by definition irreducibly complex, as each component on its own is nonfunctional, and only when all three are combined in the right place does function proceed.

    The three criteria which must be met, are:
    1. A gene coding for a citrate transporter protein.
    2. A promoter controlling the gene.
    3. The promoter must be active when oxygen is present.

    To begin with, the E. coli in the experiment could not transport citrate into the cell cytoplasm when oxygen was present in the environment, because the citrate transporter gene was under the control of a promoter that was inhibited in the presence of oxygen. But at one point over the course of the experiment, a duplication of the citrate transporter gene into an area downstream of another promoter, this one active under aerobic conditions, created the specific association such that all three criteria were met. The bacteria had now acquired the function “aerobic citrate transport”. By a single mutation.
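    The all-or-nothing logic of the three criteria can be put in schematic form. This is purely an illustrative sketch of the description above, not a model of the biochemistry; the parameter names are my own:

```python
# Schematic of the "all three criteria or no function" logic described
# in the comment above. Purely illustrative; names are invented.
def aerobic_citrate_transport(has_transporter_gene: bool,
                              has_promoter: bool,
                              promoter_active_in_oxygen: bool) -> bool:
    # Any single missing component yields no function at all,
    # not a partially working transport system.
    return has_transporter_gene and has_promoter and promoter_active_in_oxygen

# Before the duplication: gene and promoter present, but the promoter is
# inactive under oxygen, so there is no function. The duplication event
# flips the third criterion, and the function appears in one step.
```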

    Demonstrably, irreducibly complex functions/systems therefore can and do evolve.

  5. It gets even better. The particular example JohnnyB gives in his video, of a “feedback loop” system that could not evolve (or at least, the challenge is to identify it evolving), is one where:

    X produces Y, Y produces Z, Z inhibits/regulates X

    Surprise surprise, that exact system has been observed evolving in a laboratory population of E. coli. Utilizing deletion strains of E. coli, an experimental population re-evolved a functional lac operon in the laboratory:
    Evolution of a Regulated Operon in the Laboratory

    Abstract
    The evolution of new metabolic functions is being studied in the laboratory using the EBG system of E. coli as a model system. It is demonstrated that the evolution of lactose utilization by lacZ deletion strains requires a series of structural and regulatory gene mutations. Two structural gene mutations act to increase the activity of ebg enzyme toward lactose, and to permit ebg enzyme to convert lactose into allolactose, an inducer of the lac operon. A regulatory mutation increases the sensitivity of the ebg repressor to lactose, and permits sufficient ebg enzyme activity for growth. The resulting fully evolved ebg operon regulates its own expression, and also regulates the synthesis of the lactose permease.

    I guess the question now is, did God come down from the heavens to cause these mutations to happen?

  6. johnnyb,

    If the existence of X makes the production of Y turn into an exponentiating probability, I don’t know how much further you can get. It does, in fact, make it a logical impossibility as a dependable mechanism, which it would have to be for it to work as a “mechanism”. So, you’ll have to forgive my terminology – to show that the nature of the problem implied that the probabilities decrease exponentially is certainly hitting the target in my book.

    A general difficulty with ‘disproving evolution’ by such means is the failure to account for the compensatory exponentiating effect of multiple potential interactions. The chance of an interaction of some kind in a multi-partition genome existing in multiple copies is not a simple scale-up of intuitions on one X.
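    The compensatory effect scales in the opposite direction from the intuition about one X: with many possible interactions, the chance that at least one occurs climbs rapidly. A hypothetical numerical sketch (both the per-interaction probability and the counts are invented for illustration):

```python
# Probability that at least one of n equally unlikely interactions
# occurs, when each has individual probability p. The values are
# invented purely to illustrate the scale-up point made above.
def p_at_least_one(p, n):
    return 1 - (1 - p) ** n

# One specific interaction at p = 1e-6 is negligible, but a million
# independent opportunities already give odds around 63%.
print(p_at_least_one(1e-6, 1))
print(p_at_least_one(1e-6, 10**6))
```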

  7. Rumraket: I’m speaking about the function of aerobic citrate transport.

    The evolution of citrate metabolism is interesting, but it was not a goal, nor was it a problem to be solved.

    If it had been a problem (the solution of which was necessary), the cultures would have died. Extinction is the most likely outcome when the environment changes in ways that require an adaptation, unless the necessary alleles already exist in the population by chance.

    Populations change because they can, not because they have to. Looking back and attributing purpose or direction is not a useful way of thinking.

  8. Somehow this all looks like the tornado in a junkyard, with a lot of lipstick.

  9. petrushka: The evolution of citrate metabolism is interesting, but it was not a goal, nor was it a problem to be solved.

    No, indeed. It was an opportunity to be stumbled upon. Jon’s comment in the other thread misses the point spectacularly:

    The point is that natural selection requires a genotype/phenotype map whose selection consistently points in the same direction. However, complexity theory shows that the road to arbitrary features (i.e., features that were not implicit in the system ahead-of-time) necessarily has a chaotic mapping through that configuration space.

    Thus, for such features, you can only stumble upon them by chance.* Information theory shows that chance grows exponentially large with the size of the minimum working system.

    Well, exactly!
    *My emphasis

  10. petrushka,

    The evolution of citrate metabolism is interesting, but it was not a goal, nor was it a problem to be solved.

    Was it a goal of the designers of the experiment?

  11. A static fitness landscape also seems problematic. Protein space may be fixed, but competition for limited energy and resources will create a gradient favoring exploration of any protein space that allows use of new resources, or more efficient use of existing ones. Hitting a local maximum only means that a local niche will be filled, not that evolution stops at that point.

  12. colewd:
    petrushka,

    Was it a goal of the designers of the experiment?

    They provided the niche, then waited for individual bacteria to pick up mutations that might be selected for in that niche. No design occurred, other than that provided by natural selection.

  13. At the present time, I am inclined to think that the best argument for design is a multi-pronged one, which makes use of several converging lines of evidence.

    And, of course, ignore the evidence that life wasn’t designed, like the failure of “design” to be portable across genetically separated lineages. What is intelligence doing if not thinking beyond evolutionary capabilities?

    Sort of the killer of ID, but not something that can’t be constantly ignored.

    Glen Davidson

  14. Hi JohnnyB,

    Thanks very much for your response. I’ve had a very long day, but before I turn in for the evening, I’d like to offer a quick response.

    1. I’m glad to hear that you espouse a form of (non-Darwinian) evolution. I’ll have a look at your latest video (April 15, 2017) tomorrow, when I get some free time.

    2. Having said that, I’d be interested to hear what you think of Professor Larry Moran’s proposal that Constructive Neutral Evolution can account for the evolution of irreducibly complex structures. Do you accept or reject Moran’s proposal – and if the latter, why?

    3. Thank you for your concession on “logical impossibility.” Perhaps “practical impossibility” might be a better way of putting it, since an event which is so rare that one wouldn’t expect to see it occur even once during the history of the cosmos is an event which we can confidently say won’t happen.

    4. I’d be grateful if you could explain a bit more about dynamic recursion. Specifically, why do all organisms require it, and why is it radically unpredictable?

    5. Thank you for pointing out that your views on Avida have nothing to do with Winston Ewert’s. You write that “Avida did not and to my knowledge has never evolved an open-ended loop that contributed to function.” Could you please explain why you believe that (a) all living things require an open-ended loop that contributes to functionality; (b) the evolution of an open-ended loop is radically unpredictable?

    6. I’m happy to agree that your methodology has a practical aspect, and that you put forward some highly specific criteria for detecting design. I didn’t mention that in my OP because I was mainly concerned with whether your view of irreducible complexity succeeded in demonstrating the impossibility of unguided evolution. However, the specificity of your program certainly distinguishes it from previous attempts to detect design. I’ll stop there for now. Cheers.

  15. Alan Fox: They provided the niche, then waited for individual bacteria to pick up mutations that might be selected for in that niche.

    My reading of Lenski is that there was no expectation that such an adaptation would occur, and it didn’t in most lineages.

    My point is that the bacteria didn’t “need” the adaptation. They were not solving a problem. JB engages in retrospective astonishment. Whatever happened must be the result of intention.

  16. Alan Fox/JohnnyB: Thus, for such features, you can only stumble upon them by chance.* Information theory shows that chance grows exponentially large with the size of the minimum working system.

    Tornado in a junkyard mates with retrospective astonishment. The chance calculation is relevant only if the current configuration was intended. And the assumption of intention is not supported by theory or evidence.

  17. petrushka: Rumraket: I’m speaking about the function of aerobic citrate transport.

    The evolution of citrate metabolism is interesting, but it was not a goal, nor was it a problem to be solved.

    Yeah I know, and that was sort of the point when I brought it up. Exactly because there was no selection for the ability before it evolved, it is a good case of the evolution of an irreducibly complex system, sensu JohnnyB.

  18. Tom English: It is ridiculous to have the arguments proceed in two threads. Vincent Torley is having a hard time adapting to the loss of his niche at Uncommon Descent. You will help him along by making your comments in the thread initiated by Jonathan Bartlett.

    Point taken. I have crossposted over there now.

  19. vjtorley,

    1. How bright/helpful is it to produce a lengthy interpretation of Bartlett’s argument when Bartlett is (or should be) available to answer questions? I doubt highly that he is clear as to what he’s trying to say, and I know surely that you have not read his mind.

    2. Why do you post when you do not have time to engage in discussion?

  20. petrushka: Tornado in a junkyard mates with retrospective astonishment. The chance calculation is relevant only if the current configuration was intended. And the assumption of intention is not supported by theory or evidence.

    I’m willing to bet that the probability of the junkyard tornado producing SOME configuration of SOME of the junk that SOMEONE could possibly find SOME use for, is quite high. Therefore god.

  21. Hi Tom English,

    One reason why I wrote my criticisms as a post was that I considered JohnnyB’s argument to be a substantive one, meriting a detailed analysis. The flaws in the argument which I claim to have identified were not readily apparent; I could only spot them after having set down the premises in writing, and going through them carefully. In writing my post, I was basically thinking aloud. Sometimes that helps.

    Yes, I could have written my criticisms as comments on JohnnyB’s thread, but they would have been very long comments, and I don’t think other readers would have appreciated that.

    Finally, I noticed that some readers were requesting a transcript of JohnnyB’s talk. My post doesn’t provide a transcript, but it does capture the nub of the argument, by quoting from the most important slides in the talk.

    As for time: I wrote the post in a single stretch, over a few hours. Writing a single comment on a thread is quicker, but responding to a series of comments from readers over a period of days (and sometimes weeks) can take up considerably more time than writing a post.

    I hope that answers your questions.

  22. LOL – the comment about questionable reasons for running alongside JohnnyB’s Discovery Institute-style IDist argumentation (it’s sympathy for his past). JohnnyB sadly is a user of IDist terms in an IDist tone of voice and self-confidence, hawking jargon as if it were innovative & effective in its dehumanising ‘engineering’ way. They have apparently become the ‘occasionalists of the 21st century,’ with Torley-like chaps to support them.

    “Professor Larry Moran’s proposal that Constructive Neutral Evolution can account for…”

    Must admit I just don't understaND VInCent's capitalisation cHOIcEs; are they intentional? They are non-standard, but probably Teaching the stuff doesn't mean One always does it Right. Why does he capitalise theory names, paradigms, etc. when he clearly does not mean IMMEDIATELY to THEOLOGISE them as he does intend in capitalising Intelligent Design as he defines it.

    The biggest point between Bartlett & Torley, given they share an Abrahamic monotheist worldview, is that Bartlett is still pushing the IDM's 'strictly scientific' ideology and Torley is now torn between knowing he must discard that ideology and at the same time knowing, with a desire to recover yet stricken with Expelled Syndrome, divorced from the DI, that he doesn't really have anywhere else to go … except to critique an IDist in a 'skeptical zone' among people he will never 'convert' into his Isolationist-Islander way.

    Torley's uppercase 'Intelligent Design' is openly THEOLOGICAL. Check it. Otoh, when Master Bartlett in nearly full individual command of the IDist 'theory' writes and speaks of 'design' it is always in lowercase letters, never implies (just hints) at a particular Designer and is ONLY scientific.

    Oh, and Johnny continues to confuse human design with divine design. Apparently it is an intentionality feedback loop amongst IDists to confuse this *on purpose*. But Rope perhaps helped Johnny with that a bit, if he was willing to listen. In either case, I'd like JohnnyB to please point to the best source in the IDist literature of a leading IDT proponent distinguishing between human design and divine design.

    I’m sure he’ll appreciate how theology is necessarily involved in the distinguishing. In that sense, Torley is indeed ahead of Bartlett in the run alongside IDist race.

  23. “not readily apparent; I could only spot them after having set down the premises in writing…”

    It doesn’t happen that slowly for some people. Torley apparently has a speed issue. And as most IDists, won’t admit it for the martyr’s long haul. It took him a considerable distance of time to finally take a stand and to openly reject Doug Axe’s ‘logic’ of IDism and the Discovery Institute’s ___________ along with it.

    Yet indeed, here he is pushing the pace ahead of Bartlett apparently trying to untangle the ideology. Perhaps the signs are beginning to show for Vincent.

  24. I’d remind folks that discussion of moderation issues should take place in the dedicated thread.

  25. Rumraket,

    You misunderstood both my argument and the application.

    First of all, I did not say that such things could not evolve. What I said was that if such things evolved, it was due to prior information sources which this formulation then gives us warrant to look for.

    Second, your example was of a positive feedback loop, not a negative feedback loop. A positive feedback loop can retain function while searching for an optimized feedback regulation. A negative feedback loop is out of control (and thus negatively selectable) until it finds the appropriate downstream regulation. Now, I actually do think that the evolution of citrate functionality in E. coli is based on information that E. coli has, but it is not provable via this formulation of IC as other forms of evolution might be.
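    The “out of control until it finds the appropriate downstream regulation” point can be made concrete with a toy simulation of a loop where X produces Y, Y produces Z, and Z inhibits X. This is a sketch of my own; all rate constants are invented for illustration and do not model any real biochemical system:

```python
# Toy discrete-time sketch of a negative feedback loop:
# X produces Y, Y produces Z, Z inhibits production of X.
# Rate constants are invented for illustration only.
def simulate(inhibited=True, steps=400):
    x = y = z = 0.0
    for _ in range(steps):
        production = 1.0 / (1.0 + z) if inhibited else 1.0
        x += production - 0.1 * x   # X is produced, and decays
        y += 0.5 * x - 0.1 * y      # X produces Y
        z += 0.5 * y - 0.1 * z      # Y produces Z
    return x, y, z

regulated = simulate(inhibited=True)
runaway = simulate(inhibited=False)
```

With the Z-to-X inhibition cut, X climbs to its unregulated ceiling; with it intact, X settles at a much lower regulated level.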

  26. vjtorley:
    2. Having said that, I’d be interested to hear what you think of Professor Larry Moran’s proposal that Constructive Neutral Evolution can account for the evolution of irreducibly complex structures. Do you accept or reject Moran’s proposal – and if the latter, why?

    I don’t know the specifics about Moran’s hypothesis, but, for the most part, most neutral evolutionary scenarios fall into one of two camps: (a) the author believes that it can be formed ex nihilo via random processes, or (b) the author thinks that a random combination of existing parts can do it. Under the (a) scenario, no, it is not possible (please take “impossibility” to imply exponential improbability). Under the (b) scenario, then, yes, if there was sufficient prior information, it certainly could happen under a neutral-like scenario. I actually view some forms of neutral theory as equivalent to Intelligent Design. Think of it this way – what would make neutral theory work? Existing repositories of information that could be recombined and redeployed easily. Thus, if the version of neutral theory works because of these large repositories of information, it is really no different than ID. Usually what happens is that neo-Darwinism is presented for the evolution of the initial repositories, and then neutral theory for modern evolution from those repositories. In such a case, I think they are wrong on the neo-Darwinism and at least somewhat correct on the neutral theory. Dembski’s “Searching Large Spaces” shows why the problem for creating such a repository of recombinable information is actually harder than making a single organism.

    3. Thank you for your concession on “logical impossibility.” Perhaps “practical impossibility” might be a better way of putting it, since an event which is so rare that one wouldn’t expect to see it occur even once during the history of the cosmos is an event which we can confidently say won’t happen.

    More importantly, it is that the necessity for X is exponentially prohibitive of getting it by Y process. I’m not even really attaching specific quantities at this point, but the exponential behavior means that for any modestly large functionality, yes, the probabilities would be beyond any reasonable range.

    4. I’d be grateful if you could explain a bit more about dynamic recursion. Specifically, why do all organisms require it, and why is it radically unpredictable?

    So, in computer programs, there are a variety of ways of getting Universal computation. Two of the better-known are Turing machines and the Lambda calculus. Turing machines are like step-by-step processes with places to write things down, and the Lambda calculus is like mathematics: function application. What makes them chaotic is that there are spaces where they can easily fall into infinite loops. In fact, some studies show that as programs get bigger, the amount of space covered by infinite loops approaches unity. Infinite loops are bad because computation never completes. A biological equivalent would be cancer: nothing to stop it at the appropriate time. However, without the possibility of infinite loops, you cannot have Universal computation; Universality requires the possibility of infinite loops. In Turing machines, what creates infinite loops is an unstructured loop. In the Lambda calculus, it is recursive functions. They are IC because what turns off the recursion is mutated independently of the recursion itself. This is also what makes it possible to be Universal.

    However, even many basic functions require universal computation. I couldn’t prove it at the moment, but I am relatively certain that even simple DNA transcription requires Universal computation – you have to repeat X over and over until a termination signal tells you to stop. In any case, there are innumerable examples of negative feedback loops in biology, so it is pretty certain that, whether or not biology requires a Universal machine, it does indeed both use one and uses the chaotic complex features of it.
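    The “repeat X over and over until a termination signal tells you to stop” pattern can be sketched as follows. This is a loose computational analogy, not real molecular biology; the template string and stop symbol are invented:

```python
# Open-ended copying that halts only when a stop symbol arrives in the
# input: a loose analogy for "repeat until a termination signal".
# The template and the "*" stop symbol are invented for illustration.
def transcribe(template, terminator="*"):
    transcript = []
    for base in template:
        if base == terminator:   # halting depends on the data, not the loop
            break
        transcript.append(base)
    return "".join(transcript)

print(transcribe("GCAGC*AA"))  # copies up to, but not including, "*"
```

Note that if the stop symbol never appears, the loop simply copies everything it is given; the copying and the stopping are independent pieces.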

    5. Thank you for pointing out that your views on Avida have nothing to do with Winston Ewert’s. You write that “Avida did not and to my knowledge has never evolved an open-ended loop that contributed to function.” Could you please explain why you believe that (a) all living things require an open-ended loop that contributes to functionality; (b) the evolution of an open-ended loop is radically unpredictable?

    Most of this should be answered in the previous question. But for (b), take my example factorial function from the video. This is a *very* simplistic function, but it uses an open-ended loop. Now, imagine any mutation to it (changing a variable name, an operator, a number) and you will see that nearly all of the results are not just slightly different, but radically so. Additionally, there are a great many ways of changing it that are catastrophic, mostly because I have to manage both the loop and the conditions for exiting the loop independently. This is why negative feedback loops require information to evolve. If they don’t have information, then they would not be able to evolve past this dual simultaneous necessity.
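    A minimal stand-in for the kind of factorial function described here, with the loop, the exit condition, and the progress toward exit kept as separate pieces (my own sketch, not the code from the video):

```python
# An open-ended loop whose exit condition is maintained separately
# from the work the loop does. A sketch, not JohnnyB's actual code.
def factorial(n):
    result = 1
    i = 1
    while i <= n:       # the exit condition lives here...
        result *= i     # ...the "work" lives here...
        i += 1          # ...and the progress toward exit lives here
    return result
```

Single “point mutations” illustrate the fragility claim: flipping `<=` to `>=` makes the loop never run for positive n, and deleting `i += 1` makes it never halt. Neither yields a slightly different function.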

  27. Allan Miller:
    A general difficulty with ‘disproving evolution’ by such means is the failure to account for the compensatory exponentiating effect of multiple potential interactions. The chance of an interaction of some kind in a multi-partition genome existing in multiple copies is not a simple scale-up of intuitions on one X.

    Actually it gets worse, because you have a bunch of pleiotropic effects, which make selectability in a single direction even more of a difficult situation.

  28. Tomato Addict:
    A static fitness landscape also seems problematic. Protein space may be fixed, but competition for limited energy and resources will create a gradient favoring exploration of any protein space that allows use of new resources, or more efficient use of existing ones. Hitting a local maximum only means that a local niche will be filled, not that evolution stops at that point.

    I don’t see how a dynamic fitness landscape changes the problem except to exacerbate it. Again, you wind up with less of the ability to select in a continuous direction.

  29. petrushka
    The chance calculation is relevant only if the current configuration was intended. And the assumption of intention is not supported by theory or evidence.

    Actually, I would say that there is considerable evidence of intention. The same mutation can be deployed by the organism in days when under selective pressure. This means that the organism has information for the ability to perform the change. As I have pointed out in other papers, it is actually fairly reasonable for organisms to maintain alternative-good configurations. It seems like E. coli is cycling through its alternative-good configurations when it is not under selection, as a hedge for future environments, but deploying them more proactively when it is under selection.

    Anyway, the ability for E. coli to deploy this alternative configuration in days when under selection is evidence that E. coli contains an information repository to do so.

  30. Hi JohnnyB,

    Thank you very much for your responses to my questions. They were very illuminating. So if I understand you rightly, you are saying that:

    (i) infinite loops are common in biology – e.g. even simple DNA transcription requires it, and so do negative feedback loops; and

    (ii) just about any mutation to the computation for an infinite loop will result in radical and often catastrophic changes; so therefore

    (iii) to prevent this from happening, both the loop and the conditions for exiting the loop need to be managed separately, which requires the input of a lot of information (via intelligent design).

    OK. Now I understand you better. However, what you’re assuming is that

    (i)’ biological changes are equivalent to computations, which means that

    (i)” evolutionary changes are equivalent to Universal computations, since only a Universal Turing machine can model evolution.

    I dispute premise (i)’ for reasons that I explained in my OP. The fact that we need a Universal Turing machine to model evolution doesn’t mean that evolution itself involves computations. What your argument shows is that models of evolution are fragile in a way that evolution itself need not be. Or am I missing something?

  31. Gregory,

    Kindly go away. Your presence on this thread will henceforth be ignored. And by the way, I capitalized Constructive Neutral Evolution because Professor Larry Moran (whom I was quoting) did so. Goodbye.

  32. johnnyb: The same mutation can be deployed by the organism in days when under selective pressure. This means that the organism has information for the ability to perform the change.

    facepalm

  33. “Kindly go away.”

    Willful ignorance is a good sign that a face-off (rather than facepalm) is exactly what’s needed for vjtorley in his Expelled Syndrome. Thus far he has been saved from ever confronting the break he actually made with the Discovery Institute due to their abominable politicking record and their claim, nay, DEMAND, that by fiat IDT is ‘strictly scientific.’

    And that’s the kind of ‘strictly scientific’ approach JohnnyB is taking here, though it has proven impossible for the best of the IDist leadership, from Johnson & Meyer, to Behe and Dembski, Nelson, Wells, et al. JohnnyB offers nothing beyond their work, nothing innovative about IDism himself, as far as it seems.

    Of course, JohnnyB responded to THE SKEPTICS. But not to the Abrahamic theist. Ah, yeah, sigh, that’s IDist ideology played out in PR & communications tactics. 🙁 Much negativism displayed in people’s hearts and minds … all in one place.

  34. It was clear by the fifth sentence of the OP that Vincent Torley was misrepresenting Bartlett’s presentation:

    I would also like to commend JohnnyB on his mathematical rigor, which has helped illuminate the key issues.

    Jonathan Bartlett’s talk is mathematical-istic word salad, devoid of rigor. In all sincerity, I would judge Bartlett mentally ill, were he not a young-earth creationist. The locus of the madness is YEC culture, not the individual YEC.

    I know that most folks were not sure what to make of the last third of the presentation. I’m telling you that there is nothing to make of it (nor of Bartlett’s paper). I’ve engaged a lot of ID math. I’ve also taught the theory of computation several times at the graduate level. I’d gladly respond if there were anything to which I might respond.

    Vincent Torley is not crazy. But something has gone terribly awry with a trained philosopher who not only pretends to understand utter nonsense, but also proclaims it illuminating.

    (Note, Vincent, that you will only make a greater ass of yourself if you resort to your usual evasions. You have no way out but to give a rigorous account of the “mathematical rigor,” along with an explanation of how it “has helped to illuminate the key issues” — which is to say, you have no way out.)

  35. johnnyb: The same mutation can be deployed by the organism in days when under selective pressure.

    dazz: facepalm

    petrushka: With evidence like that, there’s not much point in arguing.

    If I responded to his mathematical-ishness with a facepalm, I’d need reconstructive surgery.

  36. johnnyb: Actually, I would say that there is considerable evidence of intention. The same mutation can be deployed by the organism in days when under selective pressure. This means that the organism has information for the ability to perform the change.

    Whose intention? The organism has information that the change is doable and then it does it, but why is that intention rather than adaptation?

    If you mean the “designer’s” intention, then according to the fine-tuning argument (which I think is as fishy as ID), “designer’s” intention is revealed for the *environment* being fit to us, not the other way round. But according to you, are both of these manifestations of “designer’s” intention?

  37. “The locus of the madness is YEC culture, not the individual YEC.”

    The fields that study that are sociology and social psychology. Memetics is ideological historical discard.

  38. johnnyb: Anyway, the ability for E. coli to deploy this alternative configuration in days when under selection is evidence that E. coli contains an information repository to do so.

    Really, what you need to do here is produce a rationale for the fact that in the Lenski experiment, most lineages failed to deploy the magic alternative configuration.

  39. Question for the pros please. In Lenski’s replay experiments, was it the same mutation that enabled those strains to grow aerobically on citrate?

  40. dazz,

    Not a pro as such, but I believe there was a ‘potentiating’ mutation that rendered citrate evolution re-evolvable. Replaying from prior to that occurrence did not result in citrate metabolism being observed. Replays from afterwards did – even though the key mutation itself did not impart the capacity directly; something else had to happen which the first mutation rendered more likely. Dependent probabilities! 🙂
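    The dependent-probabilities point can be written out as a two-stage calculation. The numbers below are invented purely for illustration; they are not estimates from the Lenski experiment:

```python
# Two-stage (dependent) probability: the second mutation only "counts"
# once the potentiating mutation is in place. All numbers are invented
# for illustration, not measured values from the experiment.
p_potentiating = 1e-7            # chance the enabling mutation occurs first
p_actualizing_given_pot = 1e-4   # chance of the second, once potentiated

p_citrate_from_scratch = p_potentiating * p_actualizing_given_pot
p_citrate_after_potentiation = p_actualizing_given_pot

print(p_citrate_from_scratch)        # tiny: replays from early generations
print(p_citrate_after_potentiation)  # far larger: replays from later ones
```

This is why replays begun before the potentiating mutation rarely succeed, while replays begun after it do.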

  41. Hi Tom English,

    Before I critique a presentation that someone has gone to a lot of trouble to put together, I generally try to say something complimentary about it. It’s called good manners.

    When you write, “In all sincerity, I would judge Bartlett mentally ill, were he not a young-earth creationist,” you undermine your own credibility. You, like me, are totally unqualified to assess a person’s mental health – particularly when it’s someone you haven’t even met.

    Here’s some information about Jonathan Bartlett’s background:

    Jonathan Bartlett is the director of technology at New Medio and is a cofounder of the healthcare software company Docvia.com. He has been the lead developer for numerous Web 2.0 applications while with New Medio and Docvia.com, and has lead [sic] both companies in their transitions to Ruby-on-Rails as a development framework. He is the author of Programming from the Ground Up, which is an introductory textbook on computer science using assembly language which is used at Princeton University. He is also a regular author at IBM’s DeveloperWorks, with articles on subjects ranging from high-performance Playstation 3 programming to advanced metaprogramming techniques.

    Jonathan has spoken to groups on both technical and non-technical topics. On the technical side, Jonathan has given presentations on secure programming practices, Ruby-on-Rails, REST, memory management, computer hardware organization, and introductory programming techniques. On the business side, Jonathan has given presentations on data security, email marketing, and dealing with technology in your organization.

    Judging from the above, I’d say he’s mathematically competent.

    Of course, being mathematically competent does not mean the same thing as being familiar with all fields of mathematics. Very few people could truthfully make that claim.

    You have criticized Jonathan for invoking an analogy between the genome and a program in his talk, but even you admit that the analogy was “in vogue with biologists for quite some time.” You go on to say: “A good response to Bartlett might be to point out the difference between a conventional program and a network of interacting elements… I see the interactions of a bunch of rules as more like a genetic regulatory network than the composition of expressions of functions.”

    Even supposing you’re right on this point (and you may well be), that in no way undermines the statement I made near the beginning of my OP that Jonathan Bartlett is to be commended for his mathematical rigor. Within his own field, he reasons well. What’s more, he managed to give a definition of irreducible complexity that’s streets ahead of anything provided by others working in the field, in terms of clarity and simplicity: “An Irreducibly Complex system is a system which utilizes (and utilizes necessarily) the chaotic space of a Class 4 system to implement a function.” I would also refer you to his comment number 30 above, in which he succinctly explains why mutations to infinite loops lead to radically unpredictable results. In my reply, I don’t question his mathematical competence; rather, the question that concerns me is its applicability to the world of biology.

    I note also that Jonathan Bartlett had the decency to state up-front how his claims could be falsified: “The funny thing is that my claim, if false, is almost trivially falsifiable. Find me a genetic algorithm that reliably creates open-ended loops as part of its problem-solving.” I was a little disappointed that none of his critics took him up on that specific point.

    Finally, you write that “Bartlett is blurring the distinction between functions calculated by machines and functions served by molecules.” That’s a valid criticism; however, it’s not a mathematical but a philosophical one.

    To sum up: one might fault Jonathan Bartlett’s methodology, the philosophical gaps in his reasoning, and the analogies he draws between biological processes and computations. One might even suggest that there are other areas of mathematics that are more relevant to evolution, which he hasn’t discussed in his talk. But I think that a fair-minded person would agree that within the mathematical framework he is discussing, he does a good job of handling the math.

    I would also note in passing that Patrick (who is no fool) was sufficiently impressed by Jonathan Bartlett’s talk as to request a transcript. And I might add that Professor Felsenstein agreed with my critique of Jonathan Bartlett’s presentation, which did not question his grasp of mathematics.

  42. Being able to do sums correctly does not imply understanding of biochemistry or biology.

    Your model still has to be correct. And in this case, the limitations of the model make it irrelevant to his intended task.

  43. vjtorley,

    Tom English: Note, Vincent, that you will only make a greater ass of yourself if you resort to your usual evasions. You have no way out but to give a rigorous account of the “mathematical rigor,” along with an explanation of how it “has helped to illuminate the key issues” — which is to say, you have no way out.

    Hail Vincent, devoid of grace.

    You’ve laid on with rhetorical tricks that you would nail in a New York minute, were they coming from someone other than you. Thus I have to believe that you know precisely what you’re doing.

Leave a Reply