Organisms and Machines

In the “The Disunity of Reason” thread, Mung suggested that “the typical non-theist will insist that organisms are machines, including humans.” And there is a long tradition of mechanistic metaphysics in Western anti-theism (La Mettrie is probably the most well-known example). However, I pointed out there that I disagree with the claim that organisms are machines. I’m reposting my thoughts from that thread for our continued conversation.

A machine is a system with components or parts that can be partially isolated from the rest of the system and made to vary independently of the system in which they are embedded, but which has no causal loops that allow it to minimize the entropy produced by the system. It will generate as much or as little heat as it is designed to do, and will accumulate heat until the materials lose the properties necessary for implementing their specific functions. In other words, machines can break.

What makes organisms qualitatively different from machines is that organisms are self-regulating, far-from-equilibrium thermodynamic systems. Whereas machines are nearly always in thermodynamic equilibrium with the surrounding system, organisms are nearly always far from thermodynamic equilibrium — and they stay there. An organism at thermodynamic equilibrium with its environment is, pretty much by definition, dead.

The difference, then, is that machines require some agent to manipulate them in order to push them away from thermodynamic equilibrium. Organisms temporarily sustain themselves at far-from-equilibrium attractors in phase space — though entropy catches up with all of us in the end.
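To make the thermodynamic point a bit more explicit, here is the standard entropy balance for an open system, in the spirit of Prigogine’s non-equilibrium thermodynamics (the notation is mine, not anything from the original thread):

$$\frac{dS}{dt} = \frac{d_i S}{dt} + \frac{d_e S}{dt}, \qquad \frac{d_i S}{dt} \ge 0$$

Here $d_i S/dt$ is the entropy produced inside the system and $d_e S/dt$ is the entropy exchanged with the environment. On this way of putting it, an organism stays far from equilibrium by exporting entropy at roughly the rate it produces it ($d_e S/dt \approx -d_i S/dt$), whereas a machine has no causal loop that makes its exchange term track its own entropy production.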

It is true that some parts of an organism can break — a bone, for example. But I worry that in producing a concept general enough to subsume both breaking and dying, one can lose sight of the specific difference that one is trying to explain.

Indeed, that’s the exact problem with Intelligent Design theory — the ID theorist says, “organisms and machines are exactly the same, except for all the differences”. Which is why the ID theorist then concludes that organisms are just really special machines — the kind of machines that only a supremely intelligent being could have made. As Fuller nicely puts it, according to ID “biology is divine technology”.

Spontaneous Generation

A century later we know that the overwhelming obstacle facing spontaneous generation is probability, or rather improbability, resulting from life’s enormously complex phenotypes. If even a single protein, a single specific sequence of amino acids, could not have emerged spontaneously, how much less so could a bacterium like E. coli with millions of proteins and other complex molecules? Modern biochemistry allows us to estimate the odds, and they demolish the spontaneous creation of complex organisms.

– Andreas Wagner

Looks like IDists aren’t the only ones to appeal to probability arguments. How does Wagner know what the probabilities are, or that spontaneous generation is even within the realm of what is possible?
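For what it’s worth, the back-of-envelope version of the protein calculation usually runs like this (my reconstruction of the generic argument, not Wagner’s own numbers): if each of the 20 amino acids were equally likely and independent at each position of one specific target sequence of length $n$, then

$$P(\text{one specific sequence}) = \left(\tfrac{1}{20}\right)^{n}, \qquad n = 100 \;\Rightarrow\; P \approx 10^{-130}.$$

Each ingredient of that estimate (uniform sampling, independence of positions, a single acceptable target) is itself an assumption, which is where the question of how anyone knows the probabilities gets its bite.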


The Reasonableness of Atheism and Black Swans

As an ID proponent and creationist, the irony is that at the time in my life when I have the greatest level of faith in ID and creation, it is also the time in my life when, at some level, I wish it were not true. I have concluded that if the Christian God is the Intelligent Designer, then He also makes the world a miserable place by design, that He has cursed this world because of Adam’s sin. See Malicious Intelligent Design.

The Enigma of Lamarckism

Lamarckism (or Lamarckian inheritance) is the idea that an organism can pass on characteristics that it acquired during its lifetime to its offspring (also known as heritability of acquired characteristics or soft inheritance).

– Wikipedia

Many of us have probably been taught that Lamarckian inheritance is anathema. Heresy. But why would that be the case? Is it for theoretical reasons or simply because of a lack of empirical evidence?


The Myth of Biosemiotics

I recently came across this book:

Biosemiotics: Information, Codes and Signs in Living Systems

This new book presents contexts and associations of the semiotic view in biology, by making a short review of the history of the trends and ideas of biosemiotics, or semiotic biology, in parallel with theoretical biology. Biosemiotics can be defined as the science of signs in living systems. A principal and distinctive characteristic of semiotic biology lies in the understanding that in living, entities do not interact like mechanical bodies, but rather as messages, the pieces of text. This means that the whole determinism is of another type.

Pardon my skepticism, but

  1. There is no information in living systems.
  2. There are no codes in living systems.
  3. There are no signs in living systems.

Biosemiotics is the study of things that just don’t exist. Theology for biologists.


YEC part 1

[Alan Fox asked why I’m a YEC (Young Earth Creationist), and I promised him a response here at The Skeptical Zone.]

I was an Old Earth Darwinist raised in a Roman Catholic home and secular public schools, but then became an Old Earth Creationist/IDist, a Young Life/Old Earth Creationist/IDist, then a Young Life/Young Earth Creationist/IDist. After becoming a creationist, I remained a creationist even during bouts of agnosticism in the sense that I found accounts of a gradualistic origin and evolution of life scientifically unjustified.

Speculative Naturalism

The standard design-theorist argument hinges on the assumption that there are three logically distinct kinds of explanation: chance, necessity, and design.  (I say “explanation” rather than “cause” in order to avoid certain kinds of ambiguities we’ve seen worked out here in the past two weeks).

This basic idea — that there are these three logically distinct kinds of explanation — was first worked out by Plato, and from Plato it was transmitted to the Stoics (one can see the Stoics use this argument in their criticism of the Epicureans), and then it was re-activated in the 18th century and after, as in the Christian Stoicism of the Scottish and English Enlightenment, of which William Paley is a late representative. Henceforth I’ll call this distinction “the Platonic Trichotomy”.

There are at least two different ways of criticizing the Platonic Trichotomy.  One approach, much-favored by ultra-Darwinists, is to argue that unplanned heritable variation (“chance”) and natural selection (“necessity,” if natural selection is a “law” in the first place) together can produce the appearance of design.  (Jacques Monod is a proponent of this view, and perhaps Dawkins is today.) The other approach, which I prefer, is to reject the entire Trichotomy.

To reject the Trichotomy is not to reject the idea that speciation is largely explained in terms of the feedback between variation and selection, but rather to reject the idea that this process is best conceptualized in terms of “chance” and “necessity.”

So what’s the alternative? What we would need here is a new concept of nature that is not beholden to any of the positions made possible by the conceptual straitjacket imposed by the Trichotomy.

Reification of the tree metaphor

A brief note to a regular reader.

The Darwinian “tree of life” is not an actual tree. It is a diagram of relationships. Therefore it can survive without having established its “roots”.

It could be granted that the origin of life was artificial, or even supernatural, and the theory of evolution would still be applicable within its domain.

This is not the first time the error in the essay challenge has been pointed out, but it costs us little to hope that a sincere individual, in no way guilty of peddling a religiopolitical agenda, would acknowledge the mistake.

The Programmer and N.E.C.R.O.

A computer programmer noticed that he was not able to type very much in a single day.  But he mused that if there were a large number of software bots working on his code, then they might be able to proceed via totally blind trial and error.  So he decided to try an experiment.

In the initial version of his experiment, he established the following process.

1. The software was reproduced by an imperfect method of replication, such that it was possible for random copying errors to sometimes occur.  This was used to create new generations of the software with variations.

2. The new instances of the software were subjected to a rigorous test suite to determine which copies of the software performed the best.  The worst performers were weeded out, and the process was repeated by replicating the best performers.

The initial results were dismal.  The programmer noticed that changes to a working module tended to quickly impair function, since the software lost the existing function long before it gained any new function.  So, the programmer added another aspect to his system — duplication.

3. Rather than have the code’s only copy of a function be jeopardized by the random changes, he made copies of the content from functional modules and added these duplicated copies to other parts of the code.  In order to not immediately impair function due to the inserted new code, the programmer decided to try placing the duplicates within comments in the software.  (Perhaps later, the transformed duplicates with changes might be applied to serve new purposes.)

Since the software was not depending on the duplicates for its current functioning, this made the duplicates completely free to mutate due to the random copying errors without causing the program to fail the selection process.  Changes to the duplicated code could not harm the functionality of the software and thereby cause that version to be eliminated.  Thus, in this revised approach with duplicates, the mutations to the duplicated code were neutral with regard to the selection process.

The programmer dubbed this version of his system N.E.C.R.O. (Neutral Errors in Copying, Randomly Occurring).  He realized that even with these changes, his system would not yet fulfill his hopes.  Nevertheless, he looked upon it as another step of exploration.  In that respect it was worthwhile and more revealing than he had anticipated, leading the programmer to several observations as he reflected on the nature of its behavior.
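For readers who want to play with the question below themselves, here is a minimal toy sketch of the N.E.C.R.O. setup in Python. It is my own illustration, not OMagain’s or the programmer’s actual code: the “test suite” is reduced to matching a fixed target string, and the names (TARGET, MUTATION_RATE, and so on) are invented for the sketch.

import random

# Toy sketch of the N.E.C.R.O. setup (illustrative only): each "program"
# is a pair of strings -- an active region that the "test suite" scores,
# and a commented-out duplicate that is copied with the same error rate
# but never scored (i.e. neutral with respect to selection).

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "a working module"   # stand-in for code that passes the test suite
MUTATION_RATE = 0.02          # probability of a copying error per character
POP_SIZE = 50
GENERATIONS = 200

def mutate(text):
    """Imperfect replication: each character may be miscopied."""
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in text
    )

def fitness(active):
    """'Test suite': how many positions still match the working module."""
    return sum(a == t for a, t in zip(active, TARGET))

def necro_run():
    # Start with working programs, each carrying a duplicate of its
    # functional code inside a comment.
    population = [(TARGET, TARGET)] * POP_SIZE
    for _ in range(GENERATIONS):
        # Replication with random copying errors in both regions.
        offspring = [
            (mutate(active), mutate(comment))
            for active, comment in population
            for _ in range(2)
        ]
        # Selection sees only the active region; the comment region is
        # free to change without being selected out.
        offspring.sort(key=lambda prog: fitness(prog[0]), reverse=True)
        population = offspring[:POP_SIZE]
    best_active, best_comment = population[0]
    print("active region: ", best_active)
    print("comment region:", best_comment)

if __name__ == "__main__":
    random.seed(0)
    necro_run()

Both regions are copied under the same error rate, but only the active region is visible to selection; what becomes of the comment region over many generations of copying is exactly the question posed below.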

Under these conditions of freedom to change without being selected out for loss or impairment of current function, what should we expect to happen to the duplicated code sequences over time and over many generations of copying?

And why?

[p.s. Sincere thanks to real computer programmer OMagain for providing the original seed of the idea for this tale, which serves as a context for the questions about Neutral Errors in Copying, Randomly Occurring.]