The Myth of Biosemiotics

I recently came across this book:

Biosemiotics: Information, Codes and Signs in Living Systems

This new book presents contexts and associations of the semiotic view in biology, by making a short review of the history of the trends and ideas of biosemiotics, or semiotic biology, in parallel with theoretical biology. Biosemiotics can be defined as the science of signs in living systems. A principal and distinctive characteristic of semiotic biology lies in the understanding that in living, entities do not interact like mechanical bodies, but rather as messages, the pieces of text. This means that the whole determinism is of another type.

Pardon my skepticism, but

  1. There is no information in living systems.
  2. There are no codes in living systems.
  3. There are no signs in living systems.

Biosemiotics is the study of things that just don’t exist. Theology for biologists.

Continue reading

What A Code Is – Code Denialism Part 3

My intent here in these recent posts on the genetic code has been to expose the absurdity of Code Denialism. The intent has not been to make the case for intelligent design based upon the existence of biological codes. I know some people find that disconcerting but that would be putting the cart before the horse. No one is going to accept a conclusion when they deny the premise. And please forgive me if I choose not to play the game of “let’s pretend it really is a code” while you continue to deny that it actually is a code.

First I’d like to thank you. It’s actually been pretty neat looking up and reading many of these resources in my attempt to see whether I could defend the thesis that the genetic code is a real code. I admit it’s also been much too much fun digging up all the reasons why code denialism is just plain silly (and irrational).

That the genetic code is a code is common usage, and if “meaning is use” that alone ought to settle the matter. But this is “The Skeptical Zone”, and Code Denialism is strong here. Still, I’m not just claiming that it’s a code because we say it’s a code in common usage. I’m claiming it is a code because it meets the definition of a code. The reason we say it is a code is that it is in fact a code.

My first two posts have been on some of the major players and how they understood they were dealing with a code and how that guided their research. I’ll have more to say on that in the future as it’s a fascinating story. But for now …

What A Code Is

Continue reading

Dictionary halting problem makes A=A fallible

It would seem superficially this is correct

A=A, necessarily and infallibly true

but it is false for non-trivial A and where the EQUAL SIGN means equal in essence, not just equal in description. This is a consequence of the dictionary problem and the halting problem, which have been well demonstrated by first-rate mathematical logicians and computer scientists (like Alonzo Church, who was a devout Christian) who actually work with formal languages, as opposed to people clinging to antiquated and non-rigorous theological notions.
Continue reading

What is obvious to Granville Sewell

Granville Sewell, who needs no introduction here, is at it again. In a post at Uncommon Descent he imagines a case where a mathematician finds that looking at his problem from a different angle shows that his theorem must be wrong. Then he imagines talking to a biologist who thinks that an Intelligent Design argument is wrong. He then says to the biologist:

“So you believe that four fundamental, unintelligent forces of physics alone can rearrange the fundamental particles of physics into Apple iPhones and nuclear power plants?” I asked. “Well, I guess so, what’s your point?” he replied. “When you look at things from that point of view, it’s pretty obvious there must be an error somewhere in your theory, don’t you think?” I said.

As he usually does, Sewell seems to have forgotten to turn comments on for his post at UD. Is it “obvious” that life cannot originate? That it cannot evolve descendants, some of which are intelligent? That these descendants cannot then build Apple iPhones and nuclear power plants?

As long as we’re talking about whether some things are self-evident, we can also discuss whether this is “pretty obvious”. Discuss it here, if not at UD. Sewell is of course welcome to join in.

A question for Winston Ewert

Added June 17, 2015: Jump in with whatever comments you like, folks. Dr. Ewert has responded nebulously at Uncommon Descent. I’d have worked with him to get his meaning straight. I’m not going to spend my time on deconstruction. However, I will take quick shots at some easy targets, mainly to show appreciation to Lizzie for featuring this post as long as she has. Here, again, is what I put to Dr. Ewert:

Your “search” process decides when to stop and produce an outcome in the “search space.” A model may do this, but biological evolution does not. How do you measure active information on the biological process itself? Do you not reify a model?

Dr. Ewert seemingly forgets that to measure active information on a biological process is to produce a specific quantity, e.g., 109 bits.

One approach is to take the search space not to be the individual organisms, but rather the entire population of organisms currently alive on earth. Or one could go further, and take it to be the history of organisms during the whole of biological evolution. One could also take it to be possible spacetime histories. The target can then be taken to be spacetimes, histories, or populations that contain an individual organism type such as birds.

These “search spaces” roll off the tongue. But no one knows, or ever will know, what they actually contain. Even if we did know, no one would know the probabilities required for calculation of the active information for a given target. And even if we did know the probability of a given “target” for a given “search,” we would not be able to justify designating a particular probability distribution on the search space as the “natural” baseline. By the way, Dr. Ewert should not be alluding to infinite sets, as his current model of search applies only to finite sets.

Continue reading

The Quest for Certainty

According to Arrington:

“We cannot know completely. Kurt Gödel demonstrated that even the basic principles of a mathematical system while true cannot be proved to be true. This is his incompleteness theorem. Gödel exploded the myth of the possibility of perfect knowledge about anything. If even the axioms of a mathematical system must be taken on faith, is there anything we can know completely? No there is not. Faith is inevitable. Deny that fact and live a life of blinkered illusion, or embrace it and live in the light of truth, however incompletely we can apprehend it.”

Unfortunately, Arrington is not even wrong.

Continue reading

A resolution of the ‘all-heads paradox’

There has been tremendous confusion here and at Uncommon Descent about what I’ll call the ‘all-heads paradox’.

The paradox, briefly stated:

If you flip an apparently fair coin 500 times and get all heads, you immediately become suspicious. On the other hand, if you flip an apparently fair coin 500 times and get a random-looking sequence, you don’t become suspicious. The probability of getting all heads is identical to the probability of getting that random-looking sequence, so why are you suspicious in one case but not the other?
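The numerical claim at the heart of the paradox is easy to check. Here is a quick sketch (the 230–270 band I use for “random-looking” sequences is my own arbitrary choice, just to make the point concrete):

```python
from fractions import Fraction
from math import comb

# Probability of any one specific 500-flip sequence from a fair coin.
p_specific = Fraction(1, 2) ** 500

# "All heads" is one specific sequence; so is any fixed random-looking one.
p_all_heads = p_specific
print(p_all_heads == p_specific)  # True: the two probabilities are identical

# What differs is the *class* each outcome belongs to. There is exactly one
# all-heads sequence, but an astronomical number of sequences with, say,
# 230-270 heads -- the "random-looking" class.
n_balanced = sum(comb(500, k) for k in range(230, 271))
p_balanced_class = n_balanced * p_specific
print(float(p_balanced_class))  # roughly 0.93: a random-looking outcome is unsurprising
```

So the surprise attaches not to the individual sequence but to the tiny class of “special” outcomes it falls into.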

In this post I explain how I resolve the paradox. Lizzie makes a similar argument in her post Getting from Fisher to Bayes, but there are some differences, so keep reading.

Continue reading

“Darwin’s Delusion” Concise Version

Darwin’s Delusion vs. Death of the Fittest

From Kimura and Maruyama’s paper The Mutational Load (eqn 1.4), Nachman and Crowell’s paper Estimate of the Mutation Rate per Nucleotide in Humans (last paragraph), and Eyre-Walker and Keightley’s paper High Genomic Deleterious Mutation Rates in Hominids (2nd paragraph), we see that by using the Poisson distribution, it can be deduced that the probability P(0,U) of a child not getting a novel mutation is reasonably approximated as:

P(0, U) = U^0 e^(−U) / 0! = e^(−U)

where 0 corresponds to the “no mutation” outcome, and U is the mutation rate expressed in mutations per individual per generation.

If the rate of slightly dysfunctional or slightly deleterious mutations is 6 per individual per generation (i.e. U=6), the above result suggests each generation is less “fit” than its parents’ generation since there is a 99.75% probability each offspring is slightly defective. Thus, “death of the fittest” could be a better description of how evolution works in the wild for species with relatively low reproductive rates such as humans.
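For readers who want to check the arithmetic, the figures above work out as follows (a minimal sketch; U=6 is the rate assumed in the text):

```python
import math

# Poisson probability of exactly k events when the mean rate is U:
# P(k, U) = U**k * exp(-U) / k!
def poisson(k, U):
    return U**k * math.exp(-U) / math.factorial(k)

U = 6  # deleterious mutation rate per individual per generation (as in the text)
p_none = poisson(0, U)   # chance a child carries no new mutation
print(p_none)            # ~0.0025
print(1 - p_none)        # ~0.9975, i.e. the 99.75% figure quoted above
```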

Dr Nim

It has struck me more than once that a lot of the confusion that accompanies discussions about stuff like consciousness and free will, intelligent design, teleology, even the explanatory filter, and the words “chance” and “random” arises from lack of clarity over the difference between decision-making and intention. I think it’s useful to separate them, especially given the tendency for people, particularly those making pro-ID arguments, but also those making ghost-in-the-machine consciousness or free-will arguments, to regard “random” as meaning “unintentional”. Informed decisions are not random. Not all informed decisions involve intention.

This was my first computer:

It was called Dr Nim.  It was a computer game, but completely mechanical – no batteries required.  You had to beat Dr Nim, by not being the one left with the last marble, and you took turns with Dr Nim (the plastic board).  It was possible to beat Dr Nim, but usually Dr Nim won.

Continue reading

Searching for a search

Dembski seems to be back online again, with a couple of articles at ENV, one in response to a challenge by Joe Felsenstein for which we have a separate thread, and one billed as a “For Dummies” summary of his latest thinking, which I attempted to precis here. He is anxious to ensure that any critic of his theory is up to date with it, suggesting that he considers that his newest thinking is not rebutted by counter-arguments to his older work. He cites two papers (here and here) he has had published, co-authored with Robert Marks, and summarises the new approach thus:

So, what is the difference between the earlier work on conservation of information and the later? The earlier work on conservation of information focused on particular events that matched particular patterns (specifications) and that could be assigned probabilities below certain cutoffs. Conservation of information in this sense was logically equivalent to the design detection apparatus that I had first laid out in my book The Design Inference (Cambridge, 1998).

In the newer approach to conservation of information, the focus is not on drawing design inferences but on understanding search in general and how information facilitates successful search. The focus is therefore not so much on individual probabilities as on probability distributions and how they change as searches incorporate information. My universal probability bound of 1 in 10^150 (a perennial sticking point for Shallit and Felsenstein) therefore becomes irrelevant in the new form of conservation of information whereas in the earlier it was essential because there a certain probability threshold had to be attained before conservation of information could be said to apply. The new form is more powerful and conceptually elegant. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks. It therefore suggests an ultimate source of information, which it can reasonably be argued is a designer. I explain all this in a nontechnical way in an article I posted at ENV a few months back titled “Conservation of Information Made Simple” (go here).


As far as I can see from his For Dummies version, as well as from his two published articles, he has reformulated his argument for ID thus:

Patterns that are unlikely to be found by a random search may be found by an informed search, but in that case the information represented by the low probability of finding such a pattern by random search is transferred to the low probability of finding the informed search strategy.  Therefore, while a given search strategy may well be able to find a pattern unlikely to be found by a random search, the kind of search strategy that can find it is itself commensurately improbable, i.e. unlikely to be found by random search.

Therefore, even if we can explain organisms by the existence of a fitness landscape with many smooth ramps to high fitness heights, we are left with the even greater problem of explaining how such a fitness landscape came into being from random processes, and must infer Design.
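If I have the bookkeeping right, the quantity at issue in the Dembski–Marks papers is “active information”, defined as log2(q/p). A toy example with invented numbers (the formula is theirs; the search space and success rates below are made up for illustration):

```python
import math

# Toy "search for a search" bookkeeping, using the Dembski-Marks definition
# of active information, I+ = log2(q / p), where p is the probability that
# blind (uniform) sampling hits the target and q is the probability that a
# given alternative search hits it. All numbers below are invented.

N = 2**40           # size of the search space (assumption)
target_size = 1     # a single-element target
p = target_size / N # blind-search success probability per query

q = 0.5             # an "informed" search that succeeds half the time (assumption)
active_info = math.log2(q / p)
print(active_info)  # 39.0 bits: the information the informed search "brings in"
```

On the paraphrase above, the argument is then that finding a search this good within the space of possible searches is itself a roughly 2^−39 event, so the improbability is relocated rather than removed.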

I’d be grateful if a Dembski advocate could check that I have this right, remotely if you like, but better still, come here and correct me in person!

But if I’m right, and Dembski has changed his argument from saying that organisms must be designed because they cannot be found by blind search to saying that they can be found by evolution, but evolution itself cannot be found by blind search, then I ask those who are currently persuaded by this argument to consider the critique below.

Continue reading

Wolfram’s “A New Kind of Science”

I’ve just started reading it, and was going to post a thread, when I noticed that Phinehas also brought it up at UD, so as a kind of encouragement for him to stick around here for a bit, I thought I’d start one now :)

I’ve only read the first chapter so far (I bought the hardback, but you can read it online here), and I’m finding it fascinating.  I’m not sure how “new” it is, but it certainly extends what I thought I knew about fractals and non-linear systems and cellular automata to uncharted regions.  I was particularly interested to find that some aperiodic patterns are reversible, and some not – in other words, for some patterns, a unique generating rule can be inferred, but for others not.  At least I think that’s the implication.
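A related fact is easy to demonstrate with a toy elementary cellular automaton (my own sketch, not Wolfram’s code): rule 90, where each cell becomes the XOR of its two neighbours, is not invertible, because a configuration and its bitwise complement step to the same successor, so the past cannot be uniquely reconstructed.

```python
# A minimal elementary cellular automaton on a ring. Under rule 90 a
# configuration and its bitwise complement produce the same next state,
# so the update cannot be run backwards uniquely.

def step(cells, rule):
    """One synchronous update of an elementary CA with wraparound."""
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]  # rule number -> lookup table
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

state = [0, 1, 1, 0, 1, 0, 0, 1]
mirror = [1 - c for c in state]  # bitwise complement: a different configuration

print(step(state, 90) == step(mirror, 90))  # True: two pasts, one future
```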

Has anyone else read it?