The Myth of Biosemiotics

I recently came across this book:

Biosemiotics: Information, Codes and Signs in Living Systems

This new book presents contexts and associations of the semiotic view in biology, by making a short review of the history of the trends and ideas of biosemiotics, or semiotic biology, in parallel with theoretical biology. Biosemiotics can be defined as the science of signs in living systems. A principal and distinctive characteristic of semiotic biology lies in the understanding that in living systems, entities do not interact like mechanical bodies, but rather as messages, as pieces of text. This means that the whole determinism is of another type.

Pardon my skepticism, but

  1. There is no information in living systems.
  2. There are no codes in living systems.
  3. There are no signs in living systems.

Biosemiotics is the study of things that just don’t exist. Theology for biologists.

Continue reading

What A Code Is – Code Denialism Part 3

My intent here in these recent posts on the genetic code has been to expose the absurdity of Code Denialism. The intent has not been to make the case for intelligent design based upon the existence of biological codes. I know some people find that disconcerting but that would be putting the cart before the horse. No one is going to accept a conclusion when they deny the premise. And please forgive me if I choose not to play the game of “let’s pretend it really is a code” while you continue to deny that it actually is a code.

First I’d like to thank you. It’s actually been pretty neat looking up and reading many of these resources in my attempt to see whether I could defend the thesis that the genetic code is a real code. I admit it’s also been much too much fun digging up all the reasons why code denialism is just plain silly (and irrational).

That the genetic code is a code is common usage and if “meaning is use” that alone ought to settle the matter. But this is “The Skeptical Zone” and Code Denialism is strong here. But I’m not just claiming that it’s a code because we say it’s a code in common usage. I’m claiming it is a code because it meets the definition of a code. The reason we say it is a code is because it is in fact a code.

My first two posts have been on some of the major players and how they understood they were dealing with a code and how that guided their research. I’ll have more to say on that in the future as it’s a fascinating story. But for now …

What A Code Is

Continue reading

Repetitive DNA and ENCODE

[Here is something I just sent Casey Luskin and friends regarding the ENCODE 2015 conference. Some editorial changes to protect the guilty…]

One thing the ENCODE consortium drove home is that DNA acts like Dynamic Random Access Memory (DRAM) for methylation marks. That is to say, even though the DNA sequence isn't changed, its methylation state can be modified, just as computer RAM isn't physically removed when its electronic state is rewritten. The repetitive DNA acts like physical hardware: even if the repetitive sequences themselves don't change, they can still serve as memory storage devices for regulatory information. ENCODE collects huge amounts of data on methylation marks at various stages of the cell cycle. This is like trying to take a few snapshots of computer memory to figure out how Windows 8 works. The complexity of the task is beyond description.
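To make the analogy concrete, here is a minimal sketch (my illustration, not ENCODE's model): the sequence is the fixed "hardware", and the methylation marks are mutable state written over it.

```python
class MethylatedSequence:
    def __init__(self, sequence):
        self.sequence = sequence                # fixed "hardware"
        self.methyl = [False] * len(sequence)   # mutable regulatory state

    def write(self, pos, marked=True):
        # Change regulatory state without touching the sequence itself.
        self.methyl[pos] = marked

    def snapshot(self):
        # One "snapshot" of the current methylation state.
        return "".join("M" if m else "." for m in self.methyl)

dna = MethylatedSequence("ACGTACGTACGT")
dna.write(2)
dna.write(7)
print(dna.sequence)    # ACGTACGTACGT  (unchanged)
print(dna.snapshot())  # ..M....M....
```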
Continue reading

The Sugar Code and other -omics

[Thank you to Elizabeth Liddle, the admins and the mods for hosting this discussion.]

I’ve long suspected that the 3.1 to 3.5 gigabases of human DNA (which equates to roughly 775 to 875 megabytes at two bits per base) are woefully insufficient to create something as complex as a human being. The problem is that there is only limited transgenerational epigenetic inheritance, so it’s hard to assert that large amounts of information are stored outside the DNA.
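The arithmetic behind those figures, for anyone who wants to check it:

```python
# Each base is one of four letters, i.e. 2 bits; 8 bits to the byte.
for gigabases in (3.1, 3.5):
    megabytes = gigabases * 1e9 * 2 / 8 / 1e6
    print(f"{gigabases} Gb ≈ {megabytes:.0f} MB")
# 3.1 Gb ≈ 775 MB; 3.5 Gb ≈ 875 MB
```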

Further, the question arises: how is this non-DNA information stored? It’s not easy to localize. In fact, if there is a large amount of information outside the DNA, it is in a form that is NOT localizable but distributed, and so deeply redundant that it provides the ability to self-heal and self-correct for injury and error. If so, damage to this information-bearing system is, in a sense, not very heritable, since bad variation in the non-DNA information source either gets repaired and reset or the organism simply dies. In that sense the organism is fundamentally immutable as a form, suggestive of a created kind rather than something that can evolve in the macro-evolutionary sense.
Continue reading

CSI-free Explanatory Filter…

…Gap Highlighter, Design Conjecture

Though I’ve continued to endear myself to the YEC community, I’ve certainly made myself odious in certain ID circles. I’ve often been the lone ID proponent to vociferously protest cumbersome, ill-conceived, ill-advised, confusing and downright wrong claims by some ID proponents. Some of the stuff said by ID proponents is of such poor quality that it is practically a gift to Charles Darwin. I teach ID to university science students in extracurricular classes, and some of the stuff floating around in ID internet circles I’d never touch, because it would cause my students to impale themselves intellectually.
Continue reading

Good UD post

Good guest post at Uncommon Descent by Aurelio Smith,

SIGNAL TO NOISE: A CRITICAL ANALYSIS OF ACTIVE INFORMATION

For those who prefer to comment here, this is your thread!

For me, the argument by Ewert, Dembski and Marks reminds me of poor old Zeno and his paradox.  They’ve over-thought the problem and come to a conclusion that appears mathematically valid but actually makes no sense.  Figuring out just the manner in which it makes no sense isn’t that easy, though I don’t think we need to invent the equivalent of differential calculus to solve it in this case.  I think it’s a simple case of picking the wrong model.  Evolution is not a search for anything, and information is not the same as [im]probability, whether you take log2 of it or not.  Which means that you don’t need to add Active Information to an Evolutionary Search in order to find a Target, because there’s no Target and no search; the Active Information is simply the increased probability of solving a problem if you have some sort of feedback for each attempt, and partial solutions are moderately similar to better ones.
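To make that last point concrete, here's a toy sketch (mine, not Ewert, Dembski and Marks's): the same pattern hunted blindly and then with per-attempt feedback, where partial solutions resemble better ones. The target string and sizes are arbitrary, chosen only for the demo.

```python
import random
import string

TARGET = "METHINKS"   # arbitrary pattern for the demo

def blind(max_tries=100_000):
    # Fresh uniform guess every time: success probability 1/26^8 per try.
    for t in range(1, max_tries + 1):
        guess = "".join(random.choices(string.ascii_uppercase, k=len(TARGET)))
        if guess == TARGET:
            return t
    return None

def with_feedback(max_tries=100_000):
    # Keep letters that already match; redraw one mismatched letter at a
    # time. This is the "feedback on each attempt" in action.
    guess = random.choices(string.ascii_uppercase, k=len(TARGET))
    for t in range(1, max_tries + 1):
        if "".join(guess) == TARGET:
            return t
        i = random.randrange(len(TARGET))
        if guess[i] != TARGET[i]:
            guess[i] = random.choice(string.ascii_uppercase)
    return None

print("blind:", blind())                  # almost certainly None
print("with feedback:", with_feedback())  # typically a few hundred tries
```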

Enjoy!

2LOT and ID entropy calculations (editorial corrections welcome)

Some may have wondered why I (a creationist) have taken the side of the ID-haters with regard to the 2nd law. It is because I am concerned about the ability of college science students in the disciplines of physics, chemistry and engineering to understand the 2nd law. The calculations I’ve provided are textbook calculations of the sort expected of these students.
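As an example of the kind of textbook calculation I mean (my illustration, not necessarily one from the posts): the entropy change when one mole of ice melts reversibly at 0 °C.

```python
# Reversible isothermal phase change: dS = dQ/T.
L_fusion = 6010.0   # J/mol, latent heat of fusion of water ice
T = 273.15          # K, melting point at 1 atm

delta_S = L_fusion / T
print(f"ΔS = {delta_S:.1f} J/(mol·K)")   # ≈ 22.0 J/(mol·K)
```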
Continue reading

Wagner’s Multidimensional Library of Babel (Piotr at UD)

I’ve wanted to start this discussion for several weeks, but wasn’t sure how to present Wagner’s argument. Fortunately Piotr has saved me the trouble with a post at UD.

Piotr February 24, 2015 at 1:35 pm
Gpuccio,

Do you mind if I begin with a simple illustrative example? Let’s consider all five-letter alphabetic strings (AAAAA, QWERT, HGROF, etc.). By convention, a string will be “functional” if it’s a meaningful English word (BREAD, WATER, GLASS, etc.). Functionality is therefore not a formal property of the string but something dictated by the environment. There are 26^5 = 11881376 (almost 12 million) possible five-letter strings. The number of five-letter words in English (excluding proper nouns and extremely rare, dialectal or archaic words) is about 6000, so the probability that any randomly generated string is functional is about 0.0005.

Any five-letter string S can produce 5×25 = 125 “mutants” differing from S by exactly one letter. If you represent the sequence space as a five-dimensional hypercube (26x26x26x26x26), a mutation can be defined as a translation along any of the five axes.

It would appear that the odds of finding a functional mutant for a given string should be about 125×0.0005 = 1/16 on the average. In fact, however, it depends where you start. If S is functional, the existence of at least one functional mutant is almost guaranteed (close to 90%). For most English words there is more than one functional mutant. For example, from SNARE we get {SCARE, SHARE, SPARE, STARE, SNORE, SNAKE, SNARK…}. Though some functional sequences are isolated or form small clusters in the sequence space, most of them are members of one huge, quite densely interconnected network. You can get from one to another in just a few steps (often in more than one way), which is of course what Lewis Carroll’s “word ladder” puzzle is about:

FLOUR > FLOOR > FLOOD > BLOOD > BROOD > BROAD > BREAD

You can ponder the example for a moment; I’ll return to it later.

http://www.uncommondescent.com/darwinism/the-elephant-in-the-room/#comment-550345

The whole thread is worth a look.

I might add that there is a rather crude GA at http://itatsi.com that does something not entirely unlike a word ladder.
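For anyone who wants to play with Piotr's example, here is a minimal word-ladder search, a sketch that assumes you have a plain-text list of words lying around (the path below is a common Unix location, but any list will do; with a sparse dictionary some ladders simply won't exist):

```python
from collections import deque
import string

def neighbors(word, words):
    # All one-letter "mutants" of word that are themselves functional
    # (i.e. present in the word set).
    for i in range(len(word)):
        for c in string.ascii_uppercase:
            if c != word[i]:
                mutant = word[:i] + c + word[i + 1:]
                if mutant in words:
                    yield mutant

def ladder(start, goal, words):
    # Breadth-first search for the shortest chain of functional
    # one-letter steps; returns None if the two words aren't connected.
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1], words):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Path to the word list is an assumption; substitute your own.
words = {w.strip().upper() for w in open("/usr/share/dict/words")
         if len(w.strip()) == 5 and w.strip().isalpha()}
print(ladder("FLOUR", "BREAD", words))
```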

Junk DNA

Well, I just got banned again at UD, over my response to this post of Barry’s:

In a prior post I took Dr. Liddle (sorry for the misspelled name) to task for this statement:

“Darwinian hypotheses make testable predictions and ID hypotheses (so far) don’t.”

I responded that this was not true and noted that:

For years Darwinists touted “junk DNA” as not just any evidence but powerful, practically irrefutable evidence for the Darwinian hypothesis. ID proponents disagreed and argued that the evidence would ultimately demonstrate function.

Not only did both hypotheses make testable predictions, the Darwinist prediction turned out to be false and the ID prediction turned out to be confirmed.

Continue reading

Siding with Mathgrrl on a point, and offering an alternative to CSI v2.0

[cross posted from UD Siding with Mathgrrl on a point, and offering an alternative to CSI v2.0, special thanks to Dr. Liddle for her generous invitation to cross post]

There are two versions of the metric for Bill Dembski’s CSI. One version can be traced to his book No Free Lunch published in 2002. Let us call that “CSI v1.0”.

Then in 2005 Bill published Specification: The Pattern That Signifies Intelligence, where he includes the identifier “v1.22”, but perhaps it would be better to call the concepts in that paper CSI v2.0 since, like Windows 8, it has some radical differences from its predecessor and will come up with different results. Some end users of the concept of CSI prefer CSI v1.0 over v2.0.
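For readers who want the two headline quantities side by side, here is my rough reconstruction (from memory; check the originals before leaning on the details). CSI v1.0 infers design when the specified information of a pattern T exceeds the 500-bit universal probability bound:

$$I(T) = -\log_2 P(T \mid H) > 500 \text{ bits}$$

CSI v2.0 folds in the descriptive simplicity of the pattern and a $10^{120}$ bound on bit operations in the observable universe:

$$\chi = -\log_2\!\left[10^{120} \cdot \varphi_S(T) \cdot P(T \mid H)\right] > 1$$

where $\varphi_S(T)$ counts the patterns a semiotic agent S finds at least as easy to describe as T. The different bounds and the extra $\varphi_S(T)$ term are part of why the two versions come up with different results.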
Continue reading

The Laws of Thought

aren’t.

They are perfectly valid rules of reasoning, of course.  Wikipedia cites Aristotle:

  • The law of identity: “that every thing is the same with itself and different from another”: A is A and not ~A.
  • The Law of Non-contradiction: that “one cannot say of something that it is and that it is not in the same respect and at the same time”
  • Law of Excluded Middle: “But on the other hand there cannot be an intermediate between contradictories, but of one subject we must either affirm or deny any one predicate.”

And of course they work just fine for binary, true-or-false statements, which is why Boolean logic is so powerful.
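Indeed, for two-valued statements the three laws can be checked exhaustively; a minimal sketch:

```python
# Exhaustive check of the three laws over Boolean values.
for p in (True, False):
    assert p == p                # identity: A is A
    assert not (p and (not p))   # non-contradiction: not (A and ~A)
    assert p or (not p)          # excluded middle: A or ~A
print("All three laws hold for every Boolean value.")
```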

But I suggest they are not Laws of Thought.

Continue reading

Specification for Dummies

I’m lookin’ at you, IDers 😉

Dembski’s paper, Specification: The Pattern That Signifies Intelligence, gives a clear definition of CSI.

The complexity of pattern (any pattern) is defined in terms of Shannon Complexity.  This is pretty easy to calculate, as it is merely the probability of getting this particular pattern if you were to randomly draw each piece of the pattern from a jumbled bag of pieces, where the bag contains pieces in the same proportion as your pattern, and stick them together any old where.  Let’s say all our patterns are 2×2 arrangements of black or white pixels. Clearly if the pattern consists of just four black or white pixels, two black and two white* , there are only 16 patterns we can make:

[Figure: the 16 possible 2×2 patterns]

And we can calculate this by saying: for each pixel we have 2 choices, black or white, so the total number of possible patterns is 2*2*2*2, i.e. 2^4, i.e. 16. That means that if we just made patterns at random we’d have a 1/16 chance of getting any one particular pattern, which in decimals is .0625, or 6.25%.  We could also be fancy and express that as the negative log2 of .0625, which would be 4 bits.  But it all means the same thing.  The neat thing about logs is that you can add them, and get the answer you would have got if you’d multiplied the unlogged numbers.  And as the negative log2 of .5 is 1, each pixel, for which we have a 50% chance of being black or white, is worth “1 bit”, and four pixels will be worth 4 bits.
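If you'd rather let the computer do the counting, here's the same calculation as a sketch:

```python
from itertools import product
from math import log2

patterns = list(product("BW", repeat=4))  # every 2x2 black/white pattern
p = 1 / len(patterns)                     # chance of one pattern drawn at random
print(len(patterns), p, -log2(p))         # 16 patterns, p = 0.0625, 4.0 bits
```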

Continue reading

Searching for a search

Dembski seems to be back online again, with a couple of articles at ENV: one in response to a challenge by Joe Felsenstein, for which we have a separate thread, and one billed as a “For Dummies” summary of his latest thinking, which I attempted to précis here. He is anxious to ensure that any critic of his theory is up to date with it, suggesting that he considers his newest thinking not to be rebutted by counter-arguments to his older work. He cites two papers (here and here) he has had published, co-authored with Robert Marks, and summarises the new approach thus:

So, what is the difference between the earlier work on conservation of information and the later? The earlier work on conservation of information focused on particular events that matched particular patterns (specifications) and that could be assigned probabilities below certain cutoffs. Conservation of information in this sense was logically equivalent to the design detection apparatus that I had first laid out in my book The Design Inference (Cambridge, 1998).

In the newer approach to conservation of information, the focus is not on drawing design inferences but on understanding search in general and how information facilitates successful search. The focus is therefore not so much on individual probabilities as on probability distributions and how they change as searches incorporate information. My universal probability bound of 1 in 10^150 (a perennial sticking point for Shallit and Felsenstein) therefore becomes irrelevant in the new form of conservation of information whereas in the earlier it was essential because there a certain probability threshold had to be attained before conservation of information could be said to apply. The new form is more powerful and conceptually elegant. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks. It therefore suggests an ultimate source of information, which it can reasonably be argued is a designer. I explain all this in a nontechnical way in an article I posted at ENV a few months back titled “Conservation of Information Made Simple” (go here).


As far as I can see from his For Dummies version, as well as from his two published articles, he has reformulated his argument for ID thus:

Patterns that are unlikely to be found by a random search may be found by an informed search, but in that case, the information represented by the low probability of finding such a pattern by random search is now transferred to the low probability of finding the informed search strategy.  Therefore, while a given search strategy may well be able to find a pattern unlikely to be found by a random search, the kind of search strategy that can find it is itself commensurably improbable, i.e. unlikely to be found by random search.

Therefore, even if we can explain organisms by the existence of a fitness landscape with many smooth ramps to high fitness heights, we are left with the even greater problem of explaining how such a fitness landscape came into being from random processes, and must infer Design.
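A toy numeric version of that claim, under the crucial (and contested) assumption that search strategies are drawn uniformly: a strategy picked at random finds the target no more often than blind guessing does, so the improbability simply moves up a level.

```python
import random

N, k, trials = 1000, 10, 20_000   # space size, queries allowed, repetitions
target = 0                        # which point counts as the target is arbitrary

def blind_search():
    # k distinct guesses drawn blindly from the space
    return target in random.sample(range(N), k)

def random_strategy():
    # a "strategy" here is an ordering of the whole space; we draw one
    # at random and run its first k queries
    order = list(range(N))
    random.shuffle(order)
    return target in order[:k]

for search in (blind_search, random_strategy):
    rate = sum(search() for _ in range(trials)) / trials
    print(search.__name__, round(rate, 4), "— expected", k / N)
```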

I’d be grateful if a Dembski advocate could check that I have this right, remotely if you like, but better still, come here and correct me in person!

But if I’m right, and Dembski has changed his argument from saying that organisms must be designed because they cannot be found by blind search to saying that they can be found by evolution, but evolution itself cannot be found by blind search, then I ask those who are currently persuaded by this argument to consider the critique below.

Continue reading

The Chewbacca Defense?

Eric Anderson, at UD, writes, to great acclaim:

Well said. You have put your finger on the key issue.

And the evidence clearly shows that there are not self-organizing processes in nature that can account for life.

This is particularly evident when we look at an information-rich medium like DNA. As to self-organization of something like DNA, it is critical to keep in mind that the ability of a medium to store information is inversely proportional to the self-ordering tendency of the medium. By definition, therefore, you simply cannot have a self-ordering molecule like DNA that also stores large amounts of information.

The only game left, as you say, is design.

Unless, of course, we want to appeal to blind chance . . .

Can anyone make sense of this? EA describes DNA as an “information-rich medium”. Then as a “self-ordering molecule”. Is he saying that DNA is self-ordering and therefore can’t store information? Or that it does store information, and therefore can’t be self-ordering? Or that because it is both, it must be designed? And in any case, is the premise even true? And what “definition” is he talking about? Who says that “the ability of a medium to store information is inversely proportional to the self-ordering tendency of the medium”? By what definition of “information” and “self-ordering” might this be true? And is it supposed to be an empirical observation or a mathematical proof?

Granville Sewell vs Bob Lloyd

Bob Lloyd, professor emeritus of chemistry at Trinity College Dublin, wrote an opinion article in Mathematical Intelligencer (MI) commenting on Sewell’s not-quite-published AML article. This was mentioned in a previous thread, where Bob briefly commented. Granville was invited to participate but never showed up.

In response to Lloyd, Sewell submitted a letter to the editor. On advice of a referee, his letter was rejected. (Rightly so, in my view. More on that later.) Sewell has now written a post on Discovery Institute’s blog describing his latest misfortune. The post contains Sewell’s unpublished letter and some of the referee’s comments. I invite you to continue the technical discussion of Sewell’s points started earlier.

Continue reading

More on Marks, Dembski, and No Free Lunch, by Tom English

Tom English has a great post at his blog, Bounded Science, which I have his permission to cross post here:

Bob Marks grossly misunderstands “no free lunch”

And so does Bill Dembski. But it is Marks who, in a “Darwin or Design?” interview, reveals plainly the fallacy at the core of his and Dembski’s notion of “active information.” (He gets going at 7:50. To select a time, it’s best to put the player in full-screen mode. I’ve corrected slips of the tongue in my transcript.)

[The “no free lunch” theorem of Wolpert and Macready] said that with a lack of any knowledge about anything, that one search was as good as any other search. [14:15] And what Wolpert and Macready said was, my goodness, none of these [“search”] algorithms work as well as [better than] any other one, on the average, if you have no idea what you’re doing. And so the question is… and what we’ve done here is, if indeed that is true, and an algorithm works, then that means information has been added to the search. And what we’ve been able to do is take this baseline, that all searches are the same, and we’ve been able to, in cases where searches work, measure the information that is placed into the algorithm in bits. And we have looked at some of the evolutionary algorithms, and we found out that, strikingly, they are not responsible for any creation of information. [14:40]

And according to “no free lunch” theorems, astonishingly, any search, without information about the problem that you’re looking for, will operate at the same level as blind search. And that’s… It’s a mind-boggling result. [28:10]

Bob has read into the “no free lunch” (NFL) theorem what he believed in the first place, namely that if something works, it must have been designed to do so. Although he gets off to a good start by referring to the subjective state of the practitioner (“with a lack of knowledge,” “if you have no idea what you’re doing”), he errs catastrophically by making a claim about the objective state of affairs (“one search is as good as any other search,” “all searches are the same”).
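The averaging claim itself is easy to verify exhaustively on a tiny space. Here is a minimal finite check (my sketch, not Marks's or English's): two fixed query orders, averaged over all eight binary objective functions on three points, need exactly the same number of queries to find a maximum.

```python
from itertools import product

# Search space of three points; objective values are 0 or 1, so there
# are 2^3 = 8 possible objective functions in total.
def queries_to_max(order, f):
    # How many queries a fixed query order needs before it first hits
    # the maximum value of f.
    best = max(f)
    for n, x in enumerate(order, 1):
        if f[x] == best:
            return n

for order in [(0, 1, 2), (2, 0, 1)]:
    total = sum(queries_to_max(order, f) for f in product((0, 1), repeat=3))
    print(order, total / 8)   # both orders average exactly 1.5 queries
```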

Continue reading

Todd Wood on the Tennessee bill (and falsification)

He makes some very interesting comments, and also this point, which I think is worth making:

I know for a fact that evolution cannot be falsified; it can only be replaced.

“Falsification” has become a kind of shibboleth that in my view has outlived its usefulness as a criterion for what is, or is not, science.  We don’t actually proceed by what people usually mean by “falsification” in science, IMO; we proceed by replacing existing models with better ones (in that sense, of course, all models with a better fit to data than a previous model “falsify” the previous model – but that’s not what people usually mean).  So I think Todd, as so often, is right here.  Of course I think he is radically wrong about the age of the earth, but that’s another story!

Continue reading