Gil’s post With much fear and trepidation, I enter the SZone got somewhat, though interestingly, derailed into a discussion of David Abel’s paper The Capabilities of Chaos and Complexity, which William Murray, quite fairly, challenged those of us who expressed skepticism to refute.

Mike Elzinga first brought up the paper here, claiming:

ID/creationists have attempted to turn everything on its head, mischaracterize what physicists and chemists – and biologists as well – know, and then proclaim that it is all “spontaneous molecular chaos” down there, to use David L. Abel’s term.

Hence, “chance and necessity,” another mischaracterization in itself, cannot do the job; therefore “intelligence” and “information.”

And later helpfully posted here a primer on the first equation (Shannon’s Entropy equation), and right now I’m chugging through the paper trying to extract its meaning. I thought I’d open this thread so that I can update as I go, and perhaps ask the mathematicians (and others) to correct my misunderstandings. So this thread is a kind of virtual journal club on that paper.

I’ll post my initial response in the thread.

damitall: Even you can’t refute gobbledygook, Joe; but if you want to believe it all, that’s no problem.

As to “our position” not being able to account for transcription and translation, that may be true, but as Febble has pointed out and linked, there is a lot of research in these areas.

I wonder if you would be good enough to link to any reasonably recent research on these subjects which you would NOT ignore.

Or do you prefer to think that somewhere, at some time, there was or is some entity possessing all the necessary information to set these things up from scratch?

Mike Elzinga: Notice that Equation 2 on Page 253 is exactly of the mathematical form of the Shannon entropy. It is an average of the logarithms of probabilities. Again, order makes no difference to an average.

This formula simply makes use of a continuous distribution of probability rather than a discrete distribution.
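For concreteness, the discrete form of that calculation is only a few lines of Python (a generic sketch, nothing Abel-specific):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: an average of -log2(p), weighted by p.
    Because it is an average, the order of the probabilities is irrelevant."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin has the maximum entropy for two outcomes: 1 bit.
print(shannon_entropy([0.5, 0.5]))
# Reordering the same probabilities leaves the value unchanged.
a = shannon_entropy([0.1, 0.4, 0.5])
b = shannon_entropy([0.5, 0.1, 0.4])
print(math.isclose(a, b))
```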

It should be pointed out here that this is simply one in an arsenal of mathematical methods to find a number that is sensitive to some particular feature of the data one studies.

For example, autocorrelation is simply an integral of a continuous distribution matched against a shifted version of itself. One can do this same game of matching a discrete distribution against itself by sliding the distribution along a copy of itself and making a measurement of the number of entities that match up.

In either case, whether the distribution is continuous or discrete, if anything changes in the distribution as compared with a previous copy of the distribution, one can pick up on that by the change in the number that falls out in the calculation.
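The slide-and-match procedure described above can be sketched in a few lines (illustrative only; a real autocorrelation would normalize and subtract means):

```python
def match_count(seq, shift):
    """Slide a copy of seq along itself by `shift` positions and count
    the places where the two copies agree -- a discrete analogue of
    autocorrelation."""
    return sum(1 for a, b in zip(seq, seq[shift:]) if a == b)

s = "ABABABAB"
print(match_count(s, 2))  # matches at every one of the 6 overlapping positions
print(match_count(s, 1))  # a period-2 sequence mismatches everywhere at shift 1
```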

I should point out that attaching names such as “information” or “complexity” or whatever else one wants to attach to the calculation has absolutely nothing to do with what is calculated. You call it what you want to call it.

But one of the naming rules in science in the past has been to attach a name that doesn’t carry any intellectual baggage along with it. Calling something “information” or “order” or “complexity” will be extremely misleading if you don’t know what these words mean and what they refer to.

Mike Elzinga: It’s not clear why you would make such a claim if you haven’t read or cannot read the paper.

What, specifically, do you find wrong with the critiques of Abel’s calculations so far?

Joe G: Neil R:

By what definition of the word?

Joe G: Elizabeth,

Just because there is research into transcription and translation does not mean your position can account for it.

damitall: Jumping in – sorry, Febble

Elizabeth has already agreed that “her” position cannot account for them, but points out that there is research into their evolution.

ID is doing no research into it. I think there is probably no way they can, because the mechanisms involved in ID are not amenable to research. Too much of a “pathetic level of detail”, or something – I believe that’s how a leading IDist thinks of such things. ID theory is not “mechanistic”.

Just pooftastic

Elizabeth (post author): I know it doesn’t. I didn’t say it did.

Neil Rickert: I am talking of the ordinary common sense meaning of “information” which includes speech, writing, and what the brain does to provide us with perception.

This is heading off on a tangent from the topic. I suggest we stick with the topic.

Mike Elzinga: In an earlier comment I had mentioned that the same misconceptions and misrepresentations introduced by Henry Morris back in the early 1970s have propagated throughout ID/creationist writings for something like 40+ years now.

What we see here in this Abel paper we also find in Dembski and Marks when they refer to “endogenous information,” “exogenous information,” and “active information.” The mathematical formulas they use do not justify those names; and the mere labeling with such names evokes misconceptions in lay readers that something significant has been calculated.

In Granville Sewell’s second law paper, Sewell makes assertions that have nothing to do with entropy. Again he introduces an equation that might apply to an extremely narrow case and then proceeds to plug things into it that he has no business plugging into it.

In Sewell’s case, what he does is analogous to taking the formula for calculating the hypotenuse of a right triangle from the lengths of its sides and then plugging in weight and shoe size in order to calculate a person’s IQ.

Irreducible complexity, complex specified information, genetic entropy and dozens of other terms, such as those we see in this paper by Abel, are all based on the same basic misconceptions and misrepresentations of what actually takes place in the world of biology, chemistry, and physics. The names that ID/creationist authors choose are loaded terms that evoke beliefs about their work that are not justified by the calculations they do.

There are already sound definitions for terms like order and randomness. Chaotic, random, pseudo-random, indeterminate, noise, signal, entropy, and the like are well-defined in science and in many fields of engineering.

When Rudolf Clausius coined the term entropy, he was giving a label to a mathematical expression that was constantly showing up in calculations. This is a normal procedure in science and mathematics; to name recurring mathematical expressions for easy reference. Clausius was careful to pick a name that carried no emotional baggage with it. If he had not named it, somebody else may have given it another name, or quite possibly have named it after some physicist.

Abel muddles these concepts and applies names to things without any justification except to make assertions that appear to be more profound than they are.

Flint: In this respect, I’m reminded of the old wish on the part of programmers that the compiler would discard the code and compile the comments. After all, the comments describe what the code is supposed to do.

In this case, the labels and loaded terms for the calculations are the message, and the calculations themselves are the comments, intended to be discarded by the target reader who remembers and absorbs the words.

What you describe sounds no different in principle from the common practice of providing reference footnotes for claims which if dug into, turn out either to have nothing to do with the referencing claims, or to be inconsistent with them.

The purpose of the calculations is to give a good opaque veneer of quantitative “meaning” and precision to something that isn’t being calculated, and generally can’t be calculated anyway.

Mike Elzinga: 🙂 Man, I had forgotten that one.

And that reminds me of a story a computer administrator friend once told me.

This was way back in the days of the IBM 1620 and card readers, sorters, and collators.

He had a student show up at the service window (remember those?) of the computer center with an armload of lab notebooks and papers. He asked my friend if he could run all that through the computer for him.

My friend said that he had to resist the very strong urge to put it all in the shredder and then look nervously at the computer and tell the student that the data broke the computer and that he was going to have to call the IBM Customer Engineer.

Flint: Yeah, my computing experience began in the early 1960s.

junkdnaforlife: Mike Elzinga, I read the section you suggested. He seems to be talking about a source population of DNA bases in Gaussian terms, where adenine clusters around the mean and cytosine sits at the tail, such that the occurrence of cytosine approaching the mean in any string, due to its low p, increases the “surprise” factor of that particular string, therefore increasing the uncertainty, or its information entropy. So Abel seems to suggest that we should expect the entropy of the set of possible outcomes to be 1.5 bits, but instead we observe something closer to 2 bits.

The inference being that the uncertainty in an aa string exceeds the expectation (based on Abel’s population assumptions) for a string being acted on by physics and chemistry, such that the information found in aa strings exceeds expectation in Shannon terms.

I’ll finish reading this today at some point. I don’t really feel like it though.

Elizabeth (post author): Could someone clarify what section is being referred to here? Page number?

Thanks!

Tom English: Lizzie:

I hate to see you struggle with this article. I gave it a close reading shortly after it came out, and concluded that the verbiage related to information and computation is word salad. I looked at some of Abel’s other solo efforts, and resolved never again to waste time on him. The papers he’s coauthored with Trevors are coherent.

As pointed out above, Shannon entropy is measured on the source, not the string of symbols it emits, and Kolmogorov complexity is measured on the string. The way you have used the string is essentially to estimate the entropy of the source.

Contrary to what someone indicated, Kolmogorov complexity is related to randomness. Suppose that the alphabet for strings and programs is {0, 1}. If the Kolmogorov complexity of a string is greater than or equal to the length of the string (i.e., no program that outputs the string is shorter than the string itself), then the sequence of bits in the string passes all computable tests for randomness. Thus incompressible strings are referred to as Kolmogorov random. Almost all strings are Kolmogorov random, or very nearly so. (Randomness is a matter of degree.)

A string may be too “random” to be random. If a string over alphabet {0, 1} has the same number of 0’s as 1’s, then that property can be used to compress it. Furthermore, a string is incompressible only if it has a logarithmically long substring that is compressible. (This leads to speculation that we live in an orderly region of a random universe.)
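Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude, computable upper-bound stand-in that illustrates the order/randomness contrast (a sketch; zlib is only a proxy, not the real measure):

```python
import random
import zlib

def compressed_len(s):
    """Length in bytes of the zlib-compressed string: a rough upper-bound
    proxy for Kolmogorov complexity, which is itself uncomputable."""
    return len(zlib.compress(s.encode()))

ordered = "01" * 5000                       # highly ordered: very compressible
random.seed(0)
scrambled = "".join(random.choice("01") for _ in range(10000))
# The ordered string compresses far better than the scrambled one.
print(compressed_len(ordered) < compressed_len(scrambled))
```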

Tom English: Yes, the name game is everything. Dembski and Marks go so far as to apply “-log” to subjective probability and treat the result as physical information created by intelligence. In the Weasel problem, for instance, the target string is not random. The uniform probability D&M introduce is clearly epistemic.

In search, if performance is 1 for success in obtaining a satisfactory solution to the problem, and 0 for failure, then the probability that an algorithm obtains a satisfactory solution is its expected performance. Endogenous, exogenous, and active “information” are just logarithmic transformations of expected performance.
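On that description, each of Dembski and Marks’s quantities is nothing more than a logarithmic transformation of a success probability. A sketch (the probabilities below are made-up numbers for illustration):

```python
import math

def endogenous_info(p):
    """-log2 of the success probability of blind (baseline) search."""
    return -math.log2(p)

def exogenous_info(q):
    """-log2 of the success probability of the assisted search."""
    return -math.log2(q)

def active_info(p, q):
    """The difference of the two: log2(q / p)."""
    return math.log2(q / p)

p, q = 2**-20, 2**-5          # hypothetical expected performances
print(endogenous_info(p))     # 20.0 bits
print(active_info(p, q))      # 15.0 bits: just endogenous minus exogenous
```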

Mike Elzinga: I think he is referring to Abel’s calculation on Page 256.

The interpretation that one gives to a calculation depends on context. C = A x B can mean many different things depending on what A, B, and C stand for.

In the case of Shannon entropy, when all the probabilities are equal, the Shannon entropy is maximized. Now it is true that this means that if you dip into the set of entities for which this is calculated, you are equally likely to grab any one of them. If the Shannon entropy is low, then sampling successes will be skewed toward the entities with the higher probabilities.

But one doesn’t need Shannon entropy to tell you that; one could simply use the product of the probabilities, as I pointed out earlier. That wouldn’t generate any voodoo misconceptions about what is happening.
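Both observations are easy to check numerically (a sketch):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]

# Equal probabilities maximize the Shannon entropy (2 bits for 4 outcomes)...
print(shannon_entropy(uniform))
print(shannon_entropy(skewed) < shannon_entropy(uniform))
# ...but a plain product of the probabilities detects the same skew,
# with no "information" connotations attached.
print(math.prod(uniform) > math.prod(skewed))
```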

In the ID/creationists’ minds, and in the writings of the ID/creationist authors, generating these misconceptions by labeling their calculations with loaded words appears to be a big part of their game. Some of the authors appear to actually believe they are delving into deep understanding; but they are simply fooling themselves with their own words.

Most of that word-gaming comes from their sectarian history of exegesis, hermeneutics, etymology, and from their retranslations of old writings to extract the “proper” meaning.

If you look at all the discussion with ID/creationist followers of this stuff, you will note that it always gets bogged down into a mud-wrestling contest over the meanings of words. Abel captures that obsession in his writings.

Mike Elzinga: In fairness to Elizabeth, I was the one who, in another context, quoted a comment from Abel which then elicited a challenge to prove this particular paper wrong. That somewhat derailed that thread, but Elizabeth thought a virtual journal club might be a good idea.

As nauseating as these kinds of papers are – and I can assure you that I have to fight the gag reflex when I read them – I think those of us in the science community have some responsibility for clearing this junk out of public discourse.

This farce has been going on for nearly 50 years now, and somebody with the knowledge and skill needs to inform the public about how these pseudo-scientists operate.

I personally don’t enjoy it, but I had the misfortune to be in the wrong place at the wrong time back in the 1970s and had to study this stuff and inform students and the public about what was wrong with it.

However, I have been able to mitigate the nausea by using these kinds of writings to learn how misconceptions and mischaracterizations are generated and propagated. That actually helps in developing instructional materials; and it gives some insight into the psychological make-up of some of these con artists that do this in other areas as well.

The challenger who wanted me and others here to prove this paper wrong made accusations of handwaving generalities on our part.

Yet this video by Thomas Kindell is a classic example of what we are up against.

One can jump over to UD to see all sorts of demonizing and mischaracterizations of science and scientists.

As I mentioned earlier on that other thread, in the 40+ years I have been watching this phenomenon, I have yet to encounter a follower of the writings of ID/creationist authors who can actually read these works and defend and justify them. Yet they certainly have strong emotions about them and will engage in endless mud-wrestling about words in these papers.

Elizabeth (post author): Yes, I do understand that Shannon entropy is normally evaluated on the source, not the string.

I was (ill-advisedly as it turns out) putting myself in the shoes of the IDists whose project is to take a string of unknown origin and figure out whether it was designed. But I accept that even Abel bothered to figure out what the source must have been (his estimates of frequencies in the prebiotic soup).

However, I think my two points remain valid:

1. That Abel is claiming that Shannon “complexity” is necessarily at the opposite pole to an “ordered” string (using some kind of Kolmogorov measure), and that this is wrong.

2. That Abel’s “FSC” measure is simply a measure of mutual information between proteins that have a similar function, and at no point does he make a coherent argument that “FSC” cannot be produced by non-intelligent agents, or even that the measure is relevant to the question. He merely asserts this.

Mike Elzinga: I would not even use the term “mutual information” in comparing proteins or any other sequences of entities.

There are better terms that don’t load a false impression onto the comparison. Autocorrelation, or percentage match (or difference), or p-value, whatever; but not some term that contains an implicit assertion that something is being measured that isn’t.

For example, one can compare sequences with references and count the differences in some way. It could count mismatches, or the number of permutations needed to get back to the original, or any other feature for which one can attach a number. That number could be expressed as a percentage change or whatever one can use to plug into a sensitive calculation.

Find some neutral term, but don’t call it information. “Information” is an extremely ambiguous term, with implications of intention and communication and intelligence. That seems to be the reason ID/creationist theorists like to use it; it matches their preconceptions. Adding the math makes it look scientific.

Mike Elzinga: I don’t know what one could mean by “information entropy” in this context.

Earlier in the thread I had demonstrated that even simple, two-constituent systems with probabilities of ½ chosen from a “primordial soup” will have vastly different “capabilities” (from Abel’s assertions on pages 255 and 256) depending on what the constituents are. Here are some of those examples. It is easy to make up more.

(electron, proton) ⇒ hydrogen atoms.

(water, sodium) ⇒ sodium hydroxide and hydrogen and lots of photons.

(black marble, white marble) ⇒ a grey mass or line when viewed from a distance.

(fox, rabbit) ⇒ an unstable ecosystem.

Simply change the probabilities to 1/3 and 2/3 and we can get all sorts of other “capabilities” (for exactly the same “Shannon uncertainty”) depending on the constituents.
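The arithmetic behind those examples: the “Shannon uncertainty” depends only on the probabilities, never on what the constituents are or how they interact (a sketch):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Every 50/50 two-constituent mixture -- (electron, proton), (fox, rabbit),
# (black marble, white marble) -- has exactly the same value: 1 bit.
print(shannon_entropy([1/2, 1/2]))
# Shift the probabilities to 1/3 and 2/3 and the number changes (~0.918 bits),
# again regardless of what the constituents happen to be.
print(round(shannon_entropy([1/3, 2/3]), 3))
```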

Abel’s assertions and “analyses” are just irrelevant silliness; atoms and molecules and their more complex compounds interact strongly or weakly depending on what they are and what kinetic energies they have and what kind of heat bath and environment they are immersed in.

Abel’s “analysis” doesn’t apply to anything in particular because it doesn’t have anything to do with what actually happens with things in the real world. As I said, the entire argument in this paper is a non-sequitur.

junkdnaforlife: Abel’s argument is not ultra-tight, but it’s not as if (if one were to base one’s opinion strictly on what the posters here have offered) he clenched a pencil between his buttock cheeks, scribbled it across a piece of paper, signed it, and shipped it off for peer review.

(He should have leaned on the cytosine issue, with at least a paragraph on why the chemistry put this DNA base at the tail in his population assumptions.)

Your examples indicate some surprise, but I think lower value.

{electron | proton} -> hydrogen. There is a very small surprise factor here, with H being 1 and 1. Rather, a surprise would be a frequency distribution in which {U, Th} clustered around the mean (most common elements) and {H, He} were clumped in the tail (least common). Instead, save for a couple of oddballs, the elements appear to follow a Gaussian distribution in which the lightest elements cluster around the mean and the heaviest occupy the tail.

The periodic table is an example of low Shannon entropy.

{white marble | black marble } = gray. If we dial back and see gray, there would be little surprise. Shannon entropy would be low in this case. A “surprise” would be a population of white and black marbles with a threshold closer to 0 or 255.

Gray would be an example of low Shannon entropy. I think the rest of your examples could fit into one of these categories.

But say for instance this example:

{a,b,c,…,z} = Great Expectations

In this case, despite each character being equiprobable, we find that the characters {e,t,a,o,i,n} cluster around the mean, and {z,q,x,j,k,v} occupy the tail. This book is an example of high Shannon entropy.

In Abel’s molecular population of DNA bases, pg. 256, the roles are reversed, but the outcome is the same. Instead of flat -> Gaussian, as in the Great Expectations example, his DNA base population frequency goes from Gaussian -> flat. Both populations are outputting a similar high surprise -> increased Shannon entropy. His argument appears to be that in these examples the outcomes are different from what we would expect to see (as in your examples) if only chemistry and physics were acting on the population.
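As a numerical check on the Great Expectations example: per-letter entropy can be estimated straight from observed counts, and skewed letter frequencies push it below the uniform maximum of log2(26) ≈ 4.7 bits, not above it (a sketch; the sample text here is arbitrary):

```python
import math
from collections import Counter

def empirical_entropy(text):
    """Estimate Shannon entropy (bits per letter) from observed letter counts."""
    letters = [c for c in text.lower() if c.isalpha()]
    n = len(letters)
    return -sum((k / n) * math.log2(k / n) for k in Counter(letters).values())

sample = "it was the best of times it was the worst of times"
# Skewed frequencies give an entropy below the uniform maximum of log2(26).
print(empirical_entropy(sample) < math.log2(26))
```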

Abel’s argument, I believe, is that on observing a string with high Shannon entropy that is also simply describable (in K-complexity terms, and I assume executes some function, but I haven’t got that far yet), one should infer some type of intent.

I’m done with this guy’s paper. You can have the final curb stomp.

Mike Elzinga: junkdnaforlife, you are missing a crucial point. Letters and numbers do not interact strongly with each other. Atoms and molecules do.

I deliberately picked the simplest examples I could easily put in a comment. And even those simple examples have huge varieties in what falls out for Abel’s version of having exactly the same “Shannon uncertainty.” So Abel’s paper is meaningless.

Hydrogen has many, many more properties than do just electrons and protons by themselves. And this is just one of the simplest cases of the emergent phenomena that take place at all levels of complexity with atoms and molecules.

Condensed matter is extremely complicated; and emergent phenomena appear rapidly with even very small changes in complexity or rearrangements or “contamination.”

Take just the properties of, say, lead (Pb) alone. I would hazard an assertion that most people cannot list more than a few properties of a chunk of lead. Most people would have no idea what goes on in any collection of atoms or molecules of exactly the same kind or how any of that changes with temperature and with the introduction of small amounts of other elements.

Do you understand that the formula for Shannon entropy is an average? It is the average of the logarithms of the probabilities, no matter where those probabilities come from. Averages are insensitive to order. And averages of logarithms or the sum of the logarithms or the product of the probabilities have nothing to do with order.
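That order-insensitivity is trivial to demonstrate: shuffle a string any way you like and its frequency entropy is unchanged (a sketch):

```python
import math
import random
from collections import Counter

def freq_entropy(seq):
    """Shannon entropy of the symbol frequencies of seq. It is an average
    over the frequencies, so the order of the symbols cannot affect it."""
    n = len(seq)
    return -sum((k / n) * math.log2(k / n) for k in Counter(seq).values())

s = "AAAABBBBCCCC"
shuffled = "".join(random.sample(s, len(s)))
# Any rearrangement has exactly the same frequency entropy.
print(math.isclose(freq_entropy(s), freq_entropy(shuffled)))
```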

There is no way this “measure of probabilities” can tell you anything about the interactive properties of things to which those probabilities are tacked on.

I provided a comment above about the crucial difference between thermodynamic entropy and Shannon entropy. It is worth rereading.

I’m afraid I can find no meaning in this assertion.

What could it possibly tell us about the periodic table that chemists and physicists don’t already know light years beyond what any average of logarithms of probabilities can tell us?

olegt: Would you indulge us with an actual calculation of the periodic table’s Shannon entropy? I would pay good money to see that.

David vun Kannon: I’ve read a few of Abel’s papers, but not this one. Therefore I’ll just make a general comment.

Abel, and UD’s UprightBiped, are holding to a philosophical position that goes back to HH Pattee and the intellectual father of modern ID, Michael Polanyi. These are the guys that can’t get how “formal” function can arise from raw bits, chemical flotsam and jetsam, etc. because all things formal are necessarily prior to the physical world. In the beginning was the Word, not the World, and God created Man in His Image, not vice versa.

Abel is simply trying, laboring mightily, to polish Polanyi’s turd. Polanyi’s JASA article (the Journal of the (Christian group) American Scientific Affiliation, not as UpBd thinks, the American Statistical Association) “Life’s Irreducible Structure”, 1968, is the cloaca from which the modern ID thinking flows.

Elizabeth (post author): Well, thanks, I will! My final curb stomp is that that can’t be Abel’s argument, because he argues that high Shannon “complexity” (for which he gives the formula for entropy) is at the opposite pole from describability.

What you describe seems to be Dembski’s position, not Abel’s (he allows strings to be both complex and describable).

Dembski’s position is also problematic, though, and Abel seems to attempt to get around Dembski’s problem by adding in “functional” – and then proposes “FSC”. But his “FSC” seems just to be something like the similarity between strings with similar function. I don’t see how it helps him.

Not that Abel seems to see Dembski’s problem. Perhaps we should have a look at Dembski now?

Mike Elzinga: I would add here that ID also inherited fundamental misconceptions from Henry Morris, who set up a narrative that pitted a caricature of evolution against a caricature of the second law of thermodynamics.

So the stage was set when the morph from “creation science” to “intelligent design” occurred after the 1987 US Supreme Court decision. Dragging in Polanyi’s writings gave the ID movement a little more panache (in their minds anyway).

I suspect some of the thoughts of A.E. Wilder-Smith seeped into Morris’s thinking, but I haven’t put much effort into tracking this down.

Fairly early on, back in the late 1970s and early 1980s, I started moving away from dealing with all the sophistry in the ID/creationist movement and started going directly after their misconceptions about chemistry, physics, and biology. In particular, I found that their mathematics and physics, when taken down to the bare bones, were ludicrously misleading.

Simply attaching words to calculations doesn’t change what calculations do. And the entities to which the calculations refer in ID/creationism have nothing to do with the forces and dynamics associated with atoms and molecules in the real world.

Mike Elzinga: I have the Dembski and Marks paper as well as Sewell’s paper. Just as Abel does, the Dembski and Marks paper attaches misleading names to calculations without any justification.

Sewell’s paper is even worse in its misconceptions and mischaracterizations.

Demolishing these papers might look like pure cruelty to some, but that’s the fault of the papers themselves.

I think I will still be available for the next few weeks before I have to do some more traveling.

junkdnaforlife: Elizabeth, I don’t think Abel would be all that rattled by this post, seeing that half a dozen charges of him being a crank, of how wrong he is, and of all the flaws in his work were pointed out before many of the posters here seemed to have a clue what Shannon entropy was in the first place. If you follow the evolution of this post, you will see that the Abel hammers come in swiftly, and are then followed quickly by confusion and bickering about what Shannon entropy is, to the point where Mike E had to come in and take everybody step by step through the equations to keep the thread from derailing into a fight about entropy rather than the paper.

So I would recommend having Mike E write a top post about Dembski’s paper first, detailing the issues and problems he sees, before releasing the hounds next time.

olegt: junkdnaforlife, I’m not sure you are qualified to judge anyone’s understanding of Shannon’s entropy. Unless that line about the periodic table was a joke.

junkdnaforlife: olegt, my spectacular naked assertion is based on the premise that in a given population of {proton | electron} we should be less surprised to observe that elements consisting of 1 proton are in fact more abundant than elements consisting of 92. My spectacular naked assertion will nose-dive into the Atlantic if U is in fact more probable than H.

junkdnaforlife: or rather, a given of: {p|n|e}

olegt: junkdnaforlife, I can’t make heads or tails of what you are trying to say by this. In any event, periodic tables do not come in ensembles, so there is no way to compute their Shannon entropy.