I see that in the unending "TSZ and Jerad" thread, Joe has written in response to R0bb:
Try to compress the works of Shakespeare: CSI. Try to compress any encyclopedia: CSI. Even Stephen C. Meyer says CSI is not amenable to compression.
A protein sequence is not compressible: CSI.
So please reference Dembski, and I will find Meyer’s quote.
To save Robb the effort. Using Specification: The Pattern That Signifies Intelligence by William Dembski, which is his most recent publication on specification, turn to page 15, where he discusses the difference between two bit strings, (ψR) and (R). (ψR) is the bit string corresponding to the integers in binary (clearly easily compressible). (R), to quote Dembski, “cannot, so far as we can tell, be described any more simply than by repeating the sequence”. He then goes on to explain that (ψR) is an example of a specified string whereas (R) is not.
This conflict between Dembski’s definition of “specified” which he quite explicitly links to low Kolmogorov complexity (see pp 9-12) and others which have the reverse view appears to be a problem which most of the ID community don’t know about and the rest choose to ignore. I discussed this with Gpuccio a couple of years ago. He at least recognised the conflict and his response was that he didn’t care much what Dembski’s view is – which at least is honest.
That is not quite correct. While standard compression routines, such as those suitable for compressing text or pictures, are not effective, it is still possible to compress biological sequences.
Adjeroh & Nan, On Compressibility of Protein Sequences, Proceedings of the Data Compression Conference 2006.
Also, http://data-compression.info/Corpora/ProteinCorpus/
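For a rough illustration of the point (not the specialized methods in the paper above), here is a minimal Python sketch using the standard-library zlib. The “protein” is an assumption: a uniform-random string over the 20 amino-acid letters, not real sequence data. A general-purpose compressor squeezes redundant English prose dramatically but gains far less on the random stand-in, which is exactly the gap dedicated protein compressors try to close.

import random
import zlib

# Redundant English prose: lots of structure for zlib to exploit.
text = ("To be, or not to be, that is the question: "
        "Whether 'tis nobler in the mind to suffer "
        "The slings and arrows of outrageous fortune. ") * 20

# Uniform-random string over the 20 amino-acid letters: a crude
# stand-in for a protein sequence (an assumption, not real data).
random.seed(0)
protein = "".join(random.choice("ACDEFGHIKLMNPQRSTVWY")
                  for _ in range(len(text)))

for label, s in (("english", text), ("protein", protein)):
    raw = s.encode("ascii")
    packed = zlib.compress(raw, 9)
    print(f"{label}: {len(raw)} -> {len(packed)} bytes "
          f"(ratio {len(packed) / len(raw):.2f})")

# Even the random string shrinks somewhat, since only 20 of the 256
# possible byte values occur (~4.3 bits/char of entropy), but nothing
# like the repetitive English text.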
This relates to our conversation with Joe earlier. The problem with defining CSI in terms of non-compressibility is that it can lead to false positives. Indeed, the more ignorant one is, the more false positives.
I see gpuccio has returned to his standard argument. RM+NS fails because it can’t create dFSCI, and dFSCI is that which RM+NS can’t create because there’s just too much of it.
Isn’t gpuccio’s argument that dFSCI can’t be put into the genome by natural selection because an organism capable of replication has to be there to have natural selection, and such an organism already has dFSCI, so that is the source of the dFSCI?
That argument implies that if an organism (having its initial dFSCI) evolves for a while by RM+NS and makes adaptation X, which has enough extra SI to constitute dFSCI, then the dFSCI comes from the initial SI. Then if it continues and also achieves adaptation Y, which also has enough extra SI to constitute dFSCI, that too comes from that initial complement of SI.
And so on: the initial SI keeps getting converted to make the dFSCI of each successive adaptation. This does not make sense to me: it is “the gift that keeps on giving”, too much so. Perhaps I misunderstand, but it seemed that in the previous discussion of gpuccio’s argument, whenever the genome ended up containing dFSCI because of a particular adaptation, gpuccio kept saying that that dFSCI was already there since the organism was capable of replication.
When pinned down, gpuccio always reverts back to the argument that protein domains are irreducible. He bolsters that by arguing there is a level beyond which they appear to have no cousin sequences and therefore must have been poofed into existence in their current form.
… which seems to have nothing to do with the stuff about dFSCI. So why bother with dFSCI?
So if the “information” is already there – who cares what kind of information it is called; it’s too confusing to keep track of all the sectarian versions of information – the question that no ID/creationist has ever answered is, “Just how does this information push atoms and molecules around?”
If this information doesn’t push atoms and molecules around, then what is the mechanism by which this information gets to those atoms and molecules so that they “know” where to go? Does information push the laws of physics and chemistry around? If so, how; what is the mechanism?
Why can’t ID/creationists answer these questions? Where along the chain of complexity does information kick in and take over from the laws of physics and chemistry? And which is it; semiotics or information?
In comments to me, after I offered a similar interpretation, GP denied that this is his argument. Once the replication system or translation or whatever is in place, we take that dFSCI-to-date as a given, and apply the metric to the ‘extra’ dFSCI within a particular Time Span.
Mung,
To clarify.
KF claims that CSI is generated billions of times a day. Every message on a message board has a value for CSI.
When I (or Lizzie) claim that we can write a program that can output CSI, the onus is not on us to define what CSI is. The onus is on you to test the output from the program and determine the level of CSI present, if any. After all, I might be just making it all up!
That might seem strange to you, but consider this: if ID claims to detect design via CSI, then it’s irrelevant whether I believe my program can output CSI or not, as you can simply test its output and determine if it does in fact produce CSI or not.
So for you to say, as you seem to have by linking to the OP where Lizzie’s CSI generator was described, that “CSI is real; look, Lizzie claims to generate it, and if she’s generating it she must know the definition” is a pathetic attempt at misdirection.
If you can really determine design from CSI, then you don’t need any further information than the output of the program.
If KF can say that every message on the internet is an example of intelligent design and has a measurable value for CSI, then you can’t stop at messages whose origin you don’t know and say “well, just no way to tell”, as that shows that you only indicate CSI is present when you already know something is designed.
“This string of letters and punctuation makes sense and therefore is unlikely to have come about by chance” is one thing. Yet what if the message is in a language you don’t understand? No CSI? It might just be random for all you know, yet you claim to be able to detect design.
So detect it already!
“Scarcely” is probably too strong, but it was just an aside, and probably not relevant to the main point.
Sure, but the compression routine can usually be made proportionally smaller by extending the text, rendering its size negligible. That’s rarely an issue for a text the size of Hamlet, but if so, then try The Oxford Shakespeare: The Complete Works.
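A minimal sketch of that point, with Python’s zlib standing in for the “compression routine”: the compressed size of a repetitive text grows far more slowly than the original, so any fixed overhead (stream headers here, the decompressor program itself in the Kolmogorov-complexity picture) becomes a vanishing fraction of the total description length.

import zlib

unit = b"The play's the thing wherein I'll catch the conscience of the king. "
for copies in (1, 10, 100, 1000):
    raw = unit * copies
    packed = zlib.compress(raw, 9)
    print(f"{copies:5d} copies: {len(raw):7d} -> {len(packed):6d} bytes "
          f"(ratio {len(packed) / len(raw):.3f})")

# The fixed cost is paid once; the longer the text, the smaller its share.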
Yet confronted by Elizabeth’s GA program, gpuccio was not willing to acknowledge that the amount of SI increased in that program. gpuccio’s argument was that the dFSCI was already there because Elizabeth had made the program’s organisms able to reproduce.
That’s when we all started arguing about intelligently designed computer simulations of unintelligent natural processes.
This seems to me to be a big contradiction. When an organism has dFSCI and can reproduce, gpuccio says that we can count the “extra” SI put into the genome by an adaptation. But when the genomes are in a GA, gpuccio refused to count the extra SI that was put into those genomes. There, all the SI was said to be coming from the original SI put in when the GA was set up.
Again, am I misunderstanding gpuccio’s argument? How?
We’ll number your points for reference.
It’s not important, but what is the function of Hamlet?
Again, just as an aside, how many permutations of words have the same function as Hamlet? Keep in mind the many, many versions of Hamlet. Seems intractable, especially given the lack of a clear functional specification.
Let’s grant that Hamlet has high functional complexity, per your definition.
So if we are ignorant, we are more likely to judge it to be design. This is nothing but a gap argument.
Of course. You just defined dFSCI in #4 as something with no known “deterministic explanation”. How could it be otherwise?
If we didn’t know the origin of nylonase, for instance, you would conclude design. Discovering its plausible evolutionary origin, you would then realize it was a false positive. But you could still say #5, because we would just shrink the universe of dFSCI to accommodate our findings.
Frankly, you don’t even need the math, just #4 & #5:
Any object whose origin is known that exhibits dFSCI is designed =
Any object whose origin is known that exhibits (no known deterministic origin) is designed =
If we already know the origin and that origin is not deterministic, then design.
Joe Felsenstein
His position shifts all the time. If you wait long enough, he will eventually agree with you on technical things. Not on the big picture, which is independent of technical arguments.
gpuccio:
Remo Rohs and Gorka Lasso:
Unlike Joe, I will not read an abstract and argue that the issue is settled. I will, however, argue that your claim of irreducibility is probably wrong and based entirely on the absence of a pathetic level of detail in the evolutionary history of sequences. This is probably true of all claims of irreducibility.
gpuccio:
That seems to have two unrelated problems. It violates the ID code of not discussing the motives and attributes of the Designer, and it makes no sense. An omniscient being, or one that can assemble long strings of functional DNA, anticipating its function within a changing ecosystem, would not have the kind of limitations characteristic of mere mortal designers. At any rate it makes no sense to assign attributes to invisible imaginary magicians. Except as an ad hoc rationalization.
That kairosfocus character lays out his “definitive” argument over at UD; and it demonstrates why ID/creationism cannot even explain the existence of galaxies, stars, the periodic table, compounds, liquids, and solids.
This is a pretty good example of why it would take far more than 6000 words just to deconstruct all the ID/creationist misconceptions about basic chemistry, physics, and biology. Then one would have to start all over again to try to bring them up to speed on all the science they stopped learning since middle school.
In a very rare inkling of insight, an ID/creationist, Sal Cordova, recognized something was wrong with Granville Sewell’s paper on the second law of thermodynamics. He recognized this just based on his classical understanding of thermodynamics alone.
When Sal tried to take that insight directly to the people over at UD, he was angrily rebuffed by KF and by Sewell as well as by others. And how was Sal “proven wrong”? The crowd over at UD found a textbook on statistical mechanics, written back in the 1980s, that attempted to apply an “information theory perspective” to statistical mechanics.
“Information” is the great, mysterious concept of ID/creationism on which all ID/creationist arguments appear to hinge. It has to be “information” because “information” is connected with “intelligence.” “Information” overcomes all. It overcomes uniform random sampling of huge sample spaces of inert things that have to assemble into complex structures that are specified ahead of time. Therefore, intelligent design.
To mal-appropriate a line from “The Music Man”: “INFORMATION! With a capital I, and that rhymes with pi, and that stands for Intelligence!”
With random sequences of significant length, widely divergent hierarchies would typically have similar, albeit weak, degrees of fit.
If, however, you were to start with a single sequence of significant length, and subject the sequence to replication with variation, and assuming reasonable mutation rates, then it would form an objective fit to a single nested hierarchy, and you would be able to reconstruct the lines of descent with reasonable accuracy.
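That claim is easy to check in a toy simulation. The Python sketch below (all parameters are illustrative assumptions) copies one ancestral sequence down two generations of branching with point mutations. Sister tips end up much closer to each other than to tips on the other branch, which is the nested-hierarchy signal from which lines of descent can be reconstructed; four independent random sequences would instead show roughly uniform pairwise distances.

import random

random.seed(1)
BASES = "ACGT"

def mutate(seq, n_mut):
    """Copy a sequence, introducing n_mut random point changes."""
    s = list(seq)
    for _ in range(n_mut):
        i = random.randrange(len(s))
        s[i] = random.choice(BASES)
    return "".join(s)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# One ancestor, two branches, two tips per branch.
root = "".join(random.choice(BASES) for _ in range(1000))
left, right = mutate(root, 50), mutate(root, 50)
tips = {"L1": mutate(left, 5), "L2": mutate(left, 5),
        "R1": mutate(right, 5), "R2": mutate(right, 5)}

# Sister tips (L1/L2, R1/R2) differ by at most ~10 changes; pairs
# spanning the branches differ by ~100: the tree is recoverable.
names = sorted(tips)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(a, b, hamming(tips[a], tips[b]))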
Perhaps someone who is allowed to post there should ask him from where he got the list of configurations that can occur without magical intervention, and how that list was assembled.
Without such a list you cannot separate configurations into designed and non-designed.
Perhaps the list is stored and indexed in the Library of Babel.
That’s right. #2-4 are the definition. As long as the definition is self-consistent and not conflated with other definitions, then it is what it is.
That’s right. #5 is a conclusion.
Per #4, anything with dFSCI has no known deterministic explanation; therefore if something with dFSCI has a known explanation, that explanation can’t be deterministic—by definition. #2 and #3 are superfluous to the vacuous tautology. They’re just window dressing.
Sure. That’s easily put into quantitative terms.
Sure, let’s take a protein, say a random sequence that weakly binds to ATP. The specified complexity would be low, as these proteins are relatively common in sequence space. Now, let’s replicate and mutagenize the sequences, and select those with the most binding function. The specified complexity has increased. After repeated generations, CSI.
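For what it’s worth, that selection loop is trivial to sketch. Below is a toy Python stand-in: the hidden TARGET motif and the match-counting “binding score” are illustrative assumptions, not Szostak’s actual assay. Starting from random sequences (weak binders by chance alone), replication of the best with mutation drives the score, and with it the improbability of the best sequence under blind sampling, up generation by generation.

import random

random.seed(2)
AA = "ACDEFGHIKLMNPQRSTVWY"
LEN, POP, KEEP = 50, 100, 20

# Hypothetical stand-in for "binds ATP well": matches to a hidden motif.
TARGET = "".join(random.choice(AA) for _ in range(LEN))

def fitness(seq):
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq):
    i = random.randrange(LEN)
    return seq[:i] + random.choice(AA) + seq[i + 1:]

# Generation 0: random sequences, i.e. weak binders by chance alone.
pop = ["".join(random.choice(AA) for _ in range(LEN)) for _ in range(POP)]

for gen in range(201):
    pop.sort(key=fitness, reverse=True)
    if gen % 50 == 0:
        print(gen, fitness(pop[0]))          # best score climbs steadily
    parents = pop[:KEEP]                     # selection
    pop = [mutate(random.choice(parents))    # replication with variation
           for _ in range(POP)]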
We’re just concerned with defining and measuring CSI at this point.
The usual way, as a hierarchical ordering of nested sets.
Our comment referred to the pattern of offspring.
That would depend on the specific history, of course. It’s something very easy to verify for anyone interested.
Zachriel: If you were to start with a single sequence of significant length, and subject the sequence to replication with variation, and assuming reasonable mutation rates, then it would form an objective fit to a single nested hierarchy, and you would be able to reconstruct the lines of descent with reasonable accuracy.
Probably? 🙂
Mung, are you saying you don’t know for sure?
Mung, are you saying you don’t know how to test whether CSI is present?
We’re discussing definitions of CSI, which is supposedly a signature of design. As such, we need a clear metric. Gpuccio provided a definition of what he calls “dFSCI”, which, unfortunately, includes design in its definition, so can’t be used to argue for design.
Um, Shannon Information is the theoretical backbone of information technology and communication systems.
This is not my area of expertise, but Shannon information seems tied to measures of bandwidth, and the various versions of CSI seem intended to measure meaning. I don’t see much prospect for a measure of meaning.
A nested set is one which is a subset of another. More generally, a nested set model is one where any two sets are either disjoint or one is a subset of the other. Hierarchy refers to the resulting order of containing and contained sets.
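A minimal sketch of that test in Python (the set contents are illustrative):

def is_nested(sets):
    """True iff every pair of sets is disjoint or related by containment."""
    sets = [frozenset(s) for s in sets]
    return all(a.isdisjoint(b) or a <= b or b <= a
               for i, a in enumerate(sets) for b in sets[i + 1:])

# Groups within groups pass; overlapping groups violate the model.
print(is_nested([{"cat", "dog", "trout"}, {"cat", "dog"}, {"cat"}]))  # True
print(is_nested([{"cat", "trout"}, {"cat", "dog"}]))                  # False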
This is off-topic for this thread. If someone wanted to start a new thread, we could continue this discussion there. Not sure it would be productive, though.
It’s certainly reasonable to say that Shannon Information isn’t what they mean when discussing ID, but it isn’t reasonable to say it’s not meaningful in terms of technology and communications.
Where does this leave kairosfocus and his example of ASCII characters?
If a digital representation of the characters we type is NOT CSI, then whatever kairosfocus sees typed on his computer screen is NOT CSI.
Where does this leave gpuccio’s argument about dFSCI?
By using a digital representation of FSCI, is it still an example of CSI or is gpuccio being more than a tad dishonest?
gpuccio,
This is in response to your comment 320 on the UD thread.
You keep using that word. I do not think it means what you think it means. — Inigo Montoya
You have defined dFSCI as follows:
dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.
You have also stated that the mechanisms of the modern synthesis are a “deterministic explanation” under your definitions.
You therefore cannot claim that your #5 is an empirical observation when there is no possible empirical observation that could lead to a conclusion that dFSCI is present in an artifact known to have evolved. The lack of dFSCI is a direct consequence of your definition, nothing else.
A more interesting question is whether or not evolution can generate functional complexity, by your definition, in excess of 150 bits. If it can, as numerous examples in these threads suggest, then whether you call it dFSCI or not is immaterial — evolution will have been shown to be a sufficient explanation for our actual empirical observations.
No, because you have defined dFSCI as something without a known deterministic explanation, hence any object with dFSCI whose origin is known can’t have a deterministic explanation — by definition.
Try removing that clause of the definition and see what you are left with.
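To make the point mechanical, here is the quoted definition rendered literally in Python (the field names are hypothetical). “#5” then holds by construction, with no empirical input anywhere:

def dfsci(functional_bits, known_deterministic_explanation):
    """Literal rendering of the quoted definition (names are hypothetical)."""
    return functional_bits > 150 and not known_deterministic_explanation

# Anything that passes this test has, by the second clause alone, no
# known deterministic explanation. Remove that clause and only the
# 150-bit complexity threshold remains; whether evolution can exceed
# it is the empirically interesting question raised above.
print(dfsci(200, known_deterministic_explanation=False))  # True: "designed"
print(dfsci(200, known_deterministic_explanation=True))   # False, by definition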
Shameful? Seriously?!
And natural selection can often select for very specific functions, just as in Szostak’s experiment. A simple example is the evolution of antibiotic resistance, which is often seen in natural settings.
How many Hamlets are possible? It’s as broad as human imagination. Only as a thought-experiment is it possible to count them.
In reading through this, particularly Mung’s question about the amount of Shannon information in 00101, it struck me that some of the folks at UD are sneaking in an assumption of context as the specification. In other words, there appears to be a Post Hoc Ergo Propter Hoc assumption in the assignment of CSI, as in – “Because DNA contains information about how an organism should develop, that’s what it’s supposed to do.” The “suppose” then is taken as the context/specification/intent. There does not appear to be any awareness that many biological functions can be adapted for a variety of conditions/contexts.
Okay, this must be the “nested hierarchy thread”. We’ll take it there.
Things That IDers Don’t Understand, Part 1 — Intelligent Design is not compatible with the evidence for common descent
gpuccio:
As long as you realize that you have invented an imaginary entity having exactly the attributes needed to fulfill your fantasy.
In detective fiction, say in some serious work of literature like Scooby Doo, your designer would be a ghost or evil spirit.
It has become abundantly clear that the people over at UD have absolutely no clue about what any kind of information is. And they certainly don’t know anything about Shannon “entropy,” Shannon “information,” Shannon “uncertainty,” or any of the different names they call it. They think taking a logarithm to base 2 endows a calculation with “information” even though they can’t tell anyone what this “information” is about, what it does, or what the mechanism is for how it pushes atoms and molecules around.
Not one of those characters over at UD has any idea what goes on in the world of signal and image processing. They have never done any signal and image processing; and they wouldn’t have a clue about how signals and images are processed. They are just making stuff up as they go; as is easily discernible by the fact that they have been mud wrestling and word-gaming for something like 50 years now without converging on anything. It has been all smoke and mirrors and primitive grunting for the entire 50 years.
So all anyone is going to get from those characters over at UD is immature sneering, name-calling, mooning, the finger, feces hurling, taunting, and repeated mimicking of any and all critiques of ID/creationism and of the people who offer those critiques.
The equation for Shannon entropy is $H = -\sum_i p_i \log_2 p_i$, where $p_i$ is the probability of occurrence of the $i$-th event and the sum runs over all these events. It is a very general equation that pops up frequently in analyses of the probabilities of ensembles of events.
We went over the behavior of this equation on an earlier thread. As was pointed out there, it is the probability-weighted average of the negative logarithms of the probabilities. And because all the probabilities have to add up to 1, this average becomes a maximum when all those probabilities are equal. Thus, all this formula does is become a maximum when all events are equally probable.
There is nothing weird going on here; there is no “magic information” that is being conveyed other than the fact that this calculation becomes smaller as some events become more probable than others in the ensemble of events.
In fact, one doesn’t even have to use a logarithm; simply looking at the products of those probabilities gets a similar result. The logarithm is both a convenience and, in certain contexts such as statistical mechanics, it establishes a relationship to other variables that describe the system under study. It depends on the context in which the equation is used.
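A minimal sketch of that behavior in Python:

import math

def shannon_entropy(ps):
    """H = -sum(p * log2(p)) over events with nonzero probability."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

# Four equally likely events: the maximum, log2(4) = 2 bits.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))
# As some events become more probable than others, the average drops.
print(shannon_entropy([0.70, 0.10, 0.10, 0.10]))
print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))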
Ask an ID/creationist what that means and he can’t tell you. He can’t tell you where the knowledge about those probabilities comes from. He can’t tell you how this equation is used in signal and image processing. He can’t tell you how it is used in statistical mechanics. He simply doesn’t have a clue! To an ID/creationist, this is just a big, bamboozling, advanced-math equation that somebody called “entropy” or “information” or “uncertainty;” but he can never tell you what it means or how it makes ID/creationism a science.
If you try to explain it to an ID/creationist, all you will get is feces hurling in return. It has been this way for decades; it never changes.
But of course the ID folks don’t really need to understand the derivation or application of any of this, because for them “information” is some ineffable something-or-other that can only be created by their god, so it’s a ritual incantation, a shibboleth that identifies them as followers of the One True Faith.
The application is actually quite straightforward: Decide whether something requires their god, dub it “information”, and conclude that because it’s information, it must have been created by their god. What else is there to know?
Watching the churning over there at UD is a bit like watching some kind of bizarre acting routine in which the writers can’t write, the actors can’t act, the producers can’t produce, and nobody knows what is supposed to happen.
It’s neither a tragicomedy nor a comical tragedy. It’s a thoroughly screwed up version of the Keystone Cops or the Three Stooges being done by people with pompous egos, no senses of humor, and complete certainty that they are THE masters of all knowledge in the universe.
It might be funny if it were a single, sick routine being done on Saturday Night Live. Instead, it plods on endlessly as it churns itself into an infinite regress of grotesque caricatures of itself that just become nauseating to watch. I’m not sure that even Monty Python could capture it. It doesn’t stay funny; it just gets sicker.
Not even. The ‘algorithm’, such as it is, consists essentially of the processes ‘survive’ and ‘reproduce’ in each individual. When you have a set of individuals following that algorithm, higher-level constraints winnow the results in a finite world – there is not enough room for everybody, which impinges upon the ‘survive’ process. The results are winnowed whether NS is in operation or not.
Those were onlooker’s comments.
That may be the source of confusion. You had seemed to be including evolution as a deterministic process (taken broadly). However, evolution is not purely deterministic, but includes random elements. For that matter, so do evolutionary algorithms. (If you want to be pedantic, you can use a true-random generator.)
So evolutionary algorithms can generate dFSCI, per your definition #2-4.
Hold it. That can’t be right.
Your definition referred to a deterministic mechanism.
Heh. You couldn’t have stated the God of the Gaps more explicitly. Per your own statements, there are some sequences with “functional complexity”, and some of these sequences have known causes! But you still conclude that those that don’t must be designed. And when another gap is filled, you simply remove it from the class and claim your definition never fails!
Shorter gpuccio:
1. Take a bucket of complex sequences.
2. Throw out the ones that are explained by a “known mechanism”.
3. Amazing! Of the sequences that are left, not a single one is explained by a known mechanism!
4. Later you discover a mechanism that can explain one of the remaining sequences.
5. Throw it out of the bucket and return to step #3.
Come on, gpuccio. You can do better than this.
It occurs to me that no one on either side of the debate knows the history of sequences or how far removed they are from random sequences. No one knows how many stepwise mutations separate a minimally functional sequence from a highly specialized one.
There is no grammar or syntax that we understand.
So it makes no sense to count bases. Length of sequence does not imply meaning. This is what I had in mind when I tried to distinguish between bandwidth and meaning. DNA does not lend itself to quantifying meaning.
This pretty much gets to the point. All this posturing about the “improbabilities” of specified structures and functions is totally irrelevant; even at most of the simplest levels of complexity.
Given a bunch of oxygen and hydrogen, what prediction does one make about the properties and functions that emerge when they are put into the same volume of space and allowed to do whatever they do? How do you even predict what they will do without having seen it?
Will anyone predict that a function that emerges from this will be to erode huge canyons on planetary objects? Will they predict that within a very narrow temperature range that it will be instrumental in the leaching of salts out of rocks? Will they predict that within an even narrower temperature range that it will split rocks? Will they predict that it will be a solvent for millions of other compounds as well? Will they even predict snowflakes?
Water has thousands of properties and functions that are not predictable by knowing the properties of hydrogen and oxygen. Properties and functions emerge not only from the increased complexity itself, but from the interactions of emergent properties with other emergent properties extant in the environment.
What possible prediction can anyone make about far more complex molecules and their environments without already having considerable experience with complex molecules along with the benefit of hindsight and experience? What possible prediction can one make about the properties and functions that emerge from all the atoms that make up a biomolecule in the presence of water within a narrow temperature range?
ID/creationist log base 2 math is a pretentious child’s game compared with the real world of chemistry, physics, and biology. ID/creationists just sneer at chemistry, physics, and biology; they don’t have to learn any of it. All they need to know is how to take a logarithm to base 2 of the ratios of the cardinalities of sets of non-interacting objects and suddenly they know all; and they can pompously “predict” what will NOT happen. This is ID/creationism in a nutshell.
Mike,
Oddly enough, I see a steady, entirely predictable pattern. Like a book of problems with the answers in the back. The answers might be wildly wrong, or unrelated to the problems, but they all use the same book and the answers are Defined Truth. If they don’t fit the problems, the problems are wrong.
Seriously, you know what they’re going to say in each instance sure as sunrise. By now, you’ve noticed that the answers never change. You can, by now, predict exactly what response you’ll get and you’ll never be wrong.
They’re like Joseph Heller’s soldier who saw everything twice. Hold up one finger, he sees two. Hold up two, he sees two. Hold up three, he sees two. You know what’s supposed to happen, and it always does.
But, as we all know, the answer is 42. 🙂
What keiths said.
Flint:
Your observation got me to thinking about the new window dressing over at UD.
That site has always been a pathetic scene of kvetching and self-pity about the cabal of bad old scientists throughout the entire world that rejects them and gets in the way of their winning the Nobel Prize or being the intellectual power houses of society.
Now they have apparently adopted those two blackguards, Mung and Joe, to sit all day and throw feces, belch, fart, and moon everyone in the world. Apparently that is their major talent; and with nothing else to do in life, what better exposure (pun intended) can two such blackguards have? They have become the face of UD and its true feelings. Indeed, the answer must always be two; how obvious! They are no longer even faking the intellectualism.
Maybe there is some humor in all that after all.
Let me relabel this for you, Mung.
#define CSI_THRESHOLD 1.0e60
while (genome_array[0].dFSCI < CSI_THRESHOLD) { … }
return CSI_TRUE;
A randomizer is sufficient to generate Shannon Information. Clearly CSI is meant to represent something else. The problem is getting a consistent metric.
We used the accepted terminology.
natural selection: a natural process that results in the survival and reproductive success of individuals or groups best adjusted to their environment and that leads to the perpetuation of genetic qualities best suited to that particular environment.
Actually, that’s precisely how we read gpuccio’s statements. He defines functional complexity, excludes those with known causes, then concludes the remaining sequences are designed. Keiths summarized it above.
What makes you think he can do better? This is all that ID and creationism are. Gussied-up gaps. The trick is to surround the gap with enough verbiage that you lose track of what is being done.
Mung, are you saying you don’t know how to compress this string?
Previously, you said “no deterministic explanation for the string is known”. Now you use “necessity mechanism”. We suggested there was confusion with your terminology. Is evolution a necessity mechanism? You seem to imply so when you exclude protein relatives from the set of dFSCI.