Conflicting Definitions of “Specified” in ID

I see that in the unending TSZ and Jerad thread, Joe has written in response to R0bb:

Try to compress the works of Shakespeare- CSI. Try to compress any encyclopedia- CSI. Even Stephen C. Meyer says CSI is not amenable to compression.

A protein sequence is not compressible- CSI.

So please reference Dembski and I will find Meyer’s quote.

To save R0bb the effort: using Specification: The Pattern That Signifies Intelligence by William Dembski, his most recent publication on specification, turn to page 15, where he discusses the difference between two bit strings, (ψR) and (R). (ψR) is the bit stream corresponding to the integers in binary (clearly easily compressible). (R), to quote Dembski, “cannot, so far as we can tell, be described any more simply than by repeating the sequence”. He then goes on to explain that (ψR) is an example of a specified string whereas (R) is not.

This conflict between Dembski’s definition of “specified”, which he quite explicitly links to low Kolmogorov complexity (see pp 9–12), and others which take the reverse view appears to be a problem that most of the ID community don’t know about and the rest choose to ignore. I discussed this with gpuccio a couple of years ago. He at least recognised the conflict, and his response was that he didn’t care much what Dembski’s view is – which at least is honest.
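
For concreteness, here is a minimal sketch (Python, purely illustrative). Low Kolmogorov complexity means a short program suffices as a description, and the few lines below regenerate any prefix of (ψR); by Dembski’s own account, nothing comparably short is known for (R).

def psi_r(n_bits):
    """First n_bits of 1, 2, 3, ... written in binary and concatenated."""
    out, k = "", 1
    while len(out) < n_bits:
        out += format(k, "b")   # 1, 10, 11, 100, 101, ...
        k += 1
    return out[:n_bits]

print(psi_r(20))   # 11011100101110111100

The program is itself a short description of (ψR), so the string’s Kolmogorov complexity stays small however long the prefix; that is exactly the sense in which Dembski counts it as specified.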

261 thoughts on “Conflicting Definitions of “Specified” in ID”

  1. gpuccio: Biological strings are scarcely compressible. 

    That is not quite correct. While standard compression routines, such as those suited to text or pictures, are not effective, it is still possible to compress biological sequences. 
     
    Adjeroh & Nan, On Compressibility of Protein Sequences, Proceedings of the Data Compression Conference 2006. 

    Also, http://data-compression.info/Corpora/ProteinCorpus/ 

    This relates to our conversation with Joe earlier. The problem with defining CSI in terms of non-compressibility is that it can lead to false positives. Indeed, the more ignorant one is, the more false positives.
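
    As a rough illustration of why generic tools gain so little here (a toy sketch, not Adjeroh & Nan’s method): a sequence over the 20-letter amino-acid alphabet carries at most log2(20) ≈ 4.32 bits per residue, and a general-purpose compressor barely improves on that ceiling even in the best case.

    import math
    import random
    import zlib

    AA = "ACDEFGHIKLMNPQRSTVWY"        # the 20 standard amino acids
    random.seed(1)
    seq = "".join(random.choice(AA) for _ in range(100_000))

    packed = zlib.compress(seq.encode(), 9)
    print("bits/residue:", 8 * len(packed) / len(seq))   # roughly 4.4 here
    print("log2(20):", math.log2(20))                    # about 4.32

    Real proteins do have statistical structure, and the specialized context models surveyed in the paper squeeze out modest further gains; “scarcely compressible” by standard tools is not the same as incompressible.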

  2. I see gpuccio has returned to his standard argument. RM+NS fails because it can’t create dFSCI, and dFSCI is that which RM+NS can’t create because there’s just too much of it.

  3. Isn’t gpuccio’s argument that dFSCI can’t be put into the genome by natural selection because an organism capable of replication has to be there to have natural selection, and such an organism already has dFSCI, so that is the source of the dFSCI?

    That argument implies that if an organism (having its initial dFSCI) evolves for a while by RM+NS and makes adaptation X, which has enough extra SI to constitute dFSCI, then the dFSCI comes from the initial SI. Then if it continues and also achieves adaptation Y, which also has enough extra SI to constitute dFSCI, that too comes from that initial complement of SI.

    And so on: the initial SI keeps getting converted to make the dFSCI of each successive adaptation. This does not make sense to me: it is “the gift that keeps on giving”, too much so. Perhaps I misunderstand, but it seemed that in the previous discussion of gpuccio’s argument, whenever the genome ended up containing dFSCI because of a particular adaptation, gpuccio kept saying that that dFSCI was already there, since the organism was capable of replication.

  4. When pinned down, gpuccio always reverts back to the argument that protein domains are irreducible. He bolsters that by arguing there is a level beyond which they appear to have no cousin sequences and therefore must have been poofed into existence in their current form.

  5. So if the “information” is already there – who cares which kind of information it is called; it’s too confusing to keep track of all the sectarian versions of information – the question that no ID/creationist has ever answered is, “Just how does this information push atoms and molecules around?”

    If this information doesn’t push atoms and molecules around, then what is the mechanism by which this information gets to those atoms and molecules so that they “know” where to go? Does information push the laws of physics and chemistry around? If so, how? What is the mechanism?

    Why can’t ID/creationists answer these questions? Where along the chain of complexity does information kick in and take over from the laws of physics and chemistry? And which is it: semiotics or information?

  6. Joe F:

    Isn’t gpuccio’s argument that dFSCI can’t be put into the genome by natural selection because an organism capable of replication has to be there to have natural selection, and such an organism already has dFSCI, so that is the source of the dFSCI?

    In comments to me, having made a similar interpretation, GP denied that this is his argument. Once the replication system or translation or whatever is in place, we take that dFSCI-to-date as a given, and apply the metric to the ‘extra’ dFSCI within a particular Time Span.

  7. Mung, 
    To clarify.

    KF claims that CSI is generated billions of times a day. Every message on a message board has a value for CSI.

    When I (or Lizzie) claim that we can write a program that can output CSI the onus is not on us to define what CSI is. The onus is on your to test the output from the program and determine the level of CSI present, if any. After all, I might be just making it all up!

    That might seem strange to you, but consider this: If ID claims to detect design via CSI then it’s irrelevant if I believe my program can output CSI or not as you can simply test it’s output and determine if it does in fact produce CSI or not. 

    So for you to say, as you seem to have by linking to the OP where Lizzie’s CSI generator was described that “CSI is real, look Lizzie claims to generate it and if she’s generating it she must know the definition” is a pathetic attempt at misdirection.

    If you can really determine design from CSI then you don’t need any further information then the output of the program. 

    If KF can say that every message on the internet is an example of intelligent design and has a measurable value for CSI then you can’t stop at messages you don’t know the origin of and say “well, just no way to tell” as that shows that you only indicate CSI is present when you already know something is designed. 

    “This string of letters and punctuation makes sense and therefore is unlikely to have come about by chance” is one thing. Yet what if the message is in a language you don’t understand? No CSI? It might just be random for all you know, yet you claim to be able to detect design. 

    So detect it already! 

  8. gpuccio: Biological strings are scarcely compressible. 

    “Scarcely” is probably too strong, but it was just an aside, and probably not relevant to the main point.

    gpuccio: As I commented about Hamlet, you can certainly compress the text somewhat, but you would still need the compressed sequence plus the decompressing algorithm to get Hamlet. 

    Sure, but the decompressing algorithm is a fixed cost: extend the text and it becomes proportionally smaller, until its size is negligible. That’s rarely an issue for a text the size of Hamlet, but if it is, try The Oxford Shakespeare: The Complete Works.
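
    The amortization is easy to see with any general-purpose compressor (a toy sketch; the word list is invented, and zlib’s fixed overhead stands in for “the decompressing algorithm”):

    import random
    import zlib

    random.seed(0)
    words = ["the", "king", "prince", "ghost", "sword", "night", "speak"]
    text = " ".join(random.choice(words) for _ in range(50_000)).encode()

    for n in (20, 200, 2_000, 20_000):
        packed = zlib.compress(text[:n], 9)
        print(f"{n:>6} bytes -> {len(packed):>6} ({len(packed) / n:.2f} per byte)")

    # At 20 bytes the fixed overhead can exceed the input itself; by
    # 20,000 bytes it is noise. The same logic applies when counting the
    # decompressor against the compressed text.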
     

  9. Yet confronted by Elizabeth’s GA program, gpuccio was not willing to acknowledge that the amount of SI increased in that program. gpuccio’s argument was that the dFCSI was already there because Elizabeth had made the program’s organisms able to reproduce.

    That’s when we all started arguing about intelligently designed computer simulations of unintelligent natural processes.

    This seems to me to be a big contradiction. When an organism has dFCSI and can reproduce, gpuccio says that we can count the “extra” SI put into the genome by an adaptation. But when the genomes are in a GA, gpuccio refused to count the extra SI that was put into those genomes. There all the SI was said to be coming from the original SI put in when the GA was set up.

    Again, am I misunderstanding gpuccio’s argument? How? 

  10. gpuccio: dFSCI is the form of CSI that I explicitly define. The definition is more or less as follows:

    We’ll number your points for reference. 

    gpuccio: #1) Any material object whose arrangement is such that a string of digital values can be read in it according to some code, and for which string of values a conscious observer can objectively define a function, objectively specifying a method to evaluate its presence or absence in any digital string of information, is said to be functionally specified (for that explicit function).

    It’s not important, but what is the function of Hamlet? 

    gpuccio: #2) The complexity (in bits) of the target space (the set of digital strings of the same or similar length that can effectively convey that function according to the definition), divided by the complexity in bits of the search space (the total number of strings of that length) is said to be the functional complexity of that string for that function.

    Again, just as an aside, how many permutations of words have the same function as Hamlet? Keep in mind the many, many versions of Hamlet. Seems intractable, especially given the lack of a clear functional specification. 

    gpuccio: #3) Any string that exhibits functional complexity higher than some conventional threshold, that can be defined according to the system we are considering (500 bits is an UPB; 150 bits is, IMO, a reliable Biological Probability Bound, for reasons that I have discussed) is said to exhibit dFSCI.

    Let’s grant that Hamlet has high functional complexity, per your definition. 

    gpuccio: #4) It is required also that no deterministic explanation for that string is known.

    So if we are ignorant, we are more likely to judge it to be design. This is nothing but a gap argument. 

    gpuccio: #5) Any object whose origin is known that exhibits dFSCI is designed (without exception).

    Of course. You just defined dFSCI in #4 as something with no known “deterministic explanation”. How could it be otherwise? 

    If we didn’t know the origin of nylonase, for instance, you would conclude design. Discovering its plausible evolutionary origin, you would then realize it was a false positive. But you could still say #5, because we would just shrink the universe of dFSCI to accommodate our findings. 
     
    Frankly, you don’t even need the math, just #4 & #5: 

    Any object whose origin is known that exhibits dFSCI is designed = 
    Any object whose origin is known that exhibits (no known deterministic origin) is designed =

    If we already know the origin and that origin is not deterministic, then design. 
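
    Written out as code, the circularity is hard to miss. A hedged sketch (Python; the names and the folding of #4 into the test are our reading of the points above, not gpuccio’s own formalism):

    import math

    def functional_complexity(target_size, search_size):
        # Point #2, as usually read: -log2 of the fraction of the search
        # space that performs the function.
        return -math.log2(target_size / search_size)

    def dfsci(target_size, search_size, deterministic_explanation_known,
              threshold_bits=150):
        # Points #3 and #4 together: high functional complexity AND no
        # known deterministic explanation.
        return (functional_complexity(target_size, search_size) > threshold_bits
                and not deterministic_explanation_known)

    # Point #5 can now never fail: anything with a known deterministic
    # explanation was already excluded by the second condition above.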

  11. Joe Felsenstein

    Again, am I misunderstanding gpuccio’s argument? How?

    His position shifts all the time. If you wait long enough, he will eventually agree with you on technical things. Not on the big picture, which is independent of technical arguments.

  12. gpuccio:

    It has all to do with dFSCI. Protein domains:
    a) Have high functional complexity (therefore cannot arise in a purely random system)
    AND
    b) Are irreducible to simpler functional naturally selectable intermediates, and therefore cannot be explained by the only available necessity mechanism, NS.

    Remo Rohs and Gorka Lasso:

    This paper provides new insights into the evolution of the symmetry of protein domains and into protein engineering. The authors show that the widely adopted domain duplication and divergence model is not the only source for domain evolution. A new evolutionary model is described, according to which a particular subdomain can lead to the assembly of a new symmetry-based protein domain by combining several repeats of the same subdomain. The latter implies that modular evolution is an ongoing process.

    Unlike Joe, I will not read an abstract and argue that the issue is settled. I will, however, argue that your claim of irreducibility is probably wrong and rests entirely on the absence of a pathetic level of detail in the evolutionary history of sequences. This is probably true of all claims of irreducibility.

    gpuccio:

    The simple explanation for the nested hierarchy is that it is easier for the designer to modify what already exists than to redo everything from scratch. Is that so difficult to understand?

    That seems to have two unrelated problems. It violates the ID code of not discussing the motives and attributes of the Designer, and it makes no sense. An omniscient being, or one that can assemble long strings of functional DNA, anticipating its function within a changing ecosystem, would not have the kind of limitations characteristic of mere mortal designers. At any rate it makes no sense to assign attributes to invisible imaginary magicians. Except as an ad hoc rationalization.

  13. That kairosfocus character lays out his “definitive” argument over at UD; and it demonstrates why ID/creationism cannot even explain the existence of galaxies, stars, the periodic table, compounds, liquids, and solids.

    This is a pretty good example of why it would take far more than 6000 words just to deconstruct all the ID/creationist misconceptions about basic chemistry, physics, and biology. Then one would have to start all over again to try to bring them up to speed on all the science they stopped learning since middle school.

    In a very rare inkling of insight, an ID/creationist, Sal Cordova, recognized something was wrong with Granville Sewell’s paper on the second law of thermodynamics. He recognized this just based on his classical understandings of thermodynamics alone.

    When Sal tried to take that insight directly to the people over at UD, he was angrily rebuffed by KF and by Sewell, as well as by others. And how was Sal “proven wrong”? The crowd over at UD found a textbook on statistical mechanics, written back in the 1980s, that attempted to apply an “information theory perspective” to statistical mechanics.

    “Information” is the great, mysterious concept of ID/creationism on which all ID/creationist arguments appear to hinge. It has to be “information” because “information” is connected with “intelligence.” “Information” overcomes all. It overcomes uniform random sampling of huge sample spaces of inert things that have to assemble into complex structures that are specified ahead of time. Therefore, intelligent design.

    To mal-appropriate a line from “The Music Man”: “INFORMATION! With a capital I, and that rhymes with pi, and that stands for Intelligence!”

  14. Mung: If genomes were just random assemblages, what sort of objective nested hierarchy would that result in?

    With random sequences of significant length, widely divergent hierarchies would typically have similar, albeit weak, degrees of fit.

    If, however, you were to start with a single sequence of significant length, and subject the sequence to replication with variation, and assuming reasonable mutation rates, then it would form an objective fit to a single nested hierarchy, and you would be able to reconstruct the lines of descent with reasonable accuracy.
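
    A minimal sketch of that claim (Python; a two-generation toy, not a phylogenetics package):

    import random

    random.seed(2)
    ALPHABET = "ACGT"

    def mutate(seq, rate=0.02):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in seq)

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    root = "".join(random.choice(ALPHABET) for _ in range(2000))
    a, b = mutate(root), mutate(root)                  # first generation
    tips = {"a1": mutate(a), "a2": mutate(a),          # second generation
            "b1": mutate(b), "b2": mutate(b)}

    # Sisters end up consistently closer than cousins, which is the signal
    # a nested hierarchy is reconstructed from; unrelated random sequences
    # are all roughly equidistant instead.
    print(hamming(tips["a1"], tips["a2"]), "<", hamming(tips["a1"], tips["b1"]))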

  15. Perhaps someone who is allowed to post there should ask him from where he got the list of configurations that can occur without magical intervention, and how that list was assembled.

    Without such a list you cannot separate configurations into designed and non-designed.

    Perhaps the list is stored and indexed in the Library of Babel.

  16. gpuccio: Points 2-4 are intended to explain how dFSCI is defined and measured.

    That’s right. #2-4 are the definition. As long as the definition is self-consistent and not conflated with other definitions, then it is what it is. 

    gpuccio: Point 5 is a completely different thing. 

    That’s right. #5 is a conclusion. 

    Per #4, anything with dFSCI has no known deterministic explanation; therefore if something with dFSCI has a known explanation, that explanation can’t be deterministic—by definition. #2 and #3 are superfluous to the vacuous tautology. They’re just window dressing. 

  17. gpuccio: It is possible to describe descriptive information (like Hamlet) in term of an explicit function, such as: a text that can convey all the information about the story, the characters, the meaning, and if we want even the emotion and the beauty.

    Sure. That’s easily put into quantitative terms.  

  18. kairosfocus: Take a protein. How much can its string vary without disastrous loss of function? If not a lot, then it is specifically functional. (In short, we are in zones T when we have relatively narrow sets of possible configs in a much larger space, that will work.) 

    Sure, let’s take a protein, say a random sequence that weakly binds to ATP. The specified complexity would be low, as these proteins are relatively common in sequence space. Now, let’s replicate and mutagenize the sequences, and select those with the strongest binding. The specified complexity has increased. After repeated generations, CSI. 
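
    That is easy to put in silico (a toy sketch: matching an arbitrary target string stands in for “binds ATP well”, and the binomial tail probability plays the part of specified complexity; nothing here simulates chemistry):

    import math
    import random

    random.seed(3)
    AA = "ACDEFGHIKLMNPQRSTVWY"
    N = 40
    TARGET = "".join(random.choice(AA) for _ in range(N))   # proxy for the function

    def fitness(seq):
        return sum(a == b for a, b in zip(seq, TARGET))

    def bits(k):
        # crude "specified complexity": -log2 P(random sequence scores >= k)
        p = 1 / 20
        tail = sum(math.comb(N, i) * p**i * (1 - p)**(N - i)
                   for i in range(k, N + 1))
        return -math.log2(tail)

    pop = ["".join(random.choice(AA) for _ in range(N)) for _ in range(200)]
    for gen in range(201):
        pop.sort(key=fitness, reverse=True)
        if gen % 40 == 0:
            print(gen, fitness(pop[0]), round(bits(fitness(pop[0])), 1), "bits")
        survivors = pop[:50]                  # keep the best quarter unchanged
        pop = survivors + ["".join(random.choice(AA) if random.random() < 0.01
                                   else c for c in s) for s in survivors * 3]

    # A perfect match is 40 * log2(20), about 173 bits, past any 150-bit
    # threshold; selection typically gets there within a few hundred
    # generations, while blind sampling never would.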

  19. Joe: Umm replication is STILL the thing you need to explain. By just using replication you expose your desperation.

    We’re just concerned with defining and measuring CSI at this point.
     

  20. Joe: How are you defining nested hierarchy? 

    The usual way, as a hierarchical ordering of nested sets. 

    Joe: It is already a given that ancestor-descendant relationships form non-nested hierarchies.

    Our comment referred to the pattern of offspring. 

    Joe: List your criteria for each level and each set, please.

    That would depend on the specific history, of course. It’s something very easy to verify for anyone interested. 

    Zachriel: If you were to start with a single sequence of significant length, and subject the sequence to replication with variation, and assuming reasonable mutation rates, then it would form an objective fit to a single nested hierarchy, and you would be able to reconstruct the lines of descent with reasonable accuracy.
     

  21. OMTWO: “When I (or Lizzie) claim that we can write a program that can output CSI, the onus is not on us to define what CSI is. The onus is on you to test the output from the program and determine the level of CSI present, if any. After all, I might be just making it all up!”

    Mung: “LOL! And you probably ARE making it up.”

    Probably? 🙂

    Mung, are you saying you don’t know for sure?

    Mung, are you saying you don’t know how to test whether CSI is present?

     

     

  22. Mung: And the connection to biological reality is?

    We’re discussing definitions of CSI, which is supposedly a signature of design. As such, we need a clear metric. Gpuccio provided a definition of what he calls “dFSCI”, which, unfortunately, includes design in its definition, so can’t be used to argue for design. 

    Eric Anderson: Indeed Shannon “information” is not even true information in any meaningful sense of the word; certainly not in the CSI sense we are interested in for technology, communications, bioinformatics, etc. 

    Um, Shannon Information is the theoretical backbone of information technology and communication systems. 

  23. This is not my area of expertise, but Shannon information seems tied to measures of bandwidth, and the various versions of CSI seem intended to measure meaning. I don’t see much prospect for a measure of meaning.

  24. Joe: With your “definition” you need to define hierarchical ordering and nested sets.

    A nested set is one which is a subset of another. More generally, a nested set model is one where any two sets are either disjoint or one is a subset of the other. Hierarchy refers to whether sets are contained or containing.

    This is off-topic for this thread. If someone wanted to start a new thread, we could continue this discussion there. Not sure it would be productive, though.
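
    In code, the definition is nearly a one-liner (a sketch; the example taxa are ours):

    def is_nested(sets):
        # Any two sets must be disjoint or stand in a containment relation.
        return all(a.isdisjoint(b) or a <= b or b <= a
                   for a in sets for b in sets)

    mammals = {"mouse", "whale", "bat"}
    cetaceans = {"whale"}
    birds = {"sparrow", "ostrich"}
    print(is_nested([mammals, cetaceans, birds]))        # True
    print(is_nested([mammals, {"whale", "sparrow"}]))    # False: overlap, no containment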

  25. It’s certainly reasonable to say that Shannon Information isn’t what they mean when discussing ID, but it isn’t reasonable to say it’s not meaningful in terms of technology and communications.

  26. Joe: “Thank you Robb,

    So you did NOT compress the text, but a digital representation of the text. Got it. Not the same thing and your bait-n-switch is more than a tad dishonest.”

    Where does this leave kairosfocus and his example of ASCII characters?

    If a digital representation of the characters we type is NOT CSI, then whatever kairosfocus sees typed on his computer screen is NOT CSI.

    Where does this leave gpuccio’s argument about dFSCI?

    By using a digital representation of FSCI, is it still an example of CSI or is gpuccio being more than a tad dishonest?

     

     

  27. gpuccio,

    This is in response to your comment 320 on the UD thread.

    To Zachriel (at TSZ):

    That’s right. #5 is a conclusion.

    Are you kidding?

    #5 is not a conclusion. It is an independent empirical observation.

    You keep using that word. I do not think it means what you think it means. — Inigo Montoya

    You have defined dFSCI as follows:

    dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.

    You have also stated that the mechanisms of the modern synthesis are a “deterministic explanation” under your definitions.

    You therefore cannot claim that your #5 is an empirical observation when there is no possible empirical observation that could lead to a conclusion that dFSCI is present in an artifact known to have evolved. The lack of dFSCI is a direct consequence of your definition, nothing else.

    A more interesting question is whether or not evolution can generate functional complexity, by your definition, in excess of 150 bits. If it can, as numerous examples in these threads suggest, then whether you call it dFSCI or not is immaterial — evolution will have been shown to be a sufficient explanation for our actual empirical observations.

     

  28. gpuccio: #4) It is required also that no deterministic explanation for that string is known.

    gpuccio: #5) Any object whose origin is known that exhibits dFSCI is designed (without exception).

    gpuccio: #5 is not a conclusion. It is an independent empirical observation.

    No, because you have defined dFSCI as something without a known deterministic explanation, hence any object with dFSCI whose origin is known can’t have a deterministic explanation — by definition.

    Try removing that clause of the definition and see what you are left with.

    gpuccio: You are obviously referring to the shameful Szostak paper.

    Shameful? Seriously?!

    gpuccio: The only algorithm present in biological contexts is NS.

    And natural selection can often select for very specific functions, just like in Szostak’s experiment. A simple example is the evolution of antibiotic resistance which is often seen in natural settings.

    gpuccio: It is. That’s how it can be done.

    a) We define the function as the ability to convey the full set of meanings in the original text (we can refer to a standard version, for objectivity).

    b) We prepare 1000 detailed questions about various parts of the text.

    c) We define the following procedure to measure our function: the function will be considered as present if, and only if, an independent observer, given the text, is able to answer correctly all the questions.

    How many Hamlets are possible? The space is as broad as human imagination; only as a thought experiment is it possible to count them. 

     

  29. In reading through this, particularly Mung’s question about the amount of Shannon information in 00101, it struck me that some of the folks at UD are sneaking in an assumption of context as the specification. In other words, there appears to be a post hoc ergo propter hoc assumption in the assignment of CSI, as in: “Because DNA contains information about how an organism should develop, that’s what it’s supposed to do.” The “supposed to” is then taken as the context/specification/intent. There does not appear to be any awareness that many biological functions can be adapted to a variety of conditions/contexts.

  30. gpuccio:

    A smart designer, I would say. Maybe not omniscient or omnipotent, but certainly smart.

    would not have the kind of limitations characteristic of mere mortal designers

    But he could certainly have other kinds of limitations.

    As long as you realize that you have invented an imaginary entity having exactly the attributes needed to fulfill your fantasy.

    In detective fiction, say in some serious work of literature like Scooby Doo, your designer would be a ghost or evil spirit.

  31. It has become abundantly clear that the people over at UD have absolutely no clue about what any kind of information is. And they certainly don’t know anything about Shannon “entropy,” Shannon “information,” Shannon “uncertainty,” or any of the different names they call it. They think taking a logarithm to base 2 endows a calculation with “information” even though they can’t tell anyone what this “information” is about, what it does, or what the mechanism is for how it pushes atoms and molecules around.

    Not one of those characters over at UD has any idea what goes on in the world of signal and image processing. They have never done any signal and image processing; and they wouldn’t have a clue about how signals and images are processed. They are just making stuff up as they go, as is easily discernible from the fact that they have been mud wrestling and word-gaming for something like 50 years now without converging on anything. It has been all smoke and mirrors and primitive grunting for the entire 50 years.

    So all anyone is going to get from those characters over at UD is immature sneering, name-calling, mooning, the finger, feces hurling, taunting, and repeated mimicking of any and all critiques of ID/creationism and of the people who offer those critiques.

    The equation for Shannon entropy is H = −∑ p_i log2 p_i, where p_i is the probability of the occurrence of the ith event and the sum is over all these events. It is a very general equation that pops up frequently in analyses of the probabilities of ensembles of events.

    We went over the behavior of this equation on an earlier thread. As was pointed out there, H is the probability-weighted average of −log2 p_i. And because all the probabilities have to add up to 1, this average becomes a maximum when all those probabilities are equal. Thus, all this formula does is reach its maximum when all events are equally probable.

    There is nothing weird going on here; there is no “magic information” that is being conveyed other than the fact that this calculation becomes smaller as some events become more probable than others in the ensemble of events.

    In fact, one doesn’t even have to use a logarithm; simply looking at the products of those probabilities gets a similar result. The logarithm is both a convenience and, in certain contexts such as statistical mechanics, it establishes a relationship to other variables that describe the system under study. It depends on the context in which the equation is used.
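
    For anyone who wants to check this behavior directly rather than take anyone’s word for it, a few lines suffice (an illustrative sketch):

    import math

    def shannon_entropy(probs):
        # H = -sum_i p_i log2(p_i); terms with p_i = 0 contribute nothing
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits: maximum for 4 events
    print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))       # ~1.36 bits: skew lowers it
    print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))       # 0.0 bits: a certainty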

    Ask an ID/creationist what that means and he can’t tell you. He can’t tell you where the knowledge about those probabilities comes from. He can’t tell you how this equation is used in signal and image processing. He can’t tell you how it is used in statistical mechanics. He simply doesn’t have a clue! To an ID/creationist, this is just a big, bamboozling, advanced-math equation that somebody called “entropy” or “information” or “uncertainty;” but he can never tell you what it means or how it makes ID/creationism a science.

    If you try to explain it to an ID/creationist, all you will get is feces hurling in return. It has been this way for decades; it never changes.

  32. But of course the ID folks don’t really need to understand the derivation or application of any of this, because for them “information” is some ineffable something-or-other that can only be created by their god, so it’s a ritual incantation, a shibboleth that identifies them as followers of the One True Faith.

    The application is actually quite straightforward: decide whether something requires their god, dub it “information”, and conclude that because it’s information, it must have been created by their god. What else is there to know? 

  33. Flint said: But of course the ID folks don’t really need to understand the derivation or application of any of this, because for them “information” is some ineffable something-or-other that can only be created by their god, so it’s a ritual incantation, a shibboleth that identifies them as followers of the One True Faith.

    Watching the churning over there at UD is a bit like watching some kind of bizarre acting routine in which the writers can’t write, the actors can’t act, the producers can’t produce, and nobody knows what is supposed to happen.

    It’s neither a tragicomedy nor a comical tragedy. It’s a thoroughly screwed up version of the Keystone Cops or the Three Stooges being done by people with pompous egos, no senses of humor, and complete certainty that they are THE masters of all knowledge in the universe.

    It might be funny if it were a single, sick routine being done on Saturday Night Live. Instead, it plods on endlessly as it churns itself into an infinite regress of grotesque caricatures of itself that just become nauseating to watch. I’m not sure that even Monty Python could capture it. It doesn’t stay funny; it just gets sicker.

  34. gpuccio: The only algorithm present in biological contexts is NS.

    Not even. The ‘algorithm’, such as it is, is essentially the processes ‘survive’ and ‘reproduce’ in each individual. When you have a set of individuals following that algorithm, higher-level constraints winnow the results in a finite world – there is not enough room for everybody, which impinges upon the ‘survive’ process. The results are winnowed whether NS is in operation or not.

  35. gpuccio: To Zachriel (at TSZ) 

    Those were onlookers’ comments. 

    gpuccio: “deterministic explanation” … They are a RV + NS (where NS is the deterministic part of the algorithm) 

    That may be the source of confusion. You had seemed to be including evolution as a deterministic process (taken broadly). However, evolution is not purely deterministic, but includes random elements. For that matter, so are evolutionary algorithms. (If you want to be pedantic, you can use a true-random generator.)

    So evolutionary algorithms can generate dFSCI, per your definition #2-4.
     

  36. Hold it. That can’t be right. 

    gpuccio: The concept is very simple: dFSCI that cannot be explained by any known mechanism warrants a design inference.

    Your definition referred to a deterministic mechanism. 

    gpuccio: The concept is very simple: dFSCI that cannot be explained by any known mechanism warrants a design inference. Why? Because dFSCI is a very good indicator of design (100% specificity in empirical tests). 

    Heh. You couldn’t have stated the God of the Gaps more explicitly. By your own statements, there are some sequences with “functional complexity”, and some of these sequences have known causes! But you still conclude that those that don’t must be designed. And when another gap is filled, you simply remove it from the class and claim your definition never fails!

  37. Shorter gpuccio:

    1. Take a bucket of complex sequences.

    2. Throw out the ones that are explained by a “known mechanism”.

    3. Amazing!  Of the sequences that are left, not a single one is explained by a known mechanism!

    4. Later you discover a mechanism that can explain one of the remaining sequences.

    5. Throw it out of the bucket and return to step #3. 

    Come on, gpuccio.  You can do better than this. 
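
    The whole procedure fits in a few lines (a sketch; explained() is a stand-in oracle for “a known mechanism accounts for this”):

    def dfsci_bucket(sequences, explained):
        # Steps 1-2: discard anything a known mechanism accounts for.
        bucket = [s for s in sequences if not explained(s)]
        # Step 3 now holds by construction: nothing left is explained.
        # Steps 4-5 just re-run this filter whenever explained() learns
        # something new, so the "no exceptions" record is safe forever.
        return bucket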

  38. It occurs to me that no one on either side of the debate knows the history of sequences, or how far removed from random sequences they are. No one knows how many stepwise mutations separate a minimally functional sequence from a highly specialized one.

    There is no grammar or syntax that we understand.

    So it makes no sense to count bases. Length of sequence does not imply meaning. This is what I had in mind when I tried to distinguish between bandwidth and meaning. DNA does not lend itself to quantifying meaning.

  39. Petrushka notes: No one knows how many stepwise mutations separate a minimally functional sequence from a highly specialized one.

    This pretty much gets to the point. All this posturing about the “improbabilities” of specified structures and functions is totally irrelevant, even at the simplest levels of complexity.

    Given a bunch of oxygen and hydrogen, what prediction does one make about the properties and functions that emerge when they are put into the same volume of space and allowed to do whatever they do? How do you even predict what they will do without having seen it?

    Will anyone predict that a function that emerges from this will be to erode huge canyons on planetary objects? Will they predict that, within a very narrow temperature range, it will be instrumental in leaching salts out of rocks? Will they predict that, within an even narrower temperature range, it will split rocks? Will they predict that it will be a solvent for millions of other compounds as well? Will they even predict snowflakes?

    Water has thousands of properties and functions that are not predictable by knowing the properties of hydrogen and oxygen. Properties and function emerge not only from the increased complexity itself, but from the interactions of emergent properties with other emergent properties extant in the environment.

    What possible prediction can anyone make about far more complex molecules and their environments without already having considerable experience with complex molecules along with the benefit of hindsight and experience? What possible prediction can one make about the properties and functions that emerge from all the atoms that make up a biomolecule in the presence of water within a narrow temperature range?

    ID/creationist log base 2 math is a pretentious child’s game compared with the real world of chemistry, physics, and biology. ID/creationists just sneer at chemistry, physics, and biology; they don’t have to learn any of it. All they need to know is how to take a logarithm to base 2 of the ratios of the cardinalities of sets of non-interacting objects and suddenly they know all; and they can pompously “predict” what will NOT happen. This is ID/creationism in a nutshell.

  40. Mike,

    Oddly enough, I see a steady, entirely predictable pattern. Like a book of problems with the answers in the back. The answers might be wildly wrong, or unrelated to the problems, but they all use the same book and the answers are Defined Truth. If they don’t fit the problems, the problems are wrong.

    Seriously, you know what they’re going to say in each instance sure as sunrise. By now, you’ve noticed that the answers never change. You can, by now, predict exactly what response you’ll get and you’ll never be wrong.

    They’re like Joseph Heller’s soldier who saw everything twice. Hold up one finger, he sees two. Hold up two, he sees two. Hold up three, he sees two. You know what’s supposed to happen, and it always does.   

  41. Flint said: They’re like Joseph Heller’s soldier who saw everything twice. Hold up one finger, he sees two. Hold up two, he sees two. Hold up three, he sees two. You know what’s supposed to happen, and it always does.

    But, as we all know, the answer is 42. 🙂

  42. Flint:

    Your observation got me to thinking about the new window dressing over at UD.

    That site has always been a pathetic scene of kvetching and self-pity about the cabal of bad old scientists throughout the entire world that rejects them and gets in the way of their winning the Nobel Prize or being the intellectual power houses of society.

    Now they have apparently adopted those two blackguards, Mung and Joe, to sit all day and throw feces, belch, fart, and moon everyone in the world.  Apparently that is their major talent; and with nothing else to do in life, what better exposure (pun intended) can two such blackguards have?  They have become the face of UD and its true feelings.  Indeed, the answer must always be two; how obvious!  They are no longer even faking the intellectualism.

    Maybe there is some humor in all that after all.

  43. Mung: “Yes, I see you do the same thing as Lizzie. You don’t actually calculate CSI.”

    //—————————-

    keiths: // program stops when this fitness threshold is exceeded
    #define FITNESS_THRESHOLD 1.0e60

    while (genome_array[0].fitness < FITNESS_THRESHOLD) …

    //——————————

    Let me relabel this for you Mung.

    #define CSI_THRESHOLD 1.0e60

    while (genome_array[0].dFSCI < CSI_THRESHOLD){ …}

    return(CSI_TRUE);

     

     

  44. Mung: And then they think that if they can just generate enough Shannon Information, it qualifies as CSI.

    A randomizer is sufficient to generate Shannon Information. Clearly CSI is meant to represent something else. The problem is getting a consistent metric. 

    Mung: Don’t blame your sloppy use of language on language.

    We used the accepted terminology. 

    natural selection: a natural process that results in the survival and reproductive success of individuals or groups best adjusted to their environment and that leads to the perpetuation of genetic qualities best suited to that particular environment.

    Zachriel: Per your own statements [gpuccio], there are some sequences with “functional complexity” and that some of these sequences have known causes! But you still conclude that those that don’t must be designed.

    Mung: That’s false. 

    Actually, that’s precisely how we read gpuccio’s statements. He defines functional complexity, excludes those with known causes, then concludes the remaining sequences are designed. Keiths summarized it above. 

  45. What makes you think he can do better? This is all that ID and creationism are: gussied-up gaps. The trick is to surround the gap with enough verbiage that you lose track of what is being done.

  46. Mung: “Finally, an actual string to analyze.

    So, can someone please post a program to algorithmically compress and decompress this string?

    And can some give me a description of it that doesn’t just consist of the string itself?

    H H H T H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H T H H H T H H H T H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H T H H H H T H H H H T H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H H T H H H T H H H H T H H H T H H H H T”

    Mung, are you saying you don’t know how to compress this string?
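
    For the record, here is a sketch that both compresses the string and describes it without repeating it (run-length encoding; the sequence is essentially runs of H’s separated by single T’s):

    from itertools import groupby

    flips = "H H H T H H H T H H H H T".split()   # paste the full string here

    rle = [(sym, len(list(run))) for sym, run in groupby(flips)]
    print(rle)   # [('H', 3), ('T', 1), ('H', 3), ('T', 1), ('H', 4), ('T', 1)]

    decoded = [s for s, n in rle for _ in range(n)]
    assert decoded == flips                        # lossless round trip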

     

  47. gpuccio: The “known causes” have nothing to do with the assessment of dFSCI. The requisites to assess dFSCI are two (as I have said millions of times):

    a) High functional information in the string (excludes RV as an explanation)

    b) No known necessity mechanism that can explain the string (excludes necessity explanation)

    Previously, you said “no deterministic explanation for the string is known”. Now you use “necessity mechanism”. We suggested there was confusion in your terminology. Is evolution a necessity mechanism? You seem to imply so when you exclude protein relatives from the set of dFSCI. 
