Conflicting Definitions of “Specified” in ID

I see that in the unending TSZ and Jerad thread, Joe has written in response to R0bb:

Try to compress the works of Shakespeare- CSI. Try to compress any encyclopedia- CSI. Even Stephen C. Meyer says CSI is not amenable to compression.

A protein sequence is not compressible- CSI.

So please reference Dembski and I will find Meyer’s quote

To save Robb the effort.  Using Specification: The Pattern That Signifies Intelligence by William Dembski, which is his most recent publication on specification, turn to page 15 where he discusses the difference between two bit strings (ψR) and (R). (ψR) is the bit string corresponding to the integers in binary (clearly easily compressible).  (R), to quote Dembski, “cannot, so far as we can tell, be described any more simply than by repeating the sequence”.  He then goes on to explain that (ψR) is an example of a specified string whereas (R) is not.

This conflict between Dembski’s definition of “specified”, which he quite explicitly links to low Kolmogorov complexity (see pp 9-12), and others who take the reverse view appears to be a problem which most of the ID community don’t know about and the rest choose to ignore.  I discussed this with Gpuccio a couple of years ago. He at least recognised the conflict, and his response was that he didn’t care much what Dembski’s view is – which at least is honest.
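Dembski’s link between specification and low Kolmogorov complexity can be illustrated with a rough sketch. Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives an upper bound on description length, so compressed size serves as a crude proxy. The sketch below is my own illustration, not Dembski’s code; the string constructions are assumptions based on his description of (ψR) and (R):

```python
import random
import zlib

# Crude proxy for Kolmogorov complexity: size after a general-purpose
# compressor (an upper bound on description length, not K itself).
def compressed_size(s: str) -> int:
    return len(zlib.compress(s.encode("ascii"), level=9))

# (psi_R)-style string: the binary expansions of 0, 1, 2, ... concatenated.
psi_r = "".join(format(i, "b") for i in range(2000))

# (R)-style string: same length, but pseudo-random bits with no evident
# shorter description (seeded so the run is repeatable).
rng = random.Random(0)
r = "".join(rng.choice("01") for _ in range(len(psi_r)))

print(compressed_size(psi_r), "<", compressed_size(r))
```

On this proxy the structured string needs far fewer bytes than the random one, matching Dembski’s point that (ψR) is compressible while (R) can only be described by repeating it.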

261 thoughts on “Conflicting Definitions of “Specified” in ID”

  1. Joe

    The data says there wasn’t a tornado in the UK which means you are a liar.

    In fact we often experience tornadoes in the UK, about 50 per year. An extreme example: http://www.telegraph.co.uk/topics/weather/9252146/Tornado-spotted-in-Oxfordshire-as-storms-batter-southern-England.html
    But I never said when this event happened. It was some time ago. I can’t say more than that (security!), but if your excuse is that “there was no tornado so this could not have happened” then so much for SETI! But it did happen, so will you help, or will you not apply the “design detection skills” that you claim to have?

  2. Gpuccio,
     

    I can try no design inference for any of the documents unless I can recognize and define a function for one of them, or both.

    Really? So if we find a structure on Mars made of glass you’ll deny design until you know what its function is? Or would that “obviously” be designed? This seems circular to me. You cannot make a design inference unless you can determine the function it was designed to provide? Really?

    Just by looking at them, I cannot say if they are functional or not, and therefore I will make no design inference for any of the two.

    The same could be said for any string. If you happen not to know the function then all strings look the same, right? So Hamlet is designed because you can read and understand it, but if you lack that ability you are stuck? Does ID not have more robust design detection mechanisms than that?

    But I can suggest a few ways to investigate that problem, if your limited funds allow that. The simplest way would be to assume that they are sequences of protein coding genes, and compare them with existing databases.

    The problem is I only have enough money to do that for one of the documents. If only ID could provide a way to determine which of those documents I should study.

    The second way would be to decode them into AA sequences, and compare them with existing databases (that’s essentially, but not exactly, the same as in the previous step).

    Again, the same problem, which document to choose?

    A third way would be to synthesize the proteins themselves, and test them for structure and biological function.

    Again, the same problem.

    Unless and until some definite biochemical function is found, I will not make any design inference for sequences like those ones.

    So you determine design by taking the blueprint and building something from it? By definition blueprints refer to designed objects. And your claim is that all proteins are designed, so if a protein is the end product then design is a given?

    If any of the sequences is found to correspond to a functional protein, I will make a design inference for it (we are speaking of hundreds of AAs here, and length is in our favour).

    Is that the only possible way that ID can come to a design inference for long strings of data like this? What if I told you it was a signal from space. Would it automatically become design then? Or would we still have to examine proteins?

    Just to be fastidious, we could also infer design for the simple function of being sheets of paper with characters printed on them. That could probably warrant a design inference for both, but in a completely different sense: the printed sheet of paper is certainly designed, but the printed sequence could still be random.

    Sigh. Then why don’t you start there? I’ve already made it clear that the fact it was originally on paper is irrelevant; the data is what is important. And if all ID can say about this situation is “well, those sheets of paper with printing on, they are designed they are!” then forgive me for being singularly unimpressed.

  3. Joe,
    Seems that Kairosfocus disagrees with you:
     

    First, if there is design at work, but the pattern shown is one that exhibits the statistics of a chance based random process, i.e. a probability distribution, the filter will infer to chance contingency. That is a false negative and is part of the price paid to make sure that inferences to design are morally certain.

    http://www.uncommondescent.com/intelligent-design/id-foundations/the-tsz-and-jerad-thread-continued/#comment-436715
    So given that the Lederberg experiment shows that the mutations were not due to environmental cues (if you actually read the paper this is obvious), they were not built-in responses.
    Put simply, if they were built-in responses, that mechanism is not working very well, because the mutations happen regardless of the environment.
    So even if the “response mechanism” is built in, it’s faulty, because it acts regardless of the environment.
    So whence come the mutations?
    As KF says: “First, if there is design at work, but the pattern shown is one that exhibits the statistics of a chance based random process, i.e. a probability distribution, the filter will infer to chance contingency.”
    So the pattern of mutations observed exhibits the statistics of a chance based random process and as such does not represent any “design” at all.

    Except of course, your “designed to evolve” fallback of (almost) last resort.

    But given that this is achieved by imperfect replication you are once again adding unneeded entities. Why don’t you discuss your point with KF and explain to him how Zachriel is wrong about this (classic) experiment.

  4. Another day, another bizarre interpretation by Joe. He invokes ‘quorum sensing’ by bacteria as evidence that they adapt to antibiotics using a pre-specified capacity, combined with some mechanistically unclear communication method.

    You can take bacteria that are killed by low levels of antibiotic, and plate them out on a concentration gradient made of strips of gel. All the bugs at or above the lethal concentration die. There is nothing in the population capable of an immediate response to this supposed ‘environmental cue’. They grow only in the antibiotic-free portion.

    So you sit back and watch. A tiny offshoot ‘probes’ the gradient at the next level, and spreads laterally from this point. From this new front, another point seeds the next advance. And so on, until the bugs can cope with a gel at saturation point – you can’t physically dissolve any more antibiotic.

    Where on earth does quorum sensing communication come into this? What is being communicated to or from the rest of the population by these mutants that are able to cope with the higher concentrations? And where is the adaptive capacity located in the non-mutated organisms? You can certainly tell they are mutants, by the simple expedient of sequencing them. So this is Joe’s “maybe in the cell wall” computer program that generates adaptive mutations to order. Maybe it’s just random mutation.

    And … if artificial ribosomes don’t function, how come one can Google numerous papers on functional artificial ribosomes? http://www.technologyreview.com/news/412471/creating-cell-parts-from-scratch/

    I look forward to the meta-commentary – “and Allan Miller chimes in with …”. It amazes me the extent to which Joe can mangle scientific concepts and receive not a word of ‘correction’ from his peers. Do they really all think he’s the science expert that he evidently does?

  5. gpuccio: But if A (or B), after one of them happens, expand to the whole population, for a deterministic effect like NS, in a short time, then the scenario changes. 

    Zachriel: Or due to neutral drift. 

    gpuccio: Wrong. Neutral drift does not change the scenario in any way. It is just a form of RV, and RV is already accounted for in the scenario.

    It’s not “wrong”. It may be superfluous, as you said effects “like NS”.  We were clarifying that point. As Lenski demonstrated, drift can be important in adaptation. 

    gpuccio: a) The effect of NS in reality would be much lower than what I have hypothesized in my model.

    Not sure we’ve seen your math. Of course, standard population genetics were worked out generations ago by Fisher et al. Do your results differ? 

    gpuccio: b) Functional intermediates should absolutely leave traces in the existing genomes.

    Oh? Why is that? Indeed, natural selection should tend to purge the extraneous over time. 

    gpuccio: Each time you are pressed for real examples of your theory, you shift to macroscopic phenotypic effects (indeed, to that single example).

    Your claims nearly always are general claims about the evolution of complexity. The mammalian middle ear is an excellent example as it is familiar to most readers and combines embryological, fossil and molecular evidence, along with a good scientific detective story. 

    gpuccio: But you must know very well that we have absolutely no idea of what genotypic modifications are the basis for those phenotypic changes. Therefore, it is completely impossible to analyze those “sequences” in terms of genomic information. Therefore, they are irrelevant to the ID-neodarwinism debate.

    That’s funny. Of course it’s relevant. Embryological data predict the fossils. That’s hugely important from a scientific vantage. When you can make those sorts of predictions independent of evolutionary theory, then maybe you will gain some scientific currency. 

    gpuccio: The answer seems rather simple: you have no arguments at the level of molecular biology, and so you recur to the only things you have left.

    Actually, your arguments seem to be about the evolution of complexity, for which we have strong evidence. Instead, you retreat into the most ancient transitions, which left no fossils. It’s a gap!! 

    In any case, small changes to certain genes can be shown to cause relevant changes to the mammalian middle ear. 

    Mallo, Formation of the Middle Ear: Recent Progress on the Developmental and Molecular Mechanisms, Developmental Biology 2001. 

    gpuccio: It’s well established that something is essential for traversing rugged landscapes. That recombination can do that in the biological context does not appear so well established, IMHO.

    That recombination is important in traversing rugged landscapes is a mathematical result. Try running a few evolutionary algorithms.  

    gpuccio: And anyway, the experiment in that paper was dealing with a complete, and very favourable, biological setting for phages, where any natural mechanism was free to act. So, why was the rugged landscape not traversed?

    Because simple point mutation algorithms will climb the nearest peak and stop. If there are billions of peaks, then you have to start with billions of initial sequences in order to have a decent chance of finding the highest peak. Recombination can largely overcome this problem. 
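The claim above, that a pure point-mutation hill climber stalls on the nearest peak while recombination can jump between peaks, can be shown with a deliberately tiny toy landscape. This is my own illustrative construction, not from the thread and not biology:

```python
# Toy rugged landscape over 4-bit genomes: a deceptive global peak at 1111,
# but otherwise fitness rewards zeros, so every single-bit flip away from
# 0000 is downhill.
def fitness(s):
    return 10 if s == (1, 1, 1, 1) else s.count(0)

def hill_climb(s):
    # Greedy single-point-mutation climber: accept only strictly uphill flips.
    improved = True
    while improved:
        improved = False
        for i in range(len(s)):
            t = list(s)
            t[i] ^= 1
            t = tuple(t)
            if fitness(t) > fitness(s):
                s, improved = t, True
                break
    return s

def recombine(a, b, point=2):
    # One-point crossover.
    return a[:point] + b[point:]

best = hill_climb((0, 0, 0, 0))
print(best, fitness(best))                    # stuck on the local peak 0000

child = recombine((1, 1, 0, 0), (0, 0, 1, 1))  # two mediocre parents
print(child, fitness(child))                   # lands on the global peak 1111
```

The climber halts at 0000 because every single flip lowers fitness, while one crossover of two mediocre parents reaches the global peak directly; that, in miniature, is why recombination helps on rugged landscapes.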

    Zachriel: Yes, apparently natural selection is capable of evolving quite adequate proteins — even with one hand tied behind its back!

    gpuccio: I will not comment on that. I usually respect religious faith, in all its forms.

    It’s an empirical statement. Adequate proteins evolved, even without recombination. 

  6. Gpuccio at UD
     

    Really! Why are you surprised? That’s clearly stated in my definition and procedure for dFSCI evaluation. I need a specification, and in my specific definition (dFSCI) the specification must be functional.

    My understanding is that both documents are functional in some way. I just don’t know what that function is.

    No. And it does not need it. For most biological strings, especially proteins and protein coding genes, the function is well known and measurable. We are quite satisfied with that.

    Can you give me an example of just one “biological string”, what its “function” is, and how you determined that function is “the” function?

    Get more money.

    It’s a thought experiment. It’s abstract. I don’t really have an office. There was really no tornado. That was all for Joe “literal” G’s benefit.

    Try tossing a coin. Or some form of divination.

    So when ID is presented with a set of unknown strings and is asked to choose which is the more interesting with no further data we have to “toss a coin”?

    No. I recognize, define and measure the function, and then I must assess the target space/search space ratio. It’s all explained in my detailed description of the procedure to assess dFSCI.

    As before, what is the function of HIV, and what is its dFSCI?

    I don’t know what you mean with “blueprints”. I have spoken of functions. I can define a function for a stone, as a paperweight, but that does not mean that the stone is designed. So, your statement is simply wrong.

    Then what is the function of HIV?

    Where have you been while we were discussing things? My claim is that if a protein exhibits enough functional complexity (let’s say more than 150 bits), and no credible neo darwinist path is known for its emergence, I infer design for it. I agree that I would infer design for many proteins, or more precisely protein superfamilies.

    Credible to whom? You? Let me rephrase the question. Does either of those two documents have “functional complexity”? If so, how much?
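For concreteness, the “150 bits” threshold gpuccio mentions is, per his description elsewhere in the thread, computed from the target space/search space ratio: dFSI = -log2(target/search). The numbers below are purely hypothetical, chosen only to show the arithmetic; nothing in the thread gives actual target-space counts:

```python
import math

# gpuccio's dFSI, as described in the thread: the negative log2 of the
# target-space/search-space ratio. All figures below are hypothetical.
def dfsi_bits(target_space: float, search_space: float) -> float:
    return -math.log2(target_space / search_space)

aa_len = 120
search = 20.0 ** aa_len   # all sequences of 120 amino acids (20 AAs per site)
target = 20.0 ** 80       # assumed count of functional sequences (made up)

bits = dfsi_bits(target, search)
print(f"dFSI ~ {bits:.0f} bits")
```

With these made-up figures the result is about 173 bits, which would clear a 150-bit cutoff; the entire difficulty the thread keeps circling, of course, is how one would ever measure the target space.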

    Yes. ID is not divination. It is scientific, and science has its limits.

    So ID can only be applied in the specific case of DNA sequences by building proteins and seeing if they are “functional”? This is quite different from the version of ID usually given.

    If our working hypothesis is that the strings are DNA sequences coding for proteins, then certainly yes. If we have other possible functional meanings for the strings, we can certainly pursue them too.

    As yet we’re not at that point. The point we’re at is “Can ID do any better than tossing a coin when determining which of these two sequences is worth investigating, given that only one can be investigated in this example?” The answer, so far, is no.

    I answered in detail to that question, and showed how ID can give a very definite answer, making a design inference in some cases, and not making it in others. It requires, obviously, some work and some reasoning. If you are not interested in doing the work, you will not get any answer. In the absence of any recognized function, no design inference can be made.

    If you can explain to me how to “do the work” then I’ll happily do it. But so far the choice is simple – which of the two documents would *you*, given your information/design expertise, choose to examine in detail, and why?

    I am not so interested in impressing you. If ID cannot solve your problem because you have not the money to use ID for solving it, well, I can survive. I am quite satisfied that ID can solve the problem of the origin of biological information, which is frankly more interesting to us all than your personal (imagined) misadventures.

    It’s a thought experiment. I would have thought that did not need to be explained. It’s a test. Here are two documents. I’m heavily implying that ID should be able to tell us something about each of them. So far it’s all been excuses. If you don’t want to play, that’s fine, but simply saying “well ID can’t do anything of practical use but personally I’m satisfied that it explains the origin of life” is not even trying.

    http://www.uncommondescent.com/intelligent-design/id-foundations/the-tsz-and-jerad-thread-continued/#comment-436722 

  7. Joe:
     

    BWAAAAAAHAAAAAAHAAAAA- design detection tells us if agency involvement was required. From there we investigate further.

    Then the question is: Was agency involvement required in the creation of either of those data sets?

    So go ahead and attempt to “detect design” in either of those two documents.

    Dare you! 

  8. If you can’t determine functionality without doing the chemistry, then design without evolution is impossible. There is no faster way to find the holistic optimum of all interrelated systems than by fecundity and selection.

  9. Gpuccio
     

    How do you know that both documents are “functional”?

    That is part of the game. One is, one is not. Or neither are. Or both are. Can’t ID tell? 

    Function: Produces pentose sugars for nucleic acid synthesis and main producer of NADPH reducing power.

    No, that’s what it does. Its function is something quite different. For example, a car turns petrol into heat and gas. That is what it does. Its function is something quite different.

    If you had read my definition of dFSCI, you would know that we can define any function for the observed object, and that the computation of dFSI will be made for the function we have defined.

    In that case the function of the data contained within the documents is “to see if ID can tell us anything at all about the data”.

    I should have known that irony is wasted with some people…

    Perhaps you should explain the concept of irony to Joe. He’s the one that says you can’t investigate the data without examining it in person.

    Or, like any serious investigator would do, analyze all the strings. You asked to decide which string we should analyze without analyzing them. That is divination, not science.

    No, I asked which of the strings we should analyse in detail. You can in fact perform whatever level of analysis you like of course. If you want to examine both, please feel free to do so.

    The whole virus can be described as a virus having the ability to infect specific cells, and to reproduce itself through that process.

    No, once again, that’s what it does; its function appears to be quite different.

    Whole organisms, even if relatively simple like the HIV virus, are much more intractable to a detailed analysis.

    Yes, so simple that millions of hours of effort have gone into curing it and still there is no cure.

    You may not know, but that kind of research has been done for decades. That’s why huge databases exist, like Uniprot, that list known proteins and their functions, and their coding genes.

    That’s not ID research nor anything like it. What is the link between Uniprot and ID please?

    The answer is definitely no. ID cannot say “which of these two sequences are worth investigating” without investigating them.

    So investigate them already and stop with the excuses. If you worked at SETI you’d give up on day one, as until interesting sequences are identified it’s all just noise. 

    If the sequences were in English, it would be rather easy, even for a darwinist like you, to understand at first sight which makes sense and which does not make sense.

    Condescend much? So design is detected by your ability to immediately understand the message? Hey, it’s written in English so it’s probably designed….

    But how do you believe that I, or anyone else, can decide “at first sight” if a nucleotide sequence corresponds to a functional protein, without making any attempt at studying the sequence? Tossing a coin remains the best option.

    Who said anything about nucleotide sequences or functional proteins? Who said anything about what the data represents. This is all the baggage and preconceptions you are bringing to the code, it’s nothing about the code itself.

    The simplest way would be to assume that they are sequences of protein coding genes, and compare them with existing databases.

    Once more, I don’t have the money to do that for both. Can ID suggest which document is more “interesting” than the other?

    As my “information/design expertise” does not make of me a prophet, I say: both. If I can only investigate one, I will toss a coin. And infer design (or not) for the one I have investigated.

    So pick one and investigate it already.

    I have really no reason to play with you. I choose my playmates very accurately.

    Yet here we are.

    I give you a final answer: with what I know at present, I cannot make a design inference about your two strings. That’s all. Sorry for you (in many senses).

    Yet the way KF talks, it’s the simplest thing in the world, with “billions of examples” generated every day. Yet when we get specific, nothing.

  10. Joe, 

    We look for signs of agency involvement because we know if an agency was involved that changes the investigation and opens up new questions. Which means the design inference is not a dead end, but a new beginning.

    Then which, if any, of those data sets had agency involvement in their creation?

  11. gpuccio: The point is: drift does not change the probabilistic scenario.

    Without drift, some adaptations are not even possible. However, as a first-order approximation, it makes some sense.

    gpuccio: I have linked it many times.

    That would have been a good place to put the link.

    gpuccio: The same NS that, according to major darwinist thinkers, leaves more than 95% junk DNA in our genome? Really strange…

    Your nomenclature is poor. Darwin identified the existence of vestigial structures. Darwin would be, presumably, a darwinist. Generally, darwinists (those who think natural selection is the primary mechanism of evolution) have resisted the idea that the genome is mostly junk. However, polyploidal genomes, and some amoebae with genomes far larger than the human genome, tend to indicate that some genomes contain a lot of redundancy.

    gpuccio: I can’t find there any molecular information about the evolution of the middle ear, although there is a lot of interesting information about the complex molecular control of the development of that structure, based mainly on gene inactivation experiments.

    So we have an almost unbelievable prediction from embryology: that the irreducibly complex structure of the mammalian middle ear evolved from reptilian jaw bones. Astoundingly, we find fossils of intermediate structures buried in the rocks. And we even have evidence that small changes to genes directly affect the related structures.

    gpuccio: That recombination can do that in the biological context does not appear so well established, IMHO.

    Your claim was that recombination was “wishful thinking”, when we know from mathematical studies that recombination is effective in rugged landscapes. You reject a plausible mechanism without evidence.

    Xia & Levitt, Roles of mutation and recombination in the evolution of protein thermodynamics, Biophysics 2002.

    Bittker et al., Directed evolution of protein enzymes using nonhomologous random recombination, PNAS 2004.

    gpuccio: Should I laugh?

    Sure. It’s good for the health. But it doesn’t address the point that even lacking one of the primary mechanisms of evolutionary novelty the experiment still resulted in adequate function. This is expected when exploring a rugged landscape.

    gpuccio: There is no need to “determine that function is “the” function”. If you had read my definition of dFSCI, you would know that we can define any function for the observed object, and that the computation of dFSI will be made for the function we have defined.

    That’s fine, but if you didn’t know the origin of nylonase, you would still conclude design.

     

  12. Joe,

    Well context is important. And context is missing. I know those letters didn’t appear via nature, operating freely.

    No, I and others had something to do with it.

    So I would say the existence of those letters on the intertubes was the result of some agency.

    Yes, that’s right. I made it happen. But that’s not the point. By definition, all letters printed in a book or on a screen are there via some agency. But none of this speaks to the content of the data itself. If those letters were scratched on a monolith on the dark side of the moon, the fact that they were put there by “an agency” would be the least interesting thing about them. What they mean would be far more interesting. Yet it seems you would be happy to leave it at that.

    That said if the data represents DNA sequences there isn’t any evidence that blind and undirected processes could produce either of those.

    Now we are getting somewhere. Yes, we know that you claim that DNA was designed. That’s of no relevance here. I’m asking about these two data sets specifically. Beyond *my* “agency involvement” of getting them to appear on the internet, was there an *agency* involved in their creation?

    We look for signs of agency involvement because we know if an agency was involved that changes the investigation and opens up new questions.

    If you are happy to leave it at “there was an agent involved in getting them to appear on my screen from the internet” then that’s fine. I can just put you down for “Joe tells me something I already know about the data: that I, an agency, was involved in getting it onto the internet in a format he could read”.

    So that is a start- we know your position’s mechanisms didn’t do it.

    Do what? What does “my position” have to do with what ID can tell us about those two documents? And what is a start, anyway? All you’ve said so far is “if the data represents DNA sequences there isn’t any evidence that blind and undirected processes could produce either of those” – well, that’s only true if they represent DNA sequences. Do they? Will you come down on at least one side of the fence on that, then? It’s not much, but it would be progress, as “if” is no good to anybody – it was you that said ID looks for agency involvement. So far all you’ve done is hedge your bets and refuse to stake a claim. And that’s what this game is all about. “If this, if that, if the other” is no good. Say something about these datasets.

  13. Summary so far.

    Gpuccio had a go, which was great. He thinks the data represents DNA, and as such we need to instantiate it and see what it does, and that will determine “design or not”. Once instantiated, if there is any function at all then the original data was designed, as function is so rare in the total space that finding any function at all is a strong indicator of design.

    So far this is the best idea, with at least an outcome that either indicates design or not. So it’s doable.

    Joe also had a go, but with no testable proposal, unlike Gpuccio’s, which is at least feasible, so I’ll hold off on assigning him an answer just yet.

    Kairosfocus also had a go; he quoted me without naming me in this post: http://www.uncommondescent.com/intelligent-design/id-foundations/the-tsz-and-jerad-thread-continued/#comment-436715
    and says:
     

    Recall, the 500 bit solar system resources limit, is effectively the same as saying set up a cubical haystack 1,000 LY across (about as thick as our galaxy), and then take a blind random sample of one straw-sized object. Sampling theory tells us strongly that by overwhelming likelihood, the sample will be straw. This is the needle in the haystack challenge on steroids. The 1,000 bit cosmos we observe resource limit is far more stringent than this.

    If you are reading, KF, would you be able to apply this test to my datasets and determine if they are inside/outside that resource limit you mention? That would be an interesting test. Other than that he’s ignoring the game. I wonder why; of all of them he seems best equipped to come to some determination. He can do it for billions of messages a day, inferring design by calculating probabilities in possibility space, but he can’t apply his publicly stated methodology to two specific documents when asked? Why not, I have to wonder.
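For reference, the arithmetic behind KF’s “500 bit solar system resources limit” can be reproduced in a few lines. The input figures (10^57 atoms, 10^45 state changes per second, 10^17 seconds) are the ones commonly quoted in his version of the argument; they are taken here as assumptions, not endorsed values:

```python
import math

# Figures as commonly used in the "500-bit" argument (assumed, not endorsed):
atoms   = 1e57   # atoms in the solar system
rate    = 1e45   # state changes per atom per second (~inverse Planck time)
seconds = 1e17   # time available (~age of the universe in seconds)

max_trials = atoms * rate * seconds       # ~1e119 possible "samples"
bits_searchable = math.log2(max_trials)   # ~395 bits of configuration space

space_bits = 500                          # KF's threshold: a 2^500 space
fraction_sampled = 2 ** (bits_searchable - space_bits)

print(f"searchable: ~{bits_searchable:.0f} bits")
print(f"fraction of a 500-bit space sampled: {fraction_sampled:.1e}")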

  14. I find it somewhat amusing that gpuccio’s method for determining functionality turns out to be chemistry and selection. He hasn’t elucidated any necessity for a designer other than to produce saltations.

    The word saltation seems rather quaint, and not many people seem to know its history or what it means. Basically it’s a Behe hop, a large, improbable mutation that leaps over Behe’s Edge. The concept really disappeared from biology until Behe revived it.

    Gpuccio’s theory is nothing more than the molecular equivalent of no transitional fossils. It seems safer to people like Behe and gpuccio because molecules don’t leave fossils, or at least not for long. The latest research indicates that all DNA degrades within a few million years, even if frozen.

  15. Mung: You have to just love how they appeal to recombination when they feel they need to, but at other times it seems they think it totally irrelevant.

    Please substantiate that claim. When have we minimized the importance of recombination in traversing rugged landscapes?

    Mung: And think back to my earlier arguments about how there is a reason for randomizing the genome at the start of a run, and how that is very unlike natural populations.

    That’s irrelevant with typical rugged landscapes. Randomized genomes will quickly climb local peaks.

     

  16. KF’s summary is parichical because he equates the knowledge of how to build biological adaptations with already existing straws in a 1,000 LY cubical haystack. As such, he thinks Darwinism would represent a vast series of one astronomically unlikely events after another, after another, etc. As far as he is concerned, it’s absurd. 

    However, I’m suggesting that this view is mistaken. Darwinism genuinely creates non-explanatory knowledge. As such, to use KF’s analogy, there was no straw already there that evolution lands on. 

    IOW, probability simply isn’t applicable in this case as knowledge creating processes represent a different kind of unknowability. This makes the application of probability limited to very specific cases.  

    Another example of the impact of this unknowability can be found in this TED 2011 TED talk.  In fact, Darwinism becomes an even better explanation when we integrate it with our current, best, universal explanation for the grown of knowledge.   

    For example, dividing knowledge (useful information that tends to remains when placed in a storage medium) between explanatory and non-explanatory allows us to make significantly more progress than merely making the statement that evolution is “random, but not random”. 

    Non-explanatory knowledge is created when genetic variation occurs in the absence of a problem to solve. Cells cannot conceive of problems or explanatory theories. Nor could they test those variations for internal consistency because only explanatory knowledge can be constant or inconsistent with itself. However, these adaptations would be tested by the environment. 

    Genes are biological replicators. The do have “problems” of getting copied into the next generation. But only we can conceive of this as a problem in the necessary sense. So, in the case of Darwinism, we can be far more specific: conjectured genetic variations are random in respect to any specific problem to be solved. 

    There is nothing in a tiger that contains explanatory theories about how different patterns of stripes (camouflage) could help them obtain more food. Nor could those cells conceive of it as such if they did. Nor would those cells have previously contained the knowledge of how to perform those adaptations. 

    Non-explanatory knowledge is genuinely created when conjectured genetic variations occur that influence a tiger’s stripes and some of those conjectures are refuted by natural selection – but that conjecture occurred in a way that was random to the problem of obtaining more food via different forms of camouflage. 

    So, when we integrate evolution with our current, best universal explanation for the growth of knowledge, Darwinism becomes an even better explanation. This includes the growth of knowledge used to improve biological organisms. 

  17. Mung,

    OMTWO seems to think that if you can’t infer design based upon his sequences you therefore have no warrant to ever make a design inference.

    No, not at all. Why don’t you come here and ask me myself instead of putting words in my mouth.
    I’m simply asking can ID tell us anything at all about the strings in question.
    I’m not asking you to infer design, calculate CSI or anything at all like that. If you see my original post, I’m simply asking can ID influence my decision one way or the other by providing some currently unknown information about each document.
    If you want to infer design, that’s fine.
    If you don’t want to and then later make a design inference, that’s also fine.

    But if SETI were ever to post a signal they want the world to help decode, it’ll be quite clear what’ll happen at UD with regards to it.

    Nothing. At. All.

  18. Gpuccio,

    I should not have done that, because you don’t deserve any serious attention, but I blasted your two sequences and found no similarities, wasting some 5 minutes of my time.

    Fair enough. I did not ask you to do that. I made no claims about the sequences, nor their similarity. You attacked the problem in the way you thought best. Good on you for trying. 

    So, I maintain that I have absolutely no reason to infer design for those two strings.

    Fine. Great. So nothing in it between them for you. For all I know they are just two random strings. I’ve not developed a skill set like you lot at UD to even begin to work it out. So thanks for trying. I’ll put you down for “toss a coin”. 

    Is that an admission?

    Of what? That what you propose is feasible? Of course it is. Where we differ would be on the results. You’d infer design from “function” and I would not. The simple fact is that you are wrong with your opinions about protein domains and the probability of their origin etc. You will never accept it because it forms such a central plank of your “why ID is true” belief system but nonetheless you are wrong.

    I don’t know if I have expressed my claim with sufficient clarity (I am too lazy to check), but my claim is that affirming that “recombination can help solving the problem of rugged landscapes in the biological context” is not supported by any evidence.

    If you don’t look for the evidence because you don’t believe it exists then you’ll never find it, hence providing “evidence” for your original thought.

    Is that an admission?

    Not as you mean it, no, given that you are wrong.

  19. gpuccio: the transition from A to A1 is naturally selected, and then the transition from A1 to B happens with the same probability and the same probabilistic resource, as the effect of selection. The probability of having two events, each of probability 1:2^150, in 2^100 attempts, is, according to the binomial distribution: 3.944307e-31.

    Confused on this. If the transition from A to A1 is naturally selected, then why is the probability 1:2^150? In a large population, a beneficial mutation reaches fixation with probability of roughly 2s, where s is the selection coefficient.
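    For what it’s worth, gpuccio’s quoted figure does check out as arithmetic, under a Poisson approximation (a sketch; the exact binomial sum is intractable at n = 2^100):

```python
# Probability of at least two successes in n = 2**100 trials with p = 2**-150.
# The expected count lam = n*p is tiny (2**-50), so the binomial tail is well
# approximated by the Poisson result P(X >= 2) ~ lam**2 / 2.
n = 2 ** 100
p = 2 ** -150
lam = n * p                    # expected number of successes: 2**-50
p_at_least_two = lam ** 2 / 2  # ~3.9443e-31, matching the quoted 3.944307e-31
print(p_at_least_two)
```

    Note that this computes the probability of the two steps occurring without selection; the point of the reply above is that selection changes the relevant probability entirely.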

    gpuccio: I appreciate that you don’t agree with the ideas of people like Moran, Myers and similar about normal genomes.

    Larry Moran is not a darwinist. From what we can see, Myers uses the term darwinist ironically. You might want to cite Dawkins, who really is a darwinist, and as such, is considered somewhat dated by many modern evolutionary biologists.

    gpuccio: I am as interested as you are in huge genomes, but I have found no detailed information about them. If you have something on the matter, I would appreciate it if you could share.

    Some organisms have been observed to double their genomes in a generation, such as many species of flowering plants. That’s a lot of redundancy. Onions have larger genomes than humans. Not sure what information you want?

    gpuccio: But minor molecular changes are not complex functional information…

    But we can see how the complex structure evolved in incremental, selectable steps. There’s no barrier.

    gpuccio: “recombination can help solving the problem of rugged landscapes in the biological context” is not supported by any evidence.

    Yes, it’s supported by studies of evolutionary algorithms and how they work on rugged landscapes. And it’s supported by various studies of protein-space. 

    gpuccio:Again, I am not “rejecting a plausible mechanism without evidence”.

    Sure you did.

    gpuccio: Correctly, as I have explained. Because I would infer design for the whole structure of nylonase (and I would be right), and not for the transition from penicillinase to nylonase.

    The new function wasn’t designed. It evolved.


  20. Mung,

    OMTWO: Both string are designed. They both have a length of 2001. Ain’t that funny. Can we move on now?

    I’ll put you in the same category as Joe then? Strings that were on paper are designed. I thought you were capable of more. But perhaps I overestimated you. 

    But no, it’s not a reference to 2001. And so you are 1 out. It really is only 2000 characters.

    You can move on, you can do whatever you like. You can leave that as your final answer, if that’s your desire. Fine by me, but if you ever want to update your answer do let me know. 


  21. I would infer design for the whole structure of nylonase (and I would be right)

    Science is really that easy? Gah, I’ve been doing it all wrong!

  22. Petrushka:

    Gpuccio’s theory is nothing more than the molecular equivalent of no transitional fossils.

    Only less so. ‘Fossil transitionals’ aren’t elbowed out of existence by the very process of evolution. But so-called intermediates on a path of molecular amendment are outcompeted by fitter descendant sequences, or are simply the eliminated sequence in a stochastic fixation process – so where do GP/Mung etc think these ‘intermediates’ ought to have been preserved, ‘if evolution were true’? The unavoidable consequences of the theory are twisted into something inexplicable and embarrassing!

    Dead DNA is gone, gone, gone. History, in biology more than anything else, is written by the victors. All we have are the descendants of survivors, mutated and filtered.

  23. You have to just love how they appeal to recombination when they feel they need to, but at other times it seems they think it totally irrelevant.

    Bullshit. For my part, I never shut up about recombination. It is a very important force. And it has clearly been of great historic significance, as witness the many recurring sequences, in both sense and antisense orientations, in functionally unrelated parts of the genome. If one is lukewarm about common descent, of course, one will argue that these are all the same or similar due to common design. But ‘lateral’ within-genome duplication makes exactly the same prediction as whole-genome duplication in descent: a nested hierarchy of markers. The same techniques of phylogenetic tree-building yield the same very strong support for either:

    1) Common Descent

    2) Common Design by a designer to whom fooling us into thinking it’s common descent appears much more important than simply designing the damn thing without such unnecessary restriction.

  24. [recombination] has clearly been of great historic significance, as witness the many recurring sequences, in both sense and antisense orientations, in functionally unrelated parts of the genome.

    Rereading, I appear to flip here from simple reshuffling of genes to duplication. It’s all recombination, of course. Just to be clear: anything that changes the physical sequence of bases on a chromosome, or swaps whole or part-chromosomes, or merges related or unrelated sequences from separate organisms, is recombination. One can trace the relationships between sequences and uncover a lot of history, because recombinational events make excellent markers, in addition to being a powerful mechanism of evolutionary ‘exploration’ in themselves. Unlike point mutations, which have only 3 options available and a reasonable probability of returning to their start point in 2 steps, recombinational events are highly unlikely to ever occur twice, and even less likely to reverse. Their signal slowly decays, but this simply erases that particular marker, rather than invalidating the ones that can still be reliably identified.

  25. Joe,

    That is a separate question We do NOT have to know the content to infer design.

    But you are not inferring design at all. You are simply saying “all data on the internet is designed, as data cannot get on the internet without human intervention”. So by that definition every string I might present is designed. If I write down how many birds fly over in a day, or the frequency of radioactive decay detections, then according to you that data is “designed” simply because it was written down. ID is not very useful, is it? All you seem to do is walk around pointing at things saying “yep, designed”.

    So you proclaiming “victory” is somewhat premature. You don’t even have to examine the string itself before saying “design”. What good is that?

    Perhaps to you. But then again you think a ribosome is a genome.

    You think that ribosomes have a non-physical component but can’t prove it.

    They may not mean anything. And without a “Rosetta Stone” or an endless supply of funds, we would most likely never figure it out. However, just its existence would tell us more in the short term. And there would be no reason to look for any meaning without first determining design.

    You contradict yourself. You’ve established design in both my documents (they are on the internet!) but have failed to look for “meaning”. So given that your detection of design was in fact trivial (it was on paper = design), do you want to have a go at the meaning of the documents instead?

    In your case, absolutely. In some real world case, it would all depend.

    Then why don’t you prepare for that real world case by doing what you’d do there on my documents? Get a bit of practice in?

    It’s amazing how many excuses you lot come up with to avoid doing the thing that you claim not only can be done but is done day after day.

    Let’s say an archaeologist found two tablets with those strings on. They just go “yep, designed” and move on? No, but that’s what you do.  

  26. Joe,

    But that doesn’t have anything to do with ID. And it doesn’t have anything to do with evolutionism. So what is your point besides proving that you are a clueless strawman designer? Or is that what you’re shooting for?

    This is how UD defines what ID is: http://www.uncommondescent.com/id-defined/

    In a broader sense, Intelligent Design is simply the science of design detection — how to recognize patterns arranged by an intelligent cause for a purpose. Design detection is used in a number of scientific fields, including anthropology, forensic sciences that seek to explain the cause of events such as a death or fire, cryptanalysis and the search for extraterrestrial intelligence (SETI). An inference that certain biological information may be the product of an intelligent cause can be tested or evaluated in the same manner as scientists daily test for design in other sciences.

    So what I’m asking, in essence, is that you test or evaluate my documents in the same manner as scientists daily test for design in other sciences. Yet it seems that nobody is able to recognize patterns arranged by an intelligent cause for a purpose, if those documents indeed contain such a pattern. Just knowing that one did and one did not, for example, would essentially solve my problem. But despite this being the self-proclaimed reason for ID’s existence, it seems nobody can actually do it!


    statistical and experimental evidence that tends to rule out chance as a plausible explanation.

    So it seems that this is just an empty claim; when faced with actually doing it, ID folds.

  27. Mung: So complex stuff that already existed shifted around and you say this is proof that the complex stuff evolved in incremental steps?

    The reptilian middle ear is much less complex than the mammalian middle ear.

  28. Isn’t a watch just complex stuff that’s been shifted around? In any other context an ID advocate would be claiming that the arrangement of parts to create a new function would be proof of ID.


  29. gpuccio: The “known causes” have nothing to do with the assessment of dFSCI. The requisites to assess dFSCI are two (as I have said millions of times):

    a) High functional information in the string (excludes RV as an explanation)

    b) No known necessity mechanism that can explain the string (excludes necessity explanation)

    Can’t seem to resolve the apparent contradiction between the first statement and b).

    Also, we’re still left with your leaky bucket explanation. See keiths’ description. 


  30. gpuccio responds to Zachriel:

    To Zachriel (at TSZ):

    See Keith’s description

    No, thank you. Already did, and it made my views about human nature even worse than they already were.

    I leave Keith’s masterpieces to you, who seem to appreciate them.

    You are always welcome to comment on more serious issues, as you can do.

    gpuccio,

    Don’t let your emotions get in the way of a learning opportunity. My bucket analogy highlights a serious flaw in your dFSCI argument:

    1. Take a bucket of complex sequences.

    2. Throw out the ones that are explained by a “known mechanism”.

    3. Amazing! Of the sequences that are left, not a single one is explained by a known mechanism!

    4. Later you discover a mechanism that can explain one of the remaining sequences.

    5. Throw it out of the bucket and return to step #3.

    In case it’s not already obvious, here’s the problem:

    a. You want to use the fact that something is in the bucket (i.e. has dFSCI) as an indicator that it is designed (that is, not the result of ‘necessity mechanisms’).

    b. Before you put it in the bucket, you have to rule out known ‘necessity mechanisms’ as the cause.

    c. To rule out known ‘necessity mechanisms’, you can’t look to see if the object is in the bucket, because you haven’t decided whether to put it there yet.

    d. Therefore, in order to decide whether to put it in the bucket, you have to use some criterion other than whether it’s already in the bucket. Obvious, right?

    e. But if you’re using some other criterion, then it’s the other criterion that is doing all the work. You only put something in the bucket after the other criterion is met.

    f. So the fact that something is in the bucket (has dFSCI) is just a restatement of what we already knew by other means. The label of dFSCI adds nothing, so we might as well ignore it.
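    keiths’ steps 1-5 can be made concrete with a toy sketch (all sequence names and mechanism labels here are hypothetical, purely to illustrate the circularity):

```python
# Toy illustration of the "leaky bucket": membership in the bucket is decided
# by ruling out known mechanisms, so the bucket can never hold a counterexample.

# Step 1: a bucket of complex sequences, each mapped to a known mechanism (or None).
sequences = {"seq_A": "known_mechanism", "seq_B": None, "seq_C": None}

def has_dFSCI(seq):
    # Step b: rule out known 'necessity mechanisms' before admitting to the bucket.
    return sequences[seq] is None

# Step 2: throw out the sequences explained by a known mechanism.
bucket = {s for s in sequences if has_dFSCI(s)}

# Step 3: "amazing" -- nothing left in the bucket has a known explanation.
assert all(sequences[s] is None for s in bucket)

# Steps 4-5: a mechanism is later discovered for seq_B...
sequences["seq_B"] = "newly_discovered_mechanism"
bucket = {s for s in sequences if has_dFSCI(s)}  # ...and it silently leaves the bucket
print(bucket)  # {'seq_C'}
```

    The assertion in step 3 can never fail, no matter what the data are, which is the whole point: being in the bucket adds no information beyond the criterion used to fill it.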

  31. Mung

    Two strings of exactly the same length composed of exactly the same 4 characters from the English alphabet, that’s pretty improbable. i’d say designed. so yeah, lump me with Joe.

    Is “pretty improbable” a technical term in ID then? Consider yourself lumped in with Joe.

    Gpuccio,

    Don’t lie!

    I have answered very clearly that no design inference can be done for both strings. That should solve your problem. Neither string is designed.

    Joe and Mung say it’s designed.

    Gpuccio says it’s not.

    Joe,

    And OM, as I have already said, biological information refers to function. We OBSERVE the functionality. We do NOT try to guess what the function, if any, is.

    No need to do all that, just say it’s “pretty improbable” and leave it at that.

    I have determined agency involvement was required. That is all I have to do.

    But that’s trivially true of any piece of data on the internet. If I take a picture of a rock pile then you will claim that it shows design because “pictures require agency involvement”.

    So ID has it easy. When asked “Is X designed” you can say “The fact that you are asking me that means that agency involvement was present and that’s all I have to do”.

    So your claims that ID is like forensic detective work or archaeology don’t add up. Neither of those activities stop when “agency involvement” is detected.

    If there was really a “science of ID/design detection” you’d all come up with the same answer for my, frankly trivial, exercise.   

  32. Joe,

    Just because you are a scientifically illiterate dullard doesn’t mean your trope refutes ID.

    Testing is a large part of science. I’ve tested you. And the results are, well, as expected.

    I’m not trying to “refute ID”. That can’t be done. There is nothing to refute.

    What I’m trying to do is show how the grand claims of “design detection” I quoted from the UD “What is ID” section are just lies.

  33. Mung,

    Two strings of exactly the same length composed of exactly the same 4 characters from the English alphabet, that’s pretty improbable. i’d say designed. so yeah, lump me with Joe.

    What if I told you that the letters represented wind directions (N,S,E,W) and had become transposed with the letters used in the document?

    Now simply recording the way the wind was blowing at 1 second intervals means that the way the wind was blowing was designed. You just said so yourself.

    I realise you realise the absurdity of your position, you are a Poe, but others take the same position in all seriousness and I thank you for saying what they are too afraid to say.

    It also means that you can look at any segment of DNA and say “yep, pretty improbable, designed”.

    So any two documents that are the same length where the content uses the same characters are designed? Regardless of the actual content? Or how many other potential “documents” are out there?

    I hope you realise how foolish this is making you look, especially as the 3 ID supporters that have braved my trivial challenge can’t actually agree on any aspect of the challenge. 

  34. Gpuccio,
    I thought you did not want to play any more? Now you are calling me a liar for reporting what you are all saying?


    So, you are definitely lying.

    Whatever.

    We answered your question. Maybe one of us is wrong. Maybe we considered different questions.

    I never asked for a determination of design/not design. Here is what I asked originally:

    Would you be able to help me, Mung, and determine which page is the correct page? Which page should I investigate further and which should I discard, as that’s the choice (limited budget, don’t ya know). Which page is more interesting than the other? If you discover that design factors into it, are both designed? Neither? One but not the other? Which? For bonus points, anything further you can tell me about the contents of either document would be appreciated.

    http://theskepticalzone.com/wp/?p=1352&cpage=2#comment-16703 If you would like to revise your answer in light of that please do so.

    I have clearly stated that we could infer design for both sheets with strings printed.

    Which was not what I asked for. You answered the question you thought was asked. I made it clear that the container of the data is not relevant, but nonetheless you make it relevant.

    If instead we consider the strings themselves, we cannot infer design.

    Mung and Joe have done so, on the basis that the string(s) are “pretty improbable” they have concluded design. You have concluded the opposite. Therefore how am I a liar?

    That is in perfect accord with the definition of dFSCI and of design inference. I challenge you to demonstrate the contrary.

    I am doing so with my little game.

    So, in the end, you are simply lying.

    Then you win! It’s simple! The fact remains that some of you are concluding design because “things don’t get printed on paper on their own” and I’m reporting on that and you don’t like it.

    If “design detection” really existed you’d all come to the same conclusion quite quickly about my two documents.

    Yet you cannot even agree on the question that’s being asked despite it being very plain.

    We answered your question. Maybe one of us is wrong. Maybe we considered different questions.

    Yet the fact remains that Joe and Mung say design and you do not. Whether you considered different questions is not really my problem; I only asked one – which of the two documents is more interesting, and can they be categorised differently on the basis of their contents (and not the paper they are printed on!)?

    So call me a liar if it makes you feel better but it does not cover the fact that of the 3 of you that have answered I’ve had two different answers (design/not designed).

  35. Gpuccio,

    So, you are definitely lying.

    Ah, I see what you are getting at. I say that nobody can do what UD says ID can do (but it seems despite this being the self proclaimed reason for ID’s existence nobody can actually do it!) and you say I am a liar because people at UD have attempted my challenge.

    You have misunderstood me. What I’m saying is that *I know the answer* to my little challenge, and so far nobody has used ID to solve it. Nor even come close.

    So when I say that nobody can do it, I mean that nobody has done it correctly yet. Yes, attempts have been made, but yours was the only serious one. Nonetheless you failed, and that’s what I’m getting at. So when I say that nobody can do it, it being the reason for ID, that’s still true. You’ve not done it, Joe’s not done it, and neither has Mung.
    And you’ve all come up with different answers, that much is true….

  36. gpuccio: I consider that a string exhibits dFSCI only if both these criteria are satisfied: 
    a) High functional information in the string (excludes RV as an explanation)
    b) No known necessity mechanism that can explain the string (excludes necessity explanation)

    Then I infer design.

    Okay. So we’re working with a trichotomy. It’s really just another restatement of the Explanatory Filter. 

    The specific problem is that evolution has both random and deterministic aspects. Gpuccio will argue that evolution alternates the two mechanisms and is therefore excluded. That argument doesn’t work, though, because the test for “high functional information” only precludes completely random sequences, not incremental increases in functional complexity. 

  37. The problem is that the origin of “high functional information” is the very thing being contested. Large amounts say nothing about its origin.

    Among other problems, the length of a gene sequence says absolutely nothing about how many steps removed it is from a non-functional precursor. And nothing at all about its history.

  38. Gpuccio,

    You are a liar just the same.

    I know what I am, but what are you?

    I did not infer design for either string.

    I never said you did. I said that you’ve inferred design for *all strings printed on paper*, exactly as you said yourself. Now, for the particular strings in question (rather than their container) you have not inferred design, which I have already mentioned.

    Even if one of them, or both, have a function that I did not recognize, I have given one or two false negatives.

    Great! So that’s essentially a “pass” really. Which is fine, you can’t be wrong with a pass as you point out.

    Which is exactly what can be expected in a design inference.

    So all proteins start out as not-designed until you find their function and then they become designed? Got it.

    If I had given one or two false positives, I would have failed.

    Congratulations, you did not fail! You did not succeed either, so perhaps next time.

    But not so. You don’t understand the ID theory, do you? Or you are just a liar.

    Well, that depends. So far “ID Theory” has told me that strings printed on paper are designed, which I never disputed and specifically mentioned as irrelevant from the start. Furthermore, Joe and Mung are saying design and you are not. So when “ID theory” makes up its mind, feel free to let me know. In the meantime you may continue to call me a liar, whatever makes you feel better.

    statistical and experimental evidence that tends to rule out chance as a plausible explanation.

    Yeah, ID ain’t ruling out anything except that where you find manufactured paper you’ll find a paper mill.

  39. Mung,

    Bet on it.

    The only evidence I’ve seen so far of your understanding of “ID Theory”, when presented with a puzzle that should be trivial for “ID Theory” to solve, is:

    Two strings of exactly the same length composed of exactly the same 4 characters from the English alphabet, that’s pretty improbable. i’d say designed. so yeah, lump me with Joe.

    So frankly, your opinion of what I do and do not understand with regard to “ID Theory” is irrelevant until and unless you can prove that you can actually do something with “ID Theory” that does not revolve around your misunderstandings of evolution.

  40. Gpuccio,

    So please, show those incremental increases in functional complexity, each of them of low complexity, each of them naturally selectable in respect to what was there before, for most basic protein domains (you can just start with one, then we will see).

    And then presumably you’ll explain how the Intelligent Designer achieved the same?

    Let me save you the trouble, Joe already told me!

    They were designed that way!!!!

  41. What gpuccio fails to address is the rather basic question of how the Designer knows the properties of yet to be created molecules.

    KF punts this question by asserting that the Designer must have capabilities beyond Venter’s. 

    Gpuccio asserts the Designer must be non-material.

    Can anyone say ad-hoc? 

    Exactly how useless is an invented, imaginary sky-fairy having whatever attributes and capabilities and motives needed to explain the gaps that present themselves today, and which will acquire whatever attributes are needed when gaps are closed or new ones discovered?

    One cannot argue against imagination. As Critical Rationalist points out, science advances by imagining explanations.

    The difference between science and fantasy is that science limits its imagination to testable propositions. This is why, even in hard sciences like physics, conjectures that have no testable entailments are considered  to be puffery. Sometimes interesting, but not science.

    The problem with ID is not that it is proven wrong, but that it doesn’t lead to useful research. Consider Douglas Axe. How useful is it to assert that we don’t know the detailed history of proteins? Or that the specific history, if known, would appear improbable, like the list of lotto winners?

    How probable is the specific ancestry of any human being? It would seem that anyone familiar with mathematics would know that the probability of something that has already happened is one.

    What physical law is violated by the string of improbabilities that led to your ancestors meeting? Or the specific lotto winners? Retrospective astonishment is not good mathematics and not science.

  42. It also seems to me Gpuccio is another of the “video evidence or it did not happen” crowd. Whatever evidence you might present is never good enough. 

    Great strides have been made recently in understanding the origin of protein domains yet Gpuccio knew yesterday, knows today and will know tomorrow the explanation already. Before any research was done at all, he knew the answer. Regardless of how much research will be done, he knows the answer. 


    However, exceptions to this rule allow us to begin to determine the process by which novel folds can develop from ancestral folds and possibly even how the first folds came into existence. Various lines of research have shown that thermodynamic stability, designability, functional flexibility and structural drift all play important roles in shaping the distribution and variation of structural families in nature.

    http://www.els.net/WileyCDA/ElsArticle/refId-a0020202.html But blah blah blah, eh Gpuccio? You want this:

    So please, show those incremental increases in functional complexity, each of them of low complexity, each of them naturally selectable in respect to what was there before, for most basic protein domains (you can just start with one, then we will see).

    A step by step video essentially. For stuff that happened in the deep deep past. And without that you’ll simply dismiss every other bit of evidence that is produced for a natural origin for whatever spurious reason you think of at the time.

    The only saving grace is that before too much longer the computing-power limitations that currently make some problems seem intractable will be lifted somewhat. So perhaps you’ll get your start-to-end video recording then, but even then you’ll just say “but it’s a simulation, it proves nothing”.

    So my arguments with you, Gpuccio, do not have the intent of getting you to change your mind; you did not make it up on the basis of evidence, so evidence won’t be able to change it.

    I just want to illustrate the stark gulf between claim and reality in the ID community.

  43. Gpuccio would be a hoot on a jury.

    Asked to provide the best explanation for events, he would have to say, in the absence of videotape, that the best explanation would be intervention by non-material entities.

    And of course, videotape is just a simulation and could be faked.

  44. GP will protest that humans are intelligent agents and potential causes of crimes. That’s an empirical fact.

    I would point out that evolution is also an intelligent agent capable of creating new function.

    What you lack in biology and in jury trials is the detailed, step by step history. You have to infer the details and come to the best explanation.

    I would also point out the utter, complete lack of any entity capable of designing biological molecules, other than evolution.

    When you are on a jury, you are generally bound, in your theory-making, to whether a specific person was the agent. You won’t get far with imaginary, invisible, immaterial agents.

  45. In fairness, I should note that the circularity problem did not originate with gpuccio.  Gpuccio’s dFSCI is just a modified version of Dembski’s CSI, which has been plagued by circularity since its inception.  Unfortunately, gpuccio failed to notice and correct the problem he inherited from Dembski.

    Here’s the circularity in Dembski’s argument:

    1. To safely conclude that an object is designed, we need to establish that it could not have been produced by unintelligent natural causes.

    2. We can decide whether an object could have been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).

    3. To determine whether something has CSI, we use a multiplicative formula for SC that includes the factor P(T|H), which represents the probability of producing the object in question via “Darwinian and other material mechanisms.”

    4. We compute that probability, plug it into the formula, and then take the negative log base 2 of the entire product to get an answer in “bits of SC”.  The smaller P(T|H) is, the higher the SC value.

    5. If the SC value exceeds the threshold, we conclude that unintelligent processes could not have produced the object.  We deem it to have CSI and we conclude that it was designed.

    6. To summarize:  to establish that something has CSI, we need to show that it could not have been produced by unguided evolution or any other unintelligent process.  Once we know that it has CSI, we conclude that it is designed — that is, that it could not have been produced by unguided evolution or any other unintelligent process.

    7. In other words, we conclude that something didn’t evolve only if we already know that it didn’t evolve. CSI is just window dressing for this rather uninteresting fact.
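    The multiplicative formula in steps 3–4 can be sketched numerically. This is a minimal illustration, not Dembski’s full apparatus: it assumes his 2005 form χ = −log₂[10¹²⁰ · φ_S(T) · P(T|H)], where 10¹²⁰ is his replicational-resources factor and φ_S(T) measures the descriptive complexity of the pattern; the example values for φ_S(T) and P(T|H) are made up for illustration.

    ```python
    import math

    def specified_complexity_bits(p_t_given_h, phi_s_t, replications=1e120):
        """Bits of specified complexity, following Dembski (2005):
        chi = -log2(replications * phi_S(T) * P(T|H)).
        The smaller P(T|H) is, the larger chi becomes."""
        return -math.log2(replications * phi_s_t * p_t_given_h)

    # Hypothetical pattern: descriptive complexity phi_S(T) = 1e5 and
    # P(T|H) = 2**-500 under the chance hypothesis H.
    chi = specified_complexity_bits(2.0**-500, 1e5)
    print(round(chi, 1))  # ~84.8 bits; positive chi is Dembski's design signal
    ```

    Note that the circularity complaint above is untouched by the arithmetic: the contested step is how one estimates P(T|H) in the first place, not the logarithm taken afterwards.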

    Though the details are slightly different, the same circularity undermines gpuccio’s dFSCI argument.


  46. I didn’t follow that. What your list doesn’t say (as far as I can tell) is that the calculations themselves rest on foregone conclusions. Given sufficient knowledge of conditions, it’s in principle possible to determine that something was unlikely. Given the near-infinity of interdependent variables inherent in reality, it’s pretty safe to say that nearly everything that happens is vanishingly unlikely. We don’t need even much of a sample of these variables to do the calculation closely enough to establish this.

    And this in turn means that one simply cannot induce “design” from looking at an object or event. One must identify and operationally define the design mechanism, and then WATCH it happen. We are wading through a sea of CSI every which way all day long.  This is what I’ve called the “every bridge hand is a miracle” fallacy. Clearly, all bridge hands are chock full of CSI – they’re complex, they’re fully specified, they are all vanishingly improbable.
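    The “every bridge hand is a miracle” point is easy to quantify. A quick sketch (assuming nothing beyond standard combinatorics):

    ```python
    from math import comb, log2

    # Number of distinct 13-card bridge hands dealt from a 52-card deck
    hands = comb(52, 13)
    p = 1 / hands               # probability of any one specific hand
    print(hands)                # 635013559600
    print(round(-log2(p), 1))   # ~39.2 bits of improbability per hand
    ```

    So every hand is indeed vanishingly unlikely (about 1 in 6.35 × 10¹¹), though at ~39 bits it sits well below the 500-bit threshold discussed later in the thread.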

  47. To be fair, gpuccio doesn’t conclude it’s beyond RMNS unless there’s really lots of CSI.

  48. Flint,

    This is what I’ve called the “every bridge hand is a miracle” fallacy. Clearly, all bridge hands are chock full of CSI – they’re complex, they’re fully specified, they are all vanishingly improbable.

    They’re complex and improbable, but not “fully specified” in the way Dembski and other IDers intend. For a bridge hand to be specified, there has to be some independent reason that it is special to the “semiotic agents” involved, apart from the mere fact that it happened to be dealt to you.

    For example, if I predict ahead of time that I will receive a specific bridge hand, and then I receive exactly the cards I predicted, then that bridge hand is clearly specified, even if it is a thoroughly average hand by normal bridge standards. You would rightly suspect that the dealer and I are in cahoots, that something fishy is going on, or maybe even (if you had ruled out the more mundane possibilities) that I was prescient.  You wouldn’t think it had happened by chance, particularly if I was able to repeat the feat.

    However, if I received the same improbable hand without specifying it in advance, it would be a thoroughly unremarkable event, and no one would take notice.

    IDers fall prey to many fallacies, but the “every bridge hand is a miracle” fallacy is not one of them. At least, not one that Dembski and gpuccio fall prey to.

  49. I think that one computes (in Dembski’s argument) bits of SI, not bits of SC.  SI is a concept originated by Leslie Orgel; the C part comes in as an all-or-none assessment that there are at least 500 bits of SI. If that threshold is met, you say there is CSI.

    That value was chosen to be one that could not show up even once in the whole history of the Universe by pure random happenstance. (Personally, I am willing to acknowledge the meaningfulness of SI as a concept in simple genetic algorithm models, and the reasonableness of saying that a value of SI high enough to constitute SC is implausible as having originated by pure mutation, in the absence of natural selection. Don’t everybody boo at once.)
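    The origin of that 500-bit figure can be sketched. Dembski’s universal probability bound multiplies rough cosmological upper limits to bound the number of events that could ever have occurred; the factors below are the ones he uses in The Design Inference:

    ```python
    from math import log2

    # ~1e80 elementary particles in the observable universe,
    # ~1e45 state transitions per second (inverse Planck time),
    # ~1e25 seconds as a generous upper bound on available time.
    total_events = 1e80 * 1e45 * 1e25   # ~1e150 possible elementary events
    print(round(log2(total_events)))    # ~498 bits, rounded up to 500
    ```

    Anything requiring more than ~500 bits of pure luck is, on this accounting, not expected to occur by chance even once in the universe’s history.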

    Where keiths is asserting circularity is where natural selection is ruled out as a source of the SI.  Dembski did it differently. He had his Law of Conservation of Complex Specified Information (LCCSI).  That was supposed to show that there could be no combination of deterministic and stochastic processes that could generate SC. It has been disproven on two different grounds, by Jeffrey Shallit and Wesley Elsberry, and by me.

    If gpuccio and others who use SI and SC do not rely on Dembski’s LCCSI theorem, they then need to have some other way of ruling out that natural selection made the SI high enough to be SC. That is where gpuccio invokes the ruling-out of deterministic natural causes, and where there seems to be circularity as he does so.

    (As an aside, yes, Dembski also had a step where deterministic natural causes were ruled out, but he seemed to only invoke that to get rid of rather simple and trivial natural forces. The heavy lifting in arguing that NS could not be responsible for the SI was done by the LCCSI.)
