Conflicting Definitions of “Specified” in ID

I see that in the unending TSZ and Jerad thread, Joe has written in response to R0bb:

Try to compress the works of Shakespeare - CSI. Try to compress any encyclopedia - CSI. Even Stephen C. Meyer says CSI is not amenable to compression.

A protein sequence is not compressible - CSI.

So please reference Dembski and I will find Meyer's quote.

To save R0bb the effort: using Specification: The Pattern That Signifies Intelligence by William Dembski, which is his most recent publication on specification, turn to page 15, where he discusses the difference between two bit strings, (ψR) and (R). (ψR) is the bit string corresponding to the integers in binary (clearly easily compressible). (R), to quote Dembski, "cannot, so far as we can tell, be described any more simply than by repeating the sequence". He then goes on to explain that (ψR) is an example of a specified string whereas (R) is not.

This conflict between Dembski's definition of "specified", which he quite explicitly links to low Kolmogorov complexity (see pp. 9-12), and others who take the reverse view appears to be a problem that most of the ID community doesn't know about and the rest choose to ignore. I discussed this with Gpuccio a couple of years ago. He at least recognised the conflict, and his response was that he didn't care much what Dembski's view is, which at least is honest.
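A crude way to see the distinction Dembski is drawing is to run both kinds of string through a general-purpose compressor. Kolmogorov complexity itself is uncomputable, so zlib is only a rough stand-in, and the string lengths and ranges below are arbitrary choices for illustration:

```python
import random
import zlib

# (psi R): the positive integers written in binary and concatenated -- the
# short description "count upward in binary" generates the whole string.
psi_r = "".join(format(i, "b") for i in range(1, 1000)).encode()

# (R): a coin-flip string of the same length -- no pattern for a compressor
# to exploit beyond the two-symbol alphabet itself.
rng = random.Random(0)
r = "".join(str(rng.getrandbits(1)) for _ in range(len(psi_r))).encode()

for name, s in (("psi_R", psi_r), ("R", r)):
    print(f"{name}: {len(s)} bytes -> {len(zlib.compress(s, 9))} bytes compressed")

# psi_R compresses noticeably better than R, mirroring Dembski's point that
# (psi R) has a short description while (R) apparently does not.
```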

261 thoughts on “Conflicting Definitions of ‘Specified’ in ID”

  1. Imagine that gpuccio comes across strings generated by Lizzie’s program. He figures out that they are functional (the product of their head-run lengths is very high) and specified (very few strings can do that). He does not know that they can be generated by a simple algorithm. What will he conclude? Will he change his conclusion when he reads the description of the program? 

  2. Zachriel: So evolutionary algorithms can generate dFSCI, per your definition #2-4.

    gpuccio: Sure, why not? 

    Okay. 

gpuccio: Your software can generate words. That's fun again. What a pity that it has to have a whole dictionary inside to do that! But that's not a problem, let's just call the dictionary "a landscape", and not an oracle that is part of the algorithm, and the fun starts again.

    That’s right. The landscape is an abstraction of an environment, which is something outside the population of replicators. (Word Mutagenation was written to respond to a very specific claim about words introduced by an ID proponent.) Think of it as a map to be traversed. 

    You yourself reference landscapes, such as when citing studies of functional complexity in proteins. Kairosfocus also references landscapes when he points to his “isolated islands of function in vast seas of non function”. By the way, Word Mutagenation addresses these isolated islands of function. They are traversed, not laterally, but vertically through inheritance. 

    gpuccio: No algorithm, of any kind, can ever generate dFSCI for a function about which it has no direct or indirect information.

    Obviously. This relationship is represented by a fitness landscape which returns relative fitness for a given phenotype. You could use a physical environment instead, such as experiments with protein evolution, bacteria in the lab, or birds in the wild. 

    You seem to be confusing the model with the thing being modeled. Word Mutagenation can’t address biological evolution specifically, but it can address general statements about evolutionary processes, such as “isolated islands of function in vast seas of non function”.
     

  3. gpuccio: It’s the same reason why copying a string of DNA is not creating new dFSCI. But I am afraid that you guys cannot even understand that simple concept.

    Not confused on that point. But if you didn’t know the evolutionary origin of nylonase, you would conclude design, a false positive. Worse, you would know it with certainty! 

  4. Zachriel: Is evolution a necessity mechanism?

    gpuccio: "Evolution", as I have said many times, does not mean anything if it is not better detailed. If you mean the neo darwinian explanation for biologic information, it is obviously an explanation based on RV + NS acting sequentially.

    But is it a *deterministic* explanation per #4? We're not quibbling over the use of the word "deterministic". We thought you were using it broadly, and believe that is still your meaning, but you aren't being clear, and have recently changed your nomenclature.

    gpuccio: A transition from a protein to another similar one, that implies only a few bits of modification, is not a transition that exhibits dFSCI, because it is not complex enough. 

    Which emphasizes that you are excluding known evolutionary transitions per #4 of your definition. Is that correct? Is your “deterministic explanation” dichotomous with design?
     

  5. gpuccio: The fact remains that Word Mutagenation includes a dictionary as an oracle, and the dictionary is part of the algorithm, and should be included in the computation of its complexity.

    There’s information in the relationship between the replicator and the environment. That’s what we mean by selection. So if your notion of complexity means including the natural environment as well as the genome, well, you left a few steps out of your definition. 

    In any case, you are still confusing the model with the thing being modeled. The fitness landscape is just a table of fitness values for each phenotype. While no complete fitness table is available for biology (though there are many for aspects of natural biology), we can still explore how evolution works with evolutionary algorithms. When you make generalized statements about evolution, that’s when an evolutionary algorithm may be useful. 
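As a minimal illustration of "a table of fitness values for each phenotype", here is a sketch; the phenotypes and numbers are made up, and zero fitness for non-words follows the Word Mutagenation convention described later in this thread:

```python
# A fitness landscape as literally a table: phenotype -> relative fitness.
# All entries here are invented for illustration.
landscape = {
    "cat": 2.1,    # valid word: positive fitness
    "cta": 0.0,    # not a word: zero fitness
    "coat": 3.4,
}

def relative_fitness(phenotype):
    # Phenotypes absent from the table get zero fitness.
    return landscape.get(phenotype, 0.0)

print(relative_fitness("coat"))  # 3.4
print(relative_fitness("xyz"))   # 0.0
```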

  6. Since we know that many random sequences code for functional proteins, how do we know how many bits of change are required to optimize a sequence? It is quite possible that no more than a few are required.

  7. Mung: Lateral is differences in genomes. Vertical is rates of reproduction.

    Vertical means through inheritance.  The connection between disparate groups can be found in common ancestors. 

    Zachriel: This relationship is represented by a fitness landscape which returns relative fitness for a given phenotype.

    Mung: And that’s why it’s neither a model of evolution nor a model of any evolutionary process.

    Of course it is. We have heredity, sources of variation, and a relative fit to the environment which determines reproductive success. We can even model very specific biological situations, such as the effect of antibiotics on the evolution of bacteria.

    Mung: No, it can’t.

    Is so. (Handwaving isn’t an argument.) 

  8. Mung: For those following along, the population in a GA is under constant selection.

    That’s not correct. Genetic algorithms can include drift, chance, relaxed or no selection. 

  9. Mung (quoting Koonin): A corollary of Fisher’s theorem is that, assuming that natural selection drives all evolution, the mean fitness of a population cannot decrease during evolution (if the population is to survive, that is).

    That’s not quite correct as the statement only applies to an infinite population. In a finite population, fitness can decrease even if natural selection drives all evolution (which it doesn’t).  

  10. Mung: “Imagining a calculation of CSI may be good enough for you jokers at TSZ, but it’s not good enough for me. There is no ‘CSI_TRUE’ in his program. There is no ‘CSI’ in his program. There is no explicitly defined ‘return’ call in his main() function. The insertion of your code would make his code not even compile. And a return from main would, iirc, end the program execution. “

    OMG! :)

    It’s not supposed to run!

    It’s to show you what is implicitly being done.

    The CSI is calculated according to UD terms.

    If the “digital functional specific information” reaches the UPB threshold, then CSI is “asserted”, as per Joe and gpuccio.

    The "specific functionality" Lizzie was looking for has been attained, as indicated by the dFSCI (ask gpuccio what this means), and therefore CSI is asserted, whether implicitly, explicitly or "wink wink/nudge nudge"; the result is that the program has finished generating a string containing CSI.

    As I said, I, Toronto, relabeled it, not to "add" code to someone else's program, but so it would be clear to you where this CSI calculation was being done; but you, again, have let me down.

    As KF says, please try harder.

    I actually thought you would thank me. :)

     

  11. Mung: “I wrote it on a piece of paper and then put a match to it.

    Does that count?”

    Whatever you believe your skill set can handle. :)

  12. Joe: "Earth to toronto- Lizzie's example does not produce CSI. Not by Dembski's definition and definitely not by any definition I have read from an ID proponent.

    You are confused.”

    Then so is gpuccio and kairosfocus.

    If I have dFSCI above an agreed-upon UPB, I can safely say that the string containing that dFSCI, exhibits CSI, and that’s according to what I have read from gpuccio, and with different terminology, KF.


  13. Mung: Of course they can. They can include pink elephants for all I care. But it does not follow that they actually do.

    Many evolutionary algorithms include drift, chance, relaxed or no selection. Not sure why you think otherwise.

    Mung: Probably even in your Word Mutagenation program. Constant selection. 

    We just recently described a simple evolutionary algorithm that includes no selection whatsoever. It shows how diverging descent with modification leads to a nested hierarchy. 

    Mung: You didn’t put forth an argument, you put forth an assertion. 

    Not just an assertion, but an algorithm that anyone can follow to verify the assertion, even recreating the algorithm independently. 

    Mung: Wikipedia: In evolutionary biology …

    As we said, we are using the word vertical to refer to a common population diverging and climbing separate peaks, rather than a population traversing laterally from one peak to another.

    Zachriel: That's not quite correct as the statement only applies to an infinite population. In a finite population, fitness can decrease even if natural selection drives all evolution (which it doesn't).

    Mung: Well, let's just throw out all of theoretical population genetics then.

    Huh? As we said above, and as your citation supports, the statement that fitness can never decrease only applies to infinite populations (which don't exist, but provide a useful limit) *and* when natural selection drives all evolution (which it doesn't). When a population is finite, fitness can decrease even while selection is operating. A toy simulation makes the point concrete:
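A minimal Wright-Fisher-style sketch; the population size, fitness values, run length, and number of trials are all illustrative assumptions:

```python
import random

# Allele A has fitness 1.00, allele a has fitness 0.95, and selection acts
# every generation. In a small population, drift can still carry the worse
# allele to fixation, dropping mean fitness below where it started.
def final_mean_fitness(pop_size=10, generations=300, seed=0):
    rng = random.Random(seed)
    w = {"A": 1.00, "a": 0.95}
    pop = ["A"] * (pop_size - 1) + ["a"]          # one fresh deleterious mutant
    for _ in range(generations):
        # Fitness-weighted resampling: selection plus drift in one step.
        pop = rng.choices(pop, weights=[w[g] for g in pop], k=pop_size)
    return sum(w[g] for g in pop) / pop_size

start = (9 * 1.00 + 0.95) / 10
drops = sum(final_mean_fitness(seed=s) < start for s in range(200))
print(f"mean fitness ended below its starting value in {drops} of 200 runs")
```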

  14. Joe: “All that is true. However Lizzie did not generate dFSCI- there isn’t any function, no meaning, nothing. “

    Yes, there is "specific functionality" and that is a product of values in the string that results in a number larger than 1.0e60.

    See this line in the program: #define FITNESS_THRESHOLD 1.0e60

     

  15. gpuccio: I am not sure what your problem is.

    Simple. We asked if evolutionary processes are included in #4 of your definition. We also asked whether your use of the term “deterministic explanation” is dichotomous with the design explanation. You made a long comment, and we don’t see a clear answer. 

    Mung: If I take a 504 bit string and “randomize” it, I’ve generated Shannon Information? 

    Yes, that is correct. Do you understand why? Keep in mind that Shannon Information is the theoretical basis of all modern digital communications, including the Internet. Why would a random sequence have more Shannon Information? (A short sketch at the end of this comment illustrates the effect.)

    Mung: Post some examples (fitness values) from your program. Here are some examples from OMTWO

    Which program? Word Mutagenation uses various types of selection, such as length or Scrabble® score. Valid words have positive fitness, while strings that don’t spell perfect words have zero fitness. Another program, as we mentioned, doesn’t use selection whatsoever, but only drift. Still another rewards poetic phrases (iambs, alliteration, rhyme, etc.). Which model we use depends on what aspect of evolution we are investigating. More complex models we’ve worked on include all of these aspects. 

    Joe: And the works of Shakespeare would appear, to any algorithm, to be random, as they haz no short and neat description

    No, that is not correct. Random sequences are generally incompressible, but Shakespeare is quite compressible; compressibility is one of many simple tests of randomness. The English language is full of patterns, which is why evolutionary search is so effective.

    Zachriel: We just recently described a simple evolutionary algorithm that includes no selection whatsoever.

    Joe: Then what makes it an evolutionary algorithm?

    Because the genomes change over time. However, there is no adaptation without selection, of course. 

    Joe: And you still haven’t demonstrated any understanding of nested hierarchies

    You don’t seem inclined even to define sets, much less a nested hierarchy. 
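As promised above, a sketch of why randomizing a string raises its Shannon information. This computes only order-0 (frequency-based) entropy, and the sample texts are arbitrary; order-0 entropy ignores context, which is why Shannon's estimate for contextual English is far lower, around 1 bit per letter:

```python
import math
import random
import string
from collections import Counter

# Order-0 Shannon entropy in bits per character.
def entropy(s):
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

english = "methinks it is like a weasel " * 30
rng = random.Random(0)
alphabet = string.ascii_lowercase + " "
randomized = "".join(rng.choice(alphabet) for _ in english)

print(f"english text:  {entropy(english):.2f} bits/char")
print(f"random string: {entropy(randomized):.2f} bits/char "
      f"(max is log2(27) = {math.log2(27):.2f})")
# The random string sits near the maximum; patterned English sits well below.
```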

  16. Joe,

    Toronto: "Yes, there is "specific functionality" and that is a product of values in the string that results in a number larger than 1.0e60."

    Joe: “Umm that is not functionality…”

    It is as functional as a “string” of DNA that is “code” for a functioning human.

    If a string of DNA contains “information” then so does Lizzie’s.

     

  17. Mung: “I think I am going to create a “Toronto” class in honor of you and incorporate it into my version of Lizzie’s program.

    It will be responsible for calculating CSI.

    Are you honored?”

    I am feeling both honoured and immortalized.

    Look in “double fitness(genome_t *genome) {…”

    In main though, the test is made for the “dFSCI” filled in by his fitness function: “while (genome_array[0].fitness < FITNESS_THRESHOLD) {..”

    The point is that the winning string exhibits the “functionality” required by the “environment”, (i.e. FITNESS_THRESHOLD) before the program can exit.

  18. Mung: “sigh. and just when I was starting to like you.

    Lizzie’s program is written in MatLab. There is no #define FITNESS_THRESHOLD 1.0e60.

    Here is Lizzie’s code:

    while MaxProducts<1.00e+58"

    OMG! :)

    Whether it's set to 1.00e+58 or 1.00e+59 in Lizzie's program, or 1.0e60 in the C program, makes no difference to the "algorithm", which is what we're testing, not language syntax or program structure enforced by a specific language.

    The algorithm is the same, and that is to generate a string whose structure reflects a "…set of values that when multiplied together result in a value exceeding a certain threshold…", and thus allow you to survive in your environment and have children.

    Amazingly, both programs seem to have converged on the same “dFSCI” bit pattern which means the algorithm, regardless of language implementation, is consistent.

    When testing, it's sometimes more productive to have shorter run-times, which means thresholds get adjusted simply for test purposes.

    I’m guessing that’s why she set it to (10**58) instead of (10**60).


  19. Yes; the run times were getting long. MatLab is relatively slow.

    The maximum possible threshold for this algorithm is 4^100 ≈ 1.6 × 10^60 (100 groups of THHHH). If it were set higher than that, the program would never halt.

    The likely reason the program takes longer and longer to approach the maximum threshold is small fluctuations in the populations of offspring. One could make it converge to the maximum more quickly by the use of "latching" or by diminishing the probability of a mutation as the populations approach the maximum. Such a constraint might apply to situations in which kicking particles out of a well becomes less likely the deeper they settle into the well (i.e., they dissipate energy as they fall in). Latching could also correspond to something like radioactive decay, in which there is no reactivation of the decay product.

    But that feature was not in Elizabeth’s program.

    Notice that the computer program never specifies how the heads are grouped. That is an emergent phenomenon that is not part of the program’s algorithm.
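For concreteness, here is a minimal Python reconstruction of the fitness rule discussed in this thread: the product of the lengths of the runs of heads in a 500-flip string, maximized by 100 repeats of THHHH at 4^100 ≈ 1.6e60. The actual MatLab and C implementations may differ in detail:

```python
import random

# Fitness = product of head-run lengths (a reconstruction, not Lizzie's code).
def fitness(flips):
    product, run = 1.0, 0
    for f in flips:
        if f == "H":
            run += 1
        elif run:                 # a run of heads just ended
            product *= run
            run = 0
    if run:                       # string ended mid-run
        product *= run
    return product

best = "THHHH" * 100              # 100 head-runs of length 4
print(fitness(best))              # ~1.6e60, the maximum for 500 flips

random_flips = "".join(random.choice("HT") for _ in range(500))
print(fitness(random_flips))      # typically vastly smaller
```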

  20. Mung,

    Here is the important line: “product *= (double)(j – i);”

    “product” ends up in each “<genome[x]>.fitness” which then gets sorted so that we end up with the highest “fitness” in “<genome[0]>.fitness” for testing against the “threshold”.

    Again, consider this pseudocode since I’m not looking at the actual code as I type this.

    The resulting “500 bit pattern” is termed by gpuccio to be “dFSCI” and the threshold test, if successful, asserts CSI for that bit pattern.

    You’ll see in the program, the threshold test is actually done on the “functionality” but “dFSCI” is implicit due to the fact that “functionally specified” strings are of a known length.

     

    So we don’t actually “calculate” CSI, we “calculate” “dFSCI” which is compared to a “threshold”.

    If you want to “calculate a value” for CSI, ask gpuccio how to do it, since he claims CSI is not a scalar value, but rather a boolean.

     

  21. Mung: “If you had anything damaging to ID you’d be in a rush to post it. You haven’t. You don’t. “

    keiths has posted a great comment with his “bucket of CSI” analogy.

    An IDist has a bucket of things containing CSI that have no known “deterministic mechanism” explaining their existence. As soon as he finds a reason for a thing’s existence, he takes it out of the bucket.

    What’s left in the bucket?

    All the things he can’t explain! :)

    What does he do next?

    He attributes their existence to an “intelligent designer” that he can’t explain.

    So if you can’t explain something, the default position is ID!

     

  22. Mung,

    In main though, the test is made for the “dFSCI” filled in by his fitness function: “while (genome_array[0].fitness < FITNESS_THRESHOLD) {..”

    “dFSCI” is not just the fact that it is in this case a 500 bit string, but the “specific functionality” of the 500 bit pattern, which in this case is the information that results in a “product of terms embedded in the pattern, that exceeds THRESHOLD”.

  23. Mung:

    population size of 2? really? why?

    It’s a parameter, Mung. You can change it.

    It happens to be set to 2 in the version I posted to Codepad because I was testing STEP_MODE, which displays the genomes every n generations, and a population size of 2 was most convenient for that purpose.

  24. Mung,

    Since you’re still confused about ‘latching’ (aka ‘partitioned search’, aka ‘locking’), this is a good place to start reading:

    Dembski Weasels Out

    The ‘latching’ fiasco is one of the more amusing episodes in ID’s checkered history. Dembski and Marks embarrassed themselves by wrongly claiming that Richard Dawkins’ ‘Weasel’ program employed and depended upon latching. They even immortalized their mistake by publishing it in an IEEE paper. That’s gotta hurt.

    To the best of my knowledge, they never retracted their erroneous claims.

    Kairosfocus also got burned by claiming that Weasel latched. Instead of just admitting his mistake and moving on, he compounded his misery by insisting for weeks (and maybe still does even now) that he was right and that Weasel latched. It’s just that it used “implicit quasi-latching” instead of “explicit latching”. No kidding. Those are his phrases.

    I guess an “implicitly quasi-latching” program is one that doesn’t latch but fools IDers into thinking that it does. 

    Good times. We still laugh about that over at AtBC.

  25. Mung asked:

    But it was in mine. And people over there at TSZ immediately cried FOUL!

    They never explained why. Will you?

    Since Mung actually asked a serious question, I will try to answer it.

    The "latching", or the decreasing of the probability of a change in the string as it approaches the "target", corresponds to situations like particles falling into wells and remaining there. In order to do so, energy is gradually shed so that the particle doesn't have enough kinetic energy to pop out of the well again. For example, it could be a simulation of a system of atoms or molecules condensing into a liquid or a solid. So the algorithm is simulating a phenomenon that actually occurs in these kinds of systems.

    "Latching" is roughly analogous to the case of radioactive decay in which the atoms are not reactivated by an environment of radiation. Once they decay, that's it; they don't reactivate unless they are immersed in some intense radiation environment.

    However, in the case of organisms “condensing” toward a different environment (i.e., being selected for fitness, in the language of random variation in the presence of selection), the phenomenon that is operating is a roughly fixed rate of mutation regardless of how “fit” the current generation is relative to the new environment.

    In other words, the mutation rate continues despite the fact that the current generations are close to being the “fittest” relative to the current environment. If the environment (simulated by the target in the program) changes in the course of the evolution of the population, the evolution changes direction and the population starts to converge on the changed environment (new target in the program). You can easily add an outer loop to the program that changes the target in the course of running the program, and you can watch the population track the change.

    Mutation rates are roughly constant over the course of history of an evolving population. Some of that is due to background radiation involving gamma rays or UV. Other causes include simply the probabilities in soft-matter systems that bonds will be broken or swapped or whatever happens in such complex systems simply because they are immersed in a thermal bath.

    So, for an evolving soft-matter system such as a living organism – a system that adapts by producing offspring that are slight variations of itself rather than simply adjusting itself to the new conditions – it is more appropriate to keep the rate of mutation roughly constant. That is more realistic in the case of the evolution of living organisms. "Latching" is not appropriate in this case because it misrepresents what is actually going on with real populations. That would be equivalent to freezing the organisms to match the environment. It is supposed to be soft matter adapting by producing surrogates of itself. It has to stay "soft" in order to track changes in the environment.

    Genetic algorithms include whatever laws of nature apply to the systems being modeled. If those algorithms are relatively good approximations of the laws that apply, what falls out of the GA program is close to what falls out in nature even though it may be impossible to predict it or mathematically model it.

    These kinds of programs have been around for a long time. They often went under the name of Monte Carlo simulations in the past. They were used on the earliest electronic computers, such as the ENIAC, to do calculations for designing the atomic bomb.

  26. Mung et al,
    A problem has arisen! A tornado has torn up my office. I've managed to put everything back together, but, typically, there is one last thing.

    I have two documents left over:

    Document 1

    Document 2

    But I have only one file that is missing a page! The file is entitled "Properties of DNA". We're about to spend a great deal of money researching the data on this page. But to me they both look very similar; there is no way to tell between them at all.

    I’m not sure where the other document came from. Perhaps from another office, but they do all sorts here so no telling what it is. 

    Would you be able to help me, Mung, and determine which page is the correct page? Which page should I investigate further and which should I discard, as that's the choice (limited budget, don't ya know). Which page is more interesting than the other? If you discover that design factors into it, are both designed? Neither? One but not the other? Which?

    For bonus points, anything further you can tell me about the contents of either document would be appreciated. 

    Thanks!

    OMTWO

  27. Mung (quoting): Fitness (often denoted w in population genetics models) is a central idea in evolutionary theory. It can be defined either with respect to a genotype or to a phenotype in a given environment. In either case, it describes the ability to both survive and reproduce, and is equal to the average contribution to the gene pool of the next generation that is made by an average individual of the specified genotype or phenotype. If differences between alleles of a given gene affect fitness, then the frequencies of the alleles will change over generations; the alleles with higher fitness become more common. This process is called natural selection.

    And if you map each genotype or phenotype to its relative fitness, you create a fitness landscape. We can do this with actual phenotypes, such as protein function maps, or abstractly in some evolutionary algorithms. 

     

  28. An intern has just come up with a brilliant idea! Perhaps it’s one page after all that was simply torn in two. Can’t tell by the edges, the tornado messed it all up. So we’ll have to go by the data. 

    Is that possible Mung? We’re dealing with a single page, not two?

    We need to be sure, so please explain your reasoning. 

     

  29. What is worse is, latching hardly matters in Dawkins’s Weasel program. The number of steps it takes to get to the target would be affected in only a minor way by it.

    Nevertheless Dembski, Marks, and others pointed to it as the reason that Weasel did so much better than pure random search. They publicized latching as an important property of the Weasel program, one that critics of ID and creationism were supposedly trying to cover up.

    However, the Weasel program never latched at all. The reason for its success (compared to pure random search) was … selection. The very thing it was advertised to be about.
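A standard reconstruction of a cumulative-selection Weasel (not Dawkins' original code; the population size and mutation rate are assumptions). Note that there is no latching anywhere: already-matched letters can mutate away, yet the run still converges quickly, because of selection:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def weasel(pop=100, mut=0.05, seed=0):
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        # Every position in every child may mutate -- no positions are locked.
        children = [
            "".join(rng.choice(ALPHABET) if rng.random() < mut else c
                    for c in parent)
            for _ in range(pop)
        ]
        # Cumulative selection: keep the child closest to the target.
        parent = max(children,
                     key=lambda s: sum(a == b for a, b in zip(s, TARGET)))
        generation += 1
    return generation

print(weasel())  # typically converges in well under 200 generations
```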

  30. Eric Anderson: People can randomize all they want and can no doubt come up with some increasing amount of pipeline capacity (based on some “fitness” function) and it is entirely irrelevant to the generation of CSI.

    Quite possibly, but Mung asked “If I take a 504 bit string and “randomize” it, I’ve generated Shannon Information?” The answer to that is yes. And whether that is important to claims about CSI or not, it is certainly relevant to information technology.

    Mung: I missed it. 

    Start with a single genome of significant length. Each digit represents a gene for our purposes. Replicate the sequence with reasonable rates of mutation. (Try a single mutation in every other genome for a start.) You can add some limits to population by random culling. What pattern would occur in the descendants?

    Mung: A no selection model would not favor the preservation of any particular trait. Agreed?

    Most traits would be preserved by simple heredity, while no particular allele would be better preserved than any other. Here's the result of such a process after four generations; a runnable sketch of the same process appears at the end of this comment. (Previously, we used commas to make it easier to see the differences. This is harder to read, but allows more precision in grouping.)

    abcdefghijklmnop

    abcdefghijkCmnop

    abcdefghijklmNop

    MbcdefghijklmNop

    abcdefghijklmnIp

    abcdBfghijklmnIp

    abGdefghijklmnIp

    abGdeCghijklmnIp

     
    Turns out that you can reasonably reconstruct the genealogical relationships from the nested pattern. 

    Mung: How close together are these separate peaks?

    They can be very far apart. Keep in mind that most interesting landscapes have many dimensions, so the relationships are not always intuitive.

    Mung: Why isn’t the population evolving together?

    Because there are many different niches. 

    Mung: Show us the runs from your program along with the mean fitness.

    Showing that fitness can decrease in a finite population doesn't require a computer simulation. Consider a simple example. The environment has resources to support only two individuals. Their genotypes are AA and AA. Each generation, they mate and produce three offspring, of which only two survive to the next generation. This goes on for many generations. Then one day, a mutation occurs and one of the A-alleles is reduced to an inferior a-allele. Now our population has AA and Aa. By chance alone, the offspring might look like this: Aa, Aa, Aa. The survivors are Aa, Aa. These reproduce again. By chance alone, the offspring are aa, aa, aa. The A-allele has disappeared. (The odds of this happening depend on whether the a-allele is recessive. If it is, then it can persist over several generations, even with strong selection.)

    Mung: Digital communication was taking place long before Shannon.

    Yes. You might start with the clay tokens preceding cuneiform script. But that’s hardly “modern digital communications, including the Internet”. 

    Mung: I have a randomly generated string. I ‘randomize’ my randomly generated string. According to you, I’ve generated “Shannon Information.”

    From the definition. 

    Mung: How and why? 

    The reason why Shannon defined it this way is described in his seminal paper, A Mathematical Theory of Communication, The Bell System Technical Journal 1948. 

    The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design.

    Shannon’s theory is fundamental to all modern digital communications. 

    Mung: You say that the second value will always be greater than the first. Is that what you are saying? 

    Not necessarily. It depends on the contents of the first string, and the context of the message. 

    Mung: I propose a test:

    Try it, and let us know. By the way, Shannon (and subsequent studies) showed that English, in context, only transmits about 1 bit per letter to a human reader. Amazing, huh? (See Sajak & White, W h – – l o f F – – – – n -.) 
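Returning to the no-selection process described earlier in this comment, here is the promised runnable sketch. The mutation rate, branching factor, and the uppercase-mutation convention are assumptions chosen to match the hand-worked example; the point is that mutations shared by descendants mark clades, which is what lets the genealogy be reconstructed:

```python
import random
import string

# Pure descent with modification: no selection of any kind.
def evolve(genome="abcdefghijklmnop", generations=4, seed=2):
    rng = random.Random(seed)
    population = [(genome, "root")]
    for _ in range(generations):
        children = []
        for seq, lineage in population:
            for k in (0, 1):                    # each genome leaves two copies
                s = list(seq)
                if rng.random() < 0.5:          # ~one mutation per two copies
                    i = rng.randrange(len(s))
                    s[i] = rng.choice(string.ascii_uppercase)
                children.append(("".join(s), f"{lineage}.{k}"))
        population = children
    return population

for seq, lineage in evolve():
    print(lineage, seq)
# Shared uppercase mutations group the sequences into a nested hierarchy
# that mirrors the lineage labels.
```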

  31. gpuccio: But if A (or B), after one of them happens, expand to the whole population, for a deterministic effect like NS, in a short time, then the scenario changes. 

    Or due to neutral drift. Nor does it have to go all the way to fixation, but just a significant number. 

    gpuccio: The real reason why NS completely fails is that complex functions are not deconstructible into simpler intermediates, each of them naturally selectable. We have to stick to real reasons, and not to imagination.

    There are many complex biological structures for which we can trace the history. A common example is the mammalian middle ear, where each step is selectable, while the final result is irreducibly complex. 

    gpuccio (quoting): By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical and implies that evolution of the wildtype phage must have involved not only random substitutions but also other mechanisms.

    Of course. It’s well-established that recombination is essential for traversing rugged landscapes. 

    gpuccio: Darwinists should seriously reflect on this empirical evidence, before fantasizing about what true NS can really do.

    Yes, apparently natural selection is capable of evolving quite adequate proteins — even with one hand tied behind its back!

  32. Joe: And what is the testable hypothesis that accumulations of random mutations didit?

    In the past, you’ve rejected any experiment showing mutation is random with respect to fitness, such as Lederberg & Lederberg, Replica Plating and Indirect Selection of Bacterial Mutants, Journal of Bacteriology 1952. 

     

  33. Along with making unwarranted extrapolations of the number of “required” steps, gp ignores recent research indicating that protein domains are themselves modular.

  34. I asked Joe recently if he thought there was such a thing as a fair die. Or set of dice even.

    He said that there was such a thing. 

    I wonder how he can possibly know that? 

    What testable hypothesis did he test to determine that some dice are random, I wonder.

    And I also wonder why that method, whatever it is, can’t be extended out to other systems. 

    What say you Joe? How did you determine that your dice are fair and why can’t anybody else do a similar thing according to you? 

    Are you special Joe? Only you can arbitrate chance/not chance?

  35. Joe Felsenstein wrote: What is worse is, latching hardly matters in Dawkins’s Weasel program. The number of steps it takes to get to the target would be affected in only a minor way by it.

    Indeed; the latching changes only the rate of convergence and narrows the distribution of the populations that approach maximum fitness.

    There is very little difference between a genetic algorithm that maximizes “likeness” or one that minimizes “difference.” That change in perspective lies entirely in the thought processes that go into representing what is being modeled. Whether one maximizes “fitness” or minimizes the differences between the current population and the “template” – genotype, phenotype, or whatever trait is measurable – that stands in as a representation of the new environment, the result is the same.

    The “target” could be a map of the potential well into which particles are condensing or it could represent a “template” of an organism that is consistent with a given environment.

    The “latching” might be a more accurate representation of particles settling into potential wells by losing the kinetic energy that would kick them out again. But living organisms have a probability of changing even though they are close to being “fit” relative to the new environment. That is what makes them “pliable.” Evolution is closer in analogy to a soft material sagging into the shape of its current container. Move it to another container, and it begins to conform to the new container.

    As far as a genetic algorithm is concerned, the main difference is that a pliable material is thought of as being the same object in successive generations whereas replicating organisms replace themselves with approximate surrogates of themselves in successive generations. To the computer program, there is very little difference unless one is also modeling the intermolecular forces in a pliable material.

    The major reason for fitness peaks instead of potential wells is because, in biology, fitness is the objective measure of how a population relates to a given environment. That is a measure that increases; hence fitness peaks rather than potential wells. Yet ultimately, they are simply mirror images of each other reflected in the horizontal plane. To the algorithm, there is little difference.

    Watching the churnings over at UD – although it is both nauseating and amusing at the same time – does give some insight into why people caught up in ID/creationism have so much trouble understanding things like genetic algorithms. It is because none of them has any hands-on experience with the real world. Instead, they have spent their entire lives in word-gaming without ever reaching out to grasp reality. So they have nothing in common with the experiences of those who have immersed themselves in studying the world around them. Most of the ID/creationist followers seem to hate science despite what they claim.

  36. Joe seyz

    To see if a die is fair, you would weigh and measure it. You would check its balance, its edges and corners, and finally you would roll it to see what type of distribution you got.

    And that rules out its distribution being the product of an algorithm internal to the dice how, exactly? And in place of "check its balance" we could have "design an experiment", right? Just like Lederberg & Lederberg, Replica Plating and Indirect Selection of Bacterial Mutants, Journal of Bacteriology 1952. Fer instance. So how come you can come to a conclusion of fair/not fair, yet dispute outright anybody else's ability to do the same? A goodness-of-fit test on the roll distribution is sketched below.
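"Rolling it to see what type of distribution you get" can be made precise with a chi-square goodness-of-fit test. A minimal sketch, with an assumed 6,000 rolls and the 5%-level critical value for 5 degrees of freedom (about 11.07):

```python
import random
from collections import Counter

# Chi-square statistic for a sequence of die rolls against the fair-die
# expectation of equal counts per face.
def chi_square(rolls):
    counts = Counter(rolls)
    expected = len(rolls) / 6
    return sum((counts.get(face, 0) - expected) ** 2 / expected
               for face in range(1, 7))

rng = random.Random(0)
rolls = [rng.randint(1, 6) for _ in range(6000)]   # stand-in for real rolls
stat = chi_square(rolls)
print(f"chi-square = {stat:.2f}; consistent with fair at 5% level: {stat < 11.07}")
```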

  37. Joe: Also the mutations allow for fitness- ie successful reproduction, so it would appear to be an example of built-in responses to environmental cues.

    That's exactly what the Lederbergs showed wasn't the case. The mutations were not due to environmental cues. You could also look at the Luria–Delbrück experiment.

     

  38. Joe: However that “conclusion” was reached before we knew that bacteria communicate

    That doesn't change the fact that the mutations were random with respect to the environment. But feel free to explain how intercellular communication explains the Lederbergs' experiment. Please be specific.
     

  39. Joe: I communicate with some people and tell each one to bring something specific to a party. 

    Please be more specific. What is communicated? How is each bacterium to know what to bring?

    What if we isolate each colony? Are you saying we would have a different result? 

    Are you saying that if we looked at the actual mutations, the rate of the mutation for antibiotic resistance would not be the background mutation rate?

  40. Now that we have established some ground rules, i.e. that Joe agrees that a die can be said to be fair by examination of its roll distribution etc., I'd also be interested in Joe's answer to this.

  41. Mung,
    Joe won’t help me out and use his “design detection skills” with my problem. He said:

     

    No, you are obviously a loser with nothing to say. Not only that, you don't seem to understand anything beyond misrepresentation and strawmen.

    Apparently that’s his reason. Not an excuse at all. You up for it Mung? Just thought I’d ask…
    http://theskepticalzone.com/wp/?p=1352&cpage=2#comment-16703 

  42. Joe,

    So there isn’t anything my design detection skills can do for you.

    In fact, you could take a quick glance at the two documents I have and the problem I posed. I’ll donate $10 to the charity of your choice should you deign to apply your skills to my trivial problem.

    But if not then I guess your principles are more important to you than making me look a fool. Understandable.

  43. Looks like Joe will just keep repeating the hollow refutation he can’t support. 
     

    No duh. However your position cannot explain replication. And no, the Lederbergs didn't know about bacterial communication. Please at least try to stay focused.

  44. However your position cannot explain replication.

    Joe, falling back to that already? A minute ago it was all going so well, then things got specific and you fall back to this? C’mon.

  45. Joe
     

    No, I am not going over to the UK to appease some lying loser coward. And that is what I would have to do in order to conduct a proper investigation. But you wouldn’t know anything about how to investigate, properly or not. Also my design detection skills have already alerted me to the fact there wasn’t a tornado in the UK. Which means you are just lying, again, as usual.

    No, you can just look at the data. It’s all contained in the 2000 characters. The security team won’t let me release any more details. Nothing. Not a thing. It’s just, you know, what with all the talk about “500 bits” and “needle in a haystack the size of the cosmos bit string probability” I thought, you know, you might be able to just examine the data itself.
    I understood ID to have many tools at its disposal with regard to "messages" and "strings". I thought we could at least discover something interesting via ID with regard to the puzzle. You know, as you would have to do if you received those strings from a radio telescope. If you were in charge, would you need to travel to the origin of the signal to conduct a "proper investigation"?

    If that's the case then ID is not going to be a whole lot of use if that ever happens, is it? And you claim that SETI is doing ID right now? Ha! Just disproved that, haven't we…

  46. Joe

    The data says there wasn’t a tornado in the UK which means you are a liar.

    In fact we often experience tornadoes in the UK, about 50 per year. An extreme example: http://www.telegraph.co.uk/topics/weather/9252146/Tornado-spotted-in-Oxfordshire-as-storms-batter-southern-England.html
    But I never said when this event happened. It was some time ago. I can't say more than that (security!), but if your excuse is that "there was no tornado so this could not have happened" then so much for SETI! But it did, so will you help, or will you not apply the "design detection skills" that you claim to have?

  47. Gpuccio,
     

    I can try no design inference for any of the documents unless I can recognize and define a function for one of them, or both.

    Really? So if we find a structure on Mars made of glass, you'll deny design until you know what its function is? Or would that "obviously" be designed? This seems circular to me. You cannot make a design inference unless you can determine the function it was designed to provide? Really?

    Just by looking at them, I cannot say if they are functional or not, and therefore I will make no design inference for any of the two.

    The same could be said for any string. If you happen not to know the function then all strings look the same, right? So Hamlet is designed because you can read and understand it, but if you lack that you are stuck? Does ID not have more robust design detection mechanisms than that?

    But I can suggest a few ways to investigate that problem, if your limited funds allow that. The simplest way would be to assume that they are sequences of protein coding genes, and compare them with existing databases.

    The problem is I only have enough money to do that for one of the documents. If only ID could provide a way to determine which of those documents I should study.

    The second way would be to decode them into AA sequences, and compare them with existing databases (that’s essentially, but not exactly, the same as in the previous step).

    Again, the same problem, which document to choose?

    A third way would be to synthesize the proteins themselves, and test them for structure and biological function.

    Again, the same problem.

    Unless and until some definite biochemical function is found, I will not make any design inference for sequences like those ones.

    So you determine design by taking the blueprint and building something from it? By definition blueprints refer to designed objects. And your claim is that all proteins are designed, so if a protein is the end product then design is a given?

    If any of the sequences is found to correspond to a functional protein, I will make a design inference for it: we are speaking of hundreds of AAs here, and length is in our favour.

    Is that the only possible way that ID can come to a design inference for long strings of data like this? What if I told you it was a signal from space. Would it automatically become design then? Or would we still have to examine proteins?

    Just to be fastidious, we could also infer design for the simple function of being sheets of paper with characters printed on them. That could probably warrant a design inference for both, but in a completely different sense: the printed sheet of paper is certainly designed, but the printed sequence could still be random.

    Sigh. Then why don't you start there? I've already made it clear that it was originally on paper, but that is irrelevant; the data is what is important. And if all ID can say about this situation is "well, those sheets of paper with printing on, they are designed they are!" then forgive me for being singularly unimpressed.

  48. Joe,
    Seems that Kairosfocus disagrees with you:
     

    First, if there is design at work, but the pattern shown is one that exhibits the statistics of a chance based random process i.e. a probability distribution, the filter will infer to chance contingency. That is a false negative and is part of the price paid to make sure that inferences to design are morally certain.

    http://www.uncommondescent.com/intelligent-design/id-foundations/the-tsz-and-jerad-thread-continued/#comment-436715
    So given that the Lederbergs showed that the mutations were not due to environmental cues (if you actually read the paper this is obvious), they were not built-in responses.
    Put simply, if they were built-in responses, that mechanism is not working very well, because the mutations happen regardless of the environment.
    So even if the "response mechanism" is built in, it's faulty, because it acts regardless of the environment.
    So whence come the mutations?
    As KF says: "First, if there is design at work, but the pattern shown is one that exhibits the statistics of a chance based random process i.e. a probability distribution, the filter will infer to chance contingency."
    So the pattern of mutations observed exhibits the statistics of a chance based random process and as such does not represent any “design” at all.

    Except of course, your “designed to evolve” fallback of (almost) last resort.

    But given that this is achieved by imperfect replication you are once again adding unneeded entities. Why don’t you discuss your point with KF, explain to him how Zachriel is wrong about this (classic) experiment.

  49. Another day, another bizarre interpretation by Joe. He invokes ‘quorum sensing’ by bacteria as evidence that they adapt to antibiotics using a pre-specified capacity, combined with some mechanistically unclear communication method.

    You can take bacteria that are killed by low levels of antibiotic, and plate them out on a concentration gradient made of strips of gel. All the bugs at or above the lethal concentration die. There is nothing in the population capable of immediate response to this supposed 'environmental cue'. They grow only in the antibiotic-free portion.

    So you sit back and watch. A tiny offshoot 'probes' the gradient at the next level, and spreads laterally from this point. From this new front, another point seeds the next advance. And so on, until the bugs can cope with a gel at saturation point – you can't physically dissolve any more antibiotic.

    Where on earth does quorum sensing communication come into this? What is being communicated to or from the rest of the population by these mutants that are able to cope with the higher concentrations? And where is the adaptive capacity located in the non-mutated organisms? You can certainly tell they are mutants, by the simple expedient of sequencing them. So this is Joe's "maybe in the cell wall" computer program that generates adaptive mutations to order. Maybe it's just random mutation.

    And … if artificial ribosomes don't function, how come one can Google numerous papers on functional artificial ribosomes? http://www.technologyreview.com/news/412471/creating-cell-parts-from-scratch/

    I look forward to the meta-commentary – “and Allan Miller chimes in with …”. It amazes me the extent to which Joe can mangle scientific concepts and receive not a word of ‘correction’ from his peers. Do they really all think he’s the science expert that he evidently does?

  50. gpuccio: But if A (or B), after one of them happens, expand to the whole population, for a deterministic effect like NS, in a short time, then the scenario changes. 

    Zachriel: Or due to neutral drift. 

    gpuccio: Wrong. Neutral drift does not change the scenario in any way. It is just a form of RV, and RV is already accounted for in the scenario.

    It’s not “wrong”. It may be superfluous, as you said effects “like NS”.  We were clarifying that point. As Lenski demonstrated, drift can be important in adaptation. 

    gpuccio: a) The effect of NS in reality would be much lower than what I have hypothesized in my model.

    Not sure we’ve seen your math. Of course, standard population genetics were worked out generations ago by Fisher et al. Do your results differ? 

    gpuccio: b) Functional intermediates should absolutely leave traces in the existing genomes.

    Oh? Why is that? Indeed, natural selection should tend to purge the extraneous over time. 

    gpuccio: Each time you are pressed for real examples of your theory, you shift to macroscopic phenotypic effects (indeed, to that single example).

    Your claims nearly always are general claims about the evolution of complexity. The mammalian middle ear is an excellent example as it is familiar to most readers and combines embryological, fossil and molecular evidence, along with a good scientific detective story. 

    gpuccio: But you must know very well that we have absolutely no idea of what genotypic modifications are the basis for those phenotypic changes. Therefore, it is completely impossible to analyze those "sequences" in terms of genomic information. Therefore, they are irrelevant to the ID-neodarwinism debate.

    That’s funny. Of course it’s relevant. Embryological data predict the fossils. That’s hugely important from a scientific vantage. When you can make those sorts of predictions independent of evolutionary theory, then maybe you will gain some scientific currency. 

    gpuccio: The answer seems rather simple: you have no arguments at the level of molecular biology, and so you resort to the only things you have left.

    Actually, your arguments seem to be about the evolution of complexity, for which we have strong evidence. Instead, you retreat into the most ancient transitions, which left no fossils. It’s a gap!! 

    In any case, small changes to certain genes can be shown to cause relevant changes to the mammalian middle ear. 

    Mallo, Formation of the Middle Ear: Recent Progress on the Developmental and Molecular Mechanisms, Developmental Biology 2001. 

    gpuccio: It's well established that something is essential for traversing rugged landscapes. That recombination can do that in the biological context does not appear so well established, IMHO.

    That recombination is important in traversing rugged landscapes is a mathematical result. Try running a few evolutionary algorithms; a toy comparison appears at the end of this comment.

    gpuccio: And anyway, the experiment in that paper was dealing with a complete, and very favourable, biological setting for phages, where any natural mechanism was free to act. So, why was the rugged landscape not traversed?

    Because simple point mutation algorithms will climb the nearest peak and stop. If there are billions of peaks, then you have to start with billions of initial sequences in order to have a decent chance of finding the highest peak. Recombination can largely overcome this problem. 

    Zachriel: Yes, apparently natural selection is capable of evolving quite adequate proteins — even with one hand tied behind its back!

    gpuccio: I will not comment on that. I usually respect religious faith, in all its forms.

    It’s an empirical statement. Adequate proteins evolved, even without recombination. 
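Here is the toy comparison mentioned above, with every parameter an illustrative assumption: fitness is the number of 5-bit blocks exactly matching hidden patterns, so each block is an isolated plateau with no internal gradient. Point mutation must stumble onto each block within a single lineage; one-point crossover lets blocks discovered in different lineages combine:

```python
import random

rng = random.Random(1)
BLOCKS, WIDTH = 10, 5
# Hidden target pattern for each block (all-or-nothing scoring = ruggedness).
PATTERNS = [tuple(rng.randint(0, 1) for _ in range(WIDTH)) for _ in range(BLOCKS)]

def fitness(g):
    return sum(tuple(g[i * WIDTH:(i + 1) * WIDTH]) == p
               for i, p in enumerate(PATTERNS))

def search(recombine, pop_size=50, generations=200):
    pop = [[rng.randint(0, 1) for _ in range(BLOCKS * WIDTH)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size:
            child = list(rng.choice(parents))
            if recombine:
                mate = rng.choice(parents)
                cut = rng.randrange(len(child))
                child = child[:cut] + mate[cut:]    # one-point crossover
            child[rng.randrange(len(child))] ^= 1   # one point mutation
            children.append(child)
        pop = children
    return max(fitness(g) for g in pop)

print("mutation only:     ", search(recombine=False))
print("with recombination:", search(recombine=True))
# Recombination usually reaches a higher block count on this toy landscape.
```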

  51. Gpuccio at UD
     

    Really! Why are you surprised? That's clearly stated in my definition and procedure for dFSCI evaluation. I need a specification, and in my specific definition (dFSCI) the specification must be functional.

    My understanding is that both documents are functional in some way. I just don’t know what that function is.

    No. And it does not need it. For most biological strings, especially proteins and protein coding genes, the function is well known and measurable. We are quite satisfied with that.

    Can you give me an example of just one "biological string" and what its "function" is, and how you determined that function is "the" function?

    Get more money.

    It's a thought experiment. It's abstract. I don't really have an office. There was really no tornado. That was all for Joe "literal" G's benefit.

    Try tossing a coin. Or some form of divination.

    So when ID is presented with a set of unknown strings and is asked to choose which is the more interesting with no further data we have to “toss a coin”?

    No. I recognize, define and measure the function, and then I must assess the target space/search space ratio. It’s all explained in my detailed description of the procedure to assess dFSCI.

    As before, what is the function of HIV, and what is its dFSCI?

    I don’t know what you mean with “blueprints”. I have spoken of functions. I can define a function for a stone, as a paperweight, but that does not mean that the stone is designed. So, your statement is simply wrong.

    Then what is the function of HIV?

    Where have you been while we were discussing things? My claim is that if a protein exhibits enough functional complexity (let’s say more than 150 bits), and no credible neo darwinist path is known for its emergence, I infer design for it. I agree that I would infer design for many proteins, or more precisely protein superfamilies.

    Credible to whom? You? Let me rephrase the question. Does either of those two documents have "functional complexity"? If so, how much?

    Yes. ID is not divination. It is scientific, and science has its limits.

    So ID can only be applied in the specific case of DNA sequences by building proteins and seeing if they are “functional”? This is quite different from the version of ID usually given.

    If our working hypothesis is that the strings code for DNA sequences coding for proteins, then certainly yes. If we have other possible functional meanings for the strings, we can certainly pursue them too.

    As yet we're not at that point. The point we're at is: "Can ID do any better than tossing a coin when determining which of these two sequences is worth investigating, given that only one can be, in this example?" The answer, so far, is no.

    I answered in detail to that question, and showed how ID can give a very definite answer, making a design inference in some cases, and not making it in others. It requires, obviously, some work and some reasoning. If you are not interested in doing the work, you will not get any answer. In the absence of any recognized function, no design inference can be made.

    If you can explain to me how to "do the work" then I'll happily do it. But so far the choice is simple: which of the two documents would *you*, given your information/design expertise, choose to examine in detail, and why?

    I am not so interested in impressing you. If ID cannot solve your problem because you have not the money to use ID for solving it, well, I can survive. I am quite satisfied that ID can solve the problem of the origin of biological information, which is frankly more interesting to us all than your personal (imagined) misadventures.

    It’s a thought experiment. I would have thought that did not need to be explained. It’s a test. Here are two documents. I’m heavily implying that ID should be able to tell us something about each of them. So far it’s all been excuses. If you don’t want to play, that’s fine, but simply saying “well ID can’t do anything of practical use but personally I’m satisfied that it explains the origin of life” is not even trying.

    http://www.uncommondescent.com/intelligent-design/id-foundations/the-tsz-and-jerad-thread-continued/#comment-436722 

  52. Joe:
     

    BWAAAAAAHAAAAAAHAAAAA- design detection tells us if agency involvement was required. From there we investigate further.

    Then the question is: Was agency involvement required in the creation of either of those data sets?

    So go ahead and attempt to “detect design” in either of those two documents.

    Dare you! 

  53. If you can't determine functionality without doing the chemistry, then design without evolution is impossible. There is no faster way to find the holistic optimum of all interrelated systems than by fecundity and selection.

  54. Gpuccio

    How do you know that both documents are “functional”?

    That is part of the game. One is, one is not. Or neither are. Or both are. Can’t ID tell? 

    Function: Produces pentose sugars for nucleic acid synthesis and main producer of NADPH reducing power.

    No, that’s what it does. Its function is something quite different. For example, a car turns petrol into heat and gas. That is what it does. Its function is something quite different.

    If you had read my definition of dFSCI, you would know that we can define any function for the observed object, and that the computation of dFSI will be made for the function we have defined.

    In that case the function of the data contained within the documents is “to see if ID can tell us anything at all about the data”.

    I should have known that irony is wasted on some people…

    Perhaps you should explain the concept of irony to Joe. He’s the one that says you can’t investigate the data without examining it in person.

    Or, like any serious investigator would do, analyze all the strings. You asked us to decide which string to analyze without analyzing them. That is divination, not science.

    No, I asked which of the strings we should analyse in detail. You can in fact perform whatever level of analysis you like of course. If you want to examine both, please feel free to do so.

    The whole virus can be described as having the ability to infect specific cells, and to reproduce itself through that process.

    No, once again, that’s what it does; its function appears to be quite different.

    Whole organisms, even if relatively simple like the HIV virus, are much more intractable to a detailed analysis.

    Yes, so simple that millions of hours of effort have gone into curing it and still there is no cure.

    You may not know, but that kind of research has been done for decades. That’s why huge databases exist, like Uniprot, that list known proteins and their functions, and their coding genes.

    That’s not ID research nor anything like it. What is the link between Uniprot and ID please?

    The answer is definitely no. ID cannot say “which of these two sequences are worth investigating” without investigating them.

    So investigate them already and stop with the excuses. If you worked at SETI you’d give up on day one, as until interesting sequences are identified it’s all just noise. 

    If the sequences were in English, it would be rather easy, even for a darwinist like you, to understand at first sight which makes sense and which does not.

    Condescend much? So design is detected by your ability to immediately understand the message? Hey, it’s written in English so it’s probably designed….

    But how do you believe that I, or anyone else, can decide “at first sight” whether a nucleotide sequence corresponds to a functional protein, without making any attempt at studying the sequence? Tossing a coin remains the best option.

    Who said anything about nucleotide sequences or functional proteins? Who said anything about what the data represents? This is all the baggage and preconceptions you are bringing to the code; it’s nothing about the code itself.

    The simplest way would be to assume that they are sequences of protein coding genes, and compare them with existing databases.

    Once more, I don’t have the $$ to do that for both. Can ID suggest which document is more “interesting” than the other?

    As my “information/design expertise” does not make of me a prophet, I say: both. If I can only investigate one, I will toss a coin. And infer design (or not) for the one I have investigated.

    So pick one and investigate it already.

    I really have no reason to play with you. I choose my playmates very carefully.

    Yet here we are.

    I give you a final answer: with what I know at present, I cannot make a design inference about your two strings. That’s all. Sorry for you (in many senses).

    Yet the way KF talks, it’s the simplest thing in the world, with “billions of examples” generated every day. But when we get specific, nothing.

  55. Joe, 

    We look for signs of agency involvement because we know if an agency was involved that changes the investigation and opens up new questions. Which means the design inference is not a dead end, but a new beginning.

    Then which, if any, of those data sets had agency involvement in their creation?

  56. gpuccio: The point is: drift does not change the probabilistic scenario.

    Without drift, some adaptations are not even possible. However, as a first-order approximation, it makes some sense.

    gpuccio: I have linked it many times.

    That would have been a good place to put the link.

    gpuccio: The same NS that, according to major darwinist thinkers, leaves more than 95% junk DNA in our genome? Really strange…

    Your nomenclature is poor. Darwin identified the existence of vestigial structures. Darwin would be, presumably, a darwinist. Generally, darwinists (those who think natural selection is the primary mechanism of evolution) have resisted the idea that the genome is mostly junk. However, polyploid genomes, and some amoebae with genomes far larger than human genomes, tend to indicate that some genomes contain a lot of redundancy.

    gpuccio: I can’t find there any molecular information about the evolution of the middle ear, although there is a lot of interesting information about the complex molecular control of the development of that structure, based mainly on gene inactivation experiments.

    So we have an almost unbelievable prediction from embryology: that the irreducibly complex structure of the mammalian middle ear evolved from reptilian jaw bones. Astoundingly, we find fossils of intermediate structures buried in the rocks. And we even have evidence that small changes to genes directly affect the related structures.

    gpuccio: That recombination can do that in the biological context does not appear so well established, IMHO.

    Your claim was that recombination was “wishful thinking”, when we know from mathematical studies that recombination is effective in rugged landscapes. You reject a plausible mechanism without evidence.

    Xia & Levitt, Roles of mutation and recombination in the evolution of protein thermodynamics, Biophysics 2002.

    Bittker et al., Directed evolution of protein enzymes using nonhomologous random recombination, PNAS 2004.

    gpuccio: Should I laugh?

    Sure. It’s good for the health. But it doesn’t address the point that even lacking one of the primary mechanisms of evolutionary novelty the experiment still resulted in adequate function. This is expected when exploring a rugged landscape.

    gpuccio: There is no need to “determine that function is “the” function”. If you had read my definition of dFSCI, you would know that we can define any function for the observed object, and that the computation of dFSI will be made for the function we have defined.

    That’s fine, but if you didn’t know the origin of nylonase, you would still conclude design.


  57. Joe,

    Well context is important. And context is missing. I know those letters didn’t appear via nature, operating freely.

    No, I and others had something to do with it.

    So I would say the existence of those letters on the intertubes was the result of some agency.

    Yes, that’s right. I made it happen. But that’s not the point. By definition all letters printed in a book or on a screen are there via some agency. But none of this speaks to the content of the data itself. If those letters were scratched on a monolith on the dark side of the moon, the fact that they were put there by “an agency” would be the least interesting thing about them. What they mean would be far more interesting. Yet it seems you would be happy to leave it at that.

    That said if the data represents DNA sequences there isn’t any evidence that blind and undirected processes could produce either of those.

    Now we are getting somewhere. Yes, we know that you claim that DNA was designed. That’s of no relevance here. I’m asking about these two data sets specifically. Beyond *my* “agency involvement” of getting them to appear on the internet, was there an *agency* involved in their creation?

    We look for signs of agency involvement because we know if an agency was involved that changes the investigation and opens up new questions.

    If you are happy to leave it at “there was an agent involved in getting them to appear on my screen from the internet” then that’s fine. I can just put you down for “Joe tells me something I already know about the data: that I, an agency, was involved in getting it onto the internet in a format he could read”.

    So that is a start- we know your position’s mechanisms didn’t do it.

    Do what? What does “my position” have to do with what ID can tell us about those two documents? And what is “a start”, anyway? All you’ve said so far is “if the data represents DNA sequences there isn’t any evidence that blind and undirected processes could produce either of those” – well, that’s only true if they represent DNA sequences. Do they? Will you come down on at least one side of the fence on that, then? It’s not much, but it would be progress. An “if” is no good to anybody; it was you who said ID looks for agency involvement. So far all you’ve done is hedge your bets and refuse to stake a claim. And that’s what this game is all about. “If this, if that, if the other” is no good. Say something about these datasets.

  58. Summary so far.

    Gpuccio had a go, which was great. He thinks the data represents DNA, and as such we need to instantiate it and see what it does, and that will determine “design or not”. Once instantiated, if there is any function at all, then the original data was designed, as function is so rare in the total space that finding any function at all is a strong indicator of design.

    So far this is the best idea, with at least an outcome that indicates either design or not. So it’s doable.

    Joe also had a go but offered no testable proposal, unlike Gpuccio’s, which is at least feasible, so I’ll hold off on assigning him an answer just yet.

    Kairosfocus also had a go; he quoted me without naming me in this post: http://www.uncommondescent.com/intelligent-design/id-foundations/the-tsz-and-jerad-thread-continued/#comment-436715
    and says:

    Recall, the 500 bit solar system resources limit, is effectively the same as saying set up a cubical haystack 1,000 LY across (about as thick as our galaxy), and then take a blind random sample of one straw-sized object. Sampling theory tells us strongly that by overwhelming likelihood, the sample will be straw. This is the needle in the haystack challenge on steroids. The 1,000 bit cosmos we observe resource limit is far more stringent than this.

    If you are reading, KF: would you be able to apply this test to my datasets and determine if they are inside/outside that resource limit you mention? That would be an interesting test. Other than that, he’s ignoring the game. I wonder why; of all of them, he seems best equipped to come to some determination. He can do it for billions of messages a day, inferring design by calculating probabilities in possibility space, but he can’t apply his publicly stated, supposedly usable methodology to two specific documents when asked? Why not, I have to wonder.
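    For what it’s worth, here is a minimal sketch (in Python) of the naive arithmetic KF’s 500-bit criterion seems to call for, assuming, hypothetically, that each document is a 2000-character string over a 4-symbol alphabet (as emerges later in the thread) and that every character is modelled as independent and equiprobable:

    from math import log2

    length = 2000                            # characters per document
    alphabet_size = 4                        # distinct symbols used
    raw_bits = length * log2(alphabet_size)  # 4000 bits of raw capacity
    print(raw_bits, raw_bits > 500)          # 4000.0 True

    On that naive model both documents sail past the 500-bit limit, designed or not, which is exactly why an independent specification, and not mere length, would have to do the real work.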

  59. I find it somewhat amusing that gpuccio’s method for determining functionality turns out to be chemistry and selection. He hasn’t elucidated any necessity for a designer other than to produce saltations.

    The word saltation seems rather quaint, and not many people seem to know its history or what it means. Basically it’s a Behe hop: a large, improbable mutation that leaps over Behe’s Edge. The concept had largely disappeared from biology until Behe revived it.

    Gpuccio’s theory is nothing more than the molecular equivalent of no transitional fossils. It seems safer to people like Behe and gpuccio because molecules don’t leave fossils, or at least not for long. The latest research indicates that all DNA degrades within a few million years, even if frozen.

  60. Mung: You have to just love how they appeal to recombination when they feel they need to, but at other times it seems they think it totally irrelevant.

    Please substantiate that claim. When have we minimized the importance of recombination in traversing rugged landscapes?

    Mung: And think back to my earlier arguments about how there is a reason for randomizing the genome at the start of a run, and how that is very unlike natural populations.

    That’s irrelevant with typical rugged landscapes. Randomized genomes will quickly climb local peaks.


  61. KF’s summary is parochial because he equates the knowledge of how to build biological adaptations with already existing straws in a 1,000 LY cubical haystack. As such, he thinks Darwinism would represent a vast series of astronomically unlikely events, one after another, after another, etc. As far as he is concerned, it’s absurd. 

    However, I’m suggesting that this view is mistaken. Darwinism genuinely creates non-explanatory knowledge. As such, to use KF’s analogy, there was no straw already there that evolution lands on. 

    IOW, probability simply isn’t applicable in this case, as knowledge-creating processes represent a different kind of unknowability. This makes the application of probability limited to very specific cases.

    Another example of the impact of this unknowability can be found in this 2011 TED talk. In fact, Darwinism becomes an even better explanation when we integrate it with our current, best, universal explanation for the growth of knowledge.

    For example, dividing knowledge (useful information that tends to remain when placed in a storage medium) into explanatory and non-explanatory allows us to make significantly more progress than merely making the statement that evolution is “random, but not random”.

    Non-explanatory knowledge is created when genetic variation occurs in the absence of a problem to solve. Cells cannot conceive of problems or explanatory theories. Nor could they test those variations for internal consistency, because only explanatory knowledge can be consistent or inconsistent with itself. However, these adaptations would be tested by the environment.

    Genes are biological replicators. They do have “problems” of getting copied into the next generation. But only we can conceive of this as a problem in the necessary sense. So, in the case of Darwinism, we can be far more specific: conjectured genetic variations are random with respect to any specific problem to be solved.

    There is nothing in a tiger that contains explanatory theories about how different patterns of stripes (camouflage) could help it obtain more food. Nor could those cells conceive of it as such if they did. Nor would those cells have previously contained the knowledge of how to perform those adaptations.

    Non-explanatory knowledge is genuinely created when conjectured genetic variations occur that influence a tiger’s stripes and some of those conjectures are refuted by natural selection – but that conjecture occurred in a way that was random with respect to the problem of obtaining more food via different forms of camouflage.

    So, when we integrate evolution with our current, best universal explanation for the growth of knowledge, Darwinism becomes an even better explanation. This includes the growth of knowledge used to improve biological organisms. 

  62. Mung,

    OMTWO seems to think that if you can’t infer design based upon his sequences you therefore have no warrant to ever make a design inference.

    No, not at all. Why don’t you come here and ask me myself instead of putting words in my mouth.
    I’m simply asking can ID tell us anything at all about the strings in question.
    I’m not asking you to infer design, calculate CSI or anything at all like that. If you see my original post, I’m simply asking can ID influence my decision one way or the other by providing some currently unknown information about each document.
    If you want to infer design, that’s fine.
    If you don’t want to and then later make a design inference, that’s also fine.

    But if SETI were ever to post a signal they wanted the world’s help to decode, it’s quite clear what would happen at UD with regard to it.

    Nothing. At. All.

  63. Gpuccio,

    I should not have done that, because you don’t deserve any serious attention, but I BLASTed your two sequences and found no similarities, thus wasting 5 minutes of my time.

    Fair enough. I did not ask you to do that. I made no claims about the sequences, nor their similarity. You attacked the problem in the way you thought best. Good on you for trying. 

    So, I maintain that I have absolutely no reason to infer design for those two strings.

    Fine. Great. So nothing in it between them for you. For all I know they are just two random strings. I’ve not developed a skill set like you lot at UD to even begin to work it out. So thanks for trying. I’ll put you down for “toss a coin”. 

    Is that an admission?

    Of what? That what you propose is feasible? Of course it is. Where we differ would be on the results. You’d infer design from “function” and I would not. The simple fact is that you are wrong in your opinions about protein domains, the probability of their origin, etc. You will never accept it because it forms such a central plank of your “why ID is true” belief system, but nonetheless you are wrong.

    I don’t know if I have expressed my claim with sufficient clarity (I am too lazy to check), but my claim is that affirming that “recombination can help solving the problem of rugged landscapes in the biological context” is not supported by any evidence.

    If you don’t look for the evidence because you don’t believe it exists then you’ll never find it, hence providing “evidence” for your original thought.

    Is that an admission?

    Not as you mean it, no, given that you are wrong.

  64. gpuccio: the transition from A to A1 is naturally selected, and then the transition from A1 to B happens with the same probability and the same probabilistic resource, as the effect of selection. The probability of having two events, each of probability 1:2^150, in 2^100 attempts, is, according to the binomial distribution: 3.944307e-31.

    Confused on this. If the transition from A to A1 is naturally selected, then why is the probability 1:2^150? In a large population, a beneficial mutation will reach fixation with probability of about 2s, where s is the selection coefficient.
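    As an aside, the quoted figure itself is reproducible with a Poisson approximation to the binomial; here is a quick check in Python (of the arithmetic only, not of the model behind it):

    lam = 2.0 ** (100 - 150)   # expected successes: n*p = 2^100 * 2^-150 = 2^-50
    # The exact form 1 - (1 + lam) * exp(-lam) underflows in double precision,
    # so use its leading series term lam**2 / 2 instead:
    print(lam ** 2 / 2)        # ~3.9443e-31, essentially the quoted value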

    gpuccio: I appreciate that you don’t agree with the ideas of people like Moran, Myers and similar about normal genomes.

    Larry Moran is not a darwinist. From what we can see, Myers uses the term darwinist ironically. You might want to cite Dawkins, who really is a darwinist, and as such, is considered somewhat dated by many modern evolutionary biologists.

    gpuccio: I am as interested as you are in huge genomes, but I have found no detailed information about them. If you have something on the matter, I would appreciate it if you could share.

    Some organisms have been observed to double their genomes in a generation, such as many species of flowering plants. That’s a lot of redundancy. Onions have larger genomes than humans. Not sure what information you want?

    gpuccio: But minor molecular changes are not complex functional information…

    But we can see how the complex structure evolved in incremental, selectable steps. There’s no barrier.

    gpuccio: “recombination can help solving the problem of rugged landscapes in the biological context” is not supported by any evidence.

    Yes, it’s supported by studies of evolutionary algorithms and how they work on rugged landscapes. And it’s supported by various studies of protein-space. 

    gpuccio: Again, I am not “rejecting a plausible mechanism without evidence”.

    Sure you did.

    gpuccio: Correctly, as I have explained. Because I would infer design for the whole structure of nylonase (and I would be right), and not for the transition from penicillinase to nylonase.

    The new function wasn’t designed. It evolved.


  65. Mung,

    OMTWO: Both strings are designed. They both have a length of 2001. Ain’t that funny. Can we move on now?

    I’ll put you in the same category as Joe then? Strings that were on paper are designed. I thought you were capable of more. But perhaps I overestimated you. 

    But no, it’s not a reference to 2001. And so you are 1 out. It really is only 2000 characters.

    You can move on, you can do whatever you like. You can leave that as your final answer, if that’s your desire. Fine by me, but if you ever want to update your answer do let me know. 


  66. I would infer design for the whole structure of nylonase (and I would be right)

    Science is really that easy? Gah, I’ve been doing it all wrong!

  67. Petrushka:

    Gpuccio’s theory is nothing more than the molecular equivalent of no transitional fossils.

    Only less so. ‘Fossil transitionals’ aren’t elbowed out of existence by the very process of evolution. But so-called intermediates on a path of molecular amendment are outcompeted by fitter descendant sequences, or are simply the eliminated sequence in a stochastic fixation process – so where do GP/Mung etc think these ‘intermediates’ ought to have been preserved, ‘if evolution were true’? The unavoidable consequences of the theory are twisted into something inexplicable and embarrassing!

    Dead DNA is gone, gone, gone. History, in biology more than anything else, is written by the victors. All we have are the descendants of survivors, mutated and filtered.

  68. You have to just love how they appeal to recombination when they feel they need to, but at other times it seems they think it totally irrelevant.

    Bullshit. For my part, I never shut up about recombination. It is a very important force. And it has clearly been of great historic significance, as witness the many recurring sequences, in both sense and antisense orientations, in functionally unrelated parts of the genome. If one is lukewarm about common descent, of course, one will argue that these are all the same or similar due to common design. But ‘lateral’ within-genome duplication makes exactly the same prediction as whole-genome duplication in descent: a nested hierarchy of markers. The same techniques of phylogenetic tree-building yield the same very strong support for either:

    1) Common Descent

    2) Common Design by a designer to whom fooling us into thinking it’s common descent appears much more important than simply designing the damn thing without such unnecessary restriction.

  69. [recombination] has clearly been of great historic significance, as witness the many recurring sequences, in both sense and antisense orientations, in functionally unrelated parts of the genome.

    Rereading, I appear to flip here from simple reshuffling of genes to duplication. It’s all recombination, of course. Just to be clear: anything that changes the physical sequence of bases on a chromosome, or swaps whole or part-chromosomes, or merges related or unrelated sequences from separate organisms, is recombination. One can trace the relationships between sequences and uncover a lot of history, because recombinational events make excellent markers, in addition to being a powerful mechanism of evolutionary ‘exploration’ in themselves. Unlike point mutations, which have only 3 options available, and a reasonable probability of returning to their start point in 2 steps, recombinational events are highly unlikely to ever occur twice, and even less likely to reverse. Their signal slowly decays, but this simply erases that particular marker, rather than invalidating the ones that can still be reliably identified.

  70. Joe,

    That is a separate question. We do NOT have to know the content to infer design.

    But you are not inferring design at all. You are simply saying “all data on the internet is designed, as data cannot get on the internet without human intervention”. So by that definition every string I might present is designed. If I write down how many birds fly over in a day, or the frequency of radioactive decay detections, then according to you that data is “designed” simply because it was written down. ID is not very useful, is it? All you seem to do is walk around pointing at things saying “yep, designed”.

    So you proclaiming “victory” is somewhat premature. You don’t even have to examine the string itself before saying “design”. What good is that?

    Perhaps to you. But then again you think a ribosome is a genome.

    You think that ribosomes have a non-physical component but can’t prove it.

    They may not mean anything. And without a “Rosetta Stone” or an endless supply of funds, we would most likely never figure it out. However, just its existence would tell us more in the short term. And there would be no reason to look for any meaning without first determining design.

    You contradict yourself. You’ve established design in both my documents (they are on the internet!) but have failed to look for “meaning”. So given that your detection of design was in fact trivial (it was on paper = design), do you want to have a go at the meaning of the documents instead?

    In your case, absolutely. In some real world case, it would all depend.

    Then why don’t you prepare for that real world case by doing what you’d do there on my documents? Get a bit of practice in?

    It’s amazing how many excuses you lot come up with to avoid doing the thing that you claim not only can be done but is done day after day.

    Let’s say an archaeologist found two tablets with those strings on them. Would they just go “yep, designed” and move on? No, but that’s what you do.

  71. Joe,

    But that doesn’t have anything to do with ID. And it doesn’t have anything to do with evolutionism. So what is your point, besides proving that you are a clueless strawman designer? Or is that what you are shooting for?

    This is how UD defines what ID is: http://www.uncommondescent.com/id-defined/

    In a broader sense, Intelligent Design is simply the science of design detection — how to recognize patterns arranged by an intelligent cause for a purpose. Design detection is used in a number of scientific fields, including anthropology, forensic sciences that seek to explain the cause of events such as a death or fire, cryptanalysis and the search for extraterrestrial intelligence (SETI). An inference that certain biological information may be the product of an intelligent cause can be tested or evaluated in the same manner as scientists daily test for design in other sciences.

    So what I’m asking, in essence, is that you test or evaluate my documents in the same manner as scientists daily test for design in other sciences. So it seems that nobody is able to recognize patterns arranged by an intelligent cause for a purpose, if those documents indeed contain such a pattern. Just knowing that one did and one did not, for example, would essentially solve my problem, but it seems that despite this being the self-proclaimed reason for ID’s existence, nobody can actually do it!


    statistical and experimental evidence that tends to rule out chance as a plausible explanation.

    So it seems that this is just an empty claim, when faced with actually doing it ID folds.

  72. Mung: So complex stuff that already existed shifted around and you say this is proof that the complex stuff evolved in incremental steps?

    The reptilian middle ear is much less complex than the mammalian middle ear.

  73. Isn’t a watch just complex stuff that’s been shifted around? In any other context an ID advocate would be claiming that the arrangement of parts to create a new function would be proof of ID.


  74. gpuccio: The “known causes” have nothing to do with the assessment of dFSCI. The requisites to assess dFSCI are two (as I have said millions of times):

    a) High functional information in the string (excludes RV as an explanation)

    b) No known necessity mechanism that can explain the string (excludes necessity explanation)

    Can’t seem to resolve the apparent contradiction between the first statement and b).

    Also, we’re still left with your leaky bucket explanation. See keiths’ description. 


  75. gpuccio responds to Zachriel:

    To Zachriel (at TSZ):

    See Keith’s description

    No, thank you. Already did, and it made my views about human nature even worse than they already were.

    I leave Keith’s masterpieces to you, who seem to appreciate them.

    You are always welcome to comment on more serious issues, as you can do.

    gpuccio,

    Don’t let your emotions get in the way of a learning opportunity. My bucket analogy highlights a serious flaw in your dFSCI argument:

    1. Take a bucket of complex sequences.

    2. Throw out the ones that are explained by a “known mechanism”.

    3. Amazing! Of the sequences that are left, not a single one is explained by a known mechanism!

    4. Later you discover a mechanism that can explain one of the remaining sequences.

    5. Throw it out of the bucket and return to step #3.

    In case it’s not already obvious, here’s the problem:

    a. You want to use the fact that something is in the bucket (i.e. has dFSCI) as an indicator that it is designed (that is, not the result of ‘necessity mechanisms’).

    b. Before you put it in the bucket, you have to rule out known ‘necessity mechanisms’ as the cause.

    c. To rule out known ‘necessity mechanisms’, you can’t look to see if the object is in the bucket, because you haven’t decided whether to put it there yet.

    d. Therefore, in order to decide whether to put it in the bucket, you have to use some criterion other than whether it’s already in the bucket. Obvious, right?

    e. But if you’re using some other criterion, then it’s the other criterion that is doing all the work. You only put something in the bucket after the other criterion is met.

    f. So the fact that something is in the bucket (has dFSCI) is just a restatement of what we already knew by other means. The label of dFSCI adds nothing, so we might as well ignore it.
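    To make the circularity concrete, here is a toy sketch in Python (all names hypothetical, not gpuccio’s notation):

    def has_dfsci(seq, known_mechanisms, is_complex):
        # In the bucket iff complex AND no known mechanism explains it.
        return is_complex(seq) and not any(m(seq) for m in known_mechanisms)

    known = [lambda s: s == "evolved_example"]  # stand-in “necessity mechanisms”
    always_complex = lambda s: True
    print(has_dfsci("evolved_example", known, always_complex))  # False: explained, so excluded
    print(has_dfsci("mystery_example", known, always_complex))  # True: unexplained, so “designed”

    The inference “in the bucket, therefore not produced by unintelligent mechanisms” merely restates the admission criterion, and discovering a new mechanism just moves a sequence out of the bucket (steps 4–5 above) without the label ever having added anything.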

  76. Mung

    Two strings of exactly the same length composed of exactly the same 4 characters from the English alphabet, that’s pretty improbable. i’d say designed. so yeah, lump me with Joe.

    Is “pretty improbable” a technical term in ID then? Consider yourself lumped in with Joe.

    Gpuccio,

    Don’t lie!

    I have answered very clearly that no design inference can be made for either string. That should solve your problem. Neither string is designed.

    Joe and Mung say it’s designed.

    Gpuccio says it’s not.

    Joe,

    And OM, as I have already said, biological information refers to function. We OBSERVE the functionality. We do NOT try to guess what the function, if any, is.

    No need to do all that, just say it’s “pretty improbable” and leave it at that.

    I have determined agency involvement was required. That is all I have to do.

    But that’s trivially true of any piece of data on the internet. If I take a picture of a rock pile then you will claim that it shows design because “pictures require agency involvement”.

    So ID has it easy. When asked “Is X designed” you can say “The fact that you are asking me that means that agency involvement was present and that’s all I have to do”.

    So your claims that ID is like forensic detective work or archaeology don’t add up. Neither of those activities stop when “agency involvement” is detected.

    If there was really a “science of ID/design detection” you’d all come up with the same answer for my, frankly trivial, exercise.   

  77. Joe,

    Just because you are a scientifically illiterate dullard doesn’t mean your trope refutes ID.

    Testing is a large part of science. I’ve tested you. And the results are, well, as expected.

    I’m not trying to “refute ID”. That can’t be done. There is nothing to refute.

    What I’m trying to do is show how the grand claims of “design detection” I quoted from the UD “What is ID” section are just lies.

  78. Mung,

    Two strings of exactly the same length composed of exactly the same 4 characters from the English alphabet, that’s pretty improbable. i’d say designed. so yeah, lump me with Joe.

    What if I told you that the letters represented wind directions (N, S, E, W) and had simply been transcribed into the letters used in the document?

    Now simply recording the way the wind was blowing at 1-second intervals would mean that the way the wind was blowing was designed. You just said so yourself.

    I realise you realise the absurdity of your position (you are a poe), but others take the same position in all seriousness, and I thank you for saying what they are too afraid to say.

    It also means that you can look at any segment of DNA and say “yep, pretty improbable, designed”.

    So any two documents that are the same length where the content uses the same characters are designed? Regardless of the actual content? Or how many other potential “documents” are out there?

    I hope you realise how foolish this is making you look, especially as the 3 ID supporters that have braved my trivial challenge can’t actually agree on any aspect of the challenge. 

  79. Gpuccio,
    I thought you did not want to play any more? Now you are calling me a liar for reporting what you are all saying?


    So, you are definitely lying.

    Whatever.

    We answered your question. Maybe one of us is wrong. Maybe we considered different questions.

    I never asked for a determination of design/not design. Here is what I asked originally:

    Would you be able to help me, Mung, and determine which page is the correct page? Which page should I investigate further and which should I discard, as that’s the choice (limited budget, don’t ya know)? Which page is more interesting than the other? If you discover that design factors into it, are both designed? Neither? One but not the other? Which? For bonus points, anything further you can tell me about the contents of either document would be appreciated.

    http://theskepticalzone.com/wp/?p=1352&cpage=2#comment-16703 If you would like to revise your answer in light of that, please do so.

    I have clearly stated that we could infer design for both sheets with the strings printed on them.

    Which was not what I asked for. You answered the question you thought was asked. I made it clear that the container of the data is not relevant, but nonetheless you make it relevant.

    If instead we consider the strings themselves, we cannot infer design.

    Mung and Joe have done so: on the basis that the string(s) are “pretty improbable” they have concluded design. You have concluded the opposite. Therefore how am I a liar?

    That is in perfect accord with the definition of dFSCI and of design inference. I challenge you to demonstrate the contrary.

    I am doing so with my little game.

    So, in the end, you are simply lying.

    Then you win! It’s simple! The fact remains that some of you are concluding design because “things don’t get printed on paper on their own” and I’m reporting on that and you don’t like it.

    If “design detection” really existed you’d all come to the same conclusion quite quickly about my two documents.

    Yet you cannot even agree on the question that’s being asked despite it being very plain.

    We answered your question. Maybe one of us is wrong. Maybe we considered different questions.

    Yet the fact remains that Joe and Mung say design and you do not. That you considered different questions is not really my problem; I only asked one: which of the two documents is more interesting, and can they be categorised differently on the basis of their contents (and not the paper they are printed on!)?

    So call me a liar if it makes you feel better, but it does not change the fact that of the 3 of you who have answered, I’ve had two different answers (designed/not designed).

  80. Gpuccio,

    So, you are definitely lying.

    Ah, I see what you are getting at. I say that nobody can do what UD says ID can do (it seems that despite this being the self-proclaimed reason for ID’s existence, nobody can actually do it!) and you say I am a liar because people at UD have attempted my challenge.

    You have misunderstood me. What I’m saying is that *I know the answer* to my little challenge, and so far nobody has used ID to solve it. Nor even come close.

    So when I say that nobody can do it, I mean that nobody has done it correctly yet. Yes, attempts have been made, but yours was the only serious one. Nonetheless you failed, and that’s what I’m getting at. So when I say that nobody can do it, it being the reason for ID, that’s still true. You’ve not done it, Joe’s not done it, and neither has Mung.
    And you’ve all come up with different answers, that much is true…

  81. gpuccio: I consider that a string exhibits dFSCI only if both these criteria are satisfied: 
    a) High functional information in the string (excludes RV as an explanation)
    b) No known necessity mechanism that can explain the string (excludes necessity explanation)

    Then I infer design.

    Okay. So we’re working with a trichotomy. It’s really just another restatement of the Explanatory Filter. 

    The specific problem is that evolution has both random and deterministic aspects. Gpuccio will argue that evolution alternates the two mechanisms and is therefore excluded. That argument doesn’t work, though, because the test for “highly functional information” only precludes completely random sequences, not incremental increases in functional complexity.

  82. The problem is that the origin of “high functional information” is the very thing being contested. Large amounts say nothing about its origin.

    Among other problems, the length of a gene sequence says absolutely nothing about how many steps removed it is from a non-functional precursor. And nothing at all about its history.

  83. Gpuccio,

    You are a liar just the same.

    I know what I am, but what are you?

    I did not infer design for either string.

    I never said you did. I said that you’ve inferred design for *all strings printed on paper*, exactly as you said yourself. Now, for the particular strings in question (rather than their container) you have not inferred design, which I have already mentioned.

    Even if one of them, or both, have a function that I did not recognize, I have given one or two false negatives.

    Great! So that’s essentially a “pass” really. Which is fine, you can’t be wrong with a pass as you point out.

    Which is exactly what can be expected in a design inference.

    So all proteins start out as not-designed until you find their function and then they become designed? Got it.

    If I had given one or two false positives, I would have failed.

    Congratulations, you did not fail! You did not succeed either, so perhaps next time.

    But not so. You don’t understand the ID theory, do you? Or you are just a liar.

    Well, that depends. So far “ID Theory” has told me that strings printed on paper are designed, which I never disputed and specifically mentioned as irrelevant from the start. Furthermore, Joe and Mung are saying design and you are not. So when “ID theory” makes up its mind, feel free to let me know. In the meanwhile you may continue to call me a liar, whatever makes you feel better.

    statistical and experimental evidence that tends to rule out chance as a plausible explanation.

    Yeah, ID ain’t ruling out anything except that where you find manufactured paper you’ll find a paper mill.

  84. Mung,

    Bet on it.

    The only evidence I’ve seen so far of your understanding of “ID Theory” when presented with a puzzle that should be trivial for “ID Theory” to solve is:

    Two strings of exactly the same length composed of exactly the same 4 characters from the English alphabet, that’s pretty improbable. i’d say designed. so yeah, lump me with Joe.

    So frankly, your opinion of what I do and do not understand with regard to “ID Theory” is irrelevant until and unless you can prove that you can actually do something with “ID Theory” that does not revolve around your misunderstandings of evolution.

  85. Gpuccio,

    So please, show those incremental increases in functional complexity, each of them of low complexity, each of them naturally selectable in respect to what was there before, for most basic protein domains (you can just start with one, then we will see).

    And then presumably you’ll explain how the Intelligent Designer achieved the same?

    Let me save you the trouble, Joe already told me!

    They were designed that way!!!!

  86. What gpuccio fails to address is the rather basic question of how the Designer knows the properties of yet-to-be-created molecules.

    KF punts this question by asserting that the Designer must have capabilities beyond Venter’s. 

    Gpuccio asserts the Designer must be non-material.

    Can anyone say ad-hoc? 

    Exactly how useless is an invented, imaginary sky-fairy having whatever attributes and capabilities and motives needed to explain the gaps that present themselves today, and which will acquire whatever attributes are needed when gaps are closed or new ones discovered?

    One cannot argue against imagination. As Critical Rationalist points out, science advances by imagining explanations.

    The difference between science and fantasy is that science limits its imagination to testable propositions. This is why, even in hard sciences like physics, conjectures that have no testable entailments are considered  to be puffery. Sometimes interesting, but not science.

    The problem with ID is not that it is proven wrong, but that it doesn’t lead to useful research. Consider Douglas Axe. How useful is it to assert that we don’t know the detailed history of proteins? Or that the specific history, if known, would appear improbable, like the list of lotto winners?

    How probable is the specific ancestry of any human being? It would seem that anyone familiar with mathematics would know that the probability of something that has already happened is one.

    What physical law is violated by the string of improbabilities that led to your ancestors meeting? Or the specific lotto winners? Retrospective astonishment is not good mathematics and not science.

  87. It also seems to me Gpuccio is another of the “video evidence or it did not happen” crowd. Whatever evidence you might present is never good enough. 

    Great strides have been made recently in understanding the origin of protein domains yet Gpuccio knew yesterday, knows today and will know tomorrow the explanation already. Before any research was done at all, he knew the answer. Regardless of how much research will be done, he knows the answer. 


    However, exceptions to this rule allow us to begin to determine the process by which novel folds can develop from ancestral folds and possibly even how the first folds came into existence. Various lines of research have shown that thermodynamic stability, designability, functional flexibility and structural drift all play important roles in shaping the distribution and variation of structural families in nature.

    http://www.els.net/WileyCDA/ElsArticle/refId-a0020202.html But blah blah blah, eh Gpuccio? You want this:

    So please, show those incremental increases in functional complexity, each of them of low complexity, each of them naturally selectable in respect to what was there before, for most basic protein domains (you can just start with one, then we will see).

    A step-by-step video, essentially, for stuff that happened in the deep, deep past. And without that you’ll simply dismiss every other bit of evidence that is produced for a natural origin, for whatever spurious reason you think of at the time.

    The only saving grace is that before too much longer, some problems that currently seem intractable due to computing-power limitations will become tractable. So perhaps you’ll get your start-to-end video recording then, but even then you’ll just say “but it’s a simulation, it proves nothing”.

    So my arguments with you, Gpuccio, are not intended to change your mind; you did not make it up on the basis of evidence, so evidence won’t be able to change it.

    I just want to illustrate the stark gulf between claim and reality in the ID community.

  88. Gpuccio would be a hoot on a jury.

    Asked to provide the best explanation for events, he would have to say, in the absence of videotape, that the best explanation would be intervention by non-material entities.

    And of course, videotape is just a simulation and could be faked.

  89. GP will protest that humans are intelligent agents and potential causes of crimes. That’s an empirical fact.

    I would point out that evolution is also an intelligent agent capable of creating new function.

    What you lack in biology and in jury trials is the detailed, step by step history. You have to infer the details and come to the best explanation.

    I would also point out the utter, complete lack of any entity capable of designing biological molecules, other than evolution.

    When you are on a jury, you are generally bound, in your theory-making, to whether a specific person was the agent. You won’t get far with imaginary, invisible, immaterial agents.

  90. In fairness, I should note that the circularity problem did not originate with gpuccio.  Gpuccio’s dFSCI is just a modified version of Dembski’s CSI, which has been plagued by circularity since its inception.  Unfortunately, gpuccio failed to notice and correct the problem he inherited from Dembski.

    Here’s the circularity in Dembski’s argument:

    1. To safely conclude that an object is designed, we need to establish that it could not have been produced by unintelligent natural causes.

    2. We can decide whether an object could have been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).

    3. To determine whether something has CSI, we use a multiplicative formula for SC that includes the factor P(T|H), which represents the probability of producing the object in question via “Darwinian and other material mechanisms.”

    4. We compute that probability, plug it into the formula, and then take the negative log base 2 of the entire product to get an answer in “bits of SC” (see the formula sketch after this list).  The smaller P(T|H) is, the higher the SC value.

    5. If the SC value exceeds the threshold, we conclude that unintelligent processes could not have produced the object.  We deem it to have CSI and we conclude that it was designed.

    6. To summarize:  to establish that something has CSI, we need to show that it could not have been produced by unguided evolution or any other unintelligent process.  Once we know that it has CSI, we conclude that it is designed — that is, that it could not have been produced by unguided evolution or any other unintelligent process.

    7. In other words, we conclude that something didn’t evolve only if we already know that it didn’t evolve. CSI is just window dressing for this rather uninteresting fact.
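    For readers who want the formula behind steps 3–5, Dembski’s Specification monograph gives, as best I can render it in his notation:

    \[
    \mathrm{SC}(T) = -\log_2\!\bigl[\,10^{120} \cdot \varphi_S(T) \cdot P(T \mid H)\,\bigr]
    \]

    where P(T|H) is the chance-hypothesis probability from step 3, φ_S(T) counts the patterns at least as simple as T available to the “semiotic agent” S, and 10^120 caps the probabilistic resources of the observable universe; design is inferred when the value exceeds 1. None of this affects the circularity: the P(T|H) being ruled out is the very thing in dispute.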

    Though the details are slightly different, the same circularity undermines gpuccio’s dFSCI argument.


  91. I didn’t follow that. What your list doesn’t say (as far as I can tell) is that the calculations themselves rest on foregone conclusions. Given sufficient knowledge of conditions, it’s in principle possible to determine that something was unlikely. Given the near-infinity of interdependent variables inherent in reality, it’s pretty safe to say that nearly everything that happens is vanishingly unlikely. We don’t even need much of a sample of these variables to do the calculation closely enough to establish this.

    And this in turn means that one simply cannot induce “design” from looking at an object or event. One must identify and operationally define the design mechanism, and then WATCH it happen. We are wading through a sea of CSI every which way all day long.  This is what I’ve called the “every bridge hand is a miracle” fallacy. Clearly, all bridge hands are chock full of CSI – they’re complex, they’re fully specified, they are all vanishingly improbable.

  92. To be fair, gpuccio doesn’t conclude it’s beyond RMNS unless there’s really lots of CSI.

  93. Flint,

    This is what I’ve called the “every bridge hand is a miracle” fallacy. Clearly, all bridge hands are chock full of CSI – they’re complex, they’re fully specified, they are all vanishingly improbable.

    They’re complex and improbable, but not “fully specified” in the way Dembski and other IDers intend. For a bridge hand to be specified, there has to be some independent reason that it is special to the “semiotic agents” involved, apart from the mere fact that it happened to be dealt to you.

    For example, if I predict ahead of time that I will receive a specific bridge hand, and then I receive exactly the cards I predicted, then that bridge hand is clearly specified, even if it is a thoroughly average hand by normal bridge standards. You would rightly suspect that the dealer and I are in cahoots, that something fishy is going on, or maybe even (if you had ruled out the more mundane possibilities) that I was prescient.  You wouldn’t think it had happened by chance, particularly if I was able to repeat the feat.

    However, if I received the same improbable hand without specifying it in advance, it would be a thoroughly unremarkable event, and no one would take notice.
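    The improbability half of that is just standard combinatorics; as a quick check (nothing Dembski-specific):

    from math import comb

    hands = comb(52, 13)  # 635013559600 distinct 13-card bridge hands
    print(1 / hands)      # ~1.57e-12: any particular hand is vanishingly unlikely

    Yet nobody calls the dealer a cheat over an ordinary deal; it is the advance specification, not the improbability, that makes the predicted hand remarkable.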

    IDers fall prey to many fallacies, but the “every bridge hand is a miracle” fallacy is not one of them. At least, not one that Dembski and gpuccio fall prey to.

  94. I think that one computes (in Dembski’s argument) bits of SI, not bits of SC. SI is a concept originated by Leslie Orgel; the C part comes in as an all-or-none assessment that there are at least 500 bits of SI. If it is present, you say there is CSI.

    That value was chosen to be one that could not show up even once in the whole history of the Universe by pure random happenstance. (Personally, I am willing to acknowledge the meaningfulness of SI as a concept in simple genetic algorithm models, and the reasonableness of saying that a value of SI high enough to constitute SC is implausible as having originated by pure mutation, in the absence of natural selection. Don’t everybody boo at once.)
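    In symbols, using the usual surprisal convention (my paraphrase of the convention described here, not a quotation):

    \[
    \mathrm{SI}(x) = -\log_2 P(x \mid \text{chance}), \qquad \text{CSI present} \iff \mathrm{SI}(x) \ge 500 \text{ bits}.
    \]

    The 500-bit level corresponds to a probability of about 3×10⁻¹⁵¹, comfortably beyond the roughly 10¹⁵⁰ elementary events usually credited to the history of the observable universe.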

    Where keiths is asserting circularity is where natural selection is ruled out as a source of the SI.  Dembski did it differently. He had his Law of Conservation of Complex Specified Information (LCCSI).  That was supposed to show that there could be no combination of deterministic and stochastic processes that could generate SC. It has been disproven on two different grounds: by Jeffrey Shallit and Wesley Elsberry, and by me.

    If gpuccio and others who use SI and SC do not rely on Dembski’s LCCSI theorem, they then need to have some other way of ruling out that natural selection made the SI high enough to be SC. That is where gpuccio invokes the ruling-out of deterministic natural causes, and where there seems to be circularity as he does so.

    (As an aside, yes, Dembski also had a step where deterministic natural causes were ruled out, but he seemed to only invoke that to get rid of rather simple and trivial natural forces. The heavy lifting in arguing that NS could not be responsible for the SI was done by the LCCSI.)
