A list of things for which CSI has been determined

Intelligent Design advocates are still talking about CSI and determining the value of it.

CSI measures whether an event X is best explained by a chance hypothesis C, or some specification S.

So I’d like this thread to be a list of biological entities and the value of CSI that has been determined for each.

If no entries are made, then I believe that would demonstrate that CSI might measure X, Y, or Z, but that it never actually has done so.

Out of interest, what is the CSI of a bacterial flagellum?

133 thoughts on “A list of things for which CSI has been determined”

  1. CSI is quantifiable (for those systems where we can actually calculate the probability of their having emerged via unguided random and/or non-random processes), whereas irreducible complexity is not. That is what makes CSI so useful when arguing for design.

    Can we all agree on specified complexity?

    ha.

  2. This will be the shortest thread ever…

    …not that it will stop the ID’ers from peddling their fantasies.

  3. You are, I think, pushing the idea that CSI is meaningless. I disagree. Some points:
    1. The definition that you give (due to Eric Holloway) is wrong. A specification is a set of genotypes or phenotypes (in the biological case). Not a cause of anything. We might use, for example, “can fly faster than 5 meters per second” as our specification.
    2. Many people have noted that functional information, defined by Jack Szostak and Robert Hazen, and used by Hazen, Carothers, Griffin and Szostak (2007) on experimental data to measure function of RNA, is basically a form of specified information. None of those people are engaged in using the concept to argue for Design Intervention.
    3. The issue, for me, is not whether CSI is meaningful, but whether there is some valid argument that CSI can only be produced by Design, rather than by natural selection.
    4. By the way, CSI is not a number but a yes/no designation. SI can be measured numerically. CSI is present when the SI exceeds about 500 bits. CSI by definition cannot be measured, any more than “weighs more than a ton” can be measured.

    I am probably biased because I introduced a similar concept to SI, “adaptive information”, in 1978. (Leslie Orgel introduced SI in 1973, I later realized).
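
    To make point 4 concrete, here is a minimal sketch (mine, not Joe’s, and the probability value is hypothetical) of SI as a measurable quantity and CSI as a yes/no designation at a ~500-bit threshold:

    import math

    def si_bits(p_spec: float) -> float:
        """Specified information, in bits, for a specification whose outcomes have total probability p_spec."""
        return -math.log2(p_spec)

    def has_csi(p_spec: float, threshold_bits: float = 500.0) -> bool:
        """CSI is a designation, not a quantity: present iff the SI exceeds the threshold."""
        return si_bits(p_spec) > threshold_bits

    p = 1e-160                 # hypothetical probability, purely illustrative
    print(si_bits(p))          # ~531.5 bits of SI
    print(has_csi(p))          # True: SI exceeds 500 bits, so CSI is present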

  4. Joe Felsenstein: The issue, for me, is not whether CSI is meaningful, but whether there is some valid argument that CSI can only be produced by Design, rather than by natural selection.

    Yes, that’s really it of course.

    Perhaps Eric could give some examples of “CSI measuring whether an event X is best explained by a chance hypothesis C, or some specification S” and connect that to Intelligent Design somehow?

  5. Joe Felsenstein: You are, I think, pushing the idea that CSI is meaningless. I disagree.

    No, not at all. I’m just sick of seeing people like Eric claim it has validity with regard to it demonstrating the truth of Intelligent Design.

    I just want to see Eric et al pony up and do some non-theoretical calculations that relate to actual biology.

  6. I think gpuccio over at UD tried to attach numbers to proteins that somewhat resembled SI

  7. RodW: I think gpuccio over at UD tried to attach numbers to proteins that somewhat resembled SI

    Starting quite low level then!
    I had a quick look but there’s too many pages of KF waffling on. I did find this:

    So let’s not beat around the bushes. Let’s be willing to infer design and forcefully state it, particularly in obvious cases like OOL, and let the second order questions (who is the designer?, who designed the designer?, why was the designer so mean?, why is there suffering?, yada, yada) fall where they may.

    a different Eric

  8. RodW,

    I think gpuccio over at UD tried to attach numbers to proteins that somewhat resembled SI

    Right, but his numbers were useless because they neglected selection.

  9. keiths,

    Right, but his numbers were useless because they neglected selection.

    Selection needs an advantage to select for. Gpuccio’s calculations were for proteins. You need to have a functional protein system in order to have something to select for.

  10. OMagain,

    Out of interest, what is the CSI of a bacterial flagellum?

    Using Szostak’s number of 1/10^11 for the probability of molecular binding per 100 AAs, and ignoring function, you need 40 proteins to bind together. Assuming 100-AA proteins, that’s a combined probability of 1/10^440, or around 1500 bits. The real number, considering that these assumptions are grossly conservative, would probably be over 10000 bits.
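
    Checking the arithmetic behind that estimate, taking colewd’s assumptions at face value (neither input is an established value):

    import math

    p_per_binding = 1e-11   # colewd's assumed binding probability per 100-AA protein
    n_proteins = 40         # assumed number of proteins that must bind

    # Work in log space: 1e-11 ** 40 = 1e-440 would underflow an ordinary float.
    bits = n_proteins * -math.log2(p_per_binding)
    print(bits)             # ~1461.9 bits, i.e. "around 1500 bits"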

  11. colewd,

    Are you suggesting that every time a bacterial flagellum develops the Designer is in action?

  12. colewd: Using Szostak’s number of 1/10^11 for the probability of molecular binding per 100 AAs, and ignoring function, you need 40 proteins to bind together. Assuming 100-AA proteins, that’s a combined probability of 1/10^440, or around 1500 bits. The real number, considering that these assumptions are grossly conservative, would probably be over 10000 bits.

    Thanks. And what is that figure the input to, such that Intelligent Design determines design? Or is that outside your pay grade? 😛

  13. Bill,

    That’s just the microscopic version of the “tornado in a junkyard” argument.

    Do better.

  14. keiths,

    That’s just the microscopic version of the “tornado in a junkyard” argument.

    What’s wrong with the tornado in the junkyard argument, other than the fallacious assertion that it has been debunked?

  15. colewd: What’s wrong with the tornado in the junkyard argument, other than the fallacious assertion that it has been debunked?

    Besides the fact that your IDiot probability calculations always make the same false assumption? That all the pieces of the protein had to self-assemble all at once, instead of the known pathway of evolving slowly over time from simpler precursors?

    And yes, Bill, simpler precursors are functional and selectable. You’ve been shown the evidence dozens of times, yet you always come back and repeat the same falsified claim.

  16. colewd: The quantity of CSI in the flagellum.

    Which is? Your usual answer of “looks like GOBS to me!!” isn’t acceptable.

  17. Joe Felsenstein: The issue, for me, is not whether CSI is meaningful, but whether there is some valid argument that CSI can only be produced by Design, rather than by natural selection.

    This is the improbability of ASC proof again. Natural selection is a stochastic process, and thus it forms a random variable, to which the improbability of ASC applies.

    Keiths will insist I’m dodging the issue by using ASC instead of CSI, so let me give some background for my preference.

    The problem with CSI is technical, but I’ll briefly explain. The specification portion of CSI does not necessarily define a probability distribution or semimeasure. It can form a harmonic series, which diverges. With an infinite number of events, and a divergent series, the random variable can be expected to generate arbitrarily large amounts of CSI. So, CSI is not conserved, due to this technicality. This is one of the reasons CSI needs Dr. Montañez’s notion of a kardis, which is essentially a normalizing function so that the specification in CSI is a probability distribution.

    ASC does not have this problem because prefix Kolmogorov complexity forms something called the universal distribution, which is actually a semimeasure. But, it doesn’t diverge, so it has a nice conservation bound.
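
    A toy numerical contrast along the lines Eric describes (my sketch, not his formalism): a harmonic weighting over events grows without bound, while weights of the form 2^-length over a prefix-free code sum to at most 1 (the Kraft inequality), which is what a semimeasure requires:

    # Harmonic "specification" weights: the partial sums diverge.
    harmonic = sum(1.0 / n for n in range(1, 1_000_001))
    print(harmonic)   # ~14.39, and unbounded as the range grows

    # Prefix-free code with codeword lengths 1, 2, 3, ... ("0", "10", "110", ...):
    # the Kraft sum stays at or below 1, as a semimeasure requires.
    kraft = sum(2.0 ** -length for length in range(1, 1_000_001))
    print(kraft)      # 1.0 to float precision, never more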

    As for what intelligence is, it’s essentially defined negatively from an ASC perspective. Intelligence must be something other than a stochastic process.

    I have a number of personal speculations that I’ve thrown out in another OP (libertarian free will, halting oracle, intentionality), but those are my own opinions and not official positions of the ID movement.

    Finally, I’m sure someone will take issue with this kind of negative definition and call it ‘god of the gaps’. Now I find the negative definition as dissatisfying as the next person, but we also have to be careful with our terminology. The ID argument is a different dissatisfying beast than the ‘god of the gaps’ because it proves in principle that there is no kind of stochastic process that can generate ASC. ‘god of the gaps’ merely means we don’t currently know what sort of X will fill the gap, so we call it ‘god’ or ‘evolution’ or ‘intelligence’ or ‘naturalism’ or something else, and we stop examining possible Ys that might fill the gap. However, the ID gap is fundamental. There is no possible Y that can fill the gap, so we can disqualify all Ys (stochastic processes).

    This is similar to Bell’s theorem, another analogy you all don’t like, but it is still an accurate analogy. Bell showed that whatever is causing the entanglement, it cannot be a local hidden variable. So, he in principle showed there is no such Y, where Y is a local hidden variable, that can fill the X gap. So, we call the gap ‘entanglement’ but we don’t really know much more than that.

    This is the same sort of thing with the ID argument. We know there is no Y, where Y is a stochastic process, that can generate ASC. So, we have an unfilled X gap, and we say it is filled by ‘intelligence’.

    Now, this is also different than normal ‘god of the gaps’ in a second way. We see other X gaps in the human process that are filled by a Z we call ‘human intelligence’. And so by abductive reasoning we infer that the X gaps we see in nature are filled by a similar Z.

    A final counter is that perhaps if we analyze the ‘human intelligence’ Z we will find it can be reduced to a stochastic process, so it is another Y. Haven’t we then demonstrated X can indeed be filled by Y? No, because the in-principle ID argument still remains, and shows that if it turns out ‘human intelligence’ is actually a stochastic process, we will find that ‘human intelligence’ itself possesses an X gap that must be filled by yet another thing that must be a Z and not a Y.

    But, what if, after breaking ‘human intelligence’ down into its smallest components we find there is no possibility of a Z, and ‘human intelligence’ is stochastic processes all the way down? At that point I think we can say ID has indeed been falsified. However, what would this mean?

    We would essentially have to demonstrate that everything we call order and design around us, both in nature and in human artifacts, is actually extremely common and the expected outcome of a maximum-entropy distribution. So, we can test this idea by flipping a coin 100 times and seeing if the outcome appears intuitively orderly and designed. But it is not just a matter of the 100 coin flips producing a “warms my heart” sort of intuition. It has to be objectively orderly and designed, such that it tells us something about the universe, independent of the coin flips, that we could not know otherwise: the winning lottery number, Bill Gates’ bank account PIN, a Bitcoin hash, or a sequence that becomes a beautiful work of art when I open it as a PNG file on my computer.

    So, in the comfort of your own home, you too can disprove or prove ID by flipping a coin and seeing if you become a millionaire. If you do not, then you have demonstrated the veracity of ID, because in order for the deconstruction of ‘human intelligence’ to disprove ID, you must demonstrate that independently specified design and order is in fact extremely common, so common that it is the expected outcome of a random process.
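
    For scale, the arithmetic behind the coin-flip test, as a sketch (the “specification” here is any single target string fixed before the flips):

    import math
    import random

    flips = "".join(random.choice("HT") for _ in range(100))
    p_one_target = 2.0 ** -100        # chance the flips match one pre-specified string
    print(p_one_target)               # ~7.9e-31
    print(-math.log2(p_one_target))   # 100.0 bits of SI, had the match occurred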

  18. colewd: How do you know?

    I read the primary scientific literature.

    Why do you rote regurgitate the same idiotic “it’s too improbable!!” calculations despite having seen them refuted dozens of times? Is it stubbornness or just garden variety stupidity?

  19. EricMH: to disprove ID you must demonstrate that independently specified design and order is in fact extremely common, so common that it is the expected outcome of a random process.

    When did ID ever demonstrate that biological life is independently specified? All ID has ever done is take empirical observations and make the post hoc claim that what was found was somehow pre-specified.

  20. Maybe it would be conceptually simpler to imagine billions of tornados shuffling the same junkyard, except that every time two bits of junk happen to connect usefully, those two are kept in that configuration. Every few thousand tornados, another bit or two of junk might join the growing (and always useful) device.

    From this model, I think we could predict that SOME useful devices would occur, although their utility might be very limited (like doorstops, maybe) for the first few billion tornados. But given a tornado a week, and a few billion years…

    The key concepts here are: unlimited chances, a very broad notion of utility, retention of useful coincidences, no way to know what might occur beforehand. Oh, and for the ID folks, the notion that once something eventually happens that looks complicated, we have the need to believe that whatever it is was intended by some higher power according to some predefined specification, with no possibility of randomness allowed and no fair looking at all the failures.

  21. This is amusing.

    After evading Joe’s argument for weeks (and getting called on it repeatedly), Eric finally relented, confronted the argument on its own terms, and immediately got himself into trouble.

    So what does he do? He comes to this new thread and continues as if it never happened.

    Come on, Eric.

  22. colewd:
    keiths,

    What’s wrong with the tornado in the junkyard argument, other than the fallacious assertion that it has been debunked?

    There is nothing wrong with it when it is used against the claim that complex entities originate all at once at random from a pile of individual small components.

    Since this is not a claim that anyone ever makes about biological features, it is entirely irrelevant for this debate.

    That is what is wrong with it.

  23. keiths:

    So what does he do? He comes to this new thread and continues as if it never happened.

    Perhaps he is a Turing Machine without a halting oracle?

  24. EricMH: Natural selection is a stochastic process, and thus it forms a random variable, to which the improbability of ASC applies.

    If that bothers you, then why didn’t you stick with Joe’s original example? There is no drift or mutation in it, only natural selection. Hence it is completely deterministic.

  25. EricMH: Now I find the negative definition as dissatisfying as the next person, but we also have to be careful with our terminology. The ID argument is a different dissatisfying beast than the ‘god of the gaps’ because it proves in principle that there is no kind of stochastic process that can generate ASC. ‘god of the gaps’ merely means we don’t currently know what sort of X will fill the gap, so we call it ‘god’ or ‘evolution’ or ‘intelligence’ or ‘naturalism’ or something else, and we stop examining possible Ys that might fill the gap. However, the ID gap is fundamental. There is no possible Y that can fill the gap, so we can disqualify all Ys (stochastic processes).

    The “ID gap” made me smile. So the ID gap is not intended as a gap for God? Can you tell us with which non-stochastic X you provisionally filled your personal ID gap?

  26. I think I can spot that old bugbear again, conflating ‘stochastic’ with ‘equiprobable’ and masking that sin by using the word ‘random’.

    Evolution is (partly) a stochastic process but it is most certainly not equiprobable. Stochastic processes are not deterministic, but that doesn’t mean that they cannot be tightly constrained or that they allow any and all conceivable outcomes.

  27. EricMH,

    You’re well aware, at this point, that Joe Felsenstein has been referring to CSI as defined in Dembski’s (2002) No Free Lunch.

    Joe Felsenstein: The issue, for me, is not whether CSI is meaningful, but whether there is some valid argument that CSI can only be produced by Design, rather than by natural selection.

    Yet you respond:

    EricMH: The problem with CSI is technical, but I’ll briefly explain. The specification portion of CSI does not necessarily define a probability distribution or semimeasure. It can form a harmonic series, which diverges. With an infinite number of events, and a divergent series, the random variable can be expected to generate arbitrarily large amounts of CSI. So, CSI is not conserved, due to this technicality. This is one of the reasons CSI needs Dr. Montañez’s notion of a kardis, which is essentially a normalizing function so that the specification in CSI is a probability distribution. [emphasis added]

    As I have told you before, there is no “specification portion of CSI” in No Free Lunch. If you remain convinced of the contrary, then the appropriate response is not to double down on your vague “explaining,” but instead to quote the passage(s) in No Free Lunch where Dembski gives the “specification portion of CSI.”

    Really, Eric — it’s time to put up, or shut up.

  28. Joe Felsenstein: By the way, CSI is not a number but a yes/no designation. SI can be measured numerically. CSI is present when the SI exceeds about 500 bits. CSI by definition cannot be measured, any more than “weighs more than a ton” can be measured.

    Yeah, I mistakenly told Eric that the quantity of complex specified information for an event $T$ with a detachable specification was $-\!\log_2 P(T)$ bits. I should have omitted “complex” at the beginning.

    Dembski regards a pair $(T, E),$ with $E \subseteq T \subseteq \Omega,$ as specified information when the “conceptual event” $T$ can be specified independently of the “physical event” $E = \{ \omega \}$ (a singleton). Then the quantity of specified information is $-\!\log_2 P(T)$ bits.

    I don’t think it’s widely appreciated that if Dembski specified a bacterial flagellum as a “bidirectional rotary motor-driven propeller,” then what he needed to calculate SI was not just the probability of a particular flagellar structure, but the probability of the set $T$ of all possible outcomes matching the specification “bidirectional rotary motor-driven propeller.”
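
    To put hypothetical numbers on that bookkeeping (both probabilities here are made up for illustration): if the full matching set had probability $P(T) = 2^{-520},$ the SI would be $-\!\log_2 P(T) = 520$ bits, clearing the 500-bit threshold; but if “bidirectional rotary motor-driven propeller” picked out a much larger set with $P(T) = 2^{-400},$ the SI would be only 400 bits, and no CSI would be present. The breadth of $T$ does all the work.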

  29. faded_Glory,

    There is nothing wrong with it when it is used against the claim that complex entities originate all at once at random from a pile of individual small components.

    Since this is not a claim that anyone ever makes about biological features, it is entirely irrelevant for this debate.

    That is what is wrong with it.

    Are you claiming that if the tornado showed up at the junkyard daily for a hundred million years, you would get a 747?

    Fred’s analogy was illustrating the difficulty of finding function in large combinatorial spaces.

  30. Tom English,

    I don’t think it’s widely appreciated that if Dembski specified a bacterial flagellum as a “bidirectional rotary motor-driven propeller,” then what he needed to calculate SI was not just the probability of a particular flagellar structure, but the probability of the set T of all possible outcomes matching the specification “bidirectional rotary motor-driven propeller.”

    Do you really think this helps solve the combinatorial problem that evolution by natural selection is facing? How many combinations of 100,000 nucleotides can reliably build a rotary motor (any molecular rotary motor)? A million, or a billion? Unless it’s almost 4^100000, you won’t get there with a trial-and-error search.

    Simple estimation tells you that this theory is not practical.

  31. colewd:
    faded_Glory,

    Are you claiming that if the tornado showed up at the junkyard daily for a hundred million years, you would get a 747?

    Fred’s analogy was illustrating the difficulty of finding function in large combinatorial spaces.

    No, I am not claiming that. How on Earth did you get that from my post?

    Do you really not understand the difference between generating complexity all at once and generating it stepwise through cumulative selection? Do hundreds of millions of years of tornados constitute cumulative selection? Do you really not understand what Weasel is all about?

  32. faded_Glory,

    Do you really not understand the difference between generating complexity all at once and generating it stepwise through cumulative selection? Do hundreds of millions of years of tornados constitute cumulative selection? Do you really not understand what Weasel is all about?

    What cumulative selection are you talking about in this case? You need self-replication to get started. What do you think Weasel is about, other than validating intelligent design using an algorithmic search?

  33. colewd: What’s wrong with the tornado in the junkyard argument, other than the fallacious assertion that it has been debunked?

    There’s no natural selection in it.

  34. EricMH: This is the improbability of ASC proof again. Natural selection is a stochastic process, and thus it forms a random variable, to which the improbability of ASC applies.

    […]

    Man you write so many words, and yet there’s zero connection to reality.

    I’m sitting here reading this wall of complete and utter bullshit technobabble, and I’m thinking, at what point does this Bozo proceed to provide any goddamn fucking connection to reality that shows that some real genetic sequence X could not have evolved incrementally from some ancestral state Y?

    That’s right, nowhere. No. Fucking. Where.

    You’re a bullshitter, and nobody but religious sycophants are buying this bullshit.

  35. colewd: What cumulative selection are you talking about in this case?

    The one where adaptive mutations occur and fix over multiple consecutive generations.

  36. Tom,

    I don’t think it’s widely appreciated that if Dembski specified a bacterial flagellum as a “bidirectional rotary motor-driven propeller,” then what he needed to calculate SI was not just the probability of a particular flagellar structure, but the probability of the set T of all possible outcomes matching the specification “bidirectional rotary motor-driven propeller.”

    Actually, no. Even that is over-specific — a case of drawing the bullseye too narrowly around the arrow. From earlier in the thread:

    Eric,

    Even Dembski’s application of “specification” is flawed.

    He introduced it to avoid the metaphorical problem of drawing a bullseye around the arrow after it had already landed. For example, in the case of the bacterial flagellum, he reasoned that the evolutionary target should not be considered to be the exact flagellum we see today, but rather any object satisfying the specification of “level 4 concept or less”:

    For a less artificial example of specificational resources in action, imagine a dictionary of 100,000 (= 10^5) basic concepts. There are then 10^5 1-level concepts, 10^10 2-level concepts, 10^15 3-level concepts, and so on. If “bidirectional,” “rotary,” “motor-driven,” and “propeller” are basic concepts, then the molecular machine known as the bacterial flagellum can be characterized as a 4-level concept of the form “bidirectional rotary motor-driven propeller.” Now, there are approximately N = 10^20 concepts of level 4 or less, which therefore constitute the specificational resources relevant to characterizing the bacterial flagellum…
    We may therefore think of the specificational resources as allowing as many as N = 10^20 possible targets for the chance formation of the bacterial flagellum, where the probability of hitting each target is not more than p. Factoring in these N specificational resources then amounts to checking whether the probability of hitting any of these targets by chance is small, which in turn amounts to showing that the product Np is small.
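
    Plugging illustrative numbers into that check (the value of p below is hypothetical, not a measured probability):

    N = 10 ** 20   # specificational resources, per the quoted passage
    p = 1e-175     # hypothetical per-target probability
    Np = N * p
    print(Np)      # 1e-155: below Dembski's 10^-150 universal probability bound,
                   # so the chance hypothesis would be rejected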

    There are lots of problems with this, but perhaps the biggest one is that the bullseye is still being drawn too narrowly. Evolution doesn’t care whether a concept is “level 4 or less”, and it certainly doesn’t care whether something can be described as a “bidirectional rotary motor-driven propeller.” Evolution doesn’t care about anything except fitness, and the only legitimate target is therefore “anything at all that would sufficiently increase fitness, starting from a given ancestral population”. (Keeping in mind that the fitness landscape changes over time.)

    Good luck to anyone trying to quantify that.

    CSI is hopeless. Dembski seems to have realized that and moved on.

  37. Rumraket,

    The one where adaptive mutations occur and fix over mutiple consecutive generations.

    This is an OOL discussion.

  38. colewd:
    faded_Glory,

    What cumulative selection are you talking about in this case? You need self-replication to get started. What do you think Weasel is about, other than validating intelligent design using an algorithmic search?

    You don’t start with a fully formed flagellum.

    And no, Weasel is not about ID or not ID; that is tangential to the program. Don’t confuse the model with what is being modelled.

  39. faded_Glory,

    Don’t confuse the model with what is being modelled.

    What is being modeled? A sequence can find itself with an algorithm. The existence of the target is brought about by an English sentence generated by a mind.

    The discussion was about Fred Hoyle’s comment, which was based on OOL.

  40. colewd,

    Actually, the discussion was about a list of biological entities for which the CSI has been calculated.

    Since you can’t name any, which is rather embarrassing, you are trying to sidetrack the discussion to something else, which is duly noted.

  41. colewd: This is an OOL discussion.

    So in your view all the CSI and ASC gibberish drooled out by Eric, Dembski and so on is irrelevant to the theory of evolution? Okay.

  42. faded_Glory,

    No, it isn’t. OOL did not involve flagella.

    You initiated the discussion with a comment regarding the tornado in a junkyard, and that is what Fred Hoyle used to comment on the chance of life arising on Earth by chance.

  43. Rumraket,

    So in your view all the CSI and ASC gibberish drooled out by Eric, Dembski and so on is irrelevant to the theory of evolution? Okay.

    Not when you are discussing the steps from the start of a protein to selectable function. The hard part is to demarcate selectable function, as proteins are often interdependent with other proteins.

    I honestly think the selection claim is OK on the micro level, based on Darwin’s observations and other experiments, but it has nothing to do with the formation of de novo proteins. There is not enough empirical basis for the claim.

  44. colewd, to faded_Glory:

    You initiated the discussion with a comment regarding the tornado in a junkyard, and that is what Fred Hoyle used to comment on the chance of life arising on Earth by chance.

    That was me, not faded_Glory, and the topic was the flagellum, not OOL.

    Come on, Bill. The information is right here in the thread. Why can’t you read it for yourself?

  45. keiths,

    colewd:
    keiths,

    What’s wrong with the tornado in the junkyard argument, other than the fallacious assertion that it has been debunked?

    Faded Glory
    There is nothing wrong with it when it is used against the claim that complex entities originate all at once at random from a pile of individual small components.

    Since this is not a claim that anyone ever makes about biological features, it is entirely irrelevant for this debate.

    That is what is wrong with it.

    FG’s first comment was regarding Hoyle’s tornado analogy. Hoyle’s argument was specifically about first life.
