I think I just found an even bigger eleP(T|H)ant….

I just checked out Seth Lloyd’s paper, Computational Capacity of the Universe, and find, interestingly (although I now remember that this has been mentioned before), that his upper limit on the number of possible operations is 10^120 (i.e. 400 bits) rather than 10^150 (the more generous 500 bits usually proposed by Dembski). However, what I also found was that his calculation was based on the volume of the universe within the particle horizon, which he defines as:

…the boundary between the part of the universe about which we could have obtained information over the course of the history of the universe and the part about which we could not.

In other words, that 400 bit limit is only for the region of the universe observable by us, which we know pretty well for sure must be a minor fraction of the total. However, it seems that a conservative lower limit on the ratio of the size of the entire universe to the part within the particle horizon is 250, and could be as much as 10^23, so that 400 bit limit needs to be raised to at least 100,000, and possibly very much more.

Which rather knocks CSI out of the water, even if we assume that P(T|H) really does represent the entire independent random draw configuration space, and is the “relevant chance hypothesis” for life.

heh.

But I’m no cosmologist – any physicist like to weigh in?

cross-posted at UD

75 thoughts on “I think I just found an even bigger eleP(T|H)ant….”

  1. DNA_Jock:
    I think your math is wrong.
    If the UPB (observable universe) is
    10^80 particles × 10^45 events/sec × 10^25 sec = 10^150 particle-events
    or, per Lloyd, 10^120 logical operations.
    and the total universe is estimated to be 10^23 times larger than the observable universe,
    then the
    UPB (total universe) is
    10^80 × 10^23 particles × 10^45 events/sec × 10^25 sec = 10^173 particle-events
    or, per Lloyd, 10^143 logical operations.
    It isn’t 10^(120 × 10^23).

    Of course, the number remains meaningless, thanks to the original eleP(T|H)ant in the room.

    Thanks!

    I accept the emendation. Although to be honest, I don’t get Dembski’s logic anyway: why should the upper bound on the number of possible bit operations be an “upper probability bound” on the probability of a particular sequence by random draw?

    And even if it does make sense, and even disregarding the eleP(T|H)ant, the fact that we don’t have a good estimate of the size of the unobservable universe simply means we don’t have an upper bound. It’s like saying that the maximum value of something is at least X.
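    The exponent arithmetic in DNA_Jock’s correction can be checked directly. A minimal Python sketch (the variable names are mine); the point is that multiplying the particle count by 10^23 adds 23 to the exponent, rather than multiplying the exponent by 10^23:

```python
# Work in log10 (i.e. with exponents): multiplying counts ADDS exponents.
particles_obs = 80    # log10 of particle count in the observable universe
events_per_sec = 45   # log10 of events per second per particle
seconds = 25          # log10 of the age of the universe in seconds

upb_observable = particles_obs + events_per_sec + seconds   # exponent 150
size_ratio = 23       # log10 of (total universe / observable universe)
upb_total = upb_observable + size_ratio                     # exponent 173, not 120 * 10^23
```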

  2. Lizzie,

    Agreed.
    I think that arguments about the size of the unobservable universe, like arguments about the likelihood of multiverses, have a ‘theological’ taste to them: somewhat unresolvable.
    It should be easier to explain the P(T|H) problem to someone – in particular the fact that any ‘bit counting’ method assumes that P(A & B) = P(A) × P(B). Anyone who has played draw poker can see that 9/47 is not 1/4, and 4/47 is not 1/13.
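    The draw-poker point can be made concrete with a short Python sketch (the hand compositions are my illustrative choices): conditioning on the cards already in your hand changes the draw probabilities, so the draws are not independent.

```python
from fractions import Fraction

# Unconditionally, a random card from a full deck is a heart with
# probability 13/52 = 1/4, and a given rank with probability 4/52 = 1/13.
# But in draw poker you draw from the 47 cards unseen after your 5-card
# hand, and the probabilities depend on what you already hold.

# Holding four hearts: 13 - 4 = 9 hearts remain among the 47 unseen cards.
p_flush_card = Fraction(9, 47)   # about 0.19, not 1/4

# Holding none of some rank: all 4 of that rank remain among the 47.
p_rank_card = Fraction(4, 47)    # about 0.085, not 1/13
```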

  3. There seems to be an implicit sense among a lot of IDers that if you convert a probability into bits, you’ve revealed some underlying “information” content in the probability.

    Which is only trivially true – all converting a probability of a pattern into bits tells you is how much information you’d need in order to know what it was – or alternatively, how much information you’d receive if somebody told you.

    But that’s as true of a meaningless pattern as a meaningful one.

    If you get a meaningful pattern that would be low probability under random generation but high probability under intelligent generation, then transforming the low probability into bits doesn’t tell you how much meaningful information is contained in the pattern. And turning the high probability into bits doesn’t tell you that the information content is low!

    The bit conversion is just wand-waving as far as I can see.

    And the compressibility part is just nuts.

    The only coherent message of Dembski’s 2005 paper is that if a pattern seems to be very unlikely under any non-design hypothesis, it was probably designed.

    Which is of course true, but no less trivial than saying that if the sun seems to be unlikely to shine tomorrow, it probably won’t.
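    The bit-conversion point above can be illustrated with a sketch (the coin-flip example is mine): -log2(p) measures only how improbable a specific outcome is, and it comes out identical for a “meaningful” and a “meaningless” pattern of the same probability.

```python
from math import log2

n = 100          # 100 independent fair coin flips
p = 0.5 ** n     # probability of ANY one particular specific sequence

# "All heads" looks meaningful; an arbitrary jumble does not. Under the
# independent-fair-flip hypothesis both specific sequences have the same
# probability p, so both convert to exactly the same number of bits.
bits = -log2(p)  # 100 bits either way; meaning plays no role
```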

  4. I think the issue is that taking a logarithm is beyond the mathematical capabilities of most intelligent design creationists, so it seems like a compelling advanced mathematical argument to them.

    Yeah, yeah, I’ll see myself to Guano….

  5. Lizzie,

    Although to be honest, I don’t get Dembski’s logic anyway: why should the upper bound on the number of possible bit operations be an “upper probability bound” on the probability of a particular sequence by random draw?

    I think Dembski is trying to argue that the N in Np is a safe upper limit on the number of trials for Np ≥ 1 because there can be no greater number of trials in the entire history of the universe.

    Then, based on ID/creationist notions that atoms and molecules behave like alphabet soups, dice, and junkyard parts, it is easy to “calculate” a probability, p, that is small enough for Np to be much less than 1.

    Taking a logarithm simply slathers on another layer of obfuscation and makes the “calculation” appear to mean something.
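    As a sketch of the Np argument described above (my paraphrase, using Dembski’s usual figures: N = 10^150 trials and a 500-bit specification, so p = 2^-500):

```python
from math import log10

log_N = 150              # log10 of Dembski's universal bound on trials
log_p = -500 * log10(2)  # log10 of p = 2^-500, about -150.5

log_Np = log_N + log_p   # log10(N * p), about -0.5
# Since log_Np < 0, Np < 1, and the event is declared out of reach.
# This only follows under the independent-random-draw H, of course;
# that H is the eleP(T|H)ant.
```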

  6. It will be interesting to read Lizzie’s possible ‘bigger, pink’ eleP(T|H)ant counter-rebuttal of Winston Ewert’s ‘pink’ eleP(T|H)ant rebuttal to Lizzie’s ‘bigger’ eleP(T|H)ant post.

  7. Or maybe Mike “you are not qualified to talk about entropy and evolution if you don’t know how to scale up energies” Elzinga will take a crack at it.

  8. Steve:
    It will be interesting to read Lizzie’s possible ‘bigger, pink’ eleP(T|H)ant counter-rebuttal of Winston Ewert’s ‘pink’ eleP(T|H)ant rebuttal to Lizzie’s ‘bigger’ eleP(T|H)ant post.

    Well, it will be when Winston gets a round tuit.

  9. Mike Elzinga:
    Lizzie,

    I think Dembski is trying to argue that the N in Np is a safe upper limit on the number of trials for Np ≥ 1 because there can be no greater number of trials in the entire history of the universe.

    Then, based on ID/creationist notions that atoms and molecules behave like alphabet soups, dice, and junkyard parts, it is easy to “calculate” a probability, p, that is small enough for Np to be much less than 1.

    Taking a logarithm simply slathers on another layer of obfuscation and makes the “calculation” appear to mean something.

    Yes, but trials of draws of sequences aren’t single-bit operations anyway.

  10. Lizzie: Yes, but trials of draws of sequences aren’t single-bit operations anyway.

    I’m not sure exactly how Dembski et al. think of a trial. It could be that they throw together some number n of atoms and/or molecules and, if the specified structure is not realized, do it again; where “do it again” is another trial.

    Or they could think of it as grabbing one atom or molecule at a time and throwing it at another atom or molecule. If that “trial” doesn’t start building the structure, start over. If it does start producing the structure, grab another atom or molecule and throw it at the previous collection. If it continues to build the structure, keep going. If not, start all over.

    In either case, they seem to be selecting from an “ideal gas” of inert atoms and molecules, because they use coins and strings of letters as stand-ins for atoms and molecules.

    If I were to make a guess as to which procedure they would consider a trial, I would guess the first. The examples Dembski and Marks give in their papers suggest – say, in the case of a combination lock – that all digits are “entered” before the lock handle is tested to discover whether or not the combination was a success.

    If one were to use physics and chemistry to get an estimate on the probability of a specified assembly, one would have to know something about the recipe or recipes that produced the structure in the first place. Whether there is some stoichiometry that has to be met or whether there is some catalytic process involved or whether there is a set of energy cascades in which specific products are formed and shuttled into environments where they can stabilize is still not known. But we are making progress.

    What we do know is that it wasn’t an ideal gas of inert atoms and molecules. And these structures certainly aren’t built in one go.

  11. Mike Elzinga: If one were to use physics and chemistry to get an estimate on the probability of a specified assembly, one would have to know something about the recipe or recipes that produced the structure in the first place.

    Exactly. Another thing to carve on Mount Rushmore.

  12. Actually Lizzie, Winston spent his tuit on the 17th….em, that was the reason I posted…

    ching..ching…here’s a tuit or two for you….

    Lizzie: Well, it will be when Winston gets a round tuit.

  13. Winston Ewert puts paid to the notion that proponents of intelligent design “slather” layers of “obfuscation” in order to make calculations “appear” to mean “something”.

    Design detection is a notoriously difficult thing to quantify. Folks like Marks, Dembski and Ewert should be applauded for at least making the attempt.

    Let’s also put paid to the notion that proponents of ID don’t put in the labor to justify their claims.


    Mike Elzinga: “Taking a logarithm simply slathers on another layer of obfuscation and makes the ‘calculation’ appear to mean something.”

  14. Ewert asserts that a default to a design hypothesis is warranted, without telling us what a ‘design hypothesis’ is. What’s new?

  15. Design detection is a notoriously difficult thing to quantify. Folks like Marks, Dembski and Ewert should be applauded for at least making the attempt.

    Let’s also put paid to the notion that proponents of ID don’t put in the labor to justify their claims.

    Fair enough. Let’s also put paid to the notion that they have (so far) succeeded.

  16. Reading Ewert’s piece, I think there is a certain symmetry: When we talk about common ancestry, creationists often reply by invoking “Common Design”. The proper response to that is that Common Design is not a scientific hypothesis, as it predicts anything and everything, including all possible things that happened and all possible things that didn’t happen.

    I’m not claiming that Ewert’s argument discusses the issue of common descent. But Ewert does dismiss unknown “chance” mechanisms because

    … rejecting the conclusion of design for this reason requires the willingness to accept an unknown chance hypothesis for which you have no evidence solely due to an unwillingness to accept design.

    A difference between that and the objection to Common Design is that the latter involves unknown mechanisms that we cannot discover and about which we can make no prediction. By contrast, our unknown mechanisms are unknown conditions for natural selection and unknown mutations in proteins — both of which invoke no mysterious unknowable events, just ordinary mutations and ordinary natural selection. They are merely at present unknown.

    If Dembski had been successful, in some general way, in ruling out all ways that natural selection could act, Ewert would have some case, but as Dembski has not done this, Ewert has no case.

  17. As a footnote, the photo of the neon sign of the Elephant Car Wash used in Ewert’s reply is of a well-known Seattle landmark. It is located on Denny Way, near the Space Needle and less than a mile from the offices of the Discovery Institute.

    It is almost as good a graphic as the one Elizabeth used of the patterned elephant in the living room.

  18. Joe Felsenstein:
    As a footnote, the photo of the neon sign of the Elephant Car Wash used in Ewert’s reply is of a well-known Seattle landmark. It is located on Denny Way, near the Space Needle and less than a mile from the offices of the Discovery Institute.

    It is almost as good a graphic as the one Elizabeth used of the patterned elephant in the living room.

    I knew I’d seen it before!

  19. Sure, that might have been the altruistic ‘save others the bother and tedium of locating the one of two locations where Ewert posts’ thing to do. But you have to be in an exceptionally giving mood to do so.

    But as I was/am confident that Lizzie does in fact scan the ID blogs (where Ewert scribbles) when tuits eventually fill her cookie jar, I figure Lizzie’s nose knows where those particular posts party.

    But if you were in a roundabout way asking if I could make it easier on you, yeah sure, a link for comfort it is (next time of course).

    keiths:
    Steve,

    You should learn how to

    a) be explicit, and
    b) post links.

    Lizzie,

    Here is Ewert’s response. Odd that he didn’t drop by and let you know that he had posted it.

  20. Steve,

    Sure, that might have been the altruistic ‘save others the bother and tedium of locating the one of two locations where Ewert posts’ thing to do. But you have to be in an exceptionally giving mood to do so.

    I see you’re a member of the William J. Murray school of ‘morality’.

    But as I was/am confident that Lizzie does in fact scan the ID blogs (where Ewert scribbles) when tuits eventually fill her cookie jar, I figure Lizzie’s nose knows where those particular posts party.

    Many of us monitor UD but not ENV. ENV is generally pretty boring, considering that they don’t even allow comments.

    But if you were in a roundabout way asking if I could make it easier on you, yeah sure, a link for comfort it is (next time of course).

    Thank you.
