I think I just found an even bigger eleP(T|H)ant….

I just checked out Seth Lloyd’s paper, Computational Capacity of the Universe, and found, interestingly (although I now remember that this has been mentioned before), that his upper limit on the number of possible operations is 10^120 (i.e. about 400 bits) rather than 10^150 (the more generous 500 bits usually proposed by Dembski). However, what I also found was that his calculation was based on the volume of the universe within the particle horizon, which he defines as:

…the boundary between the part of the universe about which we could have obtained information over the course of the history of the universe and the part about which we could not.

In other words, that 400-bit limit is only for the region of the universe observable by us, which we know pretty well for sure must be a minor fraction of the total. However, it seems that a conservative lower limit on the ratio of the entire universe to the part within the particle horizon is 250, and it could be as much as 10^23, so that 400-bit limit needs to be raised to at least 100,000 bits, and possibly very much more.
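To keep the units straight, the “bits” here are just the base-2 log of the number of possible operations. A minimal sketch of that conversion (Python, illustrative only):

```python
import math

# Convert a count of possible operations/events into an equivalent "bit budget"
def ops_to_bits(n_ops):
    return math.log2(n_ops)

print(ops_to_bits(1e120))   # Lloyd's bound:   ~398.6 bits, i.e. roughly "400 bits"
print(ops_to_bits(1e150))   # Dembski's bound: ~498.3 bits, i.e. roughly "500 bits"
```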

Which rather knocks CSI out of the water, even if we assume that P(T|H) really does represent the entire independent random draw configuration space, and is the “relevant chance hypothesis” for life.

heh.

But I’m no cosmologist – any physicist like to weigh in?

cross posted at UD

75 thoughts on “I think I just found an even bigger eleP(T|H)ant….”

  1. On the length scales that we can see, the universe does indeed appear pretty flat. That means it is not so curved that the observable universe is likely to contain the entire volume, and so the whole is indeed likely to be far larger.

  2. Looks like Dembski based his bound on the observable universe too:

    From The Chance of the Gaps:

    In the observable universe, probabilistic resources come in very limited supplies. Within the known physical universe there are estimated to be around 10^80 elementary particles. Moreover, the properties of matter are such that transitions from one physical state to another cannot occur at a rate faster than 10^45 times per second. This frequency corresponds to the Planck time, which constitutes the smallest physically meaningful unit of time. Finally, the universe itself is about a billion times younger than 10^25 seconds (assuming the universe is between ten and twenty billion years old). If we now assume that any specification of an event within the known physical universe requires at least one elementary particle to specify it and cannot be generated any faster than the Planck time, then these cosmological constraints imply that the total number of specified events throughout cosmic history cannot exceed

    10^80 x 10^45 x 10^25 = 10^150.

    It follows that any specified event of probability less than 1 in 10^150 will remain improbable even after all conceivable probabilistic resources from the observable universe have been factored in. A probability of 1 in 10^150 is therefore a universal probability bound.
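    A tiny sketch checking that arithmetic (Python, purely illustrative; the three factors are just the ones Dembski states above):

    ```python
    import math

    particles    = 1e80   # Dembski's estimate of elementary particles in the observable universe
    max_rate     = 1e45   # his stated maximum transitions per second (tied to the Planck time)
    max_age_secs = 1e25   # his generous upper bound on the age of the universe, in seconds

    total_events = particles * max_rate * max_age_secs
    print(f"{total_events:.0e}")      # 1e+150 specified events
    print(math.log2(total_events))    # ~498 bits, the usual "500 bit" threshold
    ```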

  3. As far as I know, there is no upper limit on the size of the physical universe.

  4. I thought we had decided that there is no unobservable part of the universe. Didn’t we have that discussion?

    Faster than light expansion of space will redshift light, but not “disconnect” one region from another.

  5. FTL expansion does disconnect things eventually. If you could send a photon at an object that we can see receding faster than c, then it would never get there unless the universe starts to contract. At some point in the past, however, we could have sent photons there, but not any more.

  6. The whole point of the UPB in the CSI discussion is to establish a low probability, so low that we cannot plausibly explain an adaptation by saying that a random mutation process caused it. (Or, with the more recent CSI definition, that random processes including natural selection caused it).

    I doubt that appealing to a much bigger universe helps us explain this — are we really happy saying that life on earth, with all its adaptations, is a collection of random accidents? Surely most pro-evolution commenters here would not say that, they would argue that natural selection was the explanation for ostensibly improbable events, by selecting the genetic basis for the adaptations out of the mutational gemisch.

    Of course, the details of each adaptation are improbable, but what the CSI calculation is supposed to say is not that the particular details are improbable, but that it is improbable that any adaptation that good (or better) would arise.
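    A toy illustration of that distinction (Python, purely illustrative; the coin-flip numbers have nothing to do with any real biological probability): compare the probability of one exact outcome with the probability of any outcome at least that good.

    ```python
    from math import comb

    n = 50                    # toy example: 50 fair coin flips
    p_exact = 0.5 ** n        # one particular sequence: every sequence is this "miraculous"

    # probability of 45 or more heads, i.e. "any outcome that good or better"
    p_tail = sum(comb(n, k) for k in range(45, n + 1)) * 0.5 ** n
    print(p_exact)            # ~8.9e-16
    print(p_tail)             # ~2.1e-09, the tail probability the specification is about
    ```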

  7. Oh, I agree entirely. I was just amused to note that even if we did think evolution and OoL amounted to independent random draw, there would seem to be nothing prohibitive about it anyway – those 400 bits need to be multiplied by at least 250 and possibly by 10^23! Rather more “probabilistic resources” than Dembski’s “generous” estimate.

  8. petrushka:
    I thought we had decided that there is no unobservable part of the universe. Didn’t we have that discussion?

    Faster than light expansion of space will redshift light, but not “disconnect” one region from another.

    Oh really? Then I missed that. Do you have a link?

  9. JetBlack:
    FTL expansion does disconnect things eventually. If you could send a photon at an object that we can see receding faster than c, then it would never get there unless the universe starts to contract. At some point in the past however we could have sent photons there, but not any more.

    Objects in our universe are not receding faster than light. It is possible that the universe at some point expanded at rates exceeding light speed, but nothing has traveled faster than light in space. We can still see “the beginning,” or at least the first light.

  10. petrushka,

    I thought we had decided that there is no unobservable part of the universe. Didn’t we have that discussion?

    Faster than light expansion of space will redshift light, but not “disconnect” one region from another.

    We did have that discussion, but I think you’ve misinterpreted it.

    It’s true that once an object is observable to us, it remains observable despite the expansion of space, because the light that is already on its way to us gets “stretched out”, but not cut. (And for an additional nuance, consider that at some point the light will be so “stretched out” that the time between photons will become arbitrarily long.)

    However, this doesn’t mean that the entire universe is observable.

    The cosmic microwave background comes not from the moment of the Big Bang itself, but from the time the universe became transparent, which was more than 300,000 years later. We see the CMB from the portions of the universe that were close to us at that time. There are other regions (huge regions!) of the universe whose CMB we will never see, because the light will never reach us. They will be forever unobservable to us.

  11. Joe Felsenstein:

    The whole point of the UPB in the CSI discussion is to establish a low probability, so low that we cannot plausibly explain an adaptation by saying that a random mutation process caused it. (Or, with the more recent CSI definition, that random processes including natural selection caused it)

    That’s right, and no particular value of the UPB has special significance. You just want a value that is low enough so that no one will conclude that “a lucky fluke” is the best explanation for an event that matches the specification. A larger value would do just as well. Dembski was deliberately overshooting.

  12. The cosmic microwave background comes not from the moment of the Big Bang itself, but from the time the universe became transparent, which was more than 300,000 years later.

    For my education, how would this work out if we didn’t have the transparency issue? And does this have any effect on our ability to extrapolate the total mass and particle density, including the part we cannot see?

  13. From this discussion, it sounds like we are back to “every bridge hand is a miracle”, we’re just increasing the number of cards in the deck. But it still seems that no matter how many cards are in the deck, every deal will produce a bridge hand anyway. So what’s the point?

  14. Well, as I say in the OP, the link I gave gives a “conservative” lower bound of 250 times the size of the observable universe and very many orders of magnitude more under other assumptions.

    On that basis, even Flying Spaghetti Monsters become quite probable!

  15. The ID case would be more plausible if they could show us something that could not have evolved.

    Behe seems to be the only one smart enough to realize this, and he has at least tried.

    Axe doesn’t seem clever enough or knowledgeable enough to work with actual evolutionary scenarios.

  16. No, the observed level of adaptation means the bridge hands are improbably good, too good to be a random deal. Natural selection is busy stacking the deck and dealing from the bottom, in effect.

  17. Wouldn’t draw poker be a better analogy?

    I’ve wondered if you couldn’t program a draw poker game in which the program matched its selection abilities against a human.

    Main difference from poker would be no limit to the number of rounds and no “deck.” Each draw would be from a virtual full deck.

  18. (Misplaced reply to petrushka’s comment above on the analogy to draw poker):

    Maybe draw from other people’s relatively-successful hands instead of from a deck?

  19. We can, but not over the entire universe, just within a sphere that is close enough. I could type an explanation, but it is a pain in the ass to do so on my iPad.

  20. So 10^150 presents some sort of barrier to explaining the existence of large proteins whereas 10^120 would not? Nonsense! The whole point of the calculation was to give a cosmological/physical/scientific-y veneer to the same argument made by YECs for decades.

  21. RodW:
    So 10^150 presents some sort of barrier to explaining the existence of large proteins whereas 10^120 would not? Nonsense! The whole point of the calculation was to give a cosmological/physical/scientific-y veneer to the same argument made by YECs for decades.

    Well, the first is less of a barrier – it gives greater “probabilistic resources”. Dembski was being more generous than Lloyd (not that Lloyd was making this argument).

    My point is that Dembski is in fact being massively ungenerous, and should multiply his resources by something between 250 and 10^23. In other words, we haven’t a clue what the upper bound on resources is.

  22. In any case, the option to keep or discard draws, one at a time, boosts the quality of your playing hand just fabulously. If I had only the option of selecting (on each deal) one unseen card from the hand of one opponent and exchanging it with one of my cards, I would be hard to beat even in the short run.

    Despite decades of demonstrations, ID people WAY underestimate the power of selection. I wonder who really cares how unlikely something that does not happen might be. But I don’t wonder why the ID people never factor in selection and watch, enlightened, while the statistically impossible morphs rapidly into the statistically inevitable.
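    A minimal demonstration of that cumulative effect (Python, illustrative only; a keep-the-matching-letters toy in the spirit of Dawkins’ “weasel”, not a model of any real biology):

    ```python
    import random

    TARGET   = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def cumulative_selection(seed=1):
        rng = random.Random(seed)
        current = [rng.choice(ALPHABET) for _ in TARGET]
        steps = 0
        while "".join(current) != TARGET:
            steps += 1
            i = rng.randrange(len(TARGET))      # mutate one random position...
            letter = rng.choice(ALPHABET)
            if letter == TARGET[i]:             # ...and keep it only if it matches
                current[i] = letter
        return steps

    print(cumulative_selection())   # typically a few thousand steps, versus odds of 1 in 27**28 for a single random draw
    ```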

  23. I guess I don’t follow this. Increasing the resources by 10^23 doesn’t change the fact that atoms do not combine into molecules at random, nor that selection alters probabilities to the point where the original number is meaningless.

  24. Quite true. Dembski was using the UPB just to get a probability so small that it could not reasonably be held to be happening just by chance (that is, an adaptation that good or better could not occur by chance).

    Then he was (if my interpretation is correct) intending his Conservation Law to show that the same improbability could not be generated by natural selection. Alas for him, his original Conservation Law is not proven, and does not work.

    I hope no one here is actually saying that, for example, the hummingbird’s ability to fly is very good but just popped up by pure mutation because the universe is so big and mutational monkeys-with-typewriters is a plausible mechanism for an adaptation like that.

    Because we have another mechanism — natural selection.

    That is why it is fairly useless to argue about the size of the universe. As much fun as that may be.

  25. Good question. IANAC, so take all of this with a grain (or ten) of salt.

    If it weren’t for the transparency issue, then we could potentially receive photons produced by the Big Bang itself, so in that sense the entire universe would be observable.

    However, the red shift and dimming effects would be ginormous times 10 to the holy crap because of inflation, so the frequency would be so low that we probably wouldn’t actually be able to see or even detect anything. Plus the photons from the BB itself would be competing with photons released at every later time, and that might obscure any useful signal that would otherwise have been recoverable anyway.

    So it seems to me that we might ironically get less information in a transparent scenario compared to what we get with the CMB at ~380,000 years, which actually gives us a sort of snapshot of that time.

    But again, IANAC.

    Any knowledgeable folks out there who can set us straight on this?

  26. Of course he also focusses on a particular solution. How many possible solutions are there, as in, how many different ways are there of getting life?

  27. I absolutely agree, guys. As I’ve been saying for years, I think, if IDers could really show that non-design was improbable, I would be perfectly happy with a much more lenient alpha cut-off anyway. What they never show is that Darwinian evolution is improbable, only random-independent-draw, which nobody proposes.

    But Dembski himself does instruct P(T|H) to be calculated for H where H takes into account Darwinian mechanisms. It’s just that nobody does that.

    I guess what I’m saying is that even those who don’t, and claim CSI anyway, shouldn’t, because their 500 bit threshold is way too low. Kairosfocus is always quoting Durston and Abel’s “Fits” for proteins as evidence of CSI, but none of their proteins gets anywhere near 500*250 Fits.

  28. Basically the entire CSI concept (and its derivatives) fails (at least) three ways:

    1. P(T|H) is impossible to calculate without calculating the very thing you want to know in the first place.
    2. Even if you did plug in a decent value for H, taking into account existing hypotheses, it’s not valid to reject any null you didn’t model, so you can’t conclude “non-design”, you can only conclude “not any of the nulls I modeled”.
    3. The 500 bit bar is way too low anyway, as it only reflects the observable universe, which is necessarily only that part that T can see. We simply do not know what the bar is, but estimates range from 500*250 to 500*10^23.

  29. One of the odd things about this argument, it seems to me, is that even a computer with a set computational capacity can’t do just anything – it has to work with binary strings that actually do something within the computer. For example, send a random string to a printer and it will probably just sit there and scream at you. Send the appropriate string, however, and it will generate a Mandelbrot set (sending short PostScript strings to printers, thus jamming them up for hours while they generate a Mandelbrot set, is the height of hilarity in someone else’s office, or on a communal printer, but I digress). Evolution is fundamentally a massively parallel generative algorithm with changing boundaries that are set in parallel by the environment, where the environment includes the substrate of the algorithm. Algorithms can construct patterns of extraordinary complexity (Game of Life, Mandelbrot set, etc.), but the algorithm, like the Game of Life and the Mandelbrot set, is itself tiny and requires tiny amounts of processing power at any one time to iterate a loop.

  30. Oh, that too. I agree. I knew there was one I’d forgotten.

    The whole thing is a mess. A zombie argument, as Joe Felsenstein says. (Or was that RBH?)

  31. It’s also massively misunderstood by ID fans. Very few IDers (vjtorley, Winston Ewert) seem to have actually noticed the old eleP(T|H)ant, and that’s leaving aside the whole business of Specification (oh, that’s a 5th), which doesn’t work either. Rather more ID fans do see this, hence the alphabet soup of Functional versions that do not rely on Dembski’s Descriptive Simplicity/Event complexity criterion, which of course doesn’t work at all (he forgets that extremely compressible sequences with extremely high Shannon Entropy are readily generated by mechanical means).

    Barry has just declared victory over my claim that I can infer Design without using CSI, and it turns out he doesn’t know Dembski’s definition of either [event] Complexity (high Shannon entropy) or Specification (high compressibility), and takes me to task for violating language by calling a long simple repetitive sequence “complex”. Well, yeah. It’s stoopid. But if Dembski wants to call something “complex” just because it’s drawn from a large configuration space and “specified” because it’s simple to describe, don’t blame me if a simple repetitive sequence can be “complex” in Dembski-speak, as long as it’s lengthy and has a flat frequency distribution of symbols. And “specified” because its pattern is “simple”.

    I didn’t propose the damn thing as a design-detector.

  32. 3. The 500 bit bar is way too low anyway, as it only reflects the observable universe, which is necessarily only that part that T can see. We simply do not know what the bar is, but estimates range from 500*250 to 500*10^23.

    What’s the source of the 10^23 estimate? It’s been a while since I was a physicist, but I don’t know of any way to determine whether the universe is even finite.

  33. Lizzie,

    It seems to me that although you’ve pretty much demolished their argument concerning P(T|H) etc. etc., the real issue is whether you can ‘compress’ it down to a sound bite. This may be a purely intellectual discussion in the UK, but here in the US it’s only a matter of time before we have another school board hearing these arguments in support of teaching ID in public schools.
    So the way I’d answer this argument in a standup debate, or on a witness stand, is this:
    Yes, Dembski’s way of computing probability is valid, and it shows that the likelihood of evolving complex proteins is very high – almost 1.0.
    If an IDer retorts that it’s not, I’d say: OK, run through the computation considering RM + NS and show it’s unlikely. We all know they can’t do the calculation, but now it’s in the context of failing to answer an aggressive assertion.

    As for the original topic: it seems to me that parts of the universe we can’t see are causally disconnected and can’t have contributed to life on earth.

  34. Steve Schaffner: What’s the source of the 10^23 estimate? It’s been a while since I was a physicist, but I don’t know of any way to determine whether the universe is even finite.

    Wiki references Alan Guth, and in the link in my OP it says:

    Obviously, we can’t directly measure the size of the universe but cosmologists have various models that suggest how big it ought to be. For example, one line of thinking is that if the universe expanded at the speed of light during inflation, then it ought to be 10^23 times bigger than the visible universe.

    Haven’t checked a primary source.

  35. RodW:
    Lizzie,

    It seems to me that although you’ve pretty much demolished their argument concerning P(T|H) etc. etc., the real issue is whether you can ‘compress’ it down to a sound bite. This may be a purely intellectual discussion in the UK, but here in the US it’s only a matter of time before we have another school board hearing these arguments in support of teaching ID in public schools.
    So the way I’d answer this argument in a standup debate, or on a witness stand, is this: Yes, Dembski’s way of computing probability is valid, and it shows that the likelihood of evolving complex proteins is very high – almost 1.0. If an IDer retorts that it’s not, I’d say: OK, run through the computation considering RM + NS and show it’s unlikely. We all know they can’t do the calculation, but now it’s in the context of failing to answer an aggressive assertion.

    Well, I’ll leave that up to you Americans!

    As for the original topic: it seems to me that parts of the universe we can’t see are causally disconnected and can’t have contributed to life on earth.

    Oh, sure. But if the argument is simply about “probabilistic resources” as Dembski claims (number of opportunities for an event to happen that has a low probability on any given opportunity), then necessarily, any low-probability life form that nonetheless eventually turns up in a vast macro universe of opportunities to form is going to be sitting at the dead centre of its own observable universe.

    If OoL has a 1 in a gazillion chance of happening somewhere, and there are several gazillion somewheres, it’s probably going to happen a few times, and in each case the resulting living observer will think it’s at the centre of its own small observable universe, and that possibly its own small observable universe is all there is.
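    A sketch of that “many somewheres” arithmetic (Python, illustrative only; both numbers below are invented for the example):

    ```python
    import math

    p_per_region = 1e-30    # hypothetical chance of an origin of life in one observable-universe-sized region
    n_regions    = 1e33     # hypothetical number of such regions in the whole universe

    # probability of at least one origin somewhere: 1 - (1 - p)^n, done in log space to avoid underflow
    p_at_least_once = 1.0 - math.exp(n_regions * math.log1p(-p_per_region))
    print(p_at_least_once)              # ~1.0: essentially certain to happen somewhere
    print(p_per_region * n_regions)     # ~1000 expected independent origins
    ```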

  36. Lizzie:
    Well, I’ll leave that up to you Americans!

    You should come help! Arguments made in a British accent sound much more erudite to American ears.

  37. Lizzie: Wiki references Alan Guth, and in the link in my OP it says:

    Haven’t checked a primary source.

    The Guth citation is really only supplying a possible lower bound (and in very vague terms at that). I think it’s an open question whether the universe could be or is infinite.

  38. For me, the argument in a nutshell is this: Dembski has successfully shown that, if some aspect of life is really, really unlikely to have come about by naturalistic evolution, then it probably didn’t occur by naturalistic evolution. But Dembski has no idea how to calculate that probability, and neither does anyone else.

  39. For the simple reason that no one has found any feature of living things that meets Darwin’s criterion of being unreachable by small, incremental steps.

  40. It was I, here, back when we were wrestling with the P(T|H) argument and wondering whether Dembski could really be saying that. I think I was referring to the earlier Law of Conservation of Complex Specified Information (LCCSI) argument. That one is thoroughly disproven and its repeated use by pro-ID commenters is thus a zombie argument — it walks the earth as an Undead Argument with no brain.

    The P(T|H) argument has replaced it in Dembski’s writings, but neither he nor his supporters seem to have noticed that it renders CSI an afterthought, and not the way you prove that natural selection cannot account for an adaptation — instead you must first prove that before you can even start to consider whether there is CSI.

  41. … it renders CSI an afterthought, and not the way you prove that natural selection cannot account for an adaptation — instead you must first prove that before you can even start to consider whether there is CSI.

    Gpuccio did accept that reasoning. His argument is a bit more subtle. He argues that the lack of overlapping code among protein domains means they have no common ancestor.

    He would perhaps argue that we look at the nested hierarchy of genomes and deduce common ancestry. Gpuccio argues that protein domain sequences have no nesting and appear to be fully formed from the head of Zeus.

    That’s my reading of his argument.

  42. I was more amused by the fact that Dembski did not appear to have read Lloyd’s paper, but merely stole his UPB from its abstract.

    Had Dembski actually read the paper, he would have seen that Lloyd estimated that it would take 10^120 logical operations on 10^90 bits to simulate the observable universe; and that calculation actually includes the formation of life on at least one planet.

    My hypothesis is that Dembski picked Lloyd’s paper because Lloyd’s paper was in Physical Review Letters, a very prestigious journal. That would supposedly – in the mind of an ID/creationist anyway – lend “credibility” to Dembski’s “CSI” calculation by making Dembski’s UPB appear to have been established by the scientific community and peer reviewed in the most prestigious physics journal.

    It wouldn’t have made any difference if Lloyd had made the estimate for the entire universe – observable as well as its projected unobservable size – Dembski would have used any number that appeared in PRL. But whatever such a number is, it includes the formation of life on at least one planet.

    So Dembski’s “CSI argument” now boils down to life being more improbable than the formation of a universe that includes life. Dembski didn’t notice this.

    As I said, I don’t think Dembski even read the paper; and he wouldn’t have understood it if he did. Dembski already had an ID/creationist UPB that he used earlier; but with the fortuitous arrival of a paper in PRL that gave a number that Dembski thought he could hijack, Dembski went for the “credibility” route. He doesn’t even know what Lloyd was estimating and why.

  43. You think? Its functional parts can be deconstructed. The peptidyl transferase seems the oldest part – probably older than coded protein synthesis.

    I’d envisage an uncoded – or differently-coded – means of linking amino acids by a simple PT ribozyme followed by tighter specificity and an explosion when specificity and acid variety reached a threshold, enabling folded catalysts and structural units to be churned out ad lib. And parallel evolution of the tRNA from a simple ACC- trinucleotide terminus to the lengthier molecule of today.

  44. That’s interesting, Mike. I always wondered about what that number was doing there – Dembski seems to treat it as the number of trials you have of picking an unlikely pattern. I’ve never really queried it, because it seems the least of the problems CSI has. For someone with postgraduate training in statistics he doesn’t seem to know much about statistics. Or not about inferential statistics from data anyway.

  45. There is nothing special about the observable part of the universe in the context of Dembski’s UPB (ignoring for the moment all the problems inherent in this concept). There is no reason for the probability calculation to be limited to this particular region. The calculation, such as it is, should be applied to the entire universe. But the entire universe is almost certainly much larger than the part we can observe, and quite possibly infinite (at the moment an infinite universe seems to be the simplest model that fits observations).

    This consideration sinks Dembski’s “conservative” estimate, or even a corrected one that Lizzie suggests. And that is a good indicator that something is very wrong with the concept of UPB.

  46. There is an interesting argument from cosmologists Garriga and Vilenkin (there is also a monograph by Vilenkin that expands on this, which I haven’t read) that says:

    the number of distinct histories in an O-region is finite, while the number of O-regions in the universe is infinite, and thus there should be an infinite number of other regions with histories identical to ours. Moreover, all histories which are not strictly forbidden by conservation laws occur in a finite fraction of the O-regions.

    Jaume Garriga, Alexander Vilenkin, “Many worlds in one” (2001)

    (An O-region is a region of the universe the size of the observable universe. In an infinite universe there are infinitely many O-regions.)

    If we take this argument seriously, then as long as life is not forbidden by the laws of physics, it ought to have occurred not just once, but infinitely many times throughout the universe.

    The latter point is argued explicitly by Koonin:

    Eugene V. Koonin, “The cosmological model of eternal inflation and the transition from chance to biological evolution in the history of life” (2007)

  47. I think your math is wrong.
    If the UPB (observable universe) is
    10^80 particles x 10^45 events/sec x 10^25 sec = 10^150 particle-events
    (or, per Lloyd, 10^120 logical operations),
    and the total universe is estimated to be 10^23 times larger than the observable universe,
    then the UPB (total universe) is
    10^80 x 10^23 particles x 10^45 events/sec x 10^25 sec = 10^173 particle-events
    (or, per Lloyd, 10^143 logical operations).
    It isn’t 10^(120 x 10^23).

    Of course, the number remains meaningless, thanks to the original eleP(T|H)ant in the room.
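    A quick numerical check of this (Python, illustrative; scaling up the universe multiplies the event count, which only adds log2 of the scale factor to the bit budget):

    ```python
    import math

    observable_events = 1e150    # Dembski's bound for the observable universe
    scale_factor      = 1e23     # upper-end guess for how much bigger the whole universe might be

    total_events = observable_events * scale_factor
    print(math.log10(total_events))      # 173.0, i.e. 10^173 particle-events, as above
    print(math.log2(observable_events))  # ~498 bits for the observable universe
    print(math.log2(total_events))       # ~575 bits, not 500 * 10^23 bits
    ```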
