Siding with Mathgrrl on a point, and offering an alternative to CSI v2.0

[cross-posted from UD Siding with Mathgrrl on a point, and offering an alternative to CSI v2.0; special thanks to Dr. Liddle for her generous invitation to cross-post]

There are two versions of the metric for Bill Dembski’s CSI. One version can be traced to his book No Free Lunch, published in 2002. Let us call that “CSI v1.0”.

Then in 2005 Bill published Specification: The Pattern That Signifies Intelligence, where he includes the identifier “v1.22”, but perhaps it would be better to call the concepts in that paper CSI v2.0 since, like Windows 8, it has some radical differences from its predecessor and will come up with different results. Some end users of the concept of CSI prefer CSI v1.0 over v2.0.

It was very easy to estimate CSI numbers in version 1.0 and then argue later whether the subjective patterns used to deduce CSI were independent and not postdictive. Trying to calculate CSI in v2.0 is cumbersome, and I don’t even try anymore. As a matter of practicality, when discussing the origin of life or biological evolution, ID-sympathetic arguments are framed in terms of improbability, not CSI v2.0. In contrast, calculating CSI v1.0 is a transparent transformation: one simply takes the negative logarithm of the probability.

I = -log2(P)

In that respect, I think MathGrrl (whose real identity he revealed here) has scored a point with respect to questioning the ability to calculate CSI v2.0, especially when it would have been a piece of cake in CSI v1.0.

For example, take 500 coins, and suppose they are all heads. The CSI v1.0 score is 500 bits. The calculation is transparent and easy, and accords with how we calculate improbability. Try doing that with CSI v2.0 and justifying the calculation.
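To make the v1.0 mechanics concrete, here is a minimal Python sketch of the formula above (the function name csi_v1_bits is just my label, not anything from Dembski’s papers):

import math

def csi_v1_bits(p):
    """CSI v1.0 score in bits: I = -log2(P)."""
    return -math.log2(p)

# 500 fair coins, all heads: P = (1/2)^500, so the score is 500 bits.
print(csi_v1_bits(0.5 ** 500))  # -> 500.0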

Similarly, with pre-specifications (specifications already known to humans, like the Champernowne sequences), if we found 500 coins in sequence that matched a Champernowne sequence, we could argue the CSI score is 500 bits as well. But try doing that calculation in CSI v2.0. For more complex situations, one might get different answers depending on who you are talking to, because CSI v2.0 depends on the UPB and things like the number of possible primitive subjective concepts in a person’s mind.

The motivation for CSI v2.0 was to try to account for the possibility of slapping a pattern on after the fact and calling something “designed”. v2.0 was crafted to address the possibility that someone might see a sequence of physical objects (like coins) and argue that the patterns in evidence were designed because he sees some pattern in the coins that is familiar to him but to no one else. The problem is that everyone has different life experiences, and each will project his own subjective view of what constitutes a pattern. v2.0 tried to use some mathematics to create a threshold whereby one could infer, even if the recognized pattern was subjective and unique to the observer of a design, that chance would not be a likely explanation for this coincidence.

For example, if we saw a stream of bits which someone claims was generated by coin flips, but the bit stream corresponds to the Champernowne sequence, some will recognize the stream as designed and others will not. How then, given the subjective perceptions that each observer has, can the problem be resolved? There are methods suggested in v2.0 which in and of themselves would not be inherently objectionable, but then v2.0 tries to quantify how likely the subjective perception is to arise out of chance, and it convolves this calculation with the probability of the objects emerging by chance. Hence we take the probability of an observer concocting a pattern in his head by chance and mix it with the probability that an event or object happens by chance, and after some gyrations out pops a CSI v2.0 score. v1.0 does not involve such heavy calculations regarding the random chance that an observer formulates a pattern in his head, and thus it is more tractable. So why the move from v1.0 to v2.0? The v1.0 approach has limitations which v2.0 does not. However, I recommend that when v1.0 is available, you use v1.0!

The question of postdiction is an important one, but if I may offer an opinion: many designs in biology don’t require the exhaustive rigor attempted in v2.0 to determine whether our design inferences are postdictive (the result of our imagination) or whether the designed artifacts themselves are inherently evidence against a chance hypothesis. This can be done using simpler mathematical arguments.

For example, if we saw 500 fair coins all heads, would we actually have to consider human subjectivity when looking at the pattern and concluding it is designed? No. Why? We can make an alternative mathematical argument: if the coins are all heads, they are sufficiently inconsistent with the binomial distribution for randomly tossed coins that we can reject the chance hypothesis. Since the physics of fair coins rules out physics as the cause of the configuration, we can then infer design. There is no need to delve into the question of subjective human specification to make the design inference in this case. CSI v2.0 is not needed to make the design inference, and CSI v1.0, which says we have 500 bits of CSI, is sufficient.
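A sketch of that statistical argument, assuming ideal fair coins (standard library only):

import math

n, p = 500, 0.5
mean = n * p                        # expected heads: 250
sigma = math.sqrt(n * p * (1 - p))  # standard deviation: ~11.18

# Probability of exactly 500 heads under the binomial distribution:
p_all_heads = math.comb(n, n) * p**n  # = 2^-500, about 3.05e-151

# 500 heads sits roughly 22 standard deviations above expectation:
z = (n - mean) / sigma
print(mean, sigma, p_all_heads, z)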

Where this method (v1.0 plus pure statistics) fails is in recognizing design in a sequence of coin flips that follows something like the Champernowne sequence. Here the question of how likely it is for humans to make the Champernowne sequence special in their minds becomes a serious one, and it is difficult to calculate that probability. I suppose that is what motivated Jason Rosenhouse to argue that the sort of specifications used by ID proponents aren’t useful for biology. But that is not completely true if the specifications used by ID proponents can be formulated without subjectivity (as I did in the example with the coins) 🙂

The downside of the alternative approach (using CSI v1.0 and pure statistics) is that it does not include the use of otherwise legitimate human subjective constructs (like the notion of a motor) in making design arguments. Some, like Michael Shermer or my friend Allen MacNeill, might argue that we are merely projecting our notions of design by saying something looks like a motor or a communication system or a computer, and that the perception of design owes more to our projection than to any inherent design. But the alternative approach I suggest is immune to this objection, even though it is far more limited in scope.

Of course I believe something is designed if it looks like a motor (the flagellum), a telescope (the eye), a microphone (the ear), a speaker (some species of bird can imitate an incredible range of sounds), a sonar system (bat and whale sonar), an electric field sensor (sharks), a magnetic field navigation system (monarch butterflies), etc. The alternative method I suggest will not directly detect design in these objects quite so easily, since pure statistics is hard pressed to describe the improbability of such features in biology, even though it is so apparent these features are designed. CSI v2.0 was an ambitious attempt to cover these cases, but it came with substantial computational challenges in arriving at information estimates. I leave it to others to calculate CSI v2.0 for these cases.

Here is an example of using v1.0 in biology, regarding homochirality. Amino acids can be left- or right-handed. Physics and chemistry dictate that left-handed and right-handed amino acids arise mostly (not always) in equal amounts unless there is a specialized process (like living cells) that creates them. Stanley Miller’s amino acid soup experiments created mixtures of left- and right-handed amino acids, a mixture we would call racemic, as opposed to the homochiral variety (only left-handed) we find in biology.

Worse for the proponents of mindless origins of life, even homochiral amino acids will racemize spontaneously over time (some half-lives are on the order of hundreds of years), and they will deaminate. Further, when Sidney Fox tried to polymerize homochiral amino acids into protoproteins, they racemized due to the extreme heat; many of the products were not chains at all, and the chains he did create had few if any alpha peptide bonds. And in the unlikely event the amino acids do polymerize in a soup, they can undergo hydrolysis. These considerations are consistent with the familiar observation that when something is dead, it tends to remain dead and moves farther away from any chance of resuscitation over time.

I could go on and on, but the point is that we can provisionally say the binomial distribution I used for coins also applies to homochirality in living creatures, and hence we can make the design inference and assert a biopolymer has at least -log2(1/2^N) = N bits of CSI v1.0 based on N stereoisomer residues. One might try to calculate CSI v2.0 for this case, but, being lazy, I will stick to the CSI v1.0 calculation. Easier is sometimes better.
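The same one-line calculation, applied to a hypothetical homochiral polymer (the residue count N = 300 is purely illustrative):

import math

N = 300  # hypothetical number of stereoisomer residues
print(-math.log2(1 / 2**N))  # -> 300.0 bits of CSI v1.0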

So how can the alternative approach (CSI v1.0 and pure statistics) detect design in something like the flagellum or the DNA encoding and decoding system? It cannot do so as comprehensively as CSI v2.0, but v1.0 can argue for design in the components. As I argued qualitatively in the article Coordinated Complexity – the key to refuting postdiction and single target objections, one can formulate observer-independent specifications (such as I did with the 500 coins being all heads) by appeal to pure statistics. I gave the example of how the FBI convicted cheaters who used false shuffles even though no formal specifications for design were asserted. They merely had to use common sense (which can be described mathematically as cross-correlation or autocorrelation) to detect the cheating.

Here is what I wrote:

The opponents of ID argue something along these lines: “take a deck of cards, randomly shuffle it, and the probability of any given sequence occurring is 1 out of 52 factorial (about 8×10^67). Improbable things happen all the time; it doesn’t imply intelligent design.”

In fact, I found one such Darwinist screed here:

Creationists and “Intelligent Design” theorists claim that the odds of life having evolved as it has on earth is so great that it could not possibly be random. Yes, the odds are astronomical, but only if you were trying to PREDICT IN ADVANCE how life would evolve.

http://answers.yahoo.com/question/index?qid=20071207060800AAqO3j2

Ah, but what if cards dealt from one random shuffle are repeated by another shuffle? Would you suspect Intelligent Design? A case involving this is reported on the FBI website: House of Cards

In this case, a team of cheaters bribed a casino dealer to deal cards and then reshuffle them in the same order in which they were previously dealt (no easy shuffling feat!). They would arrive at the casino, play the cards the dealer dealt, and secretly record the sequence of cards dealt out. When the dealer re-shuffled the cards and dealt them out in the exact same sequence as the previous shuffle, the team of cheaters knew what cards they would be dealt, giving them a substantial advantage. Not an easy scam to pull off, but they got away with it for a long time.

The evidence of cheating was confirmed by videotape surveillance because the first random shuffle provided a specification to detect intelligent design of the next shuffle. The next shuffle was intelligently designed to preserve the order of the prior shuffle.
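The “common sense” in that case can be sketched as a simple correlation count. Assuming each deal is recorded as a sequence of card labels, a genuinely random reshuffle should match the previous deal in about one position on average (the fixed points of a random permutation are approximately Poisson-distributed with mean 1), so 52 matches out of 52 is damning. This is only my illustration of the idea, not the FBI’s actual method:

import math

def positional_matches(deal1, deal2):
    """Cross-correlation at lag 0: count positions where two deals agree."""
    return sum(a == b for a, b in zip(deal1, deal2))

def approx_prob_at_least(k, mean=1.0):
    """Poisson tail: rough probability of k or more chance matches."""
    return sum(math.exp(-mean) * mean**j / math.factorial(j) for j in range(k, k + 30))

deck = list(range(52))
print(positional_matches(deck, deck))  # two identical deals -> 52
print(approx_prob_at_least(52))        # astronomically small, ~5e-69
print(math.factorial(52))              # the 52! ~ 8x10^67 from the quote above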

Biology is rich with self-specifying systems like the auto-correlatable sequence of cards in the example above. The simplest example is life’s ability to make copies of itself through a process akin to quine computing. Physics and chemistry make quine systems possible, but simultaneously improbable. Computers, as a matter of principle, cannot exist if they have no degrees of freedom permitting high improbability in some of their constituent systems (like computer memory banks).
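For readers unfamiliar with the term: a quine is a program whose output is its own source code. Here is the classic two-line Python example (offered purely to illustrate the computational idea, not as anything biological):

s = 's = %r\nprint(s %% s)'
print(s % s)

Running it prints exactly those two lines: the description being copied includes the copier itself, which is the sense in which self-replication is computational.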

We can see that the correlation between a parent organism and its offspring is not the result of chance, and thus we can reject the chance hypothesis for that correlation. One might argue that though the offspring (the copy) is not the product of chance, the process of copying is the work of a mindless copy machine. True, but we can then estimate the probability of randomly implementing the particular quine-computing algorithms that make it possible for life to act like a computerized copy machine. The act of a system making copies is not in and of itself spectacular (salt crystals do that), but the act of making improbable copies via an improbable copying machine? That is what is spectacular.

I further pointed out that biology is rich with systems that can be likened to login/password or lock-and-key systems. That is, the architecture of the system is such that the components are constrained to obey a certain pattern or else the system will fail. In that sense, the targets for individual components can be shown to be specified without having to calculate the chance that the observer is randomly projecting subjective patterns onto the presumably designed object.

[image: lock and key]

That is to say, even though there are infinite ways to make lock-and-key combinations, that does not imply that the emergence of a lock-and-key system is probable! Unfortunately, Darwinists will implicitly say, “there are an infinite number of ways to make life, therefore we can’t use probability arguments”, but they fail to see the errors in their reasoning, as demonstrated with the lock-and-key analogy.

This simplified methodology using v1.0, though not capable of saying “the flagellum is a motor and therefore is designed”, is capable of asserting “individual components (like the flagellum assembly instructions) are improbable, hence the flagellum is designed.”

But I will admit, the step of invoking the login/password or lock-and-key metaphor is a step outside of pure statistics, and making the argument for design in the login/password and lock-and-key cases more rigorous is a project for future study.

Acknowledgments:
Mathgrrl, though we’re opponents in this debate, he strikes me a decent guy

NOTES:
The fact that life makes copies motivated Nobel Laureate Eugene Wigner to hypothesize a biotonic law in physics. That was ultimately refuted. Life does not copy via a biotonic law but through computation (and the emergence of computation is not attributable to physical law in principle, just as software cannot be explained by hardware alone).

89 thoughts on “Siding with Mathgrrl on a point, and offering an alternative to CSI v2.0”

  1. So what have we achieved?

    Every one of these “arguments” over ID/creationist assertions ends up in mud wrestling over the meanings of the meanings of the meanings of meanings. This shtick on the part of ID/creationists hasn’t changed in the slightest in something like 50 years.

    Mud wrestling over coin flips ignores the fact that coin flips have nothing to do with atoms and molecules. There is nothing of importance being “argued” here. There are no arguments against science here. The “discussion” is bogged down in trivia; as it always is.

    Sal still cannot tell us what coin flips have to do with atoms and molecules.

    We still have the glaring fact that not one ID/creationist can even start a basic high school level physics/chemistry calculation that scales up the charge-to-mass ratios of protons and electrons to kilogram-sized masses separated by distances on the order of meters, and then calculates the energies of interaction in units of joules and in units of megatons of TNT.

    If one then adds the rules of quantum mechanics, how does any ID/creationist justify using a random, ideal gas of coins to calculate the probabilities of molecular assemblies?

    By diverting the “argument” onto coin flips, Sal avoids justifying these implied assertions that ideal gases of inert objects are stand-ins for the behaviors of atoms and molecules.

    This is why I get bored with ID/creationist “arguments.”

  2. I see that Arrington doesn’t understand the difference between the probability of a specified permutation and a combination either.

    But still the question about why coin flips have anything to do with atoms and molecules is avoided. CSI is “saved.”

    I suppose it would be funny if I hadn’t seen it so many times in the history of the ID/creationist movement.

  3. It’s not surprising that Arrington doesn’t get it. He doesn’t venture out of the cave much, and as far as I know, he doesn’t have any scientific or mathematical training.

    Sal has much less of an excuse.

  4. Sal,

    If we flip a coin 500 times and get all heads, then yes, of course it requires an explanation — but not because 500 heads are less probable than any other specific sequence, and also not because there are many, many more ways of getting roughly 250 heads than there are of getting 500 heads.

    The reason that getting 500 heads is surprising is that a 500-head sequence is one of a very small number of sequences that are significant to us in advance. The number of possible sequences is huge, and the number of significant sequences is tiny, so the probability of hitting a significant sequence is extremely low.

    I explained this above in my Social Security number analogy:

    Mike,

    How can anyone claim to assert that an event is improbable when they have seen only one instance of it?

    In Sal’s defense, even one-off events can be identified as improbable under certain hypotheses. If I roll a fair ten-sided die nine times and come up with my Social Security number, then a very improbable event has occurred, even if I don’t repeat the experiment.

    My SSN is no more improbable than any other 9-digit number, but it is one of a very small set of 9-digit numbers that are significant to me. The odds of sitting down and rolling a personally significant number are therefore low.

    So yes, a sequence of 500 heads requires an explanation. It’s just that design isn’t the only alternative to chance.

    Likewise for homochirality. As I said in an earlier comment:

    If I were arguing Sal’s case for him, I would put it this way:

    Given that we observe a sequence of 500 heads, which explanation is more likely to be true?

    a) the coins are fair, the flips were random, and we just happened to get 500 heads in a row; or

    b) other factors are biasing (and perhaps determining) the outcome.

    The obvious answer is (b).

    In the case of homochirality, Sal’s mistake is to leap from (b) directly to a conclusion of design, which is silly.

    In other words, he sees the space of possibilities as {homochiral by chance, homochiral by design}. He rules out ‘homochiral by chance’ as being too improbable and concludes ‘homochiral by design’.

    Such a leap would be justified only if he already knew that homochirality couldn’t be explained by any non-chance, non-design mechanism (such as Darwinian evolution). But that, of course, is precisely what he is trying to demonstrate.

    He has assumed his conclusion.
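    The arithmetic behind the SSN analogy is a few lines (the count k of sequences “significant in advance” is hypothetical, for illustration):

    p_ssn = (1 / 10) ** 9   # one specific 9-digit number from a fair 10-sided die: 1e-9

    k = 1000                       # hypothetical number of significant 500-flip sequences
    p_significant = k / 2 ** 500   # chance a random sequence hits any of them
    print(p_ssn, p_significant)    # both tiny; the second astronomically so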

  5. keiths:

    It’s not surprising that Arrington doesn’t get it. He doesn’t venture out of the cave much, and as far as I know, he doesn’t have any scientific or mathematical training.

    Sal has much less of an excuse.

    Sal is trying subtly to gloss over his mistakes by talking about the expectation of an event.

    One cannot calculate the expectation value of an event without knowing the probability distribution for a set of events.

    What is the probability distribution for the origins of life and evolution?

    How do coin flips represent the probabilities of the formations of complex molecules?

  6. The problem with the probability calculations that the people over at UD are having comes about because they don’t understand two basic concepts taught in high school math classes: permutations and combinations.

    One can’t help feeling sorry for this poor fellow; but this is precisely the kind of confusion ID/creationism would create in any high school classroom where students are trying to learn real math and science concepts.

    Endless wrangling; and then the school year ends with nothing accomplished.

  7. The missing link between ‘pure materialistic chance’ and ‘intelligent organisation’ is filtration. There are many different kinds of filter. Natural Selection is one. So, for that matter, is Drift. So are magnetism, electrostatic attraction, hydrophobicity, stereospecificity, etc, as well as the more familiar one of size … you find a pattern that you would find surprising if you just shook everything up in a lottery bag … perhaps a filter has been at work? Y’know … physics***?

    *** (which of course encompasses chemistry and biology)

  8. Sal: I don’t think anyone’s quibbling with the math, just its relevance. If one is obliged to reject the ‘chance hypothesis’, what is one being asked to accept in its stead? ‘Chance hypotheses’ don’t only cover the equiprobable – mathematical randomness incorporates bias. Since I don’t think there was ever a real chemical or biological situation in which a process indulged anything remotely resembling a coin-flip between an amino acid and its enantiomer, on a site by site basis, I don’t see why the maths of that situation matter all that much.

    As I have said both above and below, there are natural processes that bias outcomes.

  9. Because chemistry and biology study the patterns that emerge out of the interactions of billions upon billions of underlying events, physics benefits as well. One doesn’t see such patterns in simplified interactions designed to single out the specifics of the most fundamental interactions.

    It is only at the level of condensed matter and above that the emergent properties and patterns become not only evident, but provide deeper insight into the implications of those more fundamental interactions. We wouldn’t know about the properties of complex condensed matter from just looking at interactions between two particles at a time. We wouldn’t even know why matter condenses above the gravitational level.

    But knowing those larger patterns allows us to piece together the chains of events and interactions that connect these realms; and our understanding becomes far more detailed and far richer as a result.

    It may be a bit unfair to say that physics encompasses chemistry and biology; physics owes much to chemistry and biology. That lesson is not taught well enough in my opinion.

  10. It may be a bit unfair to say that physics encompasses chemistry and biology; physics owes much to chemistry and biology. That lesson is not taught well enough in my opinion.

    As disciplines, I guess! – I meant at the fundamental level. It is, indeed, a two-way street. I think many in the ‘life sciences’ would benefit from understanding entropy and energy at the level of chemical interaction or protein folding, for example.

  11. What Arrington is missing in his op at UD is why he would suspect 500 heads to come from a rigged coin. He is forgetting what he knows about rigging a coin. If you are going to rig a coin, you are going to rig it to come up with 500 heads or 500 tails. You are not going to rig it so that it comes up hthhthhhthtttthht. Rigging a coin for 500 tails or 500 heads is a simple matter of weight distribution; making it come up hthhthhhthtttthht would be an extremely more complicated, if not impossible, matter.

  12. Hi bigevil,

    Keep in mind that you can rig the outcome without rigging the coin itself.

  13. Neil Rickert:
    Barry is now saying, in effect, that intuition trumps rigorous mathematics.

    For him, and many IDists, it does.

    What ID critics commonly assail as an “argument from incredulity” is better understood as an “argument from intuition”.

    Reading the UD thread… oy!

  14. When one sees this kind of stuff, one of the first questions that comes to mind is whether or not the person is messing with other people’s minds just to be contrary. And the histrionics over, and the demonizing of, “Darwinists” and “atheists” go to such extremes that it appears to be a bit over the top.

    The unfortunate answer to that question is the nearly 50 year history of ongoing political activity by ID/creationists to get this stuff into public education. We have court cases and legislation on the books that document all of it. We are not imagining it; UD doesn’t appear to be a parody.

    The Right Wing political anger and the elections of some of these characters to state legislatures and to the US Congress simply inflame the believers of stuff like this.

    It’s not about science; it is political to the core. They don’t understand science or math; and they don’t want to.

    “Oy!” is right on.

  15. Sal asserts in the OP,

    I could go on and on, but the point is that we can provisionally say the binomial distribution I used for coins also applies to homochirality in living creatures, and hence we can make the design inference and assert a biopolymer has at least -log2(1/2^N) = N bits of CSI v1.0 based on N stereoisomer residues.

    All those words over here and at UD, and we still don’t have an ID/creationist who can scale up the charge-to-mass ratio of protons and electrons to kilogram-sized masses separated by distances on the order of a meter and then calculate the energies of interaction in units of joules and in units of megatons of TNT.

    We still haven’t seen an ID/creationist who can then throw in the rules of quantum mechanics for these interactions and then proceed to justify the ID/creationist tactic of using coin flips, tornados ripping through junk yards, and any other ideal gas of inert objects as stand-ins for atoms and molecules.

    Sal just signs off with,

    Thanks to all for reading and commenting on my thread.

    Same Shtick, Different Day.

  16. The very concept of ‘common sense’ is political. It is the conservative’s appeal to the ignorance of the masses.

  17. Sal says,

    If you stand by eigenstate’s comments, then on what grounds will you ever reject the chance hypothesis short of you seeing someone rigging an apparatus, etc.? Answer: NEVER, because in eigenstate’s world, what matters to him is every sequence is just as probable as the next, whereas in operation practice, deviations from expectation value count for something.

    We still don’t know what any of this has to do with atoms and molecules.

    Furthermore, if expectation value is so important to Sal, then why doesn’t he give us the expectation value for the origins of life and evolution? How does Sal get an expectation value from a one-off event?

    Does Sal really believe that the expectation value for coin flips is anything like the expectation value for the formation of molecules?

    He uses the words without understanding what he is saying.

  18. What Sal seems not to understand, with his talk of expectation values and poker analogies in the comments at UD, is that you could play a form of straight poker where the best hand is something like “3, 4, 8 of spades, J of diamonds, A of hearts”. The game is mathematically no different than if a royal flush is the best hand.

  19. The irony in Sal’s misunderstanding of just that fact lies in the ID/creationist penchant for authority; not recognizing that someone back in history set some arbitrary rules for poker that they don’t question.

    “3 4 8 spades, J diamonds, A of hearts” bucks authority; everybody “knows” it can’t possibly beat a royal flush.

    All heads in 500 flips therefore proves that “a biopolymer has at least -log2(1/2^N) = N bits of CSI v1.0 based on N stereoisomer residues.”

    ID/creationist “reasoning” doesn’t track; it is meant to bamboozle. As I said above, I get really bored with the inanity of their “arguments.” But they still suck people in.

  20. To clarify, you would actually need 4 best hands, because a royal flush can be achieved 4 ways (one in each suit). Not that it makes any difference to the substance of the point.

  21. Yeah; and if I could make up the rules, I would make all those hands different and as hard to remember as possible. Why have four of the same hand in different suits?

    Then I would make a completely arbitrary hierarchy of winning hands.

    Think of the fun with all cards wild. Just don’t bring a loaded sidearm to the game. 🙂

  22. And again, we afford IDers a courtesy they don’t deserve and that they don’t extend to us.

  23. I don’t believe that Cordova ever reads for comprehension.

    He just sees “hot” words and imagines what must be happening.

    We are told by Sal that he studied some physics; yet he consciously avoids confronting a high school level calculation that scales up the charge-to-mass ratio of protons and electrons to kilogram-sized masses separated by distances on the order of a meter and then calculating the energies of interaction in units of joules and megatons of TNT.

    He should then fold in the rules of quantum mechanics and proceed to justify why ID/creationists use “ideal gases” of inert things, like coins, as stand-ins for the behaviors of atoms and molecules.

    He simply can’t do it; a high school level calculation no less.

    He started this thread and then abandoned it without reading or responding to much of anything except to mud wrestle over the meanings of words. That is straight out of Duane Gish’s playbook; never address a real issue, just dance and sneer.

    I think he dumped this OP here just to mark territory.

  24. Yes – I really would like to see an answer on the physical system he has in mind that stitches alpha-amino acids together (which must be discriminatory on two of the four sites around an alpha carbon), and is discriminatory on the shape and charge of the side-chain … and yet is somehow not discriminatory on where (in relation to those ‘fixed points’) the third thing it discriminates lies.

    It’s as if the side chain is regarded as just a free-floating property, not something physically glued to the carbon atom.

  25. Mark Frank:
    In any case both definitions suffer from the eleP(T|H)ant in the room. Why is the only acceptable chance hypothesis a fair coin (which of course does not exist in reality)? Maybe it is a coin with 2 heads. Maybe it is being thrown by a mechanism that is so accurate it is almost bound to keep on landing on the same side.

    If it is a coin with 2 heads, it is not a “fair coin” (which I presume means a coin with a roughly equal probability of landing heads or tails given a purely random flip).

    If it is produced by an extremely accurate mechanism, that would be Design, since the mechanism would be designed.

  26. You have a good point, uoflcard: It is actually very rare to find a natural process that generates equiprobable independent outcomes.

    It’s one of the reasons it’s a rather odd null hypothesis!

  27. Lizzie:
    It is actually very rare to find a natural process that generates equiprobable independent outcomes.

    Radioactive half-life being a notable exception.

  28. petrushka: Radioactive half-life being a notable exception.

    Yes, but even then, there is a slight autocorrelation, is there not? An interval between decay events is slightly more likely to be followed by a longer one than a shorter – or there would be no decay.

  29. Segregation/crossover at meiosis? One of the reasons sex is so stable, contrary to the expectations of the ‘selfishists’ (not just Dawkins, but Maynard Smith, Williams, Hamilton et al). It seems like there should be a genetic cost of meiosis, but it cannot be cashed.

  30. An interval between decay events is 100% likely to be followed by a decay event. 🙂

    Actually, the interval that leads up to a decay event is slightly more likely to be followed by one that is a longer interval. That is because before, the chunk of matter had N as-yet-undecayed radioactive atoms, but now it has only N-1 of them left to decay.

    Of course since N is a very big number, of the order of 10^20 in real experiments, this effect is hardly noticeable. If you had a chunk of matter with only two as-yet-undecayed radioactive atoms, once one of them decays, the chunk has only one as-yet-undecayed atom. A simple consideration shows that the chance that the time to the next decay is longer than the time to the recent decay is then 2/3. With N as-yet-undecayed atoms the comparable probability is N/(2N-1). (To see this consider one chunk with N as-yet-undecayed atoms, and another with N-1 of them, and ask in which of these chunks the next decay occurs).
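    The N/(2N-1) figure is easy to check by simulation, assuming independent exponential decay times (a minimal sketch; the rate constant cancels, so it is set to 1):

    import random

    def p_next_interval_longer(N, trials=100_000):
        wins = 0
        for _ in range(trials):
            t1 = random.expovariate(N)      # time to first decay among N atoms
            t2 = random.expovariate(N - 1)  # next interval, with N-1 atoms left
            wins += t2 > t1
        return wins / trials

    print(p_next_interval_longer(2))   # ~0.667, i.e. 2/3
    print(p_next_interval_longer(10))  # ~0.526, i.e. 10/19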

  31. An interval between decay events is 100% likely to be followed by a decay event. 🙂

    Unless it’s an infinite interval. 🙂

    Actually, the interval that leads up to a decay event is slightly more likely to be followed by one that is a longer interval.

    That’s what Lizzie said:

    An interval between decay events is slightly more likely to be followed by a longer one than a shorter…

  32. Joe’s explanation can be put in mathematical terms.

    The rate of decay – number of particles decaying per unit time – is proportional to the number of particles.

    dN/dt = – r N

    where r is some constant characteristic of the system.

    As Joe points out, the more particles there are, the more likely we would see one decay in a given time interval.

    There is an important caveat however; the decay of an individual atom is independent of the decay of any other atom.

    An important exception to this is stimulated emission in which the products of the decay of one atom cause another atom to decay. Two familiar cases of this are the processes in lasers and in nuclear chain reactions. This can lead quickly to an increase in the decay rate until all the decays are completed; and then it stops.

    The “autocorrelation” that Elizabeth is referring to occurs if the products of the decay of one atom cause the decay of another but it doesn’t lead to a chain reaction. All that does is change the rate of decay by making more atoms per unit time decay. The decay constant is now different, but the above equation is still the same.

    In fact, the decay “constant” changes with time because the probability of a decay product encountering another undecayed atom also decreases as more undecayed atoms are removed from the collection.
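    A quick numerical illustration of the rate equation above: simulate each atom decaying independently with probability r*dt per time step and compare with the analytic solution N(t) = N0*exp(-r*t) (the parameter values are arbitrary):

    import math
    import random

    N0, r, dt, steps = 10_000, 0.1, 0.01, 1000

    N = N0
    for _ in range(steps):
        N -= sum(random.random() < r * dt for _ in range(N))

    t = steps * dt
    print(N, round(N0 * math.exp(-r * t)))  # stochastic count vs ~3679, agreeing within noise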
