There has been tremendous confusion here and at Uncommon Descent about what I’ll call the ‘all-heads paradox’.

If you flip an apparently fair coin 500 times and get all heads, you immediately become suspicious. On the other hand, if you flip an apparently fair coin 500 times and get a random-looking sequence, you don’t become suspicious. The probability of getting all heads is identical to the probability of getting that random-looking sequence, so why are you suspicious in one case but not the other?
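The equality of the two probabilities is easy to verify numerically. Here's a quick sanity check (my own illustration, not from the original discussion) using exact rational arithmetic:

```python
from fractions import Fraction

# Probability of any *specific* 500-flip sequence from a fair coin.
# "All heads" and a random-looking sequence are each one specific
# sequence, so each has exactly the same probability: (1/2)^500.
p_one_sequence = Fraction(1, 2) ** 500

p_all_heads = p_one_sequence        # H H H ... H
p_random_looking = p_one_sequence   # any other fixed sequence, e.g. H T T H ...

print(p_all_heads == p_random_looking)  # True: identical probabilities
print(float(p_one_sequence))            # ~3.05e-151
```

So the paradox really can't be resolved by appealing to the probabilities of the sequences themselves; they are identical.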

In this post I explain how I resolve the paradox. Lizzie makes a similar argument in her post Getting from Fisher to Bayes, but there are some differences, so keep reading.

Early in the debate, I introduced a thought experiment involving Social Security numbers (link, link). Imagine you take an apparently fair ten-sided die labeled with the digits 0-9. You roll it nine times and record the number you get, digit by digit. It matches your Social Security number. Are you surprised and suspicious? You bet.
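The numbers behind the thought experiment are worth making explicit (my own back-of-the-envelope sketch; the count of "personally significant" numbers is an assumed figure for illustration):

```python
from fractions import Fraction

# Probability that nine fair d10 rolls match one particular
# 9-digit number (e.g. your SSN): 1 in 10^9.
p_match_one = Fraction(1, 10**9)

# Suppose (purely illustrative assumption) each of us has about
# 10 personally significant 9-digit numbers. Matching *any* of
# them by chance is still vanishingly unlikely:
n_significant = 10
p_match_any = n_significant * p_match_one

print(float(p_match_one))  # 1e-09
print(float(p_match_any))  # 1e-08
```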

As I put it then:

My point is that for anyone to roll their SSN (or another personally significant 9-digit number) is highly unlikely, because the number of personally significant 9-digit numbers is low for each of us while the space of possible 9-digit numbers is large.

If someone sits down and rolls their SSN, then either:

1. The die was fair, the rolls were random, and they just got lucky. It’s pure coincidence.

2. Something else is going on.

Just as my own SSN is significant to me, 500 heads in a row are significant to almost everybody. So when we express surprise at rolling our SSN or getting 500 heads in a row, what we are really saying is "It’s extremely unlikely that this random number (or head/tail sequence) would just happen to match one of the few that are personally significant to me, instead of being one of the enormous number that have no special significance. Hmmm, maybe there’s something else going on here."


That helps, but it doesn’t seem to completely resolve the paradox. It still seems arbitrary and subjective to divide the 9-digit numbers into two categories, “significant to me” and “not significant to me”. Why am I suspicious when my own SSN comes up, but not when the SSN of Delbert Stevens of Osceola, Arkansas comes up? His SSN is just as unlikely as mine. Why doesn’t it make me just as suspicious?

In fact, there are millions of different ways to carve up the 9-digit numbers into two sets, one huge and one tiny. Should we always be surprised when we get a number that belongs to a tiny set? No, because every number belongs to some tiny set, properly defined. So when I’m surprised to get my own SSN, it can’t be merely because my SSN belongs to a tiny set. Every number does.

The answer, I think, is this: when we roll Delbert’s SSN, we don’t actually conclude that the die was fair and that the rolls were random. For all we know, we could roll the die again and get Delbert’s number a second time. The outcome might be rigged to always give Delbert’s number.

What we really conclude when we roll Delbert’s number is that we have no way of determining whether the outcome was rigged. In a one-off experiment, there is no way for us to tell the difference between getting Delbert’s (or any other random person’s) SSN by chance versus by design or some other causal mechanism. On the other hand, rolling our own SSN does give us a reason to be suspicious, precisely because our SSN already belongs to the tiny set of “numbers that are meaningful to me”.

In other words, when we roll our own SSN, we rightly think “seems non-random”. When we roll someone else’s SSN, we think (or should think) “no particular reason to think this was non-random, though it might be.”

1. “That’s simply wrong. There is no reason, provisional or otherwise, to assume that the binomial distribution applies to homochirality in biology.”

You’re right, because I was being too generous. Once something becomes homochiral, it begins to racemize without maintenance. The binomial distribution was probably too generous because it deals only with the static case.

You can see some of the half-life racemization calculations here:

http://www.jbc.org/content/suppl/2006/03/16/M600296200.DC1/SUPPLEMENTAL-DATA.pdf

and here:

http://www.annualreviews.org/doi/abs/10.1146/annurev.ea.13.050185.001325?journalCode=earth

Living things are homochiral because chance processes are not at work, but rather molecular machines. Whether you accept ID or not, the chance hypothesis for homochirality ought to be rejected.

So once something is homochiral it has a finite timespan before it racemizes to uselessness as a precursor to a protein.

By the way, those racemization half-lives presume there is an expectation value (usually around 50% for the L-stereoisomers of amino acids). Do you want to argue against expectation values too?

2. keiths:
Lizzie:

And Sal, just so you don’t miss the message— Lizzie’s criticism applies to your homochirality argument.

This statement from your OP illustrates the problem:

That’s simply wrong. There is no reason, provisional or otherwise, to assume that the binomial distribution applies to homochirality in biology.

Yep, and that is exactly the simple-wrongness which should have been the focus of this whole discussion [which began, as you say, a week ago on Cordova’s Siding with mathgrrl post.] That wrongness is such a perfect example of IDist argument: “It looks improbable, therefore it had to have been designed to overcome the improbability. But wait, wait, I’ve actually proved it was improbable, with coin flips, and math and all, so I really really have proved it had to have been designed. It doesn’t just look improbable, it is improbable, the coin flips prove it!!! Why are all you evolutionists so stubborn and arguing with me?”

Interesting as all this discussion of combinatorics and probabilities is …

It’s a funny result of IDists’ strategy to route what should be discussions of actual chemistry and biology into bafflegab about “specification” and “surprise factor” – while just to show they haven’t abandoned pretenses to science, they’ll throw in some science-y links, math-y formulas and bandy about “binomial distribution” etc.

Meanwhile, to all appearances, they studiously avoid every post in discussions which do explain the real science, and only reappear from time to time to lob in another fat howler which basically repeats “See! Improbable! Coin flips prove it! Tornados in junkyards! Design wins!”

Well, it makes for some fun reading. Can’t complain about that 😀

3. Living things are homochiral because chance processes are not at work, but rather molecular machines. Stereochemistry is at work.

FIFY
[“molecular machines” is a biased term which contributes to the problem you already have of assuming your conclusion: Machines! prove! it was Designed! by a Cosmic Engineer! Stop using biased terms and you’ll have a fighting chance of un-biasing your own thoughts.]

Whether you accept ID or not, the chance hypothesis for homochirality ought to be rejected.

Well, well, you don’t say. Since literally no one (except you) has ever proposed a “chance” hypothesis for the homochiral assemblage for any lengthy biomolecule, I have no idea why you thought you should kick up this little tempest to begin with. Great, now you’ve rejected your own made-up chance hypothesis. Now what?

4. Sal:

Whether you accept ID or not, the chance hypothesis for homochirality ought to be rejected.

On what grounds?

Suppose you filled a jar with E. coli, a 50/50 mix of two strains which handily rendered the cells red or blue, but had no effect on fitness and could not laterally transfer. Under the microscope, samples from this purple mix would be expected to contain a mixture of reds and blues whose numbers were generally close to the peak of the distribution. You would be justifiably suspicious if you dipped in and found only reds, or only blues. Now just allow these cells to do what comes naturally. Siphon off a few every day to make room, blindly. Now, inexorably, and completely by chance, the jar will shift from purple to either red or blue. A coin flip, 50/50, as to which way it will go. If it did not do so, in a time that could be mathematically predicted from the population size … then you would have grounds for suspicion.

This is simply the mathematics of sampling at work. Samples are more often on the shoulder than exactly at the peak. When you remove a sample, the remainder is also a sample of the original population, each impoverished by the enrichment of the other. This is a distorting process which cannot remain at the peak. Since the representation of the next sample is determined by the current frequencies, and the probability of ultimate ‘fixation’ is directly proportional to current frequency, a growing distortion is introduced, right through to, ultimately, complete loss of one and purification of the other. Inexorably, and repeatably.
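The jar thought-experiment can be sketched as a simple resampling loop. This is my own minimal Wright-Fisher-style model, not the commenter’s code, but it captures the claim: with no fitness difference at all, one colour always fixes.

```python
import random

def drift_to_fixation(pop_size=100, p0=0.5, rng=None):
    """Each 'day' the jar is refilled by sampling from the current
    red/blue frequencies. Pure chance, no fitness difference.
    Returns (generations elapsed, colour that fixed)."""
    rng = rng or random.Random()
    reds = int(pop_size * p0)
    gen = 0
    while 0 < reds < pop_size:
        p = reds / pop_size
        # next generation: pop_size independent draws at current frequency
        reds = sum(rng.random() < p for _ in range(pop_size))
        gen += 1
    return gen, ("red" if reds == pop_size else "blue")

rng = random.Random(42)
for trial in range(5):
    gens, winner = drift_to_fixation(pop_size=100, rng=rng)
    print(f"trial {trial}: fixed on {winner} after {gens} generations")
```

Every run ends at one extreme or the other; which colour wins is the 50/50 coin flip described above.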

I’m not saying that this is the mechanism at work in the settlement on L acids, but it is one ‘chance hypothesis’ that I suspect you haven’t considered. There are several others, and they all ultimately hinge upon the biological fact of descent.

You highlight a legitimate issue – how could mixed amino acids be repeatably polymerised prebiotically? – but chirality has next to nothing to do with it. If you have side chain specificity, you discard enantiomers along with everything else. If a process can distinguish among side chains, it can easily rule out D acids (whose ‘side chain’ at that position is always -H, whatever else it holds). Legitimate conclusions based on shaky reasoning are still open to criticism!

My own answer to this is that they weren’t (synthesised prebiotically). They were synthesised by other macromolecules, which are necessarily stereospecific by their very nature.

On a side-note, you highlight this paper: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2203351/ in support of your contention that early acids could not be heterochiral. They did not necessarily have to be, but either way this paper does not support your case. A few lines into the abstract, it says this: “However, many examples exist in nature where novel polypeptide topologies use both l- and d-amino acids.” The assertion is unfortunately unreferenced, but if heterochiral acids actually exist in nature, where does that leave your case? They may not be as ‘good’ as homochiral acids – but then, what did they have to compete with? Natural Selection simply becomes another ‘chance hypothesis’ to be added to the list – along with iterative sampling (Drift) or gradual expansion of the library from a single starting acid inheriting contingency from the orientation of that precursor – that we should not be in too much of a rush to discard.

5. I guess it does depend a little on what he actually means by ‘chance’ … everybody knows what ‘chance’, ‘random’ etc. mean … and they all disagree like mad!

6. It always fascinates me how quickly the probability distribution in evolutionary simulations moves from an equiprobable distribution (if that’s what you start with) to a Gaussian, and how often some items simply drop out completely, even with nothing going on other than drift.

It’s a lovely example of the drunkard’s walk. The drunkard is far more likely to end up a long way from the lamp than near it.
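That claim is easy to check with a short simulation (my own illustration): after n symmetric steps, the expected distance from the lamp grows like sqrt(2n/π), so ending up right back near the start is the rare outcome.

```python
import random
import statistics

def final_distance(steps, rng):
    # symmetric random walk on a line, starting at the lamp (position 0)
    pos = 0
    for _ in range(steps):
        pos += rng.choice((-1, 1))
    return abs(pos)

rng = random.Random(0)
distances = [final_distance(10_000, rng) for _ in range(200)]

# Theory: E[|position|] ~ sqrt(2n/pi) ~ 80 for n = 10,000
print("mean distance from the lamp:", statistics.mean(distances))
print("walks ending within 5 steps of the lamp:",
      sum(d <= 5 for d in distances))
```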

7. “You highlight a legitimate issue – how could mixed amino acids be repeatably polymerised prebiotically? – ”

Yes, and even homochiral amino acids will not necessarily polymerize into homochiral polymers. Sidney Fox tried to polymerize homochiral amino acids via extreme heat and the heat racemized the amino acids in the process of polymerization.

And once in the polymerized state, they will racemize spontaneously over time, and all your supposition of pre-biotic sorting of L-amino acids is effectively moot anyway. The half-life research demonstrates this.

Similar considerations apply to other OOL scenarios that aren’t proteins first because chirality appears in other biotic materials.

8. By the way, is my calculation of the number of distinct sequences that are exactly 50% heads among 500 coin flips correct?

C(n, r) = C(500, 250) ≈ 1.17 × 10^149

Is my math correct? Any objections or corrections? Thanks in advance.
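For what it's worth, the figure is straightforward to check with Python's standard library (my verification, not part of the original comment):

```python
import math

# Number of 500-flip sequences with exactly 250 heads
n_half_heads = math.comb(500, 250)
print(f"{n_half_heads:.3e}")  # ~1.17e+149, matching the figure above

# For comparison, the total number of 500-flip sequences,
# and the fraction that are exactly 50% heads:
total = 2 ** 500
print(f"fraction exactly 50% heads: {n_half_heads / total:.4f}")
```

So the arithmetic checks out: about 3.6% of all 500-flip sequences land exactly on the 250-heads peak.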

9. Sidney Fox tried to polymerize homochiral amino acids via extreme heat and the heat racemized the amino acids in the process of polymerization.

So that appears to rule out extreme heat as a polymerising mechanism, then! I don’t quite see the relevance of this. Someone tried something rather unlikely to succeed, on something that is unlikely even to have existed prebiotically – a purified mixture of homochiral acids. He wasn’t able to stumble upon a plausible mechanism by that route. Ho hum. These ‘gross’ methods are very unlikely to succeed anyway. Shove in unstructured, indiscriminate factors that affect many molecules at once, and they are unlikely to ignite the spark of a process that operates surgically, right down at the level of the individual molecule, and of distinctive groups thereon.

And I should note that you flip rather readily from modern patterns to the OoL. There are 3.5 billion years of descent between now and then. The mechanisms I outlined could easily account for the homochirality of modern peptides, without having any relevance whatever for the OoL. But they do answer your ‘binomial’ issue: biological mechanisms – even completely neutral ones – shift the distribution, right to points that, in a one-step process, would be deemed near-impossible. My E. coli end up all heads or all tails, every time, and if they don’t, something must be acting against that tendency, batting them back to the central peak of 50/50 whichever way they drift. The extremes are the expected result in that system, not perennial occupation of the central peak of your starting population.

Similar considerations apply to other OOL scenarios that aren’t proteins first because chirality appears in other biotic materials.

The considerations are similar but by no means identical. In particular, the nucleic acid molecule has the advantage of complementarity. There is nothing complementary to an amino acid (including its enantiomer). Short homochiral stretches of single strand nucleic acid will hybridise with their complementary sequence.

There is a much more interesting problem than chirality going on though: base pairing. There are many different forms of base that could exist, but only those that form complementary pairs have survived. And hybridisation is, IMO, likely to provide a vital key. Double strand nucleic acid is effectively two molecules going in opposite directions and binding. The nucleic acid doesn’t need anything else from outside itself to force homochirality or restrict the monomer set; it will draw stable structures out of a mixture. This underlies the use of RNA probes.

Complementarity stabilises the molecule, reducing the turnover of the components. It renders chains much more extensible (single strands cyclise at very small lengths), and stable – the strands are less flexible and so less susceptible to hydrolysis. And such hydrolysis as does occur is, in principle, repairable, because base pairing pins the break in place, where it would float apart in single strand breakage. I think the immediate precursor to Life was a population of mutually stabilised random base sequences – non-replicating double strands, the most stable of which are homochiral and composed only of complementary bases: auto-purification.

10. And once in the polymerized state, they will racemize spontaneously over time, and all your supposition of pre-biotic sorting of L-amino acids is effectively moot anyway. The half-life research demonstrates this.

I missed this. I think you missed my point – my E. coli thought experiment was not a prebiotic sorting mechanism, but a ‘postbiotic’ one. And I seem to have failed to get across the idea that chirality is very much a secondary issue to the general one of specificity. I happen not to care too much about the problems of prebiotic protein formation, since I do not think it happened. Look no further than the Gibbs free energy of the peptide bond. Everyone at UD seems deeply wedded to the notion that Life required peptide enzymes before it could get going. I’m not.

The E. coli model illustrates the effect of birth and death upon naive probabilistic expectations – the continual culling of population members and breeding from the remainder is a memoryless process. It does not adhere to a central point, but invariably drifts towards one or other of the extremes (without a countervailing force, such as frequency-dependent selection). Other mechanisms depend on contingency – the orientation of the first chiral acid forces all derivatives to have the same orientation – or selection – homochiral peptides may be better than heterochiral ones, if the latter ever existed (though I doubt they did). In short, the modern pattern has been subject to filtration by the processes of Life, and the binomial distribution is not relevant.

All plausible mechanisms for polymerisation of a specific alpha-amino acid sequence must be discriminatory – else how do they achieve sequence at all? Such a process does not need to discriminate down to the last atom, but it needs some means of dividing the set with some level of accuracy, and that must involve some discrimination upon the side chain. The fact that certain amino acids have enantiomers with exactly the same molecular weight and gross chemical character is not relevant; a discriminatory process would not be ‘weighing’ those parameters. We chemists may struggle to differentiate enantiomers, but to a process with 3-dimensional molecular discrimination (which any specification process must possess), it is a breeze. Acids with the wrong chirality have a hydrogen atom where the ‘discriminatory process’ is looking for a particular form of side chain. They are discards.

11. This isn’t a paradox at all. Each coin flip is an independent test of the statistics of what happens to a flipped coin.

12. Hi again, Sal.

If you’re still reading, as Lizzie explained, this whole controversy comes down to different meanings of the word “consistent”. Based on a few google searches, I would agree that much (but not all) of the time, stats textbooks and the like use the word “consistent” in the same way you do. There are exceptions, however, that take the term to mean “logically consistent” as some of us do, even in this context.

For example, the philosopher David Stove. I don’t want to post the entire passage because it is long, but in the previous paragraph, Stove has discussed a coin flip experiment in which the number of heads and tails are significantly different.

This only goes to show, of course, that consistency, or not contradicting yourself, is only a very small part of rationality. To believe that the coin is fair is irrational in the light of our evidence, even though it is consistent with that evidence. After all, if the coin had never once come up heads in all of our million tosses, then that too – that is, an observed frequency of no heads in a million tosses – would have been logically consistent with the hypothesis that the coin is fair. Or again, if someone had the hypothesis that the coin is so very biased that the probability of heads with it is .99, then that hypothesis too would be consistent with an observed frequency of no heads in a million tosses. And so on: we need to fix in our minds the point that absolutely any observed frequency is consistent with every probability other than 1 and 0. And a consequence is this: where probabilistic or statistical hypotheses are concerned, consistency with the observed frequency counts for literally nothing in favour of any one hypothesis, because it is a property common to every hypothesis.
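Stove's point can be checked numerically (my illustration, not Stove's). Any probability of heads strictly between 0 and 1 assigns a nonzero probability to a million straight tails – so every such hypothesis is "consistent" with that evidence – yet the likelihoods differ astronomically:

```python
import math

n = 1_000_000  # a million tosses, none of them heads

def log_prob_no_heads(p):
    # P(0 heads in n tosses) = (1 - p)^n; work in logs to avoid underflow
    return n * math.log(1.0 - p)

for p in (0.5, 0.99, 0.001):
    print(f"p = {p}: log P = {log_prob_no_heads(p):.0f} "
          "(finite, hence the data are 'consistent' with this p)")

# Consistency cannot discriminate between hypotheses, but likelihood can:
# this is the log of the ratio P(data | p=0.001) / P(data | p=0.5),
# an astronomically large factor in favour of the near-zero p.
print(log_prob_no_heads(0.001) - log_prob_no_heads(0.5))
```

Every hypothesis passes the consistency test; the likelihoods are what separate them, which is exactly Stove's "counts for literally nothing" observation.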