Yes, Lizzie, Chance is Very Often an Explanation

[Posted by Barry at UD]

Over at The Skeptical Zone Elizabeth Liddle has weighed in on the “coins on the table” issue I raised in this post.

Readers will remember the simple question I asked:

If you came across a table on which was set 500 coins (no tossing involved) and all 500 coins displayed the “heads” side of the coin, how on earth would you test “chance” as a hypothesis to explain this particular configuration of coins on a table?

Dr. Liddle’s answer:

Chance is not an explanation, and therefore cannot be rejected, or supported, as a hypothesis.

Staggering. Gobsmacking. Astounding. Superlatives fail me.

Not only is Dr. Liddle’s statement false, it is the exact opposite of the truth. Indeed, pharmaceutical companies, to name just one example, have spent countless billions of dollars in clinical trials of drugs attempting to rule out the “chance explanation.”

Don’t take my word for it. Here is a paper called What is a P-value? by Ronald A. Thisted, PhD, a statistics professor in the Departments of Statistics and Health Studies at the University of Chicago. The abstract states:

Results favoring one treatment over another in a randomized clinical trial can be explained only if the favored treatment really is superior or the apparent advantage enjoyed by the treatment is due solely to the working of chance. Since chance produces very small advantages often but large differences rarely, the larger the effect seen in the trial the less plausible chance assignment alone can be as an explanation. If the chance explanation can be ruled out, then the differences seen in the study must be due to the effectiveness of the treatment being studied. The p-value measures consistency between the results actually obtained in the trial and the “pure chance” explanation for those results. A p-value of 0.002 favoring group A arises very infrequently when the only differences between groups A and C are due to chance. More precisely, chance alone would produce such a result only twice in every thousand studies. Consequently, we conclude that the advantage of A over B is (quite probably) real rather than spurious.

(emphasis added)

In a clinical trial the null hypothesis is that the apparent advantage of the treatment is due to chance. The whole point of the trial is to see if the company can rule out the chance explanation, i.e. to rule out the null hypothesis that the results were due to chance, i.e., the chance hypothesis. So, if “chance is not an explanation” what is the point of spending all those billions trying to rule it out?
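
For readers who want to see the mechanics, here is a minimal sketch in Python of the logic described above. The counts are invented for illustration, and SciPy’s Fisher exact test stands in for whatever analysis a real trial would report; the point is only where the null hypothesis sits.

    # Sketch only: invented counts, not data from any real trial.
    # Null hypothesis: the treatment has no effect, so any apparent advantage
    # is just the luck of random assignment. The p-value asks how often random
    # assignment alone would produce a split at least this lopsided.
    from scipy.stats import fisher_exact

    table = [[60, 40],   # treatment arm: responders, non-responders (hypothetical)
             [40, 60]]   # placebo arm:   responders, non-responders (hypothetical)

    odds_ratio, p_value = fisher_exact(table)
    print(p_value)  # a small p-value is the basis for rejecting the "no effect" null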

Want more? Here’s a paper from Penn State on the Chi-square test. An excerpt:

Chi-square is a statistical test commonly used to compare observed data with data we would expect to obtain according to a specific hypothesis. For example, if, according to Mendel’s laws, you expected 10 of 20 offspring from a cross to be male and the actual observed number was 8 males, then you might want to know about the “goodness to fit” between the observed and expected. Were the deviations (differences between observed and expected) the result of chance, or were they due to other factors? How much deviation can occur before you, the investigator, must conclude that something other than chance is at work, causing the observed to differ from the expected? The chi-square test is always testing what scientists call the null hypothesis, which states that there is no significant difference between the expected and observed result.

(emphasis added)

Obviously, asking the question, “were the deviations the result of chance, or were they due to other factors” makes no sense if, as Liddle says, “chance is not an explanation.”
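
For what it’s worth, the chi-square arithmetic in the excerpt is easy to run. Here is a sketch using SciPy; only the 8-observed/10-expected counts come from the excerpt, the rest is assumed.

    # Penn State's example: 20 offspring, 10 males expected under the null, 8 observed.
    from scipy.stats import chisquare

    observed = [8, 12]    # males, females actually observed
    expected = [10, 10]   # males, females expected under the null hypothesis

    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(stat, p_value)  # chi-square = 0.8, p ~ 0.37: the deviation is well within
                          # what random variation produces, so the null is retained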

I don’t know why Dr. Liddle would write something so obviously false. I am certain she knows better. “Darwinist Derangement Syndrome” or just sloppy drafting? I will let the readers decide.

 

155 thoughts on “Yes, Lizzie, Chance is Very Often an Explanation”

  1. I will do Barry the honour of assuming he has actually misunderstood me, as opposed to having deliberately misunderstood me. And it’s a common misunderstanding, even among scientists, so I will cut slack for a lawyer.

    But he has, of course, missed the entire point of my post, which is that “chance” is not an explanatory hypothesis.

    Sure, we can use “chance” as a free-hand “explanation” – “it was just one of those things”; “we met purely by chance”; “if my parents hadn’t met by lucky chance, I wouldn’t be here”.

    But we are talking about formal statistical null hypothesis testing here (clearly – Barry headed his original post “A Statistics Question”) and in formal statistical hypothesis testing, chance is not a hypothesis.

    Sure, we speak of retaining the null as accepting that the data could have been the result of “chance”. But chance is not the null.

    Where chance comes in is at the level of sampling. If we “randomly sample” from a population (e.g. by “sampling” coin tosses from a near-infinite potential population of coin-tosses), that means that we choose the members of our sample in such a way that any member of the population has an equal probability of being selected. That applies to coin-tosses, but it’s easier to envisage if we are talking about something like sampling from an electorate to get their views on who they intend voting for. But it is true of coin-tosses too – in the entire population of possible fair tosses of fair coins, half are heads and half are tails. But any given random sample of those tosses probably won’t be, just as even in an electorate in which exactly half will vote Republican and exactly half will vote Democrat, any given poll will show some slightly different proportion, and occasionally a markedly different proportion.

    So…if we have a null hypothesis (for example, that the coin and tosses are fair, or that there are equal numbers of Republican and Democratic voters), then if we observe a certain result (8 out of 10 heads/Republicans, for instance) we will retain the null hypothesis (that the coin is fair, that Rs and Ds are equal) if the chance that we would have obtained that result under the null is quite high.

    If, however, we get 500 heads/Republicans out of 500 tosses/sampled voters, we reject the null that the coin and tosses are fair/Rs and Ds are equally represented in the population, because the chance that we would have obtained that result if the null were true is extremely low.

    But in neither case is the null hypothesis “chance”. The null hypothesis is “the coin is a fair coin, fairly tossed”/”There are equal numbers of Republican and Democratic voters”.
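
    For concreteness, both cases can be checked numerically. Here is a minimal sketch using SciPy’s exact binomial test, with “p(heads) = 0.5” standing in for the null of “fair coin, fairly tossed”:

        # Null hypothesis: p(heads) = 0.5, i.e. fair coins, fairly tossed.
        from scipy.stats import binomtest

        print(binomtest(8, n=10, p=0.5).pvalue)     # ~0.11   -> retain the null
        print(binomtest(500, n=500, p=0.5).pvalue)  # ~6e-151 -> reject the null

    Either way, what is being retained or rejected is the “fair coin, fairly tossed” null, not “chance”.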

    And this is not a trivial nitpick. It goes, I think, to the error at the heart of the ID critique of evolutionary theory.

    Evolutionary theory is not the theory that what we observe is explained by “chance”. Chance explains nothing. What does explain adaptive evolution, very nicely, is the theory that when living things reproduce, the biochemical processes involved in reproduction are sufficiently complex and interactive that the results are variable, and it is therefore extremely unlikely that any two offspring will be identical to each other or to their parents, and also quite likely that one of the dimensions along which they vary will affect the chance that they will leave viable offspring, again, because the things that may happen to an organism are extremely complex, interactive, and varied.

    As a result, variants that tend to leave more viable offspring will tend to become more common.

    “Chance” is involved, in an informal sense, because this result depends on lots of things we can only model as probability distributions, not as certainties. But that’s because our knowledge is limited, not because chance itself is the operative process.

    Ronald Thisted is informally correct. When I embarked on the statistics teaching module I taught last term (and designed), I said at the outset that my stated goal was to ensure that all students knew what a null-hypothesis p value was by the end of the module.

    At the beginning, some students hazarded that it meant “the probability that the result is due to chance”. That answer is less wrong than the other common answer, which is “the probability that your null hypothesis is true”.

    So I gave it half marks in the exam. But while not exactly wrong, it is a very poor answer, because it conflates the “chance” inherent in random sampling with the null hypothesis itself. The full-mark answer is: “the probability that you would have observed data as or more extreme than the data you did observe if the null hypothesis were true”.

    The null hypothesis itself, which you accept or reject on the basis of that p value, is not the hypothesis that “chance” is the explanation for the phenomenon you are trying to account for (500 coins lying heads-up on a table, in Barry’s example). It is something quite different (in Barry’s example, it is the hypothesis that the coins were fair coins, fairly tossed). “Chance” might be the reason you got more heads than tails, or more tails than heads, given that null, but chance is not your null.

    So no, Barry, not sloppy drafting, although it is possible that your own earlier post was merely sloppy drafting when you asked: “would you reject ‘chance’ as a hypothesis to explain this particular configuration of coins on a table?”

    What you should have said is: “would you reject ‘fair coins, fairly tossed’ as a hypothesis to explain this particular configuration of coins on a table?” To which my answer is yes. And my reasoning is that the chance of getting 500 heads from a random sample of fair coins, fairly tossed, is vanishingly small.

    So now you know.

    And neither of your quoted statistics experts disagrees. I try to avoid using the words they have used, as I think it leads to precisely the erroneous interpretation you have made – that the null hypothesis is “chance”. If the null is true, a sample of data may well have characteristics that are a long way from the population mean (e.g. 75% Heads when the population has 50%), “by chance” – but the chance part is the random sampling part, not the null hypothesis part.

    “Chance” is not the null hypothesis. And unless you know what your null hypothesis actually is, then you won’t know what you are rejecting when you reject it.

  2. I would call the idea that the null hypothesis is “chance” a form of “ID Derangement Syndrome” if it were not so widespread. So I shall call it Sloppy Statistics Syndrome instead.

  3. I see that there are a number of comments on the UD thread. I’d be delighted if Barry would like to comment over here, and of course everyone else is welcome too.

    I can’t, of course, comment over there.

  4. I have to say, the Thisted quotation is very poor, and very misleading:

    Results favoring one treatment over another in a randomized clinical trial can be explained only if the favored treatment really is superior or the apparent advantage enjoyed by the treatment is due solely to the working of chance.

    This part is reasonable.

    Since chance produces very small advantages often but large differences rarely, the larger the effect seen in the trial the less plausible chance assignment alone can be as an explanation.

    I would phrase this as:

    If the drug has no effect, random sampling will produce a small apparent advantage often, but a large apparent advantage rarely, so the larger the effect seen in the trial, the less plausible it is that there is truly no advantage to the drug.

    If the chance explanation can be ruled out, then the differences seen in the study must be due to the effectiveness of the treatment being studied.

    And I would phrase this as: If the observed effect is extremely unlikely under the null hypothesis that the treatment has no effect, then we can conclude that the difference seen must be due to the effectiveness of the treatment.

    But this is really very dodgy – and is the reason why we must also take into account the statistical power of the study. If the observed effect is very large, and the sample quite small, there is a very good chance that the effect is overstated, and may not even exist, even if the p value is significant.

    The p-value measures consistency between the results actually obtained in the trial and the “pure chance” explanation for those results.

    No, it doesn’t, except in the most sloppy of senses. This is really a terribly misleading thing to say. The p value is simply a measure of the frequency with which you would observe the results you did observe, if the null of “no effect” were true.

    A p-value of 0.002 favoring group A arises very infrequently when the only differences between groups A and C are due to chance.

    No. Geez. An effect of the size observed will arise with a probability of .002 (not “0.002”, *hiss*) if there is no actual difference between the populations from which A and C are drawn. Sheesh, this guy is poor.

    More precisely, chance alone would produce such a result only twice in every thousand studies. Consequently, we conclude that the advantage of A over B is (quite probably) real rather than spurious.

    Well, “chance alone”, i.e. sampling variability, would indeed produce such a result twice in every thousand studies if the null were true.

    But Thisted is skating very near (although not quite crossing) the line into interpreting evidence against the null as equivalent to evidence for the hypothesis.

    It isn’t. And the size of the observed advantage of A over B will almost certainly be spurious, especially if the sample size is small. All we can say is that we can reject, with some degree of confidence, the hypothesis that the treatment has no effect.

    The true effect may nonetheless be tiny, and a lot tinier than the one observed. Typically, the smaller the study, the larger the observed effect, for the simple reason that only large effects can be detected by small studies.
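
    That last point is easy to demonstrate with a small simulation – a sketch only, with arbitrary parameters rather than data from any real trial:

        # Sketch: a modest true effect, two study sizes, and what the
        # "significant" studies end up reporting as the observed effect.
        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(0)
        true_effect = 0.2  # true difference between group means, in SD units

        for n in (20, 200):  # participants per group
            reported = []
            for _ in range(2000):  # 2000 simulated studies of each size
                control = rng.normal(0.0, 1.0, n)
                treated = rng.normal(true_effect, 1.0, n)
                if ttest_ind(treated, control).pvalue < 0.05:
                    reported.append(treated.mean() - control.mean())
            print(n, round(float(np.mean(reported)), 2))
        # Typical output: the "significant" n=20 studies report an effect several
        # times the true 0.2, while the n=200 studies come much closer to it.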

  5. I see that Thisted’s paper is not actually published in a journal, or doesn’t seem to be, although it has recent “corrections”. Not one I’m going to recommend to my students!

  6. Lizzie:

    But we are talking about formal statistical null hypothesis testing here (clearly – Barry headed his original post “A Statistics Question”) and in formal statistical hypothesis testing, chance is not a hypothesis.

    Sure, we speak of retaining the null as accepting that the data could have been the result of “chance”. But chance is not the null.

    Where chance comes in is at the level of sampling. If we “randomly sample” from a population (e.g. by “sampling” coin tosses from a near-infinite potential population of coin-tosses), that means that we choose the members of our sample in such a way that any member of the population has an equal probability of being selected. That applies to coin-tosses, but it’s easier to envisage if we are talking about something like sampling from an electorate to get their views on who they intend voting for. But it is true of coin-tosses too – in the entire population of possible fair tosses of fair coins, half are heads and half are tails. But any given random sample of those tosses probably won’t be, just as even in an electorate in which exactly half will vote Republican and exactly half will vote Democrat, any given poll will show some slightly different proportion, and occasionally a markedly different proportion.

    So…if we have a null hypothesis (for example, that the coin and tosses are fair, or that there are equal numbers of Republican and Democratic voters), then if we observe a certain result (8 out of 10 heads/Republicans, for instance) we will retain the null hypothesis (that the coin is fair, that Rs and Ds are equal) if the chance that we would have obtained that result under the null is quite high.

    If, however, we get 500 heads/Republicans out of 500 tosses/sampled voters, we reject the null that the coin and tosses are fair/Rs and Ds are equally represented in the population, because the chance that we would have obtained that result if the null were true is extremely low.

    But in neither case is the null hypothesis “chance”. The null hypothesis is “the coin is a fair coin, fairly tossed”/”There are equal numbers of Republican and Democratic voters”.

    And this is not a trivial nitpick. It goes, I think, to the error at the heart of the ID critique of evolutionary theory.

    Evolutionary theory is not the theory that what we observe is explained by “chance”. Chance explains nothing. What does explain adaptive evolution, very nicely, is the theory that when living things reproduce, the biochemical processes involved in reproduction are sufficiently complex and interactive that the results are variable, and it is therefore extremely unlikely that any two offspring will be identical to each other or to their parents, and also quite likely that one of the dimensions along which they vary will affect the chance that they will leave viable offspring, again, because the things that may happen to an organism are extremely complex, interactive, and varied.

    As a result, variants that tend to leave more viable offspring will tend to become more common.

    “Chance” is involved, in an informal sense, because this result depends on lots of things we can only model as probability distributions, not as certainties. But that’s because our knowledge is limited, not because chance itself is the operative process.

    (snip)

    The null hypothesis itself, which you accept or reject on the basis of that p value, is not the hypothesis that “chance” is the explanation for the phenomenon you are trying to account for (500 coins lying heads-up on a table, in Barry’s example). It is something quite different (in Barry’s example, it is the hypothesis that the coins were fair coins, fairly tossed). “Chance” might be the reason you got more heads than tails, or more tails than heads, given that null, but chance is not your null.

    (snip)

    “Chance” is not the null hypothesis. And unless you know what your null hypothesis actually is, then you won’t know what you are rejecting when you reject it.

    I think this is wonderfully stated, particularly the parts above. You quite specifically lay out the notion of the null (which ID ignores or inaccurately assumes) and where chance plays a part (but not as an explanation). Nicely done!

  7. It’s back to P(T|H) again. H is not “chance”, although Dembski calls it “the relevant chance hypothesis”, by which he appears to mean “the relevant non-Design hypothesis” (because by rejecting it, he infers Design).

    If you want to call your null hypothesis “Chance” you have to be specific, with which Dembski appears to agree (hence “the relevant” chance hypothesis), but which he mostly ignores in practice, as do most other ID writers.

  8. Lizzie said:

    “Evolutionary theory is not the theory that what we observe is explained by “chance”. Chance explains nothing. What does explain adaptive evolution, very nicely, is the theory that when living things reproduce, the biochemical processes involved in reproduction are sufficiently complex and interactive that the results are variable, and it is therefore extremely unlikely that any two offspring will be identical to each other or to their parents, and also quite likely that one of the dimensions along which they vary will affect the chance that they will leave viable offspring, again, because the things that may happen to an organism are extremely complex, interactive, and varied.”

    That may be true only when you have positive mutations, but according to the actual ToE most of the mutations are neutral. They do not affect the chance to leave viable offspring. The fixation of these neutral mutations, the ToE explains, is made by “drift”. Given the binomial distribution there is a chance that a neutral mutation gets fixed. The importance of “drift” varies according to the assumptions darwinists make when they analyse the data, but it seems that drift, according to darwinists, is a very important mechanism in evolution.
    So chance IS the explanation for evolution.

  9. Blas:
    Lizzie said:

    “Evolutionary theory is not the theory that what we observe is explained by “chance”. Chance explains nothing. What does explain adaptive evolution, very nicely, is the theory that when living things reproduce, the biochemical processes involved in reproduction are sufficiently complex and interactive that the results are variable, and it is therefore extremely unlikely that any two offspring will be identical to each other or to their parents, and also quite likely that one of the dimensions along which they vary will affect the chance that they will leave viable offspring, again, because the things that may happen to an organism are extremely complex, interactive, and varied.”

    That may be true only when you have positive mutations, but according to the actual ToE most of the mutations are neutral. They do not affect the chance to leave viable offspring. The fixation of these neutral mutations, the ToE explains, is made by “drift”. Given the binomial distribution there is a chance that a neutral mutation gets fixed. The importance of “drift” varies according to the assumptions darwinists make when they analyse the data, but it seems that drift, according to darwinists, is a very important mechanism in evolution.
    So chance IS the explanation for evolution.

    That’s an interesting point, Blas, but it still doesn’t alter mine. “Drift”, like “tossing coins”, is a stochastic process – but a process nonetheless, with an associated set of probability distributions. “Tossing coins” is a process in which the probability of Heads and Tails is equal. This is because the coin has one Heads side and one Tails side, and the tossing process is governed by forces that can produce both, but which are too complex to enable any observer to predict which forces will be present on any given trial.

    “Drifting neutral mutations” is also such a process. A neutral mutation may become more, or less, numerous in each generation, and the prevalence of the mutation so far affects how likely it is to become more prevalent in the next. The forces that govern which will happen, like the forces that act on a tossed coin, are too complex for an observer to predict on any one occasion, but they are forces nonetheless, just as the forces on a tossed coin are forces.

    If we retain the null of “drift”, it is just as when we retain the null of “fair coins, fairly tossed”. Our null hypothesis is of a specific stochastic process, as opposed to some other stochastic process, or, even, a non-stochastic process.

    For instance, we might reject “fair coin, fairly tossed” in favour of “weighted coin, fairly tossed”, in which the probability of Heads was 75%, not 50%. But it would still be a stochastic process.
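
    To make that concrete, here is a toy simulation of drift – a sketch of a simple Wright-Fisher-style process with arbitrary numbers, offered purely as illustration. The “null of drift” is a specific stochastic process whose behaviour we can compute; it is not “chance” acting as a cause.

        # A neutral variant starts at 10% frequency; each generation's frequency
        # is a binomial draw based on the previous one (no selection anywhere).
        import numpy as np

        rng = np.random.default_rng(1)
        copies = 200   # gene copies per generation (arbitrary toy population)
        start = 0.1    # starting frequency of the neutral variant

        fixed = 0
        for _ in range(1000):        # 1000 independent replicate populations
            p = start
            while 0.0 < p < 1.0:     # drift until the variant is fixed or lost
                p = rng.binomial(copies, p) / copies
            fixed += (p == 1.0)
        print(fixed / 1000)  # ~0.1: a neutral variant fixes with probability equal
                             # to its starting frequency, as drift theory predicts

    If observed trajectories were wildly inconsistent with what this process predicts, it is this specific process we would be rejecting – not “chance”.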

  10. Sorry, Lizzie, your answer is too long and not so clear. Do you agree that drift is stochastic, and therefore by chance, or not?

  11. Drift is a stochastic process. That is quite different from saying that chance is a hypothesis. Chance doesn’t cause things, but some processes that cause things do so unpredictably, and so our predictions are based on probability distributions not discrete outcomes.

  12. Lizzie:
    Drift is a stochastic process. That is quite different from saying that chance is a hypothesis. Chance doesn’t cause things, but some processes that cause things do so unpredictably, and so our predictions are based on probability distributions not discrete outcomes.

    Ok, then as you do not know what causes a neutral mutation to get fixed, a big part of evolution is due to unknown causes. Then when darwinists say there is no need of God for the diversity we see, they are lying. There are causes that they do not know, and if “chance” is not the answer, one of them could be God.

  13. Blas, if we have a population and observe that all possible point mutations occur over time, what part does chance play?

    Put in the language of coin tosses:

    If something useful happens when heads is tossed, and I can toss the coin as many times as I want, what part does chance play?

  14. petrushka:
    Blas, if we have a population and observe that all possible point mutations occur over time, what part does chance play?

    Put in the language of coin tosses:

    If something useful happens when heads is tossed, and I can toss the coin as many times as I want, what part does chance play?

    Do you understand “drift”?

  15. Blas: There are causes that they do not know, and if “chance” is not the answer, one of them could be God.

    Another possible answer is “Invisible pink unicorn”.

    If all you want is to have “God” in the bucket labelled “no evidence for, but possible” then please feel free to have that. It makes no difference to anyone in the reality based community.

  16. Blas:

    Valleys are created by water erosion
    Water erosion is a stochastic process
    The explanation for valleys is NOT chance – it is water erosion

  17. KF sure doesn’t get it:

    I simply beg to remind you that for many years, there has been a common practice of hypothesis testing by rejecting the null in light of evidence, the null being a hypothesis that chance — undirected contingency — accounts for the results observed.

  18. DrBot:
    Blas:

    Valleys are created by water erosion
    Water erosion is a stochastic process
    The explanation for valleys is NOT chance – it is water erosion

    And the explanation for “drift”?

  19. Blas: Do you understand “drift”?

    You apparently do not. If you think you understand drift, discuss drift with regard to the Lenski experiment.

  20. Robin: Change in the frequency of a gene variant due to random sampling.

    random, stochastic, chance.
    Then chance is a cause, BA is right.

  21. Blas: random, stochastic, chance.
    Then chance is a cause, BA is right.

    We’ve all been around this hill many times. Don’t you ever wonder why no one ever says “I’ve never thought of that!”?

  22. Neil Rickert: Except that drift is not a cause. It’s just a descriptive term.

    Yes, drift is the process: “stochastic process”, “random process”, “chance-guided process”. That is BA’s point: “chance” is the explanation for that process, or you have, as Lizzie does, to go for the unknown cause, and God’s intervention is not ruled out.

  23. Blas: God’s intervention is not ruled out.

    It never is, it never can be. Everybody knows this, except you it seems.

    But, for the record, neither can I rule out the involvement of the Invisible Pink Unicorn in the origin of life.

    If all you want is for that to be acknowledged, consider it done.

  24. Might another reason for the misunderstandings be that, in the world of science, an ‘explanation’ is usually close to if not synonymous with a ‘mechanism’? Would anyone claim that ‘chance’ is a mechanism? I doubt it, and in fact I seem to recall that one of the arguments used by ID-ers against evolution is exactly that: chance is not a mechanism.

  25. faded_Glory:
    Might another reason for the misunderstandings be that, in the world of science, an ‘explanation’ is usually close to if not synonymous with a ‘mechanism’? Would anyone claim that ‘chance’ is a mechanism? I doubt it, and in fact I seem to recall that one of the arguments used by ID-ers against evolution is exactly that: chance is not a mechanism.

    Yes, they do argue that. Except when BA arguing the opposite gives them yet another opportunity to pretend that we’re lying psychopaths. Heads they win, tails we lose, in the tell-shameless-lies-for-god UD world.

  26. OMagain: It never is, it never can be. Everybody knows this, except you it seems.

    But, for the record, neither can I rule out the involvement of the Invisible Pink Unicorn in the origin of life.

    If all you want is for that to be acknowledged, consider it done.

    Yes, it was. Darwinism claims that it can explain life only by “natural” causes, and that is not true unless you accept chance as a cause.

  27. Blas: If I toss a coin, it has a 50% chance of being heads or tails. But if I fully understand the system that tosses the coin, I can predict whether it’s heads or tails – it’s just physics. True or false? If false, why?

  28. Blas: Yes, drift is the process: “stochastic process”, “random process”, “chance-guided process”.

    Drift need not be stochastic. Drift is an observed effect, not a cause.

  29. Blas: Yes, drift is the process: “stochastic process”, “random process”, “chance-guided process”. That is BA’s point: “chance” is the explanation for that process, or you have, as Lizzie does, to go for the unknown cause, and God’s intervention is not ruled out.

    If drift is a random process, then “chance” – as a condition of that process – is not an explanation. It is a condition of the system.

    The whole system is the explanation. That’s where you and BA (and William, and all the other IDists) go awry, Blas. You can’t just take some condition of the system and call that an explanation. As an example, what you are arguing is that the explanation for aircraft flight is “chance” because the various air densities that planes fly through are random. That’s just plain silly.

  30. Richardthughes:
    Blas: If I toss a coin, it has a 50% chance of being heads or tails. But if I fully understand the system that tosses the coin, I can predict whether it’s heads or tails – it’s just physics. True or false? If false, why?

    True, if you know all the variables you can predict the result. At least from a deterministic point of view.

  31. Blas: Darwinism claims that it can explain life only by “natural” causes, and that is not true unless you accept chance as a cause.

    Whatever…

  32. Robin: If drift is a random process, then “chance” – as a condition of that process – is not an explanation. It is a condition of the system.

    The whole system is the explanation. That’s where you and BA (and William, and all the other IDists) go awry, Blas. You can’t just take some condition of the system and call that an explanation. As an example, what you are arguing is that the explanation for aircraft flight is “chance” because the various air densities that planes fly through are random. That’s just plain silly.

    No, “drift” is not a process. “Drift” is the fixation of a neutral mutation caused by…..
    You can fill in the blank:

    We do not know

    Chance.

  33. Blas: True, if you know all the variables you can predict the result. At least from a deterministic point of view.

    Very good! So what does that tell us about the nature of chance?

  34. Blas: Yes, it was. Darwinism claims that it can explain life only by “natural” causes, and that is not true unless you accept chance as a cause.

    Give me an example of how chance can cause something – what is the mechanism by which this chance thing can be the cause of an effect?

  35. DrBot: Give me an example of how chance can cause something – what is the mechanism by which this chance thing can be the cause of an effect?

    I am not claiming chance is a cause; darwinists use the term chance instead of saying we don’t know. Especially when they want to say “we can explain life and/or the origin of the diversity of life without any intervention of God”.

  36. darwinists use the term chance instead of saying we don’t know.

    Is this something you’ve heard “darwinists” say, or is it something that creationists have told you that “darwinists” say?

  37. Pro Hac Vice:
    darwinists use the term chance instead of saying we don’t know.

    Is this something you’ve heard “darwinists” say, or is it something that creationists have told you that “darwinists” say?

    You deny Carl Sagan, Richard Dawkins, Jerry Coyne said that?

  38. Barry asks:

    Mark Frank, now you are back to agreeing with Lizzie. Which is it? Is she wrong and the statistics professor right? Is she right and the statistics professor wrong? Surely you are not suggesting they are both right when their statements are irreconcilable. Are you?

    Both the statistics professor and I are right, but the statistics professor’s language was misleading (although he’s not unique). The problem lies in the ambiguity between “explanation” used to refer to why random sampling should produce an extreme result, and “explanation” used to refer to the explanation posed in the null hypothesis.

    Chance is not the null in either the professor’s example (clinical treatment testing) or Barry’s example (fair coin, fairly tossed). The role “chance” plays in statistical testing is at the level of random sampling – if we randomly sample from a population, or randomly allocate people to treatment or placebo, then there is a probability that for reasons that have nothing to do with the effectiveness of the treatment, one group will do better than the other. And we say such findings are “due to chance” – by which we mean that the result was fairly probable under the null hypothesis of “no effect”.

    But note that “no effect” is the null hypothesis, not “chance”.

    And it was “chance” as a hypothesis that Barry invited us to reject. In the case of the professor’s example the hypothesis he invites us to reject is “there was no effect of treatment”. And similarly if we retain the null, it is not the hypothesis of “chance” that we retain, but the hypothesis that “there was no effect of treatment”.

    Somebody in the UD thread thinks that this is just “semantics” – well yes, it’s semantics, as in the meaning words have in the context of statistical methodology and null hypothesis testing. And if we wrongly identify the semantic connotation of the word “chance” as in “this result was simply due to the chance effects of random sampling” with the word “chance” as in “we reject the chance hypothesis”, then we are making a grave error, with very important implications, particularly if we are mistaking something like Darwin’s hypothesis for some “chance hypothesis”. “Chance hypothesis” is an oxymoron. That’s because chance is not a causal explanation of anything. If we say that results are “due to chance”, that is simply a sloppy way of saying that factors beyond our control or (by design) our ability to predict affected the results, not the effect hypothesised by our study.

    Eric, who does me the courtesy, for which I thank him, of taking my point seriously, says:

    So my vote is that Elizabeth must have been thinking of the old “is chance real?” concept when she made her statement. I would disagree with her position, but at least it would be a meaningful position to take.

    Yes, it’s a meaningful position to take, but it’s not how I’d put it. I think that chance is a perfectly useful word with a perfectly good referent – and I’d define it as the word we use to describe events that happen due to a complex multiplicity of interacting factors that are too complex to model individually, but which we can still model as a pattern.

    For example, we can say that it is simply “chance” whether a tossed coin falls heads or tails, and that each is equally likely. This is perfectly “real” – there is really no way of telling which way the coin will land before it does so. However, it is not “chance” that is causing the coin to fall one way or the other (it’s mostly, if not entirely, Newtonian forces), nor is it “chance” that renders the outcome maximally uncertain – indeed it is the maximal uncertainty of the outcome that is the reason we call the game “chance”.

    But, as others have pointed out, not all games of chance have nice equiprobable distributions of outcome. Some kinds of card hands are more probable than others; when we throw pairs of dice, some totals are more probable than others; when we throw a loaded die, some faces will be more probable than others. Yet none of these outcomes are completely predictable (a die loaded to fall preferentially as a one, will sometimes still fall as a six).

    And these non-binary, non-equiprobable distributions are sometimes the result of Design (e.g. the Loaded Die), sometimes of accident (a manufacturing flaw), as are the equiprobable ones. In nature, most distributions are not flat, thanks to the Central Limit Theorem – most are bell shaped, and not all are symmetrical (Poisson distributions, for instance).
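
    To see the dice case in numbers (a trivial sketch):

        # Two fair dice: each die is equiprobable, but their sum is not.
        from collections import Counter
        from itertools import product

        sums = Counter(a + b for a, b in product(range(1, 7), repeat=2))
        for total in sorted(sums):
            print(total, sums[total] / 36)  # 7 occurs six times as often as 2 or 12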

    And when we test a null hypothesis, the first thing we have to do (before we even collect any data, ideally) is to compute the probability distribution of our data if our null is true. NOT compute the likelihood of our post hoc observations given the null hypothesis that our phenomena are “due to chance”. So we compute the probability distribution of differences between our sample means for our Placebo group and our Treatment group under the null hypothesis that there is no difference in the two effects. We do not compute the probability that our results are “due to chance”, which is both tautological and oxymoronic.
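
    Here is one concrete way to do that – a sketch with invented Placebo/Treatment scores, using a simple permutation test (one option among several, chosen here for illustration): under the null of “no difference”, the group labels are exchangeable, so re-shuffling them generates the null distribution of the difference in sample means.

        # Invented scores, for illustration only.
        import numpy as np

        rng = np.random.default_rng(2)
        placebo   = np.array([4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 4.0])
        treatment = np.array([5.6, 6.1, 4.9, 5.8, 6.4, 5.3, 6.0, 5.5])

        observed = treatment.mean() - placebo.mean()
        pooled = np.concatenate([placebo, treatment])

        # Build the null distribution of the mean difference by repeatedly
        # re-assigning the 16 scores to two groups of 8 at random.
        null_diffs = []
        for _ in range(10000):
            shuffled = rng.permutation(pooled)
            null_diffs.append(shuffled[8:].mean() - shuffled[:8].mean())

        p_value = np.mean(np.abs(null_diffs) >= abs(observed))  # two-sided
        print(observed, p_value)  # a 1.2-point difference essentially never arises
                                  # under the null, so the p-value is (near) zero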

    And I am absolutely sure that Professor Thisted would agree.

  39. You deny Carl Sagan, Richard Dawkins, Jerry Coyne said that?

    I have never seen them say it. Did they?

  40. An amusing and revealing exchange between Barry and Reciprocating Bill:

    RB starts off with a perfectly reasonable comment:

    Mark is right. “Chance explanation” in the context of Thisted’s paper refers to the fact that even perfectly executed random sampling from a population will select samples with means (of whatever variable is of interest) that inevitably differ to some degree from the mean of the population from which the samples are drawn. Samples may also display differing means. Nothing is being hypothesized to “cause” either individual measured values or sample means to take on the values they do, apart from the probabilities inherent in random sampling. The “chance” of concern is inherent in the experimental sampling procedures, not the phenomenon being measured…
    <snip>

    BA:

    RB, your assertions in 15 are wrong in every particular. Darwinists’ willingness, even eagerness, to twist, distort and obfuscate never ceases to amaze.

    RB:

    BA:

    RB, your assertions in 15 are wrong in every particular.

    I eagerly await your rebuttal of each of those particulars.

    BA:

    In the words of the man in black, “get used to disappointment.” Your assertions in 15 are so egregiously off base that they indicate one of two things: (1) someone who is invincibly stupid and incapable of understanding the issues; or (2) someone being intentionally dishonest and attempting to obscure the issue. Either way, it is pointless to engage with you. BTW, charity compels me to assume (1) is true.

    For the readers, I am not going to rise to RB’s bait. If anyone has a good faith question about the nonsense he spewed in 15, post it and I will answer it, or, better yet, go read the paper for yourself.

    You’re not fooling anyone, Barry. Not even the UD regulars.

  41. Knowing you’re right means not having to discuss how you might be wrong. What’s the definition of “Dogma”, again?
