More statistical confusion…

At UD, while I was checking the Moran-Arrington score, I couldn’t help noticing a news item entitled, provocatively (for me), Psychology does not speak the language of statistics very well.

So, being a psychologist who teaches statistical methods to psychology students, I had to click, and found that it was a report of a blog piece here called Statistics Shows Psychology Is Not Science.

Well, there’s a lot I disagree with in that piece, but I would be first in line to say there’s a lot of sloppy science in psychology (which doesn’t make psychology “not science” – Bad Science isn’t necessarily Not Science). But I couldn’t help but be amused by this paragraph:

But statistics is problematic when it comes to the social sciences. The first key issue is sample size.

Think of a political survey poll. Every one of these polls states a margin of error; surveys with a larger number of respondents have a correspondingly smaller margin of error. Most social research studies use sample sizes of tens, hundreds, and occasionally thousands. That may sound like a lot, but remember that statistical physics deals with sample sizes that can be described in unimaginable ways like this: One thousand trillion times more than the total number of stars in the Universe. Or, enough sample atoms that if each one were a grain of sand, they could build a sand castle 5 miles high. Or, a number of molecules greater than the number of milliseconds since the Big Bang.

The problem with sample sizes in social science isn’t that they aren’t large enough. Every student in statistics 101 knows, just as every UD denizen must surely know, the law of large numbers, which tells us that as sample size increases, a sample’s statistics rapidly and non-linearly converge on the parameters of the population from which it is drawn. That’s both why you don’t need anything like 500 heads to know that a coin is fishy, and why you don’t need astronomical sample sizes to have a fairly tight margin of error on a poll.
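To put a number on that intuition, here is a quick sketch (my illustration, not from the original piece) of the standard 95% margin of error for an estimated proportion near 50%, which shrinks as one over the square root of the sample size – so the gain from going beyond a few thousand respondents, let alone to physics-scale samples, is tiny:

```python
import math

def margin_of_error(n, p=0.5):
    """95% margin of error for an estimated proportion p from n respondents."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Quadrupling n only halves the margin of error:
for n in [100, 1_000, 10_000, 1_000_000]:
    print(f"n = {n:>9,}: margin of error = +/-{margin_of_error(n):.3%}")
```

A poll of 1,000 people already gets you to roughly ±3 percentage points; a million people only takes you to about ±0.1 – which is why pollsters stop at a few thousand.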

So the “news” source appears to be one that kairosfocus might (rightly!) look askance at, and Barry Arrington too.

The real problem, though, isn’t sample size but the non-randomness of the sampling.  We don’t get more generalisable results by increasing sample size, but by increasing the representativeness of the sample. That’s because people are non-uniformly distributed, and full of clustering and interactions.  So is biology.  Which is why ID should talk less about coins and more about chemistry (as I’ve been saying in a few comments recently!)
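A toy simulation (my own illustration, with made-up numbers) makes the point: if, say, people with some trait are twice as likely as others to end up in your sample, your estimate converges quickly – but to the wrong value – and no increase in sample size will fix it:

```python
import random

random.seed(42)

TRUE_RATE = 0.30  # hypothetical: 30% of the population holds the trait

def biased_sample(n):
    """Draw n respondents where trait-holders respond twice as often."""
    out = []
    while len(out) < n:
        has_trait = random.random() < TRUE_RATE
        respond_p = 1.0 if has_trait else 0.5  # non-response bias
        if random.random() < respond_p:
            out.append(has_trait)
    return out

# The estimate converges to 0.3 / (0.3 + 0.7 * 0.5) = ~0.46, not 0.30,
# however large n gets: the bias, unlike the sampling error, never shrinks.
for n in [100, 10_000, 200_000]:
    est = sum(biased_sample(n)) / n
    print(f"n = {n:>7,}: estimated rate = {est:.3f} (true rate {TRUE_RATE})")
```

Bigger samples just give you a tighter confidence interval around the wrong answer; only a more representative sampling frame moves the answer itself.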
