Warning: this post is nerdy. It makes reference to statistical testing. This is because I’m a giant nerd and I can’t help it. I’m sorry about that. You’ve been warned.*
So in my very first post, I ranted about how tricky it can be to communicate scientific research to the public because each study is one small pixel in the big picture of research that the public usually doesn’t get to see. This is partly because traditional media is suffering and many news organizations don’t even have dedicated science reporters anymore. It’s also because the big picture is a very difficult thing to see, even for people working in the field. Understandably, the sports guy asked to give a report on the latest stem-cell research has a hard time putting things into the right context.
I think about this because I do the kind of research that is probably most often reported to the public (perhaps with the exception of pure medical research, which is another favourite of the media): epidemiology – the study of disease patterns and causes in the population. You’ll know it from headlines such as “Cell phones linked to brain cancer” and “Sunshine vitamin cures EVERYTHING!”
These results are interesting and should be out there, but presenting them with so little context does them a real disservice. When the story inevitably changes over time – science is a bumpy road – people get the impression that science can’t be trusted. One study on its own cannot be trusted – the body of evidence can.
Which brings me to what actually inspired today’s rant – the latest xkcd comic. Just so I’m not totally guilty of presenting things without context, here’s a brief explanation of what “p>0.05” refers to: When you want to study something like “do jelly beans cause acne?” the ideal would be to examine everyone in the world and see if in fact those who eat jelly beans have acne. However, you can’t do that. So you take a small sample of people and just cross your fingers and hope that you got a good representative bunch and not the only 20 people on earth who have a sensitivity to jelly beans leading to acne.
But because you’re not sure, you calculate the probability that you’d see a difference this big just by chance – that is, if jelly beans actually had no effect on acne at all. If that probability is less than 5% (typically), you figure the result is unlikely to be a fluke and you call it ‘significant’. If the probability is more than 5% (even if it’s 6%) you groan loudly, throw in a “More research must be done” and pray that someone publishes it anyway. So “p>0.05” just means your results are not significant.
It also means that 5% of the time we’re TOTALLY WRONG and this is BUILT RIGHT INTO THE SCIENCE.
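If you want to see that built-in error rate for yourself, here’s a quick back-of-the-envelope simulation (my own sketch, not from any real study – the sample sizes and acne rates are made up): run a thousand jelly-bean “experiments” where jelly beans truly do nothing, test each one at the 5% level, and count how many come out “significant” anyway.

```python
import random
from math import erf, sqrt

def acne_experiment(n=500, true_rate=0.2):
    """One null experiment: jelly beans do NOTHING, both groups
    have the same true acne rate. Returns a two-sided p-value
    from a normal-approximation two-proportion test."""
    beans = sum(random.random() < true_rate for _ in range(n))
    control = sum(random.random() < true_rate for _ in range(n))
    p1, p2 = beans / n, control / n
    pooled = (beans + control) / (2 * n)
    se = sqrt(2 * pooled * (1 - pooled) / n)
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))) is the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

random.seed(42)  # reproducible randomness
trials = 1000
false_alarms = sum(acne_experiment() < 0.05 for _ in range(trials))
print(f"{false_alarms} of {trials} no-effect experiments were 'significant'")
# expect somewhere in the neighbourhood of 5% of them
```

Even though jelly beans have exactly zero effect in every single one of these experiments, roughly one in twenty still clears the p < 0.05 bar – which is exactly what the xkcd comic is poking fun at when twenty colours get tested and green “wins”.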
So without further ado, here’s my warning as to why you shouldn’t always toss out your cell phone or your chocolate bar at the first report of a study:
* That being said, even if you’re not a card-carrying member of the mathy-nerd club, don’t be afraid of numbers! They’re really not worth your fear. They just communicate ideas in a dense form and therefore require a touch more time per unit than letters typically do; if a picture is worth a thousand words, a number is probably worth a hundred words ± 20 (dammit! sorry, I’m trying to get it under control). My point is, numbers are still just telling a story and one that’s often pretty interesting.