SOCIAL SCIENCE IS great at making wacky, wonderful claims about the way the world—and the human mind—works. College students walk more slowly after being exposed to words relating to elderly people. Elections are determined by the outcome of college football games. Obesity is contagious, you can have business success by standing in an expansive “power pose,” baseball players with a K in their name are more likely to strike out, and hurricanes with girl names are more dangerous than hurricanes with boy names.
What do the above claims all have in common? They were published in respected scientific journals, they were publicized in the news media, and their publication was contingent on finding a “statistically significant” comparison in a small sample. And I don’t think there’s good evidence for any of them.
It turns out that it’s easy for researchers to find significant effects by manipulating data—perhaps not with conscious intent. Outside researchers have tried to replicate some of these studies and failed to come up with the same results, as demonstrated in psychology’s Reproducibility Project. Many pixels have been spilled in the last few years explaining why scientists and citizens shouldn’t believe a lot of published and publicized research.
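To see how easily noise yields “significance,” here is a minimal simulation sketch (not any study’s actual analysis). It generates pure-noise experiments with two groups of 20 subjects each, and a “study” that checks 20 different comparisons and reports any one that clears the conventional threshold; |t| > 2 is used as a rough stand-in for p < 0.05, and all sample sizes and counts are illustrative assumptions.

```python
import random
import statistics

def t_statistic(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    diff = statistics.mean(a) - statistics.mean(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return diff / (pooled_var * (1 / na + 1 / nb)) ** 0.5

def chance_of_some_significance(n_studies=2000, n_comparisons=20,
                                n_per_group=20, seed=1):
    """Fraction of pure-noise 'studies' that find at least one
    'significant' comparison when allowed to test many outcomes."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_studies):
        for _ in range(n_comparisons):
            # Both groups are drawn from the same distribution:
            # there is no real effect to find.
            a = [rng.gauss(0, 1) for _ in range(n_per_group)]
            b = [rng.gauss(0, 1) for _ in range(n_per_group)]
            if abs(t_statistic(a, b)) > 2:
                hits += 1
                break
    return hits / n_studies

print(chance_of_some_significance())
```

Each single comparison comes out “significant” only about 5 percent of the time, as designed; but a researcher free to examine 20 outcomes finds at least one such result in roughly two-thirds of pure-noise studies, with no conscious manipulation required.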
People have started to come to terms with the fact that a research team, or even an entire subfield of science, can get trapped in a loop of confirmation of noisy findings. That means you can’t take published, peer-reviewed studies for granted. Statistical significance does not mean what most people think it does. And that gives us the luxury to go beyond defining the problem—and start considering solutions.
More at Wired.