American Consequences - May 2018

to test it, a place called a “lab” to do the experiment in, and human guinea pigs to sit as experimental subjects. All of this yields loads of data to study and manipulate, usually with statistics, just as real scientists do. Yet attempts at replication are rare in the social sciences. And the absence of replication is merely one way in which social science fails to qualify as science.

It is hard to overstate how sweeping the consequences of the Reproducibility Project should have been. Many of the foundation stones of social psychology, behavioral economics, and sociology were called into question.

“Priming,” for example, is an almost ubiquitous and problematic practice in social science experiments. Researchers offer subtle or subconscious cues to subjects and then measure their reactions under varying conditions. One seminal study, for example, claimed to show that if subjects were presented (“primed”) with words commonly associated with aging, they would – unconsciously – walk more slowly when they left the psych lab. Thousands of experiments have been built on the assumption of priming’s effectiveness. Yet the Reproducibility Project researchers failed to replicate the studies that first persuaded social scientists that priming had lasting effects. Since the project’s report, other attempts to replicate the original priming studies have also failed.

The Reproducibility Project generated other surprises, but the real surprise is that anyone should have been surprised. The warning bells have been clanging around social science for many years.

THE CULT OF STATISTICAL SIGNIFICANCE

The “reproducibility crisis” isn’t peculiar to the social sciences. More than a decade ago, a professor of medicine at Stanford named John Ioannidis published a paper with the arresting title “Why Most Published Research Findings Are False.” He was talking about research in medicine, and his main complaint was about the over-optimistic use of statistics. Since then, many attempts to replicate medical research have only underscored his warning.

The main weakness Ioannidis pointed to was the use of “statistical significance” to validate a finding. Statistical significance is a bedrock of social science as well. Defining it precisely would require a dip into one of the muddier pools of mathematics, but in brief it is an analysis of data that shows your data are real data and not just a bunch of junk numbers. When employed correctly, statistical significance allows a researcher to judge how likely it is that his finding occurred by mere chance.

It has its uses. In public opinion polling, 70 years of practice have shown that statistical significance is indispensable in determining the likelihood that a polling result is accurate. Public opinion polling is, in effect, an experiment performed on a relatively large number of people (usually between 600 and 1,200) who have been randomly selected from a general population – registered voters, for example. Random selection allows a
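The article doesn’t show the arithmetic behind those sample sizes, but the standard 95% margin-of-error formula for a randomly sampled proportion makes the point concrete. A minimal sketch in Python (the function name and the worst-case assumption p = 0.5 are mine, not the article’s):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n (z = 1.96 for 95% confidence).
    p = 0.5 is the worst case, giving the widest margin."""
    return z * math.sqrt(p * (1 - p) / n)

# The typical poll sizes the article mentions:
for n in (600, 1200):
    print(f"n = {n}: +/- {margin_of_error(n) * 100:.1f} points")
# n = 600:  +/- 4.0 points
# n = 1200: +/- 2.8 points
```

Doubling the sample from 600 to 1,200 shrinks the margin only from about 4 points to about 2.8, which is why pollsters rarely bother with much larger samples.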

