The Reproducibility Project, as it was known, carried impeccable bona fides. The project was led by the Center for Open Science and its co-founder, Professor Brian Nosek of the University of Virginia. Its goal was to encourage transparency and data sharing among research scientists. It was the largest undertaking of its kind in the history of the social sciences.

The news that a majority of key social psychology experimental findings were, at best, dubious should have rumbled like an earthquake through higher education, where social science and an obsession with data generally have infiltrated every academic endeavor (“Quantitative Ethics!”). At first, some social scientists and the journalists who cover them expressed dismay that bad science might underlie so many of their most cherished axioms and practices. But they got over it.

Paying too much attention to the Reproducibility Project’s work would have been a particular blow to science reporters. The meat-and-taters of their trade is the colorful, provocative, and always relevant finding of some new social science experiment: A new study by researchers at [Harvard, Berkeley, Keokuk Community College] suggests that [dog lovers, redheads, soccer goalies] are much more likely to [wear short sleeves, drink craft beer, play contract bridge] than cat lovers, but only if [the barometer is falling, they are gently slapped upside the head, a picture of Roger Clemens suddenly appears in their cubicle...].
Without such findings, science reporters would find their production of “content” reduced by half or more. The entire mega-selling corpus of the New Yorker social-science writer Malcolm Gladwell would collapse, along with that of his many imitators in the pop-science racket. Marketers who need fresh data, however spurious, to bamboozle clients would suddenly be left empty-handed. Armies of grad students would find themselves with nothing to do. Lots of people have an interest in pretending the Reproducibility Project didn’t happen.

And yet, for anyone except academics and science reporters, the catastrophic replication rate is hard to ignore. Nosek fielded 270 researchers to attempt the 100 replications, and only 39 of the original findings could be confirmed. In experimental science, replication functions as the great backstop. If an experiment can’t be repeated and yield much the same results, then the original finding is questionable, and it certainly requires further attempts at replication.

At least that’s how things work in real science. And social scientists are quite insistent that they are as “real” and as rigorous as chemists and physicists. This is why they ape the methodology of the physical sciences. A good sociologist or social psychologist will have a hypothesis and an experiment with which to test it.