
We need to check our work: Rethinking replication and publishing

The most exciting phrase to hear in science, the one that heralds the most discoveries, is not “Eureka!” (I found it!) but “That’s funny…”
― Isaac Asimov

A popular theoretical framework in my field, developmental psychology, compares children to scientists. Alison Gopnik first championed the view that children, like scientists, have theories about the world, and test and revise those theories through their everyday behavior. “A theory, in science or in everyday life, doesn’t just describe one particular way the world happens to be at the moment,” Gopnik wrote in a 2005 Slate article. “Instead, having a theory tells you about the ways the world could have been in the past or might be in the future.”

Children have been observed experimenting by repeating the same intervention many times – for instance, in many of Gopnik’s studies, children place blocks on a “magical machine” to determine which ones make the machine light up and play music. It seems from these repetitions that children want to be sure what they’re observing isn’t a one-off fluke, but a real causal effect in the world. Children study the results of their interventions and revise their theories when a new result contradicts the old one. This is, in principle, what we scientists do to test our theories as well – except preschoolers seem to have bested us at repeating interventions to check our work.

Replicating a study seems easy enough, but the time and effort involved in replicating even your own work can be quite costly. In most STEM circles, a research group that finds a significant result would be crazy not to publish it as quickly as possible, lest someone else beat them to the punch. Additionally, there is little incentive to replicate, since journals typically seek out new results for publication. In theory, this is a good thing – we want original findings to make progress in science – but publishing norms have taken it to an extreme. Daniele Fanelli has found that by 2007, roughly 86% of publications reported positive results, a 22% increase from 1990.
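To make the stakes concrete, here is a minimal simulation sketch in Python – all parameters are invented for illustration, and none of this comes from Fanelli’s data. The idea: when many underpowered labs test a small true effect and only the statistically significant results are published, the published estimates systematically overshoot the truth.

```python
# Illustrative sketch with invented parameters (not Fanelli's data):
# if only statistically significant "positive" results get published,
# the published effect sizes inflate well beyond the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2    # small true standardized effect (Cohen's d), assumed
n_per_group = 20     # small samples, common in psychology, assumed
n_labs = 5000        # many labs independently run the same study

published = []
for _ in range(n_labs):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    if p < 0.05 and t > 0:  # only significant positive results "publish"
        published.append(treatment.mean() - control.mean())  # SD ~ 1, so ~ d

print(f"true effect:                {true_effect:.2f}")
print(f"mean published effect:      {np.mean(published):.2f}")
print(f"share of labs that publish: {len(published) / n_labs:.0%}")
```

With these invented parameters, only a small fraction of labs clear the significance bar, and the ones that do overestimate the true effect several-fold – exactly the kind of distortion that replication, and the publication of negative results, would help correct.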

The disincentives for replicating are usually enough for researchers to steer clear of it, but the few who try are in for more hurdles. To replicate a study, one needs to know how the original study was done. As journal articles skew shorter, methods sections may provide less information, giving the reader only a basic idea of a study’s procedure. In many studies, effects could be due in part to small details, like the exact wording an experimenter uses, or even the color of the walls of the testing room. To probe the strength of an effect, we need to play with these variables to see how they affect the results. In the discussion surrounding the replicability of psychologist John Bargh’s studies, fellow psychologist Daniel Kahneman likened the small details in experiments to “the direction of a theatre performance,” and suggested that perhaps Bargh’s studies weren’t replicating because Bargh just “has a knack that not all of us have.”

“The conduct of subtle experiments has much in common with the direction of a theatre performance,” said Daniel Kahneman. If subtle details like these are what drive effects, that’s all the more reason to report our procedures thoroughly.

Researchers are human, and we’re subject to the same biases as everyone else, despite our meticulous training in objectivity. There are many points in the process of publishing a study where human bias can lead to a portrait of the world that is less than accurate. For instance, when psychologist Stéphane Doyen and colleagues tried to replicate John Bargh’s famous result that priming participants with “old” words led to slower walking, they found that the experimenters’ unconscious bias in timing participants’ walking speed could have exaggerated the original findings.
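As a toy illustration of how little bias it takes – a sketch with invented numbers, not Doyen’s actual data – suppose both groups truly walk at the same speed, but the experimenter, expecting the primed group to be slower, unconsciously stops the stopwatch about half a second late for them:

```python
# Toy illustration with invented numbers (not Doyen et al.'s data):
# a small unconscious bias in hand-timing can manufacture a group
# difference even when true walking speeds are identical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 50              # participants per condition, assumed
true_time = 7.0     # true seconds to walk the hallway, same for both groups
noise = 0.8         # natural person-to-person variability, assumed

control_measured = true_time + rng.normal(0, noise, n)
primed_true = true_time + rng.normal(0, noise, n)

# The experimenter expects primed participants to be slower and
# unconsciously stops the watch ~0.5 s late for them.
experimenter_bias = 0.5
primed_measured = primed_true + experimenter_bias

t, p = stats.ttest_ind(primed_measured, control_measured)
print(f"measured slowdown: {primed_measured.mean() - control_measured.mean():.2f} s")
print(f"t = {t:.2f}, p = {p:.4f}")
```

With these made-up parameters, the half-second of bias alone is usually enough to look “statistically significant” – one reason Doyen’s team relied on automated timing rather than hand-held stopwatches.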

More meticulous reporting of procedures and results would encourage researchers to be more critical of their own work and would create opportunities for collaboration. In casual conversations with other researchers, I’ve often heard things like, “If only I knew how they did that!” or “I wonder if the researchers looked at X in their results…” The availability of detailed procedures and raw data would breathe new life into published results, allowing researchers to run replications or extensions quickly rather than spending time reverse-engineering the exact procedure that led the original researchers to an effect.

Replications are also important as a safeguard against human fallibility. We hope that our current peer-review process will weed out studies in which experimenter bias or poor study design explain the results, but reviewers, like researchers, are human and subject to biases. One glaring problem is that review at some journals is not double-blind. We scientists pride ourselves on objectivity and recognition of merit, but it’s hard to say how a scientist’s reputation (or lack thereof) influences acceptance rates at journals that reveal authors’ identities.

Reviewers may also fail to spot errors in research. The Economist has reported on several studies indicating that reviewers are less familiar with statistics and less meticulous than one would hope. And anyone who has served as a peer reviewer can tell you it is a thankless job. Most reviewers are not paid or recognized in any way for their time and effort, beyond the right to add a line to the “professional service” section of their CV.

Beyond their dedication to furthering science, reviewers have little incentive to thoroughly vet the papers they’re assigned, especially when this volunteer work competes with other duties like research, teaching, and grant-writing. Sometimes reviewers are asked to comment on papers outside their expertise, which leaves them even less motivated and less qualified to provide good feedback. A colleague of mine recently waited six months to hear back from the third reviewer of his paper, only to receive an email in which the reviewer admitted he did not feel qualified to comment because the research was outside his expertise.

Often, reviewers are overworked academics with little incentive to give thoughtful feedback.

Though traditional journals are doing their part to adapt to the shortcomings of the current publishing system, alternative open-access and web-based publishing groups are poised to address many of these issues with new technologies. For one, replications may be both more likely to be accepted by and easier to execute in non-traditional journals. Many of these journals and publishing services (PLOS, PeerJ, The Winnower) publish based on the soundness of a study’s methods and conclusions, rather than its perceived impact on its field, which encourages high-quality rather than flashy work. And without the word limits and space constraints that come with printed journals, non-traditional journals can let researchers upload detailed procedure instructions, raw data, or even video clips of their procedure.

New publishing systems could also speed up scientific communication. Response letters to reviewers were probably useful when correspondence about manuscripts travelled through the postal service, but they are now a relic of an old system. Rather than writing formal letters noting that a sentence changed on page 34, paragraph 3, authors and reviewers may one day exchange digital documents that highlight changes, comments, and suggestions directly, cutting down the time wasted formatting correspondence. The online journal PeerJ already lets registered users comment on papers, creating a community in which feedback can be given instantly and less formally.
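As a small proof of concept – an illustrative sketch, with invented manuscript sentences, that describes no journal’s actual system – Python’s standard difflib can already generate the kind of machine-readable change summary such a system might exchange:

```python
# Minimal sketch of the idea: machine-generated "what changed" summaries
# instead of formal response letters. Standard-library Python only;
# the manuscript sentences below are invented for illustration.
import difflib

before = [
    "Participants walked more slowly after priming.",
    "We conclude the effect is robust.",
]
after = [
    "Participants walked more slowly after priming (n = 30).",
    "We conclude the effect merits further replication.",
]

for line in difflib.unified_diff(before, after,
                                 fromfile="manuscript_v1",
                                 tofile="manuscript_v2",
                                 lineterm=""):
    print(line)
```

Run on two versions of a manuscript, this prints exactly which lines were removed and added – the skeleton of a change summary that needs no formal letter around it.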

Online publishing and review also open up the possibility of more feedback. Why stop at the standard three reviewers? Allowing qualified, registered users to comment on or review articles may also create an incentive for higher-quality feedback. Comments, for better or for worse, could be made public, allowing academics to be recognized for thoughtful feedback. PeerJ, for instance, encourages high-quality comments by awarding points to users whose comments other users nominate as “insightful.”

Surely, other cultural changes need to occur in the science world before we adopt new publishing systems – for instance, as Berkeley cell biologist and Nobel laureate Randy Schekman suggested in an op-ed last week, placing less importance on journals’ impact factors. And even after we adopt new publishing systems, they will not be a panacea for every issue in the science world. Still, these recurring discussions about the importance of replication and the need for alternative publishing systems suggest that the time is ripe to refine our scientific values and to rethink the system we have in place to uphold them.

Jane Hu is a PhD candidate in the psychology department at the University of California, Berkeley. Her research focuses on social cognition and learning in preschoolers. She is also an editor of the Berkeley Science Review. Follow her on Twitter @jane_c_hu, and check out her science blog: metacogs.tumblr.com



