Last Thursday, November 21st, our Singapore ReproducibiliTea journal club met once again to discuss the topic “Reproducibility Now: Many studies don’t reproduce and why”. Thanks to our discussion leader Giulio Gabrieli from the Social and Affective Neuroscience Lab (SAN Lab – @SANLabNTU) at Nanyang Technological University (NTU), we started off with a brief summary of our target paper, “Estimating the reproducibility of psychological science” by the Open Science Collaboration (2015), and also referenced similar publications from other research fields.

While it seems common to many research fields that the findings reported in individual studies do not replicate, we lack a clear picture of how prevalent this problem is, because replication studies are still rather rare. For instance, Makel and Plucker (2014) found that only 0.13% of the 164,589 articles published in the top 100 journals in educational research were replications. Why is this so? Should we not be sure that a finding represents a true effect before we draw recommendations for stakeholders? Unfortunately, many journals and funding agencies still favour novel research studies over replications, which makes it difficult for researchers to pursue replication studies.

But what do we mean when we talk about replication or reproducibility? While both terms are often used as synonyms, some researchers have proposed to use the term “reproducibility” when the same results are obtained by analysing the same data as in the original study. In contrast, the term “replicability” would be reserved for situations in which the same result is obtained by repeating the same analysis with a different dataset (The Turing Way Community, 2019).

The replication crisis is widely discussed among quantitative researchers, but how does it apply to qualitative research? Our discussion revealed that being able to replicate a finding might be less important in qualitative research, because the uniqueness of the sample and the specific context of each study play a key role. This shows how important it is to take into account the underlying philosophy-of-science assumptions behind our practices when we discuss questionable and open science practices. Quantitative researchers may also have much to learn from qualitative research practices about recognizing our own subjective biases and reporting them transparently as part of our methodology.

We also wondered to what extent questionable research practices are being discussed among qualitative researchers. While there are surely many differences between quantitative and qualitative methodologies, there are also common problems, such as analytic flexibility, that could be tackled by raising the standards of transparency in methods sections. In line with this idea, we wondered whether the methods section should perhaps be the part of a scientific manuscript that researchers pay the most attention to, and whether the word limits imposed by many journals are an obstacle in this respect.

Finally, we discussed several ideas that could help tackle the replication crisis and help us communicate more robust research findings to stakeholders. Many of these ideas are nicely summarized in the publication below, which specifically addresses the replication crisis in the field of second language research but surely applies to many other research fields as well:

Marsden, E., Morgan‐Short, K., Thompson, S., & Abugaber, D. (2018). Replication in second language research: Narrative and systematic reviews and recommendations for the field. Language Learning, 68(2), 321–391.

We also talked about an exciting project called repliCATS, coordinated by an interdisciplinary team of researchers at the University of Melbourne in Australia. This project aims “to develop more accurate, better-calibrated techniques to elicit estimates of the replicability of social & behavioural science evidence” by eliciting group judgements on the replicability of 3,000 research claims. This will allow a better understanding of “how scientists reason about other scientists’ work, and what factors makes them trust it” (https://replicats.research.unimelb.edu.au/#tab19). For more information on this project, listen to this episode of the Everything Hertz podcast, in which the hosts Dan Quintana and James Heathers talk to Prof. Fiona Fidler about repliCATS. We reached out to Fiona over Twitter to see if our journal club might be able to participate in one of the repliCATS workshops in the future, so stay tuned for updates on this.

Next week we will be meeting on Thursday, 28th of November, to discuss “Preregistration: a potential solution”, based on the publication “The preregistration revolution” by Nosek, Ebersole, DeHaven and Mellor (2018). Dr. Pierina Cheung from the Centre for Research in Child Development at the National Institute of Education will be leading our discussion and sharing her experiences with preregistering her own research. So read the paper and come along for Teh Peng, snacks and Open Science chats!

We will be waiting for you at:

The Arc – Learning Hub North, TR+18, LHN-01-06

Thursday 28th of November, 1-2pm

You can also join us virtually by contacting alexa.vonhagen@nie.edu.sg to request the relevant information.