Last Thursday, 31st of October, we finally kicked off our very first Singapore ReproducibiliTea journal club meeting. After trying to spread the word about this new initiative as widely as possible, we were very happy to welcome an interesting mix of participants at The Arc at Nanyang Technological University (NTU). Our group was composed of researchers and librarians from different institutions (NTU, the National Institute of Education (NIE), the Agency for Science, Technology and Research (A*STAR) and the Centre for Biomedical Research Transparency), research fields (psychology, education and biomedical sciences), methodological approaches (quantitative and qualitative research) and career stages. The amazing NTU library staff made sure that we did not miss the iconic ReproducibiliTea teapot introduced by other ReproducibiliTea journal clubs worldwide. As the climate here is just slightly warmer than in Oxford, where Amy Orben, Sam Parsons and Sophia Crüwell first started ReproducibiliTea, the library staff also kindly made sure that we had enough ReproducibiliTehPeng (Teh Peng being the Singaporean way of referring to iced tea).

We started off with a warm virtual welcome that Amy and Sam filmed especially for our Singaporean ReproducibiliTea group, and then went on to a short round of introductions to share our motivations for discussing Open Science practices. After a short summary of the target paper for our first meeting, A manifesto for reproducible science by Munafò et al. (2017), we opened the discussion up to our own experiences, questions and ideas for tackling questionable research practices. A first question we tried to answer was how prevalent questionable research practices actually are in our own research fields. Besides our own experiences, a recent preprint by Makel, Hodges, Cook and Plucker (2019) might shed light on this issue specifically for the field of education research. In addition, to get a better idea of the needs of our Singapore ReproducibiliTea community, we would like to invite anyone reading this blog post to complete this very short informal survey put together by our colleagues from the NTU library.

Another central topic we focused on was the benefits and challenges of preregistration. On the one hand, we recognized that preregistration can have important benefits for our own working efficiency. By having to write down the details of our research plan before starting to collect data, we are forced to think more carefully about our study design, measures and analyses. In many cases, we might realize that our ideas were not as clear as we had thought. This can help us make changes in a timely manner and prevent wrong turns that we may not be able to fix once we start collecting data. Preregistrations might also come in handy at later stages of the research process, for example as a reminder of the analyses we had planned months or sometimes years before we actually reach that stage. Overall, we agreed that preregistration has the potential to increase the organization and control we have over our research and, as a consequence, may make us more confident about our work throughout the whole process.

On the other hand, we also shared our concerns about how preregistration is currently being implemented by some researchers. As there is no external control over the quality and level of detail of the information that needs to be provided, some researchers might communicate very vague descriptions of their working plan just to “check the box” of having preregistered their study. A recent preprint by Szollosi et al. (2019) summarizes this idea as follows: “Taking preregistration as a measure of scientific excellence can be harmful, because bad theories, methods, and analyses can also be preregistered” (p. 3). The statements in this preprint have recently triggered a heated debate on Twitter about the pros and cons of preregistration.

We also pointed out that there are rarely consequences when preregistrations and final reports show inconsistencies that are not explained by the authors. As an example, we mentioned the work by Henry Drysdale and colleagues on the COMPare trials project, discussed in episode 79 of the Everything Hertz podcast by Dan Quintana and James Heathers. This team compared clinical trial reports with their preregistered protocols and found that in several cases pre-specified outcomes were misreported. We wondered whether preregistration can still fulfil its aims of addressing publication bias and analytic flexibility if authors are not held accountable in cases like these. Nevertheless, we noted that adopting preregistration might be a necessary interim step before researchers are ready to move on to Registered Reports, which overcome many of the above-mentioned limitations of preregistration.

There are also several open questions that we would like to follow up on in our next meetings. For instance, we would like to consider how many of the solutions to questionable research practices put forward by the Open Science community apply to qualitative research. Recent publications by Haven and van Grootel (2019) and specific templates on the Open Science Framework might be useful to get us started on this matter. We also aim to invite representatives of the NTU Institutional Review Board to help us better understand how to modify participant consent forms so that we can openly share the data we collect. Another idea that was mentioned is to invite editors of journals that have started adopting Registered Reports to give us more insight into this new Open Science practice. And last, but not least, we would like to emphasize the important role of librarians in promoting Open Science practices and explore possibilities for effective collaborations, not only when it comes to open data, but also with respect to other Open Science practices. The article Replicable Services for Reproducible Research: A Model for Academic Libraries by Sayre and Riegelman (2019) will surely be a key resource to help us achieve this goal.

Overall, this first Singapore ReproducibiliTea journal club meeting was a great way to make us aware that there are lots of building blocks out there to help us address questionable research practices. Our main challenge now seems to be learning how to manage these building blocks and put them together to improve the reliability and efficiency of scientific research. Hopefully, Singapore ReproducibiliTea can contribute towards this aim by creating opportunities for researchers and librarians to get together and support each other in this endeavour. We look forward to welcoming you all to our next meeting on Thursday 7th of November to discuss False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant by Simmons, Nelson and Simonsohn (2011). Our discussion leader for this session will be the amazing Suzy Styles, who has been one of the pioneers in promoting Open Science practices at NTU. To give us a real-life example of the problems of analytic flexibility, she will share preliminary results of a meta-analysis conducted by her team on the many decisions researchers make when analysing EEG data. Read the paper and come along for Teh Peng, snacks and Open Science chats! We will be waiting for you at:

The Arc – Learning Hub North, TR+18, LHN-01-06
Thursday 7th of November, 1-2pm

We’d like to invite you to participate in the following two polls, based on the paper under discussion:
Munafò, M. R., Nosek, B. A., Bishop, D. V. M., et al. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1, 0021. https://doi.org/10.1038/s41562-016-0021