The promises and pitfalls of preregistration
Discussion meeting organised by Dr Tom Hardwicke, Professor Marcus Munafò, Dr Sophia Crüwell, Professor Dorothy Bishop FRS FMedSci and Professor Eric-Jan Wagenmakers.
Serious concerns about research quality have provoked debate across scientific disciplines about the merits of preregistration — publicly declaring study plans before collecting or analysing data. This meeting will initiate an interdisciplinary dialogue exploring the epistemological and pragmatic dimensions of preregistration, identifying potential limits of application, and developing a practical agenda to guide future research and optimise implementation.
The schedule of talks, speaker biographies and abstracts are available below.
Attending the meeting
This meeting is intended for researchers in relevant fields.
- Free to attend
- Both in person and online attendance available
- Please register to attend via Eventbrite. An optional lunch is also available to purchase.
Enquiries: contact the Scientific Programmes Team
Organisers
Schedule
Chair
Dr Evan Mayo-Wilson, University of North Carolina, USA
Dr Mayo-Wilson’s research focuses on: evaluating the benefits and harms of health interventions (pharmaceutical and behavioural); improving methods for clinical trials and systematic reviews; and developing methods and interventions to increase research transparency and openness. He is currently the Principal Investigator with Halil Kilicoglu on a study that aims to develop automated methods for assessing compliance with reporting guidelines. Dr Mayo-Wilson is the Scientific Director for peer review of PCORI Research Reports (contracted by Origin Editorial) and Associate Editor for Systematic Reviews for the American Journal of Public Health. He serves on the American Psychological Association Open Science and Methodology Expert Panel and on the Transparency and Openness Promotion (TOP) Guidelines Advisory Panel. He has co-authored multiple guidelines for reporting clinical trials and systematic reviews.
09:05-09:30
A brief history of clinical trial registration
Clinical trial registration has become an essential part of biomedical research. Laws require it, international organisations encourage it, and journal publication is often not possible without it. This talk will explore how the prospective registration of clinical trials became an ethical, legal, and practical requirement and examine the infrastructure that has supported the growth of the practice. This will enable a discussion of the unique facets that led to the broad uptake of prospective registration in the field of clinical trials, as well as which aspects are transferable to other areas.
Dr Nicholas DeVito, University of Oxford, UK
Nicholas DeVito is a postdoctoral researcher at the Bennett Institute for Applied Data Science at the University of Oxford. He leads the Institute's research integrity work, studying issues in research transparency, open science, and other metascientific topics, including running the TrialsTracker project. Nick holds a DPhil from the University of Oxford and an MPH from the Yale School of Public Health.
09:30-10:00
Psychology: from crisis to change
Psychology went into a severe crisis: major fraud cases came to light, prominent research findings failed to replicate, and the use of questionable research practices (QRPs) in the collection and analysis of data and the reporting of results appeared to be widespread. This crisis resulted in a call for more transparency and openness to improve psychological science. Preregistration, which is registering the hypothesis, study design, and data-analysis plan prior to data collection, was proposed to guarantee a study's confirmatory nature and to increase the transparency of the research process. Preregistration has become increasingly common in recent years: different preregistration templates have been developed to guide researchers in preregistering their studies, and several journals award preregistration badges. On the other hand, meta-research shows that preregistrations are often not specific enough and that preregistered plans are not always followed. In this talk, Marjan Bakker will present the history of preregistration in psychology, describe how it is currently used, and discuss research that investigates its effectiveness.
Dr Marjan Bakker, Tilburg University, the Netherlands
Marjan Bakker is an Assistant Professor at the Methods and Statistics department at Tilburg University, where she is part of the Meta Research Center. She wants to improve science by investigating problems and possible solutions. She has examined reporting errors in psychology, which resulted in the development of Statcheck, and has also studied the use of questionable research practices and the problem of publication bias. Furthermore, she is interested in how researchers approach common statistical issues, such as statistical power or the handling of outliers, and translates her findings into best practices. Lastly, she has studied how preregistration works in practice and how it can be further improved.
10:00-10:30
Break
10:30-11:00
A history of preregistration
Professor Fidler's abstract will follow shortly.
Professor Fiona Fidler, University of Melbourne, Australia
Fiona is Head of the History and Philosophy of Science program at the University of Melbourne, co-director of MetaMelb (an interdisciplinary metascience research group) and lead PI of the repliCATS (Collaborative Assessments for Trustworthy Science) project. She is broadly interested in how experts make decisions and change their minds, especially in the context of methodological change in science.
11:00-11:30
Do pre-registration and pre-analysis plans reduce p-hacking and publication bias? Evidence from the universe of RCTs in Economics and suggestions for improvement
Randomised controlled trials (RCTs) are increasingly prominent in academic economics and their results are influential in policy circles. Pre-registration is regarded as an important contributor to research credibility. We investigate this by analysing the pattern of test statistics from the universe of RCT studies published in 15 leading economics journals from 2018 through 2021. Broadly, we draw two conclusions: (a) in contrast to other disciplines, pre-registration in economics frequently does not involve a pre-analysis plan (PAP), or sufficient detail to meaningfully constrain the actions and decisions of researchers after data are collected; consistent with this, we find no evidence that pre-registration in itself reduces p-hacking and publication bias. (b) When pre-registration is accompanied by a PAP, we find evidence consistent with both reduced p-hacking and reduced publication bias. We make some policy proposals and hope the analysis can contribute to the ongoing debate about the appropriate approach to pre-registration in economics.
Dr Abel Brodeur, University of Ottawa, Canada
Abel Brodeur is an economist at the University of Ottawa and the chair of the Institute for Replication. Brodeur's research deals with research transparency, political economy, health and labour markets. He completed his PhD in 2015 at the Paris School of Economics. His recent work deals with reproducibility, replicability and testing the effectiveness of tools to promote research credibility.
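For readers unfamiliar with how patterns of reported test statistics can be probed for p-hacking and publication bias, the sketch below illustrates the general idea with a simple 'caliper test': counting how many z-statistics fall just below versus just above the conventional 5% significance threshold. This is an illustrative example only, not the speakers' actual method or data; the function, window width, and simulated values are hypothetical, and real analyses are considerably more sophisticated.

```python
# Illustrative caliper test (hypothetical; not the authors' analysis).
import numpy as np
from scipy.stats import binomtest

def caliper_test(z_stats, threshold=1.96, width=0.20):
    """Compare counts of |z| just below vs just above the threshold.

    Absent selective reporting, z-statistics falling inside the narrow
    window should be roughly evenly split around the threshold; an excess
    just above it is a warning sign of p-hacking or publication bias.
    """
    z = np.abs(np.asarray(z_stats, dtype=float))
    just_below = int(np.sum((z >= threshold - width) & (z < threshold)))
    just_above = int(np.sum((z >= threshold) & (z <= threshold + width)))
    n = just_below + just_above
    if n == 0:
        return None
    test = binomtest(just_above, n, p=0.5, alternative="greater")
    return {"just_below": just_below, "just_above": just_above,
            "p_value": test.pvalue}

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Hypothetical z-statistics: mostly null results plus a cluster nudged
    # just past the significance threshold.
    z = np.concatenate([rng.normal(0.0, 1.0, 2000),
                        rng.uniform(1.97, 2.10, 60)])
    print(caliper_test(z))
```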
11:30-12:15
Response and discussion
Speakers from the session will respond to each other's talks, followed by an open discussion.
Dr Marjan Bakker, Tilburg University, the Netherlands
Professor Fiona Fidler, University of Melbourne, Australia
Dr Nicholas DeVito, University of Oxford, UK
Chair
Professor Simine Vazire, University of Melbourne, Australia
Simine Vazire is a Professor in the Melbourne School of Psychological Sciences at the University of Melbourne. She has two lines of research. One examines people's self-knowledge of their personality and behaviour and another examines the individual and institutional practices and norms in science, and the degree to which these norms encourage or impede scientific self-correction. She co-founded the Society for the Improvement of Psychological Science (SIPS) and has been editor of a number of journals. She is the incoming Editor in Chief of Psychological Science.
13:15-13:45
Lightning talks
Lightning talks on meta-research, ideas and tools. The session will be filled by an open call and chaired by Dr Jessie Baldwin.
Dr Jessie Baldwin, UCL, UK
Dr Jessie Baldwin is a Senior Research Fellow at UCL funded by the Wellcome Trust. Her research focuses on the role of early environmental risk factors for mental health problems, using methods to strengthen causal inference. She is passionate about promoting open science via her roles as the UCL Local Network Co-Lead for the UK Reproducibility Network, organiser of the UCL ReproducibiliTea Journal Club, and Editor for Registered Reports at JCPP Advances. She has contributed guidance on the use of pre-registration in secondary data analysis, and the value of Registered Reports for mental health research.
13:45-14:15
Title to be confirmed
Speaker to be confirmed. Abstract to follow shortly.
14:15-14:45
Statistical practice as scientific exploration
Much has been written on the philosophy of statistics: how can noisy data, mediated by probabilistic models, inform our understanding of the world? Researchers, when using and developing statistical methods, can be seen to be acting as scientists: forming, evaluating, and elaborating provisional theories about the data and processes they are modelling. This perspective has the conceptual value of pointing toward ways that statistical theory can be expanded to incorporate aspects of workflow that were formerly tacit or informal aspects of good practice, and the practical value of motivating tools for improved statistical workflow.
Professor Andrew Gelman, Columbia University, USA
Andrew Gelman has done research on the following topics related to replication and preregistration: type M and S errors, the garden of forking paths, multiverse analysis, the use of multilevel modelling to resolve multiple comparisons issues, and Bayesian model evaluation. He has applied these ideas to a wide range of applications in the social and natural sciences.
14:45-15:15
Break
15:15-15:45
The epistemic benefits of registered reports and policy opportunities
Many arguments for the benefits of registered reports focus on the way in which they reduce the incentives for questionable research practices (QRPs) such as HARKing and p-hacking. In the context of the prediction vs. accommodation debate, I will discuss why it is important to reduce QRPs. I also want to bring additional possible benefits of registered reports into focus. The first is that accepting registered report proposals should include peer review of detailed methodologies, and the timing of such review can be particularly beneficial: instead of reviewing a paper when the work is already completed, peer review could improve the methodologies before the work is done, thus strengthening the evidential warrant for whatever results emerge. Registered reports would also allow for the publication of a broader swath of scientific practice, so scientists can know what has not worked. In addition, if registered reports became the norm, it would allow for better tracking of what scientists are attempting and pursuing, allowing for clearer assessments of the range of research efforts in a field. Finally, peer-reviewed methodologies accepted for publication could be tied to funding pools: if a proposal for research passes through peer review for a registered report publication, the proposal would meet the minimum standards for a funding lottery system (which has other benefits for science). In short, registered reports could change the way science is pursued in a number of salutary ways if we made accompanying policy shifts to take advantage of them.
Professor Heather Douglas, Michigan State University, USA
Heather Douglas is a philosopher of science who works on the relationships among science, values, and democratic publics. She is an Associate Professor in the Department of Philosophy at Michigan State University, an American Association for the Advancement of Science (AAAS) fellow, and was Senior Visiting Fellow at the Center for Philosophy of Science at the University of Pittsburgh (2021-2022). She is the author of 'Science, Policy, and the Value-Free Ideal' (2009) and 'The Rightful Place of Science: Science, Values, and Democracy' (2021), and editor of the book series 'Science, Values, and the Public' for University of Pittsburgh Press.
15:45-16:15
Preregistration will not improve our theories
Proponents of preregistration argue that, among other benefits, it improves the diagnosticity of statistical tests. In the strong version of this argument, preregistration does this by solving statistical problems, such as inflated family-wise error rates. In the weak version, it nudges people to think more deeply about their theories, methods, and analyses. We argue against both: the diagnosticity of statistical tests depends entirely on how well statistical models map onto underlying theories, and so improving statistical techniques does little to improve theories when the mapping is weak. There is also little reason to expect that preregistration will spontaneously help researchers to develop better theories (and, hence, better methods and analyses).
Professor Chris Donkin, LMU Munich, Germany
Chris Donkin is a cognitive psychologist at LMU Munich. He is interested in understanding how people make decisions, how they remember things, and how they produce explanations of the world. He uses a combination of experiments and computational models to that end. In recent years, he has grown interested in how people create knowledge, and what it is that makes this process effective. This, in turn, has led to a lot of opinions on which philosophies of science do and do not make sense.
16:15-17:00
Response and discussion
Speakers from the session will respond to each other's talks, followed by an open discussion.
Professor Andrew Gelman, Columbia University, USA
Professor Chris Donkin, LMU Munich, Germany
Professor Isabelle Boutron, Université Paris Cité, France
17:00-18:00
Poster session
Chair
Professor Eric-Jan Wagenmakers, University of Amsterdam, the Netherlands
Dr Eric-Jan ("EJ") Wagenmakers is Professor in Bayesian Methodology at the Psychological Methods Unit of the University of Amsterdam. His main research interest is Bayes factor hypothesis testing in the style of Sir Harold Jeffreys. Wagenmakers's lab spearheads the development of the JASP open-source software program for statistical analyses. Wagenmakers is also a strong advocate of Open Science and the preregistration of analysis plans.
09:00-09:30
The epistemic status of pre-registration
We have witnessed the emergence of a new epistemic norm: pre-registration. On what basis do epistemic norms get their normative status? That is, when some rule says that scientists should do x, where does that 'should' come from? Many of us are implicitly committed to epistemic consequentialism, which holds that epistemic norms are warranted if and only if they are truth-conducive. On this view, pre-registration comes out looking pretty good. The epistemic benefits of pre-registration (roughly, a decrease in false positives due to the mitigation of publication bias and p-hacking) plausibly outweigh, in some contexts, the epistemic costs (roughly, an increase in false negatives due to constraints on researcher degrees of freedom). This is often but not always so: for example, all the great discoveries in medicine predate pre-registration, yet the corrupted context of current clinical research clearly favours pre-registration. Epistemic non-consequentialism, on the other hand, holds that epistemic norms are warranted to the extent that they manifest and promote epistemic agency and responsibility. I will argue for epistemic non-consequentialism. On this view, pre-registration comes out looking a little less shiny. If scientists were transparent about their methods and appropriately cautious with their inferences, pre-registration would be unnecessary. Yet many scientists lack such transparency and epistemic humility, particularly in domains in which incentives are corrupted by fame and money, and therefore pre-registration is helpful. While some domains of science today need honesty-enforcement mechanisms, this is a fact to be greeted with a sigh rather than a smile.
Professor Jacob Stegenga, University of Cambridge, UK
Jacob Stegenga is a Professor in the Department of History and Philosophy of Science at the University of Cambridge. He has published widely on fundamental topics in reasoning and rationality and on philosophical problems in medicine and biology. Prior to joining Cambridge he taught in the United States and Canada, and he received his PhD from the University of California, San Diego. He is the author of 'Medical Nihilism' and 'Care and Cure: An Introduction to Philosophy of Medicine', and he is currently writing a book tentatively titled 'Heart of Science'. During the academic year 2023–24 Jacob is a Senior Fellow at Leibniz University, Hannover.
09:30-10:00
Preregistration: Panacea or proxy?
The purpose of preregistration is to reduce the researcher's degrees of freedom that might otherwise compromise analysis and interpretation of the results. The idea is that by preregistering hypotheses, methodology, a sampling plan, and an analysis plan, inconvenient outcomes cannot be addressed by altering hypotheses after the fact or by continuing data collection and exploring different analyses until the results conform to expectations. Professor Lewandowsky argues that it is helpful to disentangle the distinct purposes of preregistration and evaluate each in isolation. Concerning hypotheses, preregistration ensures that theorising cannot be informed by the data being collected by enforcing a strict temporal order between the two. However, temporal order is only a proxy for differentiating between a priori predictions and unimpressive post hoc explanations. Temporal order is entirely irrelevant if the independence of theorising from data collection is ensured in other ways, for example if a computational model is proposed without consideration or knowledge of existing data. Concerning analysis, preregistration ensures that a researcher cannot explore the 'garden of forking paths' by conducting numerous analyses, each involving subtly different choices, and then picking the desired outcome. However, preregistration of a single analysis is entirely arbitrary if multiple justifiable options exist, which creates the risk of missing an interesting pattern that might have been revealed by most or all of the other justifiable options. An alternative to preregistration of a single analysis would therefore involve a 'multiverse' analysis that explores many different forking paths to establish the robustness of a result. Finally, concerning the sampling plan, preregistration ensures that data collection and exclusion of participants are not governed by knowledge of interim results, thus curtailing researchers' ability to shape the outcome through arbitrary decisions. Professor Lewandowsky argues that specification of the sampling plan and exclusion criteria is the most important aspect of preregistration because, unlike hypothesising and analysis, problems that are introduced during data collection cannot be retroactively examined or fixed.
Professor Stephan Lewandowsky, University of Bristol, UK
Stephan Lewandowsky is a cognitive scientist interested in the pressure points between the architecture of online information technologies and human cognition, and the consequences for democracy that arise from those pressure points. He focuses in particular on the persistence of misinformation and spread of 'fake news' in society, including conspiracy theories, and how platform algorithms may contribute to the prevalence of misinformation. He works extensively with policy makers, mainly at the European level, to make democracy more resilient to toxicity online. In 2022 and in 2023, he was identified as a highly cited researcher by Clarivate and he was elected to the German National Academy of Sciences (Leopoldina) in 2022.
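To make the 'garden of forking paths' and the multiverse idea in the abstract above concrete, the sketch below runs the same group comparison under every combination of two defensible analysis choices (an outlier-exclusion rule and a transformation) and reports how the conclusion varies across the resulting universes. It is an illustration only, not any speaker's code: the data, the choice of Welch's t-test, and the specific rules are hypothetical.

```python
# Minimal multiverse-analysis sketch (hypothetical choices and data).
import itertools
import numpy as np
from scipy import stats

# Two sets of defensible analysis choices: how to exclude outliers and
# whether to transform the data. Each combination is one "universe".
EXCLUSION_RULES = {
    "keep all": lambda x: x,
    "drop > 3 SD": lambda x: x[np.abs(x - x.mean()) <= 3 * x.std()],
}
TRANSFORMS = {
    "raw": lambda x: x,
    "log": lambda x: np.log(x),  # assumes positive-valued data
}

def multiverse(group_a, group_b):
    """Run a Welch t-test for every combination of analysis choices."""
    results = []
    for (ex_name, ex), (tr_name, tr) in itertools.product(
            EXCLUSION_RULES.items(), TRANSFORMS.items()):
        a, b = tr(ex(group_a)), tr(ex(group_b))
        t, p = stats.ttest_ind(a, b, equal_var=False)
        results.append((ex_name, tr_name, round(float(t), 2), round(float(p), 3)))
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    a = rng.lognormal(mean=0.00, sigma=0.6, size=80)  # hypothetical data
    b = rng.lognormal(mean=0.15, sigma=0.6, size=80)
    for exclusion, transform, t, p in multiverse(a, b):
        print(f"{exclusion:>11} | {transform:>3} | t = {t:>5} | p = {p}")
```

If the conclusion holds across most or all of the justifiable universes, it is robust to those analytic choices; if it flips, a single preregistered path would have hidden that fragility.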
10:00-10:30
Break
10:30-11:00
The promise and pitfalls of preregistration
Isabelle Boutron will discuss the need for pre-registration of protocols, the implementation of pre-registration in some domains, and how pre-registration could improve research practices. She will also highlight the barriers and limitations of pre-registration.
Professor Isabelle Boutron, Université Paris Cité, France
Isabelle Boutron is Professor of Epidemiology at the Université Paris Cité and head of the Methods Research Team (INSERM, Centre for Research in Epidemiology and Statistics, CRESS). She is director of Cochrane France, co-convenor of the Bias Methods group of Cochrane and a member of the SPIRIT-CONSORT executive committee. Her research focusses on Research on Research, particularly Interventional Research on Research, transparency, distorted reporting (ie spin) and the peer-review process. She studies the methodological issues of assessing interventions (blinding, bias, external validity, complex interventions) and questions methods of evidence synthesis.
11:00-11:30
Exploration versus confirmation, tests versus models, mature versus immature sciences, and the role of preregistration
Progress in science loosely follows a common pattern: scientists observe phenomena, develop measures to objectively assess said phenomena, build theories (models) of these phenomena, then evaluate, test, and refine these models. As such, methodologies generally begin with primarily exploratory approaches (e.g., exploratory data analysis, or EDA) and then progress toward confirmatory approaches (e.g., confirmatory data analysis, or CDA) as the science matures. Unfortunately, immature sciences (e.g., psychology) have a history of borrowing methodologies (e.g., confirmatory data-analytic approaches) from mature sciences long before they are ready to leverage these methodologies. These approaches and their corresponding probability-based statistics (e.g., p-values and confidence intervals) only have probabilistic meaning if one's models are adequately mature. Preregistration, in a sense, was designed to address this problem by forcing researchers to pre-specify which analyses are confirmatory (and, presumably, reflect mature theories). However, without an adequate understanding of the exploratory/confirmatory continuum and how methodologies must adapt to the maturity of one's models, preregistration is insufficient; too many will attempt confirmatory methods prematurely and be disappointed in the results. In this talk, Dr Fife will argue that the standard statistics curriculum has necessarily led to the replication crisis and that building a replicable science will require a re-education that focuses on model building, cumulative research, and a clear understanding of the roles of EDA and CDA.
Dr Dustin Fife, Rowan University, USA
Dustin Fife is an Associate Professor at Rowan University in Glassboro, NJ, United States. He received his PhD at the University of Oklahoma in 2013. His areas of expertise are data visualisation, statistical computing, exploratory data analysis, and statistical education. He is the author of an introductory statistics textbook called 'The Order of the Statistical Jedi: Rights, Rituals, and Responsibilities', is the content creator for the YouTube channel QuantPsych, and has authored articles in American Psychologist, Psychological Methods, Perspectives on Psychological Science, and many others.
11:30-12:15
Response and discussion
Speakers from the session will respond to each other's talks, followed by an open discussion.
Dr Dustin Fife, Rowan University, USA
Professor Heather Douglas, Michigan State University, USA
Professor Jacob Stegenga, University of Cambridge, UK
Professor Stephan Lewandowsky, University of Bristol, UK
Chair
Dr Hilda Bastian, Independent, Australia
Hilda Bastian was a long-time consumer advocate in Australia, whose career turned to studying and analysing research and communicating about it. She is a writer and cartoonist, blogging at PLOS and contributing to The Atlantic, and publishes a newsletter called Living with Evidence. Her research in the last few years focused on factors affecting the validity of systematic reviews after publication. Hilda has worked on PubMed projects at the NIH National Center for Biotechnology Information (NCBI) in the US, and for the Institute for Quality and Efficiency in Healthcare in Germany. She is one of the founders of the Cochrane Collaboration. Hilda is an active Wikipedian and Mastodon enthusiast.
13:15-13:45
Peer Community in Registered Reports: embracing the promises and avoiding the pitfalls of preregistration
Registered Reports are a form of empirical publication, offered by over 350 journals, in which study proposals are peer reviewed, preregistered, and pre-accepted before research is undertaken. By deciding which articles are published based on the question, theory, and methods, Registered Reports offer a remedy for a range of reporting and publication biases. In this talk, Professor Chambers will summarise the progress of a relatively new platform for supporting Registered Reports: the Peer Community in Registered Reports (PCI RR). PCI RR is a non-profit, non-commercial platform that, like the many other PCIs, coordinates the peer review of preprints, in this case specifically for Registered Reports. PCI RR is also joined by a growing fleet of 'PCI RR-friendly' journals that agree to endorse the recommendations of PCI RR without further review, giving authors the power to choose which journal, if any, will publish their manuscript. By reclaiming control of the peer-review process from academic publishers, PCI RR provides a mechanism for ensuring that Registered Reports are made as open, accessible, and rigorous as possible, while also moving toward a future in which journals in their current form become obsolete.
Professor Chris Chambers, Cardiff University, UK
Chris Chambers is a Professor of Cognitive Neuroscience at Cardiff University, UK. The main focus of his work is metascience and the advancement of policy and practice to improve reproducibility and transparency. Together with colleagues, he co-founded initiatives such as Registered Reports, the Peer Community in Registered Reports (PCI RR), the Transparency and Openness Promotion (TOP) guidelines, the Royal Society's accountable replications policy, and the UK Reproducibility Network. He currently serves as a Registered Reports editor at several scientific journals and platforms, including PLOS Biology, Royal Society Open Science and PCI RR. Outside academia, Chris co-hosted the Guardian psychology blog Head Quarters from 2013 to 2018.
13:45-14:15
Title to be confirmed
Dr Anne Scheel's abstract will follow shortly.
14:15-14:45
Preregistration: Known, unknown, and what's next
Preregistration is an interesting enough methodological innovation to generate productive debate about what it is, what it is for, and when it is useful. Some insights are revealed in theoretical discussion about preregistration. Other insights are revealed experientially and empirically by treating preregistration as a product in research and development. To what extent are the core purposes of preregistration adaptable to different types of scholarly inquiry? To what extent does the practice of preregistration benefit different types of scholarly inquiry? How can preregistration be improved to decrease the costs and increase the benefits? What are the boundary conditions for which the cost/benefit trade-off favours not using preregistration? After learning from the other presentations at this meeting, I am hoping to have something sensible to say about questions like these.
Professor Brian Nosek, Center for Open Science, USA
Brian Nosek co-developed the Implicit Association Test, a method that advanced research and public interest in implicit bias. Nosek co-founded three non-profit organisations: Project Implicit, to advance research and education about implicit bias; the Society for the Improvement of Psychological Science, to improve the research culture in his home discipline; and the Center for Open Science, to improve rigor, transparency, integrity, and reproducibility across research disciplines. Nosek is Executive Director of COS and a professor at the University of Virginia. Nosek's research and applied interests are to understand why people and systems produce behaviours that are contrary to intentions and values; to develop, implement, and evaluate solutions to align practices with values; and to improve research credibility and cultures to accelerate progress.
14:45-15:15
Response and discussion
Speakers from the session will respond to each other's talks, followed by an open discussion.
Professor Brian Nosek, Center for Open Science, USA
Professor Chris Chambers, Cardiff University, UK
15:15-15:45
Break
15:45-17:00
Meeting reflection panel
The five chairs of the meeting will reflect on the discussions, highlighting common ground, challenges and future directions. This will be followed by an open discussion, chaired by Dr Sophia Crüwell.
Dr Jessie Baldwin, UCL, UK
Dr Evan Mayo-Wilson, University of North Carolina, USA
Professor Simine Vazire, University of Melbourne, Australia
Dr Hilda Bastian, Independent, Australia
Professor Eric-Jan Wagenmakers, University of Amsterdam, the Netherlands