Workshops

Open Science and Open Data

Instructor: Kai Horstmann

The workshop Open Science and Open Data first gives a brief introduction to Open Science in general and to the benefits of data sharing and of pre-registering study design, data processing, data analyses, and statistical inferences. The second half of the workshop then focuses on implementing pre-registration using the Open Science Framework. We will then discuss how these Open Science practices can increase the replicability and reproducibility of published research.

Statistical Bias and p-Hacking

Instructor: Frank Renkewitz

In the first part of this workshop we will collect common examples of questionable research practices and discuss when and why these practices should be considered violations of scientific norms, and under which circumstances they may appear justifiable. We will then train our own p-hacking skills and try out different questionable research practices to squeeze statistical significance out of pure noise. Finally, we will review and discuss evidence on how widespread these practices are in several areas of research.
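
As a preview of this exercise, the short R simulation below (not part of the workshop materials; all numbers are made up) shows how one questionable research practice, optional stopping, inflates the false positive rate even when both groups are drawn from pure noise:

    ## Illustrative R sketch (not workshop material; all numbers are made up):
    ## how optional stopping inflates the false positive rate on pure noise.
    set.seed(1)
    n_sims  <- 2000   # simulated "studies", each run on pure noise
    n_start <- 20     # planned sample size per group
    n_max   <- 40     # maximum sample size if we keep adding participants
    alpha   <- 0.05

    sig_honest <- sig_hacked <- logical(n_sims)
    for (i in seq_len(n_sims)) {
      g1 <- rnorm(n_max)   # both groups come from the same distribution,
      g2 <- rnorm(n_max)   # so every "significant" result is a false positive

      # honest analysis: one pre-specified test at the planned sample size
      sig_honest[i] <- t.test(g1[1:n_start], g2[1:n_start])$p.value < alpha

      # optional stopping: peek after every added participant and stop
      # as soon as the test becomes significant
      for (n in n_start:n_max) {
        if (t.test(g1[1:n], g2[1:n])$p.value < alpha) {
          sig_hacked[i] <- TRUE
          break
        }
      }
    }
    mean(sig_honest)  # stays close to the nominal 5%
    mean(sig_hacked)  # clearly above 5%, although there is no effect at all

Combining optional stopping with further practices, such as testing several dependent variables or excluding outliers only after looking at the results, inflates the false positive rate even more.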
The second part of the workshop will focus on the consequences of p-hacking and publication bias: How may these problems affect the proportion of false positives in the literature, the validity of effect size estimates (and of other meta-analytical results), or our ability to identify moderators of established effects? To answer these questions, I will review the results of several simulation studies.
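
One of these consequences can be previewed in a few lines of R. The sketch below (again only an illustration with made-up numbers, not one of the simulation studies discussed in the workshop) shows that when a small true effect is studied repeatedly but only significant results are "published", the published effect sizes overestimate the true effect:

    ## Illustrative sketch: publication bias inflates published effect sizes.
    set.seed(2)
    n_sims <- 5000   # independent studies of the same small effect
    n      <- 30     # participants per group
    d_true <- 0.2    # true standardized effect size (Cohen's d)

    res <- replicate(n_sims, {
      g1 <- rnorm(n, mean = d_true)   # "treatment" group (SD = 1)
      g2 <- rnorm(n, mean = 0)        # "control" group
      d  <- (mean(g1) - mean(g2)) / sqrt((var(g1) + var(g2)) / 2)
      p  <- t.test(g1, g2)$p.value
      c(d = d, p = p)
    })

    mean(res["d", ])                  # all studies: close to the true 0.2
    mean(res["d", res["p", ] < .05])  # only the significant ones: much larger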
The last part of the workshop will cover different ways to uncover p-hacking and publication biases. How well do statistical methods that are meant to detect and correct these problems perform when applied to collections of evidence? Are there characteristics of research papers that suggest that p-hacking might have been involved? The central aim here is to identify indicators that help distinguish more reliable research findings from less trustworthy ones.
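
One idea underlying several such detection methods (p-curve analysis is a well-known example) is that the distribution of the significant p-values themselves is diagnostic: a genuine effect produces mostly very small p-values, whereas selectively reported noise produces significant p-values that are roughly uniform between 0 and .05. The sketch below illustrates only this reasoning, not any particular method:

    ## Illustrative sketch: what significant p-values look like with and
    ## without a true effect (the intuition behind p-curve-style tests).
    set.seed(3)
    n_sims <- 5000
    n      <- 30   # participants per group

    p_effect <- replicate(n_sims, t.test(rnorm(n, mean = 0.5), rnorm(n))$p.value)
    p_noise  <- replicate(n_sims, t.test(rnorm(n),             rnorm(n))$p.value)

    # keep only the "publishable" (significant) p-values
    sig_effect <- p_effect[p_effect < .05]
    sig_noise  <- p_noise[p_noise < .05]

    # share of significant p-values that fall below .01
    mean(sig_effect < .01)  # around one half here: strong right skew
    mean(sig_noise  < .01)  # close to 20% (= .01 / .05): roughly uniform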

Project Planning: From Experiment Design to Manuscript and Data Release

Instructor: Shravan Vasishth

A recent analysis of publicly released data accompanying published papers in Cognition showed that not all published numbers could be reproduced, even though the data and code were available (https://royalsocietypublishing.org/doi/full/10.1098/rsos.180448). The authors state that "...suboptimal data curation, unclear analysis specification and reporting errors can impede analytic reproducibility, undermining the utility of data sharing and the credibility of scientific findings." In this workshop, I will suggest one way to minimize the chances of producing irreproducible results, focusing on repeated measures 2x2 factorial designs as a case study. The steps I will discuss are:

- Experiment design, and planning sample size using simulated data (a short illustration of this step follows after the materials link below)
- Defining the analysis plan using simulated data
- Checking that your experiment software actually collects the data you need
- Once data are collected, visualizing and summarizing the data
- Creating an R package to document and release your data and analyses
- Code refactoring
- Integrating the data analysis into the manuscript
- Releasing data and code: a suggested checklist

Materials for the workshop are available here:
https://vasishth.github.io/MPILeipzig2019/
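
As a preview of the first step, here is one way to plan sample size by simulation for a 2x2 repeated measures design. This sketch is only an illustration, not code from the workshop materials linked above: the effect sizes and variance components are invented, and it uses the lme4/lmerTest packages, one common choice for fitting repeated measures data in R.

    ## Illustrative sketch: simulation-based power for the interaction in a
    ## 2x2 repeated measures design (all numbers below are assumptions).
    library(lmerTest)  # loads lme4 and adds p-values to lmer() summaries

    simulate_power <- function(n_subj, n_sims = 200,
                               beta = c(400, 20, 20, 30),  # intercept, A, B, A:B
                               sd_subj = 100, sd_resid = 50) {
      # one observation per subject in each cell; A and B sum-coded as +/- 0.5
      design <- expand.grid(subj = factor(seq_len(n_subj)),
                            A = c(-0.5, 0.5), B = c(-0.5, 0.5))
      X <- model.matrix(~ A * B, design)

      p_interaction <- replicate(n_sims, {
        subj_int <- rnorm(n_subj, 0, sd_subj)[design$subj]  # by-subject intercepts
        design$y <- as.vector(X %*% beta) + subj_int +
                    rnorm(nrow(design), 0, sd_resid)
        fit <- lmer(y ~ A * B + (1 | subj), data = design)
        summary(fit)$coefficients["A:B", "Pr(>|t|)"]
      })
      mean(p_interaction < 0.05)  # estimated power for the interaction
    }

    # estimated power at a few candidate sample sizes
    # (increase n_sims for more stable estimates)
    sapply(c(40, 60, 80), simulate_power)

Simulated data of this kind can also be used to write down the analysis plan and to test the analysis code before a single participant has been run.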

The Replication Crisis: Problems and Solutions

Instructor: Priya Silverstein

In the first half of this workshop we will cover what the ‘replication crisis’ is, how science got to this point, and what solutions can be implemented at different levels.
In the second half we will cover how to identify a good opportunity for a replication study, and the practical steps for conducting one.



Good Scientific Practice – What does it mean in everyday practice?

Instructors: Florentine Frantz (Workshop 1), Michaela Scheriau (Workshop 2)

Let’s face it: good scientific practice is not a black-and-white issue. Even if one memorizes all the rules for research integrity and ethics, there will always be moments when it is not straightforwardly clear what to do. With new methodologies and technologies arising, complex cooperation arrangements, and competing regimes of valuation in contemporary academia, researchers are constantly asked to decide what it means to do ‘good’ work. In this spirit, the workshop will not simply re-articulate presumably universal scientific norms, but rather aims to discuss everyday encounters with research integrity. We will use the card-based group discussion method RESPONSE_ABILITY to talk about transgressions of good scientific practice, dilemma situations in research and publishing, the conditions that matter for doing research, and how we would like to change them. Researchers should learn how to ask the right questions and to be reflexive about their own research practices.