Instructor:
Frank Renkewitz
In the first part of this workshop, we will collect common examples of questionable research practices and discuss when and why these practices should be considered violations of scientific norms, and under which circumstances they may appear justifiable. We will then train our own p-hacking skills and try out different questionable research practices to squeeze statistical significance out of pure noise. Finally, we will review and discuss evidence on how widespread these practices are in several areas of research.
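As a small taste of that exercise, the sketch below (in Python with NumPy/SciPy; not actual workshop material, and all numbers are illustrative) shows one such practice, optional stopping: peeking at the data repeatedly and stopping as soon as a test turns significant, even though there is no effect at all.

    # Minimal sketch: optional stopping on pure noise inflates false positives.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_sims, n_start, n_max, alpha = 5_000, 20, 100, 0.05

    false_positives = 0
    for _ in range(n_sims):
        # Two groups drawn from the same distribution: the true effect is zero.
        a = rng.normal(size=n_max)
        b = rng.normal(size=n_max)
        # "Peek" after every 10 additional observations per group and stop at significance.
        for n in range(n_start, n_max + 1, 10):
            p = stats.ttest_ind(a[:n], b[:n]).pvalue
            if p < alpha:
                false_positives += 1
                break

    print(f"False-positive rate with optional stopping: {false_positives / n_sims:.3f}")
    # Typically well above the nominal 5%, despite the absence of any true effect.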
The second part of the workshop will focus on the consequences of p-hacking and publication biases: How might these problems affect the proportion of false positives in the literature, the validity of effect size estimates (and other meta-analytic results), or our ability to identify moderators of established effects? To answer these questions, I will review the results of several simulation studies.
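To give a sense of the kind of question such simulations address, the following sketch (an assumed Python setup for illustration, not one of the studies reviewed in the workshop) shows how publishing only significant results inflates the average reported effect size well above the true effect.

    # Minimal sketch: selective publication of significant results inflates effect sizes.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    true_d, n_per_group, n_studies = 0.2, 30, 10_000

    published_d = []
    for _ in range(n_studies):
        a = rng.normal(true_d, 1.0, n_per_group)   # treatment group
        b = rng.normal(0.0, 1.0, n_per_group)      # control group
        t, p = stats.ttest_ind(a, b)
        d = (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        if p < 0.05 and t > 0:                     # only "significant" studies get published
            published_d.append(d)

    print(f"True effect d = {true_d}")
    print(f"Mean published d = {np.mean(published_d):.2f}")   # substantially inflated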
The last part of the workshop will cover different ways to uncover p-hacking and publication biases. How well do statistical methods meant to detect and correct these problems in collections of evidence actually perform? Are there characteristics of research papers that suggest that p-hacking might have been involved? The central aim here is to identify indicators that help to distinguish more reliable research findings from less trustworthy ones.
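One widely cited indicator of this kind is the distribution of significant p-values across a set of studies: under a true effect they pile up near zero, whereas p-hacked null effects tend to cluster just below .05. The sketch below is a deliberately simplified version of such a check (the p-values and the specific binomial test are assumptions for illustration; the workshop may cover different or more refined methods).

    # Minimal sketch of a simplified p-curve-style check.
    import numpy as np
    from scipy import stats

    # Hypothetical significant p-values reported across a set of studies.
    p_values = np.array([0.004, 0.011, 0.021, 0.038, 0.042, 0.046, 0.049])

    # Under a true effect, significant p-values are right-skewed (most below .025);
    # under p-hacking of a null effect, they tend to cluster just below .05.
    below_025 = int((p_values < 0.025).sum())
    result = stats.binomtest(below_025, n=len(p_values), p=0.5, alternative="greater")
    print(f"{below_025}/{len(p_values)} p-values below .025, "
          f"binomial test p = {result.pvalue:.2f}")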