Underreporting in Psychology Experiments: Evidence From a Study Registry

Annie Franco, Neil Malhotra, Gabor Simonovits

Research output: Contribution to journal › Article › peer-review

Abstract

Many scholars have raised concerns about the credibility of empirical findings in psychology, arguing that the proportion of false positives reported in the published literature dramatically exceeds the rate implied by standard significance levels. A major contributor to false positives is the practice of reporting only a subset of the potentially relevant statistical analyses pertaining to a research project. This study is the first to provide direct evidence of selective underreporting in psychology experiments. To overcome the problem that the complete experimental design and full set of measured variables are not accessible for most published research, we identify a population of published psychology experiments from a competitive grant program for which questionnaires and data are made publicly available because of an institutional rule. We find that about 40% of studies fail to fully report all experimental conditions and about 70% of studies do not report all outcome variables included in the questionnaire. Reported effect sizes are about twice as large as unreported effect sizes and are about 3 times more likely to be statistically significant.

Original language: English
Pages (from-to): 8-12
Number of pages: 5
Journal: Social Psychological and Personality Science
Volume: 7
Issue number: 1
DOIs
State: Published - 1 Jan 2016
Externally published: Yes

Keywords

  • disclosure
  • experiments
  • multiple comparisons
  • research methods
  • research transparency
