Underreporting in political science survey experiments: Comparing questionnaires to published results

Annie Franco, Neil Malhotra, Gabor Simonovits

Research output: Contribution to journal › Article › peer-review

Abstract

The accuracy of published findings is compromised when researchers fail to report and adjust for multiple testing. Preregistration of studies and the requirement of preanalysis plans for publication are two proposed solutions to combat this problem. Some have raised concerns that such changes in research practice may hinder inductive learning. However, without knowing the extent of underreporting, it is difficult to assess the costs and benefits of institutional reforms. This paper examines published survey experiments conducted as part of the Time-sharing Experiments in the Social Sciences program, whose questionnaires are made publicly available, allowing us to compare planned design features against what is reported in published research. We find that: (1) 30% of papers report fewer experimental conditions in the published paper than in the questionnaire; (2) roughly 60% of papers report fewer outcome variables than are listed in the questionnaire; and (3) about 80% of papers fail to report all experimental conditions and outcomes. These findings suggest that published statistical tests understate the probability of Type I errors.
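To see why unreported tests understate the false-positive risk, consider the familywise error rate. A minimal sketch, assuming m independent tests each run at a common level α (an illustration of the general point, not a calculation from the paper):

```latex
% Familywise error rate for m independent tests at level \alpha
% (independence is an illustrative assumption; the tests in a
% given survey experiment need not be independent):
\[
  \Pr(\text{at least one Type I error}) = 1 - (1 - \alpha)^{m}
\]
% With \alpha = 0.05:
%   m = 1:  1 - 0.95^{1}  = 0.05
%   m = 5:  1 - 0.95^{5}  \approx 0.23
%   m = 10: 1 - 0.95^{10} \approx 0.40
```

Under these assumptions, a study that runs ten tests but reports only the significant ones carries roughly a 40% chance of at least one false positive, well above the nominal 5% a reader would infer from the reported tests alone.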

Original language: English
Article number: mpv006
Pages (from-to): 306-312
Number of pages: 7
Journal: Political Analysis
Volume: 23
Issue number: 2
DOIs
State: Published - 1 Apr 2015
Externally published: Yes
