Abstract
Expert evaluations about countries form the backbone of comparative political research. It is reasonable to assume that such respondents, no matter the region they specialize in, have a comparable understanding of the phenomena tapped by expert surveys. This is necessary to obtain results that can be compared across countries, which is the fundamental goal of these measurement activities. We empirically test this assumption using measurement invariance techniques, which have not previously been applied to expert surveys. Used most often to test the cross-cultural validity and translation effects of public opinion scales, measurement invariance tests evaluate the comparability of scale items across groups. We apply them to the Perceptions of Electoral Integrity (PEI) dataset. Our findings suggest that cross-regional comparability fails for all eleven dimensions identified in PEI. Results indicate which items remain comparable, at least across most regions, and point to the need for more rigorous procedures to develop expert survey questions.
| Original language | English |
| --- | --- |
| Pages (from-to) | 599-604 |
| Number of pages | 6 |
| Journal | Political Analysis |
| Volume | 27 |
| Issue number | 4 |
| State | Published - Oct 2019 |
Keywords
- electoral integrity
- expert surveys
- measurement invariance
- survey design
Datasets
- Replication Data for: Castanho Silva and Littvay, "Comparative Research Is Harder than We Thought: Regional Differences in Experts' Understanding of Electoral Integrity Questions"
  Castanho Silva, B. (Creator), Littvay, L. (Creator), Castanho Silva, B. (Contributor) & Analysis, P. (Contributor), Harvard Dataverse, 2019
  DOI: 10.7910/dvn/hg2caj