Pitfalls in measuring corruption with citizen surveys

Citizens’ survey responses can be unreliable and influenced by perceived expectations. Researchers must understand this behaviour, clearly set the context, and define key terms to improve corruption measurement.
11 November 2024
This is the third post in a blog series on anti-corruption measurement tools and their applications (series editors: Sofie Arjon Schütte and Joseph Pozsgai-Alvarez).

In Special Eurobarometer 523 on corruption, 89% of Spanish respondents said they believed corruption to be ‘fairly’ or ‘very’ widespread in their country. Yet when asked about their own lives, only about 1% reported direct experience of the problem. How should we understand this discrepancy? Recent research tells us that both numbers might be wrong.

Survey-based corruption measures

Measuring corruption is difficult: since taking or offering a bribe is illegal, those involved have strong incentives to hide the transaction. Researchers often resort to more indirect measures.

One popular measure is survey-based: citizens are asked about their perceptions of corruption. For example: ‘In your opinion, over the last year, has the level of corruption in this country increased, decreased, or stayed the same?’; and ‘In your opinion, about how many politicians in this country are involved in corruption?’ (both from Special Eurobarometer 523).

Citizen surveys have the advantage of tapping into the views of ordinary people.

Traditional measures, such as Transparency International’s Corruption Perceptions Index (CPI), rely largely on the opinions of elite groups. Citizen surveys have the advantage of tapping into the views of ordinary people – the focus of many important questions about the impact of corruption, such as whether it erodes institutional trust or influences political participation.

Are people providing informative responses to survey questions?

Little is known about the quality of the information gathered with survey-based measures. But decades of general research into how respondents answer surveys suggest that researchers should consider whether citizens are providing the right information.

Researchers should consider whether citizens are providing the right information.

One issue is that respondents often try to be accommodating by answering every survey question, even if they know very little about the topic. So, when asked about their perception of bribery in the public prosecution service (another question in the Eurobarometer on corruption), a respondent will use whatever information, images, or narratives come to mind to produce an answer. This could be a recent news story about a poorly handled legal case, or talk of an increase in crime. Such considerations are often not what the researcher had in mind – they were probably looking for a direct estimate of shady dealings in the sector.

Context can sway interpretation

A related problem is that many common perception questions are vague. When asked whether corruption has increased in the country over the past year, it is difficult to know how respondents interpret the question. ‘Corruption’ is a term that evokes emotion, and ordinary citizens are unlikely to have the World Bank’s formal definition at hand. The researcher simply does not know what the respondent was thinking of – a disliked politician, perhaps, or a recent problem in their local school administration.

When survey questions are open to interpretation, responses are malleable and easily influenced by context.

Research on survey responses is clear: when respondents answer without much information to draw on, and when survey questions are open to interpretation, responses are malleable and easily influenced by context. A recent study finds the same for corruption perception questions.

Political bias can creep in

In that study, when respondents were asked upfront whether corruption had increased, and whether corruption is common in politics, those who stated later in the survey that they support the current government gave more optimistic answers. This is perhaps not surprising; a belief that corruption has decreased could be the very reason they support the government. But when some respondents were randomly assigned a different order – answering questions about political affiliation first – the gap between government and opposition supporters became much more dramatic, widening by more than half a step on a five-point scale, an increase of roughly 50%.

Political affiliation can provide the frame of reference.

It seems that answering political questions upfront changed how respondents interpreted the corruption questions: their political affiliation provided the frame of reference. This is clear evidence of the kind of malleability to expect when respondents are unsure how to interpret a question.
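To make this concrete, here is a minimal sketch, in Python, of how such a question-order effect could be quantified. The data are synthetic, and every variable name and effect size is an illustrative assumption – this is not the design or the results of the study discussed above.

```python
# Minimal sketch: measuring how randomized question order changes the
# partisan gap in corruption perceptions. All values are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 2000

politics_first = rng.integers(0, 2, n).astype(bool)  # randomized order
gov_supporter = rng.integers(0, 2, n).astype(bool)   # stated affiliation

# Synthetic 1-5 perception scores: supporters answer more optimistically,
# and (by assumption) the gap widens when political questions come first.
partisan_gap = np.where(politics_first, 1.5, 1.0)
raw = rng.normal(3.5, 0.8, n) - partisan_gap * gov_supporter
perception = np.clip(np.round(raw), 1, 5)

df = pd.DataFrame({
    "politics_first": politics_first,
    "gov_supporter": gov_supporter,
    "perception": perception,
})

# Mean perception in each cell, then the supporter-vs-opposition gap
# within each randomized order arm.
means = df.groupby(["politics_first", "gov_supporter"])["perception"].mean()
gap_corruption_first = means.loc[(False, False)] - means.loc[(False, True)]
gap_politics_first = means.loc[(True, False)] - means.loc[(True, True)]

print(f"Gap, corruption questions first: {gap_corruption_first:.2f}")
print(f"Gap, political questions first:  {gap_politics_first:.2f}")
print(f"Order effect on the gap: {gap_politics_first - gap_corruption_first:.2f}")
```

Because the question order is randomly assigned, any systematic difference between the two gaps can be attributed to the order itself rather than to who the respondents are.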

People shy away from sensitive issues

Could asking about respondents’ direct experiences of corruption generate more reliable responses? Such experience-based questions are equally common in citizen surveys. A typical phrasing is: ‘In the past 12 months, were you at any point asked to pay a bribe to a public official?’ In theory, this format should solve some of the problems above. Research shows that citizens generally have a good understanding of what constitutes a ‘bribe’, and that this understanding is similar across countries. However, the same understanding is also a significant obstacle to obtaining high-quality data: people know that paying bribes is illegal, and this makes the question sensitive.

People often underreport experiences and behaviours that are perceived to be sensitive.

Another body of survey research shows that people often dramatically underreport experiences and behaviours that are perceived to be sensitive. Corruption researchers have shown that when the question format gives respondents a stronger sense of anonymity, the number of reported experiences of corruption increases – sometimes dramatically. This suggests that many respondents answering standard questions about direct experiences of bribery choose an incorrect negative response because it feels more socially acceptable.

Lessons for researchers and survey designers

Where does this leave researchers and practitioners interested in measuring corruption? Should we give up on obtaining information about the problem from ordinary citizens? A detailed discussion of solutions is beyond the scope of this text, but a few pointers for survey designers are warranted.

First, many of the problems stem from the belief that it is easy to obtain the information researchers want from citizens – if you need estimates of bribery in the public school system, just ask. But survey designers need to think about how citizens interpret the question, and what type of information they might draw on for their response. Much could be gained by providing appropriate context and defining key terms such as ‘corruption’. This will make it difficult to ask a large number of questions, since the preamble to each will be longer. But if it improves data quality, this is a trade-off researchers should be willing to make.

Survey designers need to think about how citizens interpret the question.

Second, there are several methods designed to obtain honest responses to sensitive questions – such as the so-called list experiment, sketched below. Each comes with drawbacks, however: providing respondents with anonymity usually introduces more ‘noise’ into the data, which demands larger samples to retain statistical precision. When questions are truly sensitive, though, the trade-off will often be worthwhile.
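To illustrate the logic, here is a minimal sketch of the list experiment’s standard difference-in-means estimator, again with synthetic data; the three innocuous items and the assumed 15% bribery rate are purely hypothetical.

```python
# Minimal sketch of a list experiment: respondents report only how many
# items on a list apply to them, never which ones. Synthetic data.
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Random assignment: the control list has 3 innocuous items; the
# treatment list has the same 3 items plus the sensitive bribery item.
treated = rng.integers(0, 2, n).astype(bool)

innocuous_count = rng.binomial(3, 0.4, n)  # innocuous items that apply
paid_bribe = rng.random(n) < 0.15          # assumed true prevalence
reported = innocuous_count + (treated & paid_bribe)

# No individual count reveals the sensitive behaviour, but in
# expectation the difference in mean counts between the two arms
# equals the prevalence of the sensitive item.
estimate = reported[treated].mean() - reported[~treated].mean()
print(f"Estimated share who paid a bribe: {estimate:.3f}")
```

The anonymity comes at a price: the innocuous items add variance to the estimate, which is why list experiments typically require much larger samples than a direct question would.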

Drafting robust survey questions is hard; asking about corruption is harder still. Survey questions on the topic continue to proliferate, as seen with DATACORR (the database is the focus of an upcoming U4 blog post). This makes it all the more important to understand what knowledge can be extracted from this wealth of data. A first step is to raise awareness of the common pitfalls.

Anti-corruption measurement series

This blog series looks at recent anti-corruption measurement and assessment tools, and how they have been applied in practice at regional or global level, particularly in development programming.

Contributors include leading measurement, evaluation, and corruption experts invited by U4 to share up-to-date insights during 2024–2025. (Series editors are Sofie Arjon Schütte and Joseph Pozsgai-Alvarez).

Blog posts in the series

  1. One year on: The Vienna Principles for the measurement of corruption (Elizabeth David-Barrett) 2 Sep 2024
  2. Measuring progress on Sustainable Development Goal 16.5 (Bonnie J. Palifka) 1 Oct 2024
  3. (This post) Pitfalls in measuring corruption with citizen surveys (Mattias Agerberg) 11 Nov 2024
  4. (Forthcoming, November 2024) Decoding corruption: The DATACORR database for better survey questions (Luís de Sousa)


About the author

Mattias Agerberg

Mattias Agerberg is an Associate Professor of Political Science at the University of Gothenburg. His research concerns corruption, political behaviour, and survey methodology.

Disclaimer

All views in this text are the author(s)’, and may differ from the U4 partner agencies’ policies.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International licence (CC BY-NC-ND 4.0).
