
Views from the Center


Early in the COVID-19 pandemic, researchers grappled with an ethical and methodological dilemma: should they integrate measures of violence against women and children into remote data collection efforts—and if so, what logistical protocols were required to safeguard participants against harm?

Despite decades of good-practice guidelines, institutional ethics boards are often ill-equipped to advise or make determinations on violence data collection, and this is especially true for less traditional remote surveys. Researchers may therefore end up deciding what to ask—and what ethical protocols to put in place—based on their own experience, their knowledge of the study population and setting, and their comfort level with including sensitive questions.

While examples of how to adapt to and overcome ethical challenges in remote surveys exist in high-income settings, such efforts in low-income settings (at least pre-COVID-19) were few and far between. For example, in phone surveys, how can interviewers ensure the privacy of participants’ responses—out of earshot of possible perpetrators? Would they be able to “read” verbal cues of distress and discomfort in answering questions? Are referral services available amidst COVID-19 service closures, and how can adverse event responses function without teams on the ground? These questions have been the topic of debate, and researchers have offered both opinions and practical considerations as the pandemic and know-how for remote surveys have evolved (see e.g. here and here for violence against women, here and here for violence against children).

Between a rock and a hard place: Examples of what to ask when you cannot ask about violence

In many cases, researchers have decided not to collect direct measures of violence, judging that the risks to participants outweigh the benefits in knowledge gained. Instead, some have opted to ask indirect questions. While violence experts agree that indirect questions cannot be equated with gold-standard measures, opinion diverges as to their utility.

What types of indirect measures have been used during COVID-19 remote data collection? Here is a summary of what we’ve seen so far in the Global Evidence Tracker:

Proxy measures

Proxy measures include indicators related to violence (e.g. experience of family conflict, anger, quarreling, and family harmony), proximate behavioral factors (e.g. experiencing fear, feeling unsafe, and excessive partner alcohol use), and consequences of violence (e.g. injury). Similar questions have been collected for years in demographic and health surveys, and are routinely used in triage and risk assessment tools for social services. For example, a phone survey implemented by GAGE focused on adolescents in multiple countries included questions on perceived increases in anger, yelling, and arguments. A phone survey conducted in Indonesia during COVID-19 collected measures of injury without linking them explicitly to violence. In addition, it asked general questions about conflict in the household and feelings of safety at home and in the community (caveat: I co-developed these modules). For these questions, large proportions of the sample (18 to 46 percent) reported increases during COVID-19.

Community or “neighborhood” measures

Some studies have included measures of perceptions around community-level occurrence of (or increase in) violence, shifting the focus from individual reporting to proxy reporting. For example, in Uganda, a study collecting perceptions of the number of physical intimate partner violence incidents per month occurring among men in the village showed an average increase of 0.6 episodes post-lockdown. In Thailand, a UNICEF-supported study asked about perceptions of domestic violence in the community—with 12 percent of participants reporting a perceived escalation. The strategy of asking about community or neighborhood measures is not new—asking about one’s own experiences coupled with sisters’ or neighbors’ experiences has been used in the past to estimate the prevalence of violence against women, primarily in conflict-affected settings (including in Liberia and Uganda and across multiple humanitarian settings).

Vignettes

Several studies have used vignettes depicting situations of violence against adolescents and women experienced by fictional characters, asking participants to indicate how common they felt these scenarios were. For example, the GAGE survey asked respondents to imagine a girl/boy in the community and report their perceptions of the likelihood of the girl/boy experiencing different types of violence and its potential increase (or decrease) during COVID-19—including intimate partner physical and sexual violence. A similar strategy was used in the aforementioned Indonesia phone survey, which asked about the likelihood of changes in violence against children and women in the household for fictional characters, as well as violence in the community.

List randomization

List randomization, also known as the “item count” technique, is a survey experiment that elicits responses to a sensitive question (e.g. violence) by masking it within a list of other behaviors. This method was applied to violence questions in multi-topic surveys pre-COVID-19 and has been shown to increase reporting in different settings (including in Burkina Faso, Nigeria, and Rwanda, among others). However, the method also has limitations, including a loss of precision in estimates and an inability to capture all behavioral aspects of violence typologies. During COVID-19, list experiments have been used to elicit diverse violence-related outcomes, including sexual violence and severe physical violence against women and children via an online survey in Germany, physical violence against youth in a phone survey follow-up of a youth empowerment program in Bolivia, and domestic violence within a phone survey follow-up of the Young Lives cohorts in Peru and India. The Young Lives survey showed increases of 8-12 percent during lockdowns.
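The logic of the item-count technique can be sketched in a few lines of code. In a minimal illustration with simulated data (all numbers are hypothetical, not drawn from any of the studies above), a control group counts “yes” answers to a list of innocuous items, a treatment group counts the same list plus the sensitive item, and the difference in mean counts estimates the prevalence of the sensitive behavior without any individual disclosing it directly:

```python
import random

random.seed(0)  # for a reproducible illustration

def simulate_respondent(in_treatment, p_sensitive=0.15, n_control_items=4):
    """Return the count of 'yes' items a respondent reports.

    The control group sees 4 innocuous items (each endorsed with
    probability 0.5 here); the treatment group additionally sees the
    sensitive item, held by a hypothetical 15 percent of respondents.
    """
    count = sum(random.random() < 0.5 for _ in range(n_control_items))
    if in_treatment and random.random() < p_sensitive:
        count += 1
    return count

control = [simulate_respondent(False) for _ in range(5000)]
treatment = [simulate_respondent(True) for _ in range(5000)]

# The respondent never reveals which items were endorsed, only the total.
# Prevalence is estimated as the difference in mean item counts.
estimate = sum(treatment) / len(treatment) - sum(control) / len(control)
print(f"Estimated prevalence: {estimate:.3f}")  # close to the true 0.15
```

The sketch also makes the precision drawback concrete: the variance of the innocuous-item counts is carried into the estimate, so list experiments need much larger samples than direct questions to achieve the same confidence interval.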

What have we learned from these efforts and what can we do better?

How useful are these indirect measures? Are results persuasive for use in advocacy and programming or to inform future research? In several cases, research using indirect violence measures shows convincing evidence of increases during COVID-19—for example, vignettes in Indonesia, list randomization analysis in Peru, and indirect questions in Uganda. In addition, studies may be able to provide analysis of risk factors or a rationale to motivate future research and programmatic action. However, indicators or particular questions have often not been validated, and existing data are ill-equipped to provide comparisons to gold-standard measures (given the inability to collect direct measures in the first instance). Only in one case, the youth empowerment evaluation in Bolivia, were standard violence measures collected alongside list experiment measures. Results show consistent program impacts across the direct measures and list experiments—yet this example does not necessarily imply that other efforts are capturing specific or sensitive measures. Limitations in measurement are not unique to indirect measures—for example, violence data from reported sources, such as police or health facilities, capture only the tip of the iceberg, thought to represent only the most severe events or the population with better access to services. In addition, direct measures may suffer from under-reporting if participants do not feel safe disclosing, or in settings where violence is highly stigmatized.

While many violence researchers will be slow to endorse the value of indirect measures (or will reject it outright), I would argue they have already proven to be a useful tool in the effort to inform programming and policy during COVID-19. However, more work exploring the validity and measurement properties of these indicators across contexts in post-pandemic settings is needed before they can be used with confidence. No data is worth putting participants at risk. Thus, the types of questions being asked (or omitted) should be carefully scrutinized based on expert assessment of the risks in comparison to the benefits for program and policy decisions. If indirect measures can help protect study populations while yielding actionable evidence, they appear to be a worthwhile investment.

The author thanks Megan O’Donnell and members of the Gender-based Violence sub-group of the Gender and COVID-19 Working Group, including Lara Quarterman and Pavita Singh, for helpful comments and edits.


CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.
