Q&A with Members Video: Rebecca Wolfe
Authors: Ingrid Lee and Max Méndez-Beck
Transcription: Nathalie Gonzalez Aguilar
In today’s Q&A with Members Video, we talk with Rebecca Wolfe, Senior Lecturer and Executive Director of International Policy at the Harris School of Public Policy at the University of Chicago*. Professor Wolfe is a leading expert on political violence, conflict, and violent extremism, and has a unique perspective as both a researcher and a practitioner, previously leading research and program development on conflict and fragility at Mercy Corps.
In the interview, Professor Wolfe shares with us examples of areas where she has seen firsthand how research has influenced policy, the ways in which experimental research has changed academic and practitioners’ ideas regarding conflict prevention, and her experience trying to bring together policymakers and researchers to communicate more effectively.
You can find the recording of the interview below along with the transcript, which has been edited for clarity.
Ingrid Lee: Could you please share an example of where you’ve seen the impact of research from a policy or practitioner perspective?
Rebecca Wolfe: So I am actually going to share two stories, and one is about how I became passionate about the use of evidence for policy change. My first job was at Partners in Health, a global health organization founded with Paul Farmer and Jim Kim. When I was there we uncovered the spread of a drug-resistant tuberculosis strain in the shanty towns outside of Lima, Peru. The World Health Organization’s (WHO) policy, in those days, was that you couldn’t treat poor people who had multidrug-resistant tuberculosis (MDR-TB). We felt that healthcare is a human right and we couldn’t deny people treatment, so at Partners in Health we developed a protocol for treating these patients and collected really good data to show that it could be effective. This data later got into the hands of the WHO, and they changed their policy on treating MDR-TB in poor areas.
This early experience (back in 1995) really shaped my thinking on other issues, such as conflict, and how we needed to have an evidence base when implementing conflict reduction programs. Twenty years on, you really saw how this was an effective approach. When we started doing research on why young people get involved in violent movements, we began to find data showing that economic issues weren’t as central as we thought, or at least they weren’t in the case of political-ideological types of conflicts. So we developed a research agenda around that question with other colleagues, including some EGAP members, such as Chris Blattman and Jake Shapiro.
In 2015, while I was at Mercy Corps, we combined several of these studies, both quantitative and qualitative evidence, and put out a report around the time of the Obama White House Summit on Countering Violent Extremism. The timing of the report, coinciding with the summit, really changed the game and the narrative around why young people got involved in these types of movements. A couple of years later I was at a United Nations Development Programme (UNDP) event on violent extremism and you could hear the tagline from the earlier Mercy Corps report being stated: “It’s not about jobs, it’s injustice.” All of this evidence then led to the creation of what’s called the Global Fragility Act. At Mercy Corps we wanted to show the scope of the evidence, with work from various EGAP members including Jason Lyall, Kosuke Imai, Cyrus Samii, and Maarten Voors. We condensed that information and evidence, took it to Capitol Hill, and showed how it could be possible to reduce support for violence. That ended up convincing policymakers that if we put the right investment into reducing violence we actually could be effective. And that was kind of the start of the Global Fragility Act, which passed Congress probably about a year and a half ago.
Ingrid Lee: That’s an incredible impact. Could you tell us, as a practitioner, where you’ve seen research contributing to our understanding about political violence, conflict, and violent extremism in ways that have not been well understood?
Rebecca Wolfe: I started to hint at this in my previous answer, but I think one of the big areas is around what motivates people to join violent movements. What we’ve seen is that there can be economic reasons — crime, for example, tends to be more economically motivated — but when it comes to political violence it tends to be more socially oriented or related to grievances. So that’s been a big shift, both from an academic point of view and in terms of the practical implications for how we do peacebuilding work in many of these contexts. It’s rare that you will see an employment program aimed at stability today, versus when I started working at Mercy Corps and was writing that kind of program regularly. So there’s been a big shift. I think where we’re also seeing a very strong basis of conflict and peacebuilding programs today is contact theory. If we bring people together, will we reduce prejudice? And while there was good observational data about that, we actually had very little experimental data until more recent work, including from EGAP members Alex Scacco and Laura Paler. I have some work with Chris Grady, fostered through EGAP, that really looked at whether these types of programs actually change relationships. And so what we’re starting to learn is under what conditions these social contact programs will actually foster stronger relationships between groups and reduce conflict.
Ingrid Lee: That’s really fascinating. I would love to hear more about how that work evolves. This last question relates to your unique perspective as both a researcher and practitioner. What are the specific challenges that you think need to be addressed or overcome in order to foster greater knowledge exchange between academic researchers and government representatives or civil society organizations who are implementing work on the ground?
Rebecca Wolfe: While I was at Mercy Corps, I continued to act as something of a translator between academics and practitioners, and tried to understand the various constraints that people operate under. I actually teach a course on this at the Harris School, on how to translate evidence for policy and program design. I think one of the key pieces for me, in understanding it from the academic side, is that the policymaker is going to have to do something with or without evidence; they often don’t have the choice to do nothing. So, for example, if the answer you’re giving [as an academic] is that there is a null result, or that the program isn’t effective, or that you don’t know if it’s effective, unless there’s an alternative program for the policymaker to implement, it’s very hard for them to shift course.
Also, policymakers are in a much more risk-averse setting, so to try something new you have to have a lot of good evidence, and experimentation can be really difficult in a lot of these situations. So more understanding of those constraints, I think, would be useful for academics. On the policymaker side, to go to this point about experimentation, I think policymakers are actually always experimenting. They’re doing social programming, and so often, whether or not there is an experimental design attached to it, it is still an experiment. Policymakers don’t know if a policy will be effective in many of these situations. I think there can be some negative attributions towards academics, that academics are “experimenting on people.” But unless we are a hundred percent sure, or even 50 percent sure, that something is going to work, we are essentially experimenting, and I think that acknowledgement would allow for better collaboration across the two groups. To me it’s about understanding those different perspectives. And if that happened I think there could be more collaboration and recognition that we could do more randomization — because we are already experimenting anyway, so let’s do it in a way that lets us learn more than we could from just a pre-post design.
Relatedly, I think there’s a lot of data that may not be publishable in the top journals, but that policymakers and practitioners collect, and that could foster learning and be a stepping stone for something bigger. At Mercy Corps we did that a lot, where maybe our first study wasn’t the most rigorous. In the work I did with Jason Lyall, Kosuke Imai, and Yang-Yang Zhou, Mercy Corps had initially done a less rigorous study in Helmand. That demonstration — even with perhaps weaker data — allowed us to fundraise for the more rigorous study. I think we throw out a lot of data, and if academics could be a bit more flexible on some of those standards we could actually learn more.
Ingrid Lee: That’s great feedback. I think this is exactly the type of collaboration that EGAP is trying to foster. Thanks so much, Professor Wolfe, for joining us today.
Rebecca Wolfe: Thank you Ingrid.
*: Rebecca Wolfe’s position at the Harris School of Public Policy changed between the time of the recording of the interview and the publication of this post. In the video she is introduced with her previous title (Assistant Instructional Professor), while her current title is Senior Lecturer and Executive Director of International Policy.