Q&A with Members: Lucía Tiscornia and Verónica Pérez Bentancur
Author: Max Méndez Beck
In today’s Q&A with Members we feature EGAP member Lucía Tiscornia (CIDE) and her co-author Verónica Pérez Bentancur (Universidad de la República). We asked them about their recent paper, “Iteration in Mixed-Methods Research Designs Combining Experiments and Fieldwork,” published in Sociological Methods & Research.
As you note in the paper, experimental designs are a powerful tool that researchers use to understand causality. What are some of the challenges and limitations that this approach presents?
Lucía Tiscornia & Verónica Pérez Bentancur: Researchers have pointed out a series of limitations in experimental designs; in the paper we focus on two: unrealistic treatments and lack of external validity (Blair and McClendon 2021; Dunning 2015; Seawright 2016). A treatment is realistic when it reasonably reproduces a real-life scenario. Researchers need to consider the context to design treatments that adequately reflect those scenarios (Seawright 2016:166-167; 2021:371). Realistic interventions are central to construct validity. Construct validity can be understood as the degree to which a measurement tool (in this case, the experimental design) is well-calibrated, that is, the degree to which it measures what the researcher wants to measure and not something else; this also requires the accurate operationalization of concepts. Better measurement allows researchers to more easily extend their design to other contexts because it gives them a clearer sense of the scope conditions of their theories. In the paper, we argue that researchers can overcome these limitations by using a mixed-methods approach that explicitly incorporates qualitative elements in the design phase.
What role can qualitative research play in overcoming those limitations?
LT & VPB: Increasingly, qualitative tools are used to strengthen the design of experimental interventions (design is distinct from the analysis of results or the specification of causal mechanisms). Qualitative tools (in-depth interviews, ethnographies, focus groups, systematic review of the press, and archival work, among many others) can help researchers build thick descriptions of a given context. These descriptions allow researchers to more easily conceptualize and operationalize treatments so that they are aligned with the context. For example, if we are designing a list experiment, the sensitive items can emerge from conversations with interviewees; a vignette experiment can draw on newspaper articles. The ultimate goal is to produce strong causal inferences. If interventions do not comport with reality, research can yield null findings purely as an artifact of the design. Thick descriptions also allow researchers to identify the relevant attributes in an experiment. For example, in our research we are interested in understanding the dimensions of the concept of “deservingness of punitive policing” in the context of Uruguay; in other words, when does the public support more violent police treatment of suspects? This requires thinking about the drivers behind deservingness: is it social class? Race? Gender? These features are contextual.
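To make the connection concrete, here is a minimal sketch, in Python, of how fieldwork-derived content might feed the randomization of a list experiment. The items below are hypothetical placeholders of our own, not from the authors' study; in a real design they would come out of interviews or focus groups:

```python
import random

# Hypothetical items: in practice, the control items and especially the
# sensitive item would be drawn from qualitative fieldwork, not invented
# at the researcher's desk.
CONTROL_ITEMS = [
    "The city should repave more streets",
    "Bus fares are too high",
    "Public parks need better lighting",
]
SENSITIVE_ITEM = "The police should treat suspects more harshly"

def assign_list(respondent_id: int, seed: str = "design-v1") -> dict:
    """Assign a respondent to the control list (J items) or the
    treatment list (J + 1 items, including the sensitive one)."""
    rng = random.Random(f"{seed}-{respondent_id}")  # reproducible per respondent
    treated = rng.random() < 0.5
    items = CONTROL_ITEMS + ([SENSITIVE_ITEM] if treated else [])
    rng.shuffle(items)  # randomize presentation order
    return {"respondent": respondent_id, "treated": treated, "items": items}

for i in range(4):
    print(assign_list(i))
```

Respondents report only how many items they agree with; the difference in mean counts between the two groups then estimates agreement with the sensitive item, which is why getting that item's wording right through fieldwork matters so much.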
In the paper you suggest an iterative process for combining qualitative fieldwork and experiments. Could you describe this iterative process?
LT & VPB: We can think of iteration as movement back and forth in the research process. There are different ways to conceptualize iteration: for example, between methods of reasoning (induction and deduction), or between different elements of a design (theory and fieldwork) (Kapiszewski, MacLean, and Read 2015; Yom 2015). Through iteration, researchers can revise and update different parts of the design: the research question, concepts, hypotheses, cases, and instruments for data collection, such as interview protocols and survey questionnaires (Kapiszewski, MacLean, and Read 2015: 23). Our contribution lies in specifying how to iterate between methods, in particular between qualitative fieldwork and experimentation, to improve a research design prior to testing. In our view, iteration is a mechanism for improving measurement in experimental designs because it allows us to use qualitative tools to probe the untested assumptions of experiments. For example, an experiment designed solely on the basis of an extensive review of the literature likely makes assumptions about the face validity of the treatments. One could instead take that review as a starting point and then use interviews or newspapers to adjust the design, going back to the original and modifying it accordingly. Using qualitative tools, we can contrast our assumptions with the context and either validate the treatments or adjust them if they do not reflect the context. This back-and-forth can continue until we are satisfied with the design.
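The iterative loop itself is a human research process rather than an algorithm, but as a stylized illustration (our sketch, not the authors' formalization), it can be summarized in code, with each helper standing in for a qualitative step:

```python
# Stylized sketch of design iteration; the helpers are placeholders
# for human activities (interviews, press review, archival work).

def probe_assumptions(design: dict) -> list:
    """Stand-in for qualitative fieldwork: return the design elements
    whose assumptions do not hold up against the context."""
    return [t for t, ok in design["face_valid"].items() if not ok]

def revise(design: dict, problems: list) -> dict:
    """Stand-in for adjusting the design in light of what fieldwork shows."""
    for t in problems:
        design["face_valid"][t] = True  # placeholder: fieldwork suggests a fix
    return design

def iterate_design(design: dict, max_rounds: int = 5) -> dict:
    """Go back and forth until the treatments reflect the context."""
    for _ in range(max_rounds):
        problems = probe_assumptions(design)
        if not problems:  # treatments validated against the context; stop
            break
        design = revise(design, problems)
    return design

design = {"face_valid": {"vignette_wording": False, "attribute_set": False}}
print(iterate_design(design))
```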
As part of this paper, you analyzed 338 pre-analysis plans in EGAP’s registry to understand how mixed methods are employed in research. What did you find?
LT & VPB: Of the 338 documents we analyzed, only 14 designs combine methods for the purpose of improving the design itself, which represents about 4% of the PAPs pre-registered with EGAP in 2019. The vast majority of the designs that employ mixed methods combine interviews with experiments; other tools, such as participant observation or ethnographic methods, are far less common, and iteration between elements of the design is rarely discussed explicitly.
Even when researchers do draw on several methods, the PAPs do not include enough detail to allow other researchers to learn from these strategies. It is possible that authors leave out discussions of qualitative methods because they see them as secondary, or pre-scientific. Yet when PAPs include such details, they allow for a better understanding of the whole research process and give other researchers more elements for replicating the design. Overall, the process of writing a PAP is an opportunity to think carefully about the design, and to iterate.
You also use, as a case study, a research project that was carried out in Montevideo, Uruguay on public support for punitive policing practices. What did you learn from this case study?
LT & VPB: To understand when the public supports a more violent police treatment of suspects, we combined a vignette experiment with extensive interviews in Montevideo, Uruguay. In carrying out this project, we extracted two main lessons: one is applied, in terms of our design; the other is more general, and it pertains to the advantages of combining methods explicitly.
Regarding the applied lesson, fieldwork was helpful in making sure our experimental manipulation would resonate with survey respondents. In our original design, our intuition was that deservingness could be tied to attributes such as nationality, race, or location. However, because the concrete manifestation of some of these cleavages was not clear, and some of them are not particularly salient in the Uruguayan context (neither racial nor xenophobic cleavages are particularly strong), we used fieldwork systematically to refine our theory by identifying the relevant attributes of deservingness. Because deservingness is abstract and can be understood differently in different contexts, we needed to ensure that we were gathering evidence about it and not about some other related phenomenon. In many of our interviews, neighborhood residents stated that young, dark-skinned men wearing baseball caps or hoodies were a symbol of insecurity. Based on these insights, we adjusted the theory to incorporate social class as a relevant attribute of deservingness. We subsequently adjusted our experimental design by incorporating “attire” as a marker of deservingness in the set of descriptive features used in the experiment; we also discarded other attributes that did not feature prominently in our interviews. We would not have taken this step had we not conducted interviews and informal conversations with neighborhood residents.
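For illustration, here is a minimal sketch of how an attribute identified through fieldwork, such as attire, might enter a vignette's randomization. The attribute levels and vignette text below are hypothetical stand-ins of ours, not the authors' actual instrument:

```python
import random

# Hypothetical attributes and levels; in the study described above,
# "attire" was added, and other attributes dropped, based on interviews.
ATTRIBUTES = {
    "attire": ["a suit", "a baseball cap and hoodie"],
    "age": ["young", "middle-aged"],
}

TEMPLATE = "A {age} man wearing {attire} is stopped by the police."

def draw_vignette(rng: random.Random) -> str:
    """Assign one randomly chosen level per attribute and fill the template."""
    profile = {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}
    return TEMPLATE.format(**profile)

rng = random.Random(2019)
for _ in range(3):
    print(draw_vignette(rng))
```

Randomizing each attribute independently is what lets a researcher estimate the marginal effect of a single fieldwork-derived cue, such as attire, on support for punitive treatment.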
The main overarching lesson of this project is that making explicit all the steps we take in designing our research is important both for assessing the strength of the design and for replication. Experimental designs often rely on fieldwork, interviews, and case knowledge, but this fact is rarely made explicit or public. Because the process of combining methods is not discussed, researchers have little guidance on how to do it, or on why making the process explicit is valuable. Yet the strength of an experiment can only be properly evaluated if researchers know what kind of data and prior knowledge went into its design. Because iteration forces researchers to be explicit about exactly what they are changing in their designs, it makes it easier to systematize the steps taken.