Beyond the Metaketa Initiative – Reflections from Meetings with EGAP Members
Authors: Anna Wilke, Jaclyn Leaver, Matthew Lisiecki, and Cyrus Samii
In recent years, social scientists have grown increasingly concerned about the internal and external validity of empirical findings. Heterogeneity in study design, effect heterogeneity, and selective reporting are all challenges that contribute to the so-called “replication crisis.” Since 2013, EGAP’s Metaketa Initiative has been at the forefront of efforts to address these problems. The Metaketa model involves the simultaneous implementation of a set of field experiments in multiple sites. Pre-registration, coordination, and harmonization of study procedures are core features of the model that facilitate knowledge accumulation through meta-analysis. Metaketas share goals and a structure similar to those of “master protocol” trials in the health sciences (Park et al. 2019).
Having successfully launched five Metaketa rounds, EGAP invited members of each round’s steering committee to a series of meetings in late 2020 to reflect on the Metaketa model and explore future directions. This post summarizes key insights that emerged from a discussion about how future collaborative research efforts could be organized. We focus in particular on whether it makes sense for such coordinated trials to be tightly centralized under the control of a single team or researcher, or whether it might be better to let relatively independent teams work in a decentralized manner.
The traditional Metaketa model has both centralized and decentralized aspects. Although each field experiment is run by a relatively independent country team, a steering committee oversees all projects and works with the teams to harmonize study procedures. One can envision both less and more centralized models of collaborative research. A platform model, for example, sits on the less centralized side. Platforms serve as hubs with light, possibly automated moderation that summarize existing evidence in a given domain and offer standards for measurement and possibly for intervention design. Independent researchers can use these hubs to design their studies in ways that maximize cumulative learning and to feed their results back into the existing knowledge base (following established meta-analytical methods). On the more centralized side, the powers of those coordinating a Metaketa (that is, the steering committee) could be expanded, perhaps to the point that all projects are run by a single team and overseen by a single principal investigator (PI).
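To make concrete what feeding results back into a shared knowledge base might look like, the sketch below pools site-level effect estimates with a standard random-effects (DerSimonian-Laird) meta-analysis. The numbers are purely hypothetical, and the code is a minimal illustration of the general approach, not the procedure used in any particular Metaketa.

```python
import numpy as np

# Hypothetical treatment-effect estimates and standard errors from five independent sites
effects = np.array([0.12, 0.05, 0.20, -0.02, 0.09])
ses = np.array([0.06, 0.04, 0.08, 0.05, 0.07])

# Fixed-effect (inverse-variance) pooling
w = 1.0 / ses**2
theta_fixed = np.sum(w * effects) / np.sum(w)

# Between-site heterogeneity (tau^2) via the DerSimonian-Laird estimator
q = np.sum(w * (effects - theta_fixed) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

# Random-effects pooling that incorporates the heterogeneity estimate
w_re = 1.0 / (ses**2 + tau2)
theta_re = np.sum(w_re * effects) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"Pooled effect: {theta_re:.3f} (SE {se_re:.3f}), tau^2 = {tau2:.4f}")
```

A platform hub could, in principle, automate exactly this kind of aggregation, updating the pooled estimate as new site-level studies are registered and completed.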
Both centralization and decentralization have up- and downsides. Here is a summary of key trade-offs:
Project Quality and Feasibility Across Locations. On the one hand, a centralized review of study procedures may help ensure high standards of research design and implementation across studies. Where centralization means that a single organization holds the funding for all projects, it may also be easier to ensure compliance with quality requirements. On the other hand, successful project implementation often demands substantial local knowledge. In many cases, intervention-based field experiments are only feasible if researchers have connections to implementing partners in governments or civil society, or to location-specific data sources. Thus, the degree of centralization could affect which sites are selected for studies, as well as researchers’ ability to work efficiently within site-specific constraints. A completely centralized model may run into difficulties in terms of access to context-specific knowledge, especially if the research spans several countries or continents.
Replication. A common goal of coordinated research efforts is replication. Implementing the same experiment multiple times lets researchers explore the extent to which the results of a given study can be obtained again, including under different conditions. Because professional incentives often push researchers toward innovation, few researchers in a decentralized platform model may opt to replicate an existing study. Centralization can provide incentives for replication and can facilitate the harmonization of study procedures across studies. As with quality requirements, harmonization is likely easier to enforce where the funding source is centralized as well.
Too much centralization, however, may undermine the value of replication. Replication efforts typically seek to find out not only whether a result can be reproduced, but also whether it is robust to perturbations of the conditions under which it was first obtained. The identity of the research team itself is often thought of as a factor that may affect results. That a result was replicated in multiple experiments may therefore inspire more confidence if these experiments were run by many independent research teams rather than by a single centralized team. Indeed, recent evidence from the biomedical sciences shows that decentralized scientific communities produce more robust empirical findings than centralized ones (Danchev et al. 2019).
Innovation. A centralized model may make it easier to harmonize study procedures, but a decentralized approach may encourage innovation. Where diverse research teams can come up with new ways to manipulate or measure the same theoretical constructs, decentralization may help increase our confidence that results are robust to alternative operationalizations. Decentralized teams may also be better placed to find the most effective treatments, especially where effectiveness is context-dependent. Finally, a decentralized approach may allow for innovation in theory testing. Even where decentralized teams cannot agree to implement the same treatment, each team may be able to come up with its own innovative test of an implication of the same overarching theory.
Researcher incentives. The amount of work required and the long time horizon of many coordinated studies raise the question of how to ensure researchers’ participation. Many of those involved in the first five Metaketas, especially junior scholars, cited the opportunity to expand their professional networks as an incentive. Networking opportunities may be narrower with more centralization, though a highly centralized model may allow for more direct collaboration between junior and senior scholars. A completely decentralized platform model may be the most limited in terms of immediate networking opportunities. At the same time, decentralized models may be attractive insofar as they maintain flexibility with regard to publication rights and do not require collaborative products, such as meta-analysis articles, to be published before individual studies.
Ultimately, there may not be a single degree of centralization or decentralization that is correct for all collaborative research programs. In ways that are more complex than classic studies of centralization versus decentralization would suggest (e.g., Maskin et al. 2000), the appropriate level of centralization depends on what makes sense both from the perspective of the researchers involved and from the perspective of diverse external stakeholders. Different models may be best suited to different kinds of research initiatives. Where little prior knowledge exists about how to solve a social problem across contexts, a decentralized model that harnesses the creativity of diverse teams to come up with suitable interventions may be best. The same may apply where the goal is to test an overarching theory that makes varying predictions across contexts. In cases where more evidence is needed about the effect of one promising intervention, the premium on harmonization may be high. Here, a centralized model may be most beneficial, especially if implementing the intervention in question does not require deep local connections.
References
Danchev, Valentin, Andrey Rzhetsky, and James A. Evans. 2019. “Meta-Research: Centralized Scientific Communities Are Less Likely to Generate Replicable Results.” eLife 8: e43094.
Maskin, Eric, Yingyi Qian, and Chenggang Xu. 2000. “Incentives, Information, and Organizational Form.” The Review of Economic Studies 67 (2): 359-378.
Park, J. J. H., E. Siden, M. J. Zoratti, et al. 2019. “Systematic Review of Basket Trials, Umbrella Trials, and Platform Trials: A Landscape Analysis of Master Protocols.” Trials 20: 572. https://doi.org/10.1186/s13063-019-3664-1