Brief 22: Getting Out the Vote

Gerber and Green's findings challenged the norms about 'getting out the vote' common among professional campaign managers, leading Hansen and Bowers to question how political campaigns in the USA should continue to spend their money.

Link to Full Study

Category: Elections

Tags: phonebanking, personal contacts, campaigning

Date of Publication: Wednesday, May 20, 2015

EGAP Researcher: Jake Bowers

Other Authors: Ben Hansen

PDF: HansenBowers2009att.pdf

Click to Download the Data

Geographical Region: North America

Research Question:

The New Haven 1998 experiment that is the focus of this paper was a multilevel, or cluster-randomized, field experiment on voter turnout: treatment was assigned at random to households, while public records revealed whether each individual voted. Additionally, those who answered the door or phone differed from those who were assigned to be contacted but did not answer: they were older and had stronger histories of past voting, for example. How should analysts confront these analytic challenges in the clearest and most transparent manner?

Preparer: Jake Bowers, Alex Coppock, Damaris Colhoun



In the late 1990s, Alan Gerber and Don Green brought a powerful new tool to the debate: the randomized field experiment. They sought to compare the effectiveness of Get-Out-the-Vote (GOTV) messages delivered to households by phone, in-person, and via the mail. Shortly before the November 1998 election in New Haven, Connecticut, Gerber and Green assigned households at random to GOTV messages, enabling an apples-to-apples comparison between those who were assigned to be treated and those who were not. Gerber and Green found that personal contact mobilized voters but that paid phone banks did not appear to influence voters (Gerber and Green 2000). This finding challenged the norms common among professional campaign managers.

Gerber and Green’s statistical analysis was disputed in the academic literature by Kosuke Imai. Imai noticed that the characteristics of individuals in households assigned to treatment differed from the characteristics of individuals in control households. In fact, when he treated the study as not randomized (under the idea that a failed randomization had caused the differences), he reported a positive and relatively large effect from telephone interventions (Imai 2005).

However, Gerber and Green’s own reanalysis of the data continued to show no effects from telephone calls (Gerber and Green 2005). So, what was the answer? Should political campaigns in the USA continue to spend money on telephone banks? Or reorganize to focus attention on personal door-to-door contacts?

Hansen and Bowers noticed that the answer to this policy question hinged on questions of methodology. Not every household assigned to receive a phone call or personal visit actually experienced the intervention: many people were not home or did not answer their phones when the field staff attempted contact. Additionally, people in households who answered the door tended to differ in systematic ways from people who did not. Non-random non-contact and other problems arising from field staff walking neighborhoods ought to be common in field experiments, even though genuinely failed randomizations are rare. Could part of the reason for the conflicting findings be confusion about how to analyze cluster-randomized field experiments?
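The problem of non-random contact can be made concrete with a small simulation (all names and numbers below are illustrative, not from the New Haven data): when people with strong voting histories are more likely to answer the door, a naive comparison of contacted versus uncontacted individuals overstates the effect of contact, while the intent-to-treat comparison of assigned versus unassigned groups does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical population: past voters are more likely to answer the door.
past_voter = rng.random(n) < 0.5
assigned = rng.random(n) < 0.5                       # random GOTV assignment
answers = rng.random(n) < np.where(past_voter, 0.8, 0.3)
contacted = assigned & answers                       # one-sided noncompliance

# The true effect of contact on turnout is zero in this simulation;
# turnout depends only on past voting history.
voted = rng.random(n) < np.where(past_voter, 0.6, 0.2)

# Naive comparison of contacted vs. uncontacted is biased upward,
# because the contacted group is disproportionately past voters...
naive = voted[contacted].mean() - voted[~contacted].mean()

# ...while the intent-to-treat comparison recovers (approximately) zero.
itt = voted[assigned].mean() - voted[~assigned].mean()

print(f"naive: {naive:.3f}, intent-to-treat: {itt:.3f}")
```

In this simulation the naive estimate is large and positive even though contact does nothing, which is exactly the kind of misleading comparison that motivates analyzing the experiment by assignment rather than by realized contact.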

Research Design:


Cluster Randomization and Non-random Treatment Compliance

The effect of the treatment on the treated here is the total number of individuals who would not have voted in the absence of a telephone or in-person contact. This quantity concerns only those who answered the door or phone when assigned such contact. Since households were randomly assigned to treatment, and since cluster-level randomization checks gave no evidence that randomization had failed, Hansen and Bowers show how to use the random assignment itself to generate a confidence interval for this total.
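Hansen and Bowers derive exact, randomization-based confidence intervals for this attributable effect; their method is not reproduced here. As a simpler illustration of the same logic, a widely used approximation (the Bloom estimator) divides the intent-to-treat effect by the contact rate to estimate the effect of treatment on the treated under one-sided noncompliance. A minimal sketch on simulated household-level data (all quantities illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative clustered data (not the New Haven data): households are
# clusters; assignment Z varies by household, contact D and turnout Y
# are recorded at the individual level.
n_households = 5_000
hh_size = rng.integers(1, 4, n_households)           # 1-3 people per household
hh = np.repeat(np.arange(n_households), hh_size)     # household id per person
z_hh = rng.random(n_households) < 0.5                # household-level assignment
Z = z_hh[hh]                                         # individual-level copy

# One-sided noncompliance: only people in assigned households can be contacted,
# and only 40% of them answer.
D = Z & (rng.random(len(hh)) < 0.4)

# In this simulation, contact raises turnout probability by 0.1.
Y = (rng.random(len(hh)) < 0.3 + 0.1 * D).astype(float)

itt = Y[Z].mean() - Y[~Z].mean()                     # intent-to-treat effect
contact_rate = D[Z].mean()                           # share of assigned contacted
att = itt / contact_rate                             # Bloom estimator of the
                                                     # effect on the treated
print(f"ITT: {itt:.3f}, contact rate: {contact_rate:.2f}, ATT: {att:.2f}")
```

The estimate recovers the simulated 0.1 effect of contact on the contacted, even though the raw intent-to-treat difference is diluted by the 60% of assigned individuals who were never reached.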


In their 2009 paper, Hansen and Bowers addressed the analytic challenges posed by the New Haven 1998 study. They noticed that the statistical tests initially used to assess randomization could easily mislead analysts if treatment is assigned to households but testing ignores households and considers only individuals (Hansen and Bowers 2008). They developed a test that takes clustered treatment assignment into account; this test did not support the argument that the random assignment had failed.
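Why do individual-level balance tests mislead when assignment happens at the household level? Because covariates are correlated within households, re-randomizing individuals understates the variability that the clustered design actually produces, so individual-level tests reject too often and can falsely signal a failed randomization. A small simulation (illustrative data, not the authors' test statistic) shows that the null distribution of a covariate difference is wider under household-level re-randomization than under individual-level permutation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: a covariate such as age, strongly correlated
# within households.
n_households = 1_000
hh_size = rng.integers(1, 4, n_households)
hh = np.repeat(np.arange(n_households), hh_size)
age = 45 + rng.normal(0, 10, n_households)[hh] + rng.normal(0, 5, len(hh))

# Treatment assigned at the household level, as in the experiment.
z_hh = rng.random(n_households) < 0.5
Z = z_hh[hh]

def mean_diff(z):
    """Treatment-minus-control difference in mean age."""
    return age[z].mean() - age[~z].mean()

# Null distributions under two re-randomization schemes:
# (a) permute households, matching how treatment was actually assigned;
# (b) permute individuals, ignoring the clustered design.
cluster_null = np.array(
    [mean_diff(rng.permutation(z_hh)[hh]) for _ in range(2_000)])
indiv_null = np.array(
    [mean_diff(rng.permutation(Z)) for _ in range(2_000)])

# The cluster-respecting null is wider: an individual-level test compares
# the observed imbalance to too narrow a reference distribution.
print(f"cluster sd: {cluster_null.std():.2f}, "
      f"individual sd: {indiv_null.std():.2f}")
```

Comparing an observed covariate imbalance to the narrower individual-level distribution makes ordinary imbalance look like evidence of failed randomization, which is the trap the cluster-aware test avoids.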


Hansen and Bowers showed that voters were most effectively mobilized by personal visits, while telephone calls had no discernible effects. Their analysis was statistically more powerful than the conventional approaches while relying on fewer assumptions. They also suggested that these campaigns most powerfully influenced those who had voted before (e.g., older citizens), perhaps explaining the continuing low turnout among the young despite many voter mobilization efforts. The analysis showed a small negative effect of phone calls overall, with the negative effects more pronounced among those who had never voted before.


Policy Implications:

Hansen and Bowers showed that even complex randomized field experiments can be analyzed in a way that focuses attention on substantive debates and minimizes methodological arguments. It is clear that, in New Haven in 1998, in-person canvassing mattered much more than other forms of voter turnout effort and that the previously popular telephone banks had no effect or were counterproductive. Most of the experiments fielded since the Vote ’98 study agree with the general idea that personal contact is particularly powerful (see the summaries at this Yale ISPS website). As GOTV campaigns become more effective, will we see an increasingly educated, older, and participatory electorate that underrepresents the interests of younger, less educated, and less politically interested citizens? Can we design GOTV campaigns that increase both turnout and representativeness? Studies like these have raised these questions and, for now, questions they will remain.


References:

Gerber, Alan S., and Donald P. Green. 2000. “The Effects of Canvassing, Telephone Calls, and Direct Mail on Voter Turnout: A Field Experiment.” American Political Science Review 94(3): 653–63.

———. 2005. “Correction to Gerber and Green (2000), replication of disputed findings, and reply to Imai (2005).” American Political Science Review 99(2): 301–13.

Hansen, Ben, and Jake Bowers. 2009. “Attributing Effects to a Cluster Randomized Get-Out-the-Vote Campaign.” Journal of the American Statistical Association 104(487): 873–85.

Hansen, Ben, and Jake Bowers. 2008. “Covariate Balance in Simple, Stratified and Clustered Comparative Studies.” Statistical Science 23: 219.

Imai, Kosuke. 2005. “Do get-out-the-vote calls reduce turnout? The importance of statistical methods for field experiments.” American Political Science Review 99(2): 283–300.