|Title||Party, policy, and democracy: What do voters value in candidates? (study has been modified, see 20181024AB)|
|C1 Background and Explanation of Rationale||
This version of the study has been canceled. A modified version of this study is now registered under the ID of 20181024AB.
In recent years, scholars have argued that public support for democracy and elite commitment to democratic values are on the decline. However, this research typically focuses on measuring abstract notions of support for democracy or ratings of democratic performance. As a result, few studies have considered voters’ commitment to democracy in practice relative to other important considerations. Building on Graham and Svolik (2018) and Svolik (2018), we use fully randomized conjoint analysis to explore the strength of Americans’ commitment to democratic values in a series of hypothetical election scenarios. Our main interest is to determine whether, and to what extent, voters oppose candidates who do not uphold democratic values. We test a series of competing expectations regarding popular opposition to democratic norm violations versus popular support for voter ID laws, the involvement of legislators in law enforcement investigations, and unwillingness to compromise with partisan opponents. We also investigate (1) which specific democratic values and policy positions are most strongly related to vote choice and how those effects compare; (2) whether voters are more likely to forgive transgressions against democratic values by co-partisan candidates than by opposition-party candidates (and how that relationship varies with approval of Donald Trump); and (3) whether the effects of candidates acting undemocratically vary by education, political knowledge, political interest, and/or age. Answers to these questions will help us better understand the strength of Americans’ commitment to democracy and how it operates in the context of competitive partisan elections.
|C2 What are the hypotheses to be tested?||
|C3 How will these hypotheses be tested? *||
Our experiment employs fully randomized conjoint analysis to determine the extent to which Americans prioritize democratic values, policy positions, partisanship, and other attributes in hypothetical election scenarios. We conduct an online survey experiment on Qualtrics with a sample of approximately 1,000 respondents recruited from the Amazon Mechanical Turk online marketplace. Drawing theoretical inspiration from Graham and Svolik (2018) and Svolik (2018), we present respondents with 10 pairs of hypothetical candidates in an election who randomly vary on seven attributes: name, partisanship, two policy platforms, and three “democracy” platforms. The policy platforms concern attitudes toward limited government and cultural conservatism, and the democracy platforms concern voting rights, law enforcement investigations, and legislative compromise. A complete list of the attributes and levels in our experiment is included at the end of this document.
All candidate attribute levels are randomly selected from a predetermined set of levels. Specifically, candidate names are randomly chosen from a set of 123 names used in Butler and Homola (2017) as signals of race/ethnicity and gender. In our analysis, we will pool names into race/ethnicity and gender categories (e.g., white female, Hispanic male, black female) and estimate the AMCEs for each category, using the “white male” category as a baseline. Note that we opt to use names rather than separate attributes for race/ethnicity and gender to (1) increase the realism of the candidate profiles and (2) use fewer total attributes in the conjoint tables. All other attributes have just two levels: partisanship is randomly selected to be Democrat or Republican; each policy platform is randomly chosen to correspond to a conservative (e.g., “Wants to lower taxes on the wealthy”) or liberal (e.g., “Wants to raise taxes on the wealthy”) stance; and each democracy platform is randomly chosen to correspond to a democratic norm (e.g., “Said law enforcement investigations of politicians and their associates should be free of partisan influence”) or a democratic norm transgression (e.g., “Said elected officials should supervise law enforcement investigations of politicians and their associates”). The name and partisanship attributes appear first in the table, in that order, but the order of all other attributes is randomized across respondents.
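The randomization scheme described above can be sketched as follows. This is a minimal illustration, not the survey instrument: the attribute names and levels below are placeholders (the actual study draws names from the full 123-name set and uses the exact platform wordings listed at the end of this document, and only four of the seven attributes are shown here).

```python
import random

# Placeholder subset of attributes and levels (illustrative only).
ATTRIBUTES = {
    "Name": ["John Smith", "Maria Garcia"],  # stand-ins for the 123-name set
    "Party": ["Democrat", "Republican"],
    "Taxes": ["Wants to lower taxes on the wealthy",
              "Wants to raise taxes on the wealthy"],
    "Investigations": [
        "Said law enforcement investigations of politicians and their "
        "associates should be free of partisan influence",
        "Said elected officials should supervise law enforcement "
        "investigations of politicians and their associates"],
}

def random_profile(rng):
    """Draw one candidate profile: each attribute's level is chosen
    independently and uniformly at random (full randomization)."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def conjoint_task(rng, fixed_order=("Name", "Party")):
    """One paired-profile task: name and partisanship always appear first,
    in that order; the display order of the remaining attributes is
    randomized."""
    other = [a for a in ATTRIBUTES if a not in fixed_order]
    rng.shuffle(other)
    return list(fixed_order) + other, random_profile(rng), random_profile(rng)
```

In the study itself, the attribute display order is randomized across respondents but held fixed within a respondent's ten tasks; enforcing that would simply mean drawing the order once per respondent rather than per task.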
After viewing each pair of profiles, respondents are asked to select which candidate they would be more likely to support. They are then asked to rate each candidate on a 4-point favorability scale. Respondents complete this exercise for a total of ten candidate pairs. The survey also includes a battery of demographic and attitudinal questions, a series of questions on political knowledge, and an opportunity for respondents to provide written feedback.
For our main analysis, we will calculate the average marginal component effects (AMCEs) for each level of each attribute included in the conjoint. AMCEs correspond to the average effect of changing each hypothetical candidate attribute on respondents’ preferences for one candidate over another, relative to a baseline level. Following Hainmueller and Hopkins (2015), we will calculate the AMCEs based on two dichotomous outcome measures: candidate preferred (a dichotomous variable indicating whether or not a given candidate was selected), and candidate rating (a dichotomous measure created from respondents’ rating of each candidate on a 4-point scale, where scores above the median indicate that the candidate is preferred and scores below the median indicate that the candidate is not preferred). We intend to use the AMCE estimates based on the “candidate preferred” outcome measure for our main results, but will conduct analyses based on the “candidate rating” outcome measure as a robustness check. The treatment variables in our analysis are sets of dichotomous variables for each attribute, wherein if a given attribute has k levels, we include k - 1 dichotomous variables in our model. We estimate clustered standard errors in which each cluster is a respondent.
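This estimation strategy can be sketched for a single two-level attribute, assuming a 0/1 “candidate preferred” outcome: under full randomization, the OLS coefficient on the attribute dummy is the difference in mean selection rates between the two levels (the AMCE), and the variance is estimated with a cluster-robust (CR0) sandwich that clusters on respondent, since each respondent evaluates many profiles. This is an illustrative hand-rolled version, not the authors' actual analysis script.

```python
import numpy as np

def amce_and_clustered_se(chosen, treated, cluster_ids):
    """AMCE of a two-level attribute and its cluster-robust (CR0) SE.

    chosen      : 0/1 per profile row, 1 if the candidate was selected
    treated     : 0/1 per profile row, 1 for the non-baseline level
    cluster_ids : respondent identifier for each profile row
    """
    y = np.asarray(chosen, dtype=float)
    X = np.column_stack([np.ones_like(y), np.asarray(treated, dtype=float)])
    cids = np.asarray(cluster_ids)

    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y          # [intercept, AMCE]
    resid = y - X @ beta

    # Sandwich "meat": sum over respondent clusters of X_g' e_g e_g' X_g.
    meat = np.zeros((2, 2))
    for g in np.unique(cids):
        idx = cids == g
        v = X[idx].T @ resid[idx]
        meat += np.outer(v, v)

    V = XtX_inv @ meat @ XtX_inv      # cluster-robust covariance
    return beta[1], np.sqrt(V[1, 1])
```

In practice one would include the full set of k - 1 dummies per attribute in a single regression and use a standard implementation of cluster-robust variance (e.g., statsmodels' `cov_type='cluster'` in Python or the `cjoint` package in R) rather than the hand-rolled loop above; the sketch makes the mechanics explicit.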
For our analysis of heterogeneous treatment effects among respondent subgroups, we will first divide respondents into two groups using a predetermined set of criteria for each moderator (i.e., partisanship, Trump approval, political interest, political knowledge, education level, and age; see below). We will then calculate the AMCEs for each group separately and take the difference between the two subgroups for each level of each attribute.
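The subgroup comparison above amounts to a difference of AMCEs. A minimal sketch for a two-level attribute and a binary respondent moderator (standard errors and the moderator-coding rules omitted for brevity; the variable names are illustrative):

```python
import numpy as np

def amce(chosen, treated):
    """Difference-in-means AMCE for a two-level attribute under full
    randomization: selection rate at the non-baseline level minus the
    selection rate at the baseline level."""
    chosen = np.asarray(chosen, dtype=float)
    treated = np.asarray(treated, dtype=bool)
    return chosen[treated].mean() - chosen[~treated].mean()

def subgroup_amce_gap(chosen, treated, in_group):
    """Difference between the AMCE among respondents in the subgroup
    (e.g., Trump approvers) and the AMCE among everyone else."""
    chosen = np.asarray(chosen, dtype=float)
    treated = np.asarray(treated, dtype=bool)
    in_group = np.asarray(in_group, dtype=bool)
    return (amce(chosen[in_group], treated[in_group])
            - amce(chosen[~in_group], treated[~in_group]))
```

Because respondents are split on pre-treatment criteria and profiles are randomized independently of the moderator, the two subgroup AMCEs are estimated on disjoint samples and their difference is the quantity of interest for each level of each attribute.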
|C4 Country||United States|
|C5 Scale (# of Units)||1,000|
|C6 Was a power analysis conducted prior to data collection?||No|
|C7 Has this research received Institutional Review Board (IRB) or ethics committee approval?||Yes|
|C8 IRB Number||Dartmouth Committee for the Protection of Human Subjects Study #00030030, Modification # MOD00007815|
|C9 Date of IRB Approval||September 13, 2018|
|C10 Will the intervention be implemented by the researcher or a third party?||Researchers|
|C11 Did any of the research team receive remuneration from the implementing agency for taking part in this research?||No|
|C12 If relevant, is there an advance agreement with the implementation group that all results can be published?||not provided by authors|
|C13 JEL Classification(s)||C90, C83, D72|