Priority Theme Spotlight: Brendan Nyhan
Author: Ayuko Picot
Theme: Democracy, Conflict, & Polarization
In today’s Priority Theme Spotlight, we spoke with EGAP member Brendan Nyhan (Dartmouth College) about the working paper he co-wrote with Annie Y. Chen (CUNY Institute for State & Local Governance), Jason Reifler (University of Exeter), Ronald E. Robertson (Stanford University), and Christo Wilson (Northeastern University), titled “Subscriptions and external links help drive resentful users to alternative and extremist YouTube videos.”
Nyhan and his co-authors conducted a study in 2020 that monitored the web browsing behavior of 1,181 participants on YouTube. Although YouTube is the most popular social media platform in the United States, few studies have examined its impact on misinformation. While critics have pointed to the role of YouTube’s recommendation algorithm in promoting harmful content and sending viewers down a “rabbit hole” of radicalization, Nyhan and his co-authors find that recommendations to videos from alternative and extremist channels are rare. Moreover, they find that such recommendations are concentrated among a minority of people who already view alternative and extremist content. We asked Nyhan about his findings and the role that YouTube plays in spreading misinformation.
A substantial amount of recent political science research has focused on the roles of Facebook and Twitter in spreading misinformation. What was your motivation for studying YouTube’s role in misinformation?
YouTube receives much less attention than Facebook and Twitter in popular discourse about social media, but it has a huge audience. More American adults say they use it than any other social media platform. We thought it was important to examine what influence it has and whether the claims that have been made about its harmful effects are supported.
What are some of the misconceptions about YouTube’s role in radicalizing users that your study uncovers? Specifically, what did your study reveal about the “rabbit hole” narrative, the claim that YouTube’s algorithm leads viewers to harmful content?
Many claims have been made that YouTube’s recommendation algorithm takes people down “rabbit holes” of potentially harmful content that could have radicalizing effects. In response to publicly expressed concerns, Google made significant changes to YouTube’s algorithm in 2019. Our study examines the prevalence of exposure to alternative and extremist channels on YouTube among a large sample of American users after these changes and finds that a relatively small share of people are exposed to these channels and that algorithmic recommendations to them for non-subscribers are very rare. The pattern of behavior we observe instead suggests that consumers of alternative and extremist channels overwhelmingly subscribe to these channels and/or follow external links to them, suggesting that people are largely seeking out this content rather than having it forced upon them by algorithms.
Your findings reveal that the consumption of alternative and extremist content is overwhelmingly concentrated among people with high levels of gender and racial resentment. Can you elaborate on this? What are the implications of these findings?
Our data reveal that high levels of gender and racial resentment are associated with watching videos from alternative and extremist channels. This finding is consistent with our interpretation that the audience for potentially harmful content on YouTube is often seeking it out because they find it appealing.
When someone watches an alternative or extremist video, YouTube’s recommendation algorithm suggests additional, similar videos based on user behavior. What might policymakers consider regarding the spread of misinformation via social media platforms?
I’m not sure this is a policy issue, as I am concerned about inappropriate government regulation of political speech. With that said, our findings reinforce those from other platforms showing that consumption of untrustworthy or potentially harmful content is concentrated among small groups of people with extreme viewpoints. The harms of social media are more likely to come from how it inflames and mobilizes these people than from the widespread attitude change among unwitting victims of algorithms that prevailing narratives describe.
Your study identifies a few “superconsumers” who are responsible for most of the consumption of harmful content. Who are these superconsumers? Is this consistent with findings from your other research on social media and misinformation?
“Superconsumers” are people who watch huge amounts of potentially harmful content on YouTube. We borrow the term from Grinberg et al., who describe how 1% of Twitter users were responsible for 80% of fake news consumption in data from 2016. In our case, we similarly found that 1% of YouTube users in our sample were responsible for 80% of consumption of videos from extremist channels.
To learn more about Brendan Nyhan’s work on misinformation and YouTube, visit:
- Current working paper: https://sites.dartmouth.edu/nyhan/files/2022/04/YouTube.pdf
- Article in The New York Times, “The YouTube Rabbit Hole Is Nuanced”
- Podcast episode on Brookings’s TechStream, “Extremist Content on YouTube”