Response to critique on our paper “Algorithmic Extremism: Examining YouTube’s Rabbit Hole of Radicalization”

Anna Zaitsev
Dec 31, 2019


Dear Professor Narayanan,

Thank you for showing interest in our paper. We truly appreciate the attention it has received. All critical feedback is welcome, and we believe it will help us create a better article and improve the quality of the research. We would like to offer some responses to the critique you have posted on Twitter.

We have focused specifically on the algorithm and the claims that it directs viewers towards more extreme content. This paper does not take any stance on whether the content itself might radicalize people. We acknowledge that the anonymity of the data is one of the main limitations of the study. Still, at this point, we have not seen any solution to this problem that would both yield a representative sample and provide enough data.

What our study highlights is how the YouTube algorithm treats independent content creators and how strongly it pushes content created by mainstream media outlets. By no means do we disparage qualitative research efforts, and we welcome any studies on this topic that examine the relationship between algorithms and online human behavior.

In addition, we have submitted this paper for peer review and will also incorporate the feedback we have garnered from the discussion on Twitter in future versions. Many comments have been very useful, and we wish to improve our work based on this “Twitter peer review.”

Anonymous Recommendations

The main criticism is that we have used data based on anonymous recommendations. This is a legitimate concern, which is why it is the first limitation we acknowledge in the paper. However, we do not believe it should be a reason to dismiss our study.

The widely cited and discussed study "Auditing Radicalization Pathways on YouTube," currently at #3 in its category on Altmetric, also used anonymous recommendation data. Their data is consistent with ours, and their results show that recommendations flow away from alt-right content. The anonymity of the recommendation data used in that study has not been seen as a major issue by the scientific community.
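
As a concrete illustration of how anonymous recommendations can be sampled without a logged-in account, the sketch below queries the YouTube Data API's related-videos endpoint. This is a simplified stand-in for illustration only, not the exact pipeline used in either study, and the API's related videos are only a proxy for the "Up next" sidebar:

```python
# Illustrative sketch: sampling anonymous recommendations via the
# YouTube Data API's related-videos endpoint. Not the exact collection
# pipeline used in the paper; shown only to make the approach concrete.
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
API_URL = "https://www.googleapis.com/youtube/v3/search"

def related_videos(video_id, max_results=25):
    """Fetch videos YouTube relates to `video_id`, with no user logged in."""
    params = {
        "part": "snippet",
        "relatedToVideoId": video_id,
        "type": "video",          # required when using relatedToVideoId
        "maxResults": max_results,
        "key": API_KEY,
    }
    resp = requests.get(API_URL, params=params, timeout=30)
    resp.raise_for_status()
    return [
        {
            "videoId": item["id"]["videoId"],
            "channelId": item["snippet"]["channelId"],
            "title": item["snippet"]["title"],
        }
        for item in resp.json().get("items", [])
    ]
```

Running this over a seed list of channel uploads, and mapping each recommended channel to a category, yields the kind of category-to-category flow data both studies analyze.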

Acquiring real user recommendation data and analyzing it with our current methods would be a great future research topic. We have several ideas about how this could be done, including recruiting volunteers who would allow us to install tracking software. It is a promising, albeit difficult, direction for future research.

We also have some real-world feedback that is consistent with the anonymous recommendation data. For example, conspiracy theorists have complained that their recommendations direct traffic towards Fox News. This change can also be observed in the longitudinal data presented in the illustration below: after April 2019, the recommendations for the Conspiracy category change drastically, which seems to correspond with the content creators' claims. We do acknowledge that these accounts are anecdotal, which is why we are pursuing other ways to measure how well the anonymous recommendations correspond to those served to logged-in users.
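
To make this kind of check less anecdotal, one straightforward approach is to aggregate, month by month, where recommendations from a given category flow, and watch for shifts such as the April 2019 change. A minimal sketch follows; the CSV layout and column names are hypothetical, not our actual data format:

```python
# Illustrative sketch: tracking where recommendations from one source
# category flow over time. The file layout and column names here are
# hypothetical placeholders for a scraped-recommendations data set.
from collections import defaultdict
import csv

def monthly_flows(path, source_category):
    """Count recommendations per (month, destination category) for one
    source category, e.g. to see Conspiracy -> mainstream shifts."""
    counts = defaultdict(lambda: defaultdict(int))
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["source_category"] != source_category:
                continue
            month = row["scraped_at"][:7]  # "YYYY-MM"
            counts[month][row["dest_category"]] += 1
    return counts

# Print each month's top destination categories by share of recommendations.
flows = monthly_flows("recommendations.csv", "Conspiracy")
for month in sorted(flows):
    total = sum(flows[month].values())
    shares = {cat: n / total for cat, n in flows[month].items()}
    print(month, sorted(shares.items(), key=lambda kv: -kv[1])[:3])
```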

Refuting the Rabbit-Hole Theory with Late 2019 Data

Another critique that we have seen on social media concerns the timeline of the data collection. We have been collecting data since November 2018. Still, perhaps erroneously, we did not include the longitudinal data set in the paper or article, for two main reasons. First, we grew the data set, adding several hundred new channels during the collection period. Second, the original data followed a more rudimentary categorization, whereas the current classification is much more granular. These significant changes meant that, at the end of 2019, we had an issue with the cleanliness of the earlier data. For this reason, we included only the data that follows the new scheme in our first draft.

Figure: Longitudinal view of the data (with the uncleansed data set)

YouTube has been tinkering with the algorithm over the years, and it is plausible that there was a rabbit hole when Zeynep Tufekci reported on it in the NYT, or when the Guardian reported on it in February 2018 using Guillaume Chaslot's (anonymous) data. However, we are more interested in the persistence of this narrative and have thus presented current data that seems to refute the rabbit-hole claims. Moreover, our data shows no favoritism towards more fringe content since November 2018, yet to this day, this remains a widely promoted understanding.

Conclusion

We think the purely empirical quantification of YouTube's recommendations is meaningful and useful. We believe that studying the algorithm can help inform more qualitative research on radicalization. Radicalization itself is a complex topic, and we do not claim that algorithmic traffic direction can explain the whole phenomenon. On the contrary, we agree with another recent study on this topic by social scientists Munger and Phillips, who propose alternative explanations for radicalization and call for comprehensive research that includes all sides of political YouTube.

YouTube's influence on society is an interesting topic and provides many opportunities for future research. We would like to collaborate with anyone interested in research in this space.

Best regards,

Mark Ledwich & Anna Zaitsev
