Princeton Dialogues on AI and Ethics
Erik J. Olsson, Chair in Theoretical Philosophy, Lund University, Sweden
Abstract: A popular belief is that the process whereby search engines tailor their search results to individual users, so-called personalization, leads to filter bubbles: ideologically segregated search results that tend to reinforce the user’s prior view (the filter bubble hypothesis). Since filter bubbles are thought to be detrimental to society, there have been calls for further legal regulation of search engines beyond the so-called Right to be Forgotten ruling (EU Court of Justice, C-131/12, 2014). However, the scientific evidence for the filter bubble hypothesis is surprisingly limited. Previous studies of personalization in Google have focused on the extent to which different artificially created users receive different result lists, without taking into account the content of the webpages those lists link to. This paper proposes a methodology that takes content differences between webpages into account, drawing also on the activities of real (as opposed to artificial) users. In particular, the method involves studying the extent to which real users with strongly opposing views on an issue receive search results whose content correlates with their personal views. We illustrate our methodology at work, as well as the non-trivial challenges it faces, through a pilot study of the extent to which Google Search leads to ideological segregation on the issue of man-made climate change. The second, more exploratory, part of the talk is devoted to a discussion of the extent to which filter bubbles, if they exist, now or in the future, are detrimental to democracy.
Co-sponsored by the Center for Information Technology Policy (CITP)