Learning from Viral Content

APA

He, K. (2022). Learning from Viral Content. The Simons Institute for the Theory of Computing. https://old.simons.berkeley.edu/talks/learning-viral-content

MLA

He, Kevin. Learning from Viral Content. The Simons Institute for the Theory of Computing, 29 Nov. 2022, https://old.simons.berkeley.edu/talks/learning-viral-content.

BibTeX

@misc{scivideos_23031,
  url = {https://old.simons.berkeley.edu/talks/learning-viral-content},
  author = {He, Kevin},
  language = {en},
  title = {Learning from Viral Content},
  publisher = {The Simons Institute for the Theory of Computing},
  year = {2022},
  month = {nov},
  note = {See \url{https://scivideos.org/simons-institute/23031}}
}
Kevin He (Penn)
Source Repository: Simons Institute

Abstract

(This work is joint with Krishna Dasaratha.) We study learning on social media with an equilibrium model of users interacting with shared news stories. Rational users arrive sequentially and each observes an original story (i.e., a private signal) and a sample of predecessors' stories in a news feed, then decides which stories to share. The observed sample of stories depends on what predecessors share as well as the sampling algorithm, which represents a design choice of the platform. We focus on how much the algorithm relies on virality (how many times a story has been previously shared) when generating news feeds. Showing users more viral stories can increase information aggregation, but it can also generate steady states where most shared stories are wrong. Such misleading steady states self-perpetuate, as users who observe these wrong stories develop wrong beliefs, and thus rationally continue to share them. We find that these bad steady states appear discontinuously, and even a benevolent platform designer either accepts these misleading steady states or induces fragile learning outcomes in the optimal design.
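The dynamics described above can be illustrated with a toy simulation. This is not the authors' model, only a minimal sketch under simplifying assumptions: the state of the world is +1, each "original story" is a binary private signal correct with probability `signal_acc`, the feed draws `sample_size` previously shared stories (with replacement) with probability proportional to `(1 + share_count) ** virality_weight`, and a majority vote over signal and feed stands in for the rational posterior. All parameter names and the sharing rule are illustrative choices, not taken from the paper.

```python
import random

def simulate(n_users=2000, signal_acc=0.7, sample_size=5,
             virality_weight=1.0, seed=0):
    """Toy sketch of sequential sharing with a virality-weighted feed.

    Returns the fraction of correct (+1) stories in the final share pool.
    """
    rng = random.Random(seed)
    pool = []  # shared stories: each entry is [value, share_count]
    for _ in range(n_users):
        # Private signal: correct (+1) with probability signal_acc.
        signal = 1 if rng.random() < signal_acc else -1
        # News feed: sample predecessors' stories, weighting viral ones
        # more heavily as virality_weight grows (0 = uniform sampling).
        if pool:
            weights = [(1 + s[1]) ** virality_weight for s in pool]
            feed = rng.choices(pool, weights=weights,
                               k=min(sample_size, len(pool)))
        else:
            feed = []
        # Belief proxy: majority vote over own signal and feed stories
        # (ties resolved by the private signal).
        vote = signal + sum(s[0] for s in feed)
        belief = 1 if vote > 0 else -1 if vote < 0 else signal
        # Share the own story only if it matches the belief; reshare
        # feed stories that match, incrementing their virality count.
        if signal == belief:
            pool.append([signal, 0])
        for s in feed:
            if s[0] == belief:
                s[1] += 1
    return sum(1 for s in pool if s[0] == 1) / len(pool)
```

Comparing runs with `virality_weight=0.0` (uniform sampling) against larger values gives a feel for the trade-off in the abstract: heavier virality weighting amplifies whichever stories gain early traction, which can aggregate information faster but can also lock in a pool dominated by wrong stories.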