Algorithms, Emotion, and Learning
The Urgency
It’s difficult to overstate the scale and speed at which algorithmic systems (TikTok, Instagram Reels, YouTube, search and news feeds) shape what learners see, how they feel, and what they believe. These systems are optimized for engagement, not for truth, depth, or balance. As a result, students encounter information environments that reward emotional arousal and novelty, amplify confirmation bias, and make opposing views feel threatening rather than instructive.
Schools and other social and cultural institutions would benefit from a renewed emphasis on algorithmic awareness, emotional self-regulation in media environments, and verification practices. This page outlines the research basis and invites a shift in thinking across curriculum, instruction, and assessment.
What the Research Shows
1. Emotional content spreads faster and farther
Studies of large social networks show that moral-emotional language increases virality within groups, and that false news spreads faster and more broadly than true news, primarily due to human sharing behavior—novelty and arousal drive diffusion, not bots alone. See Brady et al. (PNAS, 2017) and Vosoughi, Roy, & Aral (Science, 2018).
2. ‘Echo chambers’ persist for psychological reasons
Recommendation systems learn our preferences and supply more of the same. Meanwhile, humans prefer cognitive ease and avoidance of dissonance. Experimental evidence indicates that forced exposure to opposing views on social media can increase polarization for some users (Bail et al., PNAS, 2018), helping explain why echo chambers ‘work.’
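A toy simulation can make this feedback loop concrete. The sketch below is not any platform's actual code; the topic names, click rates, and update rule are invented for illustration. It shows how a system that reinforces whatever earns engagement quickly narrows the mix of content it serves.

```python
# Toy illustration (hypothetical topics and click rates): a recommender that
# learns from clicks and re-serves whatever a user engaged with, narrowing
# the feed over time.
import random
from collections import Counter

TOPICS = ["politics", "sports", "science", "music", "cooking"]

def simulate(steps=200, learning_rate=0.1, seed=0):
    rng = random.Random(seed)
    # The system starts uninformed: every topic weighted equally.
    weights = {t: 1.0 for t in TOPICS}
    # Hypothetical user who clicks "politics" most often when it is offered.
    true_click_prob = {"politics": 0.7, "sports": 0.3, "science": 0.2,
                       "music": 0.2, "cooking": 0.1}
    served = Counter()
    for _ in range(steps):
        # Greedily serve the topic the model currently scores highest.
        topic = max(weights, key=weights.get)
        served[topic] += 1
        clicked = rng.random() < true_click_prob[topic]
        # Reinforce topics that earn engagement; mildly decay those that do not.
        weights[topic] += learning_rate if clicked else -learning_rate / 2
    return served

print(simulate())  # after a few hundred steps, one topic dominates the feed
```

Nothing in the loop asks whether the dominant topic is accurate or balanced; engagement alone steers it, which is why interventions aimed only at supplying "the other side" can fall flat.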
3. Engagement metrics shape attention and belief
Industrial-scale recommenders, such as YouTube’s two-stage deep learning system, are tuned to maximize watch time and related engagement (Covington, Adams, & Sargin, RecSys, 2016). That optimization can privilege content that is captivating rather than careful, nudging learners toward quick takes over sustained reasoning.
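To see why the objective matters, here is a minimal, purely illustrative sketch of a two-stage pipeline in the spirit of Covington et al. (2016). The function names, toy scores, and example videos are invented; the real system uses learned neural models rather than hand-set numbers. The point is the ranking criterion: predicted engagement, not accuracy, depth, or balance.

```python
# Illustrative sketch only: candidate generation narrows a huge corpus,
# then ranking orders the shortlist by a predicted-engagement score.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    topical_match: float         # crude relevance to the user's history, 0..1
    predicted_watch_time: float  # the model's engagement estimate, in minutes

def candidate_generation(corpus, k=100):
    """Stage 1: cheaply narrow millions of items to a few hundred."""
    return sorted(corpus, key=lambda v: v.topical_match, reverse=True)[:k]

def ranking(candidates, n=10):
    """Stage 2: order the shortlist by predicted engagement."""
    return sorted(candidates, key=lambda v: v.predicted_watch_time, reverse=True)[:n]

corpus = [
    Video("Calm, careful explainer", topical_match=0.90, predicted_watch_time=4.0),
    Video("Outrage-bait hot take", topical_match=0.80, predicted_watch_time=11.0),
    Video("Nuanced debate recap", topical_match=0.85, predicted_watch_time=6.0),
]

feed = ranking(candidate_generation(corpus))
print([v.title for v in feed])  # the engagement estimate, not care, decides the order
```

In this toy feed the "hot take" outranks the careful explainer simply because its engagement estimate is higher, which is the dynamic the research finding describes.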
4. Attention is finite and easily taxed
Reviews of smartphone and media-multitasking research link heavy, fragmented media use to challenges in attention and interference management, with mixed but concerning evidence across executive-function domains (Wilmer, Sherman, & Chein, Frontiers in Psychology, 2017; Uncapher & Wagner, PNAS, 2018).
5. Verification is teachable—but underdeveloped
Curricula in lateral reading and civic online reasoning improve students’ ability to evaluate sources and resist misinformation, though baseline performance is low and gains require deliberate practice (Stanford History Education Group / Digital Inquiry Group; McGrew, 2023).
Implications for Schools
These dynamics are conditions of learning, not background noise. A thinking-centered approach should:
- Name platform incentives explicitly and contrast them with learning goals.
- Treat emotion as data: help students notice outrage, fear, or delight and understand how these states influence judgment.
- Teach verification routines for AI-generated and edited media (image, audio, video) and normalize lateral reading.
- Design performances of thinking: media audits, evidence trails, counterclaim memos, and reflections on how conclusions change.
Note: Implementation guidance (units, routines, rubrics) appears on the standards pages and in the accompanying resources.
Suggestions
- Adopt a shared vocabulary for algorithms, engagement, bias, and verification across departments.
- Build recurring classroom routines that slow thinking before sharing or reacting.
- Assess how students use evidence and revise claims—not only what facts they recall.