The Problem of Online Manipulation
Recent controversies have led to public outcry over the risks of online manipulation. Leaked Facebook documents discussed how advertisers could target teens when they feel particularly insecure or vulnerable. Cambridge Analytica suggested that its psychographic profiles enabled political campaigns to exploit individual vulnerabilities online. And researchers manipulated the emotions of hundreds of thousands of Facebook users by adjusting the emotional content of their news feeds.

This Article attempts to inform the debate over whether and how to regulate online manipulation of consumers. Part II details the history of manipulative marketing practices and considers how innovations in the Digital Age allow marketers to identify, trigger, and exploit individual biases in real time. Part III surveys prior definitions of manipulation and then defines manipulation as an intentional attempt to influence a subject’s behavior by exploiting a bias or vulnerability. Part IV considers why online manipulation justifies some form of regulatory response. Part V identifies the significant definitional and constitutional challenges that await any attempt to regulate online manipulation directly. The Article concludes by suggesting that the core objection to online manipulation is not its manipulative nature but its online implementation. Therefore, the Article suggests that, rather than pursuing direct regulation, we add the threat of online manipulation to the existing arguments for comprehensive data protection legislation.
Shaun B. Spencer, The Problem of Online Manipulation, 2020 U. ILL. L. REV. 959 (2020).