Article by Jacob Kovacs-Goodman*
1 Fordham L. Voting Rts. & Democracy F. 236
In contemporary American politics, Big Tech companies provide sophisticated advertising interfaces that enable anyone to target specific voters by demographic. These companies defend their tools as “neutral” to evade culpability for discriminatory ads. Yet, such microtargeted advertising presents a significant threat to democracy. This Article advances a possible two-pronged solution to bar online platforms from targeting political ads based on a user’s protected class. First, this Article promotes a largely unexplored tactic: extending Title II of the Civil Rights Act into the digital space so that behavior that would be impermissibly discriminatory offline is not permitted online. Second, this Article suggests that impacted users should focus their suits not on ad content, but on platforms’ design choices and the underlying data harnessed for the service of ads. Ultimately, the goal of this Article is to prevent the online voter suppression tactics deployed through these advertising services.
Introduction
Twitter, in the wake of its recent acquisition and subsequent drop in advertising revenue, announced in January 2023 that it would reverse its long-standing ban on political advertisements.1See @TwitterSafety, Twitter (Jan. 3, 2023, 5:14 PM), https://twitter.com/TwitterSafety/status/1610399203481784320 [https://perma.cc/9H2G-XBUT] (“Today, we’re relaxing our ads policy for cause-based ads in the US. We also plan to expand the political advertising we permit in the coming weeks.”). This shift comes on the heels of large technology companies re-platforming accounts that aimed to undermine American democracy in 2020 and 2021.2Indeed, tweets on Twitter aided the January 6, 2021, assault on the U.S. Capitol building. See Sarah S. Seo, Note, Failed Analogies: Justice Thomas’s Concurrence in Biden v. Knight First Amendment Institute, 32 Fordham Intell. Prop. Media & Ent. L.J. 1070, 1070 (2022). Politicians and experts across the political spectrum agree that online platforms need more regulation.3See Ina Fried, Exclusive: U.S. Majority Supports Tech Regulation to Preserve Democracy, Axios (Feb. 10, 2022), https://www.axios.com/2022/02/10/poll-majority-supports-tech-regulation-democracy [https://perma.cc/VE6X-DC2Y]. Yet the largest companies have successfully resisted these efforts.4See Breffni Neary, Democracy and Free Speech Concerns Raised by the End of the Trump Facebook Ban, Fordham L. Voting Rts. & Democracy F. Comment. (Feb. 23, 2023, 1:45 PM), https://fordhamdemocracyproject.com/2023/02/23/democracy-and-free-speech-concerns-raised-by-the-end-of-the-trump-facebook-ban [https://perma.cc/3FYA-PKYW]. For instance, Facebook, Twitter, and Google all lobbied to shelve the Honest Ads Act,5S. 1356, 116th Cong. (2019). a bipartisan bill that would have mandated online political ad transparency equivalent to that required on television or radio.6See, e.g., Kenneth P. Vogel & Cecilia Kang, Senators Demand Online Ad Disclosures as Tech Lobby Mobilizes, N.Y. Times (Oct. 19, 2017), https://www.nytimes.com/2017/10/19/us/politics/facebook-google-russia-meddling-disclosure.html [https://perma.cc/PPN7-8WZL]. In the absence of legislative movement, this term the United States Supreme Court took the unusual step of considering two cases on intermediary liability.7The companion cases are Gonzalez v. Google LLC, No. 21-1333 (U.S. argued Feb. 21, 2023) and Twitter, Inc. v. Taamneh, No. 21-1496 (U.S. argued Feb. 22, 2023). The ultimate contention of this Article is that there are existing means within the law to combat the microtargeting that these big platforms deploy in their political advertising-delivery tools.
Big Tech8“Big Tech” commonly refers to the most dominant companies in the information technology industry, including Alphabet (parent company of Google), Amazon, Apple, Meta (parent company of Facebook), and Microsoft. This Article refers to these companies and their corresponding social media platforms as “platforms.” clinches the lion’s share of hundreds of billions of dollars in ad revenues.9See Sara Fischer, 5 Tech Giants Own Over Half the Global Ad Market, Axios (June 14, 2022), https://www.axios.com/2022/06/14/tech-ad-market-global [https://perma.cc/2FFK-YMH6]. Indeed, companies such as Google and Meta not only maintain hands-on teams to assist advertisers, but also provide interfaces that enable “campaigns to target specific voters, geographic regions, or demographics.”10Yochai Benkler et al., Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics 271 (2018). While these companies have special policies and restrictions on political advertising, studies have demonstrated that, in reality, these policies are porous and easy to circumvent.11See, e.g., Google, Political Content, https://support.google.com/adspolicy/answer/6014595 [https://perma.cc/F247-QU7R] (last visited Mar. 20, 2023) (providing Google’s particular policies for political ads). Media outlets have reported that campaigns can skirt such policies by partnering with third-party data brokers to assist in ads placed with Google. See, e.g., Gerrit De Vynck, Google’s Limits on Political Ads Have a Loophole Trump Could Tap, Bloomberg (Dec. 2, 2019, 6:00 AM), https://bloomberg.com/news/articles/2019-12-02/google-s-limits-on-political-ads-have-a-loophole-trump-could-tap#xj4y7vzkg [https://perma.cc/4YMA-UVU7]. See generally Victor Le Pochat et al., An Audit of Facebook’s Political Ad Policy Enforcement, 31 USENIX Sec. Symp. 607 (2022), https://www.usenix.org/system/files/sec22-lepochat.pdf [https://perma.cc/4F7D-ZYWM] (conducting a study on millions of Facebook ads and finding that political ad policies were ineffective in various ways, such as prohibited advertisers having the ability to run political ads without disclosing them).
With respect to advertisements, tech companies can sidestep culpability by citing the safe harbor in Section 230 of the Communications Decency Act (“CDA”), under which courts cannot find platforms liable as a “publisher or speaker” for any content provided by a third party.1247 U.S.C. § 230(c)(1). For instance, four federal court cases alleged that Facebook’s “employment, housing and credit advertisements discriminated against people based on a variety of protected categories, such as race, age and gender”13Vin Gurrieri, Facebook to Overhaul Ad Targeting Tools to End Bias Suits, Law360 (Mar. 19, 2019), https://www.law360.com/articles/1140646/facebook-to-overhaul-ad-targeting-tools-to-end-bias-suits [https://perma.cc/N43L-X8WE]. in violation of the Fair Housing Act1442 U.S.C. §§ 3604–3606. and the Equal Credit Opportunity Act.1515 U.S.C. § 1691(a)–(f). Facebook, in response, described its ad platform as a prototypical “neutral tool” and claimed that ads on its platform fall squarely within Section 230’s safe harbor because the company does not contribute to their content.16See, e.g., Defendant’s Notice of Motion and Motion to Dismiss First Amended Complaint at 12, 20, Onuoha v. Facebook, Inc., No. 5:16-CV-06440-EJD (N.D. Cal. Mar. 19, 2019) [hereinafter Facebook’s Motion to Dismiss]. Yet “neutral” is an inapt adjective. Nearly a century ago, economist and lawyer Robert Hale diagnosed this sort of façade as “systems advocated by professed upholders of laissez-faire [which] are in reality permeated with coercive restrictions of individual freedom, and with restrictions, moreover, out of conformity with any formula of ‘equal opportunity’ or of ‘preserving the equal rights of others.’”17Robert L. Hale, Coercion and Distribution in a Supposedly Non-Coercive State, 38 Pol. Sci. Q. 470, 470 (1923).
This Article demonstrates how current digital ad tools are far from neutral. In February 2023, Justices Kagan and Gorsuch voiced deep skepticism about the neutrality of algorithms during oral arguments in Gonzalez v. Google.18See, e.g., Transcript of Oral Argument at 101, Gonzalez v. Google, No. 21-1333 (U.S. argued Feb. 21, 2023); Amy Howe, “Not, Like, the Nine Greatest Experts on the Internet”: Justices Seem Leery of Broad Ruling on Section 230, SCOTUSblog (Feb. 21, 2023, 4:31 PM), https://www.scotusblog.com/2023/02/not-like-the-nine-greatest-experts-on-the-internet-justices-seem-leery-of-broad-ruling-on-section-230 [https://perma.cc/JK94-CG3L]. As Part I demonstrates, their worry is borne out by numerous studies revealing that automated ad tools are intentionally crafted to discriminate more effectively based on protected class status, such as race, sex, or sexual orientation.
Accordingly, this Article advocates for two possible solutions to bar online platforms from targeting (or withholding) political advertisements based on a user’s protected class. First, users who are discriminated against in the ads space can establish standing through entitlements currently on the books in civil rights statutes—such as Title II of the Civil Rights Act of 1964 (“CRA”).19Pub. L. No. 88-352, § 201, 78 Stat. 243 (codified at 42 U.S.C. § 2000a). Second, impacted users should file lawsuits that focus on tool design and implementation, thereby sidestepping Section 230 rebuttals. These complaints should not focus on ad content but, rather, platform-created tools and the underlying data harnessed for the service of ads.
First, Part I outlines the broad consensus that the internet’s largest ad intermediaries—like Twitter, Alphabet, Meta, and their subsidiary companies—use algorithms that enable voter suppression through microtargeting. Part II advocates for extending Title II of the CRA into the digital space, a largely unexplored tactic through which platforms would be liable for their discriminatory ad tools.
Part III then addresses the platforms’ inevitable Section 230 defense and explains why this defense should not apply to design choices, which hinge on designer intentionality and market-making strategies. Lastly, Part IV outlines several avenues for enforcement, or for enacting a regime more amenable to enforcement. The goal of this Article is not only to make the internet a safer ecosystem but, importantly, to prevent the online voter suppression tactics deployed through these advertising services.
I. Ad Targeting and Delivery Algorithms
All large digital platforms have designed their ad algorithms to differentiate among categories of users. Investigative journalists at ProPublica, for example, unearthed how Facebook’s ad tools enabled housing advertisers to exclude swaths of users, “such as African Americans, mothers of high school kids, . . . and Spanish speakers” from seeing their ads.20Julia Angwin et al., Facebook (Still) Letting Housing Advertisers Exclude Users by Race, ProPublica (Nov. 21, 2017, 1:23 PM), https://propublica.org/article/facebook-advertising-discrimination-housing-race-sex-national-origin [https://perma.cc/83DS-WX6J]. The same research team found that it could successfully promote posts to the ad category “Jew hater” listed in the Facebook ads interface.21See Julia Angwin et al., Facebook Enabled Advertisers to Reach ‘Jew Haters,’ ProPublica (Sept. 14, 2017, 4:00 PM), https://propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters [https://perma.cc/Z6FK-L3RQ] (finding that Facebook “enabled advertisers to direct” ads to the news feeds of approximately “2,300 people who expressed interest in the topics” such as “Jew hater”). Along with the overt discrimination in the ads user interface (“UI”), patterns of discrimination creep into the ad-serving algorithms themselves. Algorithms aimed at efficiency codify biased human practices, since they rely on a history of human errors for their training data.22See, e.g., Martha Minow et al., Technical Flaws of Pretrial Risk Assessments Raise Grave Concerns, Berkman Klein Ctr. for Internet & Soc’y (2019), https://cyber.harvard.edu/story/2019-07/technical-flaws-pretrial-risk-assessments-raise-grave-concerns [https://perma.cc/DFQ4-DPQ9]. For instance, one study, which constructed a tool to determine whether Google’s ads changed when a user’s class membership changed, found “that females received fewer instances of an ad encouraging the taking of high paying jobs than males.”23Amit Datta et al., Automated Experiments on Ad Privacy Settings, 1 Proc. on Priv. Enhancing Tech. 92, 102 (2015).
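To make this mechanism concrete, consider the following minimal Python sketch. The click rates, group labels, and the train_engagement_model helper are all invented for illustration and are not any platform’s actual code; the sketch shows only how a delivery system that optimizes predicted engagement from historical click logs reproduces the skew in those logs, even when the advertiser imposes no demographic targeting at all.

```python
# Illustrative sketch with invented data: a delivery model "trained" on skewed
# historical click logs reproduces that skew when it ranks users for an ad.
import random

random.seed(0)

# Hypothetical historical click rates for a career ad, skewed by prior practice.
HISTORICAL_CLICK_RATE = {"group_a": 0.08, "group_b": 0.03}

def train_engagement_model(log):
    """Stand-in for model training: estimate per-group click rates from the log."""
    totals, clicks = {}, {}
    for group, clicked in log:
        totals[group] = totals.get(group, 0) + 1
        clicks[group] = clicks.get(group, 0) + int(clicked)
    return {group: clicks[group] / totals[group] for group in totals}

# Build a synthetic click log and "train" the engagement model on it.
log = [(group, random.random() < rate)
       for group, rate in HISTORICAL_CLICK_RATE.items()
       for _ in range(20_000)]
model = train_engagement_model(log)

# The advertiser targets everyone; the budget covers only 30% of the audience,
# so the system shows the ad to the users it scores as most likely to engage.
users = ["group_a" if i % 2 == 0 else "group_b" for i in range(10_000)]
ranked = sorted(users, key=lambda group: model[group], reverse=True)
shown = ranked[:3_000]

for group in ("group_a", "group_b"):
    share = shown.count(group) / users.count(group)
    print(f"{group}: ad shown to {share:.0%} of users")
# Output: group_a sees the ad far more often, despite facially neutral targeting.
```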
With respect to democracy and voter suppression, these UI and algorithm features are pernicious. They threaten a democratic polity in two main ways. The first is when foreign governments or bad-faith nonstate actors use these tools to sow divisiveness. The second is when domestic political candidates aim to shrink the electorate.
In the 2016 general election, for example, Russian intelligence agencies deployed ads on Facebook.24See Spencer Overton, State Power to Regulate Social Media Companies to Prevent Voter Suppression, 53 U.C. Davis L. Rev. 1793, 1795–1803 (2020). The vast majority of these ads used interest-based targeting, focusing on users who had “liked” Black leaders, such as Martin Luther King, Jr., Nelson Mandela, and Malcolm X, resulting in over fifteen million users seeing the ads and one-and-a-half million user clickthroughs.25Id. at 1815. In online advertising, companies analyze “clickthroughs,” which are the user’s act of following a hypertext link to a particular website. See generally Adam Hayes, Click-Through Rate (CTR): Definition, Formula, and Analysis, Investopedia (Dec. 22, 2022), https://www.investopedia.com/terms/c/clickthroughrates.asp [https://perma.cc/N6J8-JVJM]. In a post-election interview, the manager of privacy and public policy at Facebook admitted that, when addressing targeted ads that may impinge on civil rights, “it is a hard thing to identify those ads and to be able to take action on them.”26Gillian B. White, When Algorithms Don’t Account for Civil Rights, The Atlantic (Mar. 7, 2017), https://www.theatlantic.com/business/archive/2017/03/facebook-ad-discrimination/518718 [https://perma.cc/WS46-L3W3] (noting that not all targeted ads target “users based on race or ethnicity . . . and not every type of ad falls within the purview of federal civil-rights law.”). Russia’s Internet Research Agency (“IRA”) focused ads on both sides of the political spectrum and had “substantially higher clickthrough rates than typical Facebook ads.”27Renee DiResta et al., The Tactics & Tropes of the Internet Research Agency 37 (2019). An expert report found that, in the days leading up to the 2016 election, Russia’s IRA targeted Black-community accounts for voter suppression.28Id. at 81. For example, IRA accounts posing as activists posted memes with the caption “Do not vote for oppressors” over pictures of both presidential candidates.29Id. at 82. Russia’s IRA conversely targeted politically right-leaning community accounts with ads and messaging concerning voter fraud and the potential necessity for violence. Id.
Political campaigns have also harnessed these microtargeting tools for advertisements. For example, in 2016, a senior official in the Trump campaign told Bloomberg in the days leading up to the election, “[w]e have three major voter suppression operations under way . . . idealistic white liberals, young women, and African Americans.”30Joshua Green & Sasha Issenberg, Inside the Trump Bunker, with Days to Go, Bloomberg (Oct. 27, 2016, 6:00 AM), https://www.bloomberg.com/news/articles/2016-10-27/inside-the-trump-bunker-with-12-days-to-go [https://perma.cc/DB59-C5WP]. Notoriously, the campaign was able to micro-target Haitian-American users in Miami, Florida, to show ads criticizing the Clinton Foundation’s actions following the 2010 earthquake in Haiti.31See McKenzie Funk, Opinion, Cambridge Analytica and the Secret Agenda of a Facebook Quiz, N.Y. Times (Nov. 19, 2016), https://www.nytimes.com/2016/11/20/opinion/cambridge-analytica-facebook-quiz.html [https://perma.cc/4PBU-MYH9]. But see Lord of the Rings, 2020 and Stuffed Oreos: Read the Andrew Bosworth Memo, N.Y. Times (Jan. 7, 2020), https://www.nytimes.com/2020/01/07/technology/facebook-andrew-bosworth-memo.html [https://perma.cc/WXP4-CJY9] (containing internal Facebook Vice President memo calling Cambridge Analytica claims “snake oil.”). Regardless of whether an external marketing partner like Cambridge Analytica is involved, Facebook ad tools possess the specificity to target voter suppression ads based on race, as in the example presented. Brad Parscale, the digital media advisor on Trump’s presidential campaigns in 2016 and 2020, conveyed that his team would test up to one-hundred-thousand variations of the same ad on test audiences in Facebook’s ads interface.32Ryan Mac & Charlie Warzel, Congratulations, Mr. President: Zuckerberg Secretly Called Trump After the Election, BuzzFeed News (July 20, 2018, 3:51 PM), https://buzzfeednews.com/article/ryanmac/congratulations-zuckerberg-call-trump-election-2016 [https://perma.cc/43L8-GPC6]. A 2017 internal Facebook marketing memo named the Trump campaign an advertising “innovator” that the company should invite to collaborate on Facebook’s own ad strategy. Id. The 2020 Trump reelection campaign relied on the same tactic, such as microtargeting married women in battleground states with paid Facebook ads about crime and policing.33See Jeremy B. Merrill & Jamiles Lartey, Trump’s Crime and Carnage Ad Blitz Is Going Unanswered on Facebook, Marshall Project (Sept. 23, 2020, 5:45 AM), https://www.themarshallproject.org/2020/09/23/trump-s-crime-and-carnage-ad-blitz-is-going-unanswered-on-facebook [https://perma.cc/8ALX-DBSA].
Meta promised that, by March 2022, it would start preventing advertisers from targeting users based on characteristics such as sexual orientation, religion, and political beliefs.34Graham Mudd, Removing Certain Ad Targeting Options and Expanding Our Ad Controls, Meta (Nov. 9, 2021), https://www.facebook.com/business/news/removing-certain-ad-targeting-options-and-expanding-our-ad-controls [https://perma.cc/9DZZ-QKV8]. Yet the targeting of suspect class status will remain just as effective due to algorithmic inferences that reconstruct sensitive characteristics. Such algorithms “effectively use omitted demographic features by combining other inputs . . . correlated with those features.”35Piotr Sapiezynski et al., Algorithms that “Don’t See Color”: Measuring Biases in Lookalike and Special Ad Audiences 1 (arXiv, Working Paper No. 1912.07579, 2022), https://arxiv.org/pdf/1912.07579.pdf [https://perma.cc/YN38-TNND]. Meta has firsthand experience with reconstituting protected class characteristics. For example, in response to the employment and housing lawsuits introduced above,36See supra text accompanying notes 13–16. the company entered into a settlement agreement, under which it “create[d] a separate portal for ads on all its platforms related to employment, housing[,] or credit” to limit advertisers’ targeting capabilities within those sectors.37Gurrieri, supra note 13. Even after implementing this change, however, the new ad tool continued to discriminate at a “statistically indistinguishable” rate from the previous tool.38Sapiezynski et al., supra note 35, at 2. One study compared the results between the old tool, which allowed advertisers to target users based on gender, and the new tool, which purportedly did not.39See id. It concluded that the new tool delivered targeted ads to 91.2 percent of women, compared to the old tool’s 96.1 percent rate.40Id. at 2. This directly demonstrated that the new ad tool reconstituted users’ protected characteristics and continued to deliver ads relying on inferred class status.41See id. at 1.
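The reconstruction dynamic that the Sapiezynski study measures can be pictured with a short Python sketch. The ZIP codes, group labels, and correlation below are invented, and the “model” is deliberately trivial; the point is only that a predictor that never receives the protected attribute can still recover it from a correlated input, here a stand-in zip_code field.

```python
# Illustrative sketch with synthetic data: the protected attribute is omitted from
# the inputs, yet a correlated feature (ZIP code) lets a trivial model recover it.
import random

random.seed(1)

def make_user():
    group = random.choice(["group_x", "group_y"])
    # Invented correlation standing in for residential segregation.
    zip_code = (random.choice(["10001", "10002"]) if group == "group_x"
                else random.choice(["10003", "10004"]))
    if random.random() < 0.1:  # some crossover, so the proxy is imperfect
        zip_code = random.choice(["10001", "10002", "10003", "10004"])
    return {"group": group, "zip_code": zip_code}

train_users = [make_user() for _ in range(5_000)]
test_users = [make_user() for _ in range(1_000)]

# "Training": for each ZIP code, record which group is most common there.
counts = {}
for user in train_users:
    by_group = counts.setdefault(user["zip_code"], {})
    by_group[user["group"]] = by_group.get(user["group"], 0) + 1
proxy_model = {zip_code: max(c, key=c.get) for zip_code, c in counts.items()}

# The protected attribute was never an input, yet it is recovered for most users.
correct = sum(proxy_model[u["zip_code"]] == u["group"] for u in test_users)
print(f"Reconstructed the omitted attribute for {correct / len(test_users):.0%} of users")
```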
Even after the ostensible 2022 policy change that no longer permits ads to target suspect class data directly, Meta still touts its ability to serve ads to “lookalike audiences” with common traits, and to specifically curated audience lists that the advertiser creates.42See Meta, About Lookalike Audiences, https://www.facebook.com/business/help/164749007013531?id=401668390442328 [https://perma.cc/P4YC-E27M] (last visited Mar. 20, 2023). Russia’s IRA used these lookalike audiences for geographic and racial targeting in 2016.43See DiResta et al., supra note 27, at 34.
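In simplified form, a lookalike expansion can be understood as a similarity search against the advertiser’s seed list. The interests, user records, and similarity scoring below are hypothetical rather than Meta’s actual method; the sketch illustrates why demographic skew in a seed list tends to carry over into the expanded audience.

```python
# Illustrative sketch with toy data: a "lookalike" audience is just the users most
# similar to the advertiser's seed list, so the seed's demographic skew carries over.
from collections import Counter

def similarity(user_interests, seed_profile):
    """Score a user by how often their interests appear in the seed list."""
    return sum(seed_profile.get(interest, 0) for interest in user_interests)

# Hypothetical seed list uploaded by an advertiser (e.g., prior engagers).
seed = [
    {"interests": {"civil_rights", "gospel", "local_news"}},
    {"interests": {"civil_rights", "hbcu_sports"}},
    {"interests": {"gospel", "hbcu_sports", "cooking"}},
]
seed_profile = Counter(interest for member in seed for interest in member["interests"])

# Platform users, whose interests happen to correlate with race in this toy data.
users = [
    {"id": 1, "interests": {"civil_rights", "gospel"}},
    {"id": 2, "interests": {"golf", "sailing"}},
    {"id": 3, "interests": {"hbcu_sports", "local_news"}},
    {"id": 4, "interests": {"skiing", "cooking"}},
]

# Expand to the two most similar users: the lookalike audience.
lookalike = sorted(users, key=lambda u: similarity(u["interests"], seed_profile),
                   reverse=True)[:2]
print("Lookalike audience:", [u["id"] for u in lookalike])  # users 1 and 3
```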
Safeguards should be in place against algorithms serving political ads based on proxies for suspect class status. Parts II and III articulate such a framework.
II. Expanding Title II Into the Digital Arena
While Title VII of the CRA,4442 U.S.C. § 2000e. the Fair Housing Act,4542 U.S.C. §§ 3604–3606. and the Equal Credit Opportunity Act4615 U.S.C. § 1691. have provided grounds for bringing discrimination claims, they reach only employment, housing, and credit advertising, respectively. Granted, these are areas that have profound ramifications for individuals’ well-being. But those statutes would not provide for standing in other instances, such as the Russian IRA’s voter suppression ads that targeted Black users.47See supra text accompanying notes 24–29. To achieve a wider ambit, one solution is to argue that platforms are places of “public accommodation” within the meaning of Title II of the CRA.48Since Title II does not cover sex discrimination, however, state-level sex discrimination laws would need to play a supplementary role in this framework.
Under Title II, “[a]ll persons shall be entitled to the full and equal enjoyment of . . . any place of public accommodation . . . without discrimination or segregation on the ground of race, color, religion, or national origin.”4942 U.S.C. § 2000a(a). Congress passed the law to prevent commercial entities, ostensibly open to the public, from systematically discriminating against Black patrons.50See Christine J. Back, Cong. Rsch. Serv., R46534, The Civil Rights Act of 1964: An Overview 11–13 (2020).
Some scholars have suggested that Title II naturally covers modern “accommodations” platforms like Airbnb, Uber, and Lyft.51See, e.g., Nancy Leong & Aaron Belzer, The New Public Accommodations: Race Discrimination in the Platform Economy, 105 Geo. L.J. 1271, 1296–1301 (2017); Bryan Casey, Title 2.0: Discrimination Law in a Data-Driven Society, 2019 J. L. & Mob. 36, 37–40 (2019). Yet the statute’s wording extends further than that, including, within its definition of covered establishments, any “other place of exhibition or entertainment” that affects commerce.5242 U.S.C. § 2000a(b)(3). Indeed, social media platforms appear to be such places of exhibition and entertainment that affect commerce.
The Americans with Disabilities Act (“ADA”), like Title II, contains a similar but more expansive list of places of “public accommodation.”5342 U.S.C. § 12181(7). Presently, circuits are split as to whether non-physical spaces, like online platforms, constitute places of public accommodation.54See Seo, supra note 2, at 1100. In the ADA context, several circuits have examined the statute’s congressional intent and legislative history and applied it to non-physical spaces.55See id. at 1100–01. Other circuits have instead found that places of public accommodation are limited to physical spaces.56“[T]hese circuits rely on the enumerated categories explicitly listed in the statute and specifically note that all examples are physical spaces.” Id. at 1101.
By contrast, there have been very few cases about whether Title II’s “public accommodation” provision applies to online platforms. Those few cases rest on shaky grounds. In 2020, a federal district court in California cited a 2003 federal district court case from Virginia about AOL chatrooms to support the proposition that the CRA should be limited to physical facilities.57See Lewis v. Google LLC, 461 F. Supp. 3d 938, 956–57 (N.D. Cal. 2020) (citing Noah v. AOL Time Warner, Inc., 261 F. Supp. 2d 532, 541 (E.D. Va. 2003)). Additionally, in 2022, a federal district court in Pennsylvania relied on a 2010 ADA case regarding a “prostitute’s credit card processing terminal”58Peoples v. Discover Fin. Servs., 387 F. App’x 179, 181 (3d Cir. 2010). to hold that Title II is limited to physical structures.59Elansari v. Meta, Inc., 2022 U.S. Dist. LEXIS 178399, at *8 (E.D. Pa. Sept. 30, 2022) (citing Discover Fin. Servs., 387 F. App’x at 181).
While the Pennsylvania court conflated Title II with the ADA, there is a good argument that the two should be analyzed in tandem. The jurisprudence of the former intersects with, but deviates markedly from, its counterpart language in the latter.6042 U.S.C. §§ 12101–12213. While ADA claims were initially limited to physical premises, many courts extended coverage to platforms like Netflix and Scribd over the last decade.61See Casey, supra note 51, at 49 (“[S]ince its passage, the ADA’s definition has managed to keep pace with our increasingly digital world.”); Nat’l Ass’n of the Deaf v. Netflix, Inc., 869 F. Supp. 2d 196, 200–02 (D. Mass. 2012) (holding that Netflix, a video streaming service, constitutes a “public accommodation” even if it lacks a physical nexus); Nat’l Federation of the Blind v. Scribd, Inc., 97 F. Supp. 3d 565, 576 (D. Vt. 2015) (holding that Scribd, an online repository of e-books and audiobooks, constitutes a “public accommodation” under the ADA). Specifically, federal district courts in New York State and Vermont have held, respectively, that a “commercial website itself qualifies as a place of ‘public accommodation’”62Del-Orden v. Bonobos, Inc., 2017 U.S. Dist. LEXIS 209251, at *19 (S.D.N.Y. Dec. 20, 2017). and that it is “absurd” to conclude that people who use an online platform fail to qualify for ADA protections.63Nat’l Fed’n of the Blind, 97 F. Supp. 3d at 570. Accord Panarra v. HTC Corp., 598 F. Supp. 3d 73, 79 (W.D.N.Y. 2022) (failing to extend the language to websites would “‘exclud[e] businesses that sell services through the Internet from the ADA’” and “such an interpretation would ‘run afoul of the purposes of the ADA and would severely frustrate Congress’s intent’” (citation omitted)).
Title II, like the ADA, should apply to online platforms. In 2020, a federal district court in West Virginia dismissed a multifaceted claim brought by a pro se plaintiff, who alleged that his ban from Twitter was unlawful.64See Wilson v. Twitter, No. 3:20-CV-00054, 2020 WL 3410349, at *1–2 (S.D.W. Va. May 1, 2020). While the court dismissed his claim on grounds unrelated to Title II, it recognized at length that Title II should extend to Twitter: “[E]xempting internet services from Title II’s protections entirely would . . . render large swaths of the economy and places of public association immune to the protections provided by the CRA.”65Id. at *9. Although the court did not explicitly acknowledge the then-ongoing lockdowns at the beginning of the COVID-19 pandemic, it expressed concern that “more and more services and economic opportunities [will] migrate to virtual spaces.”66Id. Significantly, the U.S. Supreme Court has expressed parallel logic in the domain of interstate sales tax, holding that, “[b]etween targeted advertising and instant access to most consumers via any internet-enabled device, ‘a business may be present in a [s]tate in a meaningful way without’ that presence ‘being physical in the traditional sense of the term.’”67South Dakota v. Wayfair, Inc., 138 S. Ct. 2080, 2095 (2018) (quoting Direct Marketing Ass’n v. Brohl, 135 S. Ct. 1124, 1135 (2015) (Kennedy, J., concurring)).
Title II and comparable state laws offer meaningful paths to achieve standing in suits that can allege discriminatory ad delivery on online platforms. Such suits would force platforms to reckon with the tools they provide to advertisers that enable voter suppression. In response to any such discrimination allegations, technology companies always raise a Section 230 defense. Part III outlines a strategy for countering such a defense.
III. Design Choice, Not Speech
This Part argues that, for First Amendment purposes, ad-delivery algorithms are design choices rather than speech. Secondarily, it claims that, even if the inquiry is into the underlying datum and not into the tool, that datum should not be considered speech, or should be considered only commercial speech.
Most scholars focusing on the disparate impact of discriminatory algorithms advocate for legislative carve-out exceptions from Big Tech’s constant Section 230 invocation.68See, e.g., Bertram Lee, Where the Rubber Meets the Road: Section 230 and Civil Rights, Pub. Knowledge (Aug. 12, 2020), https://publicknowledge.org/where-the-rubber-meets-the-road-section-230-and-civil-rights [https://perma.cc/X337-MRQD] (arguing platforms should be liable for ad content); Overton, supra note 24, at 1827 (suggesting legislation to carve-out voter suppression from Section 230 protection); Olivier Sylvain, Discriminatory Designs on User Data, Knight First Amend. Inst. (Apr. 1, 2018), https://knightcolumbia.org/content/discriminatory-designs-user-data [https://perma.cc/PA3K-ZXLV] (advocating a carve-out for nonconsensual pornography). Due to the practical limits of congressional gridlock, as well as the specter of slippery slope arguments regarding the degree to which Section 230 ought to be amended, this proposal instead highlights how companies like Twitter, Meta, and Alphabet intentionally design their ad-delivery tools, transforming the companies into “publisher[s]” and dissolving their Section 230 immunity.69Once an online platform is properly recognized as a “publisher,” the safe harbor no longer applies. 47 U.S.C. § 230(c)(1).
Meta asserts that it requires advertisers to attest to their compliance with antidiscrimination laws, and so is itself a neutral host, ignorant of any discriminatory advertising content.70See Facebook’s Motion to Dismiss, supra note 16, at 12. Online platforms, however, design their ad-delivery algorithms to determine ad impressions through auctions based on game theory that reference users’ suspect class data—a format that is neither inevitable nor neutral.71See Salomé Viljoen et al., Design Choices: Mechanism Design and Platform Capitalism, 8 Big Data & Soc’y 1, 7 (2021) (“[A] Facebook researcher said the company was making trillions of decisions daily about how to price, rank, and deliver ads[,]” describing “the company’s advertising engine . . . as being powered by an integration of machine learning and auction theory.”). Meta publicly describes its auction process in the following way: “For each ad impression, our ad auction system selects the best ads to run based on the ads’ maximum bids and ad performance. All ads across Meta technologies such as Facebook compete against each other in this process, and the ads that our system determines are most likely to be successful will win the auction.” Meta, Ad Auction, https://facebook.com/business/help/163066663757985 [https://perma.cc/KA6S-EU34] (last visited Mar. 20, 2023). Google’s chief economist has called online advertising “a poster child for algorithmic mechanism design,” referring to a modern economic approach for efficient object allocation.72Viljoen et al., supra note 71, at 2. On mechanism design, see Dirk Bergemann & Stephen Morris, An Introduction to Robust Mechanism Design, 8 Founds. & Trends in Microeconomics 169, 171–74 (2012). In the case of advertising, those objects happen to be opportunities, like housing, or matters of public policy, like whether or not to vote. Ad-delivery algorithms dictate which user receives which impression at what time.73See Overton, supra note 24, at 1817. Platforms themselves assess how to deliver ads through a series of consequential design choices well beyond advertiser or user control. Specifically, platforms assess which users are most likely to engage with the ad content and what each advertiser’s budget is, via an auction.74See id.
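Meta’s public description can be reduced to a simple scoring rule for purposes of illustration. The bids, segments, and engagement estimates in the Python sketch below are invented, and the formula is illustrative rather than Meta’s actual one; the sketch shows how the platform’s own per-user engagement estimate, not any advertiser instruction, can determine which ad wins a given impression.

```python
# Illustrative sketch with invented numbers: each impression is "auctioned" by
# scoring candidate ads on bid times the platform's estimated engagement for the
# particular user segment the platform has inferred.
from dataclasses import dataclass, field

@dataclass
class Ad:
    name: str
    max_bid: float  # advertiser's bid per impression
    engagement_by_segment: dict = field(default_factory=dict)  # platform's estimates

def run_auction(candidate_ads, user_segment):
    """Pick the ad with the highest bid-times-estimated-engagement score."""
    def score(ad):
        return ad.max_bid * ad.engagement_by_segment.get(user_segment, 0.01)
    return max(candidate_ads, key=score)

ads = [
    Ad("suppression_meme", max_bid=0.50,
       engagement_by_segment={"inferred_group_a": 0.12, "inferred_group_b": 0.02}),
    Ad("sneaker_sale", max_bid=1.00,
       engagement_by_segment={"inferred_group_a": 0.04, "inferred_group_b": 0.05}),
]

# Same ads, same bids: the winner flips with the platform's inference about the user.
print(run_auction(ads, "inferred_group_a").name)  # suppression_meme (0.06 > 0.04)
print(run_auction(ads, "inferred_group_b").name)  # sneaker_sale (0.05 > 0.01)
```

In this sketch, the decisive quantity is an estimate that the platform alone constructs and controls, which is precisely the kind of design choice this Part argues falls outside the conduct Section 230 was meant to immunize.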
While some scholars have sought to disaggregate ad-targeting from ad-delivery algorithms to strip speech out of the discussion entirely, the algorithms are, in fact, more insidious than a simple “call-routing system.”75Pauline T. Kim, Manipulating Opportunity, 106 Va. L. Rev. 867, 928 (2020). Platforms intentionally craft algorithms, in accordance with the American economic theory of mechanism design, and then harness sensitive personal characteristics or proxies for those characteristics to conduct auctions.76See generally Bergemann & Morris, supra note 72. See also Clara Hendrickson & William A. Galston, Big Tech Threats: Making Sense of the Backlash Against Online Platforms, Brookings Inst. (May 28, 2019), https://www.brookings.edu/research/big-tech-threats-making-sense-of-the-backlash-against-onlineplatforms [https://perma.cc/E353-W3FL]. When fully mapped out, it is difficult to see how these design choices were inevitable. Digital ad intermediaries use selectable menus that are more akin to the intentional design choices at issue in Fair Housing Council v. Roommates.com,77521 F.3d 1157 (9th Cir. 2008). a case that represents a rare erosion of Section 230 supremacy. There, the Ninth Circuit held that when a platform requires users to select from a set menu of gender and sexual orientation options in creating their profiles, the website “becomes much more than a passive transmitter of information provided by others; it becomes the developer, at least in part, of that information.”78Id. at 1166. Thus, safe harbor immunity did not apply.79See id.
Meta and Alphabet’s ad frameworks are far more advanced and covert than Roommate’s menus, and correspondingly more intrusive. Just as “Roommate both elicits the allegedly illegal content and makes aggressive use of it in conducting its business,” so too do platforms. As ad intermediaries, they proactively extract users’ suspect class status and sell advertisements based on it, in violation of Title II.80Id. at 1172. Users must accept such data mining policies as a prerequisite to using platform services—a design characteristic the court found problematic in Roommates.
This Article focuses on design choice as opposed to speech. There are other compelling arguments that using protected class data—or any data for that matter—in ad-delivery services ought not be considered speech. By homing in on data as a commodity at point-of-use in auctions, this proposal could lead to reduced First Amendment scrutiny, since it does not trigger the threshold inquiry courts must make prior to applying the Central Hudson balancing test81See Central Hudson Gas & Elec. Corp. v. Pub. Serv. Comm’n, 447 U.S. 557 (1980) (adopting test for determining whether a regulation of commercial speech satisfies First Amendment scrutiny). developed by the Supreme Court.82See Julie Cohen, Examined Lives: Informational Privacy and the Subject as Object, 52 Stan. L. Rev. 1373, 1413 (2000) (describing the threshold requirement as “the presence of ‘communication’ at the collection, processing, and exchange stages”). Professor Julie Cohen notes that, “[i]n the sense that counts for First Amendment purposes, personally-identified data is not collected, used or sold for its expressive content at all; it is a tool for processing people, not a vehicle for injecting communication into the ‘marketplace of ideas.’”83Id. at 1411. But see Salomé Viljoen, A Relational Theory of Data Governance, 131 Yale L.J. 573, 577 (2021) (arguing that commodification of data erodes personal well-being). Additionally, Professor Jack Balkin characterizes the underlying data that feed algorithms as “a commodity, like widgets or soybeans.”84Jack M. Balkin, Information Fiduciaries and the First Amendment, 49 U.C. Davis L. Rev. 1185, 1196 (2016).
In the wake of the Supreme Court’s decision in Sorrell v. IMS Health Inc.,85564 U.S. 552, 567 (2011). commentators worry that the Court has ushered in an era increasingly deferential to commercial speech.86See, e.g., Balkin, supra note 84, at 1185. But in Sorrell itself, the Court explicitly stated that “a ban on race-based hiring may require employers to remove ‘White Applicants Only’ signs.”87Sorrell, 564 U.S. at 567 (citations omitted). The Court recently clarified Sorrell in a way that confirms the constitutionality of this proposal: “the Vermont law in Sorrell, ‘does not simply have an effect on speech, but is directed at certain content and is aimed at particular speakers.’” Barr v. Am. Ass’n of Political Consultants, 140 S. Ct. 2335, 2347 (2020). Neither of those is the case here. An ad-delivery algorithm that targets a Black user for a voter suppression advertisement is the contemporary variation of posting such a sign. Based on this Part’s argument that platforms’ ad-delivery auctions weaponize harvested suspect class data in violation of Title II, there are several avenues to rein in microtargeting and usher in a more democratic online ecosystem.
IV. Legal and Non-Legal Solutions
Part II proposed that users who are discriminated against in the ads space should have standing under Title II of the CRA. Part III proposed that platforms’ ad algorithms, structured as a menu of auction design choices, do not constitute speech or, at the very least, constitute only commercial speech that may permissibly be regulated. Taken together, these proposals provide a potential avenue to prevent the continued use of proxies for race, gender, or sexual orientation in discriminatory ad practices.
The judiciary is the most appropriate enforcement mechanism for implementing this framework. Plaintiffs can bring civil rights class action suits under Title II to combat discrimination on online platforms. Alternatively, Title II empowers the U.S. Attorney General to bring an enforcement action when there is “reasonable cause to believe that any person or group of persons is engaged in a pattern or practice of resistance to the full enjoyment of any of the rights” under Title II.88See Back, supra note 50, at 25 (quoting 42 U.S.C. § 2000a-5(a)). If Congress were to intervene, it could do so through data auditors—along the lines of the General Data Protection Regulation’s Data Protection Authorities—as opposed to Section 230 reform.89See Regulation (EU) 2016/679, of the European Parliament and the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation), at art. 51. It could also make minor emendations to Title II (and the ADA) to explicitly include modern places of entertainment and websites in the list of “public accommodations.” This Article’s Title II and product design framework is also compatible with proposed disruptive tech governance regimes, such as the information fiduciary model.90See generally Jack M. Balkin & Jonathan Zittrain, A Grand Bargain to Make Tech Companies Trustworthy, The Atlantic (Oct. 3, 2016), https://www.theatlantic.com/technology/archive/2016/10/information-fiduciary/502346 [https://perma.cc/3X4Y-PPJG].
Ad intermediaries themselves could take proactive measures to counteract bad algorithmic practices. Some computer scientists have proposed methods for exposing the logic behind an algorithm’s decision to explain the differentiating factor in a local decision—as opposed to untangling the logic of the entire system.91See generally Finale Doshi-Velez et al., Accountability of AI Under the Law: The Role of Explanation (arXiv, Working Paper No. 1711.01134, 2017), https://arxiv.org/pdf/1711.01134.pdf [https://perma.cc/E48F-8E2N]. But see Cynthia Rudin, Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead (arXiv, Working Paper No. 1811.10154, 2019), https://arxiv.org/pdf/1811.10154.pdf [https://perma.cc/34MF-WMQS]. At a minimum, companies could conduct extensive ethics training for those working on these tools to prevent algorithms from gap-filling on the basis of proxies.
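As a rough illustration of such a local explanation, the Python sketch below perturbs one input at a time to a toy delivery model and reports which single change flips a particular decision. The model, features, and thresholds are invented; the point is the method’s shape: explain one decision’s differentiating factor rather than the entire system.

```python
# Illustrative sketch with a toy model: a perturbation-style local explanation asks
# which single input change flips one specific delivery decision.
def toy_delivery_model(user):
    """Stand-in for an opaque scorer; a real system would be a learned model."""
    score = 0.0
    score += 0.4 if user["zip_code"] in {"10001", "10002"} else 0.0  # proxy feature
    score += 0.3 if "political_news" in user["interests"] else 0.0
    score += 0.1 if user["device"] == "mobile" else 0.0
    return score >= 0.5  # True means "deliver the ad to this user"

def explain_locally(model, user, alternatives):
    """Report which single-feature substitutions change this one decision."""
    baseline = model(user)
    flips = []
    for feature, options in alternatives.items():
        for value in options:
            if model({**user, feature: value}) != baseline:
                flips.append((feature, user[feature], value))
    return baseline, flips

user = {"zip_code": "10001", "interests": {"political_news"}, "device": "desktop"}
decision, flips = explain_locally(
    toy_delivery_model, user,
    {"zip_code": ["10003"], "interests": [set()], "device": ["mobile"]},
)
print("ad delivered:", decision)
for feature, old, new in flips:
    print(f"changing {feature} from {old!r} to {new!r} flips the decision")
```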
One potential critique of this framework is that it might have negative knock-on effects for “beneficial” targeted advertising. For example, organizers of a Black Lives Matter rally might want to target specific users with ads based on a suspect class datum. A potential response is that platforms could whitelist 501(c)(3) organizations and similar groups. Alternatively, to prevent the exception from swallowing the rule, one could argue that even nonprofits ought not discriminate in this fashion in targeting. The potential positives of this Article’s proposal seem, on balance, to outweigh the negatives.
Conclusion
In its current form, microtargeted political advertising presents a significant threat to democracy. The market is only expanding with Twitter’s recent announcement that it, too, will serve such ads. The effects of microtargeted political ads are “extremely hard to measure . . . [and] we will almost certainly not be able to measure their impact until it is too late.”92Benkler et al., supra note 10, at 385. This Article proposes one means of counteracting this threat, based on underutilized existing tools. Title II lawsuits that sidestep Section 230 defenses would prevent platforms from monetizing certain categories of data. This strategy would potentially remove the incentive to collect such data about those proscribed categories in the first place. As a result, our platform feeds might be less targeted, less enticing, and less inflammatory. But they would also be less easily exploitable by those who want to undermine democracy.
* Harvard Law School Redstone Fellow in Public Service; J.D. 2022, Harvard Law School.