
Deepfakes and Democracy: The Case for Uniform Disclosure in AI-Generated Political Advertisements 

By Roberto P. Leito
May 23, 2025, 11:30 AM

I. Introduction

The rise of artificial intelligence (“AI”) has brought significant changes to all aspects of everyday life. As the 2024 United States (“U.S.”) presidential election cycle demonstrated, these changes have included the use of AI in political campaigns. While AI can be a helpful tool, it can also pose a unique threat to election security when political advertisements depict deepfakes. Deepfakes are videos, audio, or images that mimic a real person’s likeness, often depicting the person saying or doing something they never did. Deepfakes threaten democracy by spreading misinformation through fictitious content that appears real.[1]

AI threatens to fundamentally alter how “the people” gauge the credibility of their news.[2] AI can help campaigns craft on-brand messaging, but it also can generate false or misleading content and exhibit racial, sexist, or political biases.[3] With federal agencies slow to act, states like New York and Florida have crafted their own approaches to fill the gaps and regulate AI usage in political content. But, because the First Amendment protects political advertisements,[4] state and federal governments’ central challenge is imposing constitutionally permissible regulation. One way states have chosen to regulate AI in political ads is through disclosure mandates, which are arguably the most effective means of protecting voters from election misinformation.

II. AI’s Role in Disseminating Election Misinformation

Recent elections have demonstrated AI’s potent ability to spread misinformation, requiring an expeditious response from state and federal officials to protect future elections from interference. For instance, AI’s role in the 2024 presidential primary season is a cautionary tale of how easily it can mislead voters. Just two days before the New Hampshire primary, the contest that formally begins the election cycle, thousands of Democratic New Hampshire voters received a robocall impersonating then-President Joe Biden.[5] The call, orchestrated by a Louisiana Democratic political consultant who claimed he wanted to publicize the dangers of AI, instructed voters not to vote in the primary because “‘your vote makes a difference in November, not this Tuesday.’”[6] The Federal Communications Commission accordingly fined the perpetrator $6 million.[7] In the Republican presidential primary in 2023, Governor Ron DeSantis’s campaign shared doctored images of former President Donald Trump embracing Doctor Anthony Fauci, the former Director of the National Institute of Allergy and Infectious Diseases.[8] The campaign posted the photos to attack Trump on X (formerly Twitter), which drew backlash from Trump’s supporters.[9]

III. Constitutional Constraints on Regulation

Regulating AI in political ads implicates First Amendment protections because such ads are political speech, making legislation challenging to craft.[10] Among the standards that courts employ to gauge a statute’s constitutionality, courts apply the most rigorous, strict scrutiny, to restrictions on political speech.[11] When a law directly impedes political speech, strict scrutiny “[requires] the Government to prove that the restriction ‘furthers a compelling interest and is narrowly tailored to achieve that interest.’”[12] A law typically burdens political speech when it regulates that speech based on its content or message.[13] The government rarely defends laws successfully under strict scrutiny because the standard is highly demanding, requiring a narrowly tailored law that uses the least restrictive means available to further the government’s interest.[14] Because outright bans are unlikely to pass constitutional muster, legislatures must consider disclosure requirements instead, as they impose fewer burdens on free speech. As the U.S. Supreme Court held in Citizens United v. FEC, “disclosure requirements may burden the ability to speak, but they ‘impose no ceiling on campaign-related activities.’”[15] Such requirements may include a visible disclaimer, or a spoken one if the ad is in audio format, announcing AI’s full or partial use in political ads.[16] They could also include an accessible link directing viewers to the original version.[17]

IV. Regulatory Gaps in Federal Law

Despite disclosure’s promise as a solution to the dangers of AI election deepfakes, federal regulatory inaction has prevented national reform, leaving voters vulnerable. Congress has likewise failed to take substantive action: only one bipartisan bill, the Protect Elections from Deceptive AI Act, has been introduced in the current 119th Congress.[18] House Republicans recently went further, including a provision in their proposed tax bill that would ban state AI regulation for ten years.[19] Despite the federal government’s complacency, the public has a legitimate interest in preventing AI-generated political ads from spreading misinformation.

The Federal Election Commission (“FEC”) contemplated rulemaking to address the issue but declined to act. On July 13, 2023, the consumer rights nonprofit Public Citizen petitioned the FEC for a rulemaking clarifying whether 52 U.S.C. § 30124, which bars fraudulent misrepresentation by a federal political candidate or their agent, prohibits AI-generated deepfake campaign communications.[20] Stakeholders, including thirty-one members of Congress, submitted over 2,400 public comments in favor of regulation.[21] The FEC instead concluded that the statute’s fraudulent-misrepresentation language was neutral and broad enough to enforce against deceptive AI uses on a case-by-case basis.[22]

            On July 25, 2024, the Federal Communications Commission (“FCC”) also announced a proposed rulemaking requiring AI-use disclosure in political advertisements on broadcast television and radio channels.[23] The rule would have required broadcasters to inquire if their content was AI-generated, disclose that use through an on-air announcement, and include an AI-disclosure notice in their online political files for the advertisements.[24] This rule remains in proposed status. 

V. Regulatory Gaps in State Law

While federal reform is stalled, various states have enacted a patchwork of laws to protect voters from AI-driven election misinformation. States that passed such laws in 2024 include New York, Florida, Arizona, Minnesota, and Texas.[25] While these laws show valuable initiative, they are inconsistent and, at times, constitutionally vulnerable. As a whole, these regulations underscore the importance of uniform disclosure mandates.

While New York’s AI disclosure law balances effectiveness and constitutionality, it has several limitations. The New York legislature amended the state election law in 2024, updating the definition of “materially deceptive media” to include “any image, video, audio, text, or any technological representation” that did not actually occur or was altered significantly from how it happened, is indistinguishable from a real person, or was created with AI.[26] Any creator knowingly publishing deceptive political content must disclose this in two ways. If the media is published in text, the disclosure must read “[t]his (image, video, or audio) has been manipulated,” and if the media is auditory, the same disclosure must be spoken aloud at the beginning and every two minutes thereafter.[27] If a deceptive advertisement harms a candidate, they may sue to remove it and recover attorney’s fees.[28] While this law demonstrates progress in addressing the dangers of AI in elections, disclosure is only required when the publisher knows the material is deceptive.[29] The law rightfully contains exceptions for websites hosting third-party content, such as social media sites, and for political satire.[30] These exceptions increase the constitutional strength of the law if challenged, as they may show regulation by a less restrictive means.[31] However, most Americans get their news from social media sites,[32] so while this is a lawful exception,[33] it limits the law’s ability to address misinformation where it matters most.

While New York’s law represents progress, its deterrence mechanisms fall short. By contrast, Florida’s recently enacted statute more fully accomplishes the goal of deterring misinformation in political advertisements containing AI-generated material. Unlike New York’s, the Florida law encompasses all political advertisements containing images, audio, and video fully or partially created with AI.[34] However, the ad must depict a person doing something they did not do, and if it was created with the intent to deceive, a disclaimer must be publicly displayed in readable, bold font: “[c]reated in whole or in part with the use of generative artificial intelligence (AI).”[35] For audio, the disclaimer must be read aloud at the beginning and end of a recording.[36] The law took effect on July 1, 2024,[37] and a violation is a misdemeanor, although candidates may also file a complaint with the Florida Elections Commission to obtain a civil fine against the violator.[38] Florida’s law is more aggressive and provides greater deterrence, but at the expense of constitutional strength. It contains no exceptions for organizations governed by Section 230 of the Communications Decency Act (“Section 230”)[39] or for news and satire, which the Supreme Court has repeatedly protected.[40] Similarly, the law’s intent-to-deceive requirement is likely difficult to prove, complicating enforcement.

Disclosure remains the preferred method for ensuring information security in AI-generated political advertisements beyond New York and Florida. At the time of writing, twenty-four states regulate political deepfakes, relying in part on disclosures alongside other techniques.[41] In 2024, Minnesota amended its deepfake statute to prohibit all political deepfakes within ninety days of an election if the creator did not obtain the candidate’s consent and intended to influence the election results.[42] Like Florida’s, Minnesota’s law includes criminal penalties and contains no exceptions,[43] similarly imperiling its constitutionality. Minnesota’s approach confirms that disclosure, not outright prohibition, is key. By contrast, a 2024 Arizona law requires public disclosure for AI-generated political material published within ninety days of an election.[44] This timing trigger may strengthen a uniform disclosure law’s ability to survive strict scrutiny in a constitutional challenge.[45] However, any disclosure law must include exceptions for satire, news, and organizations governed by Section 230.

VI. A Uniform Disclosure Mandate Is the Most Constitutional and Practical Solution

            Stalled or inconsistent regulatory and legislative responses expose U.S. voters to misinformation campaigns, potentially harming electoral integrity and distorting the meaning of “the people” exercising their right to vote. States like New York offer a limited solution in favor of constitutional stability, while others, like Florida, favor a more aggressive approach at the Constitution’s expense. Yet, states such as Arizona employ a more balanced approach.

A. Doctrinal Support

The U.S. Supreme Court agrees that disclosures are a sound method of regulating political speech. The Court in Citizens United opined that disclosure requirements do not “prevent anyone from speaking.”[46] They merely entitle the public to information about the statement, or in this case, the AI-generated material created for political advertising. Disclosure “could be justified based on a governmental interest in ‘provid[ing] the electorate with information,’”[47] so long as the government narrowly tailors the requirement to that interest. And because the Court views disclosure as “a less restrictive alternative to more comprehensive regulations of speech,”[48] Arizona’s disclosure law matters: it demonstrates that such a less restrictive means is available.

B. Digital Platform Limitations and Section 230

AI political ad disclosures should apply to candidates, political parties, and other groups disseminating political communications.[49] If AI is used in a misleading way, such that it would lead a reasonable viewer to an understanding different from what actually occurred, there must be a written or auditory disclosure stating that the ad is wholly or partially AI-generated.[50] While creators could evade disclosure by disseminating the communication without checking disclosure boxes on, for instance, a social media platform,[51] platforms could implement AI-detection tools in their applications, removing disclosure from the poster’s discretion. There are foreseeable difficulties in distinguishing political communications from non-political postings, but AI detection would mitigate bad actors’ ability to spread misinformation.

That said, there is a question of whether Section 230 bars the government from mandating AI detection by social media platforms. Because Section 230 treats platforms as “providers” rather than the “publishers or speakers” of content their users post, platforms cannot be held liable, absent limited exceptions, for what a poster says.[52] This limited liability likely means that the federal government cannot mandate that providers implement AI detection and disclosures on their platforms.[53] Social media platform cooperation is thus essential, but not required, to facilitate a blanket disclosure mandate under the current legal landscape. Otherwise, Congress must amend Section 230 to address AI-generated political ads, as it did for sex trafficking in the Allow States and Victims to Fight Online Sex Trafficking Act of 2017.[54]

C. LLM Labeling as an Alternative to a Section 230 Amendment

As an alternative to amending Section 230, the AI output itself could carry an uneditable disclosure label, eliminating the need for voluntary disclosure.[55] Doing so would require state and federal laws mandating that large language models (“LLMs”), such as ChatGPT or Llama, include a disclosure label when political campaigns, parties, or candidates use them for advertising purposes.[56] Some technology companies began this practice even before the 2024 U.S. presidential race. For example, in 2023, Google and its platforms amended their content policies to require that all AI-generated election ads include conspicuous disclosures.[57] However, voluntary commitments are inadequate. Federal legislation is imperative.

D. Disclosure Language, Scope, and Exemptions

A successful federal blanket disclosure mandate for AI-generated political ads would draw on aspects of several state laws, including Florida’s, New York’s, and, if necessary to withstand constitutional scrutiny, Arizona’s. New York and Florida provide the preferred coverage model, encompassing all print, broadcast, digital, and audio media. A significant difference, and where Florida’s version proves stronger, is the required disclosure language. For materially deceptive media, New York requires disclosures to read “‘[t]his (image, video, or audio) has been manipulated.’”[58] This language does not clearly tell the public whether the content is wholly or partially AI-generated; the term “manipulated” is too ambiguous and does not fully convey AI’s involvement. Florida’s version is clearer, stating that the content was “‘[c]reated in whole or in part with the use of generative artificial intelligence (AI),’”[59] and is thus the preferred model for a blanket disclosure. However, as mentioned previously, New York’s law is more constitutionally sound because it provides exemptions. A blanket disclosure mandate requires these exemptions to withstand legal challenges.

E. Enforcement

Finally, enforcement under a blanket disclosure mandate should mirror New York’s law, which favors civil enforcement through a private right of action over Florida’s criminal penalties. The Court generally disfavors speech regulations that broadly criminalize, reasoning that the threat of prosecution chills otherwise protected First Amendment speech.[60] Florida’s approach is thus vulnerable to constitutional challenge, leaving civil enforcement as the most practical path. A successful blanket disclosure law would include a private right of action like New York’s, allowing campaigns to sue wrongdoers, and would be accompanied by civil fines, like Florida’s, imposed by federal bodies such as the FEC.

VII. Conclusion

AI-generated political ads pose an extreme threat to fair elections. Through the manipulation of images, video, and audio, the lines between fact and fiction have never been blurrier. While our democratic systems have held so far, they may not withstand the next test.

The First Amendment constrains potential legislation and regulation through its robust protection of political speech. An outright ban on AI-generated political content is unlikely to survive strict scrutiny. Criminal penalties are likely overbroad and contrary to First Amendment principles. Yet the Supreme Court in Citizens United held that disclosure is an acceptable means of regulating speech.

            As the patchwork of state regulation demonstrates, disclosure is not the only way to regulate deceptive AI-generated political ads. While imperfect, uniform disclosure remains the most practical and constitutionally viable solution. At a time when the public is often exposed to misinformation, state and national governments must band together and take substantive action. The integrity of our elections hangs in the balance.


[1] Ian Sample, What are deepfakes – and how can you spot them?, The Guardian (Jan. 13, 2020, 5:00 AM), https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them.

[2] Mekela Panditharatne et al., An Agenda to Strengthen U.S. Democracy in the Age of AI, The Brennan Ctr. for Just. (Feb. 13, 2025), https://www.brennancenter.org/our-work/policy-solutions/agenda-strengthen-us-democracy-age-ai.

[3] Christina LaChapelle & Catherine Tucker, Generative AI in Political Advertising, The Brennan Ctr. for Just. (Nov. 28, 2023), https://www.brennancenter.org/our-work/research-reports/generative-ai-political-advertising.

[4] See generally Ellada Gamreklidze, Political Speech Protection and the Supreme Court of the United States, Nat’l Commc’n Ass’n (Jul. 2016), https://www.natcom.org/publications-library/political-speech-protection-and-supreme-court-united-states/.

[5] Maggie Astor, Behind the A.I. Robocall That Impersonated Biden: A Democratic Consultant and a Magician, N.Y. Times (Feb. 29, 2024), https://www.nytimes.com/2024/02/27/us/politics/ai-robocall-biden-new-hampshire.html.

[6] Id.

[7] Alex Seitz-Wald, Telecom company agrees to $1M fine over Biden deepfake, NBC News (Aug. 21, 2024), https://www.nbcnews.com/politics/2024-election/telecom-company-agrees-1-million-fine-biden-deepfake-rcna167564.

[8] Nicholas Nehamas, DeSantis Campaign Uses Apparently Fake Images to Attack Trump on Twitter, N.Y. Times (Jun. 8, 2023), https://www.nytimes.com/2023/06/08/us/politics/desantis-deepfakes-trump-fauci.html.

[9] Id.

[10] Victoria L. Killion, Cong. Rsch. Serv., IF11072, The First Amendment: Categories of Speech 1 (2024).

[11] Id.

[12] Citizens United v. FEC, 558 U.S. 310, 340 (2010) (quoting FEC v. Wis. Right to Life, Inc., 551 U.S. 449, 464 (2007)).

[13] Victoria L. Killion, Cong. Rsch. Serv., R47986, Freedom of Speech: An Overview 4-6 (2024).

[14] In some cases, such as matters of national security, protecting minors’ physical and mental well-being, and different kinds of discrimination, the government can satisfy strict scrutiny. See id.

[15] Citizens United, 558 U.S. at 366 (quoting Buckley v. Valeo, 424 U.S. 1, 64 (1976)).

[16] See Fla. Stat. Ann. § 106.145(2) (LexisNexis 2025).

[17] Artificial Intelligence (AI) in Elections and Campaigns, Nat’l Conf. of State Legs. (Apr. 10, 2025), https://www.ncsl.org/elections-and-campaigns/artificial-intelligence-ai-in-elections-and-campaigns.

[18] See Press Release, Senator Susan Collins, Senator Collins, Bipartisan Group Introduce Bill to Ban Deceptive AI-Generated Content in Elections (Apr. 16, 2025), https://www.collins.senate.gov/newsroom/senator-collins-bipartisan-group-introduce-bill-to-ban-deceptive-ai-generated-content-in-elections.

[19] Matt Brown & Matt O’Brien, House Republicans include a 10-year ban on US states regulating AI in ‘big, beautiful’ bill, Associated Press (May 16, 2025), https://apnews.com/article/ai-regulation-state-moratorium-congress-39d1c8a0758ffe0242283bb82f66d51a.

[20] Letter from Robert Weissman, President, Pub. Citizen, to Lisa Stevenson, Acting Gen. Counsel, Fed. Elections Comm’n 1 (Jul. 13, 2023) (https://sers.fec.gov/fosers/showpdf.htm?docid=423502).

[21] Statement by Ellen L. Weintraub, Vice Chair, Fed. Elections Comm’n, On The Disposition of The Rulemaking Petition Regarding Fraudulent Misrepresentation And Artificial Intelligence 2 (Sept. 19, 2023) (https://www.fec.gov/resources/cms-content/documents/REG-2023-02-A-in-Campaign-Ads-Vice-Chair-Statement.pdf).

[22] 89 Fed. Reg. 78826 (Sept. 26, 2024) (codified at 11 C.F.R. pt. 112).

[23] Ali Swenson, FCC pursues new rules for AI in political ads, but changes may not take effect before the election, Associated Press (Jul. 25, 2024), https://apnews.com/article/artificial-intelligence-political-ads-fec-fcc-18080082b2a81b3aad4897b4c4b5c84b.

[24] Id. See Disclosure and Transparency of Artificial Intelligence-Generated Content in Political Advertisements, 89 Fed. Reg. 63381 (proposed Aug. 5, 2024) (to be codified at 47 C.F.R. pts. 25, 73, 76).

[25] Nat’l Conf. of State Legs., supra note 17.

[26] N.Y. Election Law § 14-106(5)(a)(i)(1)–(3) (Consol. 2025).

[27] Id. at (5)(b)(i)–(ii).

[28] N.Y. Election Law § 14-106(5)(b)(iv) (Consol. 2025).

[29] Id.

[30] Id. at (5)(b)(iii)(1)–(4).

[31] See Killion, supra note 13.

[32] Fifty-four percent of Americans get their political news from social media. Luxuan Wang et al., How Americans Get Local Political News, Pew Rsch. Ctr. (Jul. 24, 2024), https://www.pewresearch.org/journalism/2024/07/24/how-americans-get-local-political-news/.

[33] See infra Part VI(B).

[34] Fla. Stat. Ann. § 106.145(2) (LexisNexis 2025).

[35] Id.

[36] Id.

[37] Paul Kobak, Florida’s New Deepfake Laws: Criminal Penalties, Civil Remedies, Fla. Bus. Rev. Online (Jul. 3, 2024), https://advance.lexis.com/api/permalink/f19a3a6b-94eb-4166-bd79-6cb2b231bfed/?context=1000516.

[38] Id.

[39] See generally Peter J. Benson & Valerie C. Brannon, Cong. Rsch. Serv., IF12584, Section 230: A Brief Overview 1 (2024) (“Section 230 of the Communications Act of 1934… provides limited immunity from legal liability to providers and users of ‘interactive computer services.’”).

[40] United States v. Alvarez, 567 U.S. 709, 711 (2012) (“[T]he threat of criminal prosecution for making a false statement can inhibit the speaker from making true statements, thereby ‘chilling’ a kind of speech that lies at the First Amendment’s heart.”).

[41] Nat’l Conf. of State Legs., supra note 17.

[42] Minn. Stat. § 609.771 (2024).

[43] Id.

[44] Ariz. Rev. Stat. § 16-1024 (LexisNexis 2025).

[45] See Killion, supra note 13.

[46] Citizens United, 558 U.S. at 366 (internal quotation mark and brackets omitted) (quoting McConnell v. FEC, 540 U.S. 93, 201 (2003)).

[47] Id.

[48] Id. at 369.

[49] Panditharatne et al., supra note 2.

[50] Id.

[51] Sophie Loewenstein, Make America Fake Again?: Banning Deepfakes of Federal Candidates in Political Advertisements Under the First Amendment, 93 Fordham L. Rev. 273, 316 (2024).

[52] Benson & Brannon, supra note 39, at 2.

[53] Id.

[54] Id. Many potential measures amending Section 230 have failed in the past three Congressional sessions.

[55] Panditharatne et al., supra note 2.

[56] Id.

[57] Michelle Chapman, AI that alters voice and imagery in political ads will require disclosure on Google and YouTube, Associated Press (Sept. 7, 2023), https://apnews.com/article/google-ai-ads-political-policy-fake-792cbae3e651d31028ae2c64f65f112c.

[58] N.Y. Election Law § 14-106(5)(b)(ii)(1) (Consol. 2025).

[59] Fla. Stat. Ann. § 106.145(2) (LexisNexis 2025).

[60] Alvarez, 567 U.S. at 711.

    Roberto P. Leito is a rising fourth-year evening student at Fordham University School of Law, where he is the Managing Editor of the Fordham Environmental Law Review and former Senior Commentary Editor of the Voting Rights and Democracy Forum. While in law school, he has interned with the Metropolitan Transportation Authority and the Legal & Compliance Division of Morgan Stanley. Roberto has also worked as a Research Assistant for Fordham adjunct election law professor Jerry Goldfeder. He holds a bachelor’s degree, summa cum laude, in Political Science and History from Fordham University.



Fordham Law Voting Rights and Democracy Project