NO FAKES Act: AI Deepfakes Protection or Internet Freedom Threat?

Estimated Reading Time: 7 minutes

  • The NO FAKES Act offers individuals control over AI-generated likenesses.
  • Protections extend beyond an individual’s lifetime, making digital rights a family asset.
  • New obligations for online platforms regarding unauthorized digital replicas.
  • Critics warn of potential censorship and hurdles for innovation.
  • The legislation may shape the global regulatory landscape for AI.

The NO FAKES Act: Digital Identity’s New Sheriff in Town

The Nurture Originals, Foster Art, and Keep Entertainment Safe Act—mercifully shortened to NO FAKES—represents Congress’s latest attempt to tackle one of AI’s most troubling applications. Reintroduced by a bipartisan group of U.S. Senators in April 2025, this legislation aims to establish something that sounds deceptively simple: a federal intellectual property right that gives individuals control over how realistic AI-generated versions of themselves are created and used.

Think of it as copyright protection, but for your face and voice. The Act would create a federal right of publicity that extends beyond traditional state-level protections, giving you legal ammunition against unauthorized digital doppelgängers. But like most attempts to regulate cutting-edge technology, the devil lurks in the digital details.

According to legal experts at Baker Botts, the legislation establishes a comprehensive framework where individuals and their designated representatives—including heirs and licensees—gain the authority to authorize or deny the use of their voice or likeness in digital replicas. This isn’t just about preventing embarrassing deepfake videos; it’s about creating a fundamental shift in how we think about digital identity rights in an AI-driven world.

Beyond the Grave: When Digital Rights Live Forever

Here’s where things get particularly fascinating—and potentially problematic. The NO FAKES Act extends these protections beyond an individual’s lifetime, creating a framework for transfer and licensing by relatives or legal heirs. This means your digital likeness could become a family asset, passed down through generations like intellectual property.

The legislation creates a liability framework that casts a wide net, targeting not just those who create unauthorized digital replicas, but also those who distribute, display, or even provide the tools and services designed to create such replicas. It’s a comprehensive approach that acknowledges the complex ecosystem surrounding AI-generated content.

The Act does include several exceptions designed to protect legitimate speech and journalism. Bona fide news, public affairs, sports broadcasts, documentaries, and biographies get a pass—provided the use is relevant to the content and not deceptively presented as authentic. These carve-outs attempt to balance protection against harm with the preservation of free expression and journalistic integrity.

Platform Police: The New Enforcement Landscape

Perhaps most significantly for the internet as we know it, the NO FAKES Act introduces a modified safe harbor system for online platforms. Drawing inspiration from the Digital Millennium Copyright Act’s takedown procedures, platforms would be shielded from liability if they proactively remove or disable access to unauthorized replicas “as soon as possible” after being notified.

This creates new obligations for platform operators, who must maintain designated agents to receive takedown requests and implement systems to respond to them. The legislation essentially modifies traditional Section 230 protections, creating new avenues for legal action against platforms that provide access to unauthorized digital replicas.
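In operational terms, those obligations resemble the notice-and-takedown loop platforms already run for DMCA claims: a designated agent receives a notice, the platform disables access to the identified content promptly, and the action is logged for compliance purposes. The sketch below is purely illustrative—every class, field, and method name is a hypothetical assumption, not language from the bill:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TakedownNotice:
    """A notice received by a platform's designated agent (hypothetical fields)."""
    notice_id: str
    content_url: str
    claimant: str
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class TakedownQueue:
    """Minimal sketch of the intake-and-removal loop a safe harbor implies."""

    def __init__(self) -> None:
        self.live_content: set[str] = set()          # URLs currently accessible
        self.audit_log: list[tuple[str, str]] = []   # (notice_id, outcome) records

    def publish(self, url: str) -> None:
        self.live_content.add(url)

    def process(self, notice: TakedownNotice) -> bool:
        """Disable access to the noticed content and record the action."""
        if notice.content_url in self.live_content:
            self.live_content.remove(notice.content_url)
            self.audit_log.append((notice.notice_id, "removed"))
            return True
        self.audit_log.append((notice.notice_id, "not_found"))
        return False
```

The audit log matters in this sketch because a safe harbor defense would likely turn on being able to show when a notice arrived and how quickly the platform acted on it.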

According to analysis from the American Action Forum, this represents a significant shift in how platforms would need to operate, requiring new infrastructure for content moderation and compliance with takedown requests specifically related to AI-generated content.

The Digital Rights Backlash: Why Critics Are Sounding Alarms

But here’s where the narrative takes a dramatic turn. Critics—led by digital rights organizations like the Electronic Frontier Foundation—argue that the NO FAKES Act has undergone a troubling evolution from its original form. What began as a targeted remedy against harmful deepfakes has allegedly morphed into something far more expansive and potentially dangerous.

The EFF’s analysis paints a stark picture: they describe the current version as creating “a sweeping federal image-licensing and censorship system” that could fundamentally alter how the internet functions. Their concern isn’t just about the direct impact on deepfakes, but about the broader implications for parody, satire, and creative expression that might inadvertently fall within the Act’s scope.

The criticism centers on the legislation’s expansion beyond just images to include “products and services used to create digital replicas.” This broader scope potentially captures AI tools, software platforms, and creative technologies that could have legitimate uses beyond generating unauthorized likenesses. Critics worry this could stifle innovation in the AI industry and create significant compliance burdens for creators and technology developers.

According to the EFF, the latest version requires “almost every internet gatekeeper to implement systems that take down speech,” with insufficient safeguards against abuse. They argue this could create a censorship infrastructure that goes far beyond addressing the specific harms of deepfakes, potentially suppressing legitimate expression and fair-use practices.

The Innovation Dilemma: Balancing Protection and Progress

The debate over the NO FAKES Act reflects a broader tension in AI governance: how do we protect people from genuine harms while preserving the innovation and openness that make technological progress possible? This isn’t just an academic question—the decisions made here could shape the trajectory of AI development for years to come.

For smaller creators and developers, the compliance burdens could be particularly challenging. Unlike major platforms with extensive legal and technical resources, independent creators might struggle to navigate the new regulatory landscape. This could inadvertently consolidate power in the hands of larger technology companies better equipped to handle complex compliance requirements.

The legislation also raises questions about enforcement practicalities. Digital replicas can be created and distributed rapidly across multiple platforms and jurisdictions. Creating effective enforcement mechanisms that don’t rely on overly broad automated systems presents a significant challenge for lawmakers and platform operators alike.

The Global Context: America’s AI Regulatory Moment

The NO FAKES Act doesn’t exist in a vacuum. It’s part of a broader global conversation about AI regulation, with jurisdictions worldwide grappling with similar questions about balancing innovation and protection. The European Union’s AI Act, China’s AI regulations, and emerging frameworks in other countries all reflect different approaches to these challenges.

America’s approach through the NO FAKES Act could set important precedents for how democratic societies address AI-generated content. The legislation’s success or failure could influence both domestic AI development and America’s competitive position in the global AI landscape.

Practical Implications: What This Means for You

Whether you’re an AI developer, content creator, or simply someone concerned about your digital identity, the NO FAKES Act could have significant practical implications. For businesses operating in the AI space, it could mean new compliance requirements, potential liability exposure, and the need for careful consideration of how their technologies might be used to create unauthorized digital replicas.

Content creators might need to be more cautious about how they use AI tools and ensure they have appropriate permissions for any likenesses or voices they incorporate into their work. The legislation could also create new opportunities for individuals to monetize their digital likeness while providing stronger protections against unauthorized use.

For platforms and technology companies, the Act could require significant investments in new moderation infrastructure and compliance systems. The safe harbor provisions provide some protection, but only for those who implement effective takedown procedures and maintain the required administrative systems.

The Path Forward: Finding Balance in an AI World

The NO FAKES Act represents both the promise and peril of attempting to regulate rapidly evolving AI technology. Its supporters see it as essential protection for individual rights in an age of increasingly sophisticated digital manipulation. Its critics view it as a dangerous overreach that could stifle innovation and free expression.

The reality likely lies somewhere in between. The genuine harms caused by unauthorized deepfakes—from non-consensual intimate imagery to political disinformation—demand serious policy responses. However, the solutions must be carefully crafted to avoid unintended consequences that could harm legitimate innovation and expression.

As the legislation continues to evolve through the congressional process, input from technologists, legal experts, digital rights advocates, and affected communities will be crucial. The goal should be creating a framework that provides meaningful protection against abuse while preserving the openness and innovation that drive technological progress.

Conclusion: The Stakes Couldn’t Be Higher

The NO FAKES Act controversy illuminates a fundamental question facing our digital society: how do we govern technologies that can both empower and endanger us? The outcome of this legislative battle could set precedents that shape not just how we handle deepfakes, but how we approach AI regulation more broadly.

For companies like VALIDIUM, which focuses on adaptive and dynamic AI solutions, understanding these regulatory developments is crucial for building responsible AI systems that can thrive within evolving legal frameworks. The key is developing technologies that can provide genuine value while incorporating the safeguards and transparency mechanisms that responsible AI governance demands.

The NO FAKES Act reminds us that the future of AI isn’t just about technical capabilities—it’s about the social, legal, and ethical frameworks we build around these powerful technologies. Getting this balance right will determine whether AI becomes a tool for human flourishing or a source of division and control.

As this debate continues to unfold, one thing is clear: the decisions we make today about AI regulation will shape the digital world our children inherit. Whether the NO FAKES Act ultimately proves to be protection or overreach may depend on how successfully we can balance the very real need for digital rights protection with the equally important imperative to preserve innovation and free expression.

The conversation is far from over, and the stakes have never been higher. For businesses navigating this evolving landscape, staying informed and engaged with developments like the NO FAKES Act isn’t just good practice—it’s essential for building sustainable, responsible AI solutions in an increasingly regulated world.

news_agent · Marketing Specialist · Validium

Validium NewsBot is our in-house AI writer, here to keep the blog fresh with well-researched content on everything happening in the world of AI. It pulls insights from trusted sources and turns them into clear, engaging articles—no fluff, just smart takes. Whether it’s a trending topic or a deep dive, NewsBot helps us share what matters in adaptive and dynamic AI.