Written by: Stephen Rogers | Feb 18, 2026

There are bipartisan votes, and then there are “something strange is happening” votes. S. 146, the TAKE IT DOWN Act, is firmly in the second category. It cleared the Senate by unanimous consent on February 13, 2025, passed the House 409–2 on April 28, 2025, and became law on May 19, 2025. This from a Congress so dysfunctional that its members generally can’t even agree what day it is.

Given this remarkable state of affairs, let's take a look at the bill and why it managed so successfully to unite a fractured Congress. And check out our previous posts: Combatting AI Deepfakes in Elections, Preventing Deepfake Scams Act, and The Battle to Regulate AI Discrimination.

Read the full IssueVoter analysis here. 

Before getting into why this became the legislative equivalent of a free throw, it’s worth being precise about what Congress actually enacted - because the TAKE IT DOWN Act is not “a deepfakes law” in the broad sense. It is a targeted law about a particular kind of harm: non-consensual intimate imagery, including AI-generated “digital forgeries” that are realistic enough to function like the real thing.

What Problem the Bill Is Trying to Solve, and Why It Became So Popular

The “why now” is not hard to understand. The last few years saw a sharp rise in image-based sexual abuse facilitated by cheap tools: “nudification” apps, deepfake generators, and the frictionless distribution mechanics of large platforms. Victims can be targeted at scale, quickly, and repeatedly - especially minors in school settings - often with limited practical recourse once content has been copied and reposted across services. Major press coverage and victim advocacy pushed the issue into a rare political lane: a clear wrong, concrete harm, and a policy response that can be framed as both pro-safety and pro-accountability.

The political coalition behind the bill was also unusually broad. Reporting at the time highlighted support from major tech companies and advocacy groups, and the bill received high-profile amplification (including from the First Lady), which helped make “doing nothing” look untenable.

In other words, it wasn’t merely a “deepfakes panic” bill; it was sold - and largely received - as a victim-protection bill that closes two perceived gaps at the federal level: first, a criminal prohibition aimed at perpetrators, and second, a fast takedown mechanism aimed at platforms.

The Core of the Law, in Plain English

The TAKE IT DOWN Act does two big things.

First, it creates a federal criminal prohibition on knowingly publishing (and, separately, threatening to publish) non-consensual intimate visual depictions through an interactive computer service. It covers both “authentic” depictions and certain AI-generated “digital forgeries.”

Second, it creates a notice-and-removal process for “covered platforms,” enforced by the Federal Trade Commission, that requires removal of reported content within a short deadline and some effort to prevent identical reuploads.

Those two components - criminal law plus platform takedown - are why supporters treated it as a one-two punch: deter the worst behavior, and reduce the time victims spend watching the harm spread.

The Act defines “digital forgery” as an intimate depiction created or altered via software or AI that, viewed as a whole by a reasonable person, is indistinguishable from an authentic depiction of the individual. This matters: it’s aimed at realistic deepfake porn, not obviously artificial satire.  It also defines “consent” in a comparatively strict way - an affirmative, voluntary authorization free from coercion or fraud - signaling that “implied” or pressured consent is not the design intent here.

The Criminal Provisions: What Conduct Becomes a Federal Crime

For adults, the law targets knowingly publishing intimate depictions when four conditions line up: the content was obtained or created in circumstances where the subject had a reasonable expectation of privacy, the subject did not voluntarily expose what’s depicted in a public or commercial setting, the depiction is not a matter of public concern, and the publication is intended to cause harm or causes harm (psychological, financial, or reputational).

For minors, the structure is different. Instead of building in “public concern” and “expectation of privacy” elements the same way, the statute focuses on intent - publishing with intent to abuse, humiliate, harass, degrade, or to arouse/gratify sexual desire.

The penalties are not symbolic. Violations involving adults carry up to two years imprisonment; violations involving minors carry up to three years.  The law also criminalizes certain threats to publish, when used for intimidation, coercion, extortion, or to create mental distress, with separate penalty ceilings for threats involving adults versus minors.  And it includes teeth on remedies: mandatory restitution (via cross-reference), and forfeiture provisions aimed at proceeds and instrumentalities.

The Act contains exceptions for a range of contexts that Congress did not want chilled: lawful law enforcement and intelligence activity, disclosures made reasonably and in good faith to law enforcement, use in legal proceedings, and certain medical, scientific, or educational purposes, among others.

Whether those exceptions, combined with the adult-elements (“public concern,” “reasonable expectation of privacy”), are enough to protect legitimate journalism and commentary is one of the central points of dispute.

The 48-Hour Takedown: What Platforms Must Do, and When

The platform side of the statute is codified separately (47 U.S.C. § 223a). No later than one year after May 19, 2025, covered platforms must establish a process allowing an identifiable individual (or an authorized representative) to submit a notice requesting removal of non-consensual intimate depictions. A valid request must include a signature, enough information to locate the material, a good-faith statement that the depiction is non-consensual (with relevant supporting information), and contact information. Once a platform receives a valid request, it must remove the depiction “as soon as possible,” and no later than 48 hours after receiving the request, and must make “reasonable efforts” to identify and remove known identical copies.
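To make those mechanics concrete, here is a minimal sketch of what a statute-shaped intake check and 48-hour clock might look like. This is an illustration only: the field names, the “TakedownRequest” type, and the validation logic are hypothetical, not drawn from the Act’s text or from any real platform’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical sketch; the statute specifies required elements, not a schema.
REMOVAL_DEADLINE = timedelta(hours=48)

@dataclass
class TakedownRequest:
    signature: str             # physical or electronic signature of the requester
    location_info: str         # enough information to locate the material (e.g., a URL)
    good_faith_statement: str  # good-faith statement that the depiction is non-consensual
    contact_info: str          # how the platform can reach the requester
    received_at: datetime      # when the platform received the request

def is_valid(req: TakedownRequest) -> bool:
    """A request is facially valid only if every required element is present."""
    return all([req.signature.strip(), req.location_info.strip(),
                req.good_faith_statement.strip(), req.contact_info.strip()])

def removal_deadline(req: TakedownRequest) -> datetime:
    """The 48-hour clock runs from receipt of a valid request."""
    return req.received_at + REMOVAL_DEADLINE

req = TakedownRequest(
    signature="J. Doe",
    location_info="https://example.com/post/123",
    good_faith_statement="I did not consent to this depiction.",
    contact_info="jdoe@example.com",
    received_at=datetime(2026, 5, 19, 9, 0, tzinfo=timezone.utc),
)
assert is_valid(req)
print(removal_deadline(req))  # 2026-05-21 09:00:00+00:00
```

The point of the sketch is how little the statute dictates: what counts as “enough information to locate the material,” or how a signature is verified, is left to each platform’s implementation.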

This is where the law’s real-world impact lives. A criminal statute may deter some perpetrators and enable prosecutions, but prosecutions are slow and selective. A 48-hour removal obligation, if operationalized well, can reduce the duration of harm for far more victims than the criminal docket ever will. The FTC enforces the takedown obligations; a failure to reasonably comply is treated as violating an FTC rule defining an unfair or deceptive act or practice. The statute also extends the FTC’s reach in a way that drew attention: it contemplates enforcement even against certain organizations beyond typical for-profit entities.

On the flip side, the law includes an explicit limitation on platform liability for good-faith removal - platforms aren’t liable for claims based on disabling access to or removing material in good faith, even if the depiction is ultimately found not unlawful.  That liability shield matters because it shapes incentives: when the penalty is missing the 48-hour deadline and the safe harbor favors removal, risk-averse platforms have reason to err on the side of taking content down.

The Act does not apply to everything that transmits content. It defines and then excludes categories of services, including telecom services and certain communications and computing services, as well as services primarily providing email or direct messaging, and services that primarily provide access to the internet rather than user-generated content hosting.  This matters for two reasons. It narrows the law to the public-facing distribution layer where viral spread happens. But it also means that if abusive content is circulating primarily through excluded services, the Act’s takedown mechanism may not reach the channels victims care about most in the first hours.

With the May 19, 2026 deadline approaching, the public picture looks uneven. Many large platforms already have “NCII/revenge porn” reporting and takedown workflows, so they are not starting from scratch, but the real work is making those systems statute-compliant - a clear notice, a formal intake route that accepts the required signed request, and operational capacity to remove covered content within 48 hours of a valid notice.  A few services have begun explicitly branding dedicated compliance channels - VRChat, for instance, has published a “TAKE IT DOWN Act” compliance page suggesting some early alignment.  Overall, big platforms are likely close, but we won't know until the deadline arrives.

Why Massie and Burlison Voted No

The two “no” votes in the House came from Rep. Thomas Massie (R-KY) and Rep. Eric Burlison (R-MO) - both Republicans, even though the bill was a Republican-led push that also drew prominent Democratic support and backing from major platforms.

Given the bill’s political popularity, the most interesting interpretive question is why anyone voted no - especially Republicans. Massie’s explanation was direct, framed in the language of caution about downstream misuse: “I’m voting NO because I feel this is a slippery slope, ripe for abuse, with unintended consequences.”

Burlison’s objection was more classically federalism-plus-speech. In a statement relayed by local reporting, he called the conduct “abhorrent” but argued the bill “unnecessarily federalizes” criminalization already covered by state law, warned it is “redundant and constitutionally problematic,” and flagged concerns about First Amendment impacts and “unchecked growth of federal power.”

Two distinct “no” rationales show up here. One is the fear that a fast takedown regime will inevitably be abused and turned into a censorship tool. The other is that even if the goal is worthy, the federal government is not the right actor and should not create duplicative crimes.

The Case For the Bill, Using Supporters’ Logic

IssueVoter’s summary of the pro arguments centers on the idea that the bill pairs “teeth” (criminal penalties) with speed (platform takedowns), a combination designed to limit trauma from rapid viral distribution, especially for minors. Supporters also argue that state-by-state approaches have left victims with a patchwork: different definitions, different enforcement priorities, and different platform obligations. A federal framework, in that telling, increases consistency and creates an enforcement hook that does not depend on a victim finding the “right” state statute - or persuading a local prosecutor to treat it as urgent.

A practical point, often implicit, is that the takedown model is familiar. The internet already runs large-scale notice systems (copyright most famously). Proponents wanted something DMCA-like for intimate-image abuse: not perfect, but operationally legible to platforms. That logic appears explicitly in civil-society critiques too - acknowledging what the bill is trying to replicate.

Finally, the politics. When a bill becomes a public symbol of “protecting kids from sexual exploitation online,” the cost of opposing it rises sharply. The coalition letter from business groups framing S. 146 as “critical” protection against exploitative deepfakes gives a flavor of how wide the support was.

The Case Against the Bill, and the Sharpest Critiques

IssueVoter’s “against” section highlights the main civil-liberties concern: that the takedown mechanism can be abused and may lead to over-removal, suppressing lawful speech.

EFF’s critique is blunt on process. It argues the takedown provision is broad, lacks adequate protections against frivolous or bad-faith requests, and creates a strong incentive to remove content quickly rather than verify claims - especially for smaller services - potentially relying on automated filters that are prone to error.

CDT and partner organizations pressed similar points in a pre-passage letter: they supported helping victims of nonconsensual intimate imagery but warned the bill risks censoring legal speech and could threaten privacy and encrypted services unless amended.

CCRI - an organization closely identified with combating image-based sexual abuse - took an especially telling position: strong support for a federal criminal prohibition, paired with substantial concern about the notice-and-removal system’s constitutionality, overbreadth, and risk of selective misuse, and about specific loopholes in exceptions.

Those critiques converge on a single point: the law’s effectiveness depends on who uses the takedown process, how platforms validate requests, and how the FTC exercises discretion. A fast removal regime is inherently vulnerable to strategic reporting if it lacks meaningful counter-notice and anti-abuse mechanisms, and the Act’s liability structure nudges platforms toward “remove first, ask questions later.”

How Other Countries Are Tackling the Same Harm

Other jurisdictions generally share the U.S. instinct that non-consensual intimate imagery is a serious privacy and dignity harm, but they vary in where they put the emphasis: criminalizing creation, criminalizing sharing, empowering regulators to order takedowns, or building broader platform safety regimes.

The United Kingdom has been moving toward a more explicit treatment of deepfake sexual imagery as a standalone criminal concern, including government-announced measures to crack down on explicit deepfakes and related intimate-image abuse, building on existing offenses for sharing or threatening to share intimate images without consent.

Australia’s framework places notable weight on a specialist regulator. Under its online safety regime, the eSafety Commissioner has powers that can include removal notices for intimate images shared without consent, backed by civil penalties.

The European Union’s Digital Services Act is broader and not limited to intimate imagery, but it standardizes a “notice and action” approach for illegal content with an expectation of timely action and safeguards aimed at fundamental rights, rather than specifying a universal 48-hour clock for a particular category.

Canada, meanwhile, has a clearly stated Criminal Code offense for knowingly (or recklessly) publishing or distributing an intimate image without consent, with penalties that can reach up to five years, reflecting a more direct criminal-law route for the underlying conduct.

The takeaway from these comparisons is that the U.S. approach in the TAKE IT DOWN Act is relatively distinctive in pairing a targeted criminal prohibition with a federally enforced, time-bound takedown obligation aimed at platforms.

Will This Stop Deepfakes, or Is More Legislation Needed?

The most honest answer is that the TAKE IT DOWN Act is likely to reduce harm in a meaningful subset of cases, but it will not “solve deepfakes” as a phenomenon. It will likely help in three practical ways. First, it creates a clearer federal basis to prosecute the most straightforward cases of distributing nonconsensual intimate imagery, including realistic AI forgeries, and it criminalizes coercive threats to publish. Second, if covered platforms implement the notice-and-removal process competently, many victims should experience faster removal than under purely discretionary trust-and-safety policies - especially once the one-year implementation window runs. Third, it changes platform incentives. Treating noncompliance as an FTC-enforceable unfair/deceptive practice pushes the issue into compliance departments, not just moderation queues.

But the Act also leaves major gaps that are hard to ignore if your end goal is “stopping deepfakes” rather than “reducing intimate-image abuse.”  The scope is intentionally narrow: the statute is about intimate imagery. Election deepfakes, fraud deepfakes, defamation deepfakes, and synthetic impersonation outside the “intimate depiction” category are largely outside its core design.  The takedown system is notice-based and platform-specific. It does not automatically address replication across the broader ecosystem beyond “known identical copies” on the same platform, and it does not create a universal provenance or authentication infrastructure that would help platforms detect synthetic media at scale.

There are also credible concerns about abuse and over-removal. The statute’s incentives, plus the lack of a clearly described counter-notice pathway in the text itself, make it plausible that lawful content - journalism, sexual health education, satire - could get swept into rapid takedowns, especially under automated filtering or high-volume reporting campaigns.

And then there is encryption. Even though the Act excludes categories of services and critics argued for explicit encryption carveouts, the pressure dynamic is real: if a service receives a notice it cannot technically comply with due to end-to-end encryption, it faces a choice between restructuring its product or risking enforcement. CDT and EFF both raised this as a concrete risk in their pre-passage advocacy.

If lawmakers decide the TAKE IT DOWN Act is a foundation rather than a finish line, the next steps that appear most defensible - based on the critiques and the operational realities - are the boring-but-essential governance pieces. One likely area is procedural safeguards: clearer standards for validating requests, meaningful penalties for demonstrably bad-faith reporting, and a workable counter-notice/appeal structure that protects lawful speech without forcing victims into prolonged disputes. Another is capacity and clarity around enforcement: more explicit guardrails for FTC discretion, especially given concerns raised about politicization and selective enforcement risk.

A third is technical ecosystem support that does not require breaking encryption: provenance standards, watermarking or content credential approaches, and interoperability that helps platforms recognize reuploads without broadly scanning private content. The law, as enacted, doesn’t build that layer; it assumes platforms can make “reasonable efforts” on reuploads, which is a flexible standard rather than a technical blueprint.
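To show why “reasonable efforts” is a flexible standard rather than a technical blueprint, here is a minimal sketch of the simplest conceivable reupload check - exact-hash matching of removed files - along with its obvious limitation. All class and method names here are hypothetical; the statute names no technique at all.

```python
import hashlib

# Illustrative only: byte-level hashing catches exact reuploads, but any
# re-encoding, cropping, or recompression defeats it. Recognizing near-duplicate
# copies would require perceptual hashing, which is outside this sketch.
class KnownCopyIndex:
    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def register(self, content: bytes) -> None:
        """Record a removed depiction so identical reuploads can be flagged."""
        self._hashes.add(hashlib.sha256(content).hexdigest())

    def is_known_copy(self, content: bytes) -> bool:
        """True only for byte-identical files."""
        return hashlib.sha256(content).hexdigest() in self._hashes

index = KnownCopyIndex()
index.register(b"removed-image-bytes")
print(index.is_known_copy(b"removed-image-bytes"))  # True
print(index.is_known_copy(b"re-encoded variant"))   # False
```

The gap between this and what victims actually need - matching across re-encodings, across platforms, without scanning private content - is exactly the technical-ecosystem layer the law does not build.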

Finally, Congress may pursue adjacent “deepfake” categories explicitly - fraud, impersonation, and political manipulation - rather than trying to stretch a single intimate-image statute to cover every kind of synthetic-media harm.

Bottom Line

The TAKE IT DOWN Act became law because it addressed a vivid, fast-growing harm with an intuitive remedy: criminalize the conduct and force faster removal. The overwhelming votes reflect that political logic, and the dissenters’ objections reflect the predictable fault lines: federalism, free speech, and fear of an abuse-prone takedown regime.

In real-world effect, the Act is likely to help a meaningful number of victims get non-consensual intimate imagery down faster - if platforms implement the process well and enforcement is evenhanded. But it will not end deepfakes, and it will almost certainly generate edge-case fights about over-removal, due process, and the pressure it places on services that cannot or will not police content in the way a 48-hour deadline encourages.


About BillTrack50 – BillTrack50 offers free tools for citizens to easily research legislators and bills across all 50 states and Congress. BillTrack50 also offers professional tools to help organizations with ongoing legislative and regulatory tracking, as well as easy ways to share information both internally and with the public.

IssueVoter is a nonpartisan, nonprofit online platform dedicated to giving everyone a voice in our democracy. As part of their service, they summarize important bills passing through Congress and set out the opinions for and against the legislation, helping us to better understand the issues. BillTrack50 is delighted to partner with IssueVoter and we link to their analysis from relevant bills. Look for the IssueVoter link at the top of the page.