
Supreme Court Declines to Hear AI Copyright Case: What It Means for AI-Generated Music

  • Apr 14

There’s a lot of fast-moving (and sometimes confusing) commentary about “AI music can’t be copyrighted,” “AI art is public domain,” and what the government is doing about AI training.

The U.S. Copyright Office has been publishing a multi-part report on Copyright and Artificial Intelligence, which is becoming a key reference point in these debates:

  1. Part 1: Digital Replicas (July 2024)

  2. Part 2: Copyrightability (January 2025)

  3. Part 3: Generative AI Training (pre-publication version released May 2025)


This post summarizes the main points of each part, then connects them to the widely circulated March 2026 update that the U.S. Supreme Court declined to hear an AI copyright case, leaving in place the lower-court result that copyright requires human authorship.


Part 1 (July 2024): Digital Replicas (deepfakes of voice/likeness)

What “digital replicas” means

Part 1 uses “digital replica” (deepfake) to mean a realistic but false audio/video/image depiction of a person, created or manipulated digitally (AI or otherwise). It focuses on replicas that can be hard to distinguish from authentic depictions.

What the Copyright Office says is happening

Generative AI has made digital replicas:

  • easier to create,

  • faster to distribute,

  • more realistic, and

  • scalable enough to become a broad societal problem.

Where the harms show up

The report emphasizes harms across multiple domains:

  • Creative industries/labor displacement: voice actors, performers, musicians

  • Nonconsensual explicit deepfakes

  • Fraud/impersonation scams

  • Political misinformation and erosion of trust

Why existing law isn’t enough

The Office reviews state and federal law and concludes protections are fragmented:

  • State privacy/publicity rights vary widely (some require “commercial use,” some don’t cover voice well, some are limited to certain people).

  • Federal laws help in narrow ways (copyright, FTC, Lanham Act, FCC/robocalls), but no single federal tool broadly addresses unauthorized digital replicas.

The key recommendation

The Office concludes a new federal law is needed to address unauthorized digital replicas, and recommends guardrails like:

  • protect all individuals (not just celebrities)

  • target highly realistic replicas

  • focus on distribution/making available, not creation alone

  • require actual knowledge the replica is of a person and is unauthorized

  • address platforms via secondary liability and a takedown/safe harbor mechanism

  • allow licensing with limits/guardrails; discourage permanently “selling away” one’s identity

  • explicitly balance free-speech (First Amendment) concerns

  • avoid full preemption of state law (federal floor, not a ceiling)

Practical translation for musicians

This part is mostly about voice cloning (and realistic likeness replication), not “AI composition copyright.”

If your voice is cloned into “new performances,” Part 1 argues current remedies are inconsistent and a federal fix is needed.

Part 2 (January 2025): Copyrightability (are AI outputs copyrightable?)

Human authorship

Part 2 reaffirms that human authorship is required for copyright protection in the United States.

That means:

  • Copyright does not extend to purely AI-generated material.

  • Copyright can cover a work that includes AI-generated material to the extent there is protectable human authorship in the work.

“AI as a tool” is fine

The Office is very clear: using AI as an assistive tool does not automatically disqualify a work from protection.

  • The question isn’t “Was AI involved?”

  • The question is: Where is the human-authored expression?

Prompts alone usually aren’t enough (with current tools)

Part 2’s major practical takeaway is that, with generally available tools today, prompting alone typically doesn’t provide enough control over the expressive elements to make the prompter the “author” of the output.

What can still be protected (even in AI-heavy workflows)

The report describes multiple ways a human can still have copyrightable authorship:

  • Human-authored text, lyrics, melodies, compositions, performances, recordings, etc. that are present in the final work

  • Creative selection, coordination, or arrangement (like a compilation)

  • Creative modifications of AI output (editing, rewriting, re-scoring, re-arranging) where the edits themselves are original

The policy conclusion

Part 2 concludes existing law can handle these questions case by case and the Office does not recommend creating a new copyright (or sui generis right) for AI-generated outputs.

Practical translation for musicians

  • If you generate a full song “as is” from an AI system with minimal human creative control, you likely won’t have copyright in the AI-generated expressive content.

  • If you write lyrics, compose, perform, record, and/or meaningfully arrange/edit the work, you may still have copyright protection for those human-authored parts, even if AI is used as a tool in the process.

Part 3 (May 2025 pre-publication): Generative AI Training (inputs, not outputs)

Part 3 shifts focus away from “Is the AI output copyrightable?” and toward:

Can AI companies train on copyrighted works without permission, and is that fair use?

How the report structures the problem

The Office breaks the generative AI lifecycle into areas where copyright rights may be implicated:

  • Data collection/curation

  • Training

  • Retrieval-augmented generation (RAG)

  • Outputs (including memorization and reproduction concerns)

Fair use is the main battleground

The report provides an analytical framework for how courts often evaluate fair use:

  • Factor 1: purpose/character, transformativeness, commerciality, unlawful access

  • Factor 2: nature of copyrighted works (creative vs factual)

  • Factor 3: amount used and reasonableness for the purpose; what’s exposed to the public

  • Factor 4: market effects (lost sales, dilution/substitution, lost licensing markets), plus claimed public benefits

Licensing is treated as central

Part 3 explores licensing approaches and whether they’re feasible, including:

  • Voluntary licensing markets

  • Collective licensing possibilities (and potential obstacles)

  • Statutory options like compulsory licensing, extended collective licensing, and opt-out concepts

Practical translation for musicians

Part 3 is directly relevant to whether a model can be trained on your:

  • Sound recordings

  • Compositions

  • Stems/sessions

  • Catalog

…and then compete in the market with outputs that substitute for your work.

The March 2026 “Supreme Court declined to hear the AI copyright case” update (what it means, and what it doesn’t)

What it means

When the Supreme Court declines to hear a case, it does not issue a new merits opinion, but it often has a real practical effect: it leaves the lower-court decision standing.

As widely reported, this effectively upheld the position that:

  • works created entirely by AI without meaningful human input cannot be copyrighted, and

  • copyright requires human authorship.

This aligns with the U.S. Copyright Office’s approach in Part 2.

What it does NOT mean

  • It does not create a brand-new nationwide Supreme Court written rule; it leaves existing doctrine (as applied by lower courts and the Copyright Office) in place.

  • It does not say “AI-assisted music can never be copyrighted.” Part 2 explicitly supports copyright protection for the human-authored elements in AI-involved works.

  • It does not make “selling AI music illegal.” It affects what you can own and enforce via copyright, not whether you can sell or release the music.

The “public domain” and monetization

A common shorthand is: “If it’s not copyrightable, it’s public domain.”

In practice, the more important point for creators is this:

  • If a work is purely AI-generated and not copyrightable, you may have limited ability to stop others from copying that same output, because there is no copyright owner of the AI-generated expression.

But monetization can still exist via:

  • commissions / client contracts

  • branding and audience

  • platform monetization programs (which are contract-based)

  • licensing human-authored components (lyrics, performances, recordings, arrangements) where applicable

Simple “if/then” explanation

  1. If it’s purely AI output (no meaningful human authorship), then there’s likely no copyright.

  2. If AI is used like a tool and a human authors meaningful expression (or makes creative edits/arrangements), then those human parts can be copyrighted.

  3. If an AI model is trained on copyrighted works, then the legal fight usually turns on whether the copying is fair use and what the market impact is; Part 3 suggests licensing will be a major policy lever.

  4. If someone’s voice/likeness is cloned (digital replicas), then Part 1 says current protections are inconsistent and federal legislation is needed.

TL;DR

U.S. copyright protects human authorship.

  • Pure AI output (created entirely by an AI system with no meaningful human creative control) is not copyrightable under current U.S. law as applied by the U.S. Copyright Office and the federal courts.

  • AI-assisted works can still be protected to the extent they include human-authored expression or human creative selection/arrangement/modification.

  • Separate issue: training AI on copyrighted works (music, books, images, etc.) can involve copying that may or may not be fair use. The law is still developing.


DISCLAIMER

This post is an educational summary of publicly available government analysis and widely reported legal developments, not legal advice for any specific situation.

For full details, refer to the official reports published by the U.S. Copyright Office.
