What In-House Counsel Must Do Now Before the Window Closes
Client Advisory | April 2026
The intellectual property landscape has not faced a structural disruption of this magnitude since the advent of the internet. Generative AI has created a gap in copyright law that neither Congress nor the courts have fully addressed — and that gap represents both a significant risk and an extraordinary strategic opportunity for organizations that act decisively.
This advisory outlines the current state of the law, identifies the specific exposure your organization likely faces, and recommends a concrete compliance and positioning strategy.
The Problem: A Framework Built for Humans, Deployed by Machines
Copyright law in the United States rests on a foundational assumption: that creative works originate from human beings. That assumption, largely unquestioned for over two centuries, is now under extraordinary pressure.
In March 2025, the D.C. Circuit affirmed in Thaler v. Perlmutter that human authorship is required “[a]s a matter of statutory law” for copyright protection. Dr. Stephen Thaler had sought to register a visual work generated entirely by his Creativity Machine AI system, listing the AI as sole author. The court held that the Copyright Act, “taken as a whole,” makes clear that authors must be humans, not machines. In March 2026, the Supreme Court declined certiorari, leaving the D.C. Circuit’s holding undisturbed.
This is now the settled law of the land — but the practical implications are far more nuanced than the headline suggests.
The D.C. Circuit was careful to note that its ruling “does not prohibit copyrighting work that was made by or with the assistance of artificial intelligence.” The rule requires only that the author be a human being — the person who created, operated, or used the AI — and not the machine itself. What the court did not do, and what no court has yet done, is articulate precisely where the line falls between protectable human-directed AI output and unprotectable machine-generated output.
The Regulatory Landscape: Diverging Standards, Converging Deadlines
The Copyright Office’s January 2025 report on copyrightability (Copyright and Artificial Intelligence, Part 2) provided some guidance but stopped short of bright-line rules. Key takeaways for in-house teams include the following.
First, the report reiterates the Office's 2023 AI Registration Guidance, which requires applicants to disclose the inclusion of more than de minimis AI-generated content in any work submitted for registration and to provide a brief explanation of the human author's contributions. AI-generated content that is more than de minimis must be explicitly excluded from the claim.
Second, prompts alone are insufficient to establish authorship. The Office has stated clearly that merely entering prompts into a generative AI system does not give users enough creative control to constitute authorship of the output. The prompts themselves, however, may be copyrightable if they are sufficiently creative.
Third, copyrightability determinations will be made case by case, examining whether the human's contributions reflect original creative expression.
Internationally, the situation is even more fragmented. The EU AI Act becomes fully applicable on August 2, 2027; certain provisions have been in force since August 2, 2025, and others become enforceable on August 2, 2026. The Act imposes transparency and copyright compliance obligations on all general-purpose AI model providers operating in the EU. Article 53 requires providers to implement a copyright compliance policy — including honoring reservations of rights under Article 4(3) of Directive (EU) 2019/790 — and to publish, and keep up to date, detailed summaries of their training data. Penalties are set out in Article 99: non-compliance by operators or notified bodies with provisions other than those in Article 5 carries fines of up to EUR 15 million or 3% of total worldwide annual turnover for the preceding financial year, whichever is higher. Meanwhile, Beijing's Internet Court has recognized copyright protection for AI-generated images where "demonstrable human intellectual effort" is involved, and the UAE is pursuing an innovation-friendly framework with different contours still.
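The "whichever is higher" penalty structure is worth internalizing: the EUR 15 million figure acts as a floor on the ceiling, so smaller organizations are not insulated by low turnover. A minimal sketch of the arithmetic (illustrative only; the function name is ours, and actual penalties are set by regulators within this ceiling):

```python
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Maximum fine under the structure described above:
    up to 3% of total worldwide annual turnover for the preceding
    financial year, or EUR 15 million, whichever is higher.
    Illustrative only -- not legal advice.
    """
    return max(0.03 * annual_worldwide_turnover_eur, 15_000_000.0)

# A EUR 2 billion company faces a ceiling of EUR 60 million,
# while a EUR 100 million company remains exposed up to the
# EUR 15 million minimum ceiling (3% of turnover would be only EUR 3 million).
large_cap = max_fine_eur(2_000_000_000)   # EUR 60 million
small_cap = max_fine_eur(100_000_000)     # EUR 15 million
```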
The result is a patchwork of standards across jurisdictions, layered on top of dozens of active infringement lawsuits targeting AI companies, with major fair-use rulings expected throughout 2026.
The Strategic Gap: Where Risk Meets Opportunity
The structural problem is this: organizations are producing AI-assisted content at scale — marketing copy, code, design assets, research summaries, internal documentation — and most have given no thought to whether any of it is protectable. This creates two distinct categories of exposure.
Exposure 1: Unprotectable output. If your organization is generating content through AI tools without documented human creative involvement, that output may reside in the public domain. Competitors can freely copy, adapt, and redistribute it. Trade secret protection may apply in limited circumstances, but copyright — the workhorse of content protection — does not attach to works that fail the human authorship threshold.
Exposure 2: Undisclosed AI involvement. The Copyright Office requires affirmative disclosure of AI-generated content in registration applications. Failure to disclose may constitute a material omission that renders any resulting registration voidable — an outcome with serious implications for enforcement and litigation strategy.
But the corollary is equally important: organizations that build rigorous, documented workflows for human-AI collaboration are creating defensible intellectual property that their less-disciplined competitors cannot protect. In a market where AI-generated content is becoming ubiquitous, the ability to claim enforceable copyright is a genuine competitive moat.
Recommended Actions
Based on the current legal landscape, we recommend that clients take the following steps.
- Audit your AI content pipeline. Identify every workflow in which generative AI tools contribute to content your organization creates, publishes, licenses, or relies upon. This includes marketing, product development, software engineering, and internal knowledge management. The goal is a complete inventory of AI-assisted output and the degree of human involvement at each stage.
- Establish human authorship protocols. For any content you intend to protect or license, implement documented procedures that ensure meaningful human creative contribution at the stages that matter. This goes well beyond prompt engineering. It means human selection, arrangement, and editing of AI-generated elements — with contemporaneous records. The Copyright Office has signaled that it will examine whether the human’s contributions reflect original expression; your documentation should be calibrated to that standard.
- Update registration practices. Ensure that your IP team and outside counsel are disclosing AI involvement in copyright applications and correctly delineating the human-authored portions of each work. Retroactively review any registrations filed since your organization began using generative AI tools.
- Prepare for EU AI Act compliance. If your organization develops or deploys general-purpose AI models in the EU — or produces content using such models for EU audiences — assess your obligations under Article 53 now, as that Article has been in force since August 2, 2025. This includes copyright compliance policies, training data documentation, and content labeling requirements.
- Revisit existing IP agreements. Review your vendor contracts, licensing agreements, and employment IP assignment clauses. Many were drafted before generative AI entered the picture and may not adequately address ownership of AI-assisted works, indemnification for AI-related infringement claims, or representations regarding originality.
- Monitor the litigation landscape. The active AI copyright cases will produce rulings that reshape the field, potentially within months. Assign someone on your team — or engage outside counsel — to track these developments and translate them into updated internal guidance on a rolling basis.
The Window
The current environment is characterized by legal uncertainty, and uncertainty creates asymmetric opportunity. Organizations that invest now in defensible human-AI workflows will hold protectable IP in a market increasingly flooded with unprotectable output. Those that wait for Congress or the courts to provide definitive rules will find, by the time clarity arrives, that the competitive advantage has already been captured.
The EU enforcement deadlines and the wave of pending fair-use decisions will narrow the information gap considerably. The time to act is before those events reshape the playing field — not after.