Legal scholars have proposed multiple frameworks for classifying AI output and determining applicable liability standards. Each framework has distinct implications for plaintiffs, defendants, and the broader AI industry.

1. First Amendment Speech Framework

AI as Protected Speaker (Strength: Moderate)

Proponents: Eugene Volokh, Mark A. Lemley, Peter Henderson

Core Argument: AI output may receive First Amendment protection through three channels: (1) the rights of AI creators to speak through their tools, (2) the rights of users to listen to AI-generated content, and (3) the rights of users to speak using AI assistance.

"Even though current AI programs are of course not people and do not themselves have constitutional rights, their speech may potentially be protected because of the rights of the programs' creators."
1

Implications: If AI output is speech, Brandenburg's incitement standard would apply, which requires intent to incite imminent lawless action; because an AI system cannot form intent, this could sharply limit liability for AI-caused harm.

Weakness: Breaks down when AI generates novel harmful content not foreseeable by creators.

2. Products Liability Framework

AI as Defective Product (Strength: Strong)

Proponents: Nina Brown, Catherine Sharkey

Core Argument: AI chatbots should be treated as products subject to products liability law, including potential strict liability for defects. By framing the harm as a product defect rather than speech, this approach largely sidesteps First Amendment questions.

"Pleading AI-generated defamation as a products liability claim may offer plaintiffs a viable path to recovery that avoids thorny questions about AI speech and Section 230."
2

Implications: Enables plaintiffs to recover for AI harm without proving intent or negligence; shifts focus to whether the AI had a "defect."

Challenge: Courts may resist treating informational outputs as "products" rather than "services."

3. Section 230 Exclusion Framework

LLMs as Information Content Providers (Strength: Strong)

Proponents: Matt Perault, CDT, Senator Ron Wyden (co-author of Section 230)

Core Argument: Section 230 immunity does not apply to generative AI outputs because the AI company is "responsible, in whole or in part, for the creation or development" of the content—the statutory definition of an information content provider.

"Given that Generative AI systems engage in a wide breadth of functions... determining whether the system is an 'information content provider' with respect to particular content... would likely vary on a case by case basis."
3

Implications: AI companies cannot rely on Section 230 as a defense for AI-generated content that causes harm.

Note: Neither OpenAI nor Microsoft has raised Section 230 as a defense in recent defamation cases (Walters v. OpenAI, Battle v. Microsoft).[4]

4. Negligence/Duty Framework

AI Developers Have a Duty of Care (Strength: Moderate)

Proponents: Jane Bambauer

Core Argument: Negligence law can apply to AI speech when AI developers or users fail to exercise reasonable care. The key question is defining the duty owed to potential plaintiffs.

"Courts may seize on the distinction that... users may rely so completely on the synthesis that an AI program provides that they essentially outsource the decision-making process."
5

Implications: Creates intermediate standard between strict liability and full speech protection; focuses on reasonableness of AI design and deployment.

Application: Most applicable when AI causes physical harm through misinformation (e.g., incorrect medical or safety advice).

5. Listener Rights Framework

Protection via User Rights (Strength: Moderate)

Proponents: Volokh, Henderson, Lemley (J. Free Speech Law)

Core Argument: Even if AI itself has no constitutional rights, its output may be protected because humans have a First Amendment right to receive information and ideas, including from algorithmic sources.

Implications: Restricting AI output could violate users' rights to access information, even if the AI itself is not a rights-holder.

Limitation: Does not address cases where AI output causes discrete harm to identified individuals.

6. Hybrid/Context-Dependent Framework

Classification Varies by Use Case (Strength: Emerging)

Source: Harvard Law Review, CDT analysis

Core Argument: Different rules should apply to different AI use cases: when AI merely retrieves existing information (as in search), stronger speech protection may apply; when AI synthesizes novel content, weaker protection is warranted.

Implications: Avoids one-size-fits-all approach; allows courts to develop nuanced doctrine over time.

Challenge: Creates uncertainty for AI developers who may not know ex ante which standard applies.

Framework Comparison Matrix

| Framework | Favors | Intent Required? | Precedent Strength |
| --- | --- | --- | --- |
| First Amendment Speech | Defendants | Yes (Brandenburg) | Moderate |
| Products Liability | Plaintiffs | No | Strong |
| Section 230 Exclusion | Plaintiffs | Case-dependent | Strong |
| Negligence/Duty | Mixed | No (reasonableness) | Moderate |
| Listener Rights | Defendants | Yes | Moderate |
| Hybrid Approach | Mixed | Context-dependent | Emerging |

Methodology Note

Framework strength ratings reflect scholarly consensus and judicial reception as of January 2026. Ratings may change as litigation progresses and courts issue rulings.