14 Total Sources · 5 Tier A · 6 Tier B · 3 Tier C

Tier A: Primary Legal Sources

Primary legal authorities: statutes, case law, and peer-reviewed law journals.

TIER A Volokh, Lemley & Henderson (2023)

"Freedom of Speech and AI Output"

Journal of Free Speech Law, Vol. 3. Comprehensive First Amendment analysis examining AI creators' rights to speak, users' rights to listen, and users' rights to speak. Key argument: listener value exists regardless of speaker identity.

PDF | 149 KB | Academic | View Source ↗ [Archive]

TIER A Nina Brown (2023)

"Bots Behaving Badly: A Products Liability Approach to Chatbot-Generated Defamation"

Journal of Free Speech Law, Vol. 3. Argues chatbots should be treated as products subject to strict liability. Addresses the "is it a product?" question and economic loss doctrine challenges.

PDF | 427 KB | Academic | View Source ↗ [Archive]

TIER A Harvard Law Review (2024)

"Beyond 230: Reimagining Intermediary Liability"

Comprehensive analysis of Section 230's applicability to generative AI, examining why traditional intermediary liability frameworks don't map cleanly to LLM outputs.

Web | Academic | View Source ↗ [Archive]

TIER A Henderson, Hashimoto & Lemley (2023)

"Where's the Liability in Harmful AI Speech?"

Journal of Free Speech Law, Vol. 3. Technical examination of foundation model design decisions and their implications for liability analysis, including red-teaming scenarios.

PDF | 1.0 MB | Academic | View Source ↗ [Archive]

TIER A Raine v. OpenAI Complaint (2025)

First wrongful death suit against OpenAI

Filed August 2025 in California state court. Alleges ChatGPT contributed to the suicide of the plaintiffs' teenage son. Currently the leading test case for AI liability precedent.

PDF | Primary Litigation | View Source ↗ [Archive]

Tier B: Expert Analysis

Think tank reports, expert commentary, and institutional analysis.

TIER B Matt Perault (2023)

"Section 230 Won't Protect ChatGPT"

Journal of Free Speech Law, Vol. 3. Argues LLMs are "information content providers" that "develop" content, placing them outside Section 230's safe harbor.

PDF | 172 KB | Legal Analysis | View Source ↗ [Archive]

TIER B Jane Bambauer (2023)

"Negligent AI Speech: Some Thoughts About Duty"

Journal of Free Speech Law, Vol. 3. Examines when AI developers should have a duty of care for AI speech outputs, focusing on special relationships and foreseeability.

PDF | 251 KB | Legal Analysis | View Source ↗ [Archive]

TIER B CDT (2024)

"Section 230 and its Applicability to Generative AI"

Center for Democracy & Technology analysis. Examines content provider vs. intermediary distinction for LLMs.

Web | Think Tank | View Source ↗ [Archive]

TIER B Lawfare (2024)

"Section 230 and ChatGPT"

Legal analysis examining whether Section 230 of the 1996 Communications Decency Act provides immunity for AI-generated content.

Web | Legal Analysis | View Source ↗ [Archive]

TIER B AEI/Constitution Center (2024)

Brandenburg Standard Analysis

Examines whether the Brandenburg v. Ohio "imminent lawless action" test applies to AI speech, and the paradox of applying intent requirements to non-sentient systems.

Web | Think Tank | View Source ↗ [Archive]

TIER B FIRE (2024)

"Protecting AI Speech"

Foundation for Individual Rights and Expression analysis. Argues for strong First Amendment protection for AI-generated content.

Web | Advocacy | View Source ↗ [Archive]

Tier C: News & Reporting

Journalism and news reporting providing context and developments.

TIER C Wired, TechCrunch, Reuters (2025-2026)

Garcia v. Character.AI Coverage

Settlement reporting from January 7, 2026. Coverage of Judge Conway's order rejecting Character.AI's First Amendment defense and its implications for AI liability doctrine.

Web | News | CNN ↗ [Archive] | Reuters ↗ [Archive]

Tier Definitions

TIER A Primary legal sources: case law, statutes, peer-reviewed law journals

TIER B Expert analysis: think tanks, academic commentary, institutional reports

TIER C News and reporting: journalism providing context and developments