# AI Speech Classification & First Amendment Liability
## The Central Question
Is AI chatbot output protected speech under the First Amendment, or is it a product subject to standard liability frameworks?
This question remains unresolved after Garcia v. Character.AI settled on January 7, 2026. The most significant judicial statement to date is the Conway Order (May 2025), in which the court stated it was not prepared to hold that AI output is speech.
## The Central Paradox
**If AI output = speech:**
- The Brandenburg incitement standard applies
- Brandenburg requires intent to incite imminent lawless action
- An AI system cannot form intent
- → AI-caused harm becomes effectively unregulable?

**If AI output ≠ speech:**
- Products liability applies
- No intent requirement
- Strict liability is possible
- → The standard tort framework is available
## Key Developments
### Garcia v. Character.AI - Settled January 7, 2026
The most prominent AI speech case settled before the court reached the central First Amendment question. The court had indicated it was "not prepared to hold" that AI output constitutes protected speech, but because the case settled, that statement will not ripen into binding precedent.
### Raine v. OpenAI - Active (California State Court)
Filed in August 2025, Raine is a test case examining AI developer liability for alleged user harm. It is currently the leading vehicle for establishing precedent on how AI output should be classified.
## Framework Summary
| Framework | Classification of AI Output | Argument Strength | Key Proponents |
|---|---|---|---|
| First Amendment Speech | AI as speaker/publisher | MODERATE | Volokh, Lemley, Henderson |
| Products Liability | AI as defective product | STRONG | Nina Brown, Sharkey |
| Section 230 Exclusion | LLMs as information content providers (no §230 immunity) | STRONG | Matt Perault, CDT |
| Negligence/Duty | AI developers owe a duty of care | MODERATE | Jane Bambauer |
| Listener Rights | Protection via users' right to receive speech | MODERATE | Volokh (J. Free Speech L.) |
| Hybrid Approach | Context-dependent classification | EMERGING | Harvard Law Review |
## Source Tiers
- TIER A: Primary legal sources (cases, statutes, law reviews)
- TIER B: Expert analysis (think tanks, academic commentary)
- TIER C: News and reporting