Everyone is talking about how AI can help law students write. This session is about teaching students why AI often shouldn't be trusted to do so — and how that insight becomes the most powerful AI literacy lesson you can give a 1L.
At USF School of Law, we integrated generative AI into the required first-year legal writing curriculum, in partnership with Anthropic and Accordance. The anchor assignment — the Translation Project — asks students to rewrite their completed persuasive brief for three different audiences: a client email, a settlement communication to opposing counsel, and a press statement. Students draft each version twice: once traditionally, once with AI assistance using institutional Claude accounts that mirror enterprise law firm usage.
The result is not a lesson in how well AI writes. It's a lesson in how AI reliably fails. AI consistently over-discloses, buries strategic choices, ignores audience, and treats every communication as if it should read like a brief. Students who have already mastered persuasive legal writing can see these failures clearly — and that's the point. Foundational competence before AI exposure isn't just a sequencing choice. It's what makes students expert evaluators rather than dependent users.
This session will cover: (1) the curriculum design and pedagogical rationale; (2) concrete, practitioner-relevant examples of AI failure in legal writing tasks (including Lexis AI case summary accuracy as a live demo); (3) how we assess AI literacy alongside writing quality; and (4) what this model suggests about "tech competency" for law students — and whether anyone can really define it.
Attendees will leave with a replicable assignment framework and a sharper answer to the question the whole field is wrestling with: not "should law students use AI?" but "how do we teach them to know when not to?"