The hype says GenAI is coming for legal research. We put that claim to the test - and our students did too.
This presentation reports on a 1L legal research course designed from scratch to integrate technology fluency and GenAI across major assignments. Rather than avoiding GenAI or treating it as a threat, we built a curriculum that requires students to use it, evaluate it and reflect on it in writing. The results were not what the doomsayers predict.
Through summative written reflections tied to hands-on research tasks, a consistent pattern emerged: students found that GenAI tools created more work, not less. Outputs required significant verification and correction, hallucinations were common, and the gap between AI-generated results and professionally reliable research was impossible to ignore. Far from replacing the skills we teach, GenAI exposed exactly why those skills matter. The takeaway isn't that GenAI is useless - it's that uncritical AI use is a professional liability, and that legal research courses are the right place to teach students how to discover that for themselves.
We'll share how the course is structured, what the student reflection data actually shows and what it means for legal research instruction going forward. Attendees will leave with a replicable course design framework, sample assignment structures and a data-grounded argument that 'Fit for Legal Education' means stress-testing GenAI rather than surrendering to it.