Generative AI can deliver research reports, complete with citations and analysis, almost instantly. Students must learn how to evaluate whether the authority presented in such reports is the best available for a given task. This talk describes two exercises from first-year and upper-level legal research courses. Both exercises build professional judgment within a human-in-the-loop framework, positioning students as strategists, analysts, and ultimate decision-makers.
The Citation Analysis exercise provides students with an AI-generated research report. Students identify the strongest authorities among the cited sources and supplement them as needed using citators, headnotes, and Key Numbers. The Starter Kit exercise places AI-generated reports alongside traditional sources such as treatises, Practical Law, and annotated codes. After reviewing all the materials, students complete the remaining research themselves. Across both exercises, students weigh competing authorities, identify gaps, and take ownership of the decisions that underlie their final work product.
The session will also briefly discuss one way to address students' often complex attitudes toward AI, acknowledging how their philosophy of technology use shapes their professional identity.