Generative AI (gen AI) is rapidly transforming attorneys’ research behavior at a scale that exceeds the legal profession’s current capacity to respond. Since the first reported instance of attorneys’ misuse of gen AI in litigation in 2023, similar cases have continued to multiply. A review of 102 litigation matters involving misuse of gen AI tools indicates that courts’ responses remain inadequate: courts tend to apply existing instruments, such as court rules and rules of procedure, narrowly, interpreting and enforcing “reasonable inquiry” and “competence” requirements while treating output verification as a cure-all, leaving broader questions of research integrity unaddressed. The review also shows that judges rarely engage with the technical workings of the AI tools and foundation models involved.
This article argues that while “verify the output” is a necessary directive, it is insufficient to address gen AI’s impact on legal research. Limiting accountability to citation hygiene neither resolves the underlying issues nor adequately accounts for the role gen AI plays in legal research workflows. More importantly, it risks undermining the instrumental, epistemic, cognitive, and ethical integrity of legal research, which underpins both legal practice and long-term knowledge development.
To address these challenges, the article proposes a process-oriented framework for evaluating AI-assisted legal research. Using a revised Data-Information-Knowledge-Wisdom (DIKW) model as an analytical lens, the study examines how gen AI interacts with each stage of the legal research process. Within this framework, the integrity of legal research is assessed across four interconnected dimensions: instrumental integrity (efficiency and risk calibration), epistemic integrity (reliability and authority of legal knowledge), cognitive integrity (critical reasoning and professional judgment), and ethical integrity (responsibility to clients, courts, and society).
By shifting attention from outputs to the research process itself, this framework provides courts with a more comprehensive way to understand and regulate AI-assisted legal research while remaining consistent with existing procedural and legal ethics rules.