The premise
LLM-based review is dramatically faster than TAR 1.0, but defensibility requires the same methodological rigor courts have always demanded of technology-assisted review.
What a defensible deployment requires
- Establish a documented review protocol before deployment (not after a Rule 26 challenge)
- Maintain a control set of human-coded documents to validate AI accuracy
- Run accuracy testing per responsiveness category, not just overall
- Document privilege identification methodology separately — courts scrutinize this most
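The control-set and per-category checks above can be sketched in code. This is a minimal illustration, not a production QA pipeline: the field names (`category`, `human`, `ai`) are hypothetical, and real validation would also track privilege calls and sampling provenance.

```python
from collections import defaultdict

def validate(control_set):
    """Compare AI coding decisions against the human-coded control set.

    control_set: list of dicts with hypothetical keys:
      'category' -- responsiveness category assigned by human reviewers
      'human'    -- human responsiveness call (True/False)
      'ai'       -- AI responsiveness call (True/False)
    Returns {category: {'precision': ..., 'recall': ...}},
    including an 'overall' entry -- per-category numbers can reveal
    weak spots that a single overall figure hides.
    """
    counts = defaultdict(lambda: {'tp': 0, 'fp': 0, 'fn': 0})
    for doc in control_set:
        for key in (doc['category'], 'overall'):
            c = counts[key]
            if doc['ai'] and doc['human']:
                c['tp'] += 1          # AI and human agree: responsive
            elif doc['ai'] and not doc['human']:
                c['fp'] += 1          # AI over-called responsiveness
            elif doc['human'] and not doc['ai']:
                c['fn'] += 1          # AI missed a responsive document
    metrics = {}
    for key, c in counts.items():
        flagged = c['tp'] + c['fp']
        actual = c['tp'] + c['fn']
        metrics[key] = {
            'precision': round(c['tp'] / flagged, 3) if flagged else 0.0,
            'recall': round(c['tp'] / actual, 3) if actual else 0.0,
        }
    return metrics
```

A matter team would run this against the control set after each model or prompt change and preserve the output contemporaneously as part of the defensibility record.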
What AI cannot do
- Substitute for attorney review of privileged documents (privilege determinations remain attorney work)
- Make the responsiveness call on close-call documents
- Replace the meet-and-confer process where TAR methodology is disclosed
End-of-lesson check
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-legal-AI-discovery-document-review-adults
When implementing LLM-based document review for a discovery matter, when should the review protocol be documented?
- During the midpoint of the review process to adjust for unexpected document volumes
- Only when responding to a court order or Rule 26 challenge
- After the review is complete and before producing documents to opposing counsel
- Before deploying the AI system, as part of pre-review planning
What is the primary purpose of maintaining a control set of human-coded documents during AI-assisted review?
- To create a benchmark for billing the client for review hours
- To provide the AI system with training data to improve over time
- To validate the AI's accuracy by comparing machine decisions against human judgments
- To reduce the total number of documents requiring attorney review
According to the principles of defensible AI-assisted review, accuracy testing should be conducted:
- Only on documents flagged as potentially privileged
- Once at the beginning of the review to establish baseline metrics
- Only on the overall document population to get a single accuracy percentage
- Per responsiveness category to ensure the AI performs adequately on each document type
Why should privilege identification methodology be documented separately from general responsiveness methodology?
- Because privilege review is typically performed by different attorneys than responsiveness reviewers
- Because privilege identification typically requires more computational resources than responsiveness review
- Because privilege log production has different formatting requirements than document production
- Because courts apply heightened scrutiny to privilege determinations and need clear methodology documentation
Which of the following represents an inappropriate use of AI in document review?
- Using AI to categorize documents by document type for organizational purposes
- Allowing AI to make final determinations on whether documents contain privileged content
- Using AI to identify potential responsive documents for attorney approval
- Using AI to prioritize documents for attorney review based on responsiveness probability
On which type of document should AI not make the final responsiveness determination?
- Documents that are clearly responsive based on explicit search terms
- Documents where the responsiveness determination is a close call requiring legal judgment
- Documents that contain only metadata without substantive content
- Documents that are clearly non-responsive based on explicit search terms
Can AI-assisted review replace the meet-and-confer process with opposing counsel regarding TAR methodology?
- No, but AI can draft the initial meet-and-confer disclosure for attorney review
- No, because the meet-and-confer is a mandatory procedural requirement where methodology must be disclosed and agreed upon
- Yes, if the AI vendor provides documentation that satisfies disclosure requirements
- Yes, because AI systems are now sophisticated enough to address all methodological questions
When should defensibility documentation be created during an AI-assisted review?
- Contemporaneously with the review process, as the work is performed
- Only when the court requests it during a Rule 26 conference
- At the end of the matter as part of closing the file
- Retrospectively, after the review is complete, to support any future challenges
In designing a defensible AI review protocol, the seed-set methodology should specify:
- The criteria for selecting documents and the rationale for seed-set size
- Only the total number of documents in the seed set, not how they are selected
- The specific attorneys who will code the seed set documents
- How the AI will automatically expand the seed set during review
What role does statistical sampling play in quality assurance (QA) during AI-assisted review?
- It replaces the need for any human review of AI-flagged documents
- It is used only at the very beginning of the review process
- It determines which documents should be prioritized for privilege review
- It provides a method to estimate overall accuracy without reviewing every document
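The sampling idea behind the correct option can be made concrete: a random sample of QC-reviewed documents yields an accuracy estimate with a quantifiable margin of error. The sketch below uses the normal approximation to the binomial; the sample figures are illustrative, not drawn from any real matter.

```python
import math

def accuracy_estimate(sample_size, correct, z=1.96):
    """Estimate accuracy and a ~95% margin of error from a random QC sample.

    Normal approximation to the binomial proportion -- adequate when the
    sample contains a reasonable number of both correct and incorrect calls.
    """
    p = correct / sample_size            # observed accuracy in the sample
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, margin

# Illustrative: 384 randomly sampled documents, 361 coded correctly by the AI
p, moe = accuracy_estimate(384, 361)
print(f"estimated accuracy {p:.1%} +/- {moe:.1%}")
```

The point for defensibility is that the estimate and its margin can be stated and documented without re-reviewing the entire population.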
A Rule 26 challenge to AI-assisted review methodology most likely focuses on:
- Whether the client was billed appropriately for review time
- Whether all documents were reviewed using AI rather than human reviewers
- Whether the AI vendor had sufficient insurance coverage
- Whether the review protocol was documented before deployment and whether validation was performed
When opposing counsel challenges your AI-assisted review methodology, which approach provides the strongest defense?
- Arguing that the burden of proof lies with the challenging party
- Explaining that the AI technology is proprietary and cannot be disclosed
- Presenting contemporaneous documentation of protocols, validation testing, and sampling methodology
- Claiming that AI-assisted review is now presumptively acceptable under federal rules
Which of the following tasks remains the responsibility of human attorneys even when using advanced AI systems?
- Categorizing documents by document type (email, memo, contract)
- Extracting dates and key data points from responsive documents
- Making final determinations that documents contain attorney-client privilege
- Identifying documents that are potentially responsive to discovery requests
AI systems should not make the final determination on documents that require:
- Extraction of structured data from forms
- Legal judgment about whether content is responsive to discovery requests
- Automatic classification into document type categories
- Simple keyword searches to identify relevant content
The meet-and-confer process in discovery cannot be replaced by:
- AI-generated disclosure documents for attorney review
- AI systems reviewing and agreeing to methodology
- Written correspondence between counsel
- A joint letter outlining agreed-upon methodology