Lesson 167 of 2116
Elicit: The AI Research Assistant For Systematic Reviews
Elicit automates the slow parts of academic research: finding papers, extracting data, and building literature matrices. Here's a look at how it can save PhD researchers 20 hours a week.
Lesson map
What this lesson covers
Learning path
The main moves in order
1. What it's genuinely good at
2. What it struggles with
3. Pricing (April 2026)
Elicit is an AI research assistant built by Ought, designed to automate tedious parts of academic research — finding relevant papers, extracting key findings from them, and building structured literature matrices. It's less about quick answers (like Consensus) and more about doing the actual work of a systematic review. By 2026 it's widely adopted in PhD programs, policy research, and biotech companies, and it charges accordingly.
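A literature matrix is just papers as rows and extracted fields as columns. A minimal sketch of that shape, with invented placeholder papers and values (not real extractions, and not Elicit's internal format):

```python
import csv
import io

# Each row is one paper; each key is an extracted field.
# The papers and numbers below are invented placeholders.
matrix = [
    {"paper": "Smith 2023", "n": 120, "effect_size": 0.42, "method": "RCT"},
    {"paper": "Lee 2024",   "n": 85,  "effect_size": 0.31, "method": "cohort"},
]

# Render the matrix as CSV, the kind of table Elicit exports.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["paper", "n", "effect_size", "method"])
writer.writeheader()
writer.writerows(matrix)
print(buf.getvalue())
```

The value of the format is that every paper gets the same columns, so gaps and outliers are visible at a glance.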
Section 1
What it's genuinely good at
- Paper finding — surfaces relevant papers beyond what keyword search would catch.
- Data extraction — pulls sample sizes, effect sizes, methods, and outcomes into a table.
- Summary columns — customize what Elicit extracts from each paper you care about.
- Literature matrix — structured output that maps to how reviewers actually work.
- Citation analysis — snowball citations to find related work.
- Stays grounded — answers are strictly from the papers in your matrix.
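The citation-snowballing move above is, at its core, a breadth-first walk over a citation graph. A generic sketch with a toy, invented graph (this is the technique in the abstract, not Elicit's implementation or API):

```python
from collections import deque

# Hypothetical citation graph: paper id -> ids of papers it cites.
# Illustrative data only.
CITES = {
    "seed": ["a", "b"],
    "a": ["c"],
    "b": ["c", "d"],
    "c": [],
    "d": ["e"],
    "e": [],
}

def snowball(seed, graph, max_depth=2):
    """Breadth-first 'snowball' over the citation graph, up to max_depth hops."""
    found, queue = {seed}, deque([(seed, 0)])
    while queue:
        paper, depth = queue.popleft()
        if depth == max_depth:
            continue  # don't expand beyond the hop limit
        for cited in graph.get(paper, []):
            if cited not in found:
                found.add(cited)
                queue.append((cited, depth + 1))
    return found

print(sorted(snowball("seed", CITES)))  # papers reachable within 2 hops
```

Real snowballing also runs the graph backward (papers that cite the seed), which is the same walk over a reversed edge set.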
Section 2
What it struggles with
- Extraction errors — data extraction misreads tables, especially in scanned PDFs.
- Slower than you'd expect — processing 50+ papers takes many minutes per step.
- Cost — research-scale use burns through credits on the Plus plan quickly.
- Limited to open-access papers for full-text analysis — paywalled papers are abstract-only.
- UI is busy — takes real time to learn the workflow.
Section 3
Pricing (April 2026)
- Basic: Free — limited searches, basic summarization.
- Plus: $12/month — 12,000 credits/month, full extraction features.
- Pro: $42/month — 50,000 credits, priority processing, Notebooks.
- Enterprise: Custom — team features, larger usage, institutional billing.
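Those credit budgets only matter relative to what each task costs. As a rough sanity check, assuming a hypothetical 100 credits per full-text extraction (the lesson doesn't state Elicit's actual per-task rates, which vary), the plans translate to:

```python
# CREDITS_PER_PAPER is a hypothetical figure for illustration only;
# check Elicit's current pricing for real per-task costs.
CREDITS_PER_PAPER = 100

plans = {"Plus": 12_000, "Pro": 50_000}
for name, credits in plans.items():
    print(f"{name}: ~{credits // CREDITS_PER_PAPER} papers/month")
```

At that assumed rate, a single 50-paper systematic review would consume a large share of a Plus month, which is the "burns through credits" problem in concrete terms.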
Who should bother: PhD candidates doing literature reviews, policy researchers, biotech companies doing evidence assessments, anyone running systematic reviews. Who shouldn't: casual users (Consensus is simpler), humanities researchers whose papers aren't in the corpus, price-sensitive users. Elicit is the best tool for the narrow but valuable use case of structured academic research automation.
Related lessons
Keep going
Creators · 38 min
Building a Personal AI Stack for School and Career
Assemble the four or five AI tools that actually belong in your daily life. A tested template for the stack that earns its keep.
Creators · 35 min
Codex CLI: OpenAI's Answer to Claude Code
Codex CLI is OpenAI's open-source terminal coding agent. Here's how it compares to Claude Code, what it does uniquely, and why it matters to non-Anthropic shops.
Creators · 35 min
Running a Literature Review With AI
AI turns weeks of literature review into days — if you know how to use it. Here is a workflow that actually works.
