Lesson 347 of 2116
Evaluating Sources: Beyond The CRAAP Test
When your search engine is an LLM, traditional source evaluation rubrics need an upgrade. Here's the creators-tier version.
Lesson map
What this lesson covers

Learning path
The main moves in order
- 1. CRAAP was written before the synthetic web

Concept cluster
Terms to connect while reading
- source evaluation
- CRAAP
- provenance
Section 1
CRAAP was written before the synthetic web
The classic CRAAP test (Currency, Relevance, Authority, Accuracy, Purpose) assumed a source was written by a human who could be held accountable. Today, much of the first page of any search is AI-generated content about AI-generated content. The rubric needs new axes.
The updated evaluation axes
Compare the options
| Axis | Old question | New question |
|---|---|---|
| Authority | Who wrote it? | Who wrote it AND was any of it generated? |
| Currency | When was it published? | Has this post been auto-refreshed since it was cited? |
| Accuracy | Are the facts right? | Are the facts right AND independently verifiable? |
| Provenance | (not tracked) | Can I trace every claim to a primary source? |
| Incentives | What's the purpose? | Who profits if this is believed? |
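The table above can be read as a checklist: a source only passes when every axis has a defensible "yes." A minimal sketch of that idea in Python, where `SourceCheck` and `unanswered` are illustrative names invented for this lesson, not a real library:

```python
# Hypothetical sketch of the updated rubric as a yes/no checklist.
from dataclasses import dataclass, fields

@dataclass
class SourceCheck:
    """One answer per updated evaluation axis."""
    authority: bool    # known author, AND any generated content disclosed?
    currency: bool     # publication date verified, no silent auto-refresh?
    accuracy: bool     # facts independently verifiable?
    provenance: bool   # every claim traceable to a primary source?
    incentives: bool   # no obvious party profits if this is believed?

def unanswered(check: SourceCheck) -> list[str]:
    """Return the axes that still fail, i.e. the follow-up questions to ask."""
    return [f.name for f in fields(check) if not getattr(check, f.name)]

blog_post = SourceCheck(authority=True, currency=True,
                        accuracy=False, provenance=False, incentives=True)
print(unanswered(blog_post))  # -> ['accuracy', 'provenance']
```

The point of the structure is that a failing axis is not a verdict but a question still owed an answer.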
Adversarial sourcing
- For any claim, deliberately search for the opposite view
- Weight sources by their distance from the primary evidence, not their reputation
- Ask: would this source change its story if the incentives reversed?
- Read the methods section, not just the abstract
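The second bullet, weighting by distance from primary evidence, can be sketched numerically. This is an illustrative heuristic (the halving-per-hop rule and the hop counts are assumptions for the example, not a standard from the lesson):

```python
# Hypothetical weighting: each hop away from the primary evidence
# halves a source's weight, regardless of how reputable it sounds.
# hops = 0 means the raw dataset or experiment itself.
def evidence_weight(hops_from_primary: int) -> float:
    return 0.5 ** hops_from_primary

sources = [
    ("raw dataset", 0),
    ("peer-reviewed paper reporting it", 1),
    ("journalist's summary of the paper", 2),
    ("AI-generated roundup of summaries", 3),
]
for name, hops in sources:
    print(f"{name}: weight {evidence_weight(hops):.3f}")
```

Under this rule the AI-generated roundup carries one eighth the weight of the raw data, which is the adversarial-sourcing intuition in one number.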
The big idea: in the LLM era, the axis that matters most is provenance. Every claim should trace to something a human could have disagreed with, measured, or witnessed.
Related lessons
Keep going
Creators · 40 min
Literature Review With LLMs: Scope First, Search Second
Use an LLM to define the scope of your lit review before touching a search engine — the single highest-leverage move in modern research workflow.
Creators · 9 min
Synthesis Vs Summary: The Move That Separates Analysts From Aggregators
LLMs default to summarization. Research demands synthesis. Here's how to prompt for the harder, more valuable thing.
Creators · 10 min
Quantitative Analysis Prompting: Asking For Reproducible Code
When you ask an LLM to 'analyze this data,' you get a guess. When you ask it to write reproducible code, you get a collaborator.
