Lesson 79 of 1570
Perplexity Sonar — when search-first beats raw reasoning
Every LLM hallucinates. Perplexity's Sonar family tackles the problem by grounding answers in live web results, with a citation for every claim. Here is when to reach for Sonar instead of Claude or GPT.
Lesson map
The main moves in order:
1. The hallucination problem
2. The Sonar lineup
3. A Sonar call with citations
Section 1
The hallucination problem
Ask Claude or GPT about a news event from last week. They will either refuse (knowledge cutoff), hallucinate (sound confident, make it up), or lean on a web search tool if you enabled one. Perplexity built its entire product around this gap. Sonar models are tuned to search first, reason over the results, and cite every claim.
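To make "search first, cite every claim" concrete, here is a toy sketch of the grounding step — not Perplexity's actual pipeline, just the shape of the output: each claim carries a numbered marker, and every number maps back to a source URL. All names and URLs below are made up for illustration.

```python
def ground_answer(claims_with_sources):
    """Format (claim, source_url) pairs as a cited answer plus a citation list.

    Reuses an existing citation number when the same URL backs multiple claims.
    """
    citations = []
    sentences = []
    for claim, url in claims_with_sources:
        if url not in citations:
            citations.append(url)
        idx = citations.index(url) + 1  # 1-based footnote number
        sentences.append(f"{claim} [{idx}]")
    return " ".join(sentences), citations

answer, cites = ground_answer([
    ("Model X launched Tuesday.", "https://example.com/launch"),
    ("It tops the leaderboard.", "https://example.com/benchmark"),
])
print(answer)  # Model X launched Tuesday. [1] It tops the leaderboard. [2]
print(cites)
```

This is the contract the rest of the lesson relies on: an answer string with inline markers, plus a parallel list of URLs.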
Section 2
The Sonar lineup
Compare the options
| Model | What it is | Price | Use case |
|---|---|---|---|
| Sonar | Standard grounded chat | $1 in / $1 out + $5/1k searches | Web-connected bot, quick fact checks |
| Sonar Pro | Premium with longer reasoning | $3 in / $15 out + search | Citation-required research answers |
| Sonar Deep Research | Multi-hop research agent | $5 in + search + $5/1k steps | Analyst reports, due diligence |
| Agentic Research API | Sonar tools + any third-party LLM | Provider rates + Perplexity fee | Adding web search to Claude/GPT stacks |
Pick Sonar when
- Your answer has to be current — news, stock prices, sports scores, product launches
- You need citations the user can click
- You want one API that handles both the search and the synthesis
Stick with Claude/GPT when
- The answer is timeless (math, code, writing)
- You already enabled the lab's own web search tool and it works
- Token cost matters more than grounding quality
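The two checklists above collapse into a simple routing rule. A minimal sketch — the Sonar model IDs come from the table, while `"general-purpose-llm"` is a placeholder for whichever Claude/GPT model you already run:

```python
def pick_model(needs_current_info: bool, needs_citations: bool) -> str:
    """Route a request per the checklists above."""
    if needs_citations:
        # Citation-required research answers -> premium grounded model
        return "sonar-pro"
    if needs_current_info:
        # Current but casual: standard grounded chat is enough
        return "sonar"
    # Timeless task (math, code, writing): skip the search fee entirely
    return "general-purpose-llm"  # placeholder for your Claude/GPT model

print(pick_model(needs_current_info=True, needs_citations=True))    # sonar-pro
print(pick_model(needs_current_info=True, needs_citations=False))   # sonar
print(pick_model(needs_current_info=False, needs_citations=False))  # general-purpose-llm
```

In practice the flags would come from your own request classification; the point is that grounding is a per-request decision, not an account-wide one.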
Section 3
A Sonar call with citations
Citations come back as a structured list you can render as footnotes. This is the product.
```python
import os

from openai import OpenAI

# Perplexity is OpenAI-compatible: same SDK, different base URL
client = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],
    base_url="https://api.perplexity.ai",
)

resp = client.chat.completions.create(
    model="sonar-pro",
    messages=[{"role": "user", "content": "What were the major model releases from April 16-22, 2026?"}],
)

# The answer comes with inline citation markers and a 'citations' list
print(resp.choices[0].message.content)
print(resp.citations)  # array of URLs backing each claim
```
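Since the citations come back as a plain list of URLs, turning them into clickable footnotes is a few lines. A sketch, assuming the list order matches the inline [1], [2] markers in the answer text:

```python
def citations_to_footnotes(citations):
    """Render a list of source URLs as numbered markdown link definitions."""
    return "\n".join(f"[{i}]: {url}" for i, url in enumerate(citations, start=1))

print(citations_to_footnotes([
    "https://example.com/release-notes",
    "https://example.com/benchmark",
]))
# [1]: https://example.com/release-notes
# [2]: https://example.com/benchmark
```

Paired with the answer string, this gives you a complete markdown document where every inline marker resolves to a real link — which is exactly the product Sonar is selling.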
