Every LLM hallucinates. Perplexity's Sonar family tackles this by grounding answers in live web results and citing its sources. Here is when to use Sonar instead of Claude or GPT.

Ask Claude or GPT about a news event from last week. They will either refuse (knowledge cutoff), hallucinate (sound confident, make it up), or lean on a web search tool if you enabled one. Perplexity built its entire product around solving this. Sonar models are tuned to search first, reason over the results, and cite every claim.
| Model | What it is | Price | Use case |
|---|---|---|---|
| Sonar | Standard grounded chat | $1 in / $1 out + $5/1k searches | Web-connected bot, quick fact checks |
| Sonar Pro | Premium with longer reasoning | $3 in / $15 out + search | Citation-required research answers |
| Sonar Deep Research | Multi-hop research agent | $5 in + search + $5/1k steps | Analyst reports, due diligence |
| Agentic Research API | Sonar tools + any third-party LLM | Provider rates + Perplexity fee | Adding web search to Claude/GPT stacks |
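Calling Sonar is a small change if you already use the OpenAI SDK: point the client at Perplexity's base URL and pass your Perplexity key, and the familiar chat-completions call works as-is.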
```python
import os

from openai import OpenAI

# Perplexity's API is OpenAI-compatible: same SDK, different base URL
client = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],
    base_url="https://api.perplexity.ai",
)

resp = client.chat.completions.create(
    model="sonar-pro",
    messages=[
        {"role": "user", "content": "What were the major model releases from April 16-22, 2026?"}
    ],
)

# The answer comes with inline citations and a 'citations' list
print(resp.choices[0].message.content)
print(resp.citations)  # array of URLs backing each claim
```

Citations come back as a structured list you can render as footnotes. This is the product.
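As a minimal sketch of that footnote rendering, assuming `resp` is the completion object from the snippet above and `resp.citations` is a plain list of URL strings:

```python
# Sketch: append the citations list to the answer as numbered footnotes.
# Assumes `resp` comes from the previous snippet and `resp.citations`
# is a list of URLs in citation order.
answer = resp.choices[0].message.content
footnotes = "\n".join(f"[{i}] {url}" for i, url in enumerate(resp.citations, start=1))
print(f"{answer}\n\nSources:\n{footnotes}")
```

Sonar's inline markers in the answer text (e.g. [1], [2]) refer to positions in that list, so the footnote numbering should line up with the claims they back.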