Lesson 1414 of 2116
AI Tracing Platforms: Langfuse, LangSmith, Helicone, Phoenix
Compare tracing and observability platforms specifically for LLM and agent applications.
Lesson map
What this lesson covers, in order:
1. The premise
2. LLM tracing
3. Span hierarchy
4. Prompt logging
Section 1
The premise
LLM tracing differs from generic APM: purpose-built tools capture the prompt-level metadata (model, tokens, tool calls, cost) that generic spans omit.
What these platforms do well
- Capture full prompt, response, tool call, and cost per span.
- Provide replay and diff across runs.
- Integrate with eval suites for regression detection.
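The span hierarchy these capabilities rest on can be sketched without any particular SDK. The snippet below is a minimal, vendor-neutral illustration (the `LLMSpan` class and its fields are assumptions for this sketch, not the API of Langfuse, LangSmith, Helicone, or Phoenix): each span carries prompt, response, and cost, and cost rolls up through child spans.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LLMSpan:
    """One unit of work in an LLM trace: a model call, tool call, or chain step."""
    name: str
    prompt: Optional[str] = None
    response: Optional[str] = None
    model: Optional[str] = None
    cost_usd: float = 0.0
    children: list["LLMSpan"] = field(default_factory=list)

    def total_cost(self) -> float:
        # Roll cost up across the span hierarchy, as tracing UIs do per trace.
        return self.cost_usd + sum(c.total_cost() for c in self.children)

# Example: one agent run containing an LLM call and a tool call as child spans.
root = LLMSpan(name="agent_run")
root.children.append(LLMSpan(
    name="plan",
    model="gpt-4o",
    prompt="Summarize the ticket",
    response="Call the search tool",
    cost_usd=0.0042,
))
root.children.append(LLMSpan(name="tool:search", cost_usd=0.0))

print(f"{root.total_cost():.4f}")  # 0.0042
```

Real platforms add IDs, timestamps, and token counts per span, but the tree-plus-rollup shape is the same.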
What these platforms cannot do
- Replace your generic APM for the non-LLM components of your stack.
- Control trace-volume costs on their own — that takes sampling discipline on your side.
Related lessons
- LLM Observability Tools: What to Trace, What to Sample, What to Alert (Creators · 40 min) — LLM observability tools (LangSmith, Langfuse, Helicone, Datadog LLM, custom) all trace conversations. The differentiation is in evaluation, dashboards, and alerting — and choosing the wrong tool wastes months.
- AI cost attribution tools (Creators · 11 min) — Attribute LLM spend to teams, features, and customers.
- Structured Outputs: Make the Model Return Data You Can Trust (Creators · 45 min) — For production apps, pretty prose is often the wrong output. Learn when to use structured outputs, function calling, and schema validation.