Lesson 330 of 1234
AI Cannot Always Explain Why It Says What It Says
Sometimes AI gives an answer but cannot explain HOW it got there. That is a real problem grown-ups call 'the black box.'
Lesson map
What this lesson covers
Learning path
The main moves in order
- 1. The big idea
- 2. Explainability
- 3. The black box
- 4. Trust
Section 1
The big idea
AI gives you an answer. But if you ask 'why?' or 'how did you decide?', AI often cannot really say. Even the people who built it sometimes do not know exactly how it decided. This is called the 'black box' problem.
Some examples
- A doctor's AI says you are at risk for a disease. The doctor wants to know why. The AI cannot fully explain.
- A bank's AI says your loan is denied. You want to know why. Often you cannot get a clear answer.
- A school's AI says your essay should get a B. Why not an A? The AI cannot really say.
- A self-driving car's AI brakes suddenly. Why? Sometimes it is a mystery.
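To see why the answer is so hard to explain, here is a tiny made-up sketch of what a model does inside: it mixes your input numbers with learned numbers called weights, then picks an answer. Everything here (the weights, the function name, the inputs) is invented just for illustration — a real model works the same way but with millions of weights.

```python
# A tiny "black box": it answers using learned numbers (weights).
# These weights are made up for this example; real models learn millions.
weights = [0.73, -1.42, 0.05, 2.18]

def black_box(features):
    # The model just multiplies and adds numbers, then says yes or no.
    score = sum(w * x for w, x in zip(weights, features))
    return "at risk" if score > 0 else "not at risk"

# It answers confidently...
print(black_box([1.0, 0.2, 3.5, 0.1]))  # -> at risk

# ...but nothing in the weights says WHY in human terms.
# 0.73 of what? -1.42 because of what? The numbers do not explain themselves.
```

That is the black box: you can read every number inside, and the answer is still not a reason a person can understand.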
Try it!
Ask AI a question. Then ask, 'why did you say that?' See how good its explanation is. Some explanations are good; some are vague.
Related lessons
Keep going
Explorers · 5 min
Why Building Trust With AI Tools Takes Time
Just like people, you build trust with AI tools over time. Knowing what each one does well comes from using them.
Explorers · 40 min
When AI Just Makes Stuff Up
Sometimes AI invents fake answers that sound true — this is called a hallucination.
Explorers · 18 min
Prompt Builder Arcade
Snap prompt pieces together to make AI give you what you actually want.
