Six categories where AI is wrong often enough, and dangerously enough, that you should always verify — or skip the AI entirely.
8 min · Reviewed 2026
Confidence is not the same as correctness
Chatbots write smoothly even when they are wrong. That smooth voice can be more convincing than the truth. Knowing when not to trust them is a skill.
Six high-risk categories
Specific medical dosages or drug interactions — confirm with a pharmacist.
Legal questions about your specific situation — confirm with a lawyer.
Recent news (the AI's information may be months old).
Specific phone numbers, addresses, or business hours — call to verify.
Genealogy facts — the AI may invent ancestors that match your description.
Anything where being wrong would hurt someone.
When AI is usually fine
Drafting a friendly email or note.
Explaining a general concept (gravity, photosynthesis, how a bill becomes a law).
Suggesting recipe substitutions.
Helping you brainstorm a list of options.
The big idea: trust AI for ideas. Trust humans and primary sources for facts that matter.
End-of-lesson check
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-seniors-when-not-to-trust-ai-creators
A chatbot provides a detailed, confident answer that turns out to be completely false. What term do professionals use to describe this phenomenon?
A computational error
A syntax failure
A data overflow
A hallucination
Which scenario represents the highest risk when relying on AI-generated information?
Using AI to brainstorm essay topics
Using AI-generated medical dosage recommendations for treatment
Requesting AI to draft a birthday card message
Asking AI to explain how photosynthesis works
A student asks an AI for the business hours of a local restaurant. What does the lesson recommend?
Both searching the website and calling directly are recommended
Search the restaurant's website yourself
Call the restaurant directly to verify
Accept the AI's response as reliable
Why can AI's smooth, professional writing style be dangerous?
It always indicates the information is accurate
It requires more computational power to generate
It makes false information feel more believable
It uses vocabulary that is too advanced
A user asks an AI about their family history and receives detailed information about ancestors they never knew existed. What should they be concerned about?
The AI might be too slow to search properly
The AI might have a virus
The AI might be inventing ancestors that match the description
The AI might be accessing private family records
What is a primary source?
A summary written by an expert in the field
The first website that appears in search results
An AI-generated compilation of multiple sources
An original document or direct evidence from the time period or event
Why might AI-generated news information be unreliable even if it sounds current?
AI always invents fake headlines
AI cannot write in proper journalistic style
AI databases may only be updated periodically, potentially leaving out recent events
AI is legally prohibited from discussing news
A friend asks if they should use an AI chatbot to help them write a legal document regarding a landlord dispute. What does the lesson advise?
They should verify any legal information with an actual lawyer
They should only use AI for the greeting of the document
It's perfectly safe since AI writes professionally
They should use AI but cite it as a source
Why should you confirm AI-generated medical information with a pharmacist rather than relying solely on the AI response?
Medical dosages and drug interactions can have life-or-death consequences
Pharmacists have access to the same AI tools
Pharmacists are more expensive than AI consultations
AI is not allowed to discuss medical topics
Which of the following is an example of an AI 'hallucination'?
AI refusing to answer due to safety concerns
AI providing a confident but completely fabricated quote from a real author
AI admitting it doesn't have enough information to answer
AI asking for clarification on an unclear question
The lesson states that AI is 'usually fine' for which of the following tasks?
Suggesting recipe substitutions when you're missing an ingredient
Finding contact information for emergency services
Providing legal advice for your specific case
Determining your legal rights in a contract dispute
What is the relationship between AI confidence and accuracy as described in the lesson?
Confidence does not equal correctness
High confidence always means high accuracy
Confidence and accuracy are unrelated concepts
AI confidence is always low when accurate
A user wants to use AI to find the exact address and phone number for a government office in their city. What should they do instead?
Call the office or visit their official website to verify
Search social media for the information
Trust the AI since it's a government office
Ask a friend who visited last year
The lesson mentions categories where 'being wrong would hurt someone.' What type of thinking does this represent?
Risk assessment - evaluating potential consequences of errors
Emotional thinking - considering feelings about decisions
Creative thinking - imagining different scenarios
Competitive thinking - comparing AI to human performance
Which scenario most clearly illustrates the lesson's advice about using AI for brainstorming?
Using AI to decide who to vote for in an election
Using AI to generate a list of possible essay topics to explore further
Using AI to find the current winner of a sporting event
Using AI to determine if a contract is legally binding