Using AI to Sharpen Strategic Thinking and Pre-Mortems
AI as a devil's-advocate sparring partner for plans, strategies, and decisions.
11 min · Reviewed 2026
The premise
The most underused AI mode at work is asking it to disagree with you. Used as a red-team partner, AI surfaces failure modes you missed and questions assumptions you had not noticed you were making.
What AI does well here
Running a structured pre-mortem on a plan you are about to commit to
Listing the strongest objections to your strategy from a named perspective
Identifying load-bearing assumptions that, if wrong, sink the plan
Comparing your plan against well-known reference cases or frameworks
What AI cannot do
Grasp the political and human context of how the strategy will land
Know what your specific organization will tolerate or resist
Predict second-order consequences in a complex system
Push back unprompted: its default agreeableness means it tends to flatter your draft unless you explicitly ask for disagreement and counter-arguments
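The pre-mortem and named-perspective techniques above can be packaged as reusable prompt templates. The sketch below is illustrative: the wording, function names, and example plan are assumptions, and you would send the resulting string to whatever chat interface you already use.

```python
def premortem_prompt(plan: str, horizon: str = "one year", n: int = 5) -> str:
    """Build a pre-mortem prompt: assume the plan already failed,
    then ask for the most likely reasons and cheap preventions."""
    return (
        f"Assume it is {horizon} from now and the following plan has failed.\n"
        f"Plan: {plan}\n"
        f"List the top {n} most likely reasons for the failure, ordered by "
        f"likelihood, and for each one name a cheap preventive action we "
        f"could take today. Do not flatter the plan; argue against it."
    )

def red_team_prompt(plan: str, role: str) -> str:
    """Ask for the strongest objections from a named perspective
    (e.g. CFO, customer, regulator)."""
    return (
        f"Act as a skeptical {role}. Here is our plan:\n{plan}\n"
        f"Give your three strongest objections and, for each, the "
        f"load-bearing assumption it attacks."
    )

# Usage: paste the output into your chat tool of choice.
plan = "Migrate all customer billing to a new platform in Q3."
print(premortem_prompt(plan))
print(red_team_prompt(plan, role="CFO"))
```

Keeping the "do not flatter" instruction in the template is deliberate: it counteracts the model's default agreeableness, and the named role forces objections your own vantage point tends to miss.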
End-of-lesson check
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-ai-strategic-thinking-final1-adults
What's the most underused AI mode at work?
Asking it to flatter you
Asking it to disagree with you
Asking it to summarize email
Asking it to translate
What does 'red-team' mean here?
Painting things red
Sports
Adversarial pressure-testing of a plan
A hardware vendor
What is a pre-mortem?
A medical procedure
A type of contract
A press release
Imagining the plan failed and listing why
What kind of 'load-bearing assumptions' does AI surface well?
Beliefs that, if wrong, sink the plan
Cosmetic preferences
Trivial decisions
Random opinions
What's a useful comparison AI can perform on your plan?
Comparing fonts
Comparing against well-known reference cases or frameworks
Comparing colors
Comparing emojis
What does AI not know about how strategy lands?
Basic strategy frameworks
Generic risk lists
Political and human context
Common comparisons
What does AI not know about your organization's tolerance?
Generic change frameworks
Standard objections
Typical rollouts
What it will actually accept or resist
What is hard for AI to predict in complex systems?
Second-order consequences
First-order math
Static facts
Standard syntax
What's a known AI tendency to watch for?
Random hostility
Agreeableness — it tends to flatter your draft
Refusal to answer
Constant errors
What's the antidote to AI's agreeableness?
Phrase prompts more politely
Use bigger fonts
Explicitly prompt for disagreement and counter-arguments
Ignore disagreement
What's a strong pre-mortem prompt structure?
'Will this work?'
'Be honest'
'Cheer me up'
'Assume one year from now the plan failed; list top 5 reasons by likelihood and one cheap prevention each'
What's a 'cheap thing you can do now' meant to do?
Reduce the chance or impact of a likely failure
Make the plan look better
Replace the plan
Cancel the project
What kind of 'named perspective' helps?
No perspective
Imagining objections from a specific role (e.g., CFO, customer, regulator)
Random celebrities
Pets
Why weigh AI's pushback against your own knowledge?
AI is always right
AI is always wrong
AI lacks local context; you decide which objections truly apply
Skip your own thinking
Which mindset best fits AI in strategy?
A magic decision-maker
A passive note-taker
A salesperson
A red-team sparring partner whose pushback you weigh