14 min · Reviewed 2026
Write Architecture Decision Records With AI
When the agent changes architecture, capture why. A short ADR prevents future agents from undoing the decision casually.
Name the job before naming the tool.
Write the smallest useful scope the agent can finish.
Run the result as a user, not as a fan of the tool.
Inspect the diff, data access, and failure path before sharing.
Write ADR-003: "We use Supabase RLS as the permission authority." Include the context, the decision, the consequences, and what future agents must not change. Use this as the working prompt or checklist for the lesson.
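The prompt above might produce something like the following sketch. The project details, constraints, and policy names here are illustrative assumptions, not part of the lesson — the point is the shape: context, decision, consequences, and explicit guardrails for future agents.

```markdown
# ADR-003: Supabase RLS is the permission authority

## Status
Accepted

## Context
(Hypothetical example.) The app queries Supabase directly from the client.
Permission checks duplicated in API routes have drifted out of sync with
the database, so the same row is readable in one path and hidden in another.

## Decision
All read/write authorization is enforced by Postgres Row Level Security
policies in Supabase. Application code may add friendlier error messages,
but it must never be the only thing standing between a user and a row.

## Consequences
- New tables ship with RLS enabled and a policy before any client code touches them.
- Tests run queries as an authenticated user, not with the service-role key.
- Debugging moves into SQL policies, which the team must learn to read.

## What future agents must not change
- Do not disable RLS "temporarily" to make a failing query work.
- Do not reintroduce permission checks that exist only in application code.
- Do not use the service-role key from any user-facing path.
```

A record this short is enough: a future agent (or developer) reading it knows not just what was decided, but which shortcuts are forbidden and why.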
What should the user be able to do when this is finished?
What data should the app or agent never expose?
What test proves the change works?
What rollback path exists if the output is wrong?
End-of-lesson check
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-coder-adr-with-agent-creators
What is the primary purpose of an Architecture Decision Record (ADR) in an AI-agent project?
To track which AI model version is currently deployed
To replace traditional code comments in the repository
To log every code change made by the AI agent
To capture the reasoning behind architectural choices so future agents don't undo them casually
Which principle should be applied FIRST when starting an AI agent coding task?
Define the failure path in detail
Name the job the agent needs to accomplish before choosing the tool
Select the most capable AI model available
Write comprehensive test cases upfront
What does the lesson mean by writing the 'smallest useful scope' the agent can finish?
Write the minimal code possible, even if it doesn't solve the core problem
Have the agent work on one small file at a time regardless of the overall goal
Define a focused, completable objective that delivers value without unnecessary features
Minimize the number of test cases to speed up development
Why does the lesson advise running the result as a user rather than as a 'fan of the tool'?
Because the tester's job is to find bugs, not validate the tool's capabilities
Because tool fanboys write biased reviews
To evaluate whether the solution actually solves the user's problem, not whether the AI tool performed well
Because AI tools are unreliable and should not be trusted
Before sharing AI-generated code, which three things should be inspected?
Syntax, variable names, and comment quality
The diff, data access patterns, and failure path
Test coverage, documentation, and CI pipeline status
File size, line count, and execution speed
What is the relationship between a working AI demo and a production-ready system, as described in the lesson?
AI demos cannot be made production-ready and should always be rewritten from scratch
The real skill is turning a demo into something observable, reversible, and safe for others to use
Production systems require removing all AI-generated code
A working demo is always production-ready because it functions correctly
What does 'observable' mean in the context of turning an AI demo into production code?
The system's behavior and state must be measurable and visible to operators
The code must have extensive console logging statements
The code must be open source and publicly viewable
The AI must explain its reasoning for every decision it makes
What is the 'rollback path' and why is it important when deploying AI-generated changes?
A documented procedure to undo changes if they cause problems
A way to revert to an older AI model version
A backup of the original codebase
The path the AI takes when it fails to generate correct code
Which type of documentation helps context survive across different AI agent sessions?
AGENTS.md, ADRs, and handoff docs
Inline code comments only
README.md files in each directory
Git commit messages exclusively
Why is it risky to allow AI to make architectural changes without documentation?
Documentation slows down development too much
AI is always wrong about architecture
Architectural changes don't need to be documented
Future agents or developers may unknowingly undo important decisions because they don't understand the reasoning
What question should you answer to determine if an AI architectural change is successful?
What rollback path exists if the output is wrong?
Which AI model was used to generate the change?
What test proves the change works?
What data should the app or agent never expose?
What does 'reversible' mean in the context of AI-generated architectural changes?
The AI can generate both forward and backward compatible code
Changes can be undone easily if they cause problems
The system can run in both directions
The code can be compiled on any platform
What is a 'decision log' in the context of AI agent development?
A list of all code decisions the AI made during a session
A debugging output showing the AI's thought process
A chronological record of architectural decisions and their reasoning
A log file generated by the AI during execution
When inspecting 'data access' before sharing AI-generated code, what are you checking for?
Whether the code uses an ORM or raw SQL
How many rows the code can process at once
What data the code can read, write, or expose—and whether that's appropriate
Which database vendor is being used
The lesson emphasizes that AI can quickly create a working demo. What distinguishes a real developer skill from basic AI usage?
Knowing which AI model to select
Turning the demo into something observable, reversible, and safe for others to use