The premise
An LLM holding the full module dependency graph in context can propose candidate extraction seams for the team to debate faster than any single engineer could.
What AI does well here
- Cluster modules by shared data and shared callers
- Propose three different cut lines, each with its own tradeoffs
- Surface circular dependencies and hidden coupling
- Draft the migration sequence as ordered, reversible steps
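One of these checks is easy to make concrete: once the module dependency graph is extracted, circular dependencies can be surfaced mechanically. A minimal sketch using Python's standard-library `graphlib` (the module names and the `deps` graph are invented for illustration):

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical module graph: each module maps to the set of modules it imports.
deps = {
    "billing": {"accounts", "notifications"},
    "accounts": {"notifications"},
    "notifications": {"billing"},  # closes a cycle with "billing"
    "reports": {"accounts"},
}

def find_cycle(deps):
    """Return a list of modules forming a cycle, or None if the graph is acyclic."""
    try:
        TopologicalSorter(deps).prepare()  # raises CycleError if a cycle exists
    except CycleError as exc:
        return exc.args[1]  # second arg is the list of nodes in the detected cycle
    return None

cycle = find_cycle(deps)
```

Here `billing` and `notifications` import each other, so `find_cycle` reports them; in extraction planning, that pair would have to be untangled before either module could be pulled into its own service.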
What AI cannot do
- Know which boundaries reflect real team ownership
- Predict the latency cost of new network hops
- Decide which services to extract first based on business priority
End-of-lesson check
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-ai-coding-LLM-monolith-extraction-planning-creators
What makes an LLM particularly effective for initial monolith extraction planning?
- It automatically deploys extracted services to production
- It eliminates the need for code reviews during migration
- It can analyze the complete module dependency graph faster than any single engineer
- It guarantees zero downtime during the extraction process
During seam analysis, an LLM can successfully identify which of the following?
- Circular dependencies and hidden coupling between modules
- The exact business revenue impact of each extraction
- The precise network latency cost of inter-service calls
- Which engineering team owns each service boundary
An LLM proposes three different extraction cuts with tradeoffs. What does this demonstrate about AI's strengths?
- Determining which cut aligns with company politics
- Guaranteeing that all proposed cuts will work in production
- Automatically resolving conflicts between engineering teams
- Generating multiple solution variants with comparative analysis
A developer asks an LLM to recommend which service to extract first based on business priority. Why might this be problematic?
- The LLM will always choose the smallest service first
- Business priority questions require reading the entire codebase
- LLMs cannot understand any business-related concepts
- The LLM has no knowledge of organizational strategy or business goals
What type of output should an architecture team expect from an LLM performing seam analysis?
- A final decision document signed by the AI
- A guarantee of zero future technical debt
- A proposed extraction plan requiring human validation
- An automatically executable deployment pipeline
When an LLM clusters modules by shared callers, what is it actually identifying?
- The exact salary of each developer who wrote the code
- Modules that should be deleted entirely
- Services that will never need updates
- Groups of modules that tend to be invoked together
An LLM identifies a circular dependency between modules A, B, and C. Why is this valuable for extraction planning?
- Circular dependencies indicate which modules can be deleted
- This finding proves the code has no bugs
- The AI will automatically refactor the circular dependency
- Circular dependencies create extraction barriers that must be resolved first
What does the lesson mean by a 'reversible' migration sequence?
- All code must be duplicated before moving
- The migration can only move code backwards
- Reversibility is not important for extraction planning
- Each step can be rolled back if problems emerge
Why can't an LLM know which boundaries reflect real team ownership?
- Team ownership has no impact on extraction planning
- LLMs always correctly identify team boundaries
- Team ownership is an organizational concept not visible in code structure
- Ownership is visible in variable names
What is the primary value of asking an LLM to propose three different cut lines?
- AI can only think of three options at a time
- Three cuts mean the extraction is guaranteed to succeed
- The first proposal is always the best one
- It provides multiple perspectives for the team to evaluate
A product manager wants to use the LLM's extraction plan as the final architecture document. What does the lesson advise?
- The plan should be converted to a legal contract
- AI plans are never useful for architecture
- The plan should be signed by the most senior engineer
- The plan is a starting artifact, not a final decision
When evaluating an LLM's proposed first PR to ship, what should reviewers focus on?
- Whether it eliminates the need for further human review
- Whether the AI personally tested the code in production
- Whether it represents a safe, incremental change that validates the approach
- Whether it includes automatic billing charges
An LLM suggests extracting a payment service first. Why might this recommendation be flawed?
- LLMs always suggest the wrong service
- Extraction should never begin with any service
- The LLM cannot assess business priority or revenue impact of services
- Payment services are never good candidates for extraction
What distinguishes a 'seam' in monolith extraction from a simple code deletion?
- Seams are not relevant to extraction planning
- A seam is another term for deleting unused variables
- A seam requires deleting the entire codebase
- A seam is a boundary between services that can be split while keeping both functional
Why is it important for the LLM to provide risk profiles for each proposed cut?
- The AI is legally liable for incorrect risk profiles
- Risk profiles guarantee the extraction will succeed
- It helps the team understand potential failure modes and plan mitigations
- Risk profiles are required by programming language compilers