AI happily writes code with classic vulnerabilities. Learn the OWASP-aligned review checklist for AI output, the prompts that catch issues early, and the tools that automate the rest.
Models train on public code, and public code is full of vulnerabilities. The model learns common patterns — including the insecure ones. SQL strings concatenated with user input, secrets hardcoded for examples, `eval` of user data, missing auth checks. AI will reproduce all of these on demand if you don't tell it not to.
| Vuln | What AI does | Fix |
|---|---|---|
| SQL injection | f-string queries with user input | Parameterized queries, prepared statements |
| Command injection | `subprocess.call(f"...{user}...")` | `subprocess.run([...], shell=False)` |
| Path traversal | `open(user_input)` with no allowlist | Resolve, normalize, check inside allowed dir |
| Hardcoded secrets | `API_KEY = "sk-..."` in committed file | Read from env, use a secrets manager |
| Missing auth | Endpoint without role/permission check | Decorator/middleware mandatory on all routes |
| Open redirect | `redirect(request.args.get("next"))` | Allowlist of safe redirect targets |
| XSS | `dangerouslySetInnerHTML` with user data | Escape, or use safe APIs |
| Insecure deserialization | `pickle.loads(user_input)` | Use JSON; if pickle, sign and verify |
```
# Append to any code-generation prompt that touches:
# - user input
# - filesystems
# - shells
# - databases
# - HTTP
# - secrets
"Apply OWASP best practices. No string-concatenated SQL — use parameterized queries.
No shell=True in subprocess calls. No eval/exec. Read all secrets from env vars,
never hardcode. Validate all user input with explicit schemas (Zod, Pydantic, etc.).
If you generate code with any of these, flag it and explain why you couldn't avoid it."
```

A 70-word boilerplate that stops 80% of AI-introduced vulns at the source.

| Tool | What it catches | Setup |
|---|---|---|
| Semgrep | Pattern-based vulns (SQLi, command inj, etc.) | `semgrep --config=auto .` |
| Bandit (Python) | Common Python security smells | `pip install bandit && bandit -r .` |
| ESLint security plugins | JS/TS security patterns | `eslint-plugin-security` |
| GitHub Code Scanning | CodeQL queries on every push | Free for public repos |
| Trivy / Grype | Vulnerable dependencies in lockfiles | CI step or local scan |
| GitGuardian / TruffleHog | Hardcoded secrets in commits | Pre-commit hook + CI |
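To see what the secret scanners in the table are looking for, here is a toy regex-based sketch. The patterns are illustrative only; real tools like TruffleHog and GitGuardian combine hundreds of detectors with entropy analysis and live credential verification:

```python
import re

# Toy patterns for common secret formats (illustrative, not exhaustive).
SECRET_PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_assignment": re.compile(
        r"""(?i)(api_key|secret|token|password)\s*=\s*['"][^'"]{8,}['"]"""
    ),
}

def scan_for_secrets(text: str):
    """Return (line_number, pattern_name) pairs for every suspicious line."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits
```

Reading a secret from the environment produces no quoted literal, so it passes cleanly; a hardcoded `API_KEY = "sk-..."` trips two detectors at once.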
```
# After AI writes code, run this in a fresh chat:
"Act as a security auditor. The following code was just generated by AI.
List every realistic security issue you can find, ranked by severity (critical/high/med/low).
For each: the line numbers, the threat model (who attacks how), and the fix.
Do not be polite. Assume hostile users.
<paste code>"
# Use a *different* model than wrote the code if possible.
# A second pair of eyes from a different family catches what the first missed.
```

Cross-family adversarial review catches more than same-family review. Mix Claude with GPT for coverage.

"AI knows the patterns. Humans know the threats."
— An application security engineer
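The second-opinion step above can be scripted. This is a sketch, assuming a hypothetical `ask_model(model, prompt)` wrapper around your actual LLM client calls; the audit prompt text is the one from the lesson:

```python
# Audit prompt from the lesson, with a slot for the code under review.
AUDIT_PROMPT = """Act as a security auditor. The following code was just generated by AI.
List every realistic security issue you can find, ranked by severity (critical/high/med/low).
For each: the line numbers, the threat model (who attacks how), and the fix.
Do not be polite. Assume hostile users.

{code}"""

def cross_family_review(code: str, ask_model, models=("claude", "gpt")):
    """Send the audit prompt to models from different families.

    `ask_model(model, prompt) -> str` is a hypothetical client wrapper;
    swap in your real API calls. Returns {model_name: review_text}.
    """
    prompt = AUDIT_PROMPT.format(code=code)
    return {model: ask_model(model, prompt) for model in models}
```

Because `ask_model` is injected, the same function works with any provider SDK, and you can stub it in tests.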
The big idea: AI accelerates code, but it does not understand attackers. Apply the OWASP-aligned prompt boilerplate, run static analyzers automatically, and budget a human security review for anything user-facing. Speed without security is a CVE in waiting.
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-coding-debug-security-review-creators
What is the core idea behind "Security Review of AI-Generated Code"?
Which term best describes a foundational idea in "Security Review of AI-Generated Code"?
A learner studying Security Review of AI-Generated Code would need to understand which concept?
Which of these is directly relevant to Security Review of AI-Generated Code?
Which of the following is a key point about Security Review of AI-Generated Code?
Which of these does NOT belong in a discussion of Security Review of AI-Generated Code?
Which statement is accurate regarding Security Review of AI-Generated Code?
What is the key insight about "AI's favorite vuln: the helpful error message" in the context of Security Review of AI-Generated Code?
What is the key insight about "Defense in depth still applies" in the context of Security Review of AI-Generated Code?
Which statement accurately describes an aspect of Security Review of AI-Generated Code?
What does working with Security Review of AI-Generated Code typically involve?
Which best describes the scope of "Security Review of AI-Generated Code"?
Which section heading best belongs in a lesson about Security Review of AI-Generated Code?