Before a vibe-coded app leaves your laptop, check auth, database policies, secrets, file uploads, admin routes, rate limits, and public pages. Write the smallest useful scope the agent can finish.
Audit this app for: exposed .env values, public Supabase tables, missing auth guards, public storage buckets, unsafe admin routes, unvalidated forms, no rate limits, and destructive actions without confirmation.

Use this as the working prompt or checklist for the lesson.

15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-vibecoder-security-checklist-lite
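Two of the checklist items above, missing auth guards and missing rate limits, can be sketched in plain Python. This is a framework-free illustration, not a real web stack: the `request` dict, the handler name `delete_account`, and the decorator names are all hypothetical, chosen only to make the checks concrete.

```python
import time

def require_auth(handler):
    """Auth guard sketch: reject any request with no authenticated user."""
    def wrapped(request):
        if request.get("user") is None:
            return {"status": 401, "body": "unauthorized"}
        return handler(request)
    return wrapped

def rate_limit(max_calls, per_seconds):
    """Rate-limit sketch: allow at most max_calls per client per window."""
    calls = {}  # client id -> timestamps of recent calls
    def decorator(handler):
        def wrapped(request):
            now = time.monotonic()
            client = request.get("ip", "unknown")
            recent = [t for t in calls.get(client, []) if now - t < per_seconds]
            if len(recent) >= max_calls:
                return {"status": 429, "body": "too many requests"}
            recent.append(now)
            calls[client] = recent
            return handler(request)
        return wrapped
    return decorator

@require_auth
@rate_limit(max_calls=3, per_seconds=60)
def delete_account(request):
    # Destructive action: a real UI should also require explicit confirmation.
    return {"status": 200, "body": "deleted " + request["user"]}

print(delete_account({"ip": "1.2.3.4"}))                   # 401: no user
print(delete_account({"ip": "1.2.3.4", "user": "alice"}))  # 200
```

Because `require_auth` wraps the outside, unauthenticated requests are rejected before they consume any of the client's rate-limit budget; a fourth authenticated call inside the window would get a 429.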
A developer asks an AI to 'make this app secure.' What does the lesson recommend doing first?
When reviewing AI-generated code, what does 'smallest useful scope' refer to?
Why should security testing be done 'as a user, not as a fan of the tool'?
Which three areas does the lesson specifically list as part of a pre-deployment security check?
What is the 'diff' that should be inspected during security review?
What does it mean for a deployed feature to be 'observable'?
What does the lesson mean by saying a security change should be 'reversible'?
Why are admin routes particularly risky in web applications?
What problem does rate limiting primarily prevent?
When reviewing database policies, what should be the primary concern?
In the context of AI-generated apps, why are 'secrets' a security concern?
What security risk is unique to file upload features in web applications?
What is the 'failure path' in security testing?
Why should public pages be checked for security issues even though anyone can see them?
What does it mean that AI optimizes for 'it works' unless you ask for 'it cannot be abused'?