The Security Blind Spot in AI-Generated Code

Fast Code Is Not Safe Code
AI coding tools let you build features in minutes that used to take hours. That speed is real and valuable. But it comes with a tradeoff that most teams discover too late: the AI does not think about security the way a senior engineer does.
Language models generate code based on patterns from training data. A lot of that training data contains insecure patterns. Stack Overflow answers with SQL injection vulnerabilities. Tutorial code that skips input validation. Open source projects that hardcode API keys. The AI learned from all of it, and it will happily reproduce those patterns in your codebase if you do not catch them.
This is not a theoretical risk. It is happening right now in production codebases at companies that moved fast and trusted the output.
Where AI-Generated Code Goes Wrong
SQL Injection and Query Building
The AI loves string interpolation. Ask it to write a database query and there is a meaningful chance it will concatenate user input directly into the SQL string instead of using parameterized queries. It works in development. It passes basic tests. And it opens a direct line from your URL bar to your database.
// What the AI might write
const query = `SELECT * FROM users WHERE email = '${email}'`;
// What it should write
const query = `SELECT * FROM users WHERE email = $1`;
const result = await db.query(query, [email]);
Exposed Secrets and Hardcoded Credentials
AI models sometimes generate code with placeholder API keys, database connection strings, or JWT secrets that look realistic enough to miss during review. Worse, when you ask the AI to help with environment setup, it may write .env values directly into source files or configuration that gets committed to git.
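A minimal sketch of the alternative, assuming a Node service: secrets are read from the environment at startup, and the process fails fast when a required value is missing (`requireEnv` and the config shape here are hypothetical):

```javascript
// Hypothetical config module. Secrets come from the environment, never
// from source files, and startup fails loudly when one is missing.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// What the AI might write:
// const JWT_SECRET = "hardcoded-secret-committed-to-git";
// What it should write:
function loadConfig() {
  return {
    jwtSecret: requireEnv("JWT_SECRET"),
    databaseUrl: requireEnv("DATABASE_URL"),
  };
}
```

Failing fast at startup beats a missing secret surfacing as a cryptic runtime error three layers deep.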
Missing Input Validation
The AI will build the happy path first and stop there. User input flows through the system without sanitization, length checks, or type validation. Form data reaches the database exactly as submitted. File uploads are accepted without checking type, size, or content. Request bodies are trusted without schema validation.
Cross-Site Scripting (XSS)
When generating frontend code, the AI frequently renders user-provided content without escaping it. React's JSX protects against basic XSS by default, but the moment you use dangerouslySetInnerHTML, render markdown to HTML, or build templates outside of React, the AI will not always remember to sanitize the output.
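When you do have to render outside React's built-in escaping, a minimal escaping helper covers the plain-text case. This is a sketch, not a sanitizer; for HTML you intend to keep, use a maintained library like DOMPurify rather than rolling your own:

```javascript
// Minimal HTML-escaping helper for contexts where framework escaping
// does not apply (manual templates, markdown-to-HTML pipelines, etc.).
// Escapes the five characters that enable markup injection.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```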
Authentication and Authorization Gaps
Ask the AI to add an API endpoint and it will build the handler, the route, and the response. What it often forgets is the middleware. The auth check. The role verification. The endpoint works perfectly for authenticated admins and unauthenticated attackers alike.
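The fix is structural: put the auth check in middleware that is applied where the route is registered, so a missing check is visible in the route definition itself. A hedged Express-style sketch (`requireAuth` and `requireRole` are hypothetical names, and the middleware assumes an upstream step has populated `req.user`):

```javascript
// Hypothetical Express-style middleware. The check runs before the
// handler; an unauthenticated request never reaches business logic.
function requireAuth(req, res, next) {
  if (!req.user) {
    res.status(401).json({ error: "Authentication required" });
    return;
  }
  next();
}

// Role checks are a second, separate gate: being logged in is not
// the same as being allowed to perform this action.
function requireRole(role) {
  return (req, res, next) => {
    if (!req.user || req.user.role !== role) {
      res.status(403).json({ error: "Forbidden" });
      return;
    }
    next();
  };
}

// What the AI might write:
// app.delete("/api/users/:id", deleteUserHandler);
// What it should write:
// app.delete("/api/users/:id", requireAuth, requireRole("admin"), deleteUserHandler);
```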
Overly Permissive CORS
The AI defaults to the path of least resistance. When CORS is blocking a request during development, the model's instinct is to set Access-Control-Allow-Origin: * or to reflect whatever Origin header arrives, and move on. That fix ships to production, and now any website can call your API; if credentials are reflected alongside the origin, an attacker's page can make requests on behalf of your logged-in users.
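A safer sketch: keep an explicit allowlist in configuration and echo an origin back only when it matches. The origins below are placeholders:

```javascript
// Allowed origins come from configuration, never from the request.
const ALLOWED_ORIGINS = new Set([
  "https://app.example.com",
  "https://admin.example.com",
]);

// Returns the value to set for Access-Control-Allow-Origin, or null,
// meaning the header should not be sent at all for this request.
function corsOriginFor(requestOrigin) {
  return ALLOWED_ORIGINS.has(requestOrigin) ? requestOrigin : null;
}
```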
How to Audit AI-Generated Code
Every piece of AI-generated code should go through the same review process as human-written code. But because the AI produces code quickly and in volume, teams need a systematic approach to keep up.
Run static analysis on every change
Tools like Semgrep, ESLint security plugins, and Snyk can catch common vulnerabilities automatically. Set these up in your CI pipeline so nothing reaches production without a scan. The AI will not be offended by automated rejection.
Check every database query
Search for string concatenation or template literals in any file that touches the database. Every query that includes user input should use parameterized statements or a query builder that handles escaping. No exceptions.
Audit authentication on every endpoint
For every new route or API endpoint the AI generates, verify that authentication middleware is applied. Check that authorization logic matches the intended access level. An endpoint without auth is an endpoint anyone can call.
Validate all input at system boundaries
User input from forms, URL parameters, request bodies, file uploads. All of it needs validation before it enters your system. Define schemas with tools like Zod, Joi, or Go's validator and reject anything that does not conform. The AI will generate cleaner code when these schemas already exist in your codebase.
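A hand-rolled sketch of the idea, using an invented signup payload; in practice a schema library like Zod or Joi expresses the same rules declaratively, but the shape is the same: check every field, reject what does not conform, and pass along only the fields you asked for:

```javascript
// Hypothetical boundary validator for a signup request body.
// Returns { ok: true, value } with only the expected fields, or
// { ok: false, errors } listing every failed check.
function validateSignup(body) {
  if (typeof body !== "object" || body === null) {
    return { ok: false, errors: ["body must be an object"] };
  }
  const errors = [];
  if (typeof body.email !== "string" || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(body.email)) {
    errors.push("email must be a valid address");
  }
  if (typeof body.name !== "string" || body.name.length < 1 || body.name.length > 100) {
    errors.push("name must be 1-100 characters");
  }
  if (errors.length > 0) {
    return { ok: false, errors };
  }
  // Unknown fields (e.g. a client-supplied "role") are dropped here.
  return { ok: true, value: { email: body.email, name: body.name } };
}
```

Note that the success path returns a new object built from known fields, so a client cannot smuggle in extras like `role: "admin"`.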
Review secrets and environment variables
Search for hardcoded strings that look like keys, tokens, passwords, or connection strings. Verify that .env files are in .gitignore. Check that secrets are loaded from environment variables, not imported from source files.
Test the unhappy paths
The AI writes code that works when everything goes right. Your tests should verify what happens when things go wrong. Malformed input. Missing fields. Oversized payloads. Expired tokens. Concurrent requests. These are the paths where vulnerabilities hide.
PII and Privacy
Security is not just about preventing attacks. It is about protecting the people whose data flows through your systems.
AI-generated code is particularly careless with personally identifiable information. It logs request bodies that contain email addresses. It stores full credit card numbers when it should store only the last four digits. It sends user data to third-party analytics without checking consent. It creates database schemas that store more than necessary because the AI defaults to keeping everything.
Data minimization, encryption at rest, access controls, and retention policies are not features the AI will suggest on its own. They require deliberate architecture decisions made by engineers who understand both the technical requirements and the legal obligations.
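One concrete habit that pays off immediately: redact known PII fields before anything reaches the logger. A minimal, shallow sketch; the field list is an assumption, and a real system maintains it alongside the data schema:

```javascript
// Hypothetical log-redaction helper. Shallow by design for clarity;
// nested objects would need a recursive walk in a real implementation.
const PII_FIELDS = new Set(["email", "password", "ssn", "cardNumber", "phone"]);

function redactForLogging(body) {
  if (typeof body !== "object" || body === null) return body;
  const out = {};
  for (const [key, value] of Object.entries(body)) {
    out[key] = PII_FIELDS.has(key) ? "[REDACTED]" : value;
  }
  return out;
}
```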
If your product handles user data in Canada, the EU, or increasingly anywhere else, privacy is not optional, and the AI does not know which regulations, whether PIPEDA, GDPR, or sector-specific rules, apply to your specific situation.
Building a Security Culture Around AI Tools
The answer is not to stop using AI coding tools. The answer is to build guardrails that match the speed of generation.
Establish a security checklist that every AI-generated PR must pass. Automate what you can with static analysis and CI checks. Make security review a non-negotiable part of the development process, not something that happens after the sprint is over.
Train your team to treat AI output the way they would treat code from a very fast, very confident junior developer. It probably works. It probably has a few blind spots. Review it accordingly.
How Deadly Approaches Security
At Deadly, security and privacy are not afterthoughts. They are part of every architecture decision from day one. When we build AI solutions for our clients, we treat PII with the seriousness it deserves. We apply input validation at every boundary. We run static analysis on every change. We design data flows that minimize exposure and comply with privacy regulations.
We have seen what happens when teams ship AI-generated code without proper review. Fixing those vulnerabilities after the fact costs far more than building securely from the start. Every project we deliver is built with the assumption that it will be audited, because that is the standard we hold ourselves to.


