Vibe Coding Is Shipping Vulnerabilities: A Security Team's Guide to AI-Generated Code Risks

Michael Lieberman

March 30, 2026

Vibe coding security vulnerabilities are becoming one of the fastest-growing blind spots in application security. As developers lean on AI coding assistants to generate entire functions, modules, and even full applications from natural language prompts, security teams face a new class of risk that traditional AppSec tooling was never built to catch.

Key Insights: What You Need to Know About Vibe Coding Security Vulnerabilities

  • Vibe coding security vulnerabilities emerge when developers use AI coding assistants to generate code through conversational prompts with minimal manual review, shipping insecure patterns, hallucinated dependencies, and outdated library calls directly into production.
  • AI coding assistant adoption has outpaced security controls. According to Kusari's Application Security in Practice report, 85% of organizations have adopted AI coding assistants, yet only 9% consider AI-driven AppSec analysis a must-have capability.
  • Hallucinated dependencies create real attack surfaces. AI models sometimes suggest packages that don't exist, and attackers have begun registering those phantom package names with malicious payloads, turning AI hallucinations into supply chain entry points.
  • OWASP Top 10 violations appear frequently in AI-generated code. Research from the Cloud Security Alliance and Endor Labs found that 62% of AI-generated code contains design flaws or vulnerabilities, while Veracode reported that 45% of AI-produced code fails against the OWASP Top 10.
  • Security review coverage for AI code remains low. Kusari's report shows that only 38% of organizations use AI to support code review in pull requests, leaving the majority of AI-generated code without automated security feedback at the point where it matters most.
  • Insecure patterns in AI-generated code follow predictable categories. Apiiro's research documented a 322% spike in privilege escalation flaws and a 153% increase in architectural design flaws in repositories with high AI code contribution, giving security teams specific areas to target.
  • AI guardrails and code review automation are the primary countermeasures. Effective mitigation requires combining SAST and SCA scanning at the pull request level, establishing secure prompt guidelines for developers, and treating AI-generated code with the same rigor as third-party dependencies.

What Is Vibe Coding and Why Should Security Teams Care?

Andrej Karpathy, a co-founder of OpenAI, coined the term "vibe coding" in early 2025 to describe a style of programming where developers write natural language descriptions of what they want and let AI assistants like GitHub Copilot, Cursor, or ChatGPT generate the implementation. The developer reads the output, accepts it if the application runs, and moves on. Detailed line-by-line review takes a back seat to speed and iteration.

For product teams, this approach can feel transformative. Prototypes that took days now take hours. Developers who aren't fluent in a particular language or framework can still produce working code. The productivity gains are real, and they explain why AI coding assistant adoption has accelerated so quickly across the industry.

But from a security perspective, vibe coding introduces a specific problem: it shifts code authorship from a human who understands the application's security context to a model that optimizes for functionality without awareness of your threat model, compliance requirements, or dependency policies.

Security teams should care because the code that enters your repository through vibe coding carries the same weight in production as hand-written code. If a developer accepts an AI suggestion that includes a SQL injection vulnerability or pulls in a deprecated library with known CVEs, that flaw ships just like any other. The difference is that nobody deliberately chose it.

How Vibe Coding Security Vulnerabilities Enter Your Codebase

AI-generated code introduces risk through several distinct pathways. Understanding these categories helps security teams prioritize their scanning and review efforts.

Hallucinated dependencies

Large language models occasionally suggest packages that don't exist in public registries like npm, PyPI, or Maven Central. A developer working in vibe coding mode may not check whether a suggested import statement references a real package. Attackers have noticed this pattern. By monitoring AI outputs and registering phantom package names with malicious code, they can turn hallucinated dependencies into a supply chain attack vector. A developer runs npm install, the package resolves to the attacker's version, and malicious code enters the build.
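One lightweight countermeasure is to gate installs on a vetted list. The sketch below is illustrative only: `VETTED_PACKAGES` and the package names are placeholders for whatever allowlist or internal registry mirror your organization actually maintains.

```python
# Hypothetical pre-install gate: flag package names that are not on a
# vetted list before `pip install` / `npm install` ever runs.
# VETTED_PACKAGES stands in for your org's real allowlist or mirror.
VETTED_PACKAGES = {"requests", "flask", "sqlalchemy", "cryptography"}

def flag_unvetted(requested: list[str]) -> list[str]:
    """Return the requested packages that are not on the vetted list."""
    return [pkg for pkg in requested if pkg.lower() not in VETTED_PACKAGES]

# An AI assistant might suggest "flask" (real) alongside
# "flask-jwt-guard" (a plausible-sounding name used here as a
# hypothetical example of a package that may not exist anywhere).
suspicious = flag_unvetted(["flask", "flask-jwt-guard"])
print(suspicious)  # ['flask-jwt-guard']
```

A gate like this turns a hallucinated name into a blocked install rather than a successful resolution of an attacker-registered package.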

Insecure code patterns

AI models learn from public code repositories, which include massive amounts of insecure, outdated, and example-only code. When a model generates a database query, it may produce a string concatenation pattern vulnerable to SQL injection rather than a parameterized query. When it writes authentication logic, it may hardcode a default secret or skip input validation. These insecure patterns aren't bugs in the AI; they reflect the distribution of code the model was trained on.
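The injection case is easy to see in a few lines. This minimal sqlite3 sketch contrasts the string-concatenation pattern AI assistants often emit with the parameterized form:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

malicious = "x' OR '1'='1"

# The pattern AI assistants often generate: string concatenation.
# The injected condition matches every row in the table.
leaked = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'"
).fetchall()
print(leaked)  # [('alice',)] -- the injection succeeded

# The parameterized form: the driver treats the input strictly as data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
print(safe)  # [] -- no user is literally named "x' OR '1'='1"
```

Both queries are syntactically valid and both "work" in a quick demo, which is exactly why a developer in vibe coding mode accepts the first one.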

Research from Apiiro found that repositories with significant AI-generated code showed a 322% increase in privilege escalation flaws and a 153% rise in architectural design flaws (Apiiro, 2025). Java projects fared particularly poorly, with Veracode reporting a 70%+ failure rate for secure code generation in that language.

Outdated training data

AI models have a knowledge cutoff. A model trained on data through mid-2024 won't know about CVEs disclosed in late 2024 or 2025. It may suggest library versions that had no known vulnerabilities at training time but have since been flagged. Worse, it may recommend patterns that were considered acceptable practice years ago but have since been deprecated in favor of more secure alternatives.
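A simple version-floor check can catch some of this at review time. In the sketch below, `examplelib` and `MIN_SAFE_VERSION` are hypothetical stand-ins for a real advisory feed such as OSV:

```python
# Hypothetical advisory check: compare pinned versions from an
# AI-suggested requirements file against minimum patched versions.
# MIN_SAFE_VERSION is a stand-in for a real vulnerability feed.
MIN_SAFE_VERSION = {"examplelib": (2, 31, 0)}

def parse_version(s: str) -> tuple[int, ...]:
    return tuple(int(part) for part in s.split("."))

def outdated_pins(requirements: list[str]) -> list[str]:
    """Return 'name==version' pins older than the minimum safe version."""
    flagged = []
    for line in requirements:
        name, _, version = line.partition("==")
        floor = MIN_SAFE_VERSION.get(name)
        if floor and parse_version(version) < floor:
            flagged.append(line)
    return flagged

print(outdated_pins(["examplelib==2.25.1"]))  # ['examplelib==2.25.1']
```

SCA tools do this lookup continuously; the point of the sketch is that the AI's suggested pin reflects its training cutoff, not the current advisory state.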

Missing security context

When a developer writes code manually, they (ideally) understand the broader application context: what data is sensitive, which APIs are exposed externally, what compliance standards apply. An AI assistant has none of this awareness. It generates code based on the prompt it receives, without knowledge of your application's authentication model, data classification policies, or network boundaries. Vibe coding security vulnerabilities often stem from this mismatch between what the code does technically and what it should do within your specific security context.

Mapping Vibe Coding Risks to the OWASP Top 10

Security teams already think in terms of the OWASP Top 10, and AI-generated code maps to it cleanly. Here's where vibe coding security vulnerabilities tend to cluster:

| OWASP Category | How Vibe Coding Introduces Risk | Frequency |
|---|---|---|
| A01: Broken Access Control | AI generates permissive default roles, skips authorization checks on endpoints | High |
| A02: Cryptographic Failures | Models suggest outdated algorithms (MD5, SHA-1) or hardcoded keys | Moderate |
| A03: Injection | String concatenation for SQL/NoSQL queries instead of parameterized statements | High |
| A05: Security Misconfiguration | AI sets debug modes, verbose error messages, or overly permissive CORS headers | Moderate |
| A06: Vulnerable and Outdated Components | Suggestions reference deprecated or vulnerable library versions | High |
| A07: Identification and Authentication Failures | Weak session handling, insufficient token validation, missing MFA flows | Moderate |
| A08: Software and Data Integrity Failures | No verification of package integrity, missing signature checks | High |
| A09: Security Logging and Monitoring Failures | AI-generated code rarely includes audit logging or anomaly detection | High |

Frequency estimates based on CSA/Endor Labs (2025) and Veracode (2025) research.

This mapping gives AppSec engineers and security architects a concrete framework for building scanning rules and review checklists specific to AI-generated code.

The Adoption-Security Gap: What the Data Shows

Kusari's Application Security in Practice report (2026) quantifies the disconnect between AI coding tool adoption and security maturity. The numbers paint a clear picture:

85% of surveyed organizations have adopted AI coding assistants, making them nearly ubiquitous in modern development workflows. Yet when asked about AI-driven application security capabilities, only 9% of respondents described AI analysis and recommendations as a must-have. Most (57%) categorized it as merely "nice to have."

The usage gap is just as telling. While 85% use AI to write code, only 38% use AI to support code review in pull requests. That means most AI-generated code reaches the repository without automated security feedback at the integration point where catching flaws is cheapest and fastest.

These numbers explain why vibe coding security vulnerabilities are accumulating at scale. Organizations have accelerated the code-creation side of the equation without equally investing in the code-verification side.

When Vibe Coding Security Controls Are Not Enough

Not every organization faces the same level of risk from vibe coding, and the playbook below has limits worth acknowledging.

Regulated industries like healthcare and financial services face additional constraints. If your codebase processes protected health information (PHI) or payment card data, standard SAST/SCA scanning may not catch compliance-specific violations that AI-generated code introduces. You'll need policy-aware scanning rules that go beyond generic vulnerability detection.

Small teams without dedicated AppSec staff may struggle to implement all the controls described here simultaneously. In that case, the highest-impact starting point is blocking AI-generated pull requests that include new dependencies without human approval.

Organizations using agentic AI systems that chain multiple AI calls together face compounding risk. When one AI agent generates code and another agent reviews it, hallucinated dependencies and insecure patterns may pass through both layers without human intervention.

And no set of guardrails fully eliminates the core tension: vibe coding works because it's fast, and thorough security review slows things down. The goal isn't to stop developers from using AI assistants. It's to make the secure path the easy path, so the default workflow includes safety checks without requiring extra effort.

A Security Team's Playbook for Managing AI-Generated Code

Security teams that want to get ahead of vibe coding security vulnerabilities need a practical, layered approach. Here's what that looks like in practice:

Step 1: Establish visibility into AI-generated code. Before you can secure AI-generated code, you need to know where it is. Some organizations tag AI-generated pull requests through IDE plugins or commit metadata. Others monitor for patterns typical of AI output, such as boilerplate comment structures or characteristic function naming conventions. The specific mechanism matters less than having one.
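A tagging check can be as simple as scanning commit messages for agreed-upon markers. The trailer strings below are illustrative assumptions for this sketch, not markers any specific tool is guaranteed to emit:

```python
# Sketch: flag commits whose metadata suggests AI involvement.
# These marker strings are hypothetical; whatever your IDE plugins
# and commit conventions actually emit (if anything) goes here.
AI_TRAILER_MARKERS = ("Co-authored-by: github-copilot", "AI-Generated: true")

def is_ai_tagged(commit_message: str) -> bool:
    """True if the commit message carries a known AI-generation marker."""
    lowered = commit_message.lower()
    return any(marker.lower() in lowered for marker in AI_TRAILER_MARKERS)

msg = "Add login handler\n\nAI-Generated: true"
print(is_ai_tagged(msg))  # True
```

Once tagged, these commits can be routed to stricter scanning profiles or mandatory review queues.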

Step 2: Run SAST and SCA scans at the pull request level. Catching vulnerabilities before code merges is substantially cheaper than finding them in production. Configure your static analysis and software composition analysis tools to scan every pull request, with specific rules targeting the OWASP categories where AI-generated code clusters. Kusari Inspector, for example, reviews every pull request for severe vulnerabilities including transitive dependencies, typosquatting packages, and insecure patterns, providing a pass/fail gate before code reaches the main branch.
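A full SAST engine does far more, but the shape of a pull-request-level pattern gate can be sketched in a few lines. The regex rules here are deliberately simplistic illustrations of the OWASP categories where AI-generated code clusters:

```python
import re

# Toy rule set: each rule maps a finding name to a pattern checked
# against the lines a pull request adds. Real SAST rules are far
# richer; these are illustrative only.
RULES = {
    "A03 possible SQL injection (string-built query)":
        re.compile(r"execute\(\s*[\"'].*[\"']\s*\+"),
    "A02 weak hash algorithm":
        re.compile(r"\bhashlib\.(md5|sha1)\b"),
    "A05 debug mode enabled":
        re.compile(r"debug\s*=\s*True"),
}

def scan_diff(added_lines: list[str]) -> list[str]:
    """Return the rule names triggered by any added line in a PR."""
    findings = []
    for name, pattern in RULES.items():
        if any(pattern.search(line) for line in added_lines):
            findings.append(name)
    return findings

diff = ['cur.execute("SELECT * FROM users WHERE id = " + user_id)']
print(scan_diff(diff))
```

Running a gate like this on every pull request, and failing the merge on findings, is the cheap-and-early posture the playbook describes.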

Step 3: Block unknown and unvetted dependencies. Implement an allowlist or approval workflow for new package dependencies. This directly addresses the hallucinated dependency risk by preventing developers from accidentally installing packages that only exist because an AI model invented them. SCA tools with dependency policy enforcement can automate this.
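The approval workflow reduces to a set difference over the dependency manifest. In this sketch, the approved set and the added package name are hypothetical placeholders:

```python
# Sketch of an approval gate: any dependency added in a pull request
# that is not already approved gets held for human review.
# APPROVED stands in for your org's real allowlist.
APPROVED = {"requests", "flask"}

def new_unapproved_deps(before: set[str], after: set[str]) -> set[str]:
    """Dependencies added by this change that need human sign-off."""
    return (after - before) - APPROVED

# "leftpadx" is a made-up name representing a package the developer
# never deliberately chose -- exactly the hallucinated-dependency case.
held = new_unapproved_deps({"requests"}, {"requests", "leftpadx"})
print(held)  # {'leftpadx'}
```

Because the gate only fires on additions, it stays out of developers' way until an AI suggestion actually tries to change the dependency tree.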

Step 4: Create secure prompt guidelines for developers. Developers who include security requirements in their AI prompts get more secure outputs. A prompt that says "write a login handler" will produce less secure code than one that says "write a login handler using parameterized queries, bcrypt hashing, and rate limiting." Document these patterns and make them part of your developer onboarding.

Step 5: Treat AI-generated code as third-party code. This mental model helps teams apply the right level of scrutiny. You wouldn't ship a new third-party library without reviewing its license, checking for known vulnerabilities, and understanding its dependency tree. Apply the same standard to code generated by AI assistants, especially for security-critical components like authentication, authorization, and data handling.

Step 6: Feed security findings back into AI guardrails. When your scanning tools flag a recurring insecure pattern in AI-generated code, document it and share it with your development team. Some organizations maintain an internal list of "AI-specific anti-patterns" that gets updated as new failure modes emerge. This feedback loop improves over time and reduces repeat violations.

Comparison: Manual Code Review vs. Automated AI Code Scanning

| Factor | Manual Code Review | Automated AI Code Scanning |
|---|---|---|
| Speed | Hours per pull request | Seconds to minutes |
| Coverage | Limited by reviewer availability and expertise | Consistent across all pull requests |
| Depth | Can catch logic flaws, design issues, business context violations | Best at pattern matching: known CVEs, CWEs, dependency issues |
| Scalability | Doesn't scale with team growth | Scales linearly with CI/CD infrastructure |
| AI-specific risks | Reviewer may miss hallucinated packages if unfamiliar with ecosystem | Can flag unknown packages automatically against registry |
| Cost | High ongoing personnel cost | Lower marginal cost after setup |
| Best for | Security-critical paths, architectural decisions | Broad coverage, regression prevention, dependency policy enforcement |

Most organizations need both: automated scanning for coverage, manual review for depth on high-risk areas.

Ready to Secure Your AI-Generated Code Pipeline?

Vibe coding security vulnerabilities aren't going away. As AI assistants get more capable, more code will be generated this way, and the security implications will grow. The organizations that treat this as a tooling and process challenge now will be far better positioned than those that wait for a breach to force the issue.

Request a demo of Kusari to see how automated pull request scanning, transitive dependency analysis, and AI-generated code review work together in your CI/CD pipeline.

Frequently Asked Questions About Vibe Coding Security Vulnerabilities

What are vibe coding security vulnerabilities?

Vibe coding security vulnerabilities are security flaws introduced when developers use AI coding assistants to generate code through natural language prompts with limited manual review. These vulnerabilities include insecure code patterns like SQL injection and broken access control, hallucinated dependencies that don't exist in real package registries, and references to outdated or deprecated library versions.

How do hallucinated dependencies become a security threat?

Hallucinated dependencies become a security threat when AI models suggest package names that don't exist in public registries like npm or PyPI. Attackers monitor these AI-generated suggestions and register the phantom package names with malicious code. When a developer installs the package without verifying it, the attacker's payload enters the build pipeline and potentially reaches production.

What percentage of AI-generated code contains vulnerabilities?

The percentage of AI-generated code containing vulnerabilities varies by study and language. Research from the Cloud Security Alliance and Endor Labs found that 62% of AI-generated code contains design flaws or security vulnerabilities. Veracode's analysis reported that 45% of AI-produced code fails against the OWASP Top 10 criteria. Java showed the worst results, with over 70% of AI-generated Java code failing secure coding benchmarks.

How should security teams scan AI-generated code differently?

Security teams should scan AI-generated code by running SAST and SCA tools at the pull request level rather than waiting for scheduled scans. They should add specific detection rules for AI-typical patterns like hardcoded credentials, permissive default roles, and string-concatenated queries. Dependency policy enforcement is also important to catch hallucinated or unvetted packages before they enter the codebase.

Can AI guardrails prevent all vibe coding security risks?

AI guardrails cannot prevent all vibe coding security risks. Guardrails reduce the frequency of common insecure patterns and can block known-bad dependencies, but they can't replace human judgment for context-dependent security decisions. Architecture choices, data flow design, and compliance-specific requirements still need human review, especially in regulated industries and for security-critical application components.

What is the difference between vibe coding and traditional AI-assisted development?

The difference between vibe coding and traditional AI-assisted development is the level of human oversight. In traditional AI-assisted development, a programmer uses autocomplete suggestions for individual lines or functions and reviews each suggestion in context. Vibe coding, by contrast, involves describing entire features in natural language and accepting the AI's implementation with minimal line-by-line inspection. This reduced oversight is what makes vibe coding security vulnerabilities more likely to reach production.

How does vibe coding affect software supply chain security?

Vibe coding affects software supply chain security by introducing unvetted dependencies, outdated library versions, and packages the developer never intentionally selected. Since AI assistants pull from training data that includes millions of open source projects, the generated code may reference packages with known vulnerabilities, deprecated maintainers, or unclear licensing. Without SCA scanning at the pull request level, these supply chain risks enter the codebase undetected.

What should AppSec teams prioritize first when addressing vibe coding risks?

AppSec teams addressing vibe coding risks should prioritize automated pull request scanning first. This provides the broadest immediate coverage because it catches vulnerabilities, insecure patterns, and unvetted dependencies before code merges into the main branch. After that, implementing dependency allowlists and creating secure prompt guidelines for developers deliver the next highest return relative to effort.
