AI Coding Tools Are Doubling Secret Leak Rates: What GitGuardian’s 2026 Report Reveals
There’s an uncomfortable truth lurking in every developer’s workflow: the tools making us faster are also making us leakier. GitGuardian’s State of Secrets Sprawl 2026 report, published March 17, reveals that 28.65 million new hardcoded secrets were pushed to public GitHub repositories in 2025 — a 34% year-over-year increase and the largest single-year jump ever recorded.
The headline statistic is striking enough. But dig into the data and the real story emerges: AI-assisted coding is fundamentally changing the shape of the secrets problem. Commits generated with AI coding tools leak secrets at roughly double the baseline rate. And the new AI infrastructure — MCP configurations, LLM orchestration layers, vector databases — is creating entirely new categories of credential exposure that didn’t exist two years ago.
If you’re a developer using Copilot, Claude Code, Cursor, or any AI coding assistant, this report is essential reading.
The Scale of the Problem
Let’s start with the raw numbers:
- 28.65 million new hardcoded secrets pushed to public GitHub in 2025
- 34% increase year-over-year (the largest annual jump on record)
- 1.94 billion public GitHub commits in 2025 (up 43% YoY)
- 33% growth in the active developer base
More developers writing more code means more secrets leaking — that’s not surprising. What’s surprising is where the leaks are coming from and how the composition is changing.
AI Is Creating a New Generation of Leaks
The most important finding in the report: the type of secrets being leaked is shifting rapidly.
AI Service Secrets: Up 81%
Secrets related to AI services reached 1,275,105 in 2025 — an 81% year-over-year increase. Eight of the ten fastest-growing secret detectors were tied to AI services. The report specifically highlights:
- 113,000 leaked DeepSeek API keys as one example of how quickly new services create exposure
- LLM infrastructure secrets (orchestration, RAG pipelines, vector storage) leaked 5x faster than core model provider keys
- New AI wrappers, gateways, registries, and integration layers are entering production faster than security controls can keep up
AI-Assisted Commits Leak at 2x the Baseline
Here’s the stat that should stop every developer in their tracks:
Claude Code-assisted commits showed a 3.2% secret-leak rate, versus a 1.5% baseline across all public GitHub commits.
That’s more than double. But GitGuardian is careful to add important context: this isn’t simply a tool failure. Developers remain in control of what gets accepted, edited, and pushed. Even as coding assistants improve their guardrails, people can still override warnings or ask the model to behave insecurely.
The leak still happens through a human workflow. AI is changing the pace and shape of software development, but the underlying failure mode is familiar: people under time pressure making local decisions in complex systems.
The speed is the problem. When you’re generating code 3-5x faster, you’re also generating mistakes 3-5x faster. And the review process hasn’t scaled to match.
24,000 Secrets in MCP Configurations
Model Context Protocol (MCP) is rapidly becoming standard infrastructure for connecting AI tools to external services. GitGuardian found 24,008 unique secrets exposed in MCP-related configuration files across public GitHub, including 2,117 unique valid credentials — an 8.8% validity rate.
Why MCP Configs Are a Problem
The issue is partly cultural. Official MCP setup guides and documentation often recommend putting API keys directly into configuration files, command-line arguments, or embedded connection strings. When the official quickstart says “put your key here,” developers do exactly that.
This is a pattern security teams should recognize: new standards often arrive with convenience-first examples. If those examples assume hardcoded credentials, the problem spreads at ecosystem speed.
Common patterns GitGuardian identified in MCP leaks:
```jsonc
// DON'T DO THIS (but many guides tell you to)
{
  "mcpServers": {
    "database": {
      "command": "mcp-server-postgres",
      "args": ["postgresql://user:P@ssw0rd@prod-db:5432/main"]
    }
  }
}
```
The Fix
Use environment variables, secrets managers, or credential injection at runtime. Never hardcode credentials in MCP configuration files, and never commit those files to version control.
```jsonc
// DO THIS INSTEAD
{
  "mcpServers": {
    "database": {
      "command": "mcp-server-postgres",
      "args": ["${DATABASE_URL}"]
    }
  }
}
```
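Some MCP clients expand environment-variable references like `${DATABASE_URL}` natively; when a client does not, a thin launcher can perform the substitution at startup so the credential only ever lives in the process environment. A minimal Python sketch of that runtime-injection idea (the `${VAR}` syntax and config fragment are illustrative, not any specific client's behavior):

```python
import json
import os
import re

def inject_env(config_text: str) -> str:
    """Replace ${VAR} placeholders with values from the environment.

    Raises KeyError if a referenced variable is unset, so a missing
    credential fails loudly instead of launching with a broken string.
    """
    return re.sub(
        r"\$\{([A-Z0-9_]+)\}",
        lambda m: os.environ[m.group(1)],
        config_text,
    )

# The committed config holds only the placeholder...
committed = '{"args": ["${DATABASE_URL}"]}'
# ...and the real credential lives only in the process environment.
os.environ["DATABASE_URL"] = "postgresql://user:secret@prod-db:5432/main"

resolved = json.loads(inject_env(committed))
print(resolved["args"][0])  # the URL never touches version control
```

The key property is that the file safe to commit contains no secret at all; rotation then means updating the environment or secrets manager, not rewriting git history.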
Internal Repos Are 6x Worse
One of the most sobering findings: internal repositories are approximately 6x more likely than public ones to contain hardcoded secrets.
This makes sense psychologically — developers feel less urgency about security in private repos because the exposure seems less immediate. But this private buildup becomes exactly the material attackers exploit once they gain internal access. Every lateral movement in a breach relies on finding credentials that provide access to the next system. Internal repos are a goldmine.
The Problem Beyond Code
About 28% of secret incidents originate entirely outside repositories — in Slack messages, Jira tickets, Confluence pages, and other collaboration tools. These non-code leaks are 13 percentage points more likely to be categorized as critical than secrets found only in code.
Why? Because secrets shared in collaboration tools are often passed around during urgent troubleshooting, incident response, or operational debugging. The context is urgent, and urgent contexts produce high-impact exposures.
The Remediation Crisis
Perhaps the most alarming finding of all:
64% of valid secrets from 2022 are still active and exploitable in 2026.
That means secrets that were confirmed leaked four years ago have never been rotated. The remediation gap isn’t closing — it’s barely moving.
Additionally, 46% of critical secrets are missed by validation-only prioritization, meaning many high-risk exposures remain underprioritized because they can’t be automatically verified.
This is the dirty secret of secrets management: finding the leaks is getting better, but fixing them isn’t keeping pace.
Developer Workstations Are Now Prime Targets
As AI agents gain deeper local access to terminals, files, editors, environment variables, and credential stores, the developer laptop itself becomes a more meaningful attack surface.
GitGuardian’s analysis of the Shai-Hulud 2 supply chain attack provides a window into what lives on developer machines: across 6,943 compromised machines, the team found 294,842 secret occurrences corresponding to 33,185 unique secrets.
Even more concerning: 59% of the compromised machines were CI/CD runners rather than personal workstations, meaning the attack surface extends well beyond individual endpoints into the build pipeline itself.
What You Should Do Right Now
For Individual Developers
- Install a pre-commit hook that scans for secrets before they ever reach a repository. GitGuardian offers `ggshield`, and there are open-source alternatives like `detect-secrets` and `trufflehog`.
- Never hardcode credentials in MCP or AI tool configs. Use environment variables or a secrets manager. Add `mcp.json`, `.cursor/`, and similar AI config files to your `.gitignore`.
- Review AI-generated code carefully for secrets. When an AI assistant generates code that references an API, database, or external service, check whether it hardcoded a credential or used a proper reference.
- Rotate secrets that have been exposed. If you’ve ever committed a secret — even to a private repo — rotate it immediately. Don’t assume deletion from git history is sufficient; leaked secrets are often scraped within minutes.
- Use short-lived tokens whenever possible. Tokens that expire after hours or days limit the blast radius of any leak.
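In practice you should reach for `ggshield`, `detect-secrets`, or `trufflehog`, but the core mechanic of a pre-commit secret scan is simple enough to sketch. A minimal, illustrative Python version (the patterns below are a tiny, non-exhaustive sample of what real scanners detect):

```python
import re
import sys

# A tiny, non-exhaustive sample of secret patterns; production scanners
# (ggshield, detect-secrets, trufflehog) ship hundreds of detectors.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID shape
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access token shape
    re.compile(r"://[^/\s:]+:[^@\s]+@"),  # user:password embedded in a URL
]

def scan(text: str) -> list[str]:
    """Return every secret-like substring found in text."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]

def main(paths: list[str]) -> int:
    """Exit non-zero if any file contains a secret-like string."""
    findings = []
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as f:
            findings += [(path, hit) for hit in scan(f.read())]
    for path, hit in findings:
        print(f"possible secret in {path}: {hit[:12]}...")
    return 1 if findings else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

Wired into a git pre-commit hook over the staged file list, a non-zero exit blocks the commit, which is exactly the "before it ever reaches a repository" property that matters: once a secret is pushed, it must be treated as compromised.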
For Security Teams
- Monitor AI tool adoption and include AI-related configuration files in your scanning scope.
- Audit internal repos with the same rigor as public ones. The 6x higher leak rate in private repos is a ticking time bomb.
- Scan collaboration tools (Slack, Jira, Confluence) for secrets. 28% of incidents originate outside code.
- Implement automated secret rotation to address the remediation gap. If 64% of leaked secrets from 2022 are still active, manual processes aren’t working.
- Track the AI secret sprawl trend. AI service secrets grew 81% in one year. This trajectory will accelerate as AI infrastructure becomes more complex.
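Secrets pasted into Slack or Jira rarely follow tidy key formats, which is why many scanners fall back on Shannon entropy: random tokens have measurably higher entropy per character than ordinary prose. A minimal sketch of that heuristic (the length cutoff of 20 and the 4.0-bit threshold are illustrative assumptions; real tools tune these per character set):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character, estimated from character frequencies."""
    if not s:
        return 0.0
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def flag_secret_like(tokens: list[str], threshold: float = 4.0) -> list[str]:
    """Return tokens long and random-looking enough to resemble credentials."""
    return [t for t in tokens if len(t) >= 20 and shannon_entropy(t) > threshold]

# A troubleshooting message of the kind that ends up in chat channels:
message = "prod db is down, try token 9fB2kQ7xLmZ4pW8sV1nRt6 and restart"
print(flag_secret_like(message.split()))
```

English words score well under the threshold, while the mixed-case alphanumeric token stands out, so the same check works on chat messages and ticket text where format-based detectors have nothing to match.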
The Bottom Line
AI coding tools are a productivity revolution. They’re also a secrets management crisis accelerator. The tools aren’t the villain — they’re amplifiers. They amplify developer speed, and they amplify developer mistakes. The 3.2% leak rate for AI-assisted commits versus 1.5% baseline isn’t a condemnation of AI tools — it’s a signal that our security processes haven’t adapted to AI-speed development.
The solution isn’t to stop using AI coding tools. It’s to build guardrails that match the pace. Pre-commit scanning, automated rotation, runtime credential injection, and AI-aware security policies aren’t optional anymore. They’re table stakes.
29 million secrets on public GitHub. 64% of four-year-old leaks still unrotated. The numbers are getting worse, not better. The time to act was yesterday.
Sources
- GitGuardian, “The State of Secrets Sprawl 2026,” March 17, 2026
- Security Boulevard, “GitGuardian study shows AI coding tools double leak rates as 29M credentials hit GitHub,” March 17, 2026
- GlobeNewsWire, “GitGuardian Reports an 81% Surge of AI-Service Leaks,” March 17, 2026