
The Dose
Your developer is deciding how much to tell you about AI risk. That's not dishonest. It's rational. But the questions you're not asking are the ones that matter most.
The Brief
Developers who understand AI supply chain risk are calibrating how much they reveal to clients, not out of dishonesty but because full transparency sounds like a conspiracy theory. A practitioner's view from inside the chain, where comfort is real and also insufficient for the scale of exposure.
- What is the AI code supply chain problem?
- Nearly every coding tool runs on 3-4 foundation models with overlapping training data. AI-generated code enters production through open-source libraries and vendor software, passing through links nobody tracks. NIST calls this an algorithmic monoculture.
- How many vulnerabilities come from AI-generated code?
- Georgia Tech's Vibe Security Radar tracked 6 CVEs from AI-generated code in January 2026, 15 in February, and 35 in March. Researchers estimate the real number is 5-10x higher because developers routinely strip AI authorship metadata before committing code. Claude Code alone accounts for over 4% of all public commits on GitHub.
- Why don't developers tell clients about AI supply chain risks?
- Full transparency about AI supply chain exposure sounds conspiratorial to non-technical audiences. Developers calibrate what they share based on what clients can absorb. The same information that makes someone a responsible practitioner in a security briefing can make them sound unemployable in a client meeting.
I tell my clients I use AI coding tools. I'm upfront about it, and not because I'm trying to cover myself when something breaks; the responsibility falls on me regardless of what wrote the code. I'm transparent because my clients need to know they're competing with companies that have already made this choice. When I tell them, they're relieved. Every time. Not concerned. Relieved. It means they're keeping up.
That relief is where this story starts. And where it gets complicated.
What I Carry
A NATO AI architect named Markus Sandelin published a piece in March that stopped me mid-scroll. He pointed out that defense ministries are drafting policies on whether to permit AI-generated code in military systems. The code is already there.1 It arrived through the open-source libraries underneath every command-and-control system. Maintained by volunteers using the same AI tools. Patched by suggestions that looked right, merged by developers who accepted them because they did. By the time any of this reaches a defense application, the code has passed through dozens of AI-touched links that no one tracks.
Claude Code alone accounts for over 4% of all public commits on GitHub.2 And that number is almost certainly low. I know it's low because I know what developers do. We stop the AI from committing and commit ourselves. We strip the co-author tag. Not out of shame. Because leaving it on makes it look like the tool did the job. The AI wrote the code. The human takes the bow. We've been doing this for about a year, and already it's a tradition.
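The tag in question is a standard Git commit trailer; Claude Code, for example, appends a `Co-Authored-By` line to the commits it makes. Here is a minimal sketch of how that attribution appears and how easily it disappears (the repo, file, and commit message are illustrative, not from any real project):

```shell
# Illustrative only: a throwaway repo showing how an AI co-author
# trailer appears in a commit, and how one amend removes it.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name "Dev"

echo 'print("hi")' > app.py
git add app.py

# What an AI-assisted commit can look like (exact trailer text varies by tool):
git commit -q -m 'Add app entrypoint' \
  -m 'Co-Authored-By: Claude <noreply@anthropic.com>'
git log -1 --format=%B   # trailer is present in the full message

# One amend later, the attribution is gone from the record:
git commit -q --amend -m 'Add app entrypoint'
git log -1 --format=%B   # same code, no co-author
```

Once the trailer is amended away, nothing in `git log`, GitHub, or any commit-counting study can recover it, which is why commit-based estimates of AI authorship are a floor, not a ceiling.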
The client smiled. The storm wasn't on the agenda.
The Therapeutic Window
Here's what I don't tell my clients.
I don't tell them about the Amazon Q Developer exploit. Someone injected a malicious instruction into the official VS Code marketplace extension. It reached 964,000 installations. It told the AI to wipe everything and delete cloud resources. The only thing that stopped it was a syntax error in the attacker's code.1 A typo. That's the current margin of safety for AI coding supply chains.
Each link in the chain is rational. The chain as a whole is unexamined.
There's a therapeutic window for truth, and I've learned where it is. Below the window, clients don't know enough to make good decisions. Above it, I sound like a conspiracy theorist. The same information that makes me a responsible practitioner in a security briefing makes me unemployable in a client meeting. So I dose it. I give them what they can absorb.
The Placebo
90% of enterprises say they can see their AI footprint. 59% admit they have AI running that nobody approved. Both answers came from the same survey.3
I keep my clients safe. I review the code, I understand the architecture, I catch things that AI tools miss. My clients are genuinely better off for having someone in the chain who pays attention. But paying attention has parameters. Beyond those parameters, the foundation model that might be poisoned, the library six layers deep that got an AI-generated patch nobody reviewed, the exploit that's one syntax error away from working... that's past the line. That's tornado insurance territory. The contract covers the house. The tornado is an act of God.
The comfort my clients feel, knowing someone competent is handling this, might be a placebo. Not because I'm not doing the work. Because the exposure is bigger than any one person's work can cover. The vulnerability isn't in the code. It's in the perception. And the questions nobody's asking are the ones that matter most.
So what's the right dose? I don't know. But the question you should be asking your developer isn't whether they use AI. It's what they're not telling you about it.
References
1. Sandelin, M. (2026). "Your Defense Code Is Already AI-Generated. Now What?" War on the Rocks.
2. Claburn, T. (2026). "Using AI to code does not mean your code is more secure." The Register.
3. Purple Book Community / ArmorCode (2026). "State of AI Risk Management 2026." ArmorCode.