In the last post, we went through the static threats that live inside AI models before they ever see a user: training data poisoning, model backdoors, serialization exploits, weight tampering, hardcoded behavior, weight theft, and compression artifacts. These are not theoretical. They are real, documented, and a growing concern as more organizations build and deploy models at speed.
This post is about what you can do about them. Not in theory. In practice, today, with tools that exist.
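One of those threats is concrete enough to show in a few lines. Serialization exploits typically ride in through Python's pickle format, and the dangerous instructions are visible in the file before you ever load it. Here is a minimal sketch of a static opcode scan using the standard library's `pickletools`; the opcode list is illustrative, not a complete detector:

```python
import pickle
import pickletools

# Opcodes that can import arbitrary names or call arbitrary callables on load.
# (Illustrative subset, not an exhaustive denylist.)
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "INST", "OBJ", "REDUCE"}

def scan_pickle(data: bytes) -> list[str]:
    """Return a finding for each opcode that could trigger code execution."""
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS_OPCODES:
            findings.append(f"{opcode.name} at byte {pos}")
    return findings

# A benign pickle of plain data contains none of the dangerous opcodes...
safe = pickle.dumps([1, 2, 3])
print(scan_pickle(safe))  # []

# ...while one that smuggles in a callable does.
class Evil:
    def __reduce__(self):
        # On unpickling, this would call print("pwned") -- a stand-in
        # for arbitrary code execution.
        return (print, ("pwned",))

malicious = pickle.dumps(Evil())
print(scan_pickle(malicious))  # flags STACK_GLOBAL and REDUCE
```

In practice you would reach for a maintained scanner, or avoid pickle entirely in favor of a data-only format like safetensors. But the sketch shows why static scanning works at all: the dangerous instructions sit in the file itself, so you can flag them without executing anything.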
Everyone in the AI space knows about prompt injection. Someone slips a bad instruction into a model’s input at runtime, and the model does something it shouldn’t. It’s the kind of attack that gets blog posts, conference talks, and CVEs. But there is a whole category of threats that get almost no attention, and they are arguably harder to detect and fix. These are static threats: vulnerabilities that live inside the model itself, before it ever sees a single user input.
This post is about those threats.
Why OWASP Top 10 2025’s #3 Risk Demands Your Immediate Attention
In November 2025, the Open Worldwide Application Security Project (OWASP) released the eighth edition of its Top 10 security risks, and the message is clear: software supply chain security has graduated from a niche concern to one of the most critical threats facing modern organizations. Ranked at position #3 with an alarming 5.19% incidence rate, Software Supply Chain Failures represents a shift in how we must approach application security.
Recently I have observed more development teams using AI coding assistants like Claude Code directly in their CI/CD pipelines. The productivity gains can be impressive, but the trend raises serious security concerns that every organization should understand.