Where AI Labs Will and Won't Disrupt Cybersecurity
AI labs are pushing into application security, but three structural barriers keep them out of runtime protection, proprietary threat intelligence, and SOC workflows. WAFs are safe for now. Here is what RSAC 2026 revealed.
AI labs push into application security
AI labs entered cybersecurity through the most obvious door: application security. At RSAC Conference 2026, Foundation Capital partner Sid Trivedi argued that moving from static analysis into dynamic testing was a natural extension of code generation capabilities. As AI-generated code increases, securing that output has become an obvious adjacency.
But that is where the easy wins end.
Three areas AI labs will not easily touch
Trivedi identified three structural barriers that limit how deep AI labs can go into security:
Runtime sensors. Endpoint-level protection requires deep instrumentation that AI labs are not building. Products like CrowdSec, open-appsec, and traditional WAFs such as ModSecurity live at the runtime layer. That is not something a model API replaces.
Proprietary data. Security functions built on data that is unavailable for public training remain out of reach. WAF rule sets built from real attack traffic, like the Cloudflare managed ruleset or Imperva threat intelligence, depend on proprietary signal that no foundation model can train on.
SOC and incident response. Multi-tool integrations, embedded context, and workflow orchestration across security stacks are not problems that scale with model size.
Google holds the strongest position
Among AI labs, Trivedi sees Google as having the strongest end-to-end position in cybersecurity. That aligns with what we see in the WAF market. Google Cloud Armor already integrates ML-based adaptive protection. Combining that with Gemini-class models gives Google a path from code scanning to runtime defense that other AI labs cannot easily match.
WAFplanet take
This is good news for the WAF industry. AI labs will keep improving code-level security scanning, which is genuinely useful. But they are not coming for your WAF any time soon.
WAFs operate on runtime request data, tuned rulesets, and protocol-level inspection. That is fundamentally different from code analysis. Vendors like F5, Fastly, and AWS are integrating AI into their existing WAF products, not competing against foundation model labs.
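The distinction is easy to see in miniature. Below is a toy sketch of runtime request inspection: a pattern match over live request data, in the spirit of a ModSecurity SecRule. The rule IDs, field names, and patterns here are hypothetical placeholders for illustration, not real ruleset entries; production rulesets are far larger and tuned against actual attack traffic.

```python
import re

# Hypothetical toy rules: (rule id, target field, pattern, message).
# IDs and patterns are illustrative, not taken from any real ruleset.
RULES = [
    (942100, "args", re.compile(r"(?i)\bunion\b.+\bselect\b"), "SQL injection"),
    (941100, "args", re.compile(r"(?i)<script\b"), "XSS attempt"),
]

def inspect_request(request: dict) -> list[str]:
    """Return a message for every rule the live request triggers."""
    hits = []
    for rule_id, field, pattern, message in RULES:
        # Inspect runtime request data, not source code.
        value = request.get(field, "")
        if pattern.search(value):
            hits.append(f"{rule_id}: {message}")
    return hits
```

The key point the sketch illustrates: the input is a live request, something a code-scanning model never sees. Static analysis of the application's source cannot tell you what traffic is hitting it right now.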
The real question is whether AI labs will eventually build identity layers that compete with WAF authentication features. That is the frontier to watch.