AI has fundamentally transformed the threat landscape.
Today, 67% of organizations globally are adopting or building in-house LLMs and GenAI applications. Just under 70% of respondents to the Thales 2025 Data Threat Report regard this fast-moving ecosystem as the most concerning GenAI security risk. According to the 2025 Imperva Bad Bot Report, automated traffic now accounts for 51% of all web activity.
Up until now, however, we’ve not changed how we protect our applications. We still rely on traditional tools that can’t combat AI-specific threats and vulnerabilities. That’s why at Thales we’ve developed Imperva AI Application Security.
LLMs introduce unique vulnerabilities into enterprise application environments. Traditional security tools, like WAFs, endpoint protection, and network security, simply cannot understand or defend against the threats specific to an LLM’s logic and interface. Here’s why.
Traditional security focuses on blocking known malicious code or network anomalies. LLM threats, however, often leverage the model’s intended functionality – also known as logic – to perform malicious actions. That makes these threats much harder to detect.
Many of these threats appear in the OWASP Top 10 for LLMs. For example:
Traditional web application firewalls are great at blocking malicious inputs. That means they can detect and prevent threats like SQL injection or cross-site scripting. They don’t, however, have the context or logic necessary to analyze an LLM’s output.
As a result, these traditional security tools alone can’t determine whether LLM output is harmful, unsafe, non-compliant, or exposing sensitive data. This leaves the organization vulnerable to Improper Output Handling.
Moreover, because LLMs are computational, attackers can overwhelm them with resource-heavy queries. This is known as Unbounded Consumption. It can cause costly slowdowns, denial-of-service (DoS) issues, or massively inflated operational costs. Basic rate limiting is the only defense traditional tools have against this kind of attack, and that doesn’t cut it.
The bottom line here is that AI-specific risks demand AI-specific mitigations. That’s exactly what Imperva AI Application Security provides.
Imperva AI Application Security is an enterprise-grade security solution designed specifically to safeguard GenAI and LLM applications. It provides specifically designed runtime protection that sits between enterprise applications and the LLMs they run on.
Think of Imperva AI Application Security as an intelligent shield. One that analyzes every input and output in real time to detect and stop malicious activity. One that safeguards the unique behaviors and outputs of GenAI applications. All without impacting your applications’ performance.
Key capabilities include:
The solution is flexible and environment agnostic, meaning you can deploy and integrate it seamlessly with your existing environment. Ultimately, it provides a level of defense that traditional tools cannot match.
No. You still need WAFs, endpoint protection, and network security tools. Imperva AI Application Security solves a very specific problem: the critical gap at an application’s AI interface in which legacy tools can’t detect or prevent threats. It enables organizations to:
Imperva AI Application Security is a key component of the Thales broader security vision. It’s part of the Thales AI Runtime Security suite of solutions for AI protection known as Thales AI Security Fabric, which also includes RAG Data Protection and additional capabilities being launched in 2026.
Thales is uniquely positioned to protect the full GenAI lifecycle, securing every layer of AI systems from users to applications, from applications to LLMs to underlying data stores.
With Thales AI Security Fabric, organizations will be able to:
Thales’s Imperva AI Application Security offering will become generally available in the middle of 2026. Until then, consider partnering with us as we progress forward or why not explore other aspects of the Imperva best-in-class application security?