THALES BLOG

Thales Introduces Imperva AI Application Security

December 16, 2025

Michael Wright | Sr. Product Marketing Manager, Cybersecurity & Digital Identity More About This Author >

AI has fundamentally transformed the threat landscape.

Today, 67% of organizations globally are adopting or building in-house LLMs and GenAI applications. Just under 70% of respondents to the Thales 2025 Data Threat Report regard this fast-moving ecosystem as the most concerning GenAI security risk. According to the 2025 Imperva Bad Bot Report, automated traffic now accounts for 51% of all web activity.

Up until now, however, we’ve not changed how we protect our applications. We still rely on traditional tools that can’t combat AI-specific threats and vulnerabilities. That’s why at Thales we’ve developed Imperva AI Application Security.

Why do Organizations Need Imperva AI Application Security?

LLMs introduce unique vulnerabilities into enterprise application environments. Traditional security tools, like WAFs, endpoint protection, and network security, simply cannot understand or defend against the threats specific to an LLM’s logic and interface. Here’s why.

Logic-Based Threats

Traditional security focuses on blocking known malicious code or network anomalies. LLM threats, however, often leverage the model’s intended functionality – also known as logic – to perform malicious actions. That makes these threats much harder to detect.

Many of these threats appear in the OWASP Top 10 for LLMs. For example:

  • Prompt Injection: This threat involves using a carefully crafted prompt to manipulate an LLM into working in a way it shouldn’t. For example, ignoring its system instructions, revealing internal data, or performing other unintended actions. Since this is a natural language input, traditional filters often treat it as benign.
  • Sensitive Data Leakage: LLMs are trained on and process vast data, including potentially sensitive user or proprietary enterprise information. An attacker might use prompt injection or exploit an application flaw to trick the model into outputting this confidential data.
  • System Prompt Leakage: System prompts are an LLM’s core instructions and operational logic. They are proprietary and critical to an LLM’s function. If an attacker manages to get hold of these prompts, they can use them to further exploit or bypass the model’s defenses.

Critical Gaps in the Security Stack

Traditional web application firewalls are great at blocking malicious inputs. That means they can detect and prevent threats like SQL injection or cross-site scripting. They don’t, however, have the context or logic necessary to analyze an LLM’s output.

As a result, these traditional security tools alone can’t determine whether LLM output is harmful, unsafe, non-compliant, or exposing sensitive data. This leaves the organization vulnerable to Improper Output Handling.

Moreover, because LLMs are computational, attackers can overwhelm them with resource-heavy queries. This is known as Unbounded Consumption. It can cause costly slowdowns, denial-of-service (DoS) issues, or massively inflated operational costs. Basic rate limiting is the only defense traditional tools have against this kind of attack, and that doesn’t cut it.

The bottom line here is that AI-specific risks demand AI-specific mitigations. That’s exactly what Imperva AI Application Security provides.

What is Imperva AI Application Security?

Imperva AI Application Security is an enterprise-grade security solution designed specifically to safeguard GenAI and LLM applications. It provides specifically designed runtime protection that sits between enterprise applications and the LLMs they run on.

Think of Imperva AI Application Security as an intelligent shield. One that analyzes every input and output in real time to detect and stop malicious activity. One that safeguards the unique behaviors and outputs of GenAI applications. All without impacting your applications’ performance.

Key capabilities include:

  • Prompt Injection Defense: Blocks malicious or manipulative prompts before they reach the model.
  • Sensitive Data Protection: Detects and blocks exposure of sensitive or proprietary information, including Personally Identifiable Information (PII), financial data, and Application Programming Interface (API) keys.
  • System Prompt Leakage Prevention: Prevents attackers from accessing internal instructions or operational logic.
  • Improper Output Handling: Filters harmful, unsafe, or non-compliant AI outputs before they reach end-users.
  • Unbounded Consumption Mitigation: Prevents abusive or resource-heavy AI tasks that could cause slowdowns, outages, or inflated operational costs.

The solution is flexible and environment agnostic, meaning you can deploy and integrate it seamlessly with your existing environment. Ultimately, it provides a level of defense that traditional tools cannot match.

Is Imperva AI Application Security a Replacement for Traditional Security Tools?

No. You still need WAFs, endpoint protection, and network security tools. Imperva AI Application Security solves a very specific problem: the critical gap at an application’s AI interface in which legacy tools can’t detect or prevent threats. It enables organizations to:

  • Deploy AI-driven applications with confidence: Protect users and enterprise operations from AI-specific risks.
  • Prevent costly and reputationally damaging incidents: Stop prompt injection, data leakage, and model manipulation before they impact your business.
  • Accelerate innovation while maintaining compliance: Enable safe AI adoption at scale without disrupting workflows or regulatory obligations.
  • Focus on business growth: Ensure unique LLM threats are managed so teams can innovate and expand without added security concerns.

How Does Imperva AI Application Security Fit with the Thales Security Vision?

Imperva AI Application Security is a key component of the Thales broader security vision. It’s part of the Thales AI Runtime Security suite of solutions for AI protection known as Thales AI Security Fabric, which also includes RAG Data Protection and additional capabilities being launched in 2026.

Thales is uniquely positioned to protect the full GenAI lifecycle, securing every layer of AI systems from users to applications, from applications to LLMs to underlying data stores. 

With Thales AI Security Fabric, organizations will be able to:

  • Enable AI for business growth: Maximize the value of AI, allowing teams to innovate and expand without added security risks.
  • Prevent costly and reputationally damaging incidents: Reduce the risk of prompt injection, data leakage, and model manipulation before they impact your business.
  • Accelerate innovation while maintaining compliance: Allow Agentic AI and Gen AI access to datasets while protecting sensitive and regulated data.

Thales’s Imperva AI Application Security offering will become generally available in the middle of 2026. Until then, consider partnering with us as we progress forward or why not explore other aspects of the Imperva best-in-class application security?