

Thales AI Cybersecurity: Using AI, Protecting AI, Protecting Against AI

July 10, 2025

Sebastien Cano | SVP, Cyber Security Products, Thales

The growing threat AI poses to cybersecurity and technology resources was the prevailing topic at RSA this year, alongside the desire to harness its power positively and productively.

According to the Thales 2025 Data Threat Report, 69% of respondents found a “fast-moving ecosystem” to be the most concerning GenAI security risk. Perhaps in a desire to move fast themselves, over half (53%) had already invested in GenAI-specific tools, and 20% plan to use newly allocated budget to do so.

In what can only be described as the next technological arms race, organizations understand the imperative of adopting, acknowledging, and accounting for GenAI in their future business and security practices. The question is: will they be able to do it responsibly?

Adopting AI: A Precarious Balancing Act

We are witnessing the advancement of many previously nascent AI-infused security strategies. However, as these postures mature, a growing tradeoff is emerging between rapid adoption and security, the age-old feud. In an effort to keep pace with the fast-growing AI landscape, many organizations are modernizing their technology stacks with GenAI solutions from a variety of sources.

Among those prioritizing AI security, nearly 70% have gone to their cloud provider for AI solutions, just over 60% are opting for established security vendors, and roughly half are trusting new or emerging startups.

These statistics indicate a dominant desire not to get “left behind,” whether in the eyes of consumers, competitors, or the race against attackers. The question to consider is: at what cost? That is something each organization will have to decide.

Protecting AI (From Itself)

While GenAI adoption is desirable, and to an extent a business imperative, it comes with an unavoidable consequence: the AI itself must be secured.

Whether leveraging an AI model or creating one, organizations need to secure every stage of the AI lifecycle, from model development to training and usage. A strengthened AI security posture requires bespoke visibility into, and control over, data, applications, and models, along with a compliance-informed approach. This latter element is especially important because AI regulations are still in flux, and solutions must be able to adapt as requirements are updated or expanded over time.

For these reasons, securing the use of AI and GenAI in your enterprise often requires an AI-infused approach. With AI-powered cybersecurity from Thales, teams can protect sensitive data before it is ingested by AI models, sanitizing it for use.

These capabilities enable organizations to protect important data from exposure to AI, whether during the training and fine-tuning stage or during the Retrieval-Augmented Generation (RAG) process, when that data is in use by large language models (LLMs). This matters for audits and AI compliance, as data privacy laws increasingly scrutinize AI.
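As a concrete illustration, the minimal Python sketch below shows one way sensitive values can be scrubbed and tokenized before documents are embedded and indexed for RAG, so only sanitized text ever reaches the model. The regex patterns, secret, and function names are simplified assumptions for illustration, not a description of Thales product APIs.

```python
import hashlib
import re

# Hypothetical PII patterns for illustration; a real deployment would rely on
# a data discovery and classification engine rather than two simple regexes.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),   # email addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US SSN-style numbers
]

def tokenize(value: str, secret: str = "demo-secret") -> str:
    """Replace a sensitive value with a deterministic, non-reversible token."""
    digest = hashlib.sha256((secret + value).encode()).hexdigest()[:12]
    return f"<TOKEN:{digest}>"

def sanitize_for_rag(document: str) -> str:
    """Scrub sensitive values from a document before it is chunked,
    embedded, and indexed for retrieval-augmented generation."""
    for pattern in PII_PATTERNS:
        document = pattern.sub(lambda m: tokenize(m.group(0)), document)
    return document

# The sanitized text, not the raw record, is what the LLM ever sees.
raw = "Contact jane.doe@example.com (SSN 123-45-6789) about the contract renewal."
print(sanitize_for_rag(raw))
```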

Thales’ advanced AI security tools also enable teams to protect their intellectual property by encrypting the AI model and limiting which applications can decrypt it. This guards the model against IP theft and reverse engineering, helping you get it safely from inception to monetization.
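The sketch below illustrates the general idea under stated assumptions: model weights are encrypted with a data-encryption key, and a simple allow-list stands in for the key-release policy a KMS or HSM would normally enforce. It uses the open-source cryptography package; the file paths, app identifiers, and policy check are hypothetical, not Thales’ actual mechanism.

```python
from pathlib import Path

from cryptography.fernet import Fernet  # third-party `cryptography` package

def encrypt_model(weights_path: str, out_path: str) -> bytes:
    """Encrypt serialized model weights and return the data-encryption key.
    In practice the key would be wrapped by a KMS master key, not returned."""
    dek = Fernet.generate_key()
    ciphertext = Fernet(dek).encrypt(Path(weights_path).read_bytes())
    Path(out_path).write_bytes(ciphertext)
    return dek

def load_model(encrypted_path: str, dek: bytes, app_id: str) -> bytes:
    """Decrypt the weights only for an allow-listed application."""
    # Stand-in for a KMS/HSM access policy deciding which apps get the key.
    if app_id not in {"inference-service"}:
        raise PermissionError(f"{app_id} is not authorized to decrypt this model")
    return Fernet(dek).decrypt(Path(encrypted_path).read_bytes())
```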

Protecting Against AI – Like Only AI Can

Hackers are also participating in the AI technological arms race. The rise of accessible AI tools has lowered the barrier to entry for cyber attackers, enabling them to create and deploy malicious bots at scale. Today, automated traffic makes up 51% of all web traffic. Our solutions continuously monitor for risks and threats, detecting and preventing malicious AI-powered bot attacks and mitigating API abuse in real time by monitoring and protecting API traffic.
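For a sense of how such mitigation works in principle, here is a minimal sketch of API-abuse throttling: a sliding-window rate limit per client combined with a crude bot score built from a few request signals. The thresholds, signals, and scoring are illustrative assumptions, not Thales product behavior.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 120

# client_id -> timestamps of recent requests (sliding window)
_requests = defaultdict(deque)

def allow_request(client_id: str, user_agent: str, passed_js_challenge: bool) -> bool:
    """Return True if the request should be served, False if it should be blocked."""
    now = time.time()
    window = _requests[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    window.append(now)

    bot_score = 0
    if len(window) > MAX_REQUESTS:
        bot_score += 2                      # bursty, automated-looking traffic
    if not passed_js_challenge:
        bot_score += 1                      # failed or skipped browser challenge
    if "python-requests" in user_agent.lower():
        bot_score += 1                      # scripted-client fingerprint

    return bot_score < 2                    # block (or step up) when the score is high
```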

As the sophistication of deepfakes and voice cloning attacks increases and GenAI improves its ability to mimic the human form, security awareness training needs to be complemented by technology that can help humans decipher what is real and what is fake. While employees should always be trained to “spot the signs,” research indicates that “people are biased toward mistaking deepfakes as authentic videos” and often overestimate their own abilities.

Thales’ deepfake detection tools leverage liveness detection to separate reality from AI-generated images and videos, in addition to using machine learning to spot suspicious behaviors in real time. Identity spoofing is another natural extension of AI-based attacks, and biometrics, strong authentication, and risk-based authentication (RBA) can work together to mitigate it.
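A simplified sketch of the risk-based authentication idea follows: a handful of login signals are combined into a score that drives an allow, step-up, or deny decision. The signal names, weights, and thresholds are assumptions chosen for illustration rather than a description of Thales’ RBA engine.

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    new_device: bool            # device not previously associated with the user
    impossible_travel: bool     # geolocation inconsistent with recent activity
    liveness_check_passed: bool # biometric liveness verification result

def assess(ctx: LoginContext) -> str:
    """Combine login signals into a risk score and map it to an action."""
    score = 0
    score += 30 if ctx.new_device else 0
    score += 50 if ctx.impossible_travel else 0
    score += 0 if ctx.liveness_check_passed else 40
    if score >= 70:
        return "deny"
    if score >= 30:
        return "step-up"   # require strong authentication, e.g. FIDO2 or OTP
    return "allow"

# Example: a known-good biometric check on a new device triggers step-up, not denial.
print(assess(LoginContext(new_device=True, impossible_travel=False, liveness_check_passed=True)))
```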

Advanced AI-Based Solutions

There isn’t a single facet of GenAI that didn’t seem to be reviewed and rehashed by the cybersecurity community at RSA. Even so, much more remains to be explored.

As the world careens towards total AI adoption, we must balance the urge not to be left behind with the ever-present need to progress responsibly. Advanced AI-based solutions from Thales consider the unique threats facing AI models, the unique threats they create, and the need to stay competitive in a landscape filled with emerging technological opportunities.

As the world moves towards more AI, Thales’ solutions will continue to pave the path to getting there safely.

Learn more about hardening your modern security posture with AI cybersecurity solutions from Thales.