Nadav Avital | Senior Director of Threat Research at Thales
More About This Author >
Nadav Avital | Senior Director of Threat Research at Thales
More About This Author >
AI is everywhere: in our phones, offices, and homes. It shapes recommendations, makes analysis lightning-fast, and even writes its own code.
The shape of AI changes, and with it, the risks.
At the very end of the chain, where a person types a prompt and waits for a response, the words themselves become an attack surface. The form of AI security becomes layered. It follows the lifecycle of the system itself: collection, training, deployment, and use.
Training data is fuel; without it, models starve. When data is tainted, models rot from the inside.
This is not theoretical: a dataset can be seeded with malicious inputs, shaping the outcome. And because AI learns by example, bad examples become part of its logic. In addition, many AI systems rely on RAG to provide updated, relevant data, which can also be manipulated. The problem grows even bigger if you rely on a public data set without control over the data creation or update process.
This is why many experts argue that protecting AI begins by protecting data. Each stage of the AI lifecycle is data-heavy. Inputs, outputs, and interactions are all data.
Secure that flow, and you secure much of the system. Leave it exposed, and the system weakens.
Malefactors are quick to see opportunity. GenAI gives them new tools: code written at speed, phishing messages polished in flawless English, deepfakes that evade the human eye.
That’s not all. Attackers use AI to automate workflows, including scouting the terrain, mapping networks, and finding weaknesses. Work that once took hours now takes minutes. They can unleash AI models against open-source projects, finding zero-day flaws that have slept unnoticed for years.
This lowers the barrier to entry for criminals, adding to the load for defenders. Here, it’s about scale. Threats hit more systems, more often, and in ways no human could spot in time.
Defenders are moving, too. AI can protect by cutting noise, limiting false positives, explaining incidents in plain language, and hunting zero-days before malefactors do.
This is the changing shape of defense: building higher walls and smarter ones, adding layers that understand intent and context instead of signatures and rules.
There is also the matter of intellectual property. Companies invest years training a model, refining its data, and integrating it into a service. That model is value. If deployed carelessly (say, embedded in a customer’s own environment), it can be copied, reverse-engineered, or altered. Protecting the model is as much about business survival as it is about security.
The form factor here is the business model - companies charging per scan, use, or outcome. If the model leaks, revenue does too. And if the model is critical (for instance, analyzing medical images), its integrity becomes life-critical.
Large language models add yet another surface. Their “language” can itself be exploited. Prompt injection, sensitive information disclosure, supply-chain risks, and improper output handling are new threat categories. To meet this, researchers are working on what is called an “AI firewall.” Unlike traditional firewalls, these solutions must interpret intent and semantics to spot malicious conversation.
This is a deeper form of security. Not just defending the shell of the system, but engaging with its very mode of interaction, words.
Agentic systems can act autonomously across APIs, data sources, and workflows, making them powerful but also unpredictable. The risks include tool poisoning, privilege compromise, and loss of control when agents chain decisions in unintended ways.
If not governed properly, these systems can amplify external and insider threats, expose sensitive data, or be manipulated by adversaries to execute harmful tasks. Organizations must treat agentic AI with the same rigor as any other critical system: strong identity controls, continuous monitoring, and a security-by-design approach.
AI moves fast, but perceptions of its risks move just as quickly. Once a novelty, AI can leak secrets, spread lies, and open doors to bad actors. Companies are starting to treat AI like any other critical infrastructure: shielded, monitored, and tested.
AI is not bound to a single shape or place. It shifts from cloud to edge, from service to device, from structured API to free-form dialogue. Security must match that fluidity. Protection of data, models, identity, and the interfacing application must travel with the system wherever it takes shape.