Jon-Rav Shende | Chief Technology Officer, Data Security
How AI is rewriting the rules of insider threats, and why security must rapidly evolve to keep pace
Artificial intelligence (AI) is no longer just a passive tool. It’s an active insider in enterprise operations, interpreting data, executing workflows, automating decisions, accessing sensitive information, and managing critical systems in ways that directly affect an enterprise’s risk posture. AI goes beyond merely supporting business processes; it creates, consumes, and controls information at a scale previously unseen. These shifts bring tremendous benefits, but they also introduce new vulnerabilities and unprecedented risks that traditional security models are not equipped to handle.
As AI systems become more deeply integrated into critical processes, they increasingly resemble trusted insiders, ones that move faster, scale wider, and operate at machine speed with a complexity that exceeds human ability to fully monitor or understand, rendering traditional oversight ineffective. While AI doesn’t act out of malice, its potential for harm is far greater than that of human bad actors, stemming from integrity that is difficult to verify, increasing sophistication, and the default trust placed in its outputs, all of which make governance and detection more complex.
AI’s newfound role makes new security approaches urgent. Organizational AI adoption is moving fast, often outpacing budgets for advanced data security tools and the infrastructure needed to support them. According to the recent Thales Data Threat Report, in just one year the percentage of organizations that moved beyond experimentation into AI implementation grew from 49% to 59%.
The Thales study also reveals that 73% of organizations are investing in GenAI tools. Increasingly, the rapid adoption of GenAI ecosystems outpaces organizational readiness, that is, the development of policies, processes, and controls, leading to ad hoc measures that widen gaps in security coverage.
Recently, IBM found that 63% of organizations either lack AI governance policies or are still developing them, leaving AI unchecked as security and governance lag behind adoption. Without governance, even well-planned AI deployments can introduce unmonitored access paths, unvalidated outputs, and uncontrolled data movement across hybrid clouds. As GenAI becomes more prevalent in oft-targeted and far-reaching cloud environments, security risk rises in kind.
A lack of AI governance introduces compliance risks alongside security risks. The EU AI Act is a governance framework for the secure and trustworthy adoption and use of AI, and businesses must abide by its requirements, especially if they are deploying high-risk AI systems. To ensure compliance, companies can leverage frameworks developed by industry, such as Gartner’s AI TRiSM (AI Trust, Risk, and Security Management), or by research institutions, such as the Adaptive Trust and Responsible AI concept from the Pacific Northwest National Laboratory.
Security strategies have long focused on preventing unauthorized access, monitoring user behavior, and flagging suspicious activity. AI defies these traditional insider threat models. Operating at high speed and scale, it can autonomously access massive datasets, interact with multiple systems, and make thousands of decisions in seconds—all without intent or oversight.
If organizations fail to adjust their security mindset, they risk underestimating the threat AI can pose—not because the systems are dangerous, but because they function in ways that defy conventional security. The more AI is entrusted with core business functions, the more critical it becomes to scrutinize its role, ensure accountability, and implement robust guardrails.
In this new era, enterprise data security must evolve, and do so quickly. It’s no longer just about protecting systems from external attackers or malicious employees. It’s about recognizing that the most powerful and unpredictable actors inside the organization may be the very systems built to help it.
Insider threat detection must evolve from behavior-based monitoring to policy- and purpose-driven detection. It should recognize when an AI system’s outputs, requests, or connections deviate from authorized functions or data boundaries, and feed those signals to threat monitoring tools.
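As a minimal illustration of purpose-driven detection, the sketch below checks an AI agent’s data request against a declared policy of authorized purposes and data scopes and flags anything that deviates; the policy structure, agent names, and alerting hook are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical policy: what each AI agent is authorized to do and to touch.
AGENT_POLICY = {
    "invoice-summarizer": {
        "purposes": {"summarize_invoices"},
        "data_scopes": {"finance/invoices"},
    },
}

@dataclass
class AgentRequest:
    agent_id: str
    purpose: str
    data_scope: str

def evaluate(request: AgentRequest) -> list[str]:
    """Return policy violations for a single agent request."""
    policy = AGENT_POLICY.get(request.agent_id)
    if policy is None:
        return [f"unknown agent '{request.agent_id}'"]
    violations = []
    if request.purpose not in policy["purposes"]:
        violations.append(f"purpose '{request.purpose}' not authorized")
    if request.data_scope not in policy["data_scopes"]:
        violations.append(f"data scope '{request.data_scope}' outside authorized boundary")
    return violations

# Example: an agent reaching beyond its authorized data boundary.
alerts = evaluate(AgentRequest("invoice-summarizer", "summarize_invoices", "hr/salaries"))
if alerts:
    print("POLICY DEVIATION:", "; ".join(alerts))  # forward to the threat monitoring tool
```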
The Thales Data Threat Report shows that 69% of organizations recognize GenAI ecosystems as the greatest AI security risk, with 64% citing a lack of integrity and 57% citing trustworthiness. Corrective action, however, lags behind this recognition. For example, IBM found that 97% of organizations that reported an AI-related breach lacked proper AI access controls, underscoring that AI is a high-value target organizations are not yet prepared to safeguard.
One of the most significant challenges with enterprise AI is ensuring system integrity. Unlike traditional software, AI doesn’t follow clear, rule-based logic. Its decisions are shaped by massive datasets and complex statistical models that even developers can struggle to interpret, audit, or reverse engineer.
This complexity makes bugs and vulnerabilities harder to detect. Instead of obvious software errors, issues might emerge as subtle, systemic behaviors, becoming apparent only after the system has been in operation for some time and damage has been done.
Compounding this, AI systems increasingly interact with one another. One flawed model can influence others, setting off a cascade of unintended consequences across departments or entire organizations.
Security leaders must redefine integrity as both a governance challenge and a technical control domain, combining model lineage, explainability, and cryptographic attestation to validate the provenance and behavior of AI systems.
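As a hedged sketch of what model lineage and attestation can look like in practice, the snippet below fingerprints a model artifact and its training-data manifest and signs the resulting lineage record so a deployed model can later be checked against its attested provenance. The file paths, version label, and signing key are placeholders; a production deployment would use a managed key service or HSM.

```python
import hashlib, hmac, json, time

SIGNING_KEY = b"replace-with-managed-secret"  # placeholder; use a KMS or HSM in practice

def sha256_file(path: str) -> str:
    """Hash a file in chunks so large model artifacts fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def attest(model_path: str, data_manifest_path: str, model_version: str) -> dict:
    """Build and sign a lineage record tying a model version to its inputs."""
    record = {
        "model_version": model_version,
        "model_sha256": sha256_file(model_path),
        "training_data_sha256": sha256_file(data_manifest_path),
        "attested_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

# Verification later recomputes the hashes and the HMAC, proving the deployed
# model still matches its attested lineage before it is trusted with live data.
```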
Another factor that makes AI a potential insider threat is the lack of transparency. Many AI models operate as “black boxes,” generating outputs—decisions, recommendations, actions—without clear explanations of how those outcomes were reached. This opacity poses a direct risk to organizations, amplified by always-on, autonomous AI.
For instance, consider an AI-powered HR system that screens job applicants. If it develops a bias against certain groups due to skewed training data, it can lead to discriminatory hiring practices without immediate notice. Or consider an AI-driven trading system that misinterprets market signals and executes a series of poor trades, causing financial losses in seconds. In both scenarios, the issue isn’t that AI is malicious—rather, it was trusted without sufficient oversight.
Transparency must evolve from a compliance checkbox into an operational control by embracing Explainable AI (XAI) principles, with audit trails linking each output back to its data, logic, and human oversight.
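One minimal sketch of such an audit trail is below: every AI decision is logged with references to the input data, the model version, the explanation produced, and the human reviewer, so an outcome can later be traced end to end. The record fields and file-based log are illustrative, not a prescribed schema.

```python
import json, time, uuid
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class DecisionRecord:
    """One auditable AI decision, linking output to data, logic, and oversight."""
    decision_id: str
    model_version: str
    input_reference: str        # pointer to the exact input data used
    output: str
    explanation: str            # e.g. top features or rationale from an XAI tool
    reviewed_by: Optional[str]  # human reviewer, if any
    timestamp: float

def log_decision(model_version: str, input_ref: str, output: str,
                 explanation: str, reviewer: Optional[str] = None) -> DecisionRecord:
    record = DecisionRecord(str(uuid.uuid4()), model_version, input_ref,
                            output, explanation, reviewer, time.time())
    # Append-only log; in practice this would go to tamper-evident storage.
    with open("ai_decision_audit.jsonl", "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")
    return record
```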
AI is also rapidly expanding cloud environments—and the inherent vulnerabilities in them. As GenAI drives almost 90% of all new online content, organizations are overwhelmed by an ever-expanding data estate. Today, an estimated 175 to 200 zettabytes of data are stored globally. To put this in perspective, 200 zettabytes of data allows an individual to stream 40 trillion years of movies.
Over 80% of enterprise data is unstructured, such as emails, documents, chat transcripts, customer service calls, images, and videos. This diverse data exists across platforms like SharePoint, OneDrive, and Slack, and often resides in disparate systems and clouds, complicating data landscapes and making it challenging to safeguard sufficiently. The Thales 2025 Global Cloud Security Study confirms that cloud security is a top concern for 64% of enterprises worldwide, followed by security for AI. Further, 54% of data in the cloud is classified as sensitive, making cloud security imperative.
To navigate this evolving landscape, organizations must rethink how they manage and secure AI, treating it not just as a tool but as an active agent embedded in their operations, and extending insider threat protection beyond human identity.
It starts with data integrity and security. Just as organizations control who accesses sensitive information, they must define what AI systems are allowed to do and under what conditions. Model development should prioritize explainability and transparency, enabling meaningful oversight and real-time monitoring.
Organizations need to apply Zero Trust principles to AI agents, enforcing “never trust, always verify” for models, APIs, and automated workflows.
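As a hedged illustration of Zero Trust applied to an AI agent, the sketch below verifies a short-lived, scoped credential on every automated workflow call rather than trusting the agent because it lives inside the network. The token format, agent name, and secret handling are simplified placeholders.

```python
import hashlib, hmac, time

SHARED_SECRET = b"replace-with-kms-managed-key"  # placeholder; manage via a KMS in practice

def issue_token(agent_id: str, scope: str, ttl_seconds: int = 300) -> str:
    """Mint a short-lived, scoped credential for a single AI agent."""
    expiry = int(time.time()) + ttl_seconds
    payload = f"{agent_id}|{scope}|{expiry}"
    signature = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{signature}"

def verify_call(token: str, required_scope: str) -> bool:
    """Never trust, always verify: check integrity, expiry, and scope on every call."""
    try:
        agent_id, scope, expiry, signature = token.split("|")
    except ValueError:
        return False
    expected = hmac.new(SHARED_SECRET, f"{agent_id}|{scope}|{expiry}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False                    # forged or tampered credential
    if int(expiry) < time.time():
        return False                    # expired; the agent must re-authenticate
    return scope == required_scope      # least privilege: scope must match the workflow

# Example: gate an automated workflow behind verification on every invocation.
token = issue_token("report-generator", scope="read:sales-data")
assert verify_call(token, required_scope="read:sales-data")
```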
Key control layers start with a data security foundation that protects the underlying data through effective mapping of data flow paths, strong encryption, tokenization, and data classification.
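To make that data security foundation concrete, the sketch below shows classification-driven tokenization: fields classified as sensitive are swapped for random tokens before an AI system ever sees them, with the mapping held in a separate vault. The classification labels and in-memory vault are illustrative stand-ins for enterprise tooling.

```python
import secrets

# Illustrative classification of fields an AI pipeline might consume.
FIELD_CLASSIFICATION = {
    "customer_name": "sensitive",
    "ssn": "sensitive",
    "ticket_text": "internal",
}

token_vault: dict[str, str] = {}  # stand-in for a secured, access-controlled token vault

def tokenize_record(record: dict) -> dict:
    """Replace sensitive fields with opaque tokens before AI processing."""
    protected = {}
    for field, value in record.items():
        if FIELD_CLASSIFICATION.get(field) == "sensitive":
            token = f"tok_{secrets.token_hex(8)}"
            token_vault[token] = value          # original retrievable only via the vault
            protected[field] = token
        else:
            protected[field] = value
    return protected

safe_record = tokenize_record({"customer_name": "Ada Lovelace", "ssn": "123-45-6789",
                               "ticket_text": "Billing question"})
# The AI system now works on tokens; re-identification requires vault access.
```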
Data governance must also evolve. Policies should define how AI is deployed, who is responsible for its behavior, and what steps are taken when things go wrong. This includes clear escalation paths and contingency plans for when AI systems behave unpredictably or fail outright.
AI is not the enemy. It’s a new workforce and a powerful ally—one that can unlock tremendous value when deployed responsibly. But its growing autonomy and influence within organizations mean it must be governed with new security approaches and the level of scrutiny applied to trusted insiders.
In today’s digital landscape, the most unpredictable actors inside an organization may not be people at all. They may be algorithms. Tomorrow’s security strategies must be ready for that reality, with success defined by how well organizations govern autonomous systems as trusted insiders, with visibility, explainability, and continuous verification at every layer.
As security leaders, we must evolve from a maturity level of ad hoc AI use to defined policies, automated governance, and ultimately an autonomous trust fabric.