It’s impossible to ignore the seismic impact of Generative AI (GenAI) on enterprise operations. From personalized customer interactions to rapid content generation and smarter automation, GenAI is transforming how businesses operate and compete.
However, beneath the fanfare lies a complex and pressing paradox. The Thales 2025 Data Threat Report found that while 69% of enterprises acknowledge that the breakneck speed of GenAI’s evolution is its greatest threat, many still treat data security and trustworthiness as an afterthought.
The AI Trust Paradox
GenAI works with data; it learns from it. It doesn’t just generate output. It influences decisions, systems, and people. When the models that power this revolution are vulnerable to bias, theft, and manipulation, we’re not just talking about technical risks. We’re talking about business threats to integrity, trust, and reputation.
Counterintuitively, in the Thales 2025 Consumer Digital Trust Index, 32-33% of consumers said they would trust a brand more if it used GenAI or AI. Consumers believe these tools increase personalization and security, making interactions more efficient. By using AI responsibly, brands can build long-term customer relationships by meeting their needs and preferences.
Are we Rushing to Adopt AI?
Businesses know this, and according to the 2025 Data Threat Report findings, nearly three-quarters (73%) are investing in security tools to protect their AI deployments. Yet, it’s clear that many of them haven’t fully grasped the complexity of their GenAI architectures or the implications of embedding these tools into SaaS platforms at scale.
AI is often described as a “black box,” but what happens when that black box becomes a blind spot?
CISOs today are being asked to endorse and oversee AI-powered systems that are making decisions with significant business consequences. But here’s the dilemma: How can you trust those systems when you’re not confident in the data that feeds them?
This dilemma goes beyond the theoretical. GenAI models are susceptible to:
- Bias and fairness issues — because they learn patterns from historical data, which may reflect past prejudices.
- Model theft — where malicious actors extract architecture, embeddings, or weights to replicate or corrupt systems.
- Adversarial input — where subtle tweaks to prompts or data deceive the model into producing false or harmful output.
- Output manipulation — including deepfakes and synthetic content used for fraud, disinformation, or reputational sabotage.
Each risk touches on the core principles of confidentiality, trustworthiness, and integrity. And each demands a more deliberate, disciplined approach to data.
Hence, the real question raised by the Thales 2025 Data Threat Report is this: Are businesses ready to adopt AI at full scale, or are they rushing ahead without first ensuring security?
Continuous Vigilance of Data is Paramount in the AI Era
Data quality and security have become the new focal points of enterprises' rapid adoption of GenAI. Polluted data will lead to wrong decisions with severe repercussions for businesses. CISOs must ask themselves if they are enabling innovation or opening a Pandora’s box.
Retrieval-augmented generation (RAG) is a good example. By design, RAG brings enterprise data into the model’s context, but it becomes a liability if it bypasses classification rules or includes sensitive material. Similarly, we risk intellectual property loss, compliance violations, and trust erosion if publicly available LLMs are trained on proprietary or confidential data.
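One way to keep RAG from bypassing classification rules is to enforce them at retrieval time, before any document reaches the model’s context. The sketch below is a minimal illustration of that idea; the `Document` record, the `classification` labels, and the allowed-label set are hypothetical assumptions for this example, not details from the report.

```python
# Minimal sketch: gate retrieved documents on a classification label
# before they enter a RAG prompt. All names and labels are illustrative.

from dataclasses import dataclass

ALLOWED_LABELS = {"public", "internal"}  # labels cleared for model context


@dataclass
class Document:
    doc_id: str
    text: str
    classification: str  # e.g. "public", "internal", "confidential"


def build_context(retrieved: list[Document]) -> str:
    """Join only the documents whose label is cleared for the LLM context."""
    cleared = [d for d in retrieved if d.classification in ALLOWED_LABELS]
    return "\n\n".join(d.text for d in cleared)


docs = [
    Document("a1", "Product FAQ text.", "public"),
    Document("b2", "Quarterly revenue projections.", "confidential"),
]

# The confidential projection never reaches the prompt.
context = build_context(docs)
```

In a real pipeline this check would sit inside the retriever itself, so that sensitive material is excluded before ranking rather than filtered afterward.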
The new frontier of security is fluid, dynamic, and data-centric. And according to this year’s Data Threat Report, we’re making progress, but not quickly enough.
In 2021, 36% of respondents said they lacked confidence in their ability to locate their data. By 2025, that’s improved, but only to 24%. And while data classification has held steady, with 80 to 85% of respondents able to classify at least half of their data, that’s still a massive portion of information flying under the radar.
Encouragingly, enterprises are encrypting more of their sensitive cloud data. In 2021, only 46% said they encrypted 40% or more of that data. In 2025, that number jumps to 68%. That’s a big win, but encryption without clarity is like locking doors in a house you cannot map.
Why the disconnect? Fragmentation. Nearly two-thirds of enterprises use five or more tools for data discovery or classification. Over half rely on five or more encryption key management solutions. This patchwork approach leads to duplicate rules, inconsistent protections, and silos that undermine any holistic data strategy.
In short, you can’t secure what you can’t find or trust what you can’t secure.
What Must Change?
To safely harness the promise of GenAI, security must move upstream. That means:
- Reinforcing data classification and lineage — Know where your data is, where it comes from, and how it’s being used.
- Consolidating and rationalizing security tooling — Fragmentation breeds risk. Prioritize platforms that unify visibility and control.
- Applying encryption consistently, not selectively — Protect data across environments, not just in “sensitive” systems, and ensure centralized key management.
- Creating AI usage guardrails — Prevent model poisoning, hallucination, or misuse through role-based controls and prompt validation.
- Building trust by design — Integrate transparency and auditability into your AI architecture from the outset.
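The “AI usage guardrails” point above can be sketched as a simple pre-flight check that combines role-based scoping with prompt validation. The deny patterns, roles, and topic table below are illustrative assumptions for this sketch, not a production policy from the report.

```python
# Minimal sketch of a role-based prompt guardrail: a prompt is allowed
# only if the caller's role covers the topic and no injection pattern
# matches. Patterns and roles here are illustrative assumptions.

import re

DENY_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal (the )?system prompt", re.I),
]

ROLE_TOPICS = {
    "support_agent": {"billing", "shipping"},
    "analyst": {"billing", "shipping", "finance"},
}


def validate_prompt(role: str, topic: str, prompt: str) -> bool:
    """Reject prompts outside the role's scope or matching deny patterns."""
    if topic not in ROLE_TOPICS.get(role, set()):
        return False
    return not any(p.search(prompt) for p in DENY_PATTERNS)
```

A check like this is deliberately cheap: it runs before any model call, so misuse is blocked without spending tokens, and the deny list can be maintained centrally alongside the role table.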
Trust Is Not a Byproduct — It’s the Starting Point
GenAI is too powerful, too pervasive, and too consequential to roll out without guardrails. While the speed of innovation dazzles, CISOs must be the voices of discipline and foresight. The good news? Most organizations are starting to invest in AI-specific security. The better news? We now have the visibility to do something about it.
But the real question is: Will we treat trust as a strategic imperative—or a footnote in the rush to innovate?
The Thales 2025 Data Threat Report looks into these issues, offering practical insights and sobering statistics for CISOs navigating this new frontier. If you’re serious about securing the future of AI in your enterprise, this is one report you can’t afford to skip.