Marco Venuti | IAM Enablement & Acceleration Director
Traditional identity protections were never designed for the age of AI. They can’t stop the lightning-fast, highly convincing identity attacks AI facilitates. There’s a reason that nearly 60% of businesses say compromised credentials are the leading cause of breaches.
As we mark the 21st anniversary of Cybersecurity Awareness Month, organizations must rethink identity security. They must move from static protections to adaptive, AI-resistant defenses that keep attackers out and users safe.
Phishing has long been one of the most common and most successful cyber threats.
However, up until recently, most phishing messages had mistakes such as poor spelling, suspicious tone, or odd formatting that would give them away. Now, large language models (LLMs) can generate flawless emails, even mimicking a CEO’s tone and style by learning from company communications.
The result: emails that look indistinguishable from legitimate business communication, making business email compromise (BEC) even harder to stop. Strong passwords alone won’t help here – if an employee is tricked into handing over credentials, the attacker is already inside.
Credential stuffing—using stolen usernames and passwords across multiple sites—is nothing new. What’s changed is the scale and sophistication AI brings to it.
Today’s AI-enhanced bots can test stolen credentials across countless sites at a scale and speed no human attacker could match, adapting on the fly to slip past basic defenses.
Strong passwords and MFA help, but on their own, they’re no longer enough.
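To make the defensive angle concrete, here is a minimal sketch, in TypeScript, of one way a login service might spot a credential-stuffing pattern by watching failed-login velocity and username spread per source IP. The names and thresholds are illustrative assumptions, not a particular product’s API, and real deployments draw on far richer signals.

```typescript
// Minimal sketch: flag credential-stuffing patterns by tracking failed
// logins per source IP inside a sliding time window.
// All names and thresholds are illustrative, not a real product API.

type LoginAttempt = { ip: string; username: string; success: boolean; at: number };

const WINDOW_MS = 10 * 60 * 1000;   // look at the last 10 minutes
const MAX_FAILURES = 20;            // failures per IP before we challenge
const MAX_DISTINCT_USERS = 10;      // many usernames from one IP is a red flag

const attemptsByIp = new Map<string, LoginAttempt[]>();

function recordAttempt(attempt: LoginAttempt): "allow" | "challenge" {
  const now = attempt.at;
  const recent = (attemptsByIp.get(attempt.ip) ?? [])
    .filter((a) => now - a.at < WINDOW_MS);
  recent.push(attempt);
  attemptsByIp.set(attempt.ip, recent);

  const failures = recent.filter((a) => !a.success).length;
  const distinctUsers = new Set(recent.map((a) => a.username)).size;

  // Spray pattern: many failures across many accounts from one source.
  if (failures > MAX_FAILURES || distinctUsers > MAX_DISTINCT_USERS) {
    return "challenge"; // e.g. require step-up authentication or block
  }
  return "allow";
}
```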
Multi-factor authentication was meant to stop attackers in their tracks. But AI-enhanced “push bombing” has turned it into a weapon against users themselves.
Here’s how it works: after stealing credentials, attackers trigger endless MFA push notifications. At first, users reject them. But frustration builds. Confusion sets in. Eventually, someone taps “Approve” just to make the prompts stop.
AI makes these attacks even more dangerous by automating the barrage of prompts and pacing them so the pressure never lets up.
Traditional “Approve/Deny” MFA can’t withstand that pressure.
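One widely used mitigation, sketched below in TypeScript with illustrative names and limits, is to cap how many push prompts an account can trigger in a short window and to require number matching, so a weary tap on “Approve” no longer completes a login. This is a generic sketch, not a description of any specific vendor’s implementation.

```typescript
// Illustrative sketch: throttle MFA push prompts and require number matching.
// Names and limits are hypothetical; they are not a specific vendor API.

const MAX_PUSHES_PER_HOUR = 3;
const pushCount = new Map<string, { count: number; windowStart: number }>();

function canSendPush(userId: string, now: number): boolean {
  const entry = pushCount.get(userId);
  if (!entry || now - entry.windowStart > 60 * 60 * 1000) {
    pushCount.set(userId, { count: 1, windowStart: now });
    return true;
  }
  if (entry.count >= MAX_PUSHES_PER_HOUR) {
    return false; // stop the bombardment; fall back to a phishing-resistant factor
  }
  entry.count += 1;
  return true;
}

// Number matching: the login screen shows a code the user must type into the
// authenticator app, so blindly tapping "Approve" cannot succeed on its own.
function verifyNumberMatch(displayedCode: string, enteredCode: string): boolean {
  return displayedCode === enteredCode;
}
```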
AI isn’t just making old attackers smarter – it's creating new ones. Tools like FraudGPT industrialize scams with ready-made phishing templates, malware scripts, and evasion tactics.
At the same time, AI can generate entire deepfake personas: synthetic identities complete with AI-created photos, voices, and online histories. These “individuals” can pass security checks, open bank accounts, and interact with victims like real people.
And unlike phishing or credential stuffing, passwords don’t even come into play here – attackers simply create new accounts that look legitimate. Even worse, conventional security checks won’t stop them; if onboarding relies on static documents or superficial validation, these AI-crafted identities can walk right through.
As threats evolve, organizations need to rethink identity protection. But adopting new technologies doesn’t mean we should throw the baby out with the bathwater. Long-established best practices are still essential.
Passwords have long been the weakest link in security. Even with strong, unique combinations and password managers, users remain vulnerable to phishing, credential stuffing, and breaches.
That’s why more organizations are embracing passwordless authentication. By replacing passwords with methods like biometrics, FIDO2 hardware security keys, or cryptographically bound passkeys, businesses can eliminate the risks tied to stolen or reused credentials.
Passwordless not only strengthens security but also improves the user experience—no more remembering complex strings or dealing with constant resets. It’s a win-win: better protection and a smoother login process.
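For readers who want to see what this looks like in practice, here is a simplified browser-side sketch that registers a passkey with the standard WebAuthn API. The relying-party details and user values are placeholders; in a real deployment the challenge is issued by your server and the resulting credential is verified and stored there.

```typescript
// Simplified sketch of passkey registration with the WebAuthn API.
// In a real flow the challenge, user id, and verification happen server-side.

async function registerPasskey(): Promise<void> {
  const credential = await navigator.credentials.create({
    publicKey: {
      // The challenge must be generated and validated by the server.
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rp: { name: "Example Corp", id: "example.com" },
      user: {
        id: new TextEncoder().encode("user-123"),
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        residentKey: "required",      // discoverable credential, i.e. a passkey
        userVerification: "required", // biometric or device PIN
      },
    },
  });

  // Send the attestation response to the server for verification and storage.
  console.log("Created credential:", credential);
}
```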
We established earlier that traditional MFA is vulnerable to AI-enhanced MFA fatigue attacks. Key word: traditional. Adaptive, context-based MFA, however, isn’t. It protects against MFA fatigue by weighing signals such as device, location, and user behavior before issuing a prompt, so a suspicious login can be challenged or blocked instead of turned into yet another push notification.
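The sketch below illustrates the general idea in TypeScript; the signals, weights, and thresholds are made-up assumptions rather than any product’s actual scoring model.

```typescript
// Generic sketch of risk-based, adaptive MFA. Signals, weights, and
// thresholds are illustrative; real engines use many more inputs.

type LoginContext = {
  knownDevice: boolean;
  usualLocation: boolean;
  impossibleTravel: boolean;   // e.g. two logins too far apart in too little time
  recentPushDenials: number;   // a fatigue-attack indicator
};

function decideAuthentication(ctx: LoginContext): "allow" | "step-up" | "block" {
  let risk = 0;
  if (!ctx.knownDevice) risk += 30;
  if (!ctx.usualLocation) risk += 20;
  if (ctx.impossibleTravel) risk += 40;
  if (ctx.recentPushDenials >= 2) risk += 40;

  if (risk >= 70) return "block";    // don't even send a push the user could mis-approve
  if (risk >= 30) return "step-up";  // require a phishing-resistant factor
  return "allow";                    // low risk: no extra friction
}
```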
But passwords and MFA are only part of the equation. To stop AI-driven attacks, organizations also need defenses that watch what happens after login – both at the account level and within transactions.
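As one simplified illustration of transaction-level monitoring, the snippet below compares each action against an account’s established profile and holds anything anomalous for review; the profile fields and thresholds are hypothetical.

```typescript
// Simplified sketch of a post-login transaction check.
// The profile fields and thresholds are hypothetical.

type Profile = { avgAmount: number; usualCountries: string[] };
type Transaction = { amount: number; country: string; newPayee: boolean };

function reviewTransaction(tx: Transaction, profile: Profile): "approve" | "hold" {
  const unusualAmount = tx.amount > profile.avgAmount * 5;
  const unusualCountry = !profile.usualCountries.includes(tx.country);

  // Hold for review when several weak signals line up.
  if ([unusualAmount, unusualCountry, tx.newPayee].filter(Boolean).length >= 2) {
    return "hold";
  }
  return "approve";
}
```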
Sometimes, attackers won’t try to gain access to your systems by hijacking an existing user’s account; they’ll use a synthetic identity to create an entirely new account.
Thales' digital identity verification (IDV) solutions ensure users are who they claim to be during onboarding. They combine checks on the identity document itself with biometric verification that the person presenting it is real and present.
These onboarding processes prevent deepfake-enabled cybercriminals from registering with false or stolen identities. By confirming genuine digital identities, you ensure only real individuals can create accounts or authenticate.
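To illustrate the shape of such an onboarding flow (not Thales' actual API), the sketch below chains document verification, a liveness check, and a face match before any account is created; every function name is a hypothetical stand-in for whichever IDV service is integrated.

```typescript
// Hypothetical onboarding flow combining document verification, liveness
// detection, and face matching. These functions stand in for whatever IDV
// service you integrate; they are not a real vendor SDK.

async function onboardNewUser(idDocument: Blob, selfieVideo: Blob): Promise<boolean> {
  // 1. Check that the identity document is genuine and unaltered.
  const doc = await verifyDocument(idDocument);
  if (!doc.authentic) return false;

  // 2. Confirm a live human is present, not a replayed or AI-generated face.
  const liveness = await checkLiveness(selfieVideo);
  if (!liveness.passed) return false;

  // 3. Match the live selfie against the photo on the document.
  const match = await compareFaces(doc.portrait, selfieVideo);
  return match.score > 0.9; // threshold is illustrative
}

// Placeholder signatures for the hypothetical IDV calls above.
declare function verifyDocument(doc: Blob): Promise<{ authentic: boolean; portrait: Blob }>;
declare function checkLiveness(video: Blob): Promise<{ passed: boolean }>;
declare function compareFaces(portrait: Blob, video: Blob): Promise<{ score: number }>;
```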
AI-driven phishing, credential stuffing, MFA fatigue, and synthetic identities all show one thing: strong passwords alone are not enough. Organizations need context-aware, behavior-based defenses that can adapt in real time—closing gaps before attackers can exploit them.
That’s exactly what Thales IAM solutions deliver. Because when attackers evolve, defenses must evolve faster—and at Thales, we’re committed to leading that evolution.