
The Eternal Sunshine of the Criminal Mind

July 18, 2023

Steve Prentice | Security Sessions Host

Everyone who works in cybersecurity or IT knows the frustration of dealing with relentlessly creative threat actors. Every day, it seems, breaking industry news reveals another story about how a criminal gang or hacker penetrated a website, database, or device by reverse engineering its defences, by discovering a weakness, or by using a feature or tool for something other than its intended purpose. From SQL injection to code hidden inside digital images, to convincing AI-based GPT technologies to step around their own strict rules of behaviour, these people seem to have levels of ingenuity and energy that far surpass those of ordinary people.
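
To make the first of those techniques concrete, here is a minimal sketch in Python (the table, the data, and the payload are hypothetical, chosen purely for illustration) of how splicing user input straight into a SQL statement lets an attacker rewrite the query's logic, and how a parameterized query treats the same input as harmless data:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "' OR '1'='1"  # a classic injection payload

    # Vulnerable: the input is pasted into the SQL text, so the attacker's
    # quotes become part of the query and rewrite its logic.
    query = "SELECT * FROM users WHERE name = '" + user_input + "'"
    print(conn.execute(query).fetchall())  # returns every row

    # Safer: a parameterized query treats the input purely as data.
    print(conn.execute("SELECT * FROM users WHERE name = ?",
                       (user_input,)).fetchall())  # returns nothing

The point is not the particular payload but the pattern: the attacker needs only one spot where data and code are mixed, while the defender has to find them all.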

How can we get a piece of that? Why does it seem that threat actors have the advantage in this war? Or is such an assessment even accurate?

The fact is, despite their cleverness in finding workarounds and vulnerabilities, it is still easier to break something than to build something. They need only find that one flaw. Those who develop software or design devices from the ground up are just as intelligent and driven, but they must deal with design issues, quality assurance, shift-left/continuous testing, deadlines, and budgets, and still produce a viable product. They hire pen testers and ethical hackers to help locate those flaws. They pay bug bounties, and they relentlessly seek to improve their products with new iterations and upgrades.

Yet it is the hackers who get the media attention, and often the reward, for spotting a weakness hidden deep inside the code or the machine.

Proactivity means challenging the norms

Proactive security, defence in depth, zero trust – these are all practices that developers and organizations must embrace and apply continually. But much of what makes for secure borders comes from practices rooted in human factors. It is based on psychology and physiology. It's about trust and errors. For example, a person who walks into a place of business, such as a bank or an office tower, moving with purpose, appearing at home in the surroundings, and basically looking like they have business being there, will often go unchallenged. This person might even have a door literally opened for them by a polite employee seeking either to help this busy visitor or to avoid causing an embarrassing scene by challenging them.

Effective security also carries another millstone around its neck: when security people do their job correctly, no one notices. So long as everything seems to be working, the talents and achievements of these people go essentially unnoticed. One of the largest examples of this in the cyber world was Y2K, the turn-of-the-century challenge involving millions of computers that stored dates with two-digit years and had no instructions on what to do when the calendar moved from 1999 to 2000. That story is now roughly a quarter of a century old and is seen as a nothingburger by many, simply because nothing terrible happened. But nothing terrible happened because thousands of computer engineers worldwide, some called out of retirement, fixed the problem in the nick of time, at a global cost of between $200 billion and $800 billion, depending on whose reports you read.
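
As a minimal sketch of the underlying bug (hypothetical logic, written in Python purely for illustration): years were stored as two digits, and arithmetic that had worked for decades produced nonsense at the rollover.

    # Legacy-style date arithmetic on two-digit years (illustrative only).
    def age(birth_yy: int, current_yy: int) -> int:
        # Both years are two digits; the subtraction silently assumes
        # they fall in the same century.
        return current_yy - birth_yy

    print(age(65, 99))  # 34: correct while "99" means 1999
    print(age(65, 0))   # -65: the Y2K bug, once "00" means 2000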

Even on a daily basis, if your smartphone works, the Wi-Fi is connected, and the internet is running, all seems well, and the people keeping it going are forgotten.

The problem with this type of thinking is that it institutionalizes the normalcy of operations and consequently dilutes the awareness of risk we need in order to keep pace with threat actors. In fact, the hypersensitivity to danger that we repress is the very same nervous energy that keeps creative bad actors at the top of their game. They simply must find that next weakness, test this newly released code, exploit vulnerabilities whose patches haven't yet reached consumers. For the threat actor, this energy stays high. For the rest of us, a general sense of safety, paired with politeness and a good dose of time starvation, pushes those same instincts away. This is not to say that cybersecurity professionals have forgotten this urgency; it is simply that their requests for better tools too often fall on deaf, complacent ears.

What we perceive as human failures are often systemic

What does that leave us with? Well, phishing, for one. People who click on phishing links often get blamed for the resulting malware infestation. But clicking on a link is a natural human thing to do. The entire World Wide Web, the interface of the internet, was built on that one key idea: “click here and something good will happen.”

The fact that phishing continues to be exploited by energetic threat actors must be set against the fact that organizations have generally dropped their side of the equation. It's a perfect example of organizations treating security as a final thought or an afterthought, addressed only once a product or process has been created and adopted. Phishing is not a human failure: it's a technology failure. If an organization can be taken down by someone clicking a link, which is, after all, part of their job, that's a segmentation problem. It's something that should not be allowed to happen, and the system should be built to make it “not possible.”
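
As a minimal sketch of what “built into the system” could look like (the segment names and allowed flows below are hypothetical, and real segmentation is enforced by firewalls, VLANs, and identity-aware proxies rather than application code), a default-deny policy between network segments means a phished workstation simply has no route to the crown jewels:

    # Toy default-deny segmentation check (illustrative only).
    ALLOWED_FLOWS = {
        ("workstations", "web-proxy"),
        ("web-proxy", "internet"),
        ("app-servers", "database"),
    }

    def is_allowed(src_segment: str, dst_segment: str) -> bool:
        # Anything not explicitly allowed is denied by default.
        return (src_segment, dst_segment) in ALLOWED_FLOWS

    print(is_allowed("workstations", "database"))  # False: the click is contained
    print(is_allowed("app-servers", "database"))   # True: a sanctioned path

Under a policy like this, a single compromised endpoint is an incident to be contained, not a catastrophe.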

Build security around people rather than forcing them to fit

To remain effective against a relentless adversary, security procedures must be built around the people who use them; otherwise, those people will find workarounds. Don't focus on reports that show how many people are clicking on phishing links; focus instead on how quickly they report these issues and how comfortable they feel in making those reports. They should not feel that they may get fired for self-reporting a phishing accident. They should instead know that they will be rewarded for alerting everyone to a potential incident. This, by the way, is the essence of kaizen and gemba, two key management concepts developed as part of the Toyota Production System in the middle of the last century, but just as relevant today.

The bad actors own, or work for, businesses just like the rest of us do. Their key advantage is they get to look for that loose brick in a wall that someone else has built, which is always going to be easier than designing and building a wall from scratch. We simply need to remember that the energy that fuels their search is equally available to everyone, so long as we lift complacency from our eyes.

This blog is a digest of a podcast conversation between myself, Amanda Widdowson, Head of Human Factors Capability at Thales UK, and Freaky Clown, co-CEO, co-founder, and Head of Ethical Hacking at Cygenta. You can hear the full conversation in Season 3, Episode 2 of the Thales Security Sessions Podcast, online or on your podcast app of choice.