
Thales Blog

The Challenge of Bias in AI – Creating Ethical Guidelines

February 6, 2020

Ashvin Kamaraju | Vice President of Engineering, Strategy & Innovation

Artificial intelligence (AI) is becoming increasingly integral to information security. From the multitude of ways AI is used in business to the creation of smart cities and the safeguarding of transportation, AI impacts nearly every aspect of our lives. In fact, in its Reinventing Cybersecurity with Artificial Intelligence report, Capgemini found that 61% of respondents said they can no longer detect data breach attempts without the help of AI. This perspective informed the decision of 48% of the surveyed organizations to increase their digital security spending for AI by an average of 29% in 2020. In preparation for this greater investment, nearly three-quarters (73%) of organizations are already testing use cases for AI in their network security and other digital security initiatives.

All of the findings presented above assume that organizations can implicitly trust the results of their AI-powered security solutions. But what if they shouldn’t? It’s possible these solutions suffer from biases that skew their results and, in turn, deny organizations a truly accurate picture of their digital security.

Such potential for bias highlights the need to create ethical guidelines for AI. Acknowledging that need, this blog post will explain the challenge of using ethics to eliminate bias in AI. It will then offer some specific recommendations on how the security community can avoid harmful bias going forward.

What Is Bias in AI, Anyway?

The issue of bias in AI arises from the fact that software products are human creations. That is to say, human developers, together with the data they use, are responsible for determining what counts as a good outcome from an AI-powered solution. And those decisions might not be good for everyone. More likely, they are good only in certain contexts and not in others.

Towards Data Science expands on how this loss of neutrality at the moment of a technology’s creation applies specifically to AI:

In AI, specifically machine learning…, missing data, missed inputs, missed decision paths, missed discussions all have a bearing on the “quality” of the prediction. Choosing to use the prediction irrespective of its “quality” has a bearing on the “quality” of the end outcomes. And the “quality” of end outcomes has a bearing on the “quality” of its impact on humans.

Needless to say, AI suffers from bias across its many applications, and information security is no exception. Indeed, Help Net Security covered a recent survey from O’Reilly which found that 59% of respondents didn’t check for fairness, bias or ethical issues while developing their machine learning (ML) models. Not only that, but nearly one in five (19%) organizations revealed that they struggled to adopt AI due to a lack of data, issues with data quality and/or a shortage of development skills.
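To make that gap concrete, here is a minimal sketch (in Python, using purely hypothetical labels and group names) of the kind of check the survey found most teams skipping: comparing a security model’s false positive rate across subgroups of traffic, where a large gap is one simple warning sign of bias.

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of benign samples (label 0) that the model incorrectly flags as malicious (label 1)."""
    benign = (y_true == 0)
    return float(np.mean(y_pred[benign] == 1)) if benign.any() else 0.0

def fpr_by_group(y_true, y_pred, groups):
    """Compare false positive rates across subgroups; a large gap is one simple signal of bias."""
    return {g: false_positive_rate(y_true[groups == g], y_pred[groups == g])
            for g in np.unique(groups)}

# Hypothetical labels and predictions for alerts on traffic from two business units
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])   # ground truth (1 = actually malicious)
y_pred = np.array([0, 1, 1, 0, 1, 1, 1, 0])   # model output (1 = flagged as malicious)
groups = np.array(["hq", "hq", "hq", "hq", "branch", "branch", "branch", "branch"])

print(fpr_by_group(y_true, y_pred, groups))   # e.g. {'branch': 1.0, 'hq': 0.33}
```

A check like this is deliberately simple, but even a few lines of analysis per subgroup would surface the kind of disparity that the 59% of respondents above never look for.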

These inadequacies ultimately come together and skew the outcome of an AI-powered security solution. As noted in Fast Company, a limited data sample might prevent a tool from flagging certain behavior as suspicious. A threat that slips through as a false negative can then carry on with its malicious activity, move deeper into the organization’s network and evolve into a security incident without raising any red flags. On the other hand, an improperly tuned algorithm could flag otherwise benign network traffic as malicious, preventing business-critical information from getting through and burdening security teams with unnecessary investigations into false positives.
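That trade-off is easy to demonstrate. The short sketch below uses assumed anomaly-score distributions (not real telemetry) to show how a single detection threshold trades missed attacks (false negatives) against noisy alerts (false positives):

```python
import numpy as np

# Hypothetical anomaly scores from a detection model (higher = more suspicious); not real telemetry
rng = np.random.default_rng(0)
benign_scores = rng.normal(0.3, 0.10, 1000)   # everyday business traffic
malicious_scores = rng.normal(0.6, 0.15, 50)  # rare attack traffic

for threshold in (0.4, 0.5, 0.6, 0.7):
    fp_rate = np.mean(benign_scores >= threshold)    # benign traffic flagged as malicious
    fn_rate = np.mean(malicious_scores < threshold)  # attacks that slip through unflagged
    print(f"threshold={threshold:.1f}  false positives={fp_rate:.1%}  false negatives={fn_rate:.1%}")
```

Skewed or limited training data effectively bakes a bad version of this trade-off into the model itself, which is why the choice of data matters as much as the choice of threshold.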

Excising Bias from an AI-Powered Solution

The computer science and AI communities aren’t unaware of bias. In fact, some companies like Google and Microsoft have plans to develop their own ethical guidelines for AI. But as Fast Company notes in another article, the problem with these initiatives is that they oftentimes don’t reflect the cultural and social nuances that shape different interpretations of “ethical behavior.”

The other issue is that companies need to follow through on implementing those principles. Unfortunately, that’s not a foregone conclusion. An article in The New York Times rightly points out that companies often change course or sacrifice their idealism to address financial pressures.

Absent larger initiatives such as governmental regulation, developers and organizations can begin to shift the conversation towards ethical AI by reconceiving the modelling process. Fast Company explains in a recent article that such reconceptualization begins by bringing social science into the AI conversation. In practice, that means drawing on more diverse computer scientists, data sources and security teams to protect their organizations. Doing so helps account for contextual and perceptual differences, thereby improving the efficiency of the algorithms and the scope of their input data. At the same time, these new AI models should allow for a degree of dynamism so that they can evolve as we change, culturally and socially.

Finally, CIOs interested in coming up with their own ethical frameworks should realize that they don’t need to reinvent the wheel. Many researchers and organizations have already published foundational principles for AI. An InformationWeek article shares some of these.