
Keeping Customer Data Safe: AI's Privacy Paradox

March 14, 2024

Chris Harris | Associate VP, Sales Engineering

AI's appeal lies in its ability to personalize and streamline customer experiences in ways previously unimaginable. Through sophisticated algorithms and machine learning capabilities, AI can analyze vast amounts of data to understand individual preferences and behavior patterns.

This enables brands to deliver tailored recommendations, anticipate customer needs, and provide timely assistance, ultimately fostering deeper engagement and loyalty. For consumers, AI promises to make interactions with brands more convenient, efficient, and enjoyable.

Catching the Public’s Attention

As AI technology matures and becomes more integrated into everyday experiences, consumers are becoming more receptive to its potential benefits. The rise of generative AI has made waves over the last year, with tools such as OpenAI’s ChatGPT and Microsoft Copilot being widely adopted.

For better and worse, the innovation and adoption of these tools have focused public attention on how they could affect people’s working lives. That interest is reflected in the Thales 2024 Consumer Digital Trust Index, “Building Digital Experiences that Enhance Consumer Trust,” which identified AI as the technology most likely to positively impact consumers’ online interactions with brands. Much of the debate has focused on how brands will use generative AI in their interactions; the study showed that just over half (51%) of respondents would be happy for companies to use the technology to improve their experiences.


Putting Personal Data at Risk

The results were not all positive, however. The Thales study highlighted that nearly six out of ten (57%) global consumers were anxious that brands using generative AI would put their personal data at risk. More than four in ten (43%) said they would not trust any interactions powered by generative AI, and an even higher number (47%) don’t trust companies to use generative AI responsibly.

These figures aren’t surprising: consumers often have limited awareness of the laws that currently safeguard their data privacy. That is understandable, given the complex and increasingly stringent landscape of privacy regulations affecting consumers and innovators alike.

However, AI has emerged as a focal point in technology policy discussions, intertwining with ongoing debates on data privacy.


Adding Complexity

With EU lawmakers reaching political agreement on the AI Act in December 2023, the regulatory landscape became even more complex and demanding for companies. The Act is a landmark regulatory framework intended to foster trust and accountability in AI systems within the EU, introducing strict guidelines for high-risk AI applications and mandating transparency, accountability, and human oversight.

Although it was met with skepticism by some privacy groups, the Act establishes a comprehensive regime overseeing AI development, deployment, and governance, emphasizing the protection of fundamental rights and ensuring AI systems operate in line with European values. It also sets out significant fines for non-compliance, signaling a commitment to safeguarding citizens against the risks posed by AI technologies while promoting innovation and competitiveness in the European digital landscape.

Conversely, in 2023 Europe’s General Data Protection Regulation (GDPR) demonstrated how regulatory frameworks can impede AI development. Italy’s temporary block on ChatGPT and the delay to the EU launch of Google’s Bard (since renamed Gemini) at the request of Ireland’s data protection authority underscore the challenges posed by data privacy laws that some consider draconian.


World Consumer Rights Day

With all this change underway, it’s not surprising that World Consumer Rights Day, held every year on March 15, has chosen ‘Fair and responsible AI for consumers’ as its theme for this year. It serves as a timely reminder for consumers to review their privacy settings and for policymakers to consider the trade-offs associated with overly restrictive privacy laws.

Introducing the theme, the World Consumer Rights Day organizers noted that last year “breakthroughs in generative AI took the digital world by storm.” A slew of chatbots, for instance, have been introduced as customer service agents able to mimic human conversation. Millions of consumers are already using generative AI daily, and the technology is expected to have a tremendous impact on how people work, create, communicate, gather information, and much more.

However, while there is a real opportunity here, the organizers stress that there are serious ramifications for consumer safety and digital equality. And because developments are happening at an unprecedented pace, all stakeholders need to move rapidly to ensure a fair and responsible AI future.

This year’s event will shine a spotlight on concerns such as misinformation, privacy violations, and discriminatory practices, and will examine how AI-driven platforms can subvert the truth by spreading false information and perpetuating bias.


Looking to the Future

Looking ahead to 2025 and beyond, the future of AI and its impact on consumers and trust promises to be an exciting journey, full of advancements and, no doubt, challenges too.

One thing is certain: AI is set to alter the way customers interact with brands even further.

In healthcare, AI-driven treatment recommendations will enhance outcomes for patients and cut costs dramatically. Powerful analytics and machine learning will help detect disease earlier, enabling doctors to act more quickly.

Administrative tasks will be streamlined through AI-powered chatbots or even robots, making the experience more pleasant for patients. And if AI is integrated further into telemedicine, healthcare will be democratized, putting access to these services into the hands of people in remote and rural areas who previously had none.


Lowering The Barriers to Entry

We’ve already seen how AI has turned the financial services industry on its head, and in the future these advancements are set to become commonplace. Through AI-driven algorithms, investment strategies can be honed, fraud rooted out more effectively, and risk managed more rigorously.

For customers, chatbots and AI will improve banking experiences even further. There’s also an opportunity for AI to play a greater role in compliance, helping businesses adhere more closely to rules and regulations. And for the many unbanked, AI-based credit assessments will lower barriers to entry and improve access to capital for hopeful entrepreneurs.

In retail, the future will bring even more personalized experiences for consumers, taking shopping to a whole new level. Inventory management will be optimized, smart shelves will become increasingly popular, and self-service checkouts will be ubiquitous. Through AI and analytics, brands will be able to fine-tune their understanding of customers’ preferences and target them even more precisely.


To Err is Human?

It will also become even more important for companies to establish comprehensive AI policies prioritizing accountability, explainability, and transparency. As AI systems become increasingly integrated into various aspects of business operations, the potential for mistakes or biases also grows.

In addition, as regulations tighten and AI practices face greater scrutiny, organizations must proactively address potential liabilities arising from AI missteps. In a recent case, Air Canada was ordered to pay compensation after its chatbot gave a customer the wrong information, leading him to pay full price for a ticket instead of receiving a bereavement fare.

Canada’s largest airline did itself no favors when it insisted the bot was “responsible for its own actions”. This is likely to be the first of many cases where to err is not always human.

As for consumer trust in AI, whether it holds remains to be seen. The right foundations, including transparency, regulation, and ethical frameworks, will undoubtedly go a long way towards building it. But while customers are happy enough now, enthusiasm can wane rapidly, so keep an eye out for next year’s report; time will tell how all these elements unfold.

To unlock more consumer trust insights, download the 2024 Thales Consumer Digital Trust Index now.