It’s something we’ve all heard repeatedly, but it’s a point worth hammering home: AI will shape the future of humanity. This fact is not lost on policymakers, and they are reacting accordingly.
In October 2022, the US released its Blueprint for an AI Bill of Rights. While the Blueprint is still just that, a blueprint with no legal backing, the fact that the US chose to call this framework a “Bill of Rights” reflects how seriously the US Government takes AI.
Similarly, in May 2024, the Council of the EU gave its final approval to the Artificial Intelligence Act, the world's first comprehensive legal framework on AI. The Act was published in the EU Official Journal on 12 July 2024 and will come into force on 1 August 2024. It imposes requirements on companies designing and/or using AI in the European Union and provides a framework for assessing risk levels.
But let’s look at them both in a little more detail.
These regulations have been introduced partly in response to several controversial AI incidents over the past few years. In February 2024, for example, Air Canada was ordered to pay damages to a passenger after its virtual assistant gave him incorrect information about flight fares. British Columbia's Civil Resolution Tribunal found that the airline had not taken "reasonable care to ensure its chatbot was accurate." The US Blueprint aims to ensure that AI-based chatbots (and other AI systems) are reliable, and that businesses understand they are accountable for decisions made by these systems.
Similarly, the "safe and effective systems" section of the Blueprint aims to prevent, in part, ineffective systems such as DPD's ill-fated customer service chatbot. In early 2024, X user Ashley Beauchamp found that the chatbot could not answer even the simplest customer service queries, but could be persuaded to write a rudimentary poem about how terrible the company was.
The EU AI Act, for its part, highlights the importance of recognizing that AI is still in its relative infancy and should be treated with caution (in terms of both inputs and outputs) proportionate to the risk of each use case and deployment.
The US Blueprint for an AI Bill of Rights outlines five principles and practices for ensuring the safe and equitable use of automated systems:

- Safe and Effective Systems: people should be protected from unsafe or ineffective automated systems.
- Algorithmic Discrimination Protections: people should not face discrimination by algorithms, and systems should be designed and used equitably.
- Data Privacy: people should be protected from abusive data practices and have agency over how data about them is used.
- Notice and Explanation: people should know when an automated system is being used and understand how and why it affects them.
- Human Alternatives, Consideration, and Fallback: people should be able to opt out and reach a human who can consider and remedy problems.
The Artificial Intelligence Act (EU AI Act) takes a risk-based approach to regulating AI, defining four levels of risk for AI systems:

- Unacceptable risk: practices considered a clear threat to people's safety or rights, such as government social scoring, which are banned outright.
- High risk: systems used in sensitive areas such as critical infrastructure, employment, or law enforcement, which must meet strict obligations before they can be placed on the market.
- Limited risk: systems such as chatbots, which carry transparency obligations (users must know they are interacting with AI).
- Minimal risk: the vast majority of applications, such as spam filters or AI-enabled video games, which remain largely unregulated.
Emerging threats to AI platforms loom large for cybersecurity teams, leaving enterprise environments vulnerable to attack and data loss. The two most common threats to AI systems are:

- Model theft: attackers steal or replicate a proprietary model, either by exfiltrating the model itself or by analyzing its structure and outputs.
- Data poisoning: attackers corrupt the data used to train or test a model in order to manipulate its behavior.
To protect against model theft, organizations must control access to their machine learning models. The best way to achieve this is encryption, combined with careful safeguarding of the associated encryption keys. Encrypting a model and rigorously controlling access to its keys through strong authentication, roles, and policies ensures that only authorized users can access it, so attackers cannot analyze the model's structure and, ultimately, replicate it. A robust licensing system similarly prevents unauthorized users from accessing the model.
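As a minimal illustration, the sketch below encrypts a serialized model and releases the plaintext only to callers whose role passes a policy check. The role name and in-process policy check are hypothetical stand-ins; in practice, the key and the policy enforcement would live in an external key manager rather than in application code.

```python
# A minimal sketch of model encryption with role-gated key access.
# The AUTHORIZED_ROLES policy and caller_role parameter are illustrative;
# a real deployment would enforce authentication and policy in a KMS.
from cryptography.fernet import Fernet

AUTHORIZED_ROLES = {"ml-inference-service"}  # hypothetical policy

def encrypt_model(model_bytes: bytes) -> tuple[bytes, bytes]:
    """Encrypt a serialized model; the key belongs in a KMS, not on disk."""
    key = Fernet.generate_key()
    return Fernet(key).encrypt(model_bytes), key

def decrypt_model(ciphertext: bytes, key: bytes, caller_role: str) -> bytes:
    """Release the plaintext model only to callers that pass the policy."""
    if caller_role not in AUTHORIZED_ROLES:
        raise PermissionError(f"role {caller_role!r} is not authorized")
    return Fernet(key).decrypt(ciphertext)

# An attacker who exfiltrates only the encrypted artifact learns nothing
# about the model's structure or weights.
blob, key = encrypt_model(b"...serialized model weights...")
weights = decrypt_model(blob, key, caller_role="ml-inference-service")
```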
To mitigate data poisoning, organizations must monitor, evaluate, and debug AI software. This means carefully selecting, verifying, and cleaning data before using it to train or test AI models, and avoiding untrusted or uncontrolled data sources such as crowdsourcing or web scraping. Strong data governance is also essential for preventing data poisoning attacks.
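To give a flavor of what "verifying and cleaning" can mean in code, here is a simplified sketch that rejects training records with unexpected labels (a common symptom of label-flipping attacks) and drops gross feature outliers. The label schema and single numeric feature are assumptions for the example; real pipelines layer schema validation, provenance checks, and drift monitoring on top of checks like these.

```python
# A simplified pre-training validation pass: filter out records with
# unexpected labels and gross statistical outliers before training.
import statistics

ALLOWED_LABELS = {"spam", "ham"}  # hypothetical label schema

def clean_training_data(records: list[dict]) -> list[dict]:
    # Drop records whose label falls outside the expected schema,
    # a common symptom of label-flipping poisoning attacks.
    valid = [r for r in records if r["label"] in ALLOWED_LABELS]

    # Drop gross outliers on a numeric feature (simple z-score test)
    # that could skew the learned decision boundary.
    values = [r["feature"] for r in valid]
    mean, stdev = statistics.mean(values), statistics.stdev(values)
    return [r for r in valid
            if stdev == 0 or abs(r["feature"] - mean) / stdev < 3]
```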
Organizations should also consider building Confidential AI models, running AI workloads inside a trusted confidential computing environment. In this scheme, the security and integrity of the hardware execution environment, and of the data and applications running inside it, are independently attested by a third party to confirm that they have not been compromised.
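Conceptually, the relying party's side of that attestation step looks like the sketch below: verify that the attestation report is signed by a trusted attestation service, then check that the reported measurement matches a known-good value before releasing any secret (such as a model decryption key) into the environment. The report format, field names, and signature scheme here are illustrative assumptions; real attestation services define their own token formats and certificate chains.

```python
# A conceptual sketch of verifying an attestation report before
# trusting a confidential computing environment. The JSON report
# layout and "measurement" field are illustrative assumptions.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

EXPECTED_MEASUREMENT = "a3f1..."  # known-good hash of the workload image

def verify_attestation(report: bytes, signature: bytes,
                       attester_key: Ed25519PublicKey) -> bool:
    # 1. Authenticity: the report must be signed by the attestation
    #    service (whose key comes from its published certificate chain).
    try:
        attester_key.verify(signature, report)
    except InvalidSignature:
        return False
    # 2. Integrity: the workload running in the environment must match
    #    the measurement we expect; only then release secrets to it.
    claims = json.loads(report)
    return claims.get("measurement") == EXPECTED_MEASUREMENT
```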
Thales is enabling its CipherTrust Data Security Platform (CDSP) to support End-To-End Data Protection (E2EDP) on Intel TDX hardware underpinning the Confidential Computing (CC) services offered by Google Cloud and Microsoft Azure. In this architecture, cloud-independent attestation is provided by Intel Trust Authority and then verified by Thales.
Similarly, Thales and Imperva have joined forces to provide security for a world powered by the cloud, data, and software. Check out our Cloud Protection and Licensing solutions to protect your AI system today.