Generative AI (GAI) is becoming increasingly crucial for business leaders due to its ability to fuel innovation, enhance personalization, automate content creation, augment creativity, and help teams explore new possibilities. This is confirmed by surveys, with 83% of business leaders saying they intend to increase their investments in the technology by 50% or more in the next six to 12 months.
Unfortunately, the increasing use of AI tools has also brought a slew of emerging threats that security and IT teams are ill-equipped to deal with. According to the 2024 Thales Data Threat Report, 68% of those surveyed reported that the most concerning security risks with AI are rapid changes that challenge existing plans. Many IT practitioners believe security threats are increasing in volume or severity and that the use of AI exacerbates these risks.
It has become a race between security teams and advanced attackers to see who will be the first to take advantage of AI’s incredible abilities successfully.
The Rising Threat Landscape
Adversaries are already harnessing the power of generative AI for several nefarious purposes.
Stealing the Model: AI models are the crown jewels of organizations leveraging machine learning algorithms. However, they are also prime targets for malicious actors seeking to gain an unfair advantage or disrupt operations. By infiltrating systems or exploiting vulnerabilities, adversaries can steal these models, leading to intellectual property theft and competitive disadvantage.
AI Hallucinations: These are instances where artificial intelligence systems generate outputs that are not grounded in reality or are inconsistent with the intended task. These hallucinations can occur for various reasons, such as errors in the AI's algorithms, biases in the training data, or limitations in the AI's understanding of the context or task.
Data Poisoning: The integrity of AI systems relies heavily on the quality and reliability of the data they are trained on. Data poisoning involves injecting malicious inputs into training datasets, corrupting the learning process, and compromising the model's performance. This tactic can manipulate outcomes, undermine decision-making processes, and even lead to catastrophic consequences in critical applications like healthcare or finance.
Prompt Injection: Prompt injection attacks target natural language processing (NLP) models by injecting specific prompts or queries designed to elicit unintended responses. These subtle manipulations can deceive AI systems into generating misleading outputs or executing unauthorized actions, posing significant risks in applications such as chatbots, virtual assistants, or automated customer service platforms.
Extracting Confidential Information: As AI systems process vast amounts of data, they often handle sensitive or proprietary information. Malicious actors exploit vulnerabilities within AI infrastructures to extract confidential data, including customer records, financial transactions, or trade secrets. Such breaches jeopardize privacy and expose enterprises to regulatory penalties, legal liabilities, and reputational damage.
Vulnerabilities in AI
There have already been instances where AI has caused vulnerabilities in popular apps and software.
Security researchers from Imperva described in a blog called XSS Marks the Spot: Digging Up Vulnerabilities in ChatGPT how they discovered multiple security vulnerabilities in OpenAI’s ChatGPT that, if exploited, would enable bad actors to hijack a user’s account.
The company’s researchers pinpointed two cross-site scripting (XSS) vulnerabilities, along with other security issues, in the ChatGPT backend. They outlined the process of refining their exploit from requiring a user to upload a malicious file and interact in a specific manner, to merely necessitating a single click on a citation within a ChatGPT conversation. This was accomplished by exploiting client-side path traversal and a broken function-level authorization bug in the ChatGPT backend.
In another Imperva blog, Hacking Microsoft and Wix with Keyboard Shortcuts, researchers focused on the anchor tag and its behavior with varying target attributes and protocols. They noted that its inconsistent behavior can confuse security teams, enabling bugs to go unnoticed and become potential targets for exploitation. The technique shown in this blog post was instrumental in exploiting the second XSS bug they found in ChatGPT.
Upholding AI Responsibility: Enterprise Strategies
In the face of evolving risks, one thing is clear: The proliferation of GAI will change investment plans in cybersecurity circles. Enterprises must prioritize responsible AI governance to mitigate threats and uphold ethical standards.
There are several key strategies for organizations to navigate this complex landscape:
- Secure Model Development and Deployment: Implement robust security measures throughout the AI lifecycle, from data collection and model training to deployment and monitoring. Employ encryption, access controls, and secure development practices to safeguard models against unauthorized access or tampering.
- Foster Collaboration and Knowledge Sharing: Foster a culture of collaboration and information sharing within the organization and across industry sectors. By collaborating with peers, academia, and cybersecurity experts, enterprises can stay informed about emerging threats, share best practices, and collectively address common challenges.
- Embrace Transparency and Accountability: Prioritize transparency and accountability in AI decision-making processes. Document model development methodologies, disclose data sources and usage policies, and establish mechanisms for auditing and accountability to ensure fairness, transparency, and compliance.
- Investments in Training: Security staff must be trained in AI systems and Generative AI. All this requires new investments for Cybersecurity and must be budgeted through a multi-year budgeting cycle.
- Developing Corporate Policies: Developing AI policies that govern the responsible use of GAI tools within the business is also critical to ensure ethical decision-making and mitigate the potential risks associated with their use. These policies can protect against biases, safeguard privacy, and ensure transparency, fostering trust both within the business and among all its stakeholders.
Changing regulations
Regulations are also changing to accommodate the threats posed by AI. Efforts such as the White House Blueprint for an AI Bill of Rights and the EU AI Act provide the guardrails to guide the responsible design, use, and deployment of automated systems.
One of the principles is about privacy by design to ensure that default consideration of privacy protections, including making sure that data collection conforms to reasonable expectations and that only data strictly needed for the specific context is collected.
Consent management is also considered critical. In sensitive domains, consumer data and related inferences should only be used for strictly necessary functions, and the consumer must be protected by ethical review and use prohibitions.
Several frameworks provide appropriate guidance for implementors and practitioners. However, we are still early and in the discovery phase, so it will take time for data privacy and security regulations to evolve to include GAI-related safety considerations.
For now, the best thing that organizations and security teams can do is keep learning, invest in GAI training (not only for the security professionals but all staff), and budget for incremental investments. This should help them stay a step ahead of adversaries.