Securing Large Language Models: Unique Challenges and Rethinking Traditional Security Approaches

Large Language Models (LLMs) are computational systems that process and generate text by learning from vast datasets. These advanced models, which can understand and generate human-like text, have become integral in driving business innovations, enhancing operational efficiencies and fostering deeper stakeholder engagement. Despite their transformative impact, LLMs also introduce new cybersecurity challenges.

In the context of LLMs, ‘securing’ encompasses a multifaced approach aimed at safeguarding several key assets:

  • Data integrity: Ensuring that the information processed and generated by LLMs remains accurate, reliable and uncorrupted by malicious interventions.
  • Operational continuity: Protecting the operational use of LLMs so that businesses can rely on their continuous and safe functioning for decision-making and automated processes.
  • Confidentiality: Keeping sensitive data private, including proprietary business information, users’ personal data and trade secrets embedded within the training data.
  • System security: Defending the infrastructure that supports LLMs against unauthorized access, exploitation and damage that could compromise the model’s performance or availability.

It is imperative to develop a deep understanding of the unique risks associated with LLMs and establish strategies to mitigate them. This includes implementing security protocols and controls specifically designed to protect against LLM-related threats.

Understanding prompt injection

Prompt injection happens when hackers use deceptive instructions to trick advanced computer systems like LLMs. Think of it as feeding a smart robot confusing commands: it might end up spilling confidential information, agreeing with misleading statements or inadvertently taking harmful actions. For instance, someone could trick an LLM into revealing private database passwords, endorsing false information about a product or generating harmful code.

Prevention measures:

  • Manually inspect and clean the instructions given to the LLMs.
  • Limit the LLMs’ ability to interact with unpredictable online content. Monitor any additional tools or plugins closely to prevent unintended actions.
  • Approach LLMs with the same caution given to a new, untested user: define clear operational guidelines and establish boundaries to ensure safe and controlled usage.
  • Test LLM models with known prompt injection techniques.

Training data poisoning

LLMs assimilate vast amounts of data to respond to queries. However, if this data is corrupted – akin to learning from falsified historical records or doctored scientific journals – the LLM could unintentionally propagate these inaccuracies. Training data poisoning occurs when bad actors introduce such deceptive information, leading to the unintentional dissemination of false facts by LLMs.

Countering data poisoning:

  • Source training data exclusively from reliable, verified channels.
  • Employ sandboxing techniques to meticulously curate the data to which LLMs are exposed.
  • Continuously monitor the LLMs’ output; anomalous responses can be cross-referenced with other trusted models to detect inaccuracies.
  • In the event of data contamination, retrain the model with clean, authenticated information.
  • Utilize synthetic data free from noise and manipulation. This ensures a clean, controlled environment for training AI models.

Permission issues with plugins

Plugins, akin to miniature applications, augment LLM capabilities but can pose risks if granted excessive permissions. Imagine a house guest misusing a master key to access restricted areas. Similarly, a plugin might overreach its intended function, potentially enabling unauthorized status upgrades or modifications without user consent. The risk escalates when multiple plugins interact, potentially amplifying the impact of these overreaches.

Recommendations:

  • Assign only the essential access rights required for plugin functionality.
  • For critical plugins, implement mandatory user consent protocols before any modifications are executed.
  • Minimize the interaction of plugins to avoid compounded risks.
  • If plugin interaction is necessary, ensure thorough vetting of each plugin’s output before it is utilized by another, preventing any escalation of privileges.

Data leakage in AI models

In rare instances, after processing extensive data repositories, LLMs might inadvertently disclose sensitive information. This scenario can be likened to unintentionally revealing confidential information in a conversation. The challenge stems from the LLMs’ ability to retain and sometimes share confidential data snippets.

How to plug the leak:

  • Routinely cleanse the data ingested by LLMs, removing all private and sensitive information.
  • Implement robust security measures, such as encryption, access controls and authentication protocols, to prevent unauthorized access.
  • Establish systems for reviewing the LLMs’ outputs, ensuring they do not divulge confidential information.
  • Train LLM models with synthetic data to help alleviate the data leakage and poisoning issues.
  • Generate synthetic data for training of AI models that mimic real-world data but do not contain sensitive information.

Excessive agency in LLMs

Imagine giving a new, inexperienced employee the power to make company decisions without any supervision; mistakes are bound to happen. Similarly, some LLMs are given too much autonomy, leading them to make decisions that might not always be in a company’s best interest. For instance, an LLM managing a store’s online inventory might, due to a misunderstood prompt, end up ordering a year’s supply of an item that usually sells only once a month. Or, in a more serious scenario, an LLM controlling aspects of a city’s traffic lights could inadvertently create traffic jams due to erratic changes, not grasping the ripple effects of its decisions.

Suggestions for handling:

  • Only grant LLMs the essential level of autonomy necessary for their functions.
  • Incorporate human oversight in critical decision-making processes.
  • Monitor LLM activities and establish alert systems for any irregular behaviors that require immediate attention.

Red teaming

Red teaming has emerged as a critical component for ensuring the resilience and security of AI infrastructure. By adopting a proactive stance through AI red teaming, businesses can uncover potential vulnerabilities, enhance their security postures, and foster a culture of continuous improvement. When red teaming an AI infrastructure:

Prioritize the following key elements:

  • Establish detailed threat models that account for both external and internal threat vectors.
  • Develop realistic attack scenarios that could potentially compromise AI systems.
  • Leverage expertise from various domains, including cybersecurity, data science and domain-specific knowledge, to provide comprehensive assessments (e.g., leverage the components above in conjunction with traditional red team techniques)

As adoption of LLMs accelerates, it is critical that developers, CIOs and security teams recognize and mitigate the novel risks introduced by these systems. Taking a proactive, preventative approach to security, with close collaboration between CIOs and CSOs, will enable continued innovation and application of LLMs in a safer, more secure manner. The vulnerabilities and controls highlighted here provide a starting point to build secure-by-design LLM implementations.

Tom Stewart, Senior Director – Security and Privacy, contributed to this blog.

To learn more about our cybersecurity solutions, contact us.

Varun Ravi

Consultant
Security and Privacy

Subscribe to Topics

Can you name the key pillars of enterprise resilience? Read this introduction to these six pillars that—when implemented—enable organizations to better prepare for the risk environment. https://ow.ly/LpbE50TxygX #ProtivitiTech #Resiliency

Protiviti enabled a global automotive technology manufacturer client to prioritize cybersecurity investments effectively after successfully implementing a Factor Analysis of Information Risk (#FAIR) quantification program. https://ow.ly/req350Txvbx #ProtivitiTech

Protiviti is a proud sponsor of #FAIRCON! Join us October 1-2 as we partake in this year’s theme "Managing Risk at the Speed of the Business.” Visit our FAIRCON page to learn more and get our code for $200 off your conference registration. https://ow.ly/qZHE50Tqan5 #ProtivitiTech

Protiviti’s tailored #Microsoft solutions address unique organizational needs. Learn more about the different use cases for integrating Microsoft Dynamics 365 and CoPilot— from improving sales to enhancing customer service to delivering deep insights. https://ow.ly/8Hhn50Twj2C

Discover how capturing key metadata via a data catalog tool leads companies to make better operational decisions. Read the latest Technology Insights blog: https://ow.ly/O1aX50Twi4K #Protiviti #TechnologyInsights #Data

Load More