The voyage of LLM and discuss about Excessive Agency Vulnerability
What is LLM?
A large language model (LLM) is a type of artificial intelligence (AI) algorithm that uses deep learning techniques and massively large data sets to understand, summarize, generate and predict new content. The term generative AI also is closely connected with LLMs, which are, in fact, a type of generative AI that has been specifically architected to help generate text-based content.
Why are LLMs becoming important to businesses?
As AI continues to grow, its place in the business setting becomes increasingly dominant. This is shown through the use of LLMs as well as machine learning tools. In the process of composing and applying machine learning models, research advises that simplicity and consistency should be among the main goals. Identifying the issues that must be solved is also essential, as is comprehending historical data and ensuring accuracy.
The benefits associated with machine learning are often grouped into four categories: efficiency, effectiveness, experience and business evolution. As these continue to emerge, businesses invest in this technology.
What are large language models used for?
LLMs have become increasingly popular because they have broad applicability for a range of NLP tasks, including the following:
a. Text generation: The ability to generate text on any topic that the LLM has been trained on is a primary use case.
b. Translation: For LLMs trained on multiple languages, the ability to translate from one language to another is a common feature.
c. Content summary: Summarizing blocks or multiple pages of text is a useful function of LLMs.
d. Rewriting content: Rewriting a section of text is another capability.
e. Classification and categorization. An LLM is able to classify and categorize content.
f. Sentiment analysis: Most LLMs can be used for sentiment analysis to help users to better understand the intent of a piece of content or a particular response.
g. Conversational AI and chatbots: LLMs can enable a conversation with a user in a way that is typically more natural than older generations of AI technologies.
Among the most common uses for conversational AI is through a chatbot, which can exist in any number of different forms where a user interacts in a query-and-response model. The most widely used LLM-based AI chatbot is ChatGPT, which is developed by OpenAI. ChatGPT currently is based on the GPT-3.5 model, although paying subscribers can use the newer GPT-4 LLM.
LLM Application Data Flow:
Details of LLM Vulnerabilities
The OWASP Top 10 for Large Language Model Applications project aims to educate developers, designers, architects, managers, and organizations about the potential security risks when deploying and managing Large Language Models (LLMs).
LLM01: Prompt Injection
LLM02: Insecure Output Handling
LLM03: Training Data Poisoning
LLM04: Model Denial of Service
LLM05: Supply Chain Vulnerabilities
LLM06: Sensitive Information Disclosure
LLM07: Insecure Plugin Design
LLM08: Excessive Agency
LLM09: Overreliance
LLM10: Model Theft
Governance and Compliance:
Today, we will discuss on LLM08: Excessive Agency
What is LLM08: Excessive Agency?
An LLM-based system is often granted a degree of agency by its developer — the ability to interface with other systems and undertake actions in response to a prompt. The decision over which functions to invoke may also be delegated to an LLM ‘agent’ to dynamically
determine based on input prompt or LLM output.
Excessive Agency is the vulnerability that enables damaging actions to be performed in response to unexpected/ambiguous outputs from an LLM.
The root cause of Excessive Agency is typically one or more of: excessive functionality, excessive permissions or excessive autonomy. This differs from Insecure Output Handling which is concerned with insufficient scrutiny of LLM outputs.
Excessive Agency can lead to a broad range of impacts across the confidentiality, integrity and availability spectrum, and is dependent on which systems an LLM-based app is able to interact with.
Common Examples of Vulnerability:
1. An LLM agent has access to plugins which include functions that are not needed for the intended operation of the system. For example, a developer needs to grant an LLM agent the ability to read documents from a repository, but the 3rd-party plugin they choose to use also includes the ability to modify and delete documents.
2. The plugin may have been trialed during a development phase and dropped in favor of a better alternative, but the original plugin remains available to the LLM agent.
3. An LLM plugin with open-ended functionality fails to properly filter the input instructions for commands outside what’s necessary for the intended operation of the application. E.g., a plugin to run one specific shell command fails to properly prevent other shell commands from being executed.
4. An LLM plugin has permissions on other systems that are not needed for the intended operation of the application. E.g., a plugin intended to read data connects to a database server using an identity that not only has SELECT permissions, but also UPDATE, INSERT and DELETE permissions.
5. An LLM plugin that is designed to perform operations on behalf of a user accesses downstream systems with a generic high-privileged identity. E.g., a plugin to read the current user’s document store connects to the document repository with a privileged account that has access to all users’ files.
Example Attack Scenario:
An LLM-based personal assistant app like live chat granted access to an individual’s support to reset the password,product info and debug sql. To achieve this functionality, the email plugin requires the ability to read customer input, however the plugin that the system
developer has chosen to use also contains functions for reset password,debug sql and product info. The LLM is vulnerable to an indirect prompt injection attack, whereby a maliciously-crafted input and fetch the information from the debug sql and get the customer sensitive information
Prevention/Mitigation:
The following actions can prevent Excessive Agency
1. Limit the plugins/tools that LLM agents are allowed to call to only the minimum functions necessary. For example, if an LLM-based system does not require the ability to fetch the contents of a URL then such a plugin should not be offered to the LLM agent.
2. Limit the functions that are implemented in LLM plugins/tools to the minimum necessary. For example, a plugin that accesses a user’s mailbox to summarise emails may only require the ability to read emails, so the plugin should not contain other functionality such as deleting or sending messages,
3. Avoid open-ended functions where possible (e.g., run a shell command, fetch a URL, etc.) and use plugins/tools with more granular functionality. For example, an LLM-based app may need to write some output to a file. If this were implemented using a plugin to run a shell function then the scope for undesirable actions is very large (any other shell command could be executed). A more secure alternative would be to build a file-writing plugin that could only support that specific functionality,
4. Limit the permissions that LLM plugins/tools are granted to other systems to the minimum necessary in order to limit the scope of undesirable actions. For example, an LLM agent that uses a product database in order to make purchase recommendations to a customer might only need read access to a ‘products’ table; it should not have access to other tables, nor the ability to insert, update or delete records. This should be enforced by applying appropriate database permissions for the identity that the LLM plugin uses to connect to the database,
5. Track user authorization and security scope to ensure actions taken on behalf of a user are executed on downstream systems in the context of that specific user, and with the minimum privileges necessary. For example, an LLM plugin that reads a user’s code repo should require the user to authenticate via OAuth and with the minimum scope required,
6. Utilise human-in-the-loop control to require a human to approve all actions before they are taken. This may be implemented in a downstream system (outside the scope of the LLM application) or within the LLM plugin/tool itself. For example, an LLM-based app that creates and posts social media content on behalf of a user should include a user approval routine within the plugin/tool/API that implements the ‘post’ operation.