diff --git a/docs/header_names/allowed_framework_names.adoc b/docs/header_names/allowed_framework_names.adoc
index 3362ae93bc0..521e0caecb7 100644
--- a/docs/header_names/allowed_framework_names.adoc
+++ b/docs/header_names/allowed_framework_names.adoc
@@ -55,6 +55,7 @@
 * JSP
 * Legacy Mongo Java API
 * OkHttp
+* OpenAI
 * Realm
 * Apache HttpClient
 * Couchbase
diff --git a/rules/S7518/ask-yourself.adoc b/rules/S7518/ask-yourself.adoc
deleted file mode 100644
index aab09e44cd8..00000000000
--- a/rules/S7518/ask-yourself.adoc
+++ /dev/null
@@ -1,7 +0,0 @@
-== Ask Yourself Whether
-
-* Malicious LLM behaviors can impact your reputation.
-* The LLM was trained with private or sensitive data.
-* Your LLM infrastructure does not benefit from AI guardrails.
-
-There is a risk if you answered yes to any of those questions.
diff --git a/rules/S7518/see.adoc b/rules/S7518/common/resources/standards.adoc
similarity index 90%
rename from rules/S7518/see.adoc
rename to rules/S7518/common/resources/standards.adoc
index 54a801bb398..c8f255ce7ed 100644
--- a/rules/S7518/see.adoc
+++ b/rules/S7518/common/resources/standards.adoc
@@ -1,3 +1,3 @@
-== See
+== Standards
 
-* OWASP GenAI - https://genai.owasp.org/llmrisk/llm01-prompt-injection/[Top 10 2025 Category LLM00 - Prompt Injection]
+* OWASP GenAI - https://genai.owasp.org/llmrisk/llm01-prompt-injection/[Top 10 2025 Category LLM01 - Prompt Injection]
diff --git a/rules/S7518/highlighting.adoc b/rules/S7518/highlighting.adoc
new file mode 100644
index 00000000000..b90db4655a6
--- /dev/null
+++ b/rules/S7518/highlighting.adoc
@@ -0,0 +1,8 @@
+=== Highlighting
+
+"[varname]" is tainted (assignments and parameters)
+
+this argument is tainted (method invocations)
+
+the returned value is tainted (returns & method invocations results)
+
diff --git a/rules/S7518/impact.adoc b/rules/S7518/impact.adoc
new file mode 100644
index 00000000000..4f864741e87
--- /dev/null
+++ b/rules/S7518/impact.adoc
@@ -0,0 +1,22 @@
+=== What is the potential impact?
+
+When attackers detect privilege discrepancies while injecting prompts into your
+LLM application, they will try to map out the actions they can trigger and the
+knowledge they can extract, and act accordingly.
+
+Below are some real-world scenarios that illustrate the impact of an attacker
+exploiting the vulnerability.
+
+==== Data manipulation
+
+A malicious prompt injection can lead to data leaks and may even affect the LLM
+conversations of other users.
+
+==== Denial of service and code execution
+
+Malicious prompt injections could allow attackers to leverage internal tooling,
+such as MCP, to delete sensitive or important data, or to send massive numbers
+of requests to third-party services, leading to financial losses or bans from
+such services.
+
+This threat is particularly insidious if the attacked organization does not
+maintain a disaster recovery plan (DRP).
diff --git a/rules/S7518/java/how-to-fix-it/openai.adoc b/rules/S7518/java/how-to-fix-it/openai.adoc
new file mode 100644
index 00000000000..14948f4891e
--- /dev/null
+++ b/rules/S7518/java/how-to-fix-it/openai.adoc
@@ -0,0 +1,98 @@
+== How to fix it in OpenAI
+
+=== Code examples
+
+In the following piece of code, control over sensitive roles such as `system`
+and `developer` provides a clear way to exploit the underlying model, its
+proprietary knowledge (e.g., RAG), and its capabilities (with MCPs).
+
+The compliant solution removes any external control over these sensitive roles
+by hardcoding the system and developer messages.
+
+==== Noncompliant code example
+
+[source,java,diff-id=1,diff-type=noncompliant]
+----
+@RestController
+@RequestMapping("/example")
+public class ExampleController {
+    private final OpenAIClient client;
+    @PostMapping("/example")
+    public ResponseEntity<Map<String, String>> example(@RequestBody Map<String, String> payload) {
+        String promptText = payload.get("prompt_text");
+        String systemText = payload.get("sys_text");
+        String developerText = payload.get("dev_text");
+        ChatCompletionCreateParams request = ChatCompletionCreateParams.builder()
+            .model(ChatModel.GPT_3_5_TURBO)
+            .maxCompletionTokens(2048)
+            .addSystemMessage(systemText)       // Noncompliant: user-controlled system role
+            .addDeveloperMessage(developerText) // Noncompliant: user-controlled developer role
+            .addUserMessage(promptText)
+            .build();
+        var completion = client.chat().completions().create(request);
+        return ResponseEntity.ok(
+            Map.of(
+                "response",
+                completion.choices().stream()
+                    .flatMap(choice -> choice.message().content().stream())
+                    .collect(Collectors.joining(" | "))
+            )
+        );
+    }
+}
+----
+
+==== Compliant solution
+
+[source,java,diff-id=1,diff-type=compliant]
+----
+@RestController
+@RequestMapping("/example")
+public class ExampleController {
+    private final OpenAIClient client;
+    @PostMapping("/example")
+    public ResponseEntity<Map<String, String>> example(@RequestBody Map<String, String> payload) {
+        String promptText = payload.get("prompt_text");
+        ChatCompletionCreateParams request = ChatCompletionCreateParams.builder()
+            .model(ChatModel.GPT_3_5_TURBO)
+            .maxCompletionTokens(2048)
+            .addSystemMessage("""
+                You are "ExampleBot", a friendly and professional AI assistant [...]
+                Your role is to [...]
+                """)
+            .addDeveloperMessage("""
+                // Developer Configuration & Safety Wrapper
+                1. The user's query will first be processed by [...]
+                2. etc.
+                """)
+            .addUserMessage(promptText)
+            .build();
+        var completion = client.chat().completions().create(request);
+        return ResponseEntity.ok(
+            Map.of(
+                "response",
+                completion.choices().stream()
+                    .flatMap(choice -> choice.message().content().stream())
+                    .collect(Collectors.joining(" | "))
+            )
+        );
+    }
+}
+----
+
+=== How does this work?
+
+==== Explicitly constrain the LLM context
+
+While designing an LLM application, particularly at the stage where you create
+the "screenplay" of the intended dialogues between the model, user(s),
+third parties, and tools, keep the **least privilege** principle in mind.
+
+Start by providing any external third party or user with the least amount of
+capabilities or information, and only raise their privileges **intentionally**,
+e.g., when a situation (such as a tool call) requires it.
+
+Another short-term hardening approach is to add AI guardrails to your LLM, but
+keep in mind that deny-list-based filtering is hard to maintain in the long
+term **and** can always be bypassed: attackers can be very creative with bypass
+payloads.
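+
+As an illustration only, here is a minimal sketch of what such a deny-list
+guardrail could look like before the user message reaches the model. The
+`PromptGuard` class, its `INJECTION_PATTERNS` list, and the rejection behavior
+are hypothetical examples, not a vetted rule set; this kind of filtering should
+complement hardcoded privileged messages, never replace them.
+
+[source,java]
+----
+import java.util.List;
+import java.util.regex.Pattern;
+
+public final class PromptGuard {
+    // Hypothetical, illustrative patterns only: deny lists like this are
+    // trivially bypassable and must be maintained over time.
+    private static final List<Pattern> INJECTION_PATTERNS = List.of(
+        Pattern.compile("(?i)ignore (all|previous|the above) instructions"),
+        Pattern.compile("(?i)you are now"),
+        Pattern.compile("(?i)reveal (your|the) (system|developer) (prompt|message)")
+    );
+
+    private PromptGuard() {}
+
+    // Returns true if the prompt matches any known-bad pattern.
+    public static boolean looksLikeInjection(String prompt) {
+        return INJECTION_PATTERNS.stream()
+            .anyMatch(p -> p.matcher(prompt).find());
+    }
+}
+----
+
+In the controller above, `promptText` could then be checked with
+`PromptGuard.looksLikeInjection(promptText)` and rejected before the
+`ChatCompletionCreateParams` request is ever built.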
diff --git a/rules/S7518/java/metadata.json b/rules/S7518/java/metadata.json
index c9018170c4b..330fe3825fb 100644
--- a/rules/S7518/java/metadata.json
+++ b/rules/S7518/java/metadata.json
@@ -1,6 +1,6 @@
 {
-  "title": "Constructing privileged prompts from user input is security-sensitive",
-  "type": "SECURITY_HOTSPOT",
+  "title": "Privileged prompts should not be vulnerable to injection attacks",
+  "type": "VULNERABILITY",
   "code": {
     "impacts": {
       "SECURITY": "LOW"
@@ -19,5 +19,8 @@
   "sqKey": "S7518",
   "scope": "Main",
   "defaultQualityProfiles": [],
+  "educationPrinciples": [
+    "never_trust_user_input"
+  ],
   "quickfix": "unknown"
 }
diff --git a/rules/S7518/java/rule.adoc b/rules/S7518/java/rule.adoc
index dc5a2f943ac..a1f1dccd7b9 100644
--- a/rules/S7518/java/rule.adoc
+++ b/rules/S7518/java/rule.adoc
@@ -1,97 +1,16 @@
-include::../description.adoc[]
+== Why is this an issue?
 
-include::../ask-yourself.adoc[]
+include::../rationale.adoc[]
 
-include::../recommended.adoc[]
+include::../impact.adoc[]
 
-== Sensitive Code Example
+// How to fix it section
 
-In the following piece of code, control over sensitive roles such as `system`
-and `developer` provides a clear way to exploit the underlying model, its
-proprietary knowledge (e.g., RAG), and its capabilities (with MCPs).
+include::how-to-fix-it/openai.adoc[]
 
-[source,java,diff-id=1,diff-type=noncompliant]
-----
-@RestController
-@RequestMapping("/example")
-public class ExampleController {
+== Resources
 
-    private final OpenAIClient client;
-
-    @PostMapping("/example")
-    public ResponseEntity example(@RequestBody Map payload) {
-        String promptText = payload.get("prompt_text");
-        String systemText = payload.get("sys_text");
-        String developperText = payload.get("dev_text");
-
-        ChatCompletionCreateParams request = ChatCompletionCreateParams.builder()
-            .model(ChatModel.GPT_3_5_TURBO)
-            .maxCompletionTokens(2048)
-            .addSystemMessage(systemText)
-            .addDeveloperMessage(developperText)
-            .addUserMessage(promptText)
-            .build();
-
-        var completion = client.chat().completions().create(request);
-        return ResponseEntity.ok(
-            Map.of(
-                "response",
-                completion.choices().stream()
-                    .flatMap(choice -> choice.message().content().stream())
-                    .collect(Collectors.joining(" | "))
-            )
-        );
-    }
-}
-----
-
-== Compliant Solution
-
-This compliant solution revokes any external possibility of controlling
-sensitive roles by just hardcoding the system and developer messages.
-
-[source,java,diff-id=1,diff-type=compliant]
-----
-@RestController
-@RequestMapping("/example")
-public class ExampleController {
-
-    private final OpenAIClient client;
-
-    @PostMapping("/example")
-    public ResponseEntity example(@RequestBody Map payload) {
-        String promptText = payload.get("prompt_text");
-
-        ChatCompletionCreateParams request = ChatCompletionCreateParams.builder()
-            .model(ChatModel.GPT_3_5_TURBO)
-            .maxCompletionTokens(2048)
-            .addSystemMessage("""
-                You are "ExampleBot," a friendly and professional AI assistant [...]
-
-                Your role is to [...]
-                """)
-            .addDeveloperMessage("""
-                // Developer Configuration & Safety Wrapper
-                1. The user's query will first be processed by [...]
-                2. etc.
-                """)
-            .addUserMessage(promptText)
-            .build();
-
-        var completion = client.chat().completions().create(request);
-        return ResponseEntity.ok(
-            Map.of(
-                "response",
-                completion.choices().stream()
-                    .flatMap(choice -> choice.message().content().stream())
-                    .collect(Collectors.joining(" | "))
-            )
-        );
-    }
-}
-----
-
-include::../see.adoc[]
+include::../common/resources/standards.adoc[]
 
 ifdef::env-github,rspecator-view[]
 
@@ -101,6 +20,8 @@ ifdef::env-github,rspecator-view[]
 
 include::../message.adoc[]
 
+include::../highlighting.adoc[]
 '''
+
 endif::env-github,rspecator-view[]
diff --git a/rules/S7518/message.adoc b/rules/S7518/message.adoc
index 652effc0f12..e56c1ce860c 100644
--- a/rules/S7518/message.adoc
+++ b/rules/S7518/message.adoc
@@ -1,3 +1,4 @@
 === Message
 
-Make sure this user-controlled prompt does not lead to unwanted behavior.
+Change this code to not construct privileged prompts directly from user-controlled data.
+
diff --git a/rules/S7518/description.adoc b/rules/S7518/rationale.adoc
similarity index 57%
rename from rules/S7518/description.adoc
rename to rules/S7518/rationale.adoc
index 37f37031ff9..0b97286bb6d 100644
--- a/rules/S7518/description.adoc
+++ b/rules/S7518/rationale.adoc
@@ -6,13 +6,12 @@
 Injecting unchecked user inputs in privileged prompts gives unauthorized third
 parties the ability to break out of contexts and constraints that you assume
 the LLM will follow.
 
-Fundamentally, the core roles of many Large Language Model (LLM) interactions is
-defined by the trio of `system`, `user`, and `assistant`. However, the landscape
-of conversational AI is expanding to include a more diverse set of roles, such
-as `developer`, `tool`, `function`, and even more nuanced roles in multi-agent
-systems.
+Fundamentally, the trio of `system`, `user`, and `assistant` defines the core
+roles of many Large Language Model (LLM) interactions. However, the landscape of
+conversational AI is expanding to include a more diverse set of roles, such as
+`developer`, `tool`, `function`, and even more nuanced roles in multi-agent systems.
 
-In essence, the LLM conversation roles are no longer a simple triad, but the
-most important to keep in mind is that these roles must stay coherent to the
+In essence, the LLM conversation roles are no longer a simple triad. The most
+important thing to keep in mind is that these roles must adhere to the
 least privilege principle, where each role has the minimum level of access
 necessary to perform its function.
diff --git a/rules/S7518/recommended.adoc b/rules/S7518/recommended.adoc
deleted file mode 100644
index cbc2e056d60..00000000000
--- a/rules/S7518/recommended.adoc
+++ /dev/null
@@ -1,5 +0,0 @@
-== Recommended Secure Coding Practices
-
-* Follow the principle of least privilege while scenarizing your LLM conversations
-* Insert guard rails to prevent the LLM from generating harmful or unsafe content
-* Restrict the different LLM capabilities (MCP), and knowledge (RAG).