From 5e2e958aeff0cf9a2288b66a49c671a815733002 Mon Sep 17 00:00:00 2001 From: Loris Sierra Date: Thu, 12 Jun 2025 11:49:36 +0200 Subject: [PATCH 1/4] APPSEC-2481 Change S7518 from hotspot to vuln --- .../header_names/allowed_framework_names.adoc | 1 + rules/S7518/ask-yourself.adoc | 7 -- .../resources/standards.adoc} | 2 +- rules/S7518/highlighting.adoc | 8 ++ rules/S7518/impact.adoc | 22 ++++ rules/S7518/java/how-to-fix-it/openai.adoc | 98 +++++++++++++++++ rules/S7518/java/metadata.json | 5 +- rules/S7518/java/rule.adoc | 101 +++--------------- rules/S7518/message.adoc | 3 +- .../{description.adoc => rationale.adoc} | 0 rules/S7518/recommended.adoc | 5 - 11 files changed, 149 insertions(+), 103 deletions(-) delete mode 100644 rules/S7518/ask-yourself.adoc rename rules/S7518/{see.adoc => common/resources/standards.adoc} (89%) create mode 100644 rules/S7518/highlighting.adoc create mode 100644 rules/S7518/impact.adoc create mode 100644 rules/S7518/java/how-to-fix-it/openai.adoc rename rules/S7518/{description.adoc => rationale.adoc} (100%) delete mode 100644 rules/S7518/recommended.adoc diff --git a/docs/header_names/allowed_framework_names.adoc b/docs/header_names/allowed_framework_names.adoc index 3362ae93bc0..521e0caecb7 100644 --- a/docs/header_names/allowed_framework_names.adoc +++ b/docs/header_names/allowed_framework_names.adoc @@ -55,6 +55,7 @@ * JSP * Legacy Mongo Java API * OkHttp +* OpenAI * Realm * Apache HttpClient * Couchbase diff --git a/rules/S7518/ask-yourself.adoc b/rules/S7518/ask-yourself.adoc deleted file mode 100644 index aab09e44cd8..00000000000 --- a/rules/S7518/ask-yourself.adoc +++ /dev/null @@ -1,7 +0,0 @@ -== Ask Yourself Whether - -* Malicious LLM behaviors can impact your reputation. -* The LLM was trained with private or sensitive data. -* Your LLM infrastructure does not benefit from AI guardrails. - -There is a risk if you answered yes to any of those questions. 
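The deleted "Ask Yourself" checklist above ends on whether the LLM infrastructure benefits from AI guardrails. As a hedged illustration alongside this patch (hypothetical code, not part of the rule), a minimal deny-list guardrail in Java shows why such filters are fragile — a trivial rephrasing slips past the list, which is why the rewritten rule prefers keeping user input out of privileged roles entirely:

```java
import java.util.List;
import java.util.Locale;

// Hypothetical sketch: a naive deny-list guardrail for incoming prompts.
// Deny-lists like this are easy to bypass with rewording, which is one
// reason the rule treats privileged-prompt injection as a vulnerability
// rather than relying on filtering.
public class NaiveGuardrail {
    private static final List<String> DENY_LIST =
        List.of("ignore previous instructions", "reveal your system prompt");

    public static boolean isAllowed(String prompt) {
        String normalized = prompt.toLowerCase(Locale.ROOT);
        return DENY_LIST.stream().noneMatch(normalized::contains);
    }
}
```

A direct injection string is blocked, but a light rewording such as "disregard earlier instructions" passes the filter unchanged.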
diff --git a/rules/S7518/see.adoc b/rules/S7518/common/resources/standards.adoc similarity index 89% rename from rules/S7518/see.adoc rename to rules/S7518/common/resources/standards.adoc index 54a801bb398..348fc24c748 100644 --- a/rules/S7518/see.adoc +++ b/rules/S7518/common/resources/standards.adoc @@ -1,3 +1,3 @@ -== See +== Sstandards * OWASP GenAI - https://genai.owasp.org/llmrisk/llm01-prompt-injection/[Top 10 2025 Category LLM00 - Prompt Injection] diff --git a/rules/S7518/highlighting.adoc b/rules/S7518/highlighting.adoc new file mode 100644 index 00000000000..b90db4655a6 --- /dev/null +++ b/rules/S7518/highlighting.adoc @@ -0,0 +1,8 @@ +=== Highlighting + +"[varname]" is tainted (assignments and parameters) + +this argument is tainted (method invocations) + +the returned value is tainted (returns & method invocations results) + diff --git a/rules/S7518/impact.adoc b/rules/S7518/impact.adoc new file mode 100644 index 00000000000..00d0bf59d47 --- /dev/null +++ b/rules/S7518/impact.adoc @@ -0,0 +1,22 @@ +=== What is the potential impact? + +When attackers detect discrepancies while injecting into your LLM application, +they will try to map out their capabilities in terms of actions and knowledge +extraction, and act accordingly. + +Below are some real-world scenarios that illustrate some impacts of an attacker +exploiting the vulnerability. + +==== Data manipulation + +A malicious prompt injection enables data leakages or possibly impacting the +LLM discussions of other users. + +==== Denial of service + +Malicious prompt injections could allow the attacker to possibly leverage +internal tooling such as MCP, to delete sensitive or important data, or to send +tremendous amounts of requests to third-party services, leading to financial +losses or getting banned from such services. + +This threat is particularly insidious if the attacked organization does not +maintain a disaster recovery plan (DRP). 
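The new `impact.adoc` above describes a denial-of-service scenario where an injected prompt drives internal tooling (e.g. MCP) into sending tremendous amounts of requests to third-party services. As a hypothetical mitigation sketch (not part of the patch), a simple per-conversation budget can cap how many tool invocations one LLM session may trigger:

```java
// Hypothetical sketch: a per-session budget that caps tool invocations
// triggered by one LLM conversation, limiting the blast radius of an
// injected prompt that tries to spam third-party services.
public class ToolCallBudget {
    private final int maxCalls;
    private int used = 0;

    public ToolCallBudget(int maxCalls) {
        this.maxCalls = maxCalls;
    }

    // Returns true if the call may proceed, false once the budget is spent.
    public boolean tryAcquire() {
        if (used >= maxCalls) {
            return false;
        }
        used++;
        return true;
    }
}
```

Each conversation would get its own instance; once the budget is exhausted, further tool calls are refused instead of forwarded.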
diff --git a/rules/S7518/java/how-to-fix-it/openai.adoc b/rules/S7518/java/how-to-fix-it/openai.adoc new file mode 100644 index 00000000000..59630e9f4eb --- /dev/null +++ b/rules/S7518/java/how-to-fix-it/openai.adoc @@ -0,0 +1,98 @@ +== How to fix it in OpenAI + +=== Code examples + +In the following piece of code, control over sensitive roles such as `system` +and `developer` provides a clear way to exploit the underlying model, its +proprietary knowledge (e.g., RAG), and its capabilities (with MCPs). + +The compliant solution revokes any external possibility of controlling +sensitive roles by just hardcoding the system and developer messages. + +==== Noncompliant code example + +[source,java,diff-id=1,diff-type=noncompliant] +---- +@RestController +@RequestMapping("/example") +public class ExampleController { + private final OpenAIClient client; + @PostMapping("/example") + public ResponseEntity example(@RequestBody Map payload) { + String promptText = payload.get("prompt_text"); + String systemText = payload.get("sys_text"); + String developperText = payload.get("dev_text"); + ChatCompletionCreateParams request = ChatCompletionCreateParams.builder() + .model(ChatModel.GPT_3_5_TURBO) + .maxCompletionTokens(2048) + .addSystemMessage(systemText) + .addDeveloperMessage(developperText) + .addUserMessage(promptText) + .build(); + var completion = client.chat().completions().create(request); + return ResponseEntity.ok( + Map.of( + "response", + completion.choices().stream() + .flatMap(choice -> choice.message().content().stream()) + .collect(Collectors.joining(" | ")) + ) + ); + } +} +---- + +== Compliant Solution + +[source,java,diff-id=1,diff-type=compliant] +---- +@RestController +@RequestMapping("/example") +public class ExampleController { + private final OpenAIClient client; + @PostMapping("/example") + public ResponseEntity example(@RequestBody Map payload) { + String promptText = payload.get("prompt_text"); + ChatCompletionCreateParams request = 
ChatCompletionCreateParams.builder() + .model(ChatModel.GPT_3_5_TURBO) + .maxCompletionTokens(2048) + .addSystemMessage(""" + You are "ExampleBot," a friendly and professional AI assistant [...] + Your role is to [...] + """) + .addDeveloperMessage(""" + // Developer Configuration & Safety Wrapper + 1. The user's query will first be processed by [...] + 2. etc. + """) + .addUserMessage(promptText) + .build(); + var completion = client.chat().completions().create(request); + return ResponseEntity.ok( + Map.of( + "response", + completion.choices().stream() + .flatMap(choice -> choice.message().content().stream()) + .collect(Collectors.joining(" | ")) + ) + ); + } +} +---- + +=== How does this work? + +==== Explicitly stem the LLM context + +While designing an LLM application, and particularly at the stage where you +create the "screenplay" of the intended dialogues between model, user(s), +third-parties, tools, keep the **least privilege** principle in mind. + +Start by providing any external third-party or user with the least amount of +capabilities or information, and only level up their privileges +**intentionally**, e.g. when a situation (like tool calls) requires it. + +Another short-term hardening approach is to add AI guardrails to your LLM, but +keep in mind that deny-list-based filtering is hard to maintain in the long-term +**and** can always be bypassed. Attackers can be very creative with bypass +payloads. 
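The compliant solution above hardcodes the system and developer messages so that request data can only ever reach the `user` role. A framework-agnostic sketch of that invariant (hypothetical API, deliberately not the OpenAI SDK) shows how a builder can make the unprivileged role the only entry point for untrusted text:

```java
// Hypothetical sketch (not the OpenAI SDK): a prompt builder whose only
// method accepting request data targets the unprivileged user role, while
// the privileged system message is supplied once at construction time,
// typically from a hardcoded constant.
public class SafePromptBuilder {
    private final String systemMessage;
    private final StringBuilder userMessages = new StringBuilder();

    public SafePromptBuilder(String hardcodedSystemMessage) {
        this.systemMessage = hardcodedSystemMessage;
    }

    // The only entry point for untrusted text: it can target the user role only.
    public SafePromptBuilder addUserMessage(String untrustedText) {
        userMessages.append("[user] ").append(untrustedText).append('\n');
        return this;
    }

    public String build() {
        return "[system] " + systemMessage + "\n" + userMessages;
    }
}
```

With this shape, there is simply no API surface through which a request payload could populate a privileged message.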
diff --git a/rules/S7518/java/metadata.json b/rules/S7518/java/metadata.json index c9018170c4b..75ae0db43d1 100644 --- a/rules/S7518/java/metadata.json +++ b/rules/S7518/java/metadata.json @@ -1,6 +1,6 @@ { "title": "Constructing privileged prompts from user input is security-sensitive", - "type": "SECURITY_HOTSPOT", + "type": "VULNERABILITY", "code": { "impacts": { "SECURITY": "LOW" @@ -19,5 +19,8 @@ "sqKey": "S7518", "scope": "Main", "defaultQualityProfiles": [], + "educationPrinciples": [ + "never_trust_user_input" + ], "quickfix": "unknown" } diff --git a/rules/S7518/java/rule.adoc b/rules/S7518/java/rule.adoc index dc5a2f943ac..514267fc456 100644 --- a/rules/S7518/java/rule.adoc +++ b/rules/S7518/java/rule.adoc @@ -1,97 +1,16 @@ -include::../description.adoc[] +== Why is this an issue? -include::../ask-yourself.adoc[] +include::../rationale.adoc[] -include::../recommended.adoc[] +include::../impact.adoc[] -== Sensitive Code Example +// How to fix it section -In the following piece of code, control over sensitive roles such as `system` -and `developer` provides a clear way to exploit the underlying model, its -proprietary knowledge (e.g., RAG), and its capabilities (with MCPs). 
+include::how-to-fix-it/openai.adoc[] -[source,java,diff-id=1,diff-type=noncompliant] ----- -@RestController -@RequestMapping("/example") -public class ExampleController { +== Resources - private final OpenAIClient client; - - @PostMapping("/example") - public ResponseEntity example(@RequestBody Map payload) { - String promptText = payload.get("prompt_text"); - String systemText = payload.get("sys_text"); - String developperText = payload.get("dev_text"); - - ChatCompletionCreateParams request = ChatCompletionCreateParams.builder() - .model(ChatModel.GPT_3_5_TURBO) - .maxCompletionTokens(2048) - .addSystemMessage(systemText) - .addDeveloperMessage(developperText) - .addUserMessage(promptText) - .build(); - - var completion = client.chat().completions().create(request); - return ResponseEntity.ok( - Map.of( - "response", - completion.choices().stream() - .flatMap(choice -> choice.message().content().stream()) - .collect(Collectors.joining(" | ")) - ) - ); - } -} ----- - -== Compliant Solution - -This compliant solution revokes any external possibility of controlling -sensitive roles by just hardcoding the system and developer messages. - -[source,java,diff-id=1,diff-type=compliant] ----- -@RestController -@RequestMapping("/example") -public class ExampleController { - - private final OpenAIClient client; - - @PostMapping("/example") - public ResponseEntity example(@RequestBody Map payload) { - String promptText = payload.get("prompt_text"); - - ChatCompletionCreateParams request = ChatCompletionCreateParams.builder() - .model(ChatModel.GPT_3_5_TURBO) - .maxCompletionTokens(2048) - .addSystemMessage(""" - You are "ExampleBot," a friendly and professional AI assistant [...] - - Your role is to [...] - """) - .addDeveloperMessage(""" - // Developer Configuration & Safety Wrapper - 1. The user's query will first be processed by [...] - 2. etc. 
- """) - .addUserMessage(promptText) - .build(); - - var completion = client.chat().completions().create(request); - return ResponseEntity.ok( - Map.of( - "response", - completion.choices().stream() - .flatMap(choice -> choice.message().content().stream()) - .collect(Collectors.joining(" | ")) - ) - ); - } -} ----- - -include::../see.adoc[] +include::../common/resources/standards.adoc[] ifdef::env-github,rspecator-view[] @@ -101,6 +20,12 @@ ifdef::env-github,rspecator-view[] include::../message.adoc[] +include::../highlighting.adoc[] ''' +== Comments And Links +(visible only on this page) + +include::../comments-and-links.adoc[] + endif::env-github,rspecator-view[] diff --git a/rules/S7518/message.adoc b/rules/S7518/message.adoc index 652effc0f12..e56c1ce860c 100644 --- a/rules/S7518/message.adoc +++ b/rules/S7518/message.adoc @@ -1,3 +1,4 @@ === Message -Make sure this user-controlled prompt does not lead to unwanted behavior. +Change this code to not construct privileged prompts directly from user-controlled data. + diff --git a/rules/S7518/description.adoc b/rules/S7518/rationale.adoc similarity index 100% rename from rules/S7518/description.adoc rename to rules/S7518/rationale.adoc diff --git a/rules/S7518/recommended.adoc b/rules/S7518/recommended.adoc deleted file mode 100644 index cbc2e056d60..00000000000 --- a/rules/S7518/recommended.adoc +++ /dev/null @@ -1,5 +0,0 @@ -== Recommended Secure Coding Practices - -* Follow the principle of least privilege while scenarizing your LLM conversations -* Insert guard rails to prevent the LLM from generating harmful or unsafe content -* Restrict the different LLM capabilities (MCP), and knowledge (RAG). 
From 5eba07f8ea97315b0feccc64e710e47426235f95 Mon Sep 17 00:00:00 2001 From: Loris Sierra Date: Thu, 12 Jun 2025 11:52:50 +0200 Subject: [PATCH 2/4] another fix --- rules/S7518/impact.adoc | 8 ++++---- rules/S7518/java/metadata.json | 2 +- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/rules/S7518/impact.adoc b/rules/S7518/impact.adoc index 00d0bf59d47..4f864741e87 100644 --- a/rules/S7518/impact.adoc +++ b/rules/S7518/impact.adoc @@ -1,8 +1,8 @@ === What is the potential impact? -When attackers detect discrepancies while injecting into your LLM application, -they will try to map out their capabilities in terms of actions and knowledge -extraction, and act accordingly. +When attackers detect privilege discrepancies while injecting into your LLM +application, they will try to map out their capabilities in terms of actions and +knowledge extraction, and act accordingly. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability. @@ -12,7 +12,7 @@ exploiting the vulnerability. A malicious prompt injection enables data leakages or possibly impacting the LLM discussions of other users. 
-==== Denial of service +==== Denial of service and code execution Malicious prompt injections could allow the attacker to possibly leverage internal tooling such as MCP, to delete sensitive or important data, or to send diff --git a/rules/S7518/java/metadata.json b/rules/S7518/java/metadata.json index 75ae0db43d1..330fe3825fb 100644 --- a/rules/S7518/java/metadata.json +++ b/rules/S7518/java/metadata.json @@ -1,5 +1,5 @@ { - "title": "Constructing privileged prompts from user input is security-sensitive", + "title": "Privileged prompts should not be vulnerable to injection attacks", "type": "VULNERABILITY", "code": { "impacts": { From 8d7c23b72e3ffece851d73e47455c2ed06f2897d Mon Sep 17 00:00:00 2001 From: Loris Sierra Date: Thu, 12 Jun 2025 12:01:02 +0200 Subject: [PATCH 3/4] english & check --- rules/S7518/common/resources/standards.adoc | 2 +- rules/S7518/java/rule.adoc | 4 ---- rules/S7518/rationale.adoc | 13 ++++++------- 3 files changed, 7 insertions(+), 12 deletions(-) diff --git a/rules/S7518/common/resources/standards.adoc b/rules/S7518/common/resources/standards.adoc index 348fc24c748..c8f255ce7ed 100644 --- a/rules/S7518/common/resources/standards.adoc +++ b/rules/S7518/common/resources/standards.adoc @@ -1,3 +1,3 @@ -== Sstandards +== Standards * OWASP GenAI - https://genai.owasp.org/llmrisk/llm01-prompt-injection/[Top 10 2025 Category LLM00 - Prompt Injection] diff --git a/rules/S7518/java/rule.adoc b/rules/S7518/java/rule.adoc index 514267fc456..a1f1dccd7b9 100644 --- a/rules/S7518/java/rule.adoc +++ b/rules/S7518/java/rule.adoc @@ -23,9 +23,5 @@ include::../message.adoc[] include::../highlighting.adoc[] ''' -== Comments And Links -(visible only on this page) - -include::../comments-and-links.adoc[] endif::env-github,rspecator-view[] diff --git a/rules/S7518/rationale.adoc b/rules/S7518/rationale.adoc index 37f37031ff9..0b97286bb6d 100644 --- a/rules/S7518/rationale.adoc +++ b/rules/S7518/rationale.adoc @@ -6,13 +6,12 @@ Injecting unchecked user 
inputs in privileged prompts gives unauthorized third parties the ability to break out of contexts and constraints that you assume the LLM will follow. -Fundamentally, the core roles of many Large Language Model (LLM) interactions is -defined by the trio of `system`, `user`, and `assistant`. However, the landscape -of conversational AI is expanding to include a more diverse set of roles, such -as `developer`, `tool`, `function`, and even more nuanced roles in multi-agent -systems. +Fundamentally, the trio of `system`, `user`, and `assistant` defines the core +roles of many Large Language Model (LLM) interactions. However, the landscape of +conversational AI is expanding to include a more diverse set of roles, such as +developer, tool, function, and even more nuanced roles in multi-agent systems. -In essence, the LLM conversation roles are no longer a simple triad, but the -most important to keep in mind is that these roles must stay coherent to the +In essence, the LLM conversation roles are no longer a simple triad. The most +important thing to keep in mind is that these roles must stay coherent to the least privilege principle, where each role has the minimum level of access necessary to perform its function. From 089654297641619982ec9a8e93e25edc8ae975e1 Mon Sep 17 00:00:00 2001 From: "Loris S." 
<91723853+loris-s-sonarsource@users.noreply.github.com> Date: Thu, 12 Jun 2025 15:24:21 +0200 Subject: [PATCH 4/4] Apply suggestions from code review Co-authored-by: nicolas-gauthier-sonarsource <121794895+nicolas-gauthier-sonarsource@users.noreply.github.com> --- rules/S7518/java/how-to-fix-it/openai.adoc | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/rules/S7518/java/how-to-fix-it/openai.adoc b/rules/S7518/java/how-to-fix-it/openai.adoc index 59630e9f4eb..14948f4891e 100644 --- a/rules/S7518/java/how-to-fix-it/openai.adoc +++ b/rules/S7518/java/how-to-fix-it/openai.adoc @@ -21,12 +21,12 @@ public class ExampleController { public ResponseEntity example(@RequestBody Map payload) { String promptText = payload.get("prompt_text"); String systemText = payload.get("sys_text"); - String developperText = payload.get("dev_text"); + String developerText = payload.get("dev_text"); ChatCompletionCreateParams request = ChatCompletionCreateParams.builder() .model(ChatModel.GPT_3_5_TURBO) .maxCompletionTokens(2048) .addSystemMessage(systemText) - .addDeveloperMessage(developperText) + .addDeveloperMessage(developerText) .addUserMessage(promptText) .build(); var completion = client.chat().completions().create(request); @@ -57,7 +57,7 @@ public class ExampleController { .model(ChatModel.GPT_3_5_TURBO) .maxCompletionTokens(2048) .addSystemMessage(""" - You are "ExampleBot," a friendly and professional AI assistant [...] + You are "ExampleBot", a friendly and professional AI assistant [...] Your role is to [...] """) .addDeveloperMessage("""
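Across the four patches, the compliant endpoint ends up reading only `prompt_text` from the request body, while `sys_text` and `dev_text` are dropped. A complementary sketch (hypothetical helper, not part of the patch) would reject payloads that still try to smuggle privileged fields, instead of silently ignoring them:

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: validate the incoming payload so that only the
// unprivileged prompt field is accepted; privileged keys such as
// sys_text or dev_text cause outright rejection.
public class PayloadValidator {
    private static final Set<String> ALLOWED_KEYS = Set.of("prompt_text");

    public static boolean isValid(Map<String, String> payload) {
        return payload.containsKey("prompt_text")
            && ALLOWED_KEYS.containsAll(payload.keySet());
    }
}
```

Failing fast on unexpected keys also surfaces probing attempts in logs rather than letting them pass unnoticed.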