Skip to content

APPSEC-2481 Change S7518 from hotspot to vuln #5123

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Merged
merged 4 commits into from
Jun 12, 2025
Merged
Show file tree
Hide file tree
Changes from 3 commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
1 change: 1 addition & 0 deletions docs/header_names/allowed_framework_names.adoc
Original file line number Diff line number Diff line change
Expand Up @@ -55,6 +55,7 @@
* JSP
* Legacy Mongo Java API
* OkHttp
* OpenAI
* Realm
* Apache HttpClient
* Couchbase
Expand Down
7 changes: 0 additions & 7 deletions rules/S7518/ask-yourself.adoc

This file was deleted.

Original file line number Diff line number Diff line change
@@ -1,3 +1,3 @@
== See
== Standards

* OWASP GenAI - https://genai.owasp.org/llmrisk/llm01-prompt-injection/[Top 10 2025 Category LLM00 - Prompt Injection]
8 changes: 8 additions & 0 deletions rules/S7518/highlighting.adoc
Original file line number Diff line number Diff line change
@@ -0,0 +1,8 @@
=== Highlighting

"[varname]" is tainted (assignments and parameters)

this argument is tainted (method invocations)

the returned value is tainted (returns & method invocations results)

22 changes: 22 additions & 0 deletions rules/S7518/impact.adoc
Original file line number Diff line number Diff line change
@@ -0,0 +1,22 @@
=== What is the potential impact?

When attackers detect privilege discrepancies while injecting into your LLM
application, they will try to map out their capabilities in terms of actions and
knowledge extraction, and act accordingly.

Below are some real-world scenarios that illustrate some impacts of an attacker
exploiting the vulnerability.

==== Data manipulation

A malicious prompt injection enables data leakages or possibly impacting the
LLM discussions of other users.

==== Denial of service and code execution

Malicious prompt injections could allow the attacker to possibly leverage
internal tooling such as MCP, to delete sensitive or important data, or to send
tremendous amounts of requests to third-party services, leading to financial
losses or getting banned from such services. +
This threat is particularly insidious if the attacked organization does not
maintain a disaster recovery plan (DRP).
98 changes: 98 additions & 0 deletions rules/S7518/java/how-to-fix-it/openai.adoc
Original file line number Diff line number Diff line change
@@ -0,0 +1,98 @@
== How to fix it in OpenAI

=== Code examples

In the following piece of code, control over sensitive roles such as `system`
and `developer` provides a clear way to exploit the underlying model, its
proprietary knowledge (e.g., RAG), and its capabilities (with MCPs).

The compliant solution revokes any external possibility of controlling
sensitive roles by just hardcoding the system and developer messages.

==== Noncompliant code example

[source,java,diff-id=1,diff-type=noncompliant]
----
@RestController
@RequestMapping("/example")
public class ExampleController {
private final OpenAIClient client;
@PostMapping("/example")
public ResponseEntity<?> example(@RequestBody Map<String, String> payload) {
String promptText = payload.get("prompt_text");
String systemText = payload.get("sys_text");
String developperText = payload.get("dev_text");
ChatCompletionCreateParams request = ChatCompletionCreateParams.builder()
.model(ChatModel.GPT_3_5_TURBO)
.maxCompletionTokens(2048)
.addSystemMessage(systemText)
.addDeveloperMessage(developperText)
.addUserMessage(promptText)
.build();
var completion = client.chat().completions().create(request);
return ResponseEntity.ok(
Map.of(
"response",
completion.choices().stream()
.flatMap(choice -> choice.message().content().stream())
.collect(Collectors.joining(" | "))
)
);
}
}
----

== Compliant Solution

[source,java,diff-id=1,diff-type=compliant]
----
@RestController
@RequestMapping("/example")
public class ExampleController {
private final OpenAIClient client;
@PostMapping("/example")
public ResponseEntity<?> example(@RequestBody Map<String, String> payload) {
String promptText = payload.get("prompt_text");
ChatCompletionCreateParams request = ChatCompletionCreateParams.builder()
.model(ChatModel.GPT_3_5_TURBO)
.maxCompletionTokens(2048)
.addSystemMessage("""
You are "ExampleBot," a friendly and professional AI assistant [...]
Your role is to [...]
""")
.addDeveloperMessage("""
// Developer Configuration & Safety Wrapper
1. The user's query will first be processed by [...]
2. etc.
""")
.addUserMessage(promptText)
.build();
var completion = client.chat().completions().create(request);
return ResponseEntity.ok(
Map.of(
"response",
completion.choices().stream()
.flatMap(choice -> choice.message().content().stream())
.collect(Collectors.joining(" | "))
)
);
}
}
----

=== How does this work?

==== Explicitly stem the LLM context

While designing an LLM application, and particularly at the stage where you
create the "screenplay" of the intended dialogues between model, user(s),
third-parties, tools, keep the **least privilege** principle in mind.

Start by providing any external third-party or user with the least amount of
capabilities or information, and only level up their privileges
**intentionally**, e.g. when a situation (like tool calls) requires it.

Another short-term hardening approach is to add AI guardrails to your LLM, but
keep in mind that deny-list-based filtering is hard to maintain in the long-term
**and** can always be bypassed. Attackers can be very creative with bypass
payloads.
7 changes: 5 additions & 2 deletions rules/S7518/java/metadata.json
Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
{
"title": "Constructing privileged prompts from user input is security-sensitive",
"type": "SECURITY_HOTSPOT",
"title": "Privileged prompts should not be vulnerable to injection attacks",
"type": "VULNERABILITY",
"code": {
"impacts": {
"SECURITY": "LOW"
Expand All @@ -19,5 +19,8 @@
"sqKey": "S7518",
"scope": "Main",
"defaultQualityProfiles": [],
"educationPrinciples": [
"never_trust_user_input"
],
"quickfix": "unknown"
}
97 changes: 9 additions & 88 deletions rules/S7518/java/rule.adoc
Original file line number Diff line number Diff line change
@@ -1,97 +1,16 @@
include::../description.adoc[]
== Why is this an issue?

include::../ask-yourself.adoc[]
include::../rationale.adoc[]

include::../recommended.adoc[]
include::../impact.adoc[]

== Sensitive Code Example
// How to fix it section

In the following piece of code, control over sensitive roles such as `system`
and `developer` provides a clear way to exploit the underlying model, its
proprietary knowledge (e.g., RAG), and its capabilities (with MCPs).
include::how-to-fix-it/openai.adoc[]

[source,java,diff-id=1,diff-type=noncompliant]
----
@RestController
@RequestMapping("/example")
public class ExampleController {
== Resources

private final OpenAIClient client;

@PostMapping("/example")
public ResponseEntity<?> example(@RequestBody Map<String, String> payload) {
String promptText = payload.get("prompt_text");
String systemText = payload.get("sys_text");
String developperText = payload.get("dev_text");

ChatCompletionCreateParams request = ChatCompletionCreateParams.builder()
.model(ChatModel.GPT_3_5_TURBO)
.maxCompletionTokens(2048)
.addSystemMessage(systemText)
.addDeveloperMessage(developperText)
.addUserMessage(promptText)
.build();

var completion = client.chat().completions().create(request);
return ResponseEntity.ok(
Map.of(
"response",
completion.choices().stream()
.flatMap(choice -> choice.message().content().stream())
.collect(Collectors.joining(" | "))
)
);
}
}
----

== Compliant Solution

This compliant solution revokes any external possibility of controlling
sensitive roles by just hardcoding the system and developer messages.

[source,java,diff-id=1,diff-type=compliant]
----
@RestController
@RequestMapping("/example")
public class ExampleController {

private final OpenAIClient client;

@PostMapping("/example")
public ResponseEntity<?> example(@RequestBody Map<String, String> payload) {
String promptText = payload.get("prompt_text");

ChatCompletionCreateParams request = ChatCompletionCreateParams.builder()
.model(ChatModel.GPT_3_5_TURBO)
.maxCompletionTokens(2048)
.addSystemMessage("""
You are "ExampleBot," a friendly and professional AI assistant [...]

Your role is to [...]
""")
.addDeveloperMessage("""
// Developer Configuration & Safety Wrapper
1. The user's query will first be processed by [...]
2. etc.
""")
.addUserMessage(promptText)
.build();

var completion = client.chat().completions().create(request);
return ResponseEntity.ok(
Map.of(
"response",
completion.choices().stream()
.flatMap(choice -> choice.message().content().stream())
.collect(Collectors.joining(" | "))
)
);
}
}
----

include::../see.adoc[]
include::../common/resources/standards.adoc[]

ifdef::env-github,rspecator-view[]

Expand All @@ -101,6 +20,8 @@ ifdef::env-github,rspecator-view[]

include::../message.adoc[]

include::../highlighting.adoc[]

'''

endif::env-github,rspecator-view[]
3 changes: 2 additions & 1 deletion rules/S7518/message.adoc
Original file line number Diff line number Diff line change
@@ -1,3 +1,4 @@
=== Message

Make sure this user-controlled prompt does not lead to unwanted behavior.
Change this code to not construct privileged prompts directly from user-controlled data.

13 changes: 6 additions & 7 deletions rules/S7518/description.adoc → rules/S7518/rationale.adoc
Original file line number Diff line number Diff line change
Expand Up @@ -6,13 +6,12 @@ Injecting unchecked user inputs in privileged prompts gives unauthorized third
parties the ability to break out of contexts and constraints that you assume the
LLM will follow.

Fundamentally, the core roles of many Large Language Model (LLM) interactions is
defined by the trio of `system`, `user`, and `assistant`. However, the landscape
of conversational AI is expanding to include a more diverse set of roles, such
as `developer`, `tool`, `function`, and even more nuanced roles in multi-agent
systems.
Fundamentally, the trio of `system`, `user`, and `assistant` defines the core
roles of many Large Language Model (LLM) interactions. However, the landscape of
conversational AI is expanding to include a more diverse set of roles, such as
developer, tool, function, and even more nuanced roles in multi-agent systems.

In essence, the LLM conversation roles are no longer a simple triad, but the
most important to keep in mind is that these roles must stay coherent to the
In essence, the LLM conversation roles are no longer a simple triad. The most
important thing to keep in mind is that these roles must stay coherent to the
least privilege principle, where each role has the minimum level of access
necessary to perform its function.
5 changes: 0 additions & 5 deletions rules/S7518/recommended.adoc

This file was deleted.