Description
File Path:
LLM-VM/src/llm_vm/agents/REBEL/utils.py
Relevant Code Line:
resp = (requests.get if tool["method"] == "GET" else requests.post)(**tool_args)
Vulnerability Description
In the tool_api_call function located in llm_vm/agents/REBEL/utils.py, the API request URL is dynamically constructed using both tool["args"] and the LLM-generated parsed_gpt_suggested_input. Specifically, the replace_variables_for_values function fills placeholders defined in tool["args"] with values from parsed_gpt_suggested_input, resulting in the final request URL.
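For context, here is a minimal sketch of the pattern described above. The function and variable names (tool_api_call, replace_variables_for_values, tool["args"], parsed_gpt_suggested_input) are taken from this report; the placeholder-filling logic shown is an assumption and may not match the exact implementation in utils.py.

```python
import requests

def replace_variables_for_values(template_args, suggested_input):
    # Hypothetical helper: fills {placeholder} slots in the tool's argument
    # templates with values taken directly from the LLM's suggested input.
    filled = {}
    for key, template in template_args.items():
        if isinstance(template, str):
            filled[key] = template.format(**suggested_input)
        else:
            filled[key] = template
    return filled

def tool_api_call(tool, parsed_gpt_suggested_input):
    # The request URL (and other arguments) are built from LLM-controlled values.
    tool_args = replace_variables_for_values(tool["args"], parsed_gpt_suggested_input)
    # No validation of tool_args["url"] happens before the request is sent.
    resp = (requests.get if tool["method"] == "GET" else requests.post)(**tool_args)
    return resp
```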
Vulnerability Analysis
Because the LLM's output (gpt_suggested_input) directly influences how the request URL is constructed, this pattern introduces a prompt injection risk: a malicious user can craft a prompt that manipulates the LLM into generating URLs that point to arbitrary external or internal servers. This can lead to the following security issues:
- Server-Side Request Forgery (SSRF): The attacker could make the application send requests to internal networks or restricted external services, enabling internal system probing, sensitive data access, or unauthorized actions.
- Data Leakage: If the LLM is tricked into generating URLs that include sensitive information as parameters, this data could be exfiltrated to attacker-controlled endpoints.
- Denial of Service (DoS): An attacker might induce the LLM to make repeated requests to non-existent or resource-intensive URLs, exhausting server resources.
- Security Control Bypass: If the application relies on specific URL structures for security validation, malicious URLs injected via the LLM could bypass such checks.
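As a purely hypothetical illustration of the SSRF case (the tool definition and values below are invented for this example and do not come from the repository):

```python
# Hypothetical tool definition: the URL template exposes placeholders that
# are filled directly from the LLM's suggested input.
tool = {
    "method": "GET",
    "args": {"url": "{base_url}/search?q={query}"},
}

# Intended use: the LLM supplies benign values.
benign_input = {"base_url": "https://api.example.com", "query": "weather"}
# -> https://api.example.com/search?q=weather

# Prompt-injected use: the same substitution lets an attacker steer the
# request at an internal, link-local address (a classic SSRF target).
injected_input = {"base_url": "http://169.254.169.254", "query": "x"}
# -> http://169.254.169.254/search?q=x
```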
Impact
Allowing the LLM direct control over request URLs without proper validation or sandboxing poses serious security risks, threatening the integrity, confidentiality, and availability of the application.
Recommended Mitigations
- Strict URL Validation and Whitelisting: Validate every LLM-generated URL before executing an API request, and only allow requests to trusted domains and paths that are explicitly pre-approved. Avoid relying on blacklists, as they are easier to bypass. (A hedged validation sketch follows this list.)
- Input Sanitization: Rigorously sanitize all LLM-supplied values, removing or escaping characters that could alter the URL structure.
- Limit LLM Control Scope: Restrict the LLM's influence to non-sensitive parts of the URL, such as query-parameter values; do not let it control the domain or the full URL path. (A sketch of this approach also follows the list.)
- Network Isolation and Firewall Rules: Enforce strict network boundaries and egress firewall policies in the deployment environment to limit outbound requests and minimize the impact of SSRF if it occurs.
- Principle of Least Privilege: Ensure the application performs external requests with only the minimal permissions required to complete its task.
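The URL validation mentioned in the first mitigation could look roughly like the sketch below. The allowlist contents and the integration point are assumptions; the idea is simply that only requests whose scheme and host are explicitly pre-approved get through.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; in a real deployment this would come from configuration.
ALLOWED_SCHEMES = {"https"}
ALLOWED_HOSTS = {"api.example.com", "api.weather.example"}

def is_url_allowed(url: str) -> bool:
    """Return True only for URLs whose scheme and host are pre-approved."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        return False
    # urlparse exposes the real host, so userinfo tricks such as
    # https://api.example.com@evil.test/ resolve to "evil.test" and are rejected.
    return parsed.hostname in ALLOWED_HOSTS

# Hypothetical guard before the request in tool_api_call:
# if not is_url_allowed(tool_args["url"]):
#     raise ValueError("Refusing request to a non-allowlisted URL")
```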
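For the sanitization and scope-limiting mitigations, one possible approach (sketched here with invented endpoint and parameter names) is to keep the scheme, host, and path fixed in the tool definition and accept LLM output only as percent-encoded query-parameter values:

```python
from urllib.parse import urlencode, quote

# Hypothetical fixed endpoint: the LLM never controls scheme, host, or path.
FIXED_BASE = "https://api.example.com/weather"
ALLOWED_PARAMS = {"city", "units"}  # invented parameter names for illustration

def build_request_url(llm_values: dict) -> str:
    # Drop unexpected keys and percent-encode the values so they cannot
    # change the URL structure (no new host, path, or injected parameters).
    safe_params = {k: str(v) for k, v in llm_values.items() if k in ALLOWED_PARAMS}
    return f"{FIXED_BASE}?{urlencode(safe_params, quote_via=quote)}"

# build_request_url({"city": "Berlin", "units": "metric", "url": "http://evil.test"})
# -> "https://api.example.com/weather?city=Berlin&units=metric"
```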