A comprehensive PHP-based testing framework for validating OpenAI function calling capabilities and message responses. It lets you test whether your AI assistant calls the right functions with the right parameters, or returns an appropriate text response, for a given user input.
- Function Call Testing: Validate that the AI calls the correct functions with expected parameters
- Message Response Testing: Verify that the AI provides appropriate text responses when no function calls are needed
- AI-Powered Meaning Validation: Uses another AI model to validate if response meanings match expectations
- Colorized Console Output: Easy-to-read test results with color-coded success/failure indicators
- Flexible Test Filtering: Run all tests or filter by test name patterns
- Detailed Error Reporting: Comprehensive output showing what was expected vs what was received
├── test.php             # Main test framework and runner
├── cases.json           # Test case definitions
├── structure.json       # OpenAI API configuration (model, tools, etc.)
├── settings.ini.php     # API keys and configuration settings
├── README.md            # This documentation
└── LICENSE              # Project license
⚠️ Important Setup Notice
Before running the tests, you must copy the default configuration files to their active versions:
- Copy settings.ini.default.php → settings.ini.php
- Copy cases.default.json → cases.json
- Copy structure.default.json → structure.json

Then customize these files with your specific API keys and test configurations.

Note: After copying structure.default.json to structure.json, you might want to edit the file and remove the vector storage section if you don't need it for your testing purposes.
Configure your API keys and models:
<?php
return [
"api_key" => "your-openai-api-key",
"model_meaning_verification" => "gpt-4.1-mini",
"model_core" => "o3-mini",
"system_prompt" => "Your system prompt here..."
];
?>
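Since settings.ini.php simply returns a PHP array, loading it is a single require. A minimal, self-contained sketch (it writes a sample config to a temp file so it can run anywhere; in the framework you would require the real settings.ini.php):

```php
<?php
// Sample configuration mirroring the example above.
$config = [
    "api_key" => "your-openai-api-key",
    "model_meaning_verification" => "gpt-4.1-mini",
    "model_core" => "o3-mini",
    "system_prompt" => "Your system prompt here...",
];

// Write it to a temp file so the sketch is runnable as-is;
// the framework would instead do: $settings = require 'settings.ini.php';
$tmp = tempnam(sys_get_temp_dir(), 'settings');
file_put_contents($tmp, '<?php return ' . var_export($config, true) . ';');

$settings = require $tmp;
unlink($tmp);

// The core model drives function calling; a second model verifies meanings.
echo $settings['model_core'], "\n";           // o3-mini
echo $settings['model_meaning_verification']; // gpt-4.1-mini
```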
Defines the OpenAI API request structure including available tools and model configuration.
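An illustrative structure.json might look like the following. This is only a sketch following the OpenAI tools format; the actual tool definitions depend on your assistant (`support_price` here mirrors the test cases below):

```json
{
  "model": "o3-mini",
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "support_price",
        "description": "Returns the price of paid support",
        "parameters": {
          "type": "object",
          "properties": {},
          "required": []
        }
      }
    }
  ]
}
```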
Test cases are defined in cases.json with the following structure:
{
"name": "test_name",
"messages": [
{
"role": "user",
"content": "User input message"
}
],
"expected_output": {
"type": "toolcall|message",
"tool": "function_name", // For toolcall type
"arguments": {"key": "value"}, // Optional, for validating function arguments
"meaning": "Expected meaning" // For message type with AI validation
}
}
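A structural check of this kind of case entry can be sketched in a few lines of PHP. The `isValidCase` helper below is hypothetical, not necessarily how test.php validates cases:

```php
<?php
// Minimal structural check for one test case entry (hypothetical helper;
// the real test.php may validate differently).
function isValidCase(array $case): bool
{
    if (!isset($case['name'], $case['messages'], $case['expected_output'])) {
        return false;
    }
    $type = $case['expected_output']['type'] ?? '';
    if ($type === 'toolcall') {
        // toolcall cases must at least name the expected tool
        return isset($case['expected_output']['tool']);
    }
    return $type === 'message';
}

$json = '{"name":"support_price","messages":[{"role":"user","content":"What is paid support price?"}],"expected_output":{"type":"toolcall","tool":"support_price"}}';
$case = json_decode($json, true);
var_dump(isValidCase($case)); // bool(true)
```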
Here are the current test cases included in the framework:
{
"name": "support_price",
"messages": [
{
"role": "user",
"content": "What is paid support price?"
}
],
"expected_output": {
"type": "toolcall",
"tool": "support_price"
}
}
What it tests: Verifies that when a user asks about support price, the AI correctly calls the support_price
function.
{
"name": "password_reminder",
"messages": [
{
"role": "user",
"content": "I forgot my password, my e-mail is remdex@gmail.com"
}
],
"expected_output": {
"type": "toolcall",
"tool": "password_reminder",
"arguments": {"email": "remdex@gmail.com"}
}
}
What it tests: Ensures the AI calls the correct function with the right parameters when asked for a password reminder.
{
"name": "welcome",
"messages": [
{
"role": "user",
"content": "Hi"
}
],
"expected_output": {
"type": "message"
}
}
What it tests: Validates that the AI responds with a text message (not a function call) for greeting inputs.
{
"name": "unrelated",
"messages": [
{
"role": "user",
"content": "Who is the president of the USA?"
}
],
"expected_output": {
"type": "message",
"meaning": "Answer to user question should not be provided as it is not related to the Live Helper Chat."
}
}
What it tests: Ensures the AI refuses to answer questions unrelated to the Live Helper domain and uses AI-powered validation to check if the response meaning matches expectations.
Run all tests:
php test.php

Run tests matching a name pattern:
php test.php "password_reminder"

This will run all tests containing "password_reminder" in their name.
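Substring filtering like this can be implemented with a simple strpos check. A sketch, assuming the pattern arrives as the first CLI argument ($argv[1]):

```php
<?php
// Keep only tests whose name contains the filter substring.
// Invoked as: php test.php "password_reminder" → $argv[1] carries the pattern.
function filterTests(array $tests, ?string $pattern): array
{
    if ($pattern === null || $pattern === '') {
        return $tests; // no filter: run everything
    }
    return array_values(array_filter(
        $tests,
        fn(array $t): bool => strpos($t['name'], $pattern) !== false
    ));
}

$tests = [['name' => 'support_price'], ['name' => 'password_reminder']];
print_r(filterTests($tests, 'password')); // only password_reminder remains
```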
The framework provides detailed, colorized output:
╔══════════════════════════════════════════════════════════════╗
║                 OpenAI Function Calling Test                 ║
╚══════════════════════════════════════════════════════════════╝

Running test: support_price
Expected tool: support_price
[✓] support_price - PASS
  → Called tools: support_price

Running test: password_reminder
Expected tool: password_reminder
[✓] password_reminder - PASS
  → Called tools: password_reminder
  → Arguments matched: {"email": "remdex@gmail.com"}

╔══════════════════════════════════════════════════════════════╗
║                         Test Summary                         ║
╚══════════════════════════════════════════════════════════════╝
Total Tests: 2
Passed: 2
Failed: 0
Success Rate: 100.0%

🎉 All tests passed! 🎉
- Verifies the correct function is called
- Validates function arguments match expected values
- Checks that no unexpected function calls occur
- Ensures AI responds with text messages when appropriate
- Validates no function calls are made when not expected
- Uses AI-powered meaning validation for semantic correctness
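The argument validation above can be sketched as a subset comparison between expected and actual tool-call arguments. The helper is hypothetical; the real framework may compare more strictly:

```php
<?php
// Check that every expected argument is present in the actual tool call
// with an identical value. Extra actual arguments are tolerated here.
function argumentsMatch(array $expected, array $actual): bool
{
    foreach ($expected as $key => $value) {
        if (!array_key_exists($key, $actual) || $actual[$key] !== $value) {
            return false;
        }
    }
    return true;
}

$expected = ['email' => 'remdex@gmail.com'];
$actual   = ['email' => 'remdex@gmail.com'];
var_dump(argumentsMatch($expected, $actual)); // bool(true)
```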
For message responses with meaning defined, the framework uses a separate AI model to validate whether the actual response semantically matches the expected meaning in the context of the original user question.
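One way to frame that second-model check is to assemble a verification prompt containing the original question, the expected meaning, and the actual reply, and ask the verifier for a yes/no verdict. A sketch of the prompt assembly only (the API call itself is omitted; the function name and prompt wording are hypothetical, not the framework's actual code):

```php
<?php
// Build the messages payload for the meaning-verification model
// (hypothetical format; the framework's actual prompt may differ).
function buildMeaningCheck(string $question, string $expectedMeaning, string $reply): array
{
    return [
        ['role' => 'system',
         'content' => 'Answer strictly YES or NO: does the assistant reply match the expected meaning in the context of the user question?'],
        ['role' => 'user',
         'content' => "Question: $question\nExpected meaning: $expectedMeaning\nAssistant reply: $reply"],
    ];
}

$messages = buildMeaningCheck(
    'Who is the president of the USA?',
    'Answer to user question should not be provided as it is not related to the Live Helper Chat.',
    'Sorry, I can only help with Live Helper Chat questions.'
);
echo $messages[1]['content'];
```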
- Open cases.json
- Add a new test case following the format above
- Run the tests to validate your new case
Example of adding a new test:
{
"name": "check_account_balance",
"messages": [
{
"role": "user",
"content": "What is my account balance?"
}
],
"expected_output": {
"type": "toolcall",
"tool": "get_account_balance"
}
}
The framework provides comprehensive error handling:
- API connection errors
- Invalid JSON responses
- Missing or malformed test cases
- Function call validation failures
- Meaning validation errors
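Invalid JSON responses, for example, can be surfaced right after decoding instead of silently propagating null. A sketch using json_last_error (the helper name is hypothetical):

```php
<?php
// Decode an API response body and raise a readable error on bad JSON.
function decodeResponse(string $body): array
{
    $data = json_decode($body, true);
    if (json_last_error() !== JSON_ERROR_NONE) {
        throw new RuntimeException('Invalid JSON response: ' . json_last_error_msg());
    }
    return $data;
}

try {
    decodeResponse('{not valid json');
} catch (RuntimeException $e) {
    echo $e->getMessage(); // Invalid JSON response: Syntax error
}
```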
- PHP 7.4 or higher
- cURL extension enabled
- Valid OpenAI API key
- Internet connection for API calls
- Add test cases to cases.json
- Update function definitions in structure.json if needed
- Run tests to ensure everything works
- Submit your changes
This project is licensed under the terms specified in the LICENSE file.