LiveHelperChat/openai-test-api

OpenAI Function Calling Test Framework

A comprehensive PHP-based testing framework for validating OpenAI function calling capabilities and message responses. This framework allows you to test whether your AI assistant correctly calls the right functions with the right parameters or provides appropriate text responses based on user inputs.

πŸš€ Features

  • Function Call Testing: Validate that the AI calls the correct functions with expected parameters
  • Message Response Testing: Verify that the AI provides appropriate text responses when no function calls are needed
  • AI-Powered Meaning Validation: Uses another AI model to validate if response meanings match expectations
  • Colorized Console Output: Easy-to-read test results with color-coded success/failure indicators
  • Flexible Test Filtering: Run all tests or filter by test name patterns
  • Detailed Error Reporting: Comprehensive output showing what was expected vs what was received

πŸ“ Project Structure

β”œβ”€β”€ test.php           # Main test framework and runner
β”œβ”€β”€ cases.json         # Test case definitions
β”œβ”€β”€ structure.json     # OpenAI API configuration (model, tools, etc.)
β”œβ”€β”€ settings.ini.php   # API keys and configuration settings
β”œβ”€β”€ README.md          # This documentation
└── LICENSE           # Project license

βš™οΈ Configuration

⚠️ Important Setup Notice
Before running the tests, you must copy the default configuration files to their active versions:

  • Copy settings.ini.default.php β†’ settings.ini.php
  • Copy cases.default.json β†’ cases.json
  • Copy structure.default.json β†’ structure.json

Then customize these files with your specific API keys and test configurations.

Note: After copying structure.default.json to structure.json, you might want to edit the file and remove the vector storage section if you don't need it for your testing purposes.

settings.ini.php

Configure your API keys and models:

<?php
return [
    "api_key" => "your-openai-api-key",
    "model_meaning_verification" => "gpt-4.1-mini",
    "model_core" => "o3-mini",
    "system_prompt" => "Your system prompt here..."
];

structure.json

Defines the OpenAI API request structure including available tools and model configuration.
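The exact contents depend on your assistant, but a minimal structure.json could look like this (the tool definition below is illustrative, not part of the repository):

```json
{
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "support_price",
                "description": "Return the price of paid support.",
                "parameters": {
                    "type": "object",
                    "properties": {},
                    "required": []
                }
            }
        }
    ]
}
```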

πŸ“ Test Case Format

Test cases are defined in cases.json with the following structure:

{
    "name": "test_name",
    "messages": [
        {
            "role": "user",
            "content": "User input message"
        }
    ],
    "expected_output": {
        "type": "toolcall|message",
        "tool": "function_name",           // For toolcall type
        "arguments": {"key": "value"},     // Optional, for validating function arguments
        "meaning": "Expected meaning"      // For message type with AI validation
    }
}

πŸ§ͺ Test Examples

Here are the current test cases included in the framework:

1. Function Call Test - Check Support Price

{
    "name": "support_price",
    "messages": [
        {
            "role": "user",
            "content": "What is paid support price?"
        }
    ],
    "expected_output": {
        "type": "toolcall",
        "tool": "support_price"
    }
}

What it tests: Verifies that when a user asks about support price, the AI correctly calls the support_price function.

2. Function Call Test with Arguments - Password Reminder

{
    "name": "password_reminder",
    "messages": [
        {
            "role": "user", 
            "content": "I forgot my password, my e-mail is remdex@gmail.com"
        }
    ],
    "expected_output": {
        "type": "toolcall",
        "tool": "password_reminder",
        "arguments": {"email": "remdex@gmail.com"}
    }
}

What it tests: Ensures the AI calls the correct function with the right parameters when asked for a password reminder.

3. Message Response Test - Welcome

{
    "name": "welcome",
    "messages": [
        {
            "role": "user",
            "content": "Hi"
        }
    ],
    "expected_output": {
        "type": "message"
    }
}

What it tests: Validates that the AI responds with a text message (not a function call) for greeting inputs.

4. Message Response with Meaning Validation - Unrelated Questions

{
    "name": "unrelated",
    "messages": [
        {
            "role": "user",
            "content": "Who is the president of the USA?"
        }
    ],
    "expected_output": {
        "type": "message",
        "meaning": "Answer to user question should not be provided as it is not related to the Live Helper Chat."
    }
}

What it tests: Ensures the AI refuses to answer questions unrelated to the Live Helper Chat domain, and uses AI-powered validation to check whether the response meaning matches expectations.

πŸƒβ€β™‚οΈ Running Tests

Run All Tests

php test.php

Run Specific Test by Name

php test.php "password_reminder"

This will run all tests containing "password_reminder" in their name.

πŸ“Š Test Output

The framework provides detailed, colorized output:

╔══════════════════════════════════════════════════════════════╗
β•‘                  OpenAI Function Calling Test                β•‘
β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

Running test: support_price
Expected tool: support_price
[βœ“] support_price - PASS
    β†’ Called tools: support_price

Running test: password_reminder  
Expected tool: password_reminder
[βœ“] password_reminder - PASS
    β†’ Called tools: password_reminder
    β†’ Arguments matched: {"email": "remdex@gmail.com"}

╔══════════════════════════════════════════════════════════════╗
β•‘                        Test Summary                          β•‘
β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

Total Tests: 2
Passed: 2
Failed: 0
Success Rate: 100.0%

πŸŽ‰ All tests passed! πŸŽ‰

πŸ” Validation Types

Function Call Validation

  • Verifies the correct function is called
  • Validates function arguments match expected values
  • Checks that no unexpected function calls occur
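The first two checks can be sketched roughly as follows (function and variable names here are assumptions for illustration, not the framework's actual code):

```php
<?php
// Sketch: validate that one of the returned tool calls matches the
// expected tool name, and that every expected argument is present with
// the expected value. $toolCalls mirrors the OpenAI response shape,
// where "arguments" is a JSON-encoded string.
function validateToolCall(array $toolCalls, string $expectedTool, array $expectedArgs = []): bool
{
    foreach ($toolCalls as $call) {
        if ($call['function']['name'] !== $expectedTool) {
            continue;
        }
        $args = json_decode($call['function']['arguments'], true) ?? [];
        foreach ($expectedArgs as $key => $value) {
            if (!array_key_exists($key, $args) || $args[$key] !== $value) {
                return false;
            }
        }
        return true;
    }
    return false; // the expected tool was never called
}

$calls = [[
    'function' => [
        'name' => 'password_reminder',
        'arguments' => '{"email": "remdex@gmail.com"}',
    ],
]];
var_dump(validateToolCall($calls, 'password_reminder', ['email' => 'remdex@gmail.com']));
```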

Message Response Validation

  • Ensures AI responds with text messages when appropriate
  • Validates no function calls are made when not expected
  • Uses AI-powered meaning validation for semantic correctness

AI-Powered Meaning Validation

For message responses with meaning defined, the framework uses a separate AI model to validate if the actual response semantically matches the expected meaning in context of the original user question.
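The framework's actual validation prompt is not reproduced here, but a minimal sketch of how such a check could be assembled (model name and prompt wording are illustrative):

```php
<?php
// Sketch: build a chat-completion payload asking a second model whether
// the assistant's response matches the expected meaning. The framework
// would send this to the model configured as model_meaning_verification.
function buildMeaningCheckPayload(string $question, string $response, string $meaning): array
{
    return [
        'model' => 'gpt-4.1-mini',
        'messages' => [
            ['role' => 'system', 'content' => 'You are a test validator. Answer only YES or NO.'],
            ['role' => 'user', 'content' =>
                "User question: {$question}\n" .
                "Assistant response: {$response}\n" .
                "Expected meaning: {$meaning}\n" .
                "Does the response match the expected meaning? Answer YES or NO."],
        ],
    ];
}

$payload = buildMeaningCheckPayload(
    'Who is the president of the USA?',
    'Sorry, I can only help with Live Helper Chat questions.',
    'Answer to user question should not be provided as it is not related to the Live Helper Chat.'
);
echo json_encode($payload, JSON_PRETTY_PRINT), "\n";
```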

πŸ› οΈ Adding New Tests

  1. Open cases.json
  2. Add a new test case following the format above
  3. Run the tests to validate your new case

Example of adding a new test:

{
    "name": "check_account_balance", 
    "messages": [
        {
            "role": "user",
            "content": "What is my account balance?"
        }
    ],
    "expected_output": {
        "type": "toolcall",
        "tool": "get_account_balance"
    }
}

πŸ”§ Error Handling

The framework provides comprehensive error handling:

  • API connection errors
  • Invalid JSON responses
  • Missing or malformed test cases
  • Function call validation failures
  • Meaning validation errors
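The first two failure modes above are typically distinguished like this (the function name is an assumption for illustration):

```php
<?php
// Sketch: separate a failed cURL transfer from a malformed JSON body.
// curl_exec() returns false when the transfer itself fails; a successful
// transfer can still carry a body that is not valid JSON.
function decodeApiResponse($raw)
{
    if ($raw === false) {
        throw new RuntimeException('API connection error');
    }
    $data = json_decode($raw, true);
    if (!is_array($data)) {
        throw new RuntimeException('Invalid JSON response: ' . json_last_error_msg());
    }
    return $data;
}

var_dump(decodeApiResponse('{"choices": []}'));
```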

πŸ“‹ Requirements

  • PHP 7.4 or higher
  • cURL extension enabled
  • Valid OpenAI API key
  • Internet connection for API calls

🀝 Contributing

  1. Add test cases to cases.json
  2. Update function definitions in structure.json if needed
  3. Run tests to ensure everything works
  4. Submit your changes

πŸ“„ License

This project is licensed under the terms specified in the LICENSE file.
