fix: Tools detection for OpenAI o1 + LLM Tools/Functions merging #2161
Conversation
**Code Review: PR #2161 - Fix tools detection for OpenAI o1**

**Summary**

This PR fixes an issue where an empty list of tools was not being properly handled for o1 models. The change improves the tool detection logic to differentiate between a non-existent `tools` key, an empty tools list, and a non-empty tools list.

**Code Quality & Best Practices ✅**

**Logic Analysis 🔍**

Before (line 642):

```python
if params.get("tools", False):
    raise ModelToolNotSupportedError(params.get("model"))
```

Here an empty tools list evaluates as falsy, so no error is raised, but the empty `tools` entry stays in `params` and is still forwarded to the API instead of being treated as "no tools".

After (lines 642-646):

```python
if "tools" in params:
    if params["tools"]:  # If tools exist, raise as unsupported
        raise ModelToolNotSupportedError(params.get("model"))
    else:
        params.pop("tools")  # Remove empty tools list
```

This correctly raises `ModelToolNotSupportedError` only when a non-empty tools list is supplied, and removes an empty `tools` entry so it is never sent with the request.
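To make the new check concrete, here is a small, self-contained sketch of how the three possible shapes of `params` are handled. This is illustrative only: the helper name `check_o1_tools` is made up for this example, while the real logic lives inline in `autogen/oai/client.py`.

```python
class ModelToolNotSupportedError(Exception):
    """Stand-in for autogen's error type, used only for this sketch."""
    def __init__(self, model):
        super().__init__(f"Tools are not supported by model {model!r}")


def check_o1_tools(params: dict) -> dict:
    """Apply the fixed detection logic from the PR to a params dict."""
    if "tools" in params:
        if params["tools"]:  # non-empty list: o1 cannot use tools
            raise ModelToolNotSupportedError(params.get("model"))
        else:
            params.pop("tools")  # empty list: treat as "no tools"
    return params


print(check_o1_tools({"model": "o1-mini"}))               # key absent -> untouched
print(check_o1_tools({"model": "o1-mini", "tools": []}))  # empty list -> key dropped
try:
    check_o1_tools({"model": "o1-mini", "tools": [{"type": "function"}]})
except ModelToolNotSupportedError as err:
    print(err)                                            # non-empty list -> raises
```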
**Potential Issues & Suggestions 🤔**

Minor: type safety. The code assumes the value stored under `"tools"` is a list (or at least something whose truthiness distinguishes "empty" from "non-empty"):

```python
if "tools" in params:
    if params["tools"]:  # If tools exist (non-empty list), raise as unsupported
        raise ModelToolNotSupportedError(params.get("model"))
    else:
        params.pop("tools")  # Remove empty tools list
```

The inline comments added here help clarify the intent.

**Location**: `autogen/oai/client.py:642-646`. The fix is placed correctly within the o1 model handling logic (after line 636).

**Test Coverage**
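As a sketch of what unit-level coverage for this branch could look like, the pytest-style tests below exercise the same logic reproduced in a local helper. The names are hypothetical; the real check sits inline in `client.py` rather than in a named function.

```python
import pytest


class ModelToolNotSupportedError(Exception):
    """Local stand-in for autogen's exception, for this sketch only."""


def process_o1_tools(params: dict) -> dict:
    """Reproduce the PR's lines 642-646 locally so they can be unit tested."""
    if "tools" in params:
        if params["tools"]:
            raise ModelToolNotSupportedError(params.get("model"))
        else:
            params.pop("tools")
    return params


def test_empty_tools_list_is_dropped():
    assert "tools" not in process_o1_tools({"model": "o1-mini", "tools": []})


def test_missing_tools_key_is_untouched():
    assert process_o1_tools({"model": "o1-mini"}) == {"model": "o1-mini"}


def test_non_empty_tools_raise():
    with pytest.raises(ModelToolNotSupportedError):
        process_o1_tools({"model": "o1-mini", "tools": [{"type": "function"}]})
```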
**Code Review for PR #2161: Tools detection for OpenAI o1 + LLM Tools/Functions merging**

**Summary**

This PR addresses two related issues: tools detection for OpenAI o1 models and the merging of LLM tools/functions across configurations.

**Code Quality & Best Practices**
**Potential Bugs or Issues**

In `autogen/oai/client.py`:

```python
# Current (lines 642-646):
if "tools" in params:
    if params["tools"]:  # If tools exist, raise as unsupported
        raise ModelToolNotSupportedError(params.get("model"))
    else:
        params.pop("tools")  # Remove empty tools list
```

Recommendation: this is likely fine as written.

✅ No logic bugs detected. The fix correctly distinguishes a missing `tools` key, an empty tools list, and a non-empty tools list, raising only in the last case.
**Performance Considerations**

✅ Minimal performance impact.

Minor optimization opportunity:

```python
# Current (line 135):
if config_tools or client_tools:

# Could be slightly more explicit:
if len(config_tools) > 0 or len(client_tools) > 0:
```

However, the current version is more Pythonic and equally performant.

**Security Concerns**

✅ No security issues identified.
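For context on the merge behavior that the recommended tests below exercise, here is a minimal, purely hypothetical sketch of a `merge_config_with_tools`-style helper. The name, signature, and precedence rules are assumptions for illustration, not the PR's actual implementation.

```python
from copy import deepcopy


def merge_config_with_tools(config: dict, client_config: dict) -> dict:
    """Hypothetical merge: client_config wins on scalar keys, tools lists are
    concatenated, an empty merged tools list is dropped, and tools are skipped
    entirely when legacy 'functions' are supplied."""
    merged = deepcopy(config)
    config_tools = merged.pop("tools", None) or []
    client_tools = client_config.get("tools", None) or []

    # Copy everything except 'tools', letting client_config take precedence.
    merged.update({k: v for k, v in client_config.items() if k != "tools"})

    if "functions" in client_config:
        return merged  # legacy functions present: do not add a tools key

    if config_tools or client_tools:
        merged["tools"] = config_tools + client_tools
    return merged


print(merge_config_with_tools(
    {"model": "gpt-3.5", "tools": [{"name": "tool1"}]},
    {"model": "gpt-4", "tools": [{"name": "tool2"}]},
))
# {'model': 'gpt-4', 'tools': [{'name': 'tool1'}, {'name': 'tool2'}]}
```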
**Test Coverage**

The new merging logic does not appear to have dedicated unit tests yet.

Recommended test cases:

```python
def test_merge_config_with_tools():
    """Test merging configurations with tools and functions handling."""
    
    # Test 1: Both configs have tools - should merge
    config = {"tools": [{"name": "tool1"}], "model": "gpt-4"}
    client_config = {"tools": [{"name": "tool2"}], "api_key": "test"}
    result = merge_config_with_tools(config, client_config)
    assert len(result["tools"]) == 2
    
    # Test 2: Empty tools in both - should not add tools key
    config = {"tools": [], "model": "gpt-4"}
    client_config = {"tools": []}
    result = merge_config_with_tools(config, client_config)
    assert "tools" not in result or result["tools"] == []
    
    # Test 3: Functions parameter present - should not add tools
    config = {"tools": [{"name": "tool1"}]}
    client_config = {"functions": [{"name": "func1"}]}
    result = merge_config_with_tools(config, client_config)
    assert "tools" not in result
    assert "functions" in result
    
    # Test 4: One config has tools, other doesn't
    config = {"tools": [{"name": "tool1"}]}
    client_config = {"model": "gpt-4"}
    result = merge_config_with_tools(config, client_config)
    assert result["tools"] == [{"name": "tool1"}]
    
    # Test 5: Client config overrides model
    config = {"model": "gpt-3.5", "tools": []}
    client_config = {"model": "gpt-4", "tools": []}
    result = merge_config_with_tools(config, client_config)
    assert result["model"] == "gpt-4"
```

Integration tests:

```python
def test_o1_with_empty_tools_list(self, o1_mini_client: OpenAIWrapper) -> None:
    """Test that o1 models handle empty tools list correctly."""
    messages = [{"role": "user", "content": "2+2="}]
    # Should not raise ModelToolNotSupportedError for empty tools
    response = o1_mini_client.create(messages=messages, tools=[])
    assert response is not None
```

**Additional Observations**

Good practices observed; documentation was also reviewed.
**Recommendation**

Approve with minor suggestions. This is a solid fix that addresses a real issue with o1 model compatibility. The main concern is the lack of unit tests for the new merging logic.

Before merging: add the recommended unit and integration tests above.

Priority: Medium. The fix is correct, but test coverage should be added to prevent regressions.

Review completed by Claude Code
**Codecov Report**

❌ Some changed lines are missing patch coverage; 19 files show indirect coverage changes.
Why are these changes needed?
When using o1 models, an empty list of tools was not being handled as having no tools.
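A minimal reproduction sketch of the scenario (the wrapper usage mirrors the integration test suggested in the review above; the `o1-mini` model name and placeholder API key are assumptions):

```python
from autogen import OpenAIWrapper

# Before this fix, an empty tools list was not treated as "no tools" for o1 models;
# after the fix, `tools=[]` is dropped from the request parameters.
client = OpenAIWrapper(config_list=[{"model": "o1-mini", "api_key": "sk-..."}])

response = client.create(
    messages=[{"role": "user", "content": "2+2="}],
    tools=[],  # should behave exactly like passing no tools at all
)
print(response is not None)
```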
Related issue number
Checks