Releases: ExtensityAI/symbolicai

v0.12.0

01 Jun 17:46

SymbolicAI v0.12.0 Release Notes

🎉 Major New Features

Google Gemini Support

  • NEW: Added full support for Google Gemini models (gemini-2.5-pro-preview-05-06, gemini-2.5-flash-preview-05-20)
  • NEW: Gemini reasoning engine with thinking trace support
  • NEW: Multi-modal support for Gemini (images, videos, audio, documents)
  • NEW: Token counting and cost estimation for Gemini models
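
A minimal sketch of driving the standard Symbol API through a Gemini backend. The configuration key names below are assumptions; check the engine documentation for the exact settings:

```python
# Assumes the neurosymbolic engine is pointed at a Gemini model, e.g. via
# NEUROSYMBOLIC_ENGINE_MODEL="gemini-2.5-flash-preview-05-20" and a matching
# NEUROSYMBOLIC_ENGINE_API_KEY in your symai configuration (key names assumed).
from symai import Symbol

# Standard Symbol usage; the configured Gemini backend handles the call.
sym = Symbol("Neuro-symbolic AI combines neural networks with symbolic reasoning.")
print(sym.query("Summarize this in five words."))
```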

OpenAI Search Engine

  • NEW: Native OpenAI search capabilities with citation support
  • NEW: Interface('openai_search') for web search with AI responses
  • NEW: Configurable search context size and user location parameters
  • NEW: Automatic citation extraction and formatting
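
A usage sketch for the new search interface. The interface name comes from these notes; the search context size and user location parameters are configurable, but their keyword names are not shown here and should be taken from the search engine guide:

```python
from symai import Interface  # top-level import assumed

# Interface name as documented above; only a plain query string is assumed here.
search = Interface('openai_search')
result = search('What are the latest developments in neuro-symbolic AI?')
print(result)  # AI-generated answer; citations are extracted and formatted automatically
```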

Enhanced Function/Tool Calling

  • IMPROVED: Universal function calling support across OpenAI, Claude, and Gemini
  • NEW: Consistent metadata format for function calls across all engines
  • NEW: Better error handling and multiple tool call detection

🔧 Significant Improvements

Metadata Tracking & Cost Estimation

  • NEW: MetadataTracker component for detailed usage tracking
  • NEW: RuntimeInfo utility for cost estimation and analytics
  • NEW: Per-engine token counting and API call tracking
  • IMPROVED: Better metadata aggregation across multiple engine calls
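
A sketch of how the new tracking components might be used together. The component names come from these notes, but the import path and attribute names are assumptions; see the metadata-tracking examples in the documentation for the authoritative usage:

```python
from symai import Symbol
from symai.components import MetadataTracker  # assumed import path

# Collect per-engine usage while running ordinary queries.
with MetadataTracker() as tracker:
    Symbol("SymbolicAI").query("What does this framework do? Answer in one sentence.")

# The tracker is expected to expose per-engine token counts and API-call
# statistics, which RuntimeInfo can turn into a cost estimate.
print(tracker.usage)  # assumed attribute name
```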

Engine Enhancements

  • IMPROVED: Enhanced Claude reasoning engine with better thinking trace support
  • IMPROVED: Updated model support for Claude 4.0 and Sonnet 4.0
  • IMPROVED: Better streaming support across all engines
  • IMPROVED: Consistent error handling with CustomUserWarning

Vision & Media Processing

  • IMPROVED: Enhanced image processing across all vision-capable models
  • NEW: Frame extraction support for video content
  • IMPROVED: Better handling of media patterns in prompts

🐛 Bug Fixes

Core Fixes

  • FIXED: Token truncation issues across different engines
  • FIXED: Raw input processing for all engine types
  • FIXED: Response format handling for JSON outputs
  • FIXED: Self-prompting functionality across engines

Engine-Specific Fixes

  • FIXED: Claude streaming response collection
  • FIXED: OpenAI tool call argument parsing
  • FIXED: Deepseek response format handling
  • FIXED: Vision pattern removal in prompts

🔄 Breaking Changes

Deprecated Features

  • REMOVED: Legacy experimental engines (Bard wrapper, GPT fine-tuner, etc.)
  • REMOVED: Old completion-based OpenAI engine
  • CHANGED: Standardized engine initialization patterns

API Changes

  • CHANGED: Thinking configuration format for Claude (simplified structure)
  • CHANGED: Consistent error handling across all engines
  • CHANGED: Engine name property now required for all engines

📚 Documentation & Testing

Documentation Updates

  • UPDATED: Comprehensive engine documentation with examples
  • NEW: Cost estimation and metadata tracking examples
  • UPDATED: Search engine configuration guides
  • NEW: Multi-modal content processing examples

Testing Improvements

  • NEW: Mandatory test markers for critical functionality
  • IMPROVED: Engine-specific test coverage
  • NEW: Function calling tests across all supported engines
  • IMPROVED: Vision processing test coverage

🔧 Developer Experience

Configuration

  • NEW: Simplified engine configuration patterns
  • IMPROVED: Better error messages for missing API keys
  • NEW: Engine-specific timeout and retry parameters

Utilities

  • NEW: RuntimeInfo for usage analytics
  • NEW: Enhanced prompt registry with custom delimiters
  • IMPROVED: Better file handling and media processing utilities

📋 Dependencies

  • ADDED: google-genai>=1.16.1 for Gemini support
  • UPDATED: Various dependency versions for compatibility

🚀 Performance

  • IMPROVED: Better token counting accuracy where supported
  • IMPROVED: Optimized streaming response handling
  • IMPROVED: Enhanced memory usage for large media files

Note: This release focuses heavily on expanding AI model support and improving the developer experience. The most accurate documentation is always the code itself; look for the mandatory test markers for guaranteed functionality.

Upgrade Notes:

  • Update any existing Claude thinking configurations to the new simplified format
  • Review engine-specific documentation for new capabilities
  • Consider migrating to the new metadata tracking system for cost monitoring

Full Changelog: v0.11.0...v0.12.0

v0.11.0

16 May 09:26

Release Notes for v0.11.0

✨ New Features

  • Contract Performance Statistics Tracking

    • Added a contract_perf_stats method to contract-decorated classes, which tracks and reports granular timing statistics (mean, std, min, max, percentage) for each contract operation: input validation, act execution, output validation, forward execution, total execution, and "overhead" (untracked contract time).
    • Unit tests exercise and validate this detailed performance statistics capability.
  • Improved Type and Semantic Validation

    • The contract mechanism now leverages a single TypeValidationFunction to handle both type and semantic validation, streamlining error handling and remedy functions.
    • The previously separate SemanticValidationFunction is now unified, reducing code duplication and making semantic and type checks more consistent.
  • Rich Field Descriptions for LLM Guidance

    • Strongly encourage and enforce the use of descriptive Field(description="...") for all LLMDataModel attributes. These descriptions are directly used to improve LLM prompting, validation, error messages, and data generation.
    • Updated documentation with clearer guidance and rationale on crafting informative descriptions and prompts.
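
A short sketch of the two features above: rich Field descriptions on an LLMDataModel, and the new per-contract timing report. The LLMDataModel import path is an assumption; verify it against the contracts documentation:

```python
from pydantic import Field
from symai.models import LLMDataModel  # assumed import path


class Invoice(LLMDataModel):
    # Descriptions feed directly into LLM prompting, validation, error messages,
    # and data generation, so make them specific and self-contained.
    vendor: str = Field(description="Legal name of the company issuing the invoice.")
    total: float = Field(description="Grand total in EUR, including VAT.")
    due_date: str = Field(description="Payment due date in ISO 8601 format (YYYY-MM-DD).")


# For any contract-decorated Expression instance (see the contracts guide),
# the timing report can be printed after a few calls:
#   instance.contract_perf_stats()  # mean / std / min / max / % per operation
```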

🐛 Bug Fixes & Refactorings

  • Refined Contract Input and Output Handling

    • The contract decorator now strictly enforces keyword arguments (no positional input) and validates the input type up-front.
    • Input object identity and propagation through the contract lifecycle are preserved and tested (no accidental re-instantiation).
  • Improved Error Reporting & Context

    • Type and semantic validation errors are now accumulated and reported with greater clarity when remedy retries are enabled.
    • Error accumulation context is correctly passed to remedies, improving developer diagnostics.
  • Act Method Refactoring

    • The act method inside contracts is validated for correct signature and type annotations.
    • If no act is defined, the input is propagated unchanged, simplifying state-modifying contracts.
  • Output Type Checks

    • Output from contracts is checked against expected type annotation, with informative error messages if mismatches are detected.
  • Contract Performance Test Coverage

    • New and expanded tests for:
      • End-to-end contract flows with state-modifying act methods.
      • Verification that the same input object is propagated and contract state changes are handled as expected.
      • Tracking and assertion of contract performance statistics.
  • Codebase Cleanup

    • Removed unused imports (e.g., SemanticValidationError class and related references).
    • Simplified logic around data model registration and remedy handling.

📘 Documentation

  • Expanded and clarified the documentation for contracts, field descriptions, prompt design, and the role of pre/post validation.
  • Added best practices for driving LLMs with meaningful validation and semantic checks.
  • Highlighted the separation of static contract prompts and dynamic input state.

Full Changelog: v0.10.0...v0.11.0

v0.10.0

11 May 14:30

SymbolicAI Release Notes (v0.10.0)

🚀 New Features

1. Contracts System & Design by Contract (DbC) Support

  • Decorator-Based Contracts: Introduction of the @contract class decorator (inspired by Design by Contract principles) for Expression subclasses.
  • Pre-conditions, Post-conditions, and Intermediate Actions: Classes can now define pre, act, and post methods to enforce input validation, intermediate processing, and output validation — all with optional LLM-based remedies.
  • Rich Input and Output Modeling: Mandatory use of LLMDataModel (Pydantic-based) for all contract-associated data, providing structured validation and schema-driven prompting.
  • LLM-Guided Self-Remediation: If enabled, contract failures in pre-conditions or post-conditions can be self-corrected by LLMs using descriptive error messages as corrective prompts.
  • Enhanced Composability and Reliability: Contracts make AI components more robust and predictable, aiding in both integration and maintenance.
  • Clear Fallback Behavior: Even when contracts fail, the original forward logic still executes, with a clear success indicator and the contract result available for user-defined fallback strategies.
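
A condensed sketch of this pattern. The import paths (symai.strategy.contract, symai.models.LLMDataModel) and the contract_successful / contract_result members follow the contracts guide but should be verified against FEATURES/contracts.md:

```python
from pydantic import Field
from symai import Expression
from symai.models import LLMDataModel  # assumed import path
from symai.strategy import contract    # assumed import path


class Question(LLMDataModel):
    text: str = Field(description="A single, clearly phrased user question.")


class Answer(LLMDataModel):
    text: str = Field(description="A concise answer of at most two sentences.")


@contract(pre_remedy=True, post_remedy=True, accumulate_errors=True)
class QA(Expression):
    @property
    def prompt(self) -> str:
        # Static contract prompt guiding the LLM when producing the output model.
        return "Answer the user's question accurately and concisely."

    def pre(self, input: Question) -> bool:
        # Error messages double as corrective prompts when remedies are enabled.
        if not input.text.strip():
            raise ValueError("The question must not be empty.")
        return True

    def post(self, output: Answer) -> bool:
        if len(output.text) > 300:
            raise ValueError("Keep the answer under 300 characters.")
        return True

    def forward(self, input: Question, **kwargs) -> Answer:
        # Forward always runs; fall back gracefully if the contract failed.
        if self.contract_successful:       # members as described in the contracts guide
            return self.contract_result
        return Answer(text="Sorry, no validated answer could be produced.")


qa = QA()
print(qa(input=Question(text="What is Design by Contract?")).text)
```

The instance is called with a keyword argument (input=...), matching the contract's input handling.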

2. Act Method Support

  • Contracts can include an optional act method for intermediate transformations between input validation and output production, with strict signature/type checking.

3. Enhanced Logging & Verbose Mode

  • Verbose mode uses rich panels for visually appealing and structured logging of contract-related operations, error panels, schemas, and dynamic prompts.

4. Error Accumulation Option

  • New accumulate_errors switch for contract decorators allows error messages to be accumulated and shown to the model across multiple remedy attempts to aid LLM self-correction.

5. Developer Tooling Improvements

  • contract_perf_stats() provides per-call timing information to help optimize contract execution and debugging.

6. Extensive Documentation

  • New FEATURES/contracts.md: Comprehensive guide on contracts, their parameters, execution flow, developer patterns, and practical examples.

7. Detailed Testing

  • Addition of tests/contract/test_contract.py with thorough coverage for contract flow, act method, fallback logic, signature checks, contract state management, and various edge cases.

🐞 Bug Fixes & Minor Improvements

  • input_type_validation error message is now more detailed and informative.
  • Contract decorator and remedies now properly distinguish between positional and keyword arguments, preventing ambiguity.
  • Output type validation strictly checks for correct types, preventing silent contract malfunctions.
  • Original forward argument passing has been improved to ensure correct input after contract handling.
  • Improved clarity in docstrings, method comments, and public documentation for easier onboarding.
  • Numerous logging messages refined for clarity and utility.

📚 Documentation

  • docs/source/FEATURES/contracts.md: In-depth, example-driven guide to symbolic contracts, covering all decorator parameters, remedy workflow, fallback/forward execution, and best practices.
  • SUMMARY.md is updated to include the new Contracts section in documentation navigation.

Full Changelog: v0.9.5...v0.10.0

v0.9.5

06 May 12:54

Release Notes – v0.9.5

Bug Fixes & Improvements

  • Packaging Improvements:

    • Updated pyproject.toml to change the [tool.setuptools.packages.find] include pattern from ["symai"] to ["symai*"], fixing an import bug by ensuring that all subpackages (e.g., symai.submodule) are correctly included in package builds and distributions.
  • Testing Configuration:

    • Adjusted pytest.ini to deselect the test tests/engines/neurosymbolic/test_nesy_engine.py::test_token_truncator, likely to address test flakiness or to temporarily ignore a known issue.
    • Minor cleanup to remove an unnecessary trailing line.
  • General Maintenance:

    • Version bumped from 0.9.4 to 0.9.5 in symai/__init__.py to reflect the new release.
    • Updated .gitignore to ignore .bash_history files, helping prevent accidental commits of shell history.

Full Changelog: v0.9.4...v0.9.5

v0.9.4

02 May 21:14

Release Notes for v0.9.4


🔧 Improvements

  • Updated Documentation Link
    • Changed the main documentation badge and link in the README from ReadTheDocs to the new GitBook documentation.
    • Added new Twitter badge for @futurisold to the README alongside existing social and contribution links.

🐛 Bug Fixes

  • No bug fixes were included in this release.

🔢 Version Update

  • Bumped the version in symai/__init__.py from 0.9.3 to 0.9.4.

Full Changelog: v0.9.3...v0.9.4

v0.9.3

02 May 21:06

Release Notes

Version 0.9.3

🆕 New Features

  • Neuro-Symbolic Engine Documentation
    • Completely new, comprehensive documentation added for the "Neuro-Symbolic Engine" (docs/source/ENGINES/neurosymbolic_engine.md).
    • Covers usage patterns, backend differences, function/tool calls, JSON enforcement, thinking trace, vision input, token handling, preview mode, and more.
    • Highlights model-specific usage (OpenAI, Claude, Deepseek, llama.cpp, HuggingFace).
  • Documentation Overhaul
    • Switched documentation system to GitBook structure:
      • New .gitbook.yaml configuration pointing to Markdown-based docs.
      • Added SUMMARY.md for navigation and topic overview.
    • Documentation hierarchy is now streamlined and modernized.
    • Reorganized Engines, Features, Tutorials, and Tools in clear sections.
  • Enhanced Argument Support
    • Argument class in symai/core.py now always initializes return_metadata property, improving consistency and capability for backend engines to return extra metadata.

🐛 Bug Fixes

  • Anthropic Claude Engine Fixes
    • Fixed empty prompt edge case: ensures the user prompt is non-empty (defaults to "N/A") to avoid Anthropic API errors.
    • Proper handling of the JSON response format by stripping the wrapping Markdown code fences (```json blocks) so the output is pure JSON.
    • When "thinking trace" is enabled, metadata is correctly populated with the model's "thinking" output.
  • DeepSeek Reasoning Engine Fixes
    • Now always returns answer content as the main output and thinking trace under metadata["thinking"], matching documented examples.

⚡ Other Improvements

  • Docs Clean-Up
    • Removed all Sphinx-based files and reST (.rst) sources, including configuration files, API reference, and build artifacts. Old ReadTheDocs and Sphinx themes are now deprecated.
    • Updated all doc links and cross-references to work with the new Markdown- and GitBook-based structure.
  • Documentation Content Improvements
    • More explicit explanations and structure in Features, Tools, and Tutorials (headings, options, and section hierarchies improved).
    • Outdated rst-formatted docs are removed, new Markdown-based docs are in place.

🔧 Internal/Infrastructure

  • Incremented the project version to 0.9.3 (SYMAI_VERSION = "0.9.3" is now set explicitly in the codebase).
  • Set up for future multi-engine documentation and easier addition of new backends or features.

Full Changelog: v0.9.2...v0.9.3

v0.9.2

27 Apr 16:05

Release Notes: v0.9.2


✨ New Features

  • Unified Drawing Interface

    • Added a new high-level drawing interface with two main options:
      • gpt_image: Unified wrapper for OpenAI image APIs (supports dall-e-2, dall-e-3, gpt-image-*). Exposes OpenAI’s full Images API, including advanced parameters (quality, style, moderation, background, output_compression, variations, edits—see updated docs).
      • flux: Simplified interface for Black Forest Labs’ Flux models via api.us1.bfl.ai.
    • Both interfaces now return a list of local PNG file paths for easy downstream consumption.
    • Documented all parameters and new interface usage for both engines (see the usage sketch after this list).
  • New Engines

    • Added symai.backend.engines.drawing.engine_gpt_image for OpenAI's latest Images API.
    • Deprecated/removed legacy engine_dall_e.py in favor of unified engine_gpt_image.py.
  • Extended Interfaces

    • New public classes: symai.extended.interfaces.gpt_image and updated flux interface for consistency and enhanced discoverability.
    • Added comprehensive tests for drawing engines covering all models and modes (create, variation, edit).
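
A usage sketch for the two interfaces above; the model and parameter names mirror the bullets and OpenAI's Images API, but the exact keyword set accepted by the wrappers is an assumption (see the drawing engine documentation):

```python
from symai import Interface  # top-level import assumed

# OpenAI-backed unified image interface; returns a list of local PNG file paths.
gpt_image = Interface('gpt_image')
paths = gpt_image('a watercolor fox in a snowy forest', model='gpt-image-1')  # model name assumed
print(paths)

# Black Forest Labs Flux via api.us1.bfl.ai; also returns local PNG file paths.
flux = Interface('flux')
print(flux('a watercolor fox in a snowy forest'))
```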

🛠️ Improvements & Fixes

  • Flux Engine

    • Now downloads result images as temporary local PNG files.
    • Uses the correct API endpoint (api.us1.bfl.ai).
    • Cleaner error handling; API parameters are now robust against None values.
  • OpenAI Model Support

    • Added support for cutting-edge OpenAI models:
      • Chat/Vision: gpt-4.1, gpt-4.1-mini, gpt-4.1-nano
      • Reasoning: o4-mini, o3
    • Updated max context/response tokens for new models (gpt-4.1* supports up to ~1M context, 32k response tokens).
    • Tiktoken fallback: if encoding initialization fails or a new OpenAI model is not yet recognized, the engine falls back to the "o200k_base" encoding and emits a warning (see the sketch after this list).
  • OpenAI Mixin Enhancements

    • Refined token calculations and model support for new OpenAI and BFL models.
    • Ensured consistent handling of context/response tokens as new models are released.
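
The tiktoken fallback mentioned above works roughly like the following sketch (illustrative only, not the engine's actual code; encoding_for is a hypothetical helper):

```python
import tiktoken


def encoding_for(model_name: str) -> tiktoken.Encoding:
    """Return the tokenizer for a model, falling back to o200k_base for models
    tiktoken does not recognize yet (the real engine also emits a warning)."""
    try:
        return tiktoken.encoding_for_model(model_name)
    except KeyError:
        return tiktoken.get_encoding("o200k_base")


print(len(encoding_for("gpt-4.1-nano").encode("Hello, SymbolicAI!")))
```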

📚 Documentation

  • Overhauled docs/source/ENGINES/drawing_engine.md:
    • Clearly describes new unified drawing API, how to use models, available parameters, and best practices.
    • Includes ready-to-use code examples for both OpenAI and Flux pathways.

🧪 Testing

  • Comprehensive pytest suite for drawing engines now included (tests/engines/drawing/test_drawing_engine.py).
  • Tests gpt_image create, variation, edit; tests Flux for all supported models.
  • Verifies correct output (generated images exist and are valid).

⚠️ Breaking/Behavioral Changes

  • Legacy DALL·E Engine removed (engine_dall_e.py). Use gpt_image for all OpenAI image generation.
  • All engine calls now return image file paths (as a list), not just URLs.
  • Some parameter names and behaviors have changed (see updated docs).

If you use programmatic image generation, especially OpenAI’s DALL·E or gpt-image models, please update your code and refer to the new documentation. The new design offers greater flexibility, future-proofing for new models and APIs, and consistent developer ergonomics.


Full Changelog: v0.9.1...v0.9.2

v0.9.1

07 Apr 11:05

Release Notes for Version 0.9.1

New Features

  • Dynamic Engine Switching: Introduced the DynamicEngine context manager, which allows switching neurosymbolic engine models on the fly, making it easy to use different models within the same context.
  • Engine Mapping for Neurosymbolic Engines: Added a new ENGINE_MAPPING that maps supported model names to their respective engine classes for easier integration and management.
  • Split Model Support: The models for Anthropic, DeepSeek, and OpenAI are now split into distinct categories (Chat, Reasoning, Completion, and Embedding) for clearer management.

Improvements

  • Config Management Enhancement: Replaced multiple instances of self.config = SYMAI_CONFIG with self.config = deepcopy(SYMAI_CONFIG) to ensure configurations are isolated for each engine instance.
  • Enhanced Logging and Error Handling: Improved logging details including stack traces for better debugging and error tracking within the _process_query and _process_query_single functions.
  • Functionality Testing and Validation: Added several new test cases, especially focusing on testing dynamic engine switching and fallback query executions to ensure robustness.

Bug Fixes

  • Token Computation Correction: Fixed the incorrect computation of artifacts in the GPTXChatEngine and GPTXReasoningEngine classes.
  • Payload Adjustments: Adjusted payload preparation for the GPTXChatEngine, especially for chatgpt-4o-latest, ensuring certain fields are correctly omitted.
  • Argument Preparation Bug Fixes: Fixed issues in _prepare_argument to properly handle raw input and enhance preprocessing capabilities.
  • Self Prompt Improvements: Improved self-prompting logic in Symbol to ensure correct responses and validation.
  • Signature and Type Annotations: Updated the use of inspect.Signature methods for resolving return annotations, ensuring compatibility with Python's typing system.

Others

  • Refactoring & Cleanup: Conducted significant code refactoring and cleanup, including reorganizing the test suite and renaming test files for better maintainability and clarity.
  • Warning & Constraint Handling: Adjusted warnings and constraint handling to improve message clarity for developers working with the library.

Full Changelog: v0.9.0...v0.9.1

v0.9.0

18 Mar 22:07

These changes expand SymbolicAI's capabilities with next-generation models from OpenAI, DeepSeek, and Anthropic.

Major Changes

New Models Support

  • Added support for Claude 3.7 Sonnet with extended thinking capabilities
  • Added support for OpenAI's o1 and o3-mini models with reasoning mode
  • Added DeepSeek Reasoner model support

New Reasoning Features

  • Implemented structured reasoning support across multiple LLM providers:
    • Claude 3.7 with extended thinking (up to 64k tokens for thinking)
    • OpenAI's o1/o3 models with reasoning mode
    • DeepSeek Reasoner with explicit reasoning capabilities

Engine Improvements

  • Refactored Anthropic Claude engines for improved response handling
  • Added support for streaming responses with Claude models
  • Improved token counting and context management
  • Enhanced tool use support across different model providers

Architecture Changes

  • Modularized request payload preparation with cleaner code structure
  • Improved error handling for API interactions
  • Added consistent handling for reasoning/thinking outputs

Developer Experience

  • Better handling of max_tokens vs max_completion_tokens for OpenAI models
  • More consistent self-prompting behavior
  • Enhanced JSON response format support

Dependencies

  • Added loguru (≥0.7.3) for improved logging
  • Added aiohttp (≥3.11.13) for async HTTP requests

Version Update

  • Increased version from 0.8.0 to 0.9.0

Full Changelog: v0.8.0...v0.9.0

v0.8.0

15 Mar 10:01

This release significantly improves the framework's configuration management, local model support, and validation capabilities while maintaining backward compatibility where possible. Users should review the new configuration management guide when upgrading.

Major Features

New Priority-Based Configuration System

  • Introduced a hierarchical configuration management system with three priority levels:
    1. Debug Mode (Current Working Directory) - Highest priority
    2. Environment-Specific Config (Python Environment) - Second priority
    3. Global Config (Home Directory) - Lowest priority
  • Added symconfig command to inspect current configuration setup
  • Configurations now properly cascade and fall back based on priority

Enhanced Contract System

  • Added new contract decorator for Design by Contract (DbC) pattern
  • Supports both type and semantic validation
  • Includes retry mechanisms and performance monitoring
  • Added comprehensive performance statistics tracking for contract execution

Improved Local Model Support

  • Extended support for local LLaMA.cpp models:
    • Added embedding capabilities through local models
    • Support for both Python bindings and direct C++ server
    • Added batch processing for embeddings
  • Enhanced server configuration options for local models

Package Management Improvements

  • Enhanced sympkg with new features:
    • Support for local package installation
    • Git submodules initialization option
    • Improved package update mechanism
  • Added --local-path option for installing from local directories
  • Added --submodules flag for Git repository operations

Breaking Changes

  • Configuration file locations have changed due to new priority system
  • Environment variables structure updated for speech-related settings
  • Some API methods now return different types/structures
  • Updated dependency requirements:
    • numpy: Now supports up to 2.1.3
    • openai: Minimum version increased to 1.60.0

New Features

  • Added MetadataTracker for better usage tracking and statistics
  • Enhanced token truncation system with smart percentage calculation
  • Added new validation primitives for type and semantic checking
  • Improved error handling and reporting
  • Added new data models for structured input/output

Improvements

  • Better handling of JSON validation and error correction
  • Enhanced error messages and logging
  • Improved documentation structure
  • Better support for local development workflows
  • Enhanced configuration management utilities

Dependencies

  • Added new dependencies:
    • nest-asyncio>=1.6.0
    • rich>=13.9.4
  • Optional dependency for LLaMA.cpp: llama-cpp-python[server]>=0.3.7

Documentation

  • Reorganized API documentation structure
  • Added comprehensive configuration management guide
  • Improved package management documentation
  • Added new examples and use cases
  • Enhanced local engine documentation

Bug Fixes

  • Fixed configuration cascade issues
  • Improved error handling in package management
  • Fixed token counting in various scenarios
  • Addressed memory leaks in long-running processes
  • Fixed various edge cases in validation systems

Developer Tools

  • Added new symconfig command for configuration inspection
  • Enhanced symdev and sympkg utilities
  • Improved debugging capabilities
  • Added performance monitoring tools

Full Changelog: v0.7.4...v0.8.0