Releases: solliancenet/foundationallm
Release 0.9.7
FoundationaLLM 0.9.7 Pre-Release Notes
Introduction
Welcome to FoundationaLLM version 0.9.7! This release includes new features, enhancements, performance improvements and bug fixes. Below is a detailed summary of the changes.
Enhancements and Features
New Agent Types
- This first release of model-agnostic agents introduces a new way to create agents that are not tied to a specific language model or to the OpenAI Assistants API; instead, any function-calling model can be used to invoke tools. Supported models include GPT and Claude models, with support for Gemini coming soon.
- Agents can now also be created that directly invoke agents created in Azure AI Foundry Agent Service.
- Agents can be created that utilize the Azure AI Inference API to invoke models from Azure AI Foundry.
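To make the "any function-calling model" idea concrete, the sketch below shows how a single provider-neutral tool definition can be translated into the two wire formats used by OpenAI-style and Anthropic-style function calling. The neutral dictionary shape here is an illustrative assumption, not the FoundationaLLM internal representation.

```python
# Sketch: one neutral tool definition, two provider-specific envelopes.
# The neutral format is an assumption for illustration only.

def to_openai_tool(tool: dict) -> dict:
    """Wrap a neutral tool definition in the OpenAI 'function' envelope."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool["description"],
            "parameters": tool["parameters"],
        },
    }

def to_anthropic_tool(tool: dict) -> dict:
    """Anthropic's Messages API expects the JSON Schema under 'input_schema'."""
    return {
        "name": tool["name"],
        "description": tool["description"],
        "input_schema": tool["parameters"],
    }

# Hypothetical tool used only to demonstrate the translation.
weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

print(to_openai_tool(weather_tool)["function"]["name"])
print(to_anthropic_tool(weather_tool)["input_schema"]["required"])
```

Keeping tool definitions provider-neutral and translating at the edge is what lets the same agent run against GPT or Claude without changes.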
New Tools
New tools for agents to use include:
- KnowledgeTool for performing file search and vector data store retrieval.
- Code Interpreter tool allows for code execution in the conversation, using Azure Container Apps dynamic sessions or custom containers.
- SQL Tool allows for the execution of SQL queries against a database.
- KQL Tool allows for the execution of KQL queries against a Kusto database.
- File Analysis Tool allows for the analysis of Parquet files in cloud storage.
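The essence of a tool like the SQL Tool above can be sketched as a function the agent invokes with a query string. The read-only guard and function name below are illustrative assumptions, not the actual tool API; SQLite stands in for whatever database the tool targets.

```python
import sqlite3

# Sketch of a SQL tool an agent could call. The read-only guard is an
# illustrative assumption; a real tool would use proper authorization.
def sql_tool(connection: sqlite3.Connection, query: str) -> list:
    """Execute a read-only SQL query and return the rows."""
    if not query.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT statements are allowed")
    return connection.execute(query).fetchall()

# Demonstration against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])
rows = sql_tool(conn, "SELECT id, total FROM orders WHERE total > 10")
print(rows)  # [(2, 20.0)]
```

The KQL Tool follows the same pattern, with a Kusto client in place of the SQLite connection.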
Data Pipelines
- Data Pipelines can now be created to transform and prepare data for use by an agent. They are used transparently by the file upload capability in the User Portal and can also be invoked programmatically via the API.
Semantic Cache & Prompt Rewriting
- Semantic cache, configurable in the Management Portal, reduces the number of calls to the language model when enabled.
- Prompt rewriting, configurable in the Management Portal, enables better tool behavior in conversations and better cache hits with the semantic cache by using an LLM to rewrite the user prompt before sending it on.
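The core decision a semantic cache makes can be sketched as follows: embed the incoming prompt, compare it to cached prompt embeddings, and return the stored completion when similarity clears a threshold. The class name, threshold, and in-memory storage are illustrative assumptions; the actual cache is configured in the Management Portal.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class SemanticCache:
    """Minimal sketch: embeddings are supplied by the caller (stubbed here)."""
    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, completion) pairs

    def lookup(self, embedding):
        for cached_embedding, completion in self.entries:
            if cosine(embedding, cached_embedding) >= self.threshold:
                return completion  # cache hit: skip the model call
        return None

    def store(self, embedding, completion):
        self.entries.append((embedding, completion))

cache = SemanticCache(threshold=0.9)
cache.store([0.1, 0.9, 0.0], "Paris is the capital of France.")
print(cache.lookup([0.12, 0.88, 0.01]))  # similar prompt -> cached answer
print(cache.lookup([0.9, 0.0, 0.1]))     # unrelated prompt -> None
```

Prompt rewriting helps here precisely because paraphrased prompts rewritten into a canonical form produce closer embeddings, raising the hit rate.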
Quota Management
- Added quota support for Core API raw requests and agent completion requests, both by request count.
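A request-count quota of this kind typically amounts to a counter over a time window. The sketch below shows a minimal fixed-window version; the limit, window length, and class name are illustrative assumptions, as actual quotas are configured in the platform.

```python
import time

class CountQuota:
    """Minimal fixed-window request-count quota (illustrative sketch)."""
    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.window_start = time.monotonic()
        self.count = 0

    def try_acquire(self) -> bool:
        now = time.monotonic()
        if now - self.window_start >= self.window:
            # Window elapsed: reset the counter.
            self.window_start, self.count = now, 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False  # quota exceeded; caller should reject or throttle

quota = CountQuota(limit=2, window_seconds=60)
print([quota.try_acquire() for _ in range(3)])  # [True, True, False]
```

Production systems usually prefer sliding windows or token buckets to avoid burst effects at window boundaries; the fixed window keeps the sketch short.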
Agent Test Harness
- Automates the execution of prompts defined in a CSV file against an agent, with support for file uploads.
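The harness pattern described above can be sketched in a few lines: read prompts from a CSV, send each to the agent, and collect the responses. The column name and the stubbed `send_prompt` function are assumptions for illustration; a real harness would call the agent's completion endpoint.

```python
import csv
import io

def send_prompt(agent: str, prompt: str) -> str:
    """Stand-in for a real completion call against the agent's API."""
    return f"[{agent}] echo: {prompt}"

def run_harness(agent: str, csv_text: str) -> list:
    """Run every prompt in the CSV against the agent; return prompt/response pairs."""
    results = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        results.append({"prompt": row["prompt"],
                        "response": send_prompt(agent, row["prompt"])})
    return results

csv_text = "prompt\nWhat is 2+2?\nSummarize the attached file.\n"
for result in run_harness("test-agent", csv_text):
    print(result["prompt"], "->", result["response"])
```

Adding a column for an expected answer or a file path extends the same loop to scoring and file-upload cases.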
Improvements
- Agents now maintain a history of files uploaded to them and can be asked about the files available to them.
- Various improvements to the user experience in the User Portal and the Management Portal.
- Performance improvements to the authorization cache.
Contact Information
For support and further inquiries regarding this release, please reach out to us:
- Support Contact: https://foundationallm.ai/contact
- Website: FoundationaLLM
Conclusion
We hope you enjoy the new features and improvements in FoundationaLLM version 0.9.7. Your feedback continues to be instrumental in driving our product forward. Thank you for your continued support.
Release 0.9.7-beta135.5
FoundationaLLM Version 0.9.7-beta135.5 Release Notes
Introduction
This is a hotfix release that includes fixes for the LangChain API.
Fixes
- Fixes an issue with the `ChatBedrockConverse` LangChain chat model not having its timeout correctly set via the `botocore` library.
- Pins the versions of the OpenTelemetry libraries to ensure compatibility with AKS.
Impact
The following APIs are impacted by this hotfix:
- LangChain API
Release 0.9.7-beta135.4
FoundationaLLM Version 0.9.7-beta135.4 Release Notes
Introduction
This is a hotfix release that includes a fix for the Management Portal.
Fixes
- Fixes an issue in the Management Portal where the assistant and vector store identifier properties for Azure OpenAI Assistants workflows were not managed correctly, resulting in their re-creation every time an agent was updated. This caused inconsistencies in the private stores of agents by losing the vectorizations of the associated files.
Impact
The following components are impacted by this hotfix:
- Management Portal
Release 0.9.7-beta135.3
FoundationaLLM Version 0.9.7-beta135.3 Release Notes
Introduction
This is a hotfix release that includes a security fix.
Security Fixes
- Fixes an issue with the order in which role assignments are evaluated, which could result in ignoring an assignment of the `Agent Access Tokens Contributor` role.
Impact
The following APIs are impacted by this hotfix:
- Management API
- Authorization API
Release 0.9.7-beta135.2
FoundationaLLM Version 0.9.7-beta135.2 Release Notes
Introduction
This is a hotfix release that includes a security fix.
Security Fixes
- Fixes an issue with role assignment deletion where certain role assignments remained cached in the Authorization API.
Release 0.9.7-beta135.1
FoundationaLLM Version 0.9.7-beta135.1 Release Notes
Introduction
This is a hotfix release that includes security and vectorization fixes and improvements.
Security Fixes
- Enforces conversation ownership in addition to agent ownership for completion requests in the Core API. This prevents a user with agent permissions from using a conversation that the user does not own.
- Adds a new security role named `Data Pipelines Contributors` that grants non-administrator users the permissions needed to create, run, and monitor vectorization pipelines. Membership for this role should be set at the FoundationaLLM instance level.
- Adds a new security role named `Agents Contributors` that grants non-administrator users the permissions needed to create new agents. Membership for this role should be set at the FoundationaLLM instance level.
- Adds a new security role named `Agent Access Tokens Contributors` that enables its members to manage agent access tokens. A user must have write access to the agent and be a member of this role in order to manage agent access tokens.
Vectorization Improvements
- Improves the performance of vectorization pipelines when the Vectorization Worker service is set with a low latency processing cycle.
- Exposes vectorization pipeline execution details via the Management API.
- Adds a Python sample demonstrating end-to-end use of vectorization pipelines via the Management API.
Release 0.9.2
FoundationaLLM Version 0.9.2 Release Notes
Introduction
Welcome to FoundationaLLM version 0.9.2! This release includes new features, enhancements, performance improvements and bug fixes. Below is a detailed summary of the changes.
Enhancements and Features
- Added CheckName action for APIEndpointConfiguration resources
- Added default subcategory values on configuration resources
- Added the Agent workflow and tools options in the Management Portal UX
- Added Prompt category and create Prompt option to the Management Portal UX
- Removed orchestration settings Agent validation
- Updated the agent workflow check, where capabilities should rely only on workflow settings for Azure OpenAI Assistants.
- Removed legacy agent AI model and prompt settings
- Added message image from content artifact
Bug Fixes
- Fixed orchestration selection logic
- Cleaned up Prompt form and updated the content artifact style
- Fixed invalid chat session query in URL on startup
Improvements
- Improved the Mobile view for the Management Portal
- Populated OpenAI Assistant information in workflow
- Improved the generation of content artifacts by the DALL-E tool
- Improved user portal toast in the UX
- Improved deployment changes in support of 0.9.2 QuickStart and Standard
Contact Information
For support and further inquiries regarding this release, please reach out to us:
- Support Contact: https://foundationallm.ai/contact
- Website: FoundationaLLM
Conclusion
We hope you enjoy the new features and improvements in FoundationaLLM version 0.9.2. Your feedback continues to be instrumental in driving our product forward. Thank you for your continued support.
Release 0.9.1
FoundationaLLM Version 0.9.1 Release Notes
Introduction
Welcome to FoundationaLLM version 0.9.1! This release includes new features, enhancements, performance improvements, bug fixes, and updates to the documentation. Below is a detailed summary of the changes.
Enhancements and Features
- Added the initial definition of the AgentWorkflow classes.
- Added Private Storage component per agent.
- Added a username tooltip and a tooltip component to the Management Portal
- Introduced Amazon Bedrock as a Language Provider and added EntraID managed identity to its service.
- Introduced LangGraph ReAct Workflow
- Added PackageName to AgentTool
- Initial implementation of the DALL-E Image Generation tool
- Introduced Python Tool Plugins for agents
- Added IndexingProfileObjectIds and TextEmbeddingModelNames to AgentTool
- Added the ability to export chat conversations in User Portal
- Added FoundationaLLM Skunkworks for experimental LangChain tools
- Added support for agent access tokens instead of using EntraID
- Added in-memory cache for resource providers
- Added semantic search and reranker to Azure AI Search retriever
- Added Semantic caching
Bug Fixes
- Fixed issue with a null agent capabilities property when loading the agent in the Management Portal
- Added CosmosDB Data Contributor Role to Gatekeeper API
- Event Grid: services no longer need to be manually restarted after an agent is created or updated.
- Several fixes for accessibility in the Management Portal and the Chat Portal.
Improvements
- Improved agent listing in the User Portal
- Improved User Portal conversation
- Added several Deployment updates to make Quick Start and Standard deployments smoother and faster
- Documented the new Branding capabilities in the Management Portal.
- Renamed Citation class to ContentArtifact and parse it from ToolMessages
- Enhanced CoreAPI authentication
- Added RunnableConfig to LangGraph call to support passing vars to tools
- Added tools array to default agent resource template
- Improved logging capabilities in the Python SDK
- Implemented rating comments in the backend
- Allow for the conditional display of tokens, prompt, rating, and comments in the Management Portal
- Extended the use of OpenTelemetry to Core API entry points
- Added prompt editor to the Management Portal
- Linked LangChain API tracing to main FoundationaLLM tracing
- Enabled optional persistence of completion requests
- Added optional sqlalchemy dependency
- Improved telemetry hierarchy organization
- Updated Certbot to use Ubuntu 22.04 and use RSA when calling Certbot for Standard Deployments
- Added new documentation for Standard Deployment.
- Added Vector Stores for indexing profiles in the management portal
- Added aiohttp library to allow async HTTP requests in Python SDK instead of using the requests package.
Contact Information
For support and further inquiries regarding this release, please reach out to us:
- Support Contact: https://foundationallm.ai/contact
- Website: FoundationaLLM
Conclusion
We hope you enjoy the new features and improvements in FoundationaLLM version 0.9.1. Your feedback continues to be instrumental in driving our product forward. Thank you for your continued support.
Release 0.8.4
FoundationaLLM Version 0.8.4 Release Notes
Introduction
Welcome to FoundationaLLM version 0.8.4! This release includes new features, enhancements, performance improvements, bug fixes, and updates to the documentation. Below is a detailed summary of the changes.
Enhancements and Features
- Polymorphic Serialization Support for Agents: Addressed the issue of all agents being deserialized as `AgentBase` due to the lack of polymorphic serialization attributes.
- KeyVault URI Addition to ACA Deployment
- Management Portal Agent Model: Fixes breaking changes to the API layer from the Management Portal and adds an optional `agent_prompt` in the internal context agent.
- PPTX Text Extraction Support
Bug Fixes
- Removal of OpenTelemetry from GatekeeperIntegrationAPI: `GatekeeperIntegrationAPI` does not reference the PythonSDK, so it cannot access the Telemetry class. OpenTelemetry has been temporarily removed from `GatekeeperIntegrationAPI`.
- App Config and Connection String Validations: Fixes an issue with how this environment variable is passed into deployed images and updates the resource locator logic when deleting an agent.
- Issue Fix with Vectorization Resource Providers: Configuration values were not being synchronized across multiple instances of various services.
- Fix Authorization Errors and Inconsistencies: Managed identity-based authentication was not working with the Authorization API.
Improvements
- Update Host File Generator: The old method of generating host files missed one of the hosts related to Cosmos DB.
- Enhanced Data Lake Storage and Vectorization Capabilities: HNS was not enabled on the quick-start storage account
- Event Profiles and Grid Resources: Added Event Grid resources to support configuration change events.
- Enhanced Polymorphism and Management Portal UI: Fixes the lack of proper serialization polymorphism in vectorization profiles.
Contact Information
For support and further inquiries regarding this release, please reach out to us:
- Support Contact: https://foundationallm.ai/contact
- Website: FoundationaLLM
Conclusion
We hope you enjoy the new features and improvements in FoundationaLLM version 0.8.4. Your feedback continues to be instrumental in driving our product forward. Thank you for your continued support.
Release 0.8.3
FoundationaLLM Version 0.8.3 Release Notes
Introduction
Welcome to FoundationaLLM version 0.8.3! This release includes new features, enhancements, performance improvements, bug fixes, and updates to the documentation. Below is a detailed summary of the changes.
New Features
- Management Portal UI Adjustments: Enhanced the user management interface to improve usability and aesthetics.
- Content Identification and Vectorization: Improvements to the content identification process and vectorization algorithms.
- Vectorization Unit Tests: Implemented new unit tests for comprehensive testing of vectorization features.
- Agent-to-Agent Conversations: Added support for agent-to-agent conversations, enhancing overall interaction capabilities by bringing other agents into a conversation using the `@agent` pattern.
- Enkrypt Guardrails: Integrated the Enkrypt Guardrails service with the Gatekeeper API.
- Prompt Shields: Integrated the Prompt Shields service with the Gatekeeper API.
Enhancements
- Management Portal Branding: Improved branding elements within the management portal for a more cohesive visual identity.
- Config Resource Provider: Added missing configuration health checks to ensure system stability.
- Python Resource Provider Defaults: Set default values for Python-based resource providers to streamline configurations.
Bug Fixes
- Text Splitting Based on Tokens: Fixed issues with text splitting when token limits are reached.
- Invalid Parameter Removal: Corrected parameter issues in application Bicep files to prevent configuration errors.
- Vectorization Worker Build: Fixed build issues related to the vectorization worker to ensure smooth deployment.
- OpenTelemetry Integration Issues: Addressed reference issues for integrating OpenTelemetry across various APIs.
- Legacy Agent Selection: Added support for appending legacy agent names through the `FoundationaLLM:Branding:AllowAgentSelection` App Config setting.
- Gatekeeper API: Multiple changes to the Gatekeeper Integration API for stability.
Documentation Updates
- Knowledge Management Agent: Updated documentation to reflect changes in the knowledge management agent.
- Vectorization Request Documentation: Added and refined documentation for vectorization request processes.
- Basic API Docs Quality Checks: Conducted quality checks and updates to the basic API documentation for precision.
Performance Improvements
- Vectorization Optimizations: Updated algorithms and internal processes to significantly boost performance.
- Event Handling Support: Generalized event handling improvements to ensure robust processing across different scenarios.
- Refined Object Identifiers: Enhanced mechanisms for managing agent and vectorization profiles to reduce overhead and increase efficiency.
Contact Information
For support and further inquiries regarding this release, please reach out to us:
- Support Contact: https://foundationallm.ai/contact
- Website: FoundationaLLM
Conclusion
We hope you enjoy the new features and improvements in FoundationaLLM version 0.8.3. Your feedback continues to be instrumental in driving our product forward. Thank you for your continued support.