Releases · raga-ai-hub/RagaAI-Catalyst
2.2.3
Changes
- Bug-fix: Fix cost calculation for values coming from litellm
- Bug-fix: Safeguard the application workflow
- Bug-fix: Exclude vital columns such as model_name, cost, latency, span_id, trace_id, etc. from masking (see the sketch after this list)
- Feat: Make model_cost a no-op function
- Bug-fix: Export all columns without any filter
- Bug-fix: Fix the total cost value in the trace details
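For the masking change above, a minimal sketch of how a custom masking hook might look; the register_masking_method name and its signature are assumptions inferred from the notes, not confirmed API:

```python
import re

from ragaai_catalyst import RagaAICatalyst, Tracer  # assumed import path

# Hypothetical masking function: redacts e-mail addresses in trace payloads.
# Per the 2.2.3 notes, structural columns (model_name, cost, latency,
# span_id, trace_id, ...) are excluded from masking by the library itself,
# so a hook like this only needs to handle free-text values.
def mask_emails(value: str) -> str:
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "<EMAIL>", value)

catalyst = RagaAICatalyst(access_key="...", secret_key="...")  # credentials elided
tracer = Tracer(project_name="demo", dataset_name="demo_traces")

# register_masking_method(...) is an assumed hook name; check the current
# docs for the exact registration API.
tracer.register_masking_method(mask_emails)
```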
2.2.1
Changes
- Feat: Unify the trace format for RAG and agentic traces
- Feat: Automatically refresh the auth token every 6 hours
- Feat: Broaden support for capturing errors
- Bug: Fix CSV upload of numerical and categorical values
- Bug: Fix metric execution error when column names contain "_"
- Bug: Fix external_id inconsistencies
- Bug: Add proper span hash ids
Full Changelog: v2.1.7.4...v2.2.1
2.1.7.4
Changes:
- Add support for add_metadata (see the sketch below)
- Mask traces
- Support error capturing for RAG
- Improve fallback for token counting
Full Changelog: v2.1.7.1...v2.1.7.4
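A rough sketch of attaching trace-level metadata via the add_metadata feature above; the method name, call site, and accepted keys are assumptions and may differ in the released API:

```python
from ragaai_catalyst import RagaAICatalyst, Tracer  # assumed import path

catalyst = RagaAICatalyst(access_key="...", secret_key="...")  # credentials elided
tracer = Tracer(project_name="demo", dataset_name="rag_runs")

# add_metadata(...) is inferred from the release bullet; the exact name and
# whether it lives on the tracer or on an individual span may differ.
tracer.add_metadata({"environment": "staging", "experiment": "prompt_v2"})
```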
2.1.7.1
Changes
- Feat: Support adding external_id (see the sketch after this list)
- Feat: Add post-processing hook and PII removal hook
- Feat: Trace upload consistency on load
- Feat: RAG tracing using OpenInference
- Feat: Test cases and CI/CD pipeline
- Bug-fix: Make list_dataset() work for a large number of datasets
- Bug-fix: Indexing error in agentic tracing
- Bug-fix: Fix crash when defining a tracer without a metadata key
- Bug-fix: add_context not working for LangChain RAG
Full Changelog: 2.1.6.4...v2.1.7.1
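A minimal sketch of tagging a traced run with an external_id so it can be correlated with an upstream request; set_external_id and the start/stop calls shown here are assumed names, not confirmed API:

```python
from ragaai_catalyst import RagaAICatalyst, Tracer  # assumed import path

catalyst = RagaAICatalyst(access_key="...", secret_key="...")  # credentials elided
tracer = Tracer(project_name="demo", dataset_name="support_bot")

# set_external_id(...) is an assumed setter inferred from the release note;
# "ticket-4711" stands in for whatever identifier your own system uses.
tracer.set_external_id("ticket-4711")

tracer.start()   # assumed session start
# ... run the instrumented application code here ...
tracer.stop()    # assumed session stop and upload
```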
2.1.6.4
What's Changed
- fix: Corrected total_cost and total_token calculation in custom agentic traces
  - Previously, these values were being incorrectly calculated or displayed.
  - Now, the logic ensures accurate display of total cost and token usage.
- feat: Associate model with response and add model_name metadata
  - Associated the LLM model used with its corresponding response to align with backend changes and provide a more complete data structure.
  - Introduced a new metadata field, model_name, specifically for analysis purposes. This will allow for easier tracking and reporting of model usage.
- fix: Graceful handling of missing LLM response data
  - Improved error handling to prevent crashes when model cost, token, or latency cannot be identified from the LLM response.
  - The system now gracefully handles cases where any of these values are missing, ensuring stability and continued operation.
2.1.6.3
What's Changed
- Bump litellm from 1.42.12 to 1.61.15 by @dependabot in #193
- Bump langchain-core from 0.2.11 to 0.2.43 by @dependabot in #194
- Make timeout configurable for agentic/<framework> and add support for setting a custom model cost for LangChain RAG by @kiranscaria in #204 (see the sketch below)
Full Changelog: 2.1.6.2...2.1.6.3
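For the custom model cost support in #204, a sketch of how a per-model price might be supplied; set_model_cost and its field names are assumptions inferred from the note, and the real API may price per token or per million tokens:

```python
from ragaai_catalyst import RagaAICatalyst, Tracer  # assumed import path

catalyst = RagaAICatalyst(access_key="...", secret_key="...")  # credentials elided
tracer = Tracer(project_name="demo", dataset_name="rag_eval")

# set_model_cost(...) and its keys are assumptions; they illustrate the idea
# of overriding pricing for a model that litellm does not know about.
tracer.set_model_cost({
    "model_name": "my-finetuned-model",   # hypothetical model identifier
    "input_cost_per_token": 0.000002,     # assumed pricing fields
    "output_cost_per_token": 0.000006,
})
```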
2.1.6.2
Changes
- Add support for tracing OpenAI Agents SDK
- Update trace schema:
  - moved recorded_on to the timestamp schema_type
  - add total_cost, total_tokens as numerical metadata
  - add model_name as categorical metadata
- Bug-fix in input_guardrails when trace_id is None
2.1.6
What's Changed
- Add auto-instrumentation support (see the sketch after this list) for:
- Langgraph
- Langchain
- CrewAI
- Haystack
- SmolAgents
- Add support for workflow (data collection) for auto-instrumentation
- Improved the guardrails flow
- Relaxed the dependencies, removed stale dependencies
- Add examples for multiple agentic frameworks
- Multiple bug-fixes
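A sketch of enabling one of the auto-instrumented frameworks above; tracer_type follows the agentic/<framework> convention mentioned in the 2.1.6.3 notes, but "agentic/langgraph" and the start/stop calls are assumed values, so check the docs for the exact strings:

```python
from ragaai_catalyst import RagaAICatalyst, Tracer  # assumed import path

catalyst = RagaAICatalyst(access_key="...", secret_key="...")  # credentials elided

# "agentic/langgraph" is an assumed tracer_type value following the
# agentic/<framework> convention mentioned in 2.1.6.3.
tracer = Tracer(
    project_name="agents_demo",
    dataset_name="langgraph_runs",
    tracer_type="agentic/langgraph",
)

tracer.start()   # assumed session start
# ... build and invoke the LangGraph application as usual; with
# auto-instrumentation, framework calls are captured without decorators ...
tracer.stop()    # assumed session stop and upload
```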
v2.1.5
Changes:
- Improve synthetic data generation
- Update redteaming
- Multiple bug-fixes
- Improve support for llamaindex tracing
v2.1.4
What's Changed
- support for trace_custom
- add workflow component
- add support for azure-openai for llm tracing
- bug-fix: resolve issues with cost and token calculation
- made metadata & pipeline optional in trace definition
- support for dynamic update of dataset name after initialisation
- support for add_metrics locally
- bug-fix: resolve issue where the code zip produced the same code hash id
- bug-fix: resolve duplicate metrics being added
- add support to debug using DEBUG=1 (see the sketch after this list)
- add support to save code from Colab & Jupyter notebooks
- add support to trace file input/output interactions
- add support to concatenate _{index} to duplicate metric names in a span
- unified time format across all files
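For the DEBUG=1 support above, a short sketch of turning it on; the assumption here is that the flag is read from the environment at import or initialisation time:

```python
# Either export the flag in the shell before running the app:
#   DEBUG=1 python my_app.py
# or set it in-process before the library is imported (assumed to be
# read when the package initialises):
import os
os.environ["DEBUG"] = "1"

from ragaai_catalyst import RagaAICatalyst, Tracer  # assumed import path
```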
New Contributors
- @keetrap made their first contribution in #136
- @joelrobin18 made their first contribution in #105
- @ujjman made their first contribution in #102
- @muditgaur-1009 made their first contribution in #111
- @NastyRunner13 made their first contribution in #114
- @VijayRagaAI made their first contribution in #143
- @Ritika1311 made their first contribution in #150