1. Removed reasoning_effort from parameters [PR](https://github.com/BerriAI/litellm/pull/9811)
2. Fixed custom endpoint check for Databricks [PR](https://github.com/BerriAI/litellm/pull/9925)

**General**

1. Added litellm.supports_reasoning() util to track if an LLM supports reasoning [Get Started](https://docs.litellm.ai/docs/providers/anthropic#reasoning)
2. Function Calling - Handle pydantic base model in message tool calls, handle tools = [], and support fake streaming on tool calls for meta.llama3-3-70b-instruct-v1:0 [PR](https://github.com/BerriAI/litellm/pull/9774)
3. LiteLLM Proxy - Allow passing `thinking` param to litellm proxy via client sdk [PR](https://github.com/BerriAI/litellm/pull/9386)
4. Fixed translation of the 'thinking' param for litellm [PR](https://github.com/BerriAI/litellm/pull/9904)
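The `litellm.supports_reasoning()` util follows the pattern of litellm's other `supports_*` helpers: look the model up in a capability map and return a boolean. A minimal sketch of that pattern (the capability map below is illustrative, not litellm's real model data):

```python
# Illustrative capability-lookup helper in the style of
# litellm.supports_reasoning(); the model map is made up for this example.
MODEL_CAPABILITIES = {
    "anthropic/claude-3-7-sonnet": {"supports_reasoning": True},
    "openai/gpt-4o-mini": {"supports_reasoning": False},
}

def supports_reasoning(model: str) -> bool:
    """Return True if the model is known to support reasoning output."""
    return MODEL_CAPABILITIES.get(model, {}).get("supports_reasoning", False)

print(supports_reasoning("anthropic/claude-3-7-sonnet"))  # True
print(supports_reasoning("unknown/model"))                # False
```

A check like this lets callers gate reasoning-specific params (such as `thinking`) before sending a request.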
## Spend Tracking Improvements

**OpenAI, Azure**

1. Realtime API Cost tracking with token usage metrics in spend logs [Get Started](https://docs.litellm.ai/docs/realtime)
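Realtime API cost tracking is derived from the token usage metrics now written to spend logs. A simplified sketch of the arithmetic (the per-token rates here are placeholders, not actual OpenAI or Azure pricing):

```python
# Simplified spend calculation from a usage object; rates are placeholders.
def calculate_spend(usage: dict, input_cost_per_token: float,
                    output_cost_per_token: float) -> float:
    return (usage["input_tokens"] * input_cost_per_token
            + usage["output_tokens"] * output_cost_per_token)

usage = {"input_tokens": 1000, "output_tokens": 500}
print(calculate_spend(usage, 5e-06, 2e-05))  # 0.015
```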

**Anthropic**

1. Fixed Claude Haiku cache read pricing per token [PR](https://github.com/BerriAI/litellm/pull/9834)
2. Added cost tracking for Claude responses with base_model [PR](https://github.com/BerriAI/litellm/pull/9897)
3. Fixed Anthropic prompt caching cost calculation and trimmed logged message in db [PR](https://github.com/BerriAI/litellm/pull/9838)
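Cost tracking with `base_model` maps a custom or aliased deployment onto a known model's pricing. A sketch of the proxy config shape, assuming the `model_info.base_model` convention from litellm's docs (the alias and deployment names are hypothetical):

```yaml
model_list:
  - model_name: claude-internal                  # hypothetical alias
    litellm_params:
      model: anthropic/claude-3-7-sonnet-latest  # hypothetical deployment name
    model_info:
      base_model: anthropic/claude-3-7-sonnet-20250219  # cost looked up from this model
```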

**General**

1. Added token tracking and log usage object in spend logs [PR](https://github.com/BerriAI/litellm/pull/9843)
2. Handle custom pricing at deployment level [PR](https://github.com/BerriAI/litellm/pull/9855)
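Deployment-level custom pricing attaches rates to a single `model_list` entry rather than to the model globally. A sketch using litellm's `input_cost_per_token` / `output_cost_per_token` params (deployment name and rates are placeholders):

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: azure/my-gpt-4o-deployment   # placeholder deployment
      input_cost_per_token: 0.0000025     # placeholder rate
      output_cost_per_token: 0.00001      # placeholder rate
```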
## Management Endpoints / UI

**Test Key Tab**

1. Added rendering of reasoning content, TTFT, and usage metrics on the Test Key page [PR](https://github.com/BerriAI/litellm/pull/9931)
<Image
1. Added Tag/Policy Management. Create routing rules based on request metadata. This allows you to enforce that requests with `tags="private"` only go to specific models. [Get Started](https://docs.litellm.ai/docs/tutorials/tag_management)
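Tag/Policy rules are declared per deployment; a request is only routed to deployments whose `tags` match its metadata. A sketch of the config shape from the tag-management docs (model and tag names are examples):

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: azure/private-gpt-4o   # example deployment
      tags: ["private"]             # only requests tagged "private" land here
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      tags: ["public"]

router_settings:
  enable_tag_filtering: true
```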
<br />
1. Added debug route to allow admins to debug SSO JWT fields [PR](https://github.com/BerriAI/litellm/pull/9835)
2. Added ability to use MSFT Graph API to assign users to teams [PR](https://github.com/BerriAI/litellm/pull/9865)
3. Connected litellm to Azure Entra ID Enterprise Application [PR](https://github.com/BerriAI/litellm/pull/9872)
4. Added ability for admins to set `default_team_params` for when litellm SSO creates default teams [PR](https://github.com/BerriAI/litellm/pull/9895)
5. Fixed MSFT SSO to use correct field for user email [PR](https://github.com/BerriAI/litellm/pull/9886)
6. Added UI support for setting Default Team setting when litellm SSO auto creates teams [PR](https://github.com/BerriAI/litellm/pull/9918)

**UI Bug Fixes**

1. Prevented team, key, org, model numerical values changing on scrolling [PR](https://github.com/BerriAI/litellm/pull/9776)
2. Instantly reflect key and team updates in UI [PR](https://github.com/BerriAI/litellm/pull/9825)
## Logging / Guardrail Improvements

**Prometheus**

1. Emit Key and Team Budget metrics on a cron job schedule [Get Started](https://docs.litellm.ai/docs/proxy/prometheus#initialize-budget-metrics-on-startup)
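The budget metrics are emitted on a schedule rather than only at request time. A sketch of the proxy config, assuming the setting name from the linked docs:

```yaml
litellm_settings:
  callbacks: ["prometheus"]
  prometheus_initialize_budget_metrics: true  # emit key/team budget metrics on a cron schedule
```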
## Security Fixes
- Fixed [CVE-2025-0330](https://www.cve.org/CVERecord?id=CVE-2025-0330) - Leakage of Langfuse API keys in team exception handling [PR](https://github.com/BerriAI/litellm/pull/9830)
- Fixed [CVE-2024-6825](https://www.cve.org/CVERecord?id=CVE-2024-6825) - Remote code execution in post call rules [PR](https://github.com/BerriAI/litellm/pull/9826)
## Helm
- Added service annotations to litellm-helm chart [PR](https://github.com/BerriAI/litellm/pull/9840)
- Added extraEnvVars to the helm deployment [PR](https://github.com/BerriAI/litellm/pull/9292)
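Both Helm additions are consumed through `values.yaml`. A sketch of the two value shapes (the annotation and env var below are examples, not required settings):

```yaml
service:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external  # example annotation
extraEnvVars:
  - name: LITELLM_LOG   # example env var passed to the deployment
    value: "DEBUG"
```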