Releases: run-llama/llama_index

v0.12.41 (2025-06-07)

07 Jun 22:05
98739a6

Release Notes

llama-index-core [0.12.41]

  • feat: Add MutableMappingKVStore for easier caching (#18893)
  • fix: async functions in tool specs (#19000)
  • fix: properly apply file limit to SimpleDirectoryReader (#18983) (see the sketch after this list)
  • fix: overwriting of LLM callback manager from Settings (#18951)
  • fix: Add a docstring warning to JsonPickleSerializer to deserialize only trusted data, and rename it to PickleSerializer (#18943)
  • fix: Check ImageDocument path and URL inputs to ensure they actually point to an image (#18947)
  • chore: remove some unused utils from core (#18985)
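
A minimal sketch of the file-limit behavior touched by #18983, assuming a local ./data directory as input; num_files_limit is an existing SimpleDirectoryReader argument that the fix makes effective.

```python
from llama_index.core import SimpleDirectoryReader

# num_files_limit caps how many files are collected from the directory;
# the fix ensures the cap is actually enforced during file discovery.
reader = SimpleDirectoryReader(
    input_dir="./data",   # placeholder path containing a handful of documents
    recursive=True,
    num_files_limit=10,
)
documents = reader.load_data()
print(len(documents))
```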

llama-index-embeddings-azure-openai [0.3.8]

  • fix: Azure api-key and azure-endpoint resolution (#18975)
  • fix: api_base vs azure_endpoint resolution (#19002)

llama-index-graph-stores-ApertureDB [0.1.0]

  • feat: Add ApertureDB property graph store (#18749)

llama-index-indices-managed-llama-cloud [0.7.4]

  • fix: resolve retriever llamacloud index (#18949)
  • chore: composite retrieval add ReRankConfig (#18973)

llama-index-llms-azure-openai [0.3.4]

  • fix: api_base vs azure_endpoint resolution (#19002)

llama-index-llms-bedrock-converse [0.7.1]

  • fix: handle empty message content to prevent ValidationError (#18914)

llama-index-llms-litellm [0.5.1]

  • feat: Add DocumentBlock support to LiteLLM integration (#18955)

llama-index-llms-ollama [0.6.2]

  • feat: Add support for the new think feature in Ollama (#18993) (see the sketch below)
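
A hedged sketch of the new think support (#18993); the thinking=True flag and the model name are assumptions about how the feature is surfaced, not confirmed API.

```python
from llama_index.llms.ollama import Ollama

llm = Ollama(
    model="qwen3",          # placeholder: any Ollama model that supports thinking
    request_timeout=120.0,
    thinking=True,          # assumed flag that turns on Ollama's think mode
)
print(llm.complete("What is 17 * 23?"))
```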

llama-index-llms-openai [0.4.4]

  • feat: add OpenAI JSON Schema structured output support (#18897) (see the sketch after this list)
  • fix: skip tool description length check in openai response api (#18956)
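
A hedged sketch of structured output with the OpenAI LLM (#18897). structured_predict is the existing structured-output entry point; whether the new JSON Schema mode is used under the hood depends on the model and configuration, and the Pydantic schema below is purely illustrative.

```python
from pydantic import BaseModel

from llama_index.core.prompts import PromptTemplate
from llama_index.llms.openai import OpenAI


class Invoice(BaseModel):
    vendor: str
    total: float


llm = OpenAI(model="gpt-4o-mini")

# The LLM fills the Invoice schema from free-form text.
result = llm.structured_predict(
    Invoice,
    PromptTemplate("Extract the invoice fields from: {text}"),
    text="ACME Corp billed $1,200.50",
)
print(result.vendor, result.total)
```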

llama-index-packs-searchain [0.1.0]

  • feat: Add searchain package (#18929)

llama-index-readers-docugami [0.3.1]

  • fix: Avoid hash collision in XML parsing (#18986)

llama-index-readers-file [0.4.9]

  • fix: pin llama-index-readers-file pandas for now (#18976)

llama-index-readers-gcs [0.4.1]

  • feat: Allow newer versions of gcsfs (#18987)

llama-index-readers-obsidian [0.5.2]

  • fix: Obsidian reader checks and skips hardlinks (#18950)

llama-index-readers-web [0.4.2]

  • fix: Use httpx instead of urllib in llama-index-readers-web (#18945)

llama-index-storage-kvstore-postgres [0.3.5]

  • fix: Remove unnecessary psycopg2 from llama-index-storage-kvstore-postgres dependencies (#18964)

llama-index-tools-mcp [0.2.5]

  • fix: actually format the workflow args into a start event instance (#19001)
  • feat: Adding support for log recording during MCP tool calls (#18927)

llama-index-vector-stores-chroma [0.4.2]

  • fix: Update ChromaVectorStore port field and argument types (#18977)

llama-index-vector-stores-milvus [0.8.4]

  • feat: Upsert Entities supported in Milvus (#18962)

llama-index-vector-stores-redis [0.5.2]

  • fix: Correcting Redis URL/Client handling (#18982)

llama-index-voice-agents-elevenlabs [0.1.0-beta]

  • feat: ElevenLabs beta integration (#18967)

v0.12.40 (2025-06-02)

03 Jun 00:50
8bbfc54

Release Notes

llama-index-core [0.12.40]

  • feat: Add StopEvent step validation so only one workflow step can handle StopEvent (#18932)
  • fix: Add compatibility check before providing tool_required to LLM args (#18922)

llama-index-embeddings-cohere [0.5.1]

  • fix: add batch size validation with 96 limit for Cohere API (#18915)

llama-index-llms-anthropic [0.7.2]

  • feat: Support passing static AWS credentials to Anthropic Bedrock (#18935)
  • fix: Handle untested no tools scenario for anthropic tool config (#18923)

llama-index-llms-google-genai [0.2.1]

  • fix: use proper auto mode for google-genai function calling (#18933)

llama-index-llms-openai [0.4.2]

  • fix: clear up some field typing issues of OpenAI LLM API (#18918)
  • fix: migrate broken reasoning_effort kwarg to reasoning_options dict in OpenAIResponses class (#18920)

llama-index-tools-measurespace [0.1.0]

  • feat: Add weather, climate, air quality and geocoding tool from Measure Space (#18909)

llama-index-tools-mcp [0.2.3]

  • feat: Add headers handling to BasicMCPClient (#18919) (see the sketch below)
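
A minimal sketch of passing custom headers to BasicMCPClient (#18919). The endpoint URL and token are placeholders, and the headers keyword is taken from the PR title rather than confirmed documentation.

```python
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

client = BasicMCPClient(
    "https://example.com/mcp/sse",                # placeholder MCP endpoint
    headers={"Authorization": "Bearer <token>"},  # assumed kwarg added in #18919
)
tool_spec = McpToolSpec(client=client)
tools = tool_spec.to_tool_list()  # expose the server's tools as LlamaIndex tools
```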

v0.12.39 (2025-05-30)

30 May 23:56
a829c95

Release Notes

llama-index-core [0.12.39]

  • feat: Adding Resource to perform dependency injection in Workflows (docs coming soon!) (#18884)
  • feat: Add tool_required param to function calling LLMs (#18654) (see the sketch after this list)
  • fix: make prefix and response non-required for hitl events (#18896)
  • fix: SelectionOutputParser when LLM chooses no choices (#18886)
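
A hedged sketch of the new tool_required parameter (#18654), shown here with the OpenAI LLM; the weather tool is illustrative, and provider support for forcing tool calls varies (hence the compatibility check added in 0.12.40).

```python
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    return f"It is sunny in {city}."


llm = OpenAI(model="gpt-4o-mini")
tool = FunctionTool.from_defaults(fn=get_weather)

# tool_required=True asks the provider to force at least one tool call.
response = llm.chat_with_tools(
    tools=[tool],
    user_msg="What's the weather in Paris?",
    tool_required=True,
)
print(llm.get_tool_calls_from_response(response))
```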

llama-index-indices-managed-llama-cloud [0.7.2]

  • feat: add non persisted composite retrieval (#18908)

llama-index-llms-bedrock-converse [0.7.0]

  • feat: Update aioboto3 dependency to allow latest version (#18889)

llama-index-llms-ollama [0.6.1]

  • Support ollama 0.5.0 SDK, update ollama docs (#18904)

llama-index-vector-stores-milvus [0.8.3]

  • feat: Multi language analyzer supported in Milvus (#18901)

v0.12.38 (2025-05-28)

29 May 04:24
5ef386d

Release Notes

llama-index-core [0.12.38]

  • feat: Adding a very simple implementation of an embeddings cache (#18864)
  • feat: Add cols_retrievers in NLSQLRetriever (#18843)
  • feat: Add row, col, and table retrievers as args in NLSQLTableQueryEngine (#18874)
  • feat: add configurable allow_parallel_tool_calls to FunctionAgent (#18829) (see the sketch after this list)
  • feat: Allow ctx in BaseToolSpec functions, other ctx + tool calling overhauls (#18783)
  • feat: Optimize get_biggest_prompt for readability and efficiency (#18808)
  • fix: prevent DoS attacks in JSONReader (#18877)
  • fix: SelectionOutputParser when LLM chooses no choices (#18886)
  • fix: resuming AgentWorkflow from ctx during hitl (#18844)
  • fix: context serialization during AgentWorkflow runs (#18866)
  • fix: Throw error if content block resolve methods yield empty bytes (#18819)
  • fix: Reduce issues when parsing "Thought/Action/Action Input" ReActAgent completions (#18818)
  • fix: Strip code block backticks from QueryFusionRetriever llm response (#18825)
  • fix: Fix get_function_tool in function_program.py when schema doesn't have "title" key (#18796)
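
A hedged sketch of the configurable allow_parallel_tool_calls flag on FunctionAgent (#18829); the add tool and the model choice are illustrative.

```python
import asyncio

from llama_index.core.agent.workflow import FunctionAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


agent = FunctionAgent(
    tools=[FunctionTool.from_defaults(fn=add)],
    llm=OpenAI(model="gpt-4o-mini"),
    allow_parallel_tool_calls=False,  # run tool calls one at a time
)


async def main() -> None:
    response = await agent.run("What is 2 + 3?")
    print(response)


asyncio.run(main())
```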

llama-index-agent-azure-foundry [0.1.0]

  • feat: add azure foundry agent integration (#18772)

llama-index-agent-llm-compiler [0.3.1]

  • feat: llm-compiler support stream_step/astream_step (#18809)

llama-index-embeddings-google-genai [0.2.0]

  • feat: add gemini embeddings tests and retry configs (#18846)

llama-index-embeddings-openai-like [0.1.1]

  • fix: Pass http_client & async_http_client to parent for OpenAILikeEmbedding (#18881)

llama-index-embeddings-voyageai [0.3.6]

  • feat: Introducing voyage-3.5 models (#18793)

llama-index-indices-managed-llama-cloud [0.7.1]

  • feat: add client support for search_filters_inference_schema (#18867)
  • feat: add async methods and blank index creation (#18859)

llama-index-llms-anthropic [0.6.19]

  • feat: update for claude 4 support in Anthropic LLM (#18817)
  • fix: thinking + tool calls in anthropic (#18834)
  • fix: check thinking is non-null in anthropic messages (#18838)
  • fix: update/fix claude-4 support (#18820)

llama-index-llms-bedrock-converse [0.6.0]

  • feat: Add Claude 4 model support (#18827)
  • fix: fixing DocumentBlock usage within Bedrock Converse (#18791)
  • fix: calling tools with empty arguments (#18786)

llama-index-llms-cleanlab [0.5.0]

  • feat: Update package name and models (#18483)

llama-index-llms-featherlessai [0.1.0]

  • feat: Add Featherless AI LLM integration (#18778)

llama-index-llms-google-genai [0.1.14]

  • fix: Google GenAI token counting behavior, add basic retry mechanism (#18876)

llama-index-llms-ollama [0.5.6]

  • feat: Attempt to automatically set context window in ollama (#18822)
  • feat: use default temp in ollama models (#18815)

llama-index-llms-openai [0.3.44]

  • feat: Adding new OpenAI responses features (image gen, mcp call, code interpreter) (#18810)
  • fix: Update OpenAI response type imports for latest openai library compatibility (#18824)
  • fix: Skip tool description length check in OpenAI agent (#18790)

llama-index-llms-servam [0.1.1]

  • feat: add Servam AI LLM integration with OpenAI-like interface (#18841)

llama-index-observability-otel [0.1.0]

  • feat: OpenTelemetry integration for observability (#18744)

llama-index-packs-raptor [0.3.2]

  • Use global llama_index tokenizer in Raptor clustering (#18802)

llama-index-postprocessor-rankllm-rerank [0.5.0]

  • feat: use latest rank-llm sdk (#18831)

llama-index-readers-azstorage-blob [0.3.1]

  • fix: Metadata and filename handling in AzStorageBlobReader (#18816)

llama-index-readers-file [0.4.8]

  • fix: reading pptx files from remote fs (#18862)

llama-index-storage-kvstore-postgres [0.3.1]

  • feat: Create PostgresKVStore from existing SQLAlchemy Engine (#18798)

llama-index-tools-brightdata [0.1.0]

  • feat: brightdata integration (#18690)

llama-index-tools-google [0.3.1]

  • fix: GmailToolSpec.load_data() calls search with missing args (#18832)

llama-index-tools-mcp [0.2.2]

  • feat: enhance SSE endpoint detection for broader compatibility (#18868)
  • feat: overhaul BasicMCPClient to support all MCP features (#18833)
  • fix: McpToolSpec fetches all tools when the allowed_tools list is empty (#18879)
  • fix: add missing BasicMCPClient.with_oauth() kwargs (#18845)

llama-index-tools-valyu [0.2.0]

  • feat: Update to valyu 2.0.0 (#18861)

llama-index-vector-stores-azurecosmosmongo [0.6.0]

  • feat: Add Vector Index Compression support for Azure Cosmos DB Mongo vector store (#18850)

llama-index-vector-stores-opensearch [0.5.5]

  • feat: add filter support to check if a metadata key doesn't exist (#18851)
  • fix: don't pass both extra_info and metadata in vector store nodes (#18805)

v0.12.37 (2025-05-19)

20 May 15:19

Release Notes

llama-index-core [0.12.37]

  • Ensure Memory returns at least one message (#18763)
  • Separate text blocks with newlines when accessing message.content (#18763)
  • reset next_agent in multi agent workflows (#18782)
  • support sqlalchemy v1 in chat store (#18780)
  • fix: broken hotpotqa dataset URL (#18764)
  • Use get_tqdm_iterable in SimpleDirectoryReader (#18722)
  • Pass agent workflow kwargs into start event (#18747)
  • fix(chunking): Ensure correct handling of multi-byte characters during AST node chunking (#18702)

llama-index-llms-anthropic [0.6.14]

  • Fixed DocumentBlock handling in OpenAI and Anthropic (#18769)

llama-index-llms-bedrock-converse [0.5.4]

  • Fix tool call parsing for bedrock converse (#18781)
  • feat: add missing client params for bedrock (#18768)
  • fix merging multiple tool calls in bedrock converse (#18761)

llama-index-llms-openai [0.3.42]

  • Fixed DocumentBlock handling in OpenAI and Anthropic (#18769)
  • Remove tool-length check in openai (#18784)
  • Add check for empty tool call delta, bump version (#18745)

llama-index-llms-openai-like [0.3.5]

  • Remove tool-length check in openai (#18784)

llama-index-retrievers-vectorize [0.1.0]

  • Add Vectorize retriever (#18685)

llama-index-tools-desearch [0.1.0]

  • Add Desearch integration (#18738)

v0.12.35 (2025-05-08)

08 May 22:22
df48f1d

Release Notes

llama-index-core [0.12.35]

  • add support for prefilling partial tool kwargs on FunctionTool (#18658)
  • Fix ReAct agent max iterations skipping (#18634)
  • handling for edge-case serialization in prebuilt workflows like AgentWorkflow (#18628)
  • memory revamp with new base class (#18594) (see the sketch after this list)
  • add prebuilt memory blocks (#18607)
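
A hedged sketch of the revamped Memory class (#18594); session_id and token_limit match my reading of Memory.from_defaults, but treat the exact argument names as assumptions, and the prebuilt memory blocks from #18607 are omitted here.

```python
from llama_index.core.llms import ChatMessage
from llama_index.core.memory import Memory

# Build the new flat-list memory with a token budget; session_id scopes
# the stored messages (argument names assumed, see the lead-in above).
memory = Memory.from_defaults(
    session_id="demo_session",
    token_limit=40_000,
)
memory.put(ChatMessage(role="user", content="Hello!"))
print(memory.get())
```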

llama-index-embeddings-autoembeddings [0.1.0]

  • Support for AutoEmbeddings integration from chonkie (#18578)

llama-index-embeddings-huggingface-api [0.3.1]

  • Fix dep versions for huggingface-hub (#18662)

llama-index-indices-managed-vectara [0.4.5]

  • Bugfix in using cutoff argument with chain reranker in Vectara (#18610)

llama-index-llms-anthropic [0.6.12]

  • anthropic citations and tool calls (#18657)

llama-index-llms-cortex [0.3.0]

  • Cortex enhancements 2 for auth (#18588)

llama-index-llms-dashscope [0.3.3]

  • Fix dashscope tool call parsing (#18608)

llama-index-llms-google-genai [0.1.12]

  • Fix modifying object references in google-genai llm (#18616)
  • feat(llama-index-llms-google-genai): 2.5-flash-preview tests (#18575)
  • Fix last_msg indexing (#18611)

llama-index-llms-huggingface-api [0.4.3]

  • Huggingface API fixes for task and deps (#18662)

llama-index-llms-litellm [0.4.2]

  • fix parsing streaming tool calls (#18653)

llama-index-llms-meta [0.1.1]

  • Support Meta Llama-api as an LLM provider (#18585)

llama-index-node-parser-docling [0.3.2]

  • Fix Docling node parser metadata (#186390)

llama-index-node-parser-slide [0.1.0]

  • add SlideNodeParser integration (#18620)

llama-index-readers-github [0.6.1]

  • Fix: Add follow_redirects=True to GitHubIssuesClient (#18630)

llama-index-readers-markitdown [0.1.1]

  • Fix MarkItDown Reader bugs (#18613)

llama-index-readers-oxylabs [0.1.2]

  • Add Oxylabs readers (#18555)

llama-index-readers-web [0.4.1]

  • Fixes improper invocation of Firecrawl library (#18646)
  • Add Oxylabs readers (#18555)

llama-index-storage-chat-store-gel [0.1.0]

  • Add Gel integrations (#18503)

llama-index-storage-docstore-gel [0.1.0]

  • Add Gel integrations (#18503)

llama-index-storage-kvstore-gel [0.1.0]

  • Add Gel integrations (#18503)

llama-index-storage-index-store-gel [0.1.0]

  • Add Gel integrations (#18503)

llama-index-utils-workflow [0.3.2]

  • Fix event colors of draw_all_possible_flows (#18660)

llama-index-vector-stores-faiss [0.4.0]

  • Add Faiss Map Vector store and fix missing index_struct delete (#18638)

llama-index-vector-stores-gel [0.1.0]

  • Add Gel integrations (#18503)

llama-index-vector-stores-postgres [0.5.2]

  • add indexed metadata fields (#18595)

v0.12.34

01 May 03:57
480c5ed

v0.12.33

23 Apr 20:54
ca3aaee

v0.12.32

22 Apr 03:52
60bbe9e

v0.12.31

17 Apr 03:32
ac8cc8c