Commit 5320a19: v0.12.16 (#17722)
Parent: 51c1c36

File tree: 14 files changed (+199, -49 lines)


CHANGELOG.md

Lines changed: 50 additions & 0 deletions

@@ -1,5 +1,55 @@
 # ChangeLog

+## [2025-02-05]
+
+### `llama-index-core` [0.12.16]
+
+- Be more lenient with leading whitespace emitted by some models when doing ReAct (#17701)
+- Fix `user_msg` vs `chat_history` AgentWorkflow inputs (#17690)
+
+### `llama-index-embeddings-oci-data-science` [0.1.0]
+
+- Add OCI Data Science Model Deployment embedding integration (#17243)
+
+### `llama-index-embeddings-vllm` [0.1.0]
+
+- Add vLLM offline inference support for embeddings (#17675)
+
+### `llama-index-embeddings-voyageai` [0.3.5]
+
+- Small async VoyageAI fix (#17698)
+
+### `llama-index-llms-gemini` [0.4.7]
+
+- Gemini 2.0 support (#17720)
+- Support basic function calling for Gemini (google-generativeai) (#17696)
+
+### `llama-index-llms-oci-data-science` [0.1.0]
+
+- Add OCI Data Science Model Deployment LLM integration (#17241)
+
+### `llama-index-llms-oci-genai` [0.3.1]
+
+- Add an `auth_file_location` option to override the default config file location, i.e. `~/.oci/config` (#17695)
+
+### `llama-index-llms-ollama` [0.5.1]
+
+- Fix: avoid missing tool calls while streaming
+
+### `llama-index-llms-openai` [0.3.17]
+
+- Fix `max_tokens` in o1 (#17703)
+- o3-mini support (#17689)
+- Fix `max_tokens`, add `reasoning_effort` for OpenAI reasoning models (#17694)
+
+### `llama-index-readers-obsidian` [0.5.0]
+
+- Improved Obsidian reader (#17699)
+
+### `llama-index-tools-scrapegraph` [0.1.1]
+
+- Add new scrapegraph endpoint (#17709)
+
 ## [2025-01-31]

 ### `llama-index-core` [0.12.15]
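One way to picture the ReAct leniency fix (#17701): strip leading whitespace before matching the parser's structured keywords, instead of failing on it. This is an illustrative sketch only; the function name and regexes here are hypothetical and simpler than llama-index's actual ReAct output parser.

```python
import re

def extract_react_action(output: str) -> dict:
    # Some models emit leading whitespace or blank lines before
    # "Thought:"/"Action:"; being lenient means stripping that prefix
    # before matching rather than raising a parse error.
    text = output.lstrip()
    action = re.search(r"Action:\s*(\S+)", text)
    action_input = re.search(r"Action Input:\s*(\{.*\})", text, re.DOTALL)
    if action is None:
        raise ValueError(f"could not parse action from: {text!r}")
    return {
        "action": action.group(1),
        "input": action_input.group(1) if action_input else None,
    }

raw = '\n  Thought: I need the search tool.\nAction: search\nAction Input: {"q": "llama"}'
print(extract_react_action(raw))  # {'action': 'search', 'input': '{"q": "llama"}'}
```

Without the `lstrip()`, a strict parser anchored at the start of the string would reject the same output purely because of the leading newline.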

docs/docs/CHANGELOG.md

Lines changed: 51 additions & 1 deletion

@@ -1,12 +1,62 @@
The additions duplicate the new release entry added to the root CHANGELOG.md above. The single deletion escapes an underscore in an existing [2025-01-31] bullet:

 ## [2025-01-31]

 ### `llama-index-core` [0.12.15]

 - Add error_on_tool_error param to FunctionCallingLLM.predict_and_call (#17663)
 - Get tool description from pydantic field (#17679)
-- fix: make ctx._events_buffer json-serializable (#17676)
+- fix: make ctx.\_events_buffer json-serializable (#17676)
 - feat: allow to exclude empty file simple directory reader (#17656)
 - improve markdown llm output parsing (#17577)
 - small typo fix in the default plan refine prompt (#17644)
New file. Lines changed: 4 additions & 0 deletions

@@ -0,0 +1,4 @@
+::: llama_index.embeddings.oci_data_science
+    options:
+      members:
+        - OCIDataScienceEmbedding
New file. Lines changed: 4 additions & 0 deletions

@@ -0,0 +1,4 @@
+::: llama_index.embeddings.vllm
+    options:
+      members:
+        - VllmEmbedding
New file. Lines changed: 4 additions & 0 deletions

@@ -0,0 +1,4 @@
+::: llama_index.llms.oci_data_science
+    options:
+      members:
+        - OCIDataScience

docs/mkdocs.yml

Lines changed: 8 additions & 0 deletions

@@ -261,6 +261,7 @@ nav:
   - ./examples/embeddings/nebius.ipynb
   - ./examples/embeddings/nomic.ipynb
   - ./examples/embeddings/nvidia.ipynb
+  - ./examples/embeddings/oci_data_science.ipynb
   - ./examples/embeddings/oci_genai.ipynb
   - ./examples/embeddings/ollama_embedding.ipynb
   - ./examples/embeddings/openvino.ipynb

@@ -370,6 +371,7 @@ nav:
   - ./examples/llm/nvidia_tensorrt.ipynb
   - ./examples/llm/nvidia_text_completion.ipynb
   - ./examples/llm/nvidia_triton.ipynb
+  - ./examples/llm/oci_data_science.ipynb
   - ./examples/llm/oci_genai.ipynb
   - ./examples/llm/octoai.ipynb
   - ./examples/llm/ollama.ipynb

@@ -926,6 +928,7 @@ nav:
   - ./api_reference/embeddings/nebius.md
   - ./api_reference/embeddings/nomic.md
   - ./api_reference/embeddings/nvidia.md
+  - ./api_reference/embeddings/oci_data_science.md
   - ./api_reference/embeddings/oci_genai.md
   - ./api_reference/embeddings/octoai.md
   - ./api_reference/embeddings/ollama.md

@@ -941,6 +944,7 @@ nav:
   - ./api_reference/embeddings/upstage.md
   - ./api_reference/embeddings/vertex.md
   - ./api_reference/embeddings/vertex_endpoint.md
+  - ./api_reference/embeddings/vllm.md
   - ./api_reference/embeddings/voyageai.md
   - ./api_reference/embeddings/xinference.md
   - ./api_reference/embeddings/yandexgpt.md

@@ -1044,6 +1048,7 @@ nav:
   - ./api_reference/llms/nvidia.md
   - ./api_reference/llms/nvidia_tensorrt.md
   - ./api_reference/llms/nvidia_triton.md
+  - ./api_reference/llms/oci_data_science.md
   - ./api_reference/llms/oci_genai.md
   - ./api_reference/llms/octoai.md
   - ./api_reference/llms/ollama.md

@@ -2346,6 +2351,9 @@ plugins:
   - ../llama-index-integrations/tools/llama-index-tools-linkup-research
   - ../llama-index-integrations/llms/llama-index-llms-deepseek
   - ../llama-index-integrations/llms/llama-index-llms-cortex
+  - ../llama-index-integrations/embeddings/llama-index-embeddings-vllm
+  - ../llama-index-integrations/embeddings/llama-index-embeddings-oci-data-science
+  - ../llama-index-integrations/llms/llama-index-llms-oci-data-science
 - redirects:
     redirect_maps:
       ./api/llama_index.vector_stores.MongoDBAtlasVectorSearch.html: api_reference/storage/vector_store/mongodb.md

llama-index-core/llama_index/core/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 """Init file of LlamaIndex."""

-__version__ = "0.12.15"
+__version__ = "0.12.16"

 import logging
 from logging import NullHandler

llama-index-core/pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -46,7 +46,7 @@ name = "llama-index-core"
 packages = [{include = "llama_index"}]
 readme = "README.md"
 repository = "https://github.com/run-llama/llama_index"
-version = "0.12.15"
+version = "0.12.16"

 [tool.poetry.dependencies]
 SQLAlchemy = {extras = ["asyncio"], version = ">=1.4.49"}

llama-index-integrations/embeddings/llama-index-embeddings-oci-data-science/pyproject.toml

Lines changed: 2 additions & 2 deletions

@@ -41,9 +41,9 @@ jupyter = "^1.0.0"
 mypy = "0.991"
 pre-commit = "3.2.0"
 pylint = "2.15.10"
-pytest = "7.2.1"
+pytest = ">=7.2.1"
 pytest-asyncio = ">=0.24.0"
-pytest-mock = "3.11.1"
+pytest-mock = ">=3.11.1"
 ruff = "0.0.292"
 tree-sitter-languages = "^1.8.0"
 types-Deprecated = ">=0.1.0"

llama-index-integrations/llms/llama-index-llms-oci-data-science/pyproject.toml

Lines changed: 2 additions & 2 deletions

@@ -39,9 +39,9 @@ jupyter = "^1.0.0"
 mypy = "0.991"
 pre-commit = "3.2.0"
 pylint = "2.15.10"
-pytest = "7.2.1"
+pytest = ">=7.2.1"
 pytest-asyncio = ">=0.24.0"
-pytest-mock = "3.11.1"
+pytest-mock = ">=3.11.1"
 ruff = "0.0.292"
 tree-sitter-languages = "^1.8.0"
 types-Deprecated = ">=0.1.0"
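Both new integrations relax exact dev-dependency pins (`pytest = "7.2.1"`) to lower bounds (`>=7.2.1`), so newer pytest releases resolve without editing the file. The difference can be sketched with a plain version comparison; this is a simplified illustration (real resolvers like Poetry's also handle pre-releases, epochs, and richer operators).

```python
def parse_version(v: str) -> tuple[int, ...]:
    # Convert "X.Y.Z" into an integer tuple so versions compare numerically.
    return tuple(int(part) for part in v.split("."))

def satisfies(installed: str, constraint: str) -> bool:
    # Supports only the two constraint styles used in these pyproject files:
    # an exact pin ("7.2.1") and a lower bound (">=7.2.1").
    if constraint.startswith(">="):
        return parse_version(installed) >= parse_version(constraint[2:])
    return parse_version(installed) == parse_version(constraint)

print(satisfies("8.0.0", "7.2.1"))    # exact pin rejects a newer pytest: False
print(satisfies("8.0.0", ">=7.2.1"))  # relaxed bound accepts it: True
```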

llama-index-integrations/llms/llama-index-llms-ollama/llama_index/llms/ollama/base.py

Lines changed: 34 additions & 5 deletions

@@ -272,7 +272,6 @@ def get_tool_calls_from_response(
 ) -> List[ToolSelection]:
     """Predict and call the tool."""
     tool_calls = response.message.additional_kwargs.get("tool_calls", [])
-
     if len(tool_calls) < 1:
         if error_on_no_tool_call:
             raise ValueError(

@@ -350,6 +349,8 @@ def gen() -> ChatResponseGen:
             )

             response_txt = ""
+            seen_tool_calls = set()
+            all_tool_calls = []

             for r in response:
                 if r["message"]["content"] is None:

@@ -359,7 +360,20 @@ def gen() -> ChatResponseGen:

                 response_txt += r["message"]["content"]

-                tool_calls = r["message"].get("tool_calls", [])
+                new_tool_calls = [dict(t) for t in r["message"].get("tool_calls", [])]
+                for tool_call in new_tool_calls:
+                    if (
+                        str(tool_call["function"]["name"]),
+                        str(tool_call["function"]["arguments"]),
+                    ) in seen_tool_calls:
+                        continue
+                    seen_tool_calls.add(
+                        (
+                            str(tool_call["function"]["name"]),
+                            str(tool_call["function"]["arguments"]),
+                        )
+                    )
+                    all_tool_calls.append(tool_call)
                 token_counts = self._get_response_token_counts(r)
                 if token_counts:
                     r["usage"] = token_counts

@@ -368,7 +382,7 @@ def gen() -> ChatResponseGen:
                 message=ChatMessage(
                     content=response_txt,
                     role=r["message"]["role"],
-                    additional_kwargs={"tool_calls": tool_calls},
+                    additional_kwargs={"tool_calls": all_tool_calls},
                 ),
                 delta=r["message"]["content"],
                 raw=r,

@@ -397,6 +411,8 @@ async def gen() -> ChatResponseAsyncGen:
             )

             response_txt = ""
+            seen_tool_calls = set()
+            all_tool_calls = []

             async for r in response:
                 if r["message"]["content"] is None:

@@ -406,7 +422,20 @@ async def gen() -> ChatResponseAsyncGen:

                 response_txt += r["message"]["content"]

-                tool_calls = r["message"].get("tool_calls", [])
+                new_tool_calls = [dict(t) for t in r["message"].get("tool_calls", [])]
+                for tool_call in new_tool_calls:
+                    if (
+                        str(tool_call["function"]["name"]),
+                        str(tool_call["function"]["arguments"]),
+                    ) in seen_tool_calls:
+                        continue
+                    seen_tool_calls.add(
+                        (
+                            str(tool_call["function"]["name"]),
+                            str(tool_call["function"]["arguments"]),
+                        )
+                    )
+                    all_tool_calls.append(tool_call)
                 token_counts = self._get_response_token_counts(r)
                 if token_counts:
                     r["usage"] = token_counts

@@ -415,7 +444,7 @@ async def gen() -> ChatResponseAsyncGen:
                 message=ChatMessage(
                     content=response_txt,
                     role=r["message"]["role"],
-                    additional_kwargs={"tool_calls": tool_calls},
+                    additional_kwargs={"tool_calls": all_tool_calls},
                 ),
                 delta=r["message"]["content"],
                 raw=r,
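The streaming change accumulates tool calls across chunks instead of keeping only the latest chunk's list, de-duplicating by the (name, arguments) pair. A minimal, self-contained sketch of that pattern follows; the chunk shape and helper name are illustrative, not the actual Ollama client types. Tool-call dicts are not hashable, so the string-converted pair serves as the set key while insertion order is preserved.

```python
def accumulate_tool_calls(chunks: list[dict]) -> list[dict]:
    # Some backends re-emit the same tool call on every streamed chunk;
    # keep the first occurrence of each (name, arguments) pair, in order.
    seen: set[tuple[str, str]] = set()
    all_tool_calls: list[dict] = []
    for chunk in chunks:
        for call in chunk.get("tool_calls", []):
            key = (str(call["function"]["name"]), str(call["function"]["arguments"]))
            if key in seen:
                continue
            seen.add(key)
            all_tool_calls.append(call)
    return all_tool_calls

chunks = [
    {"tool_calls": [{"function": {"name": "search", "arguments": {"q": "llama"}}}]},
    {"tool_calls": [{"function": {"name": "search", "arguments": {"q": "llama"}}}]},  # re-emitted
    {"tool_calls": [{"function": {"name": "fetch", "arguments": {"url": "x"}}}]},
]
print(len(accumulate_tool_calls(chunks)))  # 2
```

Deduplicating by a derived key rather than by the dicts themselves is the design choice worth noting: it sidesteps unhashable values inside the call payload while still collapsing exact repeats.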

llama-index-integrations/llms/llama-index-llms-ollama/pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -27,7 +27,7 @@ exclude = ["**/BUILD"]
 license = "MIT"
 name = "llama-index-llms-ollama"
 readme = "README.md"
-version = "0.5.0"
+version = "0.5.1"

 [tool.poetry.dependencies]
 python = ">=3.9,<4.0"
