
Commit 20ea6fd

chore: Bump version
1 parent: b681674

2 files changed, 10 insertions(+), 6 deletions(-)


CHANGELOG.md

Lines changed: 9 additions & 5 deletions
@@ -7,12 +7,16 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.2.50]
+
+- docs: Update Functionary OpenAI Server Readme by @jeffrey-fong in #1193
+- fix: LlamaHFTokenizer now receives pre_tokens by @abetlen in 47bad30dd716443652275099fa3851811168ff4a
+
 ## [0.2.49]
 
 - fix: module 'llama_cpp.llama_cpp' has no attribute 'c_uint8' in Llama.save_state by @abetlen in db776a885cd4c20811f22f8bd1a27ecc71dba927
 - feat: Auto detect Mixtral's slightly different format by @lukestanley in #1214
 
-
 ## [0.2.48]
 
 - feat: Update llama.cpp to ggerganov/llama.cpp@15499eb94227401bdc8875da6eb85c15d37068f7
@@ -151,7 +155,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - feat: Update llama.cpp to ggerganov/llama.cpp@b3a7c20b5c035250257d2b62851c379b159c899a
 - feat: Add `saiga` chat format by @femoiseev in #1050
 - feat: Added `chatglm3` chat format by @xaviviro in #1059
-- fix: Correct typo in README.md by @qeleb in (#1058)
+- fix: Correct typo in README.md by @qeleb in (#1058)
 
 ## [0.2.26]
 
@@ -284,7 +288,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [0.2.11]
 
-- Fix bug in `llama_model_params` object has no attribute `logits_all` by @abetlen in d696251fbe40015e8616ea7a7d7ad5257fd1b896
+- Fix bug in `llama_model_params` object has no attribute `logits_all` by @abetlen in d696251fbe40015e8616ea7a7d7ad5257fd1b896
 
 ## [0.2.10]
 
@@ -472,7 +476,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [0.1.60]
 
-NOTE: This release was deleted due to a bug with the packaging system that caused pip installations to fail.
+NOTE: This release was deleted due to a bug with the packaging system that caused pip installations to fail.
 
 - Truncate max_tokens in create_completion so requested tokens doesn't exceed context size.
 - Temporarily disable cache for completion requests
@@ -496,4 +500,4 @@ NOTE: This release was deleted due to a bug with the packaging system that caused pip installations to fail.
 - (misc) Added first version of the changelog
 - (server) Use async routes
 - (python-api) Use numpy for internal buffers to reduce memory usage and improve performance.
-- (python-api) Performance bug in stop sequence check slowing down streaming.
+- (python-api) Performance bug in stop sequence check slowing down streaming.
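For context on the `LlamaHFTokenizer` fix listed under 0.2.50: the sketch below shows the usual way a Hugging Face tokenizer is attached to a `Llama` instance in this library, per the project README. The GGUF path and HF repo name are placeholders for illustration, not values taken from this commit.

```python
# Minimal sketch, assuming the tokenizer API documented in the project README;
# the model path and HF repo below are placeholders, not from this commit.
from llama_cpp import Llama
from llama_cpp.llama_tokenizer import LlamaHFTokenizer

llm = Llama(
    model_path="./functionary-small-v2.2.q4_0.gguf",  # placeholder local GGUF file
    tokenizer=LlamaHFTokenizer.from_pretrained("meetkai/functionary-small-v2.2-GGUF"),
    chat_format="functionary-v2",
)
```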

llama_cpp/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
 from .llama_cpp import *
 from .llama import *
 
-__version__ = "0.2.49"
+__version__ = "0.2.50"
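As the diff above shows, `__version__` lives in the package's top-level `__init__.py`, so the bump can be verified at runtime:

```python
# Quick check that the installed build matches this commit's version bump.
import llama_cpp

print(llama_cpp.__version__)  # expected to print "0.2.50" for this release
```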
