Releases · utilityai/llama-cpp-rs
0.1.41
What's Changed
- Bumped version to 0.1.41 by @github-actions in #189
- included `sampler` in docs
Full Changelog: 0.1.40...0.1.41
0.1.40
What's Changed
- Bumped version to 0.1.40 by @github-actions in #181
- Llama cpp update ci fix by @sepehr455 in #184
- Sampler by @MarcusDunn in #188 (conceptual sketch below)
Full Changelog: 0.1.39...0.1.40
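PR #188 adds a sampler, and the 0.1.41 notes mention its docs. The release notes don't show its API, so below is a minimal self-contained sketch of the kind of pipeline such a sampler composes (top-k filtering, temperature scaling, softmax, then a draw). Every name here is illustrative only, not a type or function from llama-cpp-rs.

```rust
// Illustrative only: a hand-rolled sampling pipeline of the kind a sampler
// abstraction composes. None of these names come from llama-cpp-rs.
fn sample_top_k_temperature(logits: &[f32], k: usize, temperature: f32, rand01: f32) -> usize {
    // Pair each logit with its token id and keep the k largest.
    let mut ranked: Vec<(usize, f32)> = logits.iter().copied().enumerate().collect();
    ranked.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    ranked.truncate(k.max(1));

    // Temperature scaling followed by a softmax over the surviving candidates.
    let max = ranked[0].1;
    let weights: Vec<f32> = ranked
        .iter()
        .map(|&(_, l)| ((l - max) / temperature.max(1e-5)).exp())
        .collect();
    let total: f32 = weights.iter().sum();

    // Draw from the categorical distribution using a caller-supplied value in [0, 1).
    let mut acc = 0.0;
    for (i, w) in weights.iter().enumerate() {
        acc += w / total;
        if rand01 < acc {
            return ranked[i].0;
        }
    }
    ranked.last().unwrap().0
}

fn main() {
    let logits = vec![0.1, 2.5, -1.0, 1.8, 0.3];
    let token = sample_top_k_temperature(&logits, 3, 0.8, 0.42);
    println!("sampled token id: {token}");
}
```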
0.1.39
What's Changed
- moved the changes to a new branch by @sepehr455 in #169
- Toml version update fix by @sepehr455 in #179
- Bumped version to 0.1.39 by @github-actions in #180
Full Changelog: 0.1.38...0.1.39
0.1.38
What's Changed
- updated llama.cpp by @github-actions in #138
- updated llama.cpp by @github-actions in #139
- Fixed `update-toml` Action Breaking by @sepehr455 in #141
- fixed update toml by @sepehr455 in #143
- Bump cc from 1.0.88 to 1.0.90 by @dependabot in #157
- Bump clap from 4.5.1 to 4.5.2 by @dependabot in #158
Full Changelog: 0.1.37...0.1.38
0.1.37
What's Changed
- updated llama.cpp by @github-actions in #135
- updated llama.cpp by @github-actions in #136
- added `sample_repetition_penalty` by @MarcusDunn in #137 (see the sketch after this release)
Full Changelog: 0.1.36...0.1.37
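For context on #137: llama.cpp's repetition penalty pushes down the logits of tokens that already appear in the recent history (dividing positive logits, multiplying negative ones) before sampling. The snippet below is a self-contained illustration of that rule only; it is not the crate's `sample_repetition_penalty` signature.

```rust
use std::collections::HashSet;

// Illustration of the repetition-penalty rule: weaken the logit of every token
// that already occurred in the recent window. Not the crate's actual API.
fn apply_repetition_penalty(logits: &mut [f32], recent_tokens: &[usize], penalty: f32) {
    let recent: HashSet<usize> = recent_tokens.iter().copied().collect();
    for (token_id, logit) in logits.iter_mut().enumerate() {
        if recent.contains(&token_id) {
            // Dividing a positive logit or multiplying a negative one both move
            // the token's probability toward zero when penalty > 1.
            if *logit > 0.0 {
                *logit /= penalty;
            } else {
                *logit *= penalty;
            }
        }
    }
}

fn main() {
    let mut logits = vec![1.2, -0.4, 3.0, 0.7];
    apply_repetition_penalty(&mut logits, &[2, 3], 1.3);
    println!("{logits:?}"); // tokens 2 and 3 are now less likely
}
```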
0.1.36
What's Changed
Full Changelog: 0.1.35...0.1.36
0.1.35
What's Changed
- small cleanup to pin code by @MarcusDunn in #123 (potentially breaking)
- updated llama.cpp by @github-actions in #124
- updated llama.cpp by @github-actions in #125
- updated llama.cpp by @github-actions in #126
- Bump docker/setup-buildx-action from 3.0.0 to 3.1.0 by @dependabot in #129
- updated llama.cpp by @github-actions in #128
- updated llama.cpp by @github-actions in #131
Full Changelog: 0.1.34...0.1.35
0.1.34
What's Changed
- Add CPU Feature Support by @Hirtol in #121
- override model values by @MarcusDunn in #120
- prep 0.1.34 by @MarcusDunn in #122
Full Changelog: 0.1.33...0.1.34
0.1.33
What's Changed
- updated llama.cpp by @github-actions in #115
- updated llama.cpp by @github-actions in #117
- Expose the complete API for dealing with KV cache and states by @zh217 in #116
- add with_main_gpu to LlamaModelParams by @danbev in #118 (see the sketch below)
- updated llama cpp and removed cast to mut by @MarcusDunn in #119
Full Changelog: 0.1.32...0.1.33
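A hedged sketch of how the `with_main_gpu` builder from #118 might be used when loading a model: the method name and `LlamaModelParams` come from the PR title, but the module paths, `LlamaBackend::init`, and the `LlamaModel::load_from_file` call are assumptions about the crate's layout and may differ between 0.1.x releases.

```rust
// Hedged sketch: with_main_gpu on LlamaModelParams is named in #118; the module
// paths and load_from_file signature are assumptions, not taken from these notes.
use llama_cpp_2::llama_backend::LlamaBackend;
use llama_cpp_2::model::params::LlamaModelParams;
use llama_cpp_2::model::LlamaModel;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let backend = LlamaBackend::init()?;

    // Pick GPU 0 as the main device used when layers are offloaded.
    let model_params = LlamaModelParams::default().with_main_gpu(0);

    let _model = LlamaModel::load_from_file(&backend, "model.gguf", &model_params)?;
    println!("model loaded");
    Ok(())
}
```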
0.1.32
What's Changed
- updated llama.cpp by @github-actions in #105
- Bump cc from 1.0.83 to 1.0.88 by @dependabot in #106
- added more sampling options by @MarcusDunn in #110
- updated llama.cpp by @github-actions in #111
- Expose functions `llama_load_session_file` and `llama_save_session_file` by @zh217 in #112 (see the session sketch below)
- Improved docs for new sampling options by @MarcusDunn
- Fix clippy errors by @MarcusDunn
Full Changelog: 0.1.31...0.1.32
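PR #112 exposes llama.cpp's session-file functions, which persist the evaluated prompt tokens together with the context state so a later run can restore them instead of re-evaluating the prompt. The sketch below only illustrates that save/restore pattern with a made-up file layout; the real API works on a context plus an opaque state blob, and the Rust wrapper names are not shown in these notes.

```rust
use std::fs;

// Conceptual illustration of what a session file gives you: persist the tokens
// already evaluated (plus, in the real API, the opaque context state that goes
// with them) so a later run can skip re-evaluating the same prompt prefix.
// This is a stand-in format, not llama.cpp's session-file layout.
fn save_session(path: &str, evaluated_tokens: &[i32]) -> std::io::Result<()> {
    let bytes: Vec<u8> = evaluated_tokens
        .iter()
        .flat_map(|t| t.to_le_bytes())
        .collect();
    fs::write(path, bytes)
}

fn load_session(path: &str) -> std::io::Result<Vec<i32>> {
    let bytes = fs::read(path)?;
    Ok(bytes
        .chunks_exact(4)
        .map(|c| i32::from_le_bytes(c.try_into().unwrap()))
        .collect())
}

fn main() -> std::io::Result<()> {
    save_session("prompt.session", &[1, 42, 7, 99])?;
    let restored = load_session("prompt.session")?;
    println!("restored {} prompt tokens", restored.len());
    Ok(())
}
```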