v3.3.0 #390
3.3.0 (2024-12-02)

Bug Fixes

- `llama.cpp` changes (#386) (97abbca)
- "compiler is out of heap space" CUDA build error (#386) (97abbca)

Features

- `llama.cpp` backend registry for GPUs instead of custom implementations (#386) (97abbca)
- `getLlama`: `build: "try"` option (#386) (97abbca)
- `init` command: `--model` flag (#386) (97abbca)
- `prefixItems`, `minItems`, `maxItems` support (#388) (4d387de)
- `additionalProperties`, `minProperties`, `maxProperties` support (#388) (4d387de)
- `minLength`, `maxLength`, `format` support (#388) (4d387de)
- `description` support (#388) (4d387de)

Shipped with `llama.cpp` release b4234
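As a rough sketch of how the newly supported JSON-schema keywords from #388 can be combined, the object below uses standard JSON Schema shapes (`prefixItems`/`minItems`/`maxItems`, `additionalProperties`/`minProperties`, `minLength`/`maxLength`/`format`, and `description`). The property names (`tags`, `point`, `createdAt`) are illustrative only, and feeding such a schema to the library's grammar helper (e.g. `llama.createGrammarForJsonSchema(...)`) is an assumption based on its grammar API, not something stated in these release notes.

```typescript
// Illustrative schema exercising the keywords added in 3.3.0 (#388).
// Property names here are hypothetical examples.
const answerSchema = {
    type: "object",
    properties: {
        tags: {
            type: "array",
            description: "Short labels for the answer", // `description` support
            items: {type: "string", minLength: 1, maxLength: 16}, // string length bounds
            minItems: 1, // array size bounds
            maxItems: 4
        },
        point: {
            type: "array",
            // `prefixItems` constrains positional item types (a fixed 2-tuple here)
            prefixItems: [{type: "number"}, {type: "number"}],
            minItems: 2,
            maxItems: 2
        },
        createdAt: {type: "string", format: "date-time"} // `format` support
    },
    minProperties: 1, // object size bounds
    additionalProperties: false // forbid unlisted keys
};

console.log(Object.keys(answerSchema.properties));
```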
This discussion was created from the release v3.3.0.