Add llamafile as a provider #10203
Merged: ishaan-jaff merged 7 commits into BerriAI:litellm_contrib_may_1_2025 from peteski22:peteski22/provider/llamafile on May 1, 2025
Conversation
ishaan-jaff approved these changes on May 1, 2025
LGTM
Merged 3fe8634 into BerriAI:litellm_contrib_may_1_2025 (6 checks passed)
ishaan-jaff added a commit that referenced this pull request on May 2, 2025:
* Update docs for OpenAI-compatible providers, add Llamafile docs, include Llamafile in the sidebar
* Add Llamafile as an LlmProviders enum
* Add llamafile as an OpenAI-compatible provider (in the list of compatible providers)
* Add Llamafile chat config and tests
* Wire up Llamafile

Co-authored-by: Peter Wilson <peter@mozilla.ai>
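As a rough sketch of what "OpenAI-compatible provider" means in the commit above: the llamafile/ prefix on the model string selects the provider, and the remainder is forwarded as the model field of a standard chat-completions request to the local server. This is an illustrative helper, not LiteLLM's actual code; the function name build_llamafile_request is hypothetical.

```python
def build_llamafile_request(model: str, messages: list[dict]) -> dict:
    """Sketch of provider routing for an OpenAI-compatible backend.

    Strips the "llamafile/" provider prefix; the remainder is the model
    name the local llamafile server expects in a chat-completions body.
    """
    prefix = "llamafile/"
    name = model[len(prefix):] if model.startswith(prefix) else model
    return {"model": name, "messages": messages}
```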
Title

Implement llamafile as a supported provider.

Relevant issues

Closes: #3225

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR:
* Tests added under the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details; run make test-unit) [https://docs.litellm.ai/docs/extras/contributing_code]

Type

🆕 New Feature
📖 Documentation
✅ Test

Changes

* Adds support for llamafile, including via model (e.g. model="llamafile/mistralai/Mistral-7B-Instruct-v0.2")
* For llamafile, the API key defaults to fake-api-key if it isn't configured by the user or available in the secret manager (under LLAMAFILE_API_KEY)
* For llamafile, the base API URL falls back to http://127.0.0.1:8080/v1 if it isn't configured by the user or available in the secret manager (under LLAMAFILE_API_BASE)