Update using_llms.md #1207

Merged
merged 1 commit on Jan 6, 2025
14 changes: 13 additions & 1 deletion docs/how_to_guides/using_llms.md
@@ -287,8 +287,20 @@ for chunk in stream_chunk_generator
```

## Other LLMs
As mentioned at the top of this page, over 100 LLMs are supported through our litellm integration, including (but not limited to):
- Anthropic
- AWS Bedrock
- Anyscale
- Huggingface
- Mistral
- Predibase
- Fireworks


Find your LLM in LiteLLM’s documentation [here](https://docs.litellm.ai/docs/providers). Then follow the same setup steps and set the same environment variables that LiteLLM’s guide describes, but invoke a `Guard` object instead of the litellm object.

Guardrails will pass those arguments through to litellm, run the guarding process, and return a validated outcome.
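
As an illustration, here is a minimal sketch of that flow for one of the providers above. The model name and the API key environment variable are placeholders taken from LiteLLM's provider pages; substitute whatever your chosen provider's page specifies.

```python
import os

from guardrails import Guard

# Set the environment variable named on LiteLLM's provider page
# (ANTHROPIC_API_KEY is just an example for Anthropic).
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

guard = Guard()

# Pass the same litellm-style arguments (model, messages, etc.) to the Guard call;
# Guardrails forwards them to litellm and validates the response.
result = guard(
    model="anthropic/claude-3-opus-20240229",  # placeholder model name from LiteLLM's docs
    messages=[{"role": "user", "content": "Tell me a fun fact about space."}],
)

print(result.validated_output)
```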

## Custom LLM Wrappers
If you're using an LLM that isn't natively supported by Guardrails and you don't want to use LiteLLM, you can build a custom LLM API wrapper. To use a custom LLM, create a function that accepts a positional argument for the prompt as a string, plus any other arguments you want to pass to the LLM API as keyword arguments. The function should return the output of the LLM API as a string.
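
For example, a wrapper along these lines matches the signature described above. This is a minimal sketch: the function body is a stand-in for your own API client, and the exact `Guard` call signature may differ across Guardrails versions.

```python
from guardrails import Guard


def my_llm_api(prompt: str, **kwargs) -> str:
    # Replace this body with a call to your own LLM API. The only contract
    # Guardrails relies on is: take the prompt (plus any keyword arguments)
    # and return the model's output as a string.
    return f"(model output for: {prompt})"


guard = Guard()

# Pass the wrapper as the callable; Guardrails invokes it with the prompt
# and keyword arguments, then runs validation on the returned string.
result = guard(
    my_llm_api,
    prompt="Tell me a fun fact about space.",
)
```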