How to switch providers in LangChain easily with DigitalOcean Gradient AI
This project uses uv for package management, so ensure it is installed.
Next, you will need a DigitalOcean Gradient™ AI Platform serverless inferencing key:
- Log in to the DigitalOcean Cloud console
- Click **Agent Platform** in the sidebar, and then click the **Serverless inference** tab
- Click **Create model access key** and follow the prompts to create the key
- Rename `.env.example` to `.env` and then paste the created key as the value for `DIGITALOCEAN_INFERENCE_KEY` (loaded in the sketch below)
Visit our Docs to see an up-to-date list of available Foundation and Embedding models.
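With the key in `.env`, a minimal sketch of calling a serverless inference model might look like the following. The `langchain_gradientai` import path, the `ChatGradientAI` constructor arguments, and the model name are assumptions here; adjust them to match your installed package and the models listed in the Docs.

```python
import os

from dotenv import load_dotenv  # from the python-dotenv package
from langchain_gradientai import ChatGradientAI  # assumed import path

# Load DIGITALOCEAN_INFERENCE_KEY from the .env file created above.
load_dotenv()

# Passing the key explicitly is an assumption; the class may also read it
# from the environment on its own.
llm = ChatGradientAI(
    model="llama3.3-70b-instruct",  # placeholder model name; see the Docs
    api_key=os.environ["DIGITALOCEAN_INFERENCE_KEY"],
)

print(llm.invoke("Hello from Gradient AI!").content)
```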
`FallbackChatGradientAI` (sketched below):

- Takes in a list of model names, ordered by preference.
- `_create_llm` builds a `ChatGradientAI` instance for a given model, so the class can switch models easily.
- `_invoke_with_retry` calls the LLM's `invoke` method, retrying on failure; after two failed attempts it raises a `RetryError` (from the `tenacity` library).
- `invoke` tries each model in turn. If `_invoke_with_retry` fails for a model (meaning every attempt for that model failed), the loop moves on to the next model. This repeats until a call succeeds or the last model fails, at which point it raises an exception with the last error message.
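A minimal sketch of the class described above, assuming the `ChatGradientAI` class and a plain string prompt; the exact import path and constructor arguments are assumptions, and the implementation in the repo may differ in detail.

```python
from langchain_gradientai import ChatGradientAI  # assumed import path
from tenacity import RetryError, retry, stop_after_attempt


class FallbackChatGradientAI:
    def __init__(self, models: list[str]):
        # Ordered list of model names; earlier entries are tried first.
        self.models = models

    def _create_llm(self, model: str) -> ChatGradientAI:
        # Build a ChatGradientAI client for the given model so callers
        # can switch models without changing any other code.
        return ChatGradientAI(model=model)

    @retry(stop=stop_after_attempt(2))
    def _invoke_with_retry(self, llm: ChatGradientAI, prompt: str):
        # tenacity retries this call; after two failed attempts it raises
        # RetryError instead of returning.
        return llm.invoke(prompt)

    def invoke(self, prompt: str):
        last_error: Exception | None = None
        for model in self.models:
            llm = self._create_llm(model)
            try:
                # Success on any model short-circuits the loop.
                return self._invoke_with_retry(llm, prompt)
            except RetryError as err:
                # Both attempts for this model failed; move on to the next one.
                last_error = err
        raise RuntimeError(f"All models failed; last error: {last_error}")
```

Usage would then look something like this, with placeholder model names:

```python
fallback_llm = FallbackChatGradientAI(
    models=["llama3.3-70b-instruct", "openai-gpt-4o"]  # placeholders
)
print(fallback_llm.invoke("Why use a fallback chain?"))
```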