Custom deployed model usage #283
Unanswered
uttambarthakur asked this question in Q&A
Replies: 1 comment
-
Yes, of course! You just need to pass the credentials and the right model string mapping for your model: https://enoch3712.github.io/ExtractThinker/examples/local-processing/ You can read more at LiteLLM, which is the underlying LLM-agnostic layer.
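For example, here is a minimal sketch of pointing ExtractThinker at a privately deployed, OpenAI-compatible endpoint through LiteLLM's model-string routing. The endpoint URL, API key, and model name are placeholders for your own deployment, and the exact environment-variable names follow LiteLLM's OpenAI-provider convention; check the linked docs for your setup.

```python
# A hedged sketch, assuming a LiteLLM-compatible OpenAI-style endpoint.
# Endpoint URL, key, and model name below are placeholders.
import os

from extract_thinker import Extractor

# Point the OpenAI-compatible client at your secured endpoint instead of api.openai.com.
os.environ["OPENAI_API_BASE"] = "https://your-secured-endpoint.example.com/v1"
os.environ["OPENAI_API_KEY"] = "your-private-api-key"

extractor = Extractor()

# The "openai/" prefix tells LiteLLM to speak the OpenAI-compatible protocol
# against the base URL above; the rest is your deployed model's name.
extractor.load_llm("openai/my-deployed-model")
```

If your deployment speaks a different protocol (Azure, Ollama, a local server, etc.), LiteLLM uses a different provider prefix in the model string; the local-processing example linked above shows the mapping.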
-
Hi,
Does ExtractThinker expose APIs to use custom models? My models are deployed in a secured environment, and I want to use their inference APIs instead of calling the OpenAI or Grok APIs directly.
How can I set up the connection string and credentials to use such models?