-
When I want to use custom pipelines that run on my locally deployed "pipelines" instance (http://localhost:9099), I set them up under "Manage OpenAI API Connections". This is somewhat counterintuitive, but my main pain is that I can no longer just offer users arbitrary OpenAI models.
Replies: 3 comments
-
Create a new connection, enter https://api.openai.com/v1 and your API key, and you get all the OpenAI models.
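The reason the same form works for both is that the official API, a local pipelines instance, and similar backends all expose the same OpenAI-compatible URL shape, so a connection is just a base URL plus a key. A minimal sketch of that idea (the base URLs are illustrative, and `models_url` is a hypothetical helper, not Open WebUI code):

```python
# Any OpenAI-compatible connection boils down to a base URL and a key;
# the client derives endpoints like the model listing from the base URL.
# Base URLs below are examples, not guaranteed defaults for every setup.

def models_url(base_url: str) -> str:
    """Build the model-listing endpoint for an OpenAI-compatible base URL."""
    return base_url.rstrip("/") + "/models"

for base in (
    "https://api.openai.com/v1",  # official OpenAI API
    "http://localhost:9099",      # local "pipelines" instance from this thread
):
    print(models_url(base))
```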
-
Thanks! On a wide screen, that small "+" icon is hardly visible, and the screen in general is not overly intuitive imo. I assumed there was a single "OpenAI API" that you could (de)activate (for whatever reason) and point at a single connection. Aren't all of these, including Ollama, just OpenAI-API-compatible endpoints, of which you can have as many as you want? Either way, now I've got it, thanks again!
-
You're welcome.