Local Inferencing Endpoint Only? Still references api.openai.com/v1/models #544
xJohnWhite asked this question in Q&A (unanswered)
Summary
I'm using an in-network, OpenAI API-compatible inferencing endpoint with the relevant environment variables set, but I'm still getting an authentication error from api.openai.com.
Details
I'm at an organization that doesn't allow the use of OpenAI. Fortunately, I have access to an internal inferencing endpoint that is OpenAI API-compatible.
Env check 1
I've got my internal API set up in my environment variables:
printenv | grep OPENAI
OPENAI_MODEL_NAME=llama27b
OPENAI_API_KEY=[myInternalKey]
OPENAI_API_BASE=https://[myEndpointDomain]/api/v1/
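For what it's worth, here's a quick check of which variable the client actually honors (a sketch assuming fabric uses the openai-python v1 SDK, where the constructor falls back to OPENAI_BASE_URL rather than the older OPENAI_API_BASE):

```python
import os
from openai import OpenAI  # assumes the openai-python v1 SDK

# With no explicit base_url argument, the v1 client falls back to the
# OPENAI_BASE_URL environment variable, NOT the older OPENAI_API_BASE,
# and otherwise defaults to https://api.openai.com/v1.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

print("resolved base_url:", client.base_url)
print("OPENAI_API_BASE  :", os.getenv("OPENAI_API_BASE"))
print("OPENAI_BASE_URL  :", os.getenv("OPENAI_BASE_URL"))
```

If that prints https://api.openai.com/v1 despite OPENAI_API_BASE being set, that would explain the behavior below.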
Env check 2
I've got the same variables set in the env as well.
api.openai.com authentication error
However, when I try to do anything, I get an authentication error against api.openai.com, as if that URL were hard-coded somewhere.
Am I doing something massively wrong?
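If I had to guess at a reproduction (a hypothetical sketch, not the actual fabric code path, assuming the v1 client is constructed without an explicit base_url), it would look like this:

```python
from openai import OpenAI, AuthenticationError

# Hypothetical reproduction: the client is built without base_url, so
# every request goes to api.openai.com, where the internal key is of
# course rejected.
client = OpenAI()  # base_url defaults to https://api.openai.com/v1

try:
    client.chat.completions.create(
        model="llama27b",  # the internal model name from the env above
        messages=[{"role": "user", "content": "ping"}],
    )
except AuthenticationError as err:
    print("401 from api.openai.com:", err)
```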
Oh, and what's with the ffmpeg/avconv error?
Previously mentioned #189
I saw the previously merged #189, but I'm using a git clone from today and checked installer/client/cli/utils.py to make sure the merge was still there. There's something in def agents(self, userInput): that does a hard-coded check on whether the model name is in self.model, but I'm hoping I'm not invoking that without a specific agent-creation pattern.
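To be concrete about the kind of check I mean, it's roughly this shape (a hypothetical paraphrase from memory, not the actual utils.py source):

```python
def agents(self, userInput):
    # Hypothetical paraphrase of the gate described above; the real
    # fabric source differs. A substring check like this ignores
    # OPENAI_API_BASE entirely, so a custom endpoint never helps here.
    if "gpt" not in self.model:  # hard-coded model-name check
        raise ValueError(f"agents mode doesn't recognize model {self.model}")
    ...  # proceed with the agent flow
```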
Potential other OpenAI hard-code in installer/client/cli/utils.py
In installer/client/cli/utils.py, lines 43-45, there's a direct OpenAI invocation (the snippet isn't reproduced here).
I'm not familiar enough with the OpenAI invocation to know whether it picks up OPENAI_BASE_URL and OPENAI_API_MODEL from the environment. I've seen other people reference using non-localhost, non-Ollama servers, so I'm guessing I've got a misconfiguration somewhere.
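As far as I can tell, the v1 SDK picks up OPENAI_BASE_URL but not OPENAI_API_BASE, and it never reads a model name from the environment. In case it's useful, here's what I'd expect a construction that honors the internal endpoint to look like (a sketch, assuming the openai-python v1 SDK; the variable names are the ones from my environment above):

```python
import os
from openai import OpenAI

# Wire the internal endpoint through explicitly. The v1 SDK never reads
# OPENAI_API_BASE or OPENAI_MODEL_NAME on its own: base_url falls back
# only to OPENAI_BASE_URL, and the model must be passed on every call.
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url=os.environ["OPENAI_API_BASE"],  # e.g. https://[myEndpointDomain]/api/v1/
)

response = client.chat.completions.create(
    model=os.environ["OPENAI_MODEL_NAME"],  # e.g. llama27b
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```

Exporting OPENAI_BASE_URL instead of (or alongside) OPENAI_API_BASE should have the same effect without any code changes.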
Request for guidance and wisdom!
Any thoughts on what might be going on?
Replies: 1 comment

I'm wondering if having answered fabric --setup with an OPENAI_API_KEY is a problem when I'm only using my internal-network inferencing server. My assumption is that it was just setting ~/.config/fabric/.env.
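For context, a minimal sketch of what I believe that amounts to, assuming fabric loads the file with python-dotenv; note that load_dotenv by default does not override variables already exported in the shell, so a shell-exported key wins over the one written by fabric --setup:

```python
import os
from dotenv import load_dotenv  # python-dotenv

# Load ~/.config/fabric/.env the way a dotenv-based CLI typically would.
# load_dotenv() leaves existing environment variables untouched unless
# override=True is passed, so shell exports take precedence.
load_dotenv(dotenv_path=os.path.expanduser("~/.config/fabric/.env"))

print("OPENAI_API_KEY set :", "OPENAI_API_KEY" in os.environ)
print("OPENAI_API_BASE    :", os.environ.get("OPENAI_API_BASE"))
```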