OpenAI API key shared among all users #95
Replies: 2 comments 3 replies
-
@LeoLee429 Thank you for asking the question; @philippzagar can provide more context about this. But in a few sentences: you are right, creating a proxy is the way to go. You can also manually inject a custom token in the model parameters. @philippzagar, I think we can clarify this better and ideally highlight that the usage of a locally stored token should only be considered for testing and small-scale use cases. Maybe we should even aim to add documentation warnings about this and add the custom token injection right in the OpenAI target? I think we might also want to consider how we can more closely align SpeziLLMFog and SpeziLLMOpenAI.
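Roughly, such an injection could look like the following sketch; the `overwritingToken` parameter name is from memory and should be checked against the current `LLMOpenAIParameters` API:

```swift
import SpeziLLMOpenAI

// Sketch only: check the current `LLMOpenAIParameters` initializer for the
// exact parameter names before relying on this.
let remoteToken = "<token fetched from your backend>"   // never ship a hardcoded key

let schema = LLMOpenAISchema(
    parameters: .init(
        modelType: .gpt4o,              // example model identifier
        overwritingToken: remoteToken   // overrides the keychain-stored token
    )
)
```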
-
Hey @LeoLee429, thank you for your interest in integrating SpeziLLM into your application! 🚀

To expand on @PSchmiedmayer's earlier response, the recommended approach for your use case is to use a proxy service that manages your OpenAI key securely. Your mobile application then authenticates with this proxy using a mechanism such as Firebase Authentication tokens.

Our SpeziLLMFog module provides exactly this functionality: it authorizes with a Firebase token against a locally deployed proxy service, which forwards the request to the actual inference layer if the token is valid. It additionally supports automatic discovery of fog nodes on the local network, which can host and serve inference requests. Detailed documentation is available here.

Currently, SpeziLLMOpenAI does not support this proxy-based approach out of the box. However, it could be extended with small changes by reusing patterns already established in SpeziLLMFog.
Optionally, we could also support passing arbitrary headers for greater flexibility.
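To make this concrete, a hypothetical configuration after such an extension could look roughly like the following; note that `proxyURL` and `additionalHeaders` do not exist today and are placeholder names for the proposed additions:

```swift
import Foundation
import SpeziLLMOpenAI

// Hypothetical API sketch of the proposed extension; `proxyURL` and
// `additionalHeaders` are placeholder names and do not exist yet.
let firebaseIDToken = "<short-lived Firebase Auth ID token>"

let schema = LLMOpenAISchema(
    parameters: .init(
        modelType: .gpt4o,   // example model identifier
        proxyURL: URL(string: "https://llm-proxy.example.com/v1")!,   // your proxy endpoint
        additionalHeaders: [
            "Authorization": "Bearer \(firebaseIDToken)"   // verified by the proxy
        ]
    )
)
```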
This enhancement aligns well with ongoing discussions and related feature requests. These improvements would involve little effort and would not introduce breaking changes. I plan to work on implementing this functionality in the coming days. Thank you for identifying this gap and helping us improve the framework!
-
What Stanford Spezi module is your challenge related to?
SpeziLLM
Description
By my understanding, the current usage is mainly to ask users for an OpenAI API key and then store it in the keychain for future retrieval.

However, our app would like to provide LLM conversations with a single API key, so every user can use it without any technical knowledge (e.g., how to generate an API key; most of our clients are elderly). Hardcoding the API key directly in the app is bad practice.

The only way I can think of is to run the LLM module against a proxy hosted on our backend. I cannot find any guidelines for using the LLM package with a proxy. It probably requires some form of endpoint injection, and I will try to figure out how to build such a proxy as a Firebase Cloud Function.

Please help with this idea, or point me to a better way to achieve this functionality with the current code. Thank you so much!
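For context, the client-side flow I have in mind looks roughly like the sketch below; the Cloud Function URL is made up, and the FirebaseAuth call should be double-checked against the current SDK:

```swift
import FirebaseAuth
import Foundation

// Rough sketch: the app never sees the OpenAI key; it authenticates against
// our proxy with a short-lived Firebase ID token instead.
func callLLMProxy(prompt: String) async throws -> Data {
    guard let user = Auth.auth().currentUser else {
        throw URLError(.userAuthenticationRequired)
    }
    // Fetch a short-lived ID token identifying the signed-in user.
    let idToken = try await user.getIDToken()

    // Made-up Cloud Function URL; replace with the deployed proxy endpoint.
    var request = URLRequest(
        url: URL(string: "https://us-central1-myproject.cloudfunctions.net/llmProxy")!
    )
    request.httpMethod = "POST"
    request.setValue("Bearer \(idToken)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(["prompt": prompt])

    // The proxy verifies the token and forwards the request to OpenAI
    // using the single server-side API key.
    let (data, _) = try await URLSession.shared.data(for: request)
    return data
}
```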
Reproduction
N/A
Expected behavior
N/A
Additional context
No response
Code of Conduct
I agree to follow this project's Code of Conduct