LLM Proxy is a simple demo project that serves as a proxy for the OpenAI API and Google Gemini API. It also provides an additional custom endpoint for testing purposes, all of which are exposed via Swagger docs under the /api/v1/docs endpoint. This demo project was created in private during the development of LLM Connector, an official React ChatBotify plugin. It has since been made public to serve as a simple demo project (not just for plugin users, but also for anyone interested in a simple LLM proxy).
Note that LLM Proxy is not an official React ChatBotify project. That said, while issues and pull requests are welcome, support for this demo project is not guaranteed.
This demo project exposes a total of 5 endpoints, which are listed below:
- /api/v1/openai/chat/completions
- /api/v1/gemini/models/:model:generateContent
- /api/v1/gemini/models/:model:streamGenerateContent
- /api/v1/custom
- /api/v1/docs
The first 3 endpoints match those provided by OpenAI and Google Gemini. The 4th (custom) endpoint always returns "Hello World!" in a JSON response for testing. The 5th endpoint exposes the Swagger docs page for easy testing.
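For a quick sanity check, the sketch below (a hedged example, not part of the project itself) calls the custom endpoint with a plain fetch request. It assumes the proxy is running locally on port 8000, as in the deployment steps further below, and only logs the JSON body since the exact response shape is not documented here.

```ts
// Hypothetical smoke test against the custom endpoint.
// Assumes the proxy is running locally on port 8000 (the default used in this README).
async function pingCustomEndpoint(): Promise<void> {
  const response = await fetch("http://localhost:8000/api/v1/custom");
  if (!response.ok) {
    throw new Error(`Proxy returned HTTP ${response.status}`);
  }
  // The custom endpoint returns "Hello World!" wrapped in a JSON response;
  // the exact field name is not documented here, so we simply log the body.
  console.log(await response.json());
}

pingCustomEndpoint().catch(console.error);
```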
Technologies used by LLM Proxy are as below:
- NodeJS
- TypeScript
Deploying the project is simple with Docker.
- First, if you have not done so, create a `.env` file from the provided `.env.template` and update the variables.
- If you are using the project as it is (i.e. no intended code changes), then simply run the following command within the scripts folder and your deployment will be done automatically:

  ```bash
  ./deploy.sh llm-proxy
  ```

  Otherwise, if you wish to make code changes to the project, please read on.
- Once you are done with your code changes, build your own Docker image with the following command (take note to replace the `-t` tag with your own):

  ```bash
  docker build -t tjtanjin/llm-proxy .
  ```

- Upon creating your image, you may then start your container with the following command (remember to replace the image name below if you built your own image):

  ```bash
  docker run -d -p 8000:8000 --name llm-proxy --env-file .env tjtanjin/llm-proxy:main
  ```

  Note: The `.env` file configured in step 1 is passed in via the `--env-file` argument. This is true for the automated/scripted deployment in step 2 as well, so ensure that you have set up your configuration properly before passing in the file.
- Visit http://localhost:8000/api/v1/docs for the Swagger docs page!
- Finally, you may wish to update the deployment script to reference your own image/container if you would like an easier deployment workflow.
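As a rough illustration of how a client might talk to the deployed container, the sketch below sends a chat completion request through the OpenAI-compatible endpoint. It assumes the proxy forwards the standard OpenAI chat completions request body; the model name is only an example, and any API-key handling is assumed to be configured via your `.env` rather than shown here.

```ts
// Hypothetical example of proxying a chat completion request.
// Assumes the container from the steps above is listening on port 8000 and that
// the proxy forwards the standard OpenAI chat completions request body.
async function chatViaProxy(): Promise<void> {
  const response = await fetch("http://localhost:8000/api/v1/openai/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o-mini", // example model name, replace with one your key can access
      messages: [{ role: "user", content: "Say hello from the proxy!" }],
    }),
  });
  console.log(await response.json());
}

chatViaProxy().catch(console.error);
```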
The following section will guide you through setting up your own LLM Proxy.
- First, `cd` to the directory where you wish to store the project and clone this repository. An example is provided below:

  ```bash
  cd /home/user/exampleuser/projects/
  git clone https://github.com/tjtanjin/llm-proxy.git
  ```

- Once the project has been cloned, `cd` into it and install the required dependencies with the following command:

  ```bash
  npm install
  ```

- Following which, create (or copy) a `.env` file at the root of the project using the provided `.env.template` and update the relevant variables (e.g. API keys).
- You can also feel free to modify the other variables as you deem fit. Clear descriptions for the variables have been included in the `.env.template` file.
- When ready, launch away with the following command:

  ```bash
  npm run dev
  ```

- Visit http://localhost:8000/api/v1/docs for the Swagger docs page!
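Once the dev server is up, you can exercise the Gemini passthrough in the same way. The sketch below is a hedged example: the request body follows Google's generateContent format, and the model name is just a placeholder, so adjust both to whatever your configuration supports.

```ts
// Hypothetical example of proxying a Gemini generateContent request.
// Assumes the dev server from the steps above is listening on port 8000.
async function generateViaProxy(): Promise<void> {
  const model = "gemini-1.5-flash"; // placeholder model name, replace as needed
  const response = await fetch(
    `http://localhost:8000/api/v1/gemini/models/${model}:generateContent`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // Body follows Google's generateContent request shape.
      body: JSON.stringify({
        contents: [{ parts: [{ text: "Say hello from the proxy!" }] }],
      }),
    }
  );
  console.log(await response.json());
}

generateViaProxy().catch(console.error);
```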
Given the simplicity and narrowly scoped purpose of this project, there is no developer guide. Feel free to submit pull requests if you wish to make improvements or fixes.
Alternatively, you may contact me via Discord, or simply raise bugs or suggestions by opening an issue.
For any questions regarding the implementation of the project, you may reach out on Discord or drop an email to: cjtanjin@gmail.com.