Just the title; I'm very new to configuring Celery, Docker, Redis, and Mongo on a separate machine (say W), plus an external LLM setup. I have Ollama set up on W with different models to try out. I want to run the frontend on my laptop, and the backend with Celery, Mongo, and Redis on W. I have Redis set up in WSL. Wiring everything up without Docker seems to be a very involved process. Could a guide be written to simplify such a setup? For the external LLM I have seen a guide, so I can probably manage that, but for everything else I will likely spend 2-3 weeks learning and getting it working. Any help will be greatly appreciated!!
Replies: 1 comment
Got it working if anyone is interested. Here is my setup:

**Laptop:**
- Windows and VS Code
- Frontend, backend & worker

**Workstation (with optional GPU):**
- Windows
- Ollama (various LLM options)
- Memurai: Redis does not run natively on Windows. I tried it via WSL, but connecting from the host (or from my laptop over the intranet IP) was a big hassle. ChatGPT suggested Memurai, a Redis-compatible server for Windows, and so far it is working well.
- MongoDB
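Before wiring the full stack together, it helps to confirm the laptop can actually reach the workstation's Redis (Memurai) and MongoDB ports over the intranet. A minimal sketch using only the standard library, assuming the workstation's address is 192.168.1.214 and the default ports (6379 for Redis/Memurai, 27017 for MongoDB); substitute your own IP:

```python
# Quick TCP reachability check from the laptop to the workstation.
# No third-party clients needed; this only tests that the port accepts
# a connection, not that the service is fully healthy.
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (hypothetical workstation IP):
#   port_open("192.168.1.214", 6379)   # Redis/Memurai
#   port_open("192.168.1.214", 27017)  # MongoDB
```

If a port comes back unreachable, check that Memurai/MongoDB are bound to the intranet interface (not just 127.0.0.1) and that Windows Firewall allows inbound connections on those ports.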
Here is my working .env:

```
API_URL=http://127.0.0.1:7091
API_KEY=OLLAMA_NO_KEY
#LLM choices are: openai,azure_openai,sagemaker,huggingface,llama.cpp,anthropic,docsgpt,premai,groq,google
LLM_NAME=openai
#LLM_NAME=docsgpt
#below is used if LLM is hosted by "backend"
#MODEL_NAME=svk/docsgpt-7b
MODEL_NAM…
```
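The `LLM_NAME=openai` setting works here because Ollama exposes an OpenAI-compatible API under `/v1` (the API key is accepted but ignored, hence the `OLLAMA_NO_KEY` placeholder). A sketch of calling that endpoint directly from the laptop with only the standard library, to verify the model serving works before pointing the backend at it; the workstation IP (192.168.1.214), Ollama's default port 11434, and the model name are assumptions to adjust:

```python
# Sketch: query Ollama's OpenAI-compatible chat endpoint with urllib.
# Assumed address and model; replace with your workstation's IP and a
# model you have pulled (e.g. via `ollama pull`).
import json
import urllib.request

OLLAMA_BASE = "http://192.168.1.214:11434/v1"  # assumed workstation address

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model: str, prompt: str) -> str:
    """Send one chat turn and return the assistant's reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_BASE}/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Ollama ignores the key, but OpenAI-style clients send one.
            "Authorization": "Bearer OLLAMA_NO_KEY",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

If this round-trips, the same `API_URL`/`API_KEY` pair in the .env should work for the backend.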