LocalAI-based Slack bot setup error #1485
mohanramunc started this conversation in General
Replies: 2 comments
-
(base) mohanram@RENCI_FH9R2L3XV0 slack-qa-bot_ida % docker compose up
How do I fix this?
-
I have two questions. First, how would I get this bot to work from documentation that lives in a .txt or .pdf file? Second, when I use a documentation wiki link in the .env file while setting up the slack-qa-bot example, I get the error below, and it keeps repeating in a loop. How do I fix it?
Also, when I use the default Kairos doc link for building the bot, it does not respond in Slack.
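On the first question: the example as shipped fetches documentation from a URL, so indexing a local .txt or .pdf would mean loading the file yourself and splitting it into chunks before embedding. A minimal sketch of the chunking step only, assuming nothing about the example's actual code (the name `chunk_text` and the chunk sizes are illustrative):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping chunks suitable for embedding.

    Overlap keeps sentences that straddle a chunk boundary retrievable
    from either side.
    """
    chunks = []
    start = 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

# Example: a 1500-character document yields 4 overlapping chunks.
document = "word " * 300  # stand-in for text read from a .txt file
chunks = chunk_text(document)
print(len(chunks))  # → 4
```

For a .pdf you would first extract the text (e.g. with a PDF library) and then feed the result through the same chunking and embedding path the example already uses for web pages.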
Last login: Sun Dec 24 07:26:14 on ttys000
(base) mohanram@RENCI_FH9R2L3XV0 ~ %
cd LocalAI/examples/slack-qa-bot
(base) mohanram@RENCI_FH9R2L3XV0 slack-qa-bot % docker compose up
[+] Running 2/0
✔ Container slack-qa-bot-api-1 Created 0.0s
✔ Container slackbot Running 0.0s
Attaching to api-1, slackbot
slackbot | Getting requirements to build wheel: finished with status 'done'
slackbot | Preparing metadata (pyproject.toml): started
slackbot | Preparing metadata (pyproject.toml): finished with status 'done'
slackbot | Requirement already satisfied: numpy in /usr/local/lib/python3.11/site-packages (from hnswlib==0.8.0) (1.25.0)
slackbot | Building wheels for collected packages: hnswlib
slackbot | Building wheel for hnswlib (pyproject.toml): started
api-1 | @@@@@
api-1 | Skipping rebuild
api-1 | @@@@@
api-1 | If you are experiencing issues with the pre-compiled builds, try setting REBUILD=true
api-1 | If you are still experiencing issues with the build, try setting CMAKE_ARGS and disable the instructions set as needed:
api-1 | CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF"
api-1 | see the documentation at: https://localai.io/basics/build/index.html
api-1 | Note: See also #288
api-1 | @@@@@
api-1 | CPU info:
api-1 | CPU: no AVX found
api-1 | CPU: no AVX2 found
api-1 | CPU: no AVX512 found
api-1 | @@@@@
api-1 | 12:32PM INF Starting LocalAI using 4 threads, with models path: /models
api-1 | 12:32PM INF LocalAI version: v2.2.0 (9ae47d3)
api-1 | 12:32PM DBG Model: gpt-3.5-turbo (config: {PredictionOptions:{Model:ggml-gpt4all-j.bin Language: N:0 TopP:0.7 TopK:80 Temperature:0.2 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:gpt-3.5-turbo F16:false Threads:0 Debug:false Roles:map[] Embeddings:false Backend:gpt4all-j TemplateConfig:{Chat:gpt4all-chat ChatMessage: Completion:gpt4all-completion Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:1024 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:} CUDA:false})
api-1 | 12:32PM DBG Extracting backend assets files to /tmp/localai/backend_data
api-1 | 12:32PM DBG Checking "ggml-gpt4all-j.bin" exists and matches SHA
slackbot | Building wheel for hnswlib (pyproject.toml): finished with status 'done'
slackbot | Created wheel for hnswlib: filename=hnswlib-0.8.0-cp311-cp311-linux_x86_64.whl size=2146521 sha256=bf3049ea2e3d0c62172e516541b243a0693d6577938cc7d471b05cd5a16505a0
slackbot | Stored in directory: /tmp/pip-ephem-wheel-cache-_wz4f3ra/wheels/c9/07/d7/63ac52dfed32f71e2b79530f008401923f9ef0e8ac62d5e2d0
slackbot | Successfully built hnswlib
slackbot | Installing collected packages: hnswlib
slackbot | Successfully installed hnswlib-0.8.0
slackbot | WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
slackbot |
slackbot | [notice] A new release of pip is available: 23.1.2 -> 23.3.2
slackbot | [notice] To update, run: pip install --upgrade pip
slackbot | INFO:sentence_transformers.SentenceTransformer:Load pretrained SentenceTransformer: all-MiniLM-L6-v2
slackbot | INFO:sentence_transformers.SentenceTransformer:Use pytorch device: cpu
slackbot | INFO:langchain.document_loaders.web_base:fake_useragent not found, using default user agent.To get a realistic header for requests, pip install fake_useragent.
Fetching pages: 100%|##########| 88/88 [00:04<00:00, 20.20it/s]
api-1 | 12:33PM DBG File "/models/ggml-gpt4all-j.bin" already exists and matches the SHA. Skipping download
api-1 | 12:33PM DBG Prompt template "gpt4all-completion" written
api-1 | 12:33PM DBG Prompt template "gpt4all-chat" written
api-1 | 12:33PM DBG Written config file /models/gpt-3.5-turbo.yaml
api-1 |
api-1 | ┌───────────────────────────────────────────────────┐
api-1 | │ Fiber v2.50.0 │
api-1 | │ http://127.0.0.1:8080/ │
api-1 | │ (bound on host 0.0.0.0 and port 8080) │
api-1 | │ │
api-1 | │ Handlers ............ 74 Processes ........... 1 │
api-1 | │ Prefork ....... Disabled PID ................ 29 │
api-1 | └───────────────────────────────────────────────────┘
api-1 |
api-1 | [127.0.0.1]:33904 200 - GET /readyz
slackbot | INFO:chromadb:Running Chroma using direct local API.
slackbot | WARNING:chromadb:Using embedded DuckDB with persistence: data will be stored in: /tmp/memory_dir/chromadb
slackbot | INFO:clickhouse_connect.driver.ctypes:Successfully imported ClickHouse Connect C data optimizations
slackbot | INFO:clickhouse_connect.json_impl:Using python library for writing JSON byte strings
slackbot | INFO:chromadb.db.duckdb:No existing DB found in /tmp/memory_dir/chromadb, skipping load
slackbot | INFO:chromadb.db.duckdb:No existing DB found in /tmp/memory_dir/chromadb, skipping load
slackbot | Creating embeddings. May take some minutes...
api-1 | [192.168.65.1]:36271 404 - GET /
api-1 | [127.0.0.1]:46680 200 - GET /readyz
api-1 | [127.0.0.1]:35770 200 - GET /readyz
api-1 | [127.0.0.1]:47760 200 - GET /readyz
api-1 | [127.0.0.1]:54350 200 - GET /readyz
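On the error itself: the api-1 log above reports that no AVX, AVX2, or AVX512 instructions were found, and LocalAI's own hint suggests setting REBUILD=true with CMAKE_ARGS to disable those instruction sets. A sketch of an environment override for the api service, assuming the service is named `api` as in the log and that your docker-compose.yaml forwards these variables to the container:

```yaml
# docker-compose.yaml fragment (sketch) — values taken from the
# CMAKE_ARGS hint printed in the api-1 log above.
services:
  api:
    environment:
      - REBUILD=true
      - CMAKE_ARGS=-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF
```

After changing this, `docker compose up --build` would rebuild the backend for your CPU; see https://localai.io/basics/build/index.html. Note that "no AVX found" is common when running x86 images under emulation on Apple Silicon, which may itself be the root cause here.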