
Accessing an LLM via Ollama fails with error: [Errno -2] Name or service not known #2894

@mobguang

Description

Hello @Aries-ckt,

I launched DB-GPT 0.7.3 via Docker Compose on an M2 Ultra machine and added an LLM served by Ollama; please refer to the screenshot below for the settings.

Then I tried to chat with the LLM using "Chat Normal" and got the following error:

ERROR!Model server error!code=1, error msg is LLMServer Generate Error, Please CheckErrorInfo.: [Errno -2] Name or service not known

Please kindly provide a solution. Thanks in advance.

[Screenshot: Ollama model settings in the DB-GPT UI]
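
For reference, the failing call path can be reproduced outside DB-GPT with the ollama Python client, which is what the DB-GPT worker uses internally (see ollama/_client.py in the traceback below). This is a minimal sketch, assuming the api_base and model name shown in the settings; running it inside the DB-GPT container isolates the problem from DB-GPT itself:

    from ollama import Client

    # Same endpoint and model as configured in the DB-GPT model settings above.
    client = Client(host="http://host.docker.internal:11434")

    # Streams a short chat completion. This is expected to raise
    # httpx.ConnectError("[Errno -2] Name or service not known") before any
    # data arrives if the hostname cannot be resolved, matching the worker
    # error in the log below.
    for chunk in client.chat(
        model="gpt-oss:120b",
        messages=[{"role": "user", "content": "hi"}],
        stream=True,
    ):
        print(chunk["message"]["content"], end="", flush=True)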

The detailed Docker log is as follows (the error message is at the end):

2025-09-16 20:25:42.004 | 2025-09-16 12:25:42 b5d3d41263c5 dbgpt.model.cluster.worker.default_worker[1] INFO model_name: gpt-oss:120b, model_path: None, model_param_class: <class 'dbgpt.model.proxy.llms.ollama.OllamaDeployModelParameters'>
2025-09-16 20:25:42.004 | 2025-09-16 12:25:42 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Init empty instances list for gpt-oss:120b@llm
2025-09-16 20:25:42.004 | 2025-09-16 12:25:42 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Begin start all worker, apply_req: model='gpt-oss:120b' apply_type=<WorkerApplyType.START: 'start'> worker_type=<WorkerType.LLM: 'llm'> params={} apply_user=None
2025-09-16 20:25:42.004 | 2025-09-16 12:25:42 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Apply req: model='gpt-oss:120b' apply_type=<WorkerApplyType.START: 'start'> worker_type=<WorkerType.LLM: 'llm'> params={} apply_user=None, apply_func: <function LocalWorkerManager._start_all_worker.<locals>._start_worker at 0xffff56b1cd60>
2025-09-16 20:25:42.005 | 2025-09-16 12:25:42 b5d3d41263c5 dbgpt.model.cluster.worker.default_worker[1] INFO Begin load model, model params:
2025-09-16 20:25:42.005 |
2025-09-16 20:25:42.005 | =========================== OllamaDeployModelParameters ===========================
2025-09-16 20:25:42.005 |
2025-09-16 20:25:42.005 | name: gpt-oss:120b
2025-09-16 20:25:42.005 | provider: proxy/ollama
2025-09-16 20:25:42.005 | verbose: False
2025-09-16 20:25:42.005 | concurrency: 5
2025-09-16 20:25:42.005 | backend: None
2025-09-16 20:25:42.005 | prompt_template: None
2025-09-16 20:25:42.005 | context_length: None
2025-09-16 20:25:42.005 | reasoning_model: None
2025-09-16 20:25:42.005 | api_base: http://host.docker.internal:11434
2025-09-16 20:25:42.005 |
2025-09-16 20:25:42.005 | ======================================================================
2025-09-16 20:25:42.005 |
2025-09-16 20:25:42.005 |
2025-09-16 20:25:42.005 | 2025-09-16 12:25:42 b5d3d41263c5 dbgpt.model.adapter.proxy_adapter[1] INFO Load model from params:
2025-09-16 20:25:42.005 |
2025-09-16 20:25:42.005 | =========================== OllamaDeployModelParameters ===========================
2025-09-16 20:25:42.005 |
2025-09-16 20:25:42.005 | name: gpt-oss:120b
2025-09-16 20:25:42.005 | provider: proxy/ollama
2025-09-16 20:25:42.005 | verbose: False
2025-09-16 20:25:42.005 | concurrency: 5
2025-09-16 20:25:42.005 | backend: None
2025-09-16 20:25:42.005 | prompt_template: None
2025-09-16 20:25:42.005 | context_length: None
2025-09-16 20:25:42.005 | reasoning_model: None
2025-09-16 20:25:42.005 | api_base: http://host.docker.internal:11434
2025-09-16 20:25:42.005 |
2025-09-16 20:25:42.005 | ======================================================================
2025-09-16 20:25:42.005 |
2025-09-16 20:25:42.005 | llm client class: <class 'dbgpt.model.proxy.llms.ollama.OllamaLLMClient'>
2025-09-16 20:25:42.055 | INFO: 127.0.0.1:39858 - "POST /api/controller/models HTTP/1.1" 200 OK
2025-09-16 20:25:42.092 | 2025-09-16 12:25:42 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Model gpt-oss:120b startup successfully
2025-09-16 20:25:42.092 | 2025-09-16 12:25:42 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO There has model storage, save model storage
2025-09-16 20:25:42.101 | 2025-09-16 12:25:42 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Save model storage successfully
2025-09-16 20:25:42.101 | INFO: 192.168.65.1:19177 - "POST /api/v2/serve/model/models HTTP/1.1" 200 OK
2025-09-16 20:25:42.104 | 2025-09-16 12:25:42 b5d3d41263c5 dbgpt.model.cluster.controller.controller[1] INFO Get all instances with WorkerManager@service, healthy_only: True
2025-09-16 20:25:42.104 | 2025-09-16 12:25:42 b5d3d41263c5 dbgpt.model.cluster.controller.controller[1] INFO Get all instances with None, healthy_only: False
2025-09-16 20:25:42.104 | INFO: 192.168.65.1:19177 - "GET /api/v2/serve/model/models HTTP/1.1" 200 OK
2025-09-16 20:25:47.448 | 2025-09-16 12:25:47 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Begin shutdown model, shutdown_req: host='172.18.0.3' port=5670 model='gpt-oss:120b' worker_type=<WorkerType.LLM: 'llm'> params={} delete_after=False worker_name=None sys_code=None user_name=None
2025-09-16 20:25:47.448 | 2025-09-16 12:25:47 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Apply req: model='gpt-oss:120b' apply_type=<WorkerApplyType.STOP: 'stop'> worker_type=<WorkerType.LLM: 'llm'> params={} apply_user=None, apply_func: <function LocalWorkerManager._stop_all_worker.<locals>._stop_worker at 0xffff56b1ce00>
2025-09-16 20:25:47.449 | 2025-09-16 12:25:47 b5d3d41263c5 dbgpt.util.model_utils[1] WARNING Torch not installed, skip clear torch cache
2025-09-16 20:25:47.497 | INFO: 127.0.0.1:39878 - "DELETE /api/controller/models?model_name=gpt-oss%3A120b%40llm&host=172.18.0.3&port=5670&weight=1.0&check_healthy=true&healthy=false&enabled=true&prompt_template=&last_heartbeat=&remove_from_registry=false HTTP/1.1" 200 OK
2025-09-16 20:25:47.497 | 2025-09-16 12:25:47 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Model gpt-oss:120b shutdown successfully
2025-09-16 20:25:47.498 | INFO: 192.168.65.1:20378 - "POST /api/v2/serve/model/models/stop HTTP/1.1" 200 OK
2025-09-16 20:25:47.501 | 2025-09-16 12:25:47 b5d3d41263c5 dbgpt.model.cluster.controller.controller[1] INFO Get all instances with WorkerManager@service, healthy_only: True
2025-09-16 20:25:47.501 | 2025-09-16 12:25:47 b5d3d41263c5 dbgpt.model.cluster.controller.controller[1] INFO Get all instances with None, healthy_only: False
2025-09-16 20:25:47.501 | INFO: 192.168.65.1:20378 - "GET /api/v2/serve/model/models HTTP/1.1" 200 OK
2025-09-16 20:25:51.096 | 2025-09-16 12:25:51 b5d3d41263c5 dbgpt.model.cluster.worker.default_worker[1] INFO model_name: gpt-oss:120b, model_path: None, model_param_class: <class 'dbgpt.model.proxy.llms.ollama.OllamaDeployModelParameters'>
2025-09-16 20:25:51.096 | 2025-09-16 12:25:51 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Init empty instances list for gpt-oss:120b@llm
2025-09-16 20:25:51.096 | 2025-09-16 12:25:51 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Begin start all worker, apply_req: model='gpt-oss:120b' apply_type=<WorkerApplyType.START: 'start'> worker_type=<WorkerType.LLM: 'llm'> params={} apply_user=None
2025-09-16 20:25:51.096 | 2025-09-16 12:25:51 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Apply req: model='gpt-oss:120b' apply_type=<WorkerApplyType.START: 'start'> worker_type=<WorkerType.LLM: 'llm'> params={} apply_user=None, apply_func: <function LocalWorkerManager._start_all_worker.<locals>._start_worker at 0xffff56b1c680>
2025-09-16 20:25:51.097 | 2025-09-16 12:25:51 b5d3d41263c5 dbgpt.model.cluster.worker.default_worker[1] INFO Begin load model, model params:
2025-09-16 20:25:51.097 |
2025-09-16 20:25:51.097 | =========================== OllamaDeployModelParameters ===========================
2025-09-16 20:25:51.097 |
2025-09-16 20:25:51.097 | name: gpt-oss:120b
2025-09-16 20:25:51.097 | provider: proxy/ollama
2025-09-16 20:25:51.097 | verbose: False
2025-09-16 20:25:51.097 | concurrency: 5
2025-09-16 20:25:51.097 | backend: None
2025-09-16 20:25:51.097 | prompt_template: None
2025-09-16 20:25:51.097 | context_length: None
2025-09-16 20:25:51.097 | reasoning_model: None
2025-09-16 20:25:51.097 | api_base: http://host.docker.internal:11434
2025-09-16 20:25:51.097 |
2025-09-16 20:25:51.097 | ======================================================================
2025-09-16 20:25:51.097 |
2025-09-16 20:25:51.097 |
2025-09-16 20:25:51.097 | 2025-09-16 12:25:51 b5d3d41263c5 dbgpt.model.adapter.proxy_adapter[1] INFO Load model from params:
2025-09-16 20:25:51.097 |
2025-09-16 20:25:51.097 | =========================== OllamaDeployModelParameters ===========================
2025-09-16 20:25:51.097 |
2025-09-16 20:25:51.097 | name: gpt-oss:120b
2025-09-16 20:25:51.097 | provider: proxy/ollama
2025-09-16 20:25:51.097 | verbose: False
2025-09-16 20:25:51.097 | concurrency: 5
2025-09-16 20:25:51.097 | backend: None
2025-09-16 20:25:51.097 | prompt_template: None
2025-09-16 20:25:51.097 | context_length: None
2025-09-16 20:25:51.097 | reasoning_model: None
2025-09-16 20:25:51.097 | api_base: http://host.docker.internal:11434
2025-09-16 20:25:51.097 |
2025-09-16 20:25:51.097 | ======================================================================
2025-09-16 20:25:51.097 |
2025-09-16 20:25:51.097 | llm client class: <class 'dbgpt.model.proxy.llms.ollama.OllamaLLMClient'>
2025-09-16 20:25:51.151 | INFO: 127.0.0.1:56450 - "POST /api/controller/models HTTP/1.1" 200 OK
2025-09-16 20:25:51.188 | 2025-09-16 12:25:51 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Model gpt-oss:120b startup successfully
2025-09-16 20:25:51.188 | 2025-09-16 12:25:51 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO There has model storage, save model storage
2025-09-16 20:25:51.191 | 2025-09-16 12:25:51 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Save model storage successfully
2025-09-16 20:25:51.191 | INFO: 192.168.65.1:20378 - "POST /api/v2/serve/model/models/start HTTP/1.1" 200 OK
2025-09-16 20:25:51.194 | 2025-09-16 12:25:51 b5d3d41263c5 dbgpt.model.cluster.controller.controller[1] INFO Get all instances with WorkerManager@service, healthy_only: True
2025-09-16 20:25:51.194 | 2025-09-16 12:25:51 b5d3d41263c5 dbgpt.model.cluster.controller.controller[1] INFO Get all instances with None, healthy_only: False
2025-09-16 20:25:51.194 | INFO: 192.168.65.1:20378 - "GET /api/v2/serve/model/models HTTP/1.1" 200 OK
2025-09-16 20:25:54.558 | 2025-09-16 12:25:54 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Begin shutdown model, shutdown_req: host='172.18.0.3' port=5670 model='gpt-oss:120b' worker_type=<WorkerType.LLM: 'llm'> params={} delete_after=False worker_name=None sys_code=None user_name=None
2025-09-16 20:25:54.558 | 2025-09-16 12:25:54 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Apply req: model='gpt-oss:120b' apply_type=<WorkerApplyType.STOP: 'stop'> worker_type=<WorkerType.LLM: 'llm'> params={} apply_user=None, apply_func: <function LocalWorkerManager._stop_all_worker.<locals>._stop_worker at 0xffff56b1c680>
2025-09-16 20:25:54.559 | 2025-09-16 12:25:54 b5d3d41263c5 dbgpt.util.model_utils[1] WARNING Torch not installed, skip clear torch cache
2025-09-16 20:25:54.614 | INFO: 127.0.0.1:56482 - "DELETE /api/controller/models?model_name=gpt-oss%3A120b%40llm&host=172.18.0.3&port=5670&weight=1.0&check_healthy=true&healthy=false&enabled=true&prompt_template=&last_heartbeat=&remove_from_registry=false HTTP/1.1" 200 OK
2025-09-16 20:25:54.615 | 2025-09-16 12:25:54 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Model gpt-oss:120b shutdown successfully
2025-09-16 20:25:54.615 | INFO: 192.168.65.1:20378 - "POST /api/v2/serve/model/models/stop HTTP/1.1" 200 OK
2025-09-16 20:25:54.618 | 2025-09-16 12:25:54 b5d3d41263c5 dbgpt.model.cluster.controller.controller[1] INFO Get all instances with WorkerManager@service, healthy_only: True
2025-09-16 20:25:54.618 | 2025-09-16 12:25:54 b5d3d41263c5 dbgpt.model.cluster.controller.controller[1] INFO Get all instances with None, healthy_only: False
2025-09-16 20:25:54.618 | INFO: 192.168.65.1:20378 - "GET /api/v2/serve/model/models HTTP/1.1" 200 OK
2025-09-16 20:26:03.824 | 2025-09-16 12:26:03 b5d3d41263c5 dbgpt.model.cluster.worker.default_worker[1] INFO model_name: gpt-oss:120b, model_path: None, model_param_class: <class 'dbgpt.model.proxy.llms.ollama.OllamaDeployModelParameters'>
2025-09-16 20:26:03.825 | 2025-09-16 12:26:03 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Init empty instances list for gpt-oss:120b@llm
2025-09-16 20:26:03.826 | 2025-09-16 12:26:03 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Begin start all worker, apply_req: model='gpt-oss:120b' apply_type=<WorkerApplyType.START: 'start'> worker_type=<WorkerType.LLM: 'llm'> params={} apply_user=None
2025-09-16 20:26:03.827 | 2025-09-16 12:26:03 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Apply req: model='gpt-oss:120b' apply_type=<WorkerApplyType.START: 'start'> worker_type=<WorkerType.LLM: 'llm'> params={} apply_user=None, apply_func: <function LocalWorkerManager._start_all_worker.<locals>._start_worker at 0xffff56b1c7c0>
2025-09-16 20:26:03.827 | 2025-09-16 12:26:03 b5d3d41263c5 dbgpt.model.cluster.worker.default_worker[1] INFO Begin load model, model params:
2025-09-16 20:26:03.827 |
2025-09-16 20:26:03.827 | =========================== OllamaDeployModelParameters ===========================
2025-09-16 20:26:03.827 |
2025-09-16 20:26:03.827 | name: gpt-oss:120b
2025-09-16 20:26:03.827 | provider: proxy/ollama
2025-09-16 20:26:03.827 | verbose: False
2025-09-16 20:26:03.827 | concurrency: 5
2025-09-16 20:26:03.827 | backend: None
2025-09-16 20:26:03.827 | prompt_template: None
2025-09-16 20:26:03.827 | context_length: None
2025-09-16 20:26:03.827 | reasoning_model: None
2025-09-16 20:26:03.827 | api_base: http://host.docker.internal:11434
2025-09-16 20:26:03.827 |
2025-09-16 20:26:03.827 | ======================================================================
2025-09-16 20:26:03.827 |
2025-09-16 20:26:03.827 |
2025-09-16 20:26:03.828 | 2025-09-16 12:26:03 b5d3d41263c5 dbgpt.model.adapter.proxy_adapter[1] INFO Load model from params:
2025-09-16 20:26:03.828 |
2025-09-16 20:26:03.828 | =========================== OllamaDeployModelParameters ===========================
2025-09-16 20:26:03.828 |
2025-09-16 20:26:03.828 | name: gpt-oss:120b
2025-09-16 20:26:03.828 | provider: proxy/ollama
2025-09-16 20:26:03.828 | verbose: False
2025-09-16 20:26:03.828 | concurrency: 5
2025-09-16 20:26:03.828 | backend: None
2025-09-16 20:26:03.828 | prompt_template: None
2025-09-16 20:26:03.828 | context_length: None
2025-09-16 20:26:03.828 | reasoning_model: None
2025-09-16 20:26:03.828 | api_base: http://host.docker.internal:11434
2025-09-16 20:26:03.828 |
2025-09-16 20:26:03.828 | ======================================================================
2025-09-16 20:26:03.828 |
2025-09-16 20:26:03.828 | llm client class: <class 'dbgpt.model.proxy.llms.ollama.OllamaLLMClient'>
2025-09-16 20:26:03.867 | INFO: 127.0.0.1:40434 - "POST /api/controller/models HTTP/1.1" 200 OK
2025-09-16 20:26:03.902 | 2025-09-16 12:26:03 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Model gpt-oss:120b startup successfully
2025-09-16 20:26:03.902 | 2025-09-16 12:26:03 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO There has model storage, save model storage
2025-09-16 20:26:03.906 | 2025-09-16 12:26:03 b5d3d41263c5 dbgpt.model.cluster.worker.manager[1] INFO Save model storage successfully
2025-09-16 20:26:03.906 | INFO: 192.168.65.1:61412 - "POST /api/v2/serve/model/models/start HTTP/1.1" 200 OK
2025-09-16 20:26:03.909 | 2025-09-16 12:26:03 b5d3d41263c5 dbgpt.model.cluster.controller.controller[1] INFO Get all instances with WorkerManager@service, healthy_only: True
2025-09-16 20:26:03.909 | 2025-09-16 12:26:03 b5d3d41263c5 dbgpt.model.cluster.controller.controller[1] INFO Get all instances with None, healthy_only: False
2025-09-16 20:26:03.910 | INFO: 192.168.65.1:61412 - "GET /api/v2/serve/model/models HTTP/1.1" 200 OK
2025-09-16 20:26:21.925 | INFO: 192.168.65.1:44556 - "GET /chat?scene=chat_normal&id=1af26d1c-92f8-11f0-96f8-4e7ee5565a65&model=gpt-oss:120b HTTP/1.1" 307 Temporary Redirect
2025-09-16 20:26:21.935 | INFO: 192.168.65.1:44556 - "GET /chat/?scene=chat_normal&id=1af26d1c-92f8-11f0-96f8-4e7ee5565a65&model=gpt-oss:120b HTTP/1.1" 200 OK
2025-09-16 20:26:22.116 | 2025-09-16 12:26:22 b5d3d41263c5 dbgpt_app.openapi.api_v1.api_v1[1] INFO /controller/model/types
2025-09-16 20:26:22.116 | 2025-09-16 12:26:22 b5d3d41263c5 dbgpt.model.cluster.controller.controller[1] INFO Get all instances with None, healthy_only: True
2025-09-16 20:26:22.118 | INFO: 192.168.65.1:44556 - "GET /api/v1/model/types HTTP/1.1" 200 OK
2025-09-16 20:26:22.122 | INFO: 192.168.65.1:40863 - "GET /api/v1/question/list?is_hot_question=true HTTP/1.1" 200 OK
2025-09-16 20:26:22.123 | INFO: 192.168.65.1:42591 - "GET /api/v1/chat/dialogue/list HTTP/1.1" 200 OK
2025-09-16 20:26:22.125 | 2025-09-16 12:26:22 b5d3d41263c5 dbgpt_serve.agent.app.controller[1] INFO app_detail:chat_normal,chat_normal
2025-09-16 20:26:22.132 | INFO: 192.168.65.1:40863 - "GET /api/v1/chat/dialogue/messages/history?con_uid=1af26d1c-92f8-11f0-96f8-4e7ee5565a65 HTTP/1.1" 200 OK
2025-09-16 20:26:22.133 | INFO: 192.168.65.1:42591 - "GET /api/v1/app/info?chat_scene=chat_normal&app_code=chat_normal HTTP/1.1" 200 OK
2025-09-16 20:26:25.779 | 2025-09-16 12:26:25 b5d3d41263c5 dbgpt_app.openapi.api_v1.api_v1[1] INFO chat_completions:chat_normal,,gpt-oss:120b, timestamp=1758025585778
2025-09-16 20:26:25.787 | 2025-09-16 12:26:25 b5d3d41263c5 dbgpt_app.openapi.api_v1.api_v1[1] INFO get_chat_instance:conv_uid='1af26d1c-92f8-11f0-96f8-4e7ee5565a65' user_input='hi' user_name='001' chat_mode='chat_normal' app_code='chat_normal' temperature=0.6 max_new_tokens=4000 select_param='' model_name='gpt-oss:120b' incremental=False sys_code=None prompt_code=None ext_info={}
2025-09-16 20:26:25.788 | 2025-09-16 12:26:25 b5d3d41263c5 dbgpt.core._private.prompt_registry[1] INFO Get prompt template of scene_name: chat_normal with model_name: gpt-oss:120b, proxyllm_backend: None, language: en
2025-09-16 20:26:25.801 | 2025-09-16 12:26:25 b5d3d41263c5 dbgpt.core.awel.runner.local_runner[1] INFO Begin run workflow from end operator, id: bd8c762e-9add-4fdf-8d9c-049b12cab7c7, runner: <dbgpt.core.awel.runner.local_runner.DefaultWorkflowRunner object at 0xffff6ff8a490>
2025-09-16 20:26:25.801 | INFO: 192.168.65.1:42591 - "POST /api/v1/chat/completions HTTP/1.1" 200 OK
2025-09-16 20:26:25.802 | 2025-09-16 12:26:25 b5d3d41263c5 dbgpt.core.awel.runner.local_runner[1] INFO Begin run workflow from end operator, id: 20c99281-7ffb-476e-b66a-db869381f0ec, runner: <dbgpt.core.awel.runner.local_runner.DefaultWorkflowRunner object at 0xffff6ff8a490>
2025-09-16 20:26:25.805 | 2025-09-16 12:26:25 b5d3d41263c5 dbgpt.model.utils.token_utils[1] INFO tiktoken installed, using it to count tokens, tiktoken will download tokenizer from network, also you can download it and put it in the directory of environment variable TIKTOKEN_CACHE_DIR
2025-09-16 20:26:25.807 | 2025-09-16 12:26:25 b5d3d41263c5 dbgpt_app.scene.base_chat[1] INFO payload request:
2025-09-16 20:26:25.807 | ModelRequest(model='gpt-oss:120b', messages=[ModelMessage(role='system', content='You are a helpful AI assistant.', round_index=0), ModelMessage(role='human', content='hi', round_index=1), ModelMessage(role='human', content='你好', round_index=2), ModelMessage(role='human', content='hi', round_index=0)], temperature=0.6, top_p=None, max_new_tokens=4000, stop=None, stop_token_ids=None, context_len=None, echo=False, span_id='ffdd00176b6524d53bb755761aae9940:8daa740f5e24edd9', context=ModelRequestContext(stream=True, cache_enable=False, user_name='001', sys_code=None, conv_uid=None, span_id='ffdd00176b6524d53bb755761aae9940:8daa740f5e24edd9', chat_mode='chat_normal', chat_param=None, extra={}, request_id=None, is_reasoning_model=False))
2025-09-16 20:26:25.809 | 2025-09-16 12:26:25 b5d3d41263c5 dbgpt.core.awel.runner.local_runner[1] INFO Begin run workflow from end operator, id: a319c470-e39f-4870-b918-34db2b84902e, runner: <dbgpt.core.awel.runner.local_runner.DefaultWorkflowRunner object at 0xffff6ff8a490>
2025-09-16 20:26:25.810 | 2025-09-16 12:26:25 b5d3d41263c5 dbgpt.core.awel.operators.common_operator[1] INFO branch_input_ctxs 0 result None, is_empty: False
2025-09-16 20:26:25.810 | 2025-09-16 12:26:25 b5d3d41263c5 dbgpt.core.awel.operators.common_operator[1] INFO Skip node name llm_model_cache_node
2025-09-16 20:26:25.810 | 2025-09-16 12:26:25 b5d3d41263c5 dbgpt.core.awel.operators.common_operator[1] INFO branch_input_ctxs 1 result True, is_empty: False
2025-09-16 20:26:25.810 | 2025-09-16 12:26:25 b5d3d41263c5 dbgpt.core.awel.runner.local_runner[1] INFO Skip node name llm_model_cache_node, node id 2d13e2f0-dcfd-4b5c-8b45-2ec6663184aa
2025-09-16 20:26:25.812 | 2025-09-16 12:26:25 b5d3d41263c5 dbgpt.model.adapter.base[1] INFO Message version is v2
2025-09-16 20:26:25.814 | 2025-09-16 12:26:25 b5d3d41263c5 dbgpt.model.cluster.worker.default_worker[1] INFO current generate stream function is synchronous generate stream function
2025-09-16 20:26:25.815 | 2025-09-16 12:26:25 b5d3d41263c5 dbgpt.model.cluster.worker.default_worker[1] INFO llm_adapter: <_DynProxyLLMModelAdapter model_name=gpt-oss:120b model_path=None>
2025-09-16 20:26:25.815 |
2025-09-16 20:26:25.815 | model prompt:
2025-09-16 20:26:25.815 |
2025-09-16 20:26:25.815 | system: You are a helpful AI assistant.
2025-09-16 20:26:25.815 | human: hi
2025-09-16 20:26:25.815 | human: 你好
2025-09-16 20:26:25.815 | human: hi
2025-09-16 20:26:25.815 |
2025-09-16 20:26:25.815 | generate stream output:
2025-09-16 20:26:25.815 |
2025-09-16 20:26:26.089 | 2025-09-16 12:26:26 b5d3d41263c5 dbgpt.model.cluster.worker.default_worker[1] ERROR Model inference error, detail: Traceback (most recent call last):
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 101, in map_httpcore_exceptions
2025-09-16 20:26:26.089 | yield
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 250, in handle_request
2025-09-16 20:26:26.089 | resp = self._pool.handle_request(req)
2025-09-16 20:26:26.089 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 256, in handle_request
2025-09-16 20:26:26.089 | raise exc from None
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 236, in handle_request
2025-09-16 20:26:26.089 | response = connection.handle_request(
2025-09-16 20:26:26.089 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 101, in handle_request
2025-09-16 20:26:26.089 | raise exc
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 78, in handle_request
2025-09-16 20:26:26.089 | stream = self._connect(request)
2025-09-16 20:26:26.089 | ^^^^^^^^^^^^^^^^^^^^^^
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 124, in _connect
2025-09-16 20:26:26.089 | stream = self._network_backend.connect_tcp(**kwargs)
2025-09-16 20:26:26.089 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/httpcore/_backends/sync.py", line 207, in connect_tcp
2025-09-16 20:26:26.089 | with map_exceptions(exc_map):
2025-09-16 20:26:26.089 | File "/usr/lib/python3.11/contextlib.py", line 155, in __exit__
2025-09-16 20:26:26.089 | self.gen.throw(typ, value, traceback)
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
2025-09-16 20:26:26.089 | raise to_exc(exc) from exc
2025-09-16 20:26:26.089 | httpcore.ConnectError: [Errno -2] Name or service not known
2025-09-16 20:26:26.089 |
2025-09-16 20:26:26.089 | The above exception was the direct cause of the following exception:
2025-09-16 20:26:26.089 |
2025-09-16 20:26:26.089 | Traceback (most recent call last):
2025-09-16 20:26:26.089 | File "/app/packages/dbgpt-core/src/dbgpt/model/cluster/worker/default_worker.py", line 178, in generate_stream
2025-09-16 20:26:26.089 | for output in generate_stream_func(
2025-09-16 20:26:26.089 | File "/app/packages/dbgpt-core/src/dbgpt/model/proxy/llms/ollama.py", line 56, in ollama_generate_stream
2025-09-16 20:26:26.089 | for r in client.sync_generate_stream(request):
2025-09-16 20:26:26.089 | File "/app/packages/dbgpt-core/src/dbgpt/model/proxy/llms/ollama.py", line 136, in sync_generate_stream
2025-09-16 20:26:26.089 | for chunk in stream:
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/ollama/_client.py", line 163, in inner
2025-09-16 20:26:26.089 | with self._client.stream(*args, **kwargs) as r:
2025-09-16 20:26:26.089 | File "/usr/lib/python3.11/contextlib.py", line 137, in __enter__
2025-09-16 20:26:26.089 | return next(self.gen)
2025-09-16 20:26:26.089 | ^^^^^^^^^^^^^^
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/httpx/_client.py", line 868, in stream
2025-09-16 20:26:26.089 | response = self.send(
2025-09-16 20:26:26.089 | ^^^^^^^^^^
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/httpx/_client.py", line 914, in send
2025-09-16 20:26:26.089 | response = self._send_handling_auth(
2025-09-16 20:26:26.089 | ^^^^^^^^^^^^^^^^^^^^^^^^^
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/httpx/_client.py", line 942, in _send_handling_auth
2025-09-16 20:26:26.089 | response = self._send_handling_redirects(
2025-09-16 20:26:26.089 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
2025-09-16 20:26:26.089 | response = self._send_single_request(request)
2025-09-16 20:26:26.089 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/httpx/_client.py", line 1014, in _send_single_request
2025-09-16 20:26:26.089 | response = transport.handle_request(request)
2025-09-16 20:26:26.089 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 249, in handle_request
2025-09-16 20:26:26.089 | with map_httpcore_exceptions():
2025-09-16 20:26:26.089 | File "/usr/lib/python3.11/contextlib.py", line 155, in __exit__
2025-09-16 20:26:26.089 | self.gen.throw(typ, value, traceback)
2025-09-16 20:26:26.089 | File "//opt/.uv.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 118, in map_httpcore_exceptions
2025-09-16 20:26:26.089 | raise mapped_exc(message) from exc
2025-09-16 20:26:26.089 | httpx.ConnectError: [Errno -2] Name or service not known
2025-09-16 20:26:26.089 |
2025-09-16 20:26:26.115 | Traceback (most recent call last):
2025-09-16 20:26:26.115 | File "/app/packages/dbgpt-app/src/dbgpt_app/scene/base_chat.py", line 486, in stream_call
2025-09-16 20:26:26.115 | ai_response_text, view_message = await self._handle_final_output(
2025-09-16 20:26:26.115 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-09-16 20:26:26.115 | File "/app/packages/dbgpt-app/src/dbgpt_app/scene/base_chat.py", line 572, in _handle_final_output
2025-09-16 20:26:26.115 | parsed_output = self.prompt_template.output_parser.parse_model_nostream_resp(
2025-09-16 20:26:26.115 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-09-16 20:26:26.115 | File "/app/packages/dbgpt-core/src/dbgpt/core/interface/output_parser.py", line 134, in parse_model_nostream_resp
2025-09-16 20:26:26.115 | raise ValueError(
2025-09-16 20:26:26.115 | ValueError: Model server error!code=1, error msg is LLMServer Generate Error, Please CheckErrorInfo.: [Errno -2] Name or service not known
2025-09-16 20:26:26.115 |
2025-09-16 20:26:26.116 | 2025-09-16 12:26:26 b5d3d41263c5 dbgpt_app.scene.base_chat[1] ERROR model response parse failed!Model server error!code=1, error msg is LLMServer Generate Error, Please CheckErrorInfo.: [Errno -2] Name or service not known
2025-09-16 20:26:26.167 | INFO: 192.168.65.1:42591 - "GET /api/v1/chat/dialogue/list HTTP/1.1" 200 OK
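
The [Errno -2] in the traceback comes from getaddrinfo (EAI_NONAME): the container failed to resolve the name host.docker.internal before any TCP connection was even attempted. Below is a minimal diagnostic sketch that can be run inside the DB-GPT container, assuming only the socket stdlib module and the httpx package the traceback shows is already installed:

    import socket

    import httpx  # already present in the DB-GPT image, per the traceback above

    API_BASE = "http://host.docker.internal:11434"  # api_base from the model settings

    # Step 1: name resolution. "[Errno -2] Name or service not known" is raised
    # by getaddrinfo before any connection attempt, so check this first.
    try:
        socket.getaddrinfo("host.docker.internal", 11434)
        print("DNS resolution OK")
    except socket.gaierror as exc:
        print(f"DNS resolution failed: {exc}")

    # Step 2: if the name resolves, verify that Ollama answers on the endpoint.
    # /api/tags lists the locally available models.
    try:
        resp = httpx.get(f"{API_BASE}/api/tags", timeout=5.0)
        print("Ollama reachable, status:", resp.status_code)
    except httpx.ConnectError as exc:
        print(f"Connection failed: {exc}")

If the first step fails, host.docker.internal is simply unknown inside the container. Docker Desktop for Mac normally provides this name, but it can be lost when a Compose service defines custom DNS or network settings; a commonly suggested workaround is to add extra_hosts: ["host.docker.internal:host-gateway"] to the webserver service in docker-compose.yml, or to point api_base at the host machine's LAN IP.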
