OpenAI not honoring the timeout parameter #27335
Replies: 1 comment 10 replies
Checked other resources
Commit to Help
Example Code
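(The original example code did not survive the page rendering. As a stand-in, here is a minimal sketch of the kind of instantiation described in the post below — the model name and timeout values are assumptions, not the author's actual code. In langchain-openai, `timeout` on `ChatOpenAI` is forwarded to the underlying OpenAI client.)

```python
try:
    from langchain_openai import ChatOpenAI

    # Hypothetical values: a 30-second request timeout with up to 2 retries.
    # `timeout` accepts a float or an httpx.Timeout for per-phase control.
    llm = ChatOpenAI(model="gpt-4o-mini", timeout=30, max_retries=2)
except ImportError:
    pass  # langchain-openai not installed in this environment
```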
Description
Hello @dosu. I want to set a timeout limit for my model to avoid very long waiting times, so I am trying to pass a timeout parameter when instantiating the ChatOpenAI class for my llm. Looking at the API call logs, it seems that whenever the OpenAI API is called by my code, the timeout I set is not honored and defaults to None. I have even tried setting the timeout at a session level with openai.timeout, but it again defaults to None. The example code I've provided contains the script with which I instantiate the ChatOpenAI class and shows how I use the llm further on in my code. Here are the logs:
2024-10-13 20:42:10,931 - httpcore.connection - DEBUG - connect_tcp.started host='api.openai.com' port=443 local_address=None timeout=None socket_options=None
2024-10-13 20:42:10,950 - httpcore.connection - DEBUG - connect_tcp.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0x7f4544d2a590>
2024-10-13 20:42:10,950 - httpcore.connection - DEBUG - start_tls.started ssl_context=<ssl.SSLContext object at 0x7f450ab899a0> server_hostname='api.openai.com' timeout=None
2024-10-13 20:42:10,972 - httpcore.connection - DEBUG - start_tls.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0x7f45392d8a50>
2024-10-13 20:42:10,973 - httpcore.http11 - DEBUG - send_request_headers.started request=<Request [b'POST']>
2024-10-13 20:42:10,973 - httpcore.http11 - DEBUG - send_request_headers.complete
2024-10-13 20:42:10,974 - httpcore.http11 - DEBUG - send_request_body.started request=<Request [b'POST']>
2024-10-13 20:42:10,975 - httpcore.http11 - DEBUG - send_request_body.complete
2024-10-13 20:42:10,975 - httpcore.http11 - DEBUG - receive_response_headers.started request=<Request [b'POST']>
2024-10-13 20:42:11,764 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Sun, 13 Oct 2024 20:42:14 GMT'), (b'Content-Type', b'text/event-stream; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'access-control-expose-headers', b'X-Request-ID'), (b'openai-organization', b'co3-kft'), (b'openai-processing-ms', b'537'), (b'openai-version', b'2020-10-01'), (b'x-ratelimit-limit-requests', b'10000'), (b'x-ratelimit-limit-tokens', b'200000'), (b'x-ratelimit-remaining-requests', b'9999'), (b'x-ratelimit-remaining-tokens', b'195442'), (b'x-ratelimit-reset-requests', b'8.64s'), (b'x-ratelimit-reset-tokens', b'1.367s'), (b'x-request-id', b'req_b72d1b4d58525e471babde5b982e68bf'), (b'strict-transport-security', b'max-age=31536000; includeSubDomains; preload'), (b'CF-Cache-Status', b'DYNAMIC'), (b'Set-Cookie', b'__cf_bm=Y5S79S7q.CD5W.O_ykKOtH.1g6CBXU69eadq2VV_Y0Q-1728852134-1.0.1.1-VZU_8Zs.EoGn8LaO1UhA5USkibTcg1h3B0m4xj9gMIGd7wMhC0H5BCvdbBHRISjUHHUHEshTkRZooF6If5O.xQ; path=/; expires=Sun, 13-Oct-24 21:12:14 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'), (b'X-Content-Type-Options', b'nosniff'), (b'Set-Cookie', b'_cfuvid=DM5W7HEyEGpacx7ZFB.jAxe1C0UYWe51EPYECunnSKA-1728852134066-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'), (b'Server', b'cloudflare'), (b'CF-RAY', b'8d2227a95b1f68af-BUD'), (b'alt-svc', b'h3=":443"; ma=86400')])
2024-10-13 20:42:11,765 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2024-10-13 20:42:11,766 - openai._base_client - DEBUG - HTTP Request: POST https://api.openai.com/v1/chat/completions "200 OK"
2024-10-13 20:42:11,767 - httpcore.http11 - DEBUG - receive_response_body.started request=<Request [b'POST']>
2024-10-13 20:42:21,657 - httpcore.http11 - DEBUG - receive_response_body.complete
2024-10-13 20:42:21,658 - httpcore.http11 - DEBUG - response_closed.started
2024-10-13 20:42:21,659 - httpcore.http11 - DEBUG - response_closed.complete
2024-10-13 20:42:21,689 - openai._base_client - DEBUG - Request options: {'method': 'post', 'url': '/embeddings', 'files': None, 'post_parser': <function Embeddings.create.<locals>.parser at 0x7f450b590900>, 'json_data': {'input': [[very long list of vectors]], 'model': 'text-embedding-ada-002', 'encoding_format': 'base64'}}
2024-10-13 20:42:21,698 - openai._base_client - DEBUG - Sending HTTP Request: POST https://api.openai.com/v1/embeddings
2024-10-13 20:42:21,698 - httpcore.connection - DEBUG - close.started
2024-10-13 20:42:21,707 - httpcore.connection - DEBUG - close.complete
2024-10-13 20:42:21,713 - httpcore.connection - DEBUG - connect_tcp.started host='api.openai.com' port=443 local_address=None timeout=None socket_options=None
2024-10-13 20:42:21,737 - httpcore.connection - DEBUG - connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x7f4539169f50>
2024-10-13 20:42:21,743 - httpcore.connection - DEBUG - start_tls.started ssl_context=<ssl.SSLContext object at 0x7f45454ce210> server_hostname='api.openai.com' timeout=None
2024-10-13 20:42:21,770 - httpcore.connection - DEBUG - start_tls.complete return_value=<httpcore._backends.sync.SyncStream object at 0x7f4539ff5b90>
2024-10-13 20:42:21,771 - httpcore.http11 - DEBUG - send_request_headers.started request=<Request [b'POST']>
2024-10-13 20:42:21,778 - httpcore.http11 - DEBUG - send_request_headers.complete
2024-10-13 20:42:21,783 - httpcore.http11 - DEBUG - send_request_body.started request=<Request [b'POST']>
2024-10-13 20:42:21,791 - httpcore.http11 - DEBUG - send_request_body.complete
2024-10-13 20:42:21,797 - httpcore.http11 - DEBUG - receive_response_headers.started request=<Request [b'POST']>
2024-10-13 20:42:22,551 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Sun, 13 Oct 2024 20:42:24 GMT'), (b'Content-Type', b'application/json'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'access-control-allow-origin', b'*'), (b'access-control-expose-headers', b'X-Request-ID'), (b'openai-model', b'text-embedding-ada-002'), (b'openai-organization', b'co3-kft'), (b'openai-processing-ms', b'33'), (b'openai-version', b'2020-10-01'), (b'strict-transport-security', b'max-age=31536000; includeSubDomains; preload'), (b'x-ratelimit-limit-requests', b'3000'), (b'x-ratelimit-limit-tokens', b'1000000'), (b'x-ratelimit-remaining-requests', b'2999'), (b'x-ratelimit-remaining-tokens', b'999394'), (b'x-ratelimit-reset-requests', b'20ms'), (b'x-ratelimit-reset-tokens', b'36ms'), (b'x-request-id', b'req_fbdb817bc253ba3eb71ac69ca9bc5769'), (b'CF-Cache-Status', b'DYNAMIC'), (b'X-Content-Type-Options', b'nosniff'), (b'Server', b'cloudflare'), (b'CF-RAY', b'8d2227ecdfc568bc-BUD'), (b'Content-Encoding', b'gzip'), (b'alt-svc', b'h3=":443"; ma=86400')])
2024-10-13 20:42:22,552 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
2024-10-13 20:42:22,558 - httpcore.http11 - DEBUG - receive_response_body.started request=<Request [b'POST']>
2024-10-13 20:42:22,655 - httpcore.http11 - DEBUG - receive_response_body.complete
2024-10-13 20:42:22,661 - httpcore.http11 - DEBUG - response_closed.started
2024-10-13 20:42:22,663 - httpcore.http11 - DEBUG - response_closed.complete
2024-10-13 20:42:22,668 - openai._base_client - DEBUG - HTTP Response: POST https://api.openai.com/v1/embeddings "200 OK" Headers({'date': 'Sun, 13 Oct 2024 20:42:24 GMT', 'content-type': 'application/json', 'transfer-encoding': 'chunked', 'connection': 'keep-alive', 'access-control-allow-origin': '*', 'access-control-expose-headers': 'X-Request-ID', 'openai-model': 'text-embedding-ada-002', 'openai-organization': 'co3-kft', 'openai-processing-ms': '33', 'openai-version': '2020-10-01', 'strict-transport-security': 'max-age=31536000; includeSubDomains; preload', 'x-ratelimit-limit-requests': '3000', 'x-ratelimit-limit-tokens': '1000000', 'x-ratelimit-remaining-requests': '2999', 'x-ratelimit-remaining-tokens': '999394', 'x-ratelimit-reset-requests': '20ms', 'x-ratelimit-reset-tokens': '36ms', 'x-request-id': 'req_fbdb817bc253ba3eb71ac69ca9bc5769', 'cf-cache-status': 'DYNAMIC', 'x-content-type-options': 'nosniff', 'server': 'cloudflare', 'cf-ray': '8d2227ecdfc568bc-BUD', 'content-encoding': 'gzip', 'alt-svc': 'h3=":443"; ma=86400'})
2024-10-13 20:42:22,674 - openai._base_client - DEBUG - request_id: req_fbdb817bc253ba3eb71ac69ca9bc5769
After this, the model returns the chunks and everything works as intended. What could be the problem? Thanks in advance.
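(Independent of whether the request-level timeout reaches httpx, a hard client-side deadline can be enforced by running the blocking call, e.g. `llm.invoke(...)`, in a worker thread. This is a generic standard-library sketch, not LangChain's own timeout mechanism, and the helper name is made up:)

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def call_with_deadline(fn, deadline_s, *args, **kwargs):
    """Run fn(*args, **kwargs), but stop waiting after deadline_s seconds.

    Raises concurrent.futures.TimeoutError if the deadline is exceeded.
    """
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, *args, **kwargs)
    try:
        return future.result(timeout=deadline_s)
    finally:
        # Return promptly; note the worker thread (and any in-flight
        # HTTP request inside it) is not forcibly killed.
        pool.shutdown(wait=False)
```

Caveat: this only bounds how long the caller blocks; the underlying request keeps running in the worker thread until it finishes on its own.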
System Info
LangChain info:
langchain==0.3.3
langchain-chroma==0.1.4
langchain-community==0.3.2
langchain-core==0.3.10
langchain-openai==0.2.2
langchain-text-splitters==0.3.0
Using Docker with Debian
Python 3.11.10