I'm encountering the following error for some requests when running the GuideLLM benchmark command, using version v0.2.1 with Python 3.12.9.

The command I'm using is:

```shell
guidellm benchmark \
  --target "" \
  --model "" \
  --processor "" \
  --rate-type sweep \
  --max-seconds 30 \
  --max-requests 128 \
  --data "prompt_tokens=512,output_tokens=512"
```

And the error I'm seeing intermittently is:

```
text_completions | ERROR - OpenAIHTTPBackend request with headers: {'Content-Type': 'application/json', 'Authorization': 'Bearer <valid token>'} and payload: {'prompt': '<valid prompt>', 'model': '<valid model>', 'stream': True, 'stream_options': {'include_usage': True}, 'max_tokens': 128, 'max_completion_tokens': 128, 'stop': None, 'ignore_eos': True} failed: Event loop is closed
```

Has anyone else encountered this issue? If so, was there a resolution or workaround that worked for you?

---

I saw the same error with the configuration above on multiple endpoints, one being an L40S running vLLM 0.8.2 that has been reliable for us, so I suspect it's on the testing side.
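For context, `Event loop is closed` is the message of the standard `RuntimeError` that asyncio raises when a coroutine is driven on an event loop that has already been shut down, which usually points at a lifecycle bug on the client side rather than at the server. A minimal sketch of the failure mode (this is not GuideLLM code; `fake_request` is a hypothetical stand-in for an in-flight request):

```python
import asyncio

async def fake_request():
    # Hypothetical stand-in for an in-flight streaming HTTP request.
    await asyncio.sleep(0)
    return "ok"

# Driving a coroutine on a loop that has already been closed reproduces
# the same RuntimeError seen in the log above.
loop = asyncio.new_event_loop()
loop.close()
coro = fake_request()
try:
    loop.run_until_complete(coro)
except RuntimeError as exc:
    print(exc)  # Event loop is closed
finally:
    coro.close()  # avoid a "coroutine was never awaited" warning
```

If the benchmark tears down its event loop while late responses are still streaming in, those stragglers would fail with exactly this error, which would be consistent with it appearing only intermittently and only for some requests.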