Optimize support for the Qwen (通义千问) API to enable streaming output (paragraph-by-paragraph answers) #1165
Unanswered
XSR-WatchPioneer asked this question in Q&A
Replies: 1 comment
- @XSR-WatchPioneer This cannot be resolved at this time; see:
- I use a third-party service compatible with the OpenAI API to enable the Qwen model, but cross-origin access (CORS) must be enabled, otherwise it does not work properly. However, once CORS is enabled, the model's answers are no longer streamed; nothing is displayed until the entire answer has been generated.
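For reference, OpenAI-compatible endpoints only stream when the request body explicitly sets `stream: true`; if the client drops that flag (for example when a CORS proxy rewrites the request), the server returns one complete response instead. A minimal sketch of the request body such a client sends — the model name `qwen-plus` is a placeholder assumption, not taken from this discussion:

```python
import json

# Minimal sketch of an OpenAI-compatible chat request body.
# "qwen-plus" is a hypothetical model name used for illustration.
payload = {
    "model": "qwen-plus",
    "messages": [{"role": "user", "content": "Hello"}],
    # Without this flag the server buffers and returns the full answer at once.
    "stream": True,
}

# The body is sent as JSON to {base_url}/chat/completions.
body = json.dumps(payload)
print(body)
```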
Configuration for Qwen:
- URL: https://dashscope.aliyuncs.com/compatible-mode/v1
- Provider: select the third-party option.
- Check the cross-origin (CORS) option.
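When streaming does work, OpenAI-compatible endpoints deliver the answer as server-sent events (SSE): each line is `data: {json}` carrying an incremental `delta`, terminated by `data: [DONE]`. A self-contained sketch of parsing such a stream, using simulated chunks rather than a live request (the chunk contents below are illustrative, not captured from the DashScope endpoint):

```python
import json

def parse_sse_chunks(raw_lines):
    """Collect incremental text deltas from OpenAI-style SSE lines."""
    pieces = []
    for line in raw_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        delta = json.loads(payload)["choices"][0]["delta"]
        if "content" in delta:
            pieces.append(delta["content"])
    return pieces

# Simulated chunks in the shape an OpenAI-compatible server emits.
sample = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"Hello"}}]}',
    'data: {"choices":[{"delta":{"content":", world"}}]}',
    'data: [DONE]',
]
print("".join(parse_sse_chunks(sample)))  # -> Hello, world
```

If the full answer arrives in a single non-SSE JSON response despite `stream: true`, a proxy or the CORS layer is likely buffering the response before forwarding it.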