Could you explain how alpaca-13b-plus is optimized for multi-turn dialogue? #330
qiguanqiang started this conversation in General
Replies: 0 comments
First of all, thank you for your open-source contribution.
I noticed in release v3.1 that the alpaca-13b-plus model can generate longer sentences in multi-turn dialogue. Was it specifically optimized for this, for example by training on a multi-turn dialogue dataset (belle-multiround-0.8m)?
Also, if multi-turn ability requires dedicated training, did the authors construct the training prompts from the multi-turn dataset in the same format as llama.cpp's interactive chat?
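To make the question concrete, here is a minimal sketch of what I mean by "constructing multi-turn training prompts": flattening one conversation record (e.g. from a dataset like belle-multiround-0.8m) into a single transcript-style string, similar to how llama.cpp's interactive mode accumulates turns. The `### Instruction:` / `### Response:` tags below follow the common Alpaca template and are my assumption, not the project's confirmed training format.

```python
# Hypothetical sketch: flatten a multi-turn conversation into one
# Alpaca-style training prompt. The role tags are an assumption
# borrowed from the standard Alpaca template, not a confirmed
# detail of how alpaca-13b-plus was actually trained.

def build_multiturn_prompt(turns):
    """turns: list of (user_utterance, assistant_reply) pairs."""
    parts = []
    for user, assistant in turns:
        parts.append(f"### Instruction:\n{user}\n\n### Response:\n{assistant}")
    # Earlier turns become context for later ones, mirroring how
    # llama.cpp keeps the whole transcript in the context window.
    return "\n\n".join(parts)

conversation = [
    ("What is the capital of France?", "The capital of France is Paris."),
    ("And its population?", "Paris has roughly 2.1 million residents."),
]
prompt = build_multiturn_prompt(conversation)
print(prompt)
```

Is this roughly the construction used, or is each turn trained as an independent instruction/response pair?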