
Optimize request dispatch for local-model translation #288

@aabbccgg

Description


Environment: Safari, self-compiled extension.
Via the custom interface I configured the local gpt-oss model served by LM Studio as the translation endpoint, with concurrency 1 and a 100 ms interval.

Request

(text, from, to, url, key) => [
  url,
  {
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${key}`,
    },
    method: "POST",
    body: JSON.stringify({
      model: "gpt-oss",
      messages: [
        { role: "system", content: "Reasoning: low" },
        { role: "system", content: "You are a professional, authentic machine translation engine." },
        { role: "user", content: `Translate the following source text from ${from || "auto"} to ${to}. Output translation directly without any additional text.\n\nSource Text: ${text}\n\nTranslated Text:` }
      ],
      temperature: 0,
    }),
  },
]

Response

(res, text, from, to) => {
  // Fall back to a placeholder ("翻译失败" = "translation failed") when the model returns nothing.
  const translated = res.choices?.[0]?.message?.content?.trim() || "翻译失败";
  return [translated, from === to];
}
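Since a local single-instance server typically handles only one generation at a time, one client-side workaround is to serialize requests so a second request is never in flight while the first is still running. A minimal sketch (hypothetical helper, not part of the extension's API; `makeSerialFetch` and `fetchImpl` are names invented here):

```javascript
// Hypothetical sketch: wrap a fetch-like function in a promise chain so that
// each call starts only after the previous one has settled. This guarantees
// the local model server never sees overlapping requests.
const makeSerialFetch = (fetchImpl) => {
  let queue = Promise.resolve();
  return (url, options) => {
    // Start this request only after everything already queued has finished.
    const result = queue.then(() => fetchImpl(url, options));
    // Keep the chain alive even if one request fails.
    queue = result.catch(() => {});
    return result;
  };
};
```

Callers would replace direct `fetch(url, options)` calls with `serialFetch(url, options)`; the trade-off is that total page-translation time becomes the sum of all request times, which is why batching (below in the issue) is the more interesting fix.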

There are currently two problems:

  1. During full-page translation, request 2 is sent before request 1 has finished. This interrupts the model's generation, and request 1 ends up returning an empty result.
  2. Page translation currently sends one text block per request, which is very inefficient for a local LLM. Could you support sending the texts as a list / JSON payload, so the local model can handle all the translation content in a single call? That would be much more efficient, and translating with full context gives more accurate results than translating block by block.
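If batched sending were supported, the custom hooks could look roughly like the sketch below. This is hypothetical: it assumes the extension would pass an array of segments to the request hook and hand the same array to the response hook, which the current API does not do, and `buildBatchRequest` / `parseBatchResponse` are names invented here.

```javascript
// Hypothetical batched request hook: pack all page segments into one chat
// request, asking the model to return a JSON array of translations.
const buildBatchRequest = (texts, from, to, url, key) => [
  url,
  {
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${key}`,
    },
    method: "POST",
    body: JSON.stringify({
      model: "gpt-oss",
      messages: [
        { role: "system", content: "Reasoning: low" },
        { role: "system", content: "You are a professional, authentic machine translation engine." },
        {
          role: "user",
          content:
            `Translate each string in the following JSON array from ${from || "auto"} to ${to}. ` +
            `Reply with only a JSON array of the same length containing the translations.\n\n` +
            JSON.stringify(texts),
        },
      ],
      temperature: 0,
    }),
  },
];

// Matching response hook: parse the JSON array the model returns, falling
// back to one "翻译失败" ("translation failed") placeholder per segment.
const parseBatchResponse = (res, texts) => {
  const raw = res.choices?.[0]?.message?.content?.trim() || "[]";
  try {
    const arr = JSON.parse(raw);
    return Array.isArray(arr) && arr.length === texts.length
      ? arr
      : texts.map(() => "翻译失败");
  } catch {
    return texts.map(() => "翻译失败");
  }
};
```

Besides the efficiency gain, a single batched request also sidesteps problem 1 entirely, since there is nothing concurrent left to interrupt. The length check matters because a model may drop or merge segments; a mismatched array would otherwise desynchronize translations from their page blocks.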
