Feature Request: Enable Online Search Capability for DeepSeek-R1 Models #204

Open
@gaochangw

Description

I highly appreciate the efforts of the LM Studio developers in creating such a powerful and versatile local LLM inference solution. This issue was drafted by a local DeepSeek-R1-Qwen-14B model that runs on LM Studio.

Why This is Important:

Integrating an Online Search capability would make LM Studio more versatile and competitive for developers and businesses whose applications depend on up-to-date information. It would also align with the growing trend of grounding LLM-based systems in external knowledge sources.

Proposed Implementation:

  1. Configuration Option: Add an option in LM Studio to enable Online Search when loading DeepSeek-R1 models (e.g., enableOnlineSearch: boolean).
  2. API Integration: Provide a seamless way to integrate with DeepSeek’s search capabilities through their API or direct model integration (a rough sketch of this flow follows the list below).
  3. Documentation Update: Include clear documentation on how to use the new feature, including any required dependencies or setup steps.
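
To make item 2 concrete, here is a minimal sketch of one way the flow behind enableOnlineSearch could work: before a prompt reaches the model, run a web search on the user’s message and inject the results as context. Everything below (the OnlineSearchOptions shape, the searchWeb helper, and the search endpoint URL) is illustrative only and is not part of any current LM Studio or DeepSeek API.

// Hypothetical option bag for the proposed feature.
interface OnlineSearchOptions {
  enableOnlineSearch: boolean;
  maxResults?: number; // how many search snippets to inject into the prompt
}

// Illustrative search helper; any web-search API could back this.
async function searchWeb(query: string, maxResults = 3): Promise<string[]> {
  const res = await fetch(
    `https://search.example.com/api?q=${encodeURIComponent(query)}&n=${maxResults}`,
  );
  const data = (await res.json()) as { snippets: string[] };
  return data.snippets;
}

// Sketch of what LM Studio could do when the flag is set: search on the
// user's message and prepend the results before handing it to the model.
async function augmentWithSearch(
  userMessage: string,
  opts: OnlineSearchOptions,
): Promise<string> {
  if (!opts.enableOnlineSearch) return userMessage;
  const snippets = await searchWeb(userMessage, opts.maxResults ?? 3);
  const context = snippets.map((s, i) => `${i + 1}. ${s}`).join("\n");
  return `Web search results:\n${context}\n\nQuestion: ${userMessage}`;
}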

Use Case Example:

Imagine developers using LM Studio for applications like chatbots, where access to real-time information is critical. With Online Search enabled:

const model = await client.llm.load("deepseek/r1-series", {
  enableOnlineSearch: true,
});

const response = await model.respond([
  { role: "system", content: "Answer the most recent updates on AI advancements." },
  { role: "user", content: "What are the latest developments in AI for 2024?" },
]);
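
Until a built-in enableOnlineSearch option exists, a similar effect can be approximated client-side by running the search yourself and prepending the results to the messages passed to model.respond, along the lines of the sketch above; the proposed flag would make that retrieval step a first-class, documented part of loading DeepSeek-R1 models.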
