
feat: support Ollama with synthetic-data-kit #4


Open · wants to merge 1 commit into base: main

Conversation

@rachittshah commented Apr 30, 2025

Ollama Backend Integration

Resolves #3

Changes

  • Added a dual-backend system with an abstract BaseLLMBackend class
  • Implemented OllamaBackend with OpenAI-compatible API support
  • Updated LLMClient to support backend selection (vLLM/Ollama)
  • Added Ollama configuration in YAML with model, API, and retry settings
  • Enhanced system-check command to display available Ollama models
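The dual-backend layout described above might look roughly like this. BaseLLMBackend, OllamaBackend, and LLMClient are names taken from the PR; the method names, the VLLMBackend class name, and the stubbed responses are assumptions for illustration (Ollama's OpenAI-compatible API does live under /v1 on port 11434 by default):

```python
# Sketch of the dual-backend design; method names and VLLMBackend are
# illustrative, not the PR's actual code.
from abc import ABC, abstractmethod


class BaseLLMBackend(ABC):
    """Common interface that every backend implements."""

    @abstractmethod
    def chat_completion(self, messages: list, model: str) -> str:
        ...


class OllamaBackend(BaseLLMBackend):
    """Talks to Ollama via its OpenAI-compatible endpoint."""

    def __init__(self, base_url: str = "http://localhost:11434/v1"):
        self.base_url = base_url

    def chat_completion(self, messages, model):
        # Real code would POST to f"{self.base_url}/chat/completions";
        # stubbed here so the sketch runs without a server.
        return f"[{model} via {self.base_url}]"


class VLLMBackend(BaseLLMBackend):
    """Default backend, pointing at a local vLLM server."""

    def __init__(self, base_url: str = "http://localhost:8000/v1"):
        self.base_url = base_url

    def chat_completion(self, messages, model):
        return f"[{model} via {self.base_url}]"


class LLMClient:
    """Selects a backend by name, mirroring the PR's vLLM/Ollama switch."""

    _BACKENDS = {"vllm": VLLMBackend, "ollama": OllamaBackend}

    def __init__(self, backend: str = "vllm"):
        self.backend = self._BACKENDS[backend]()
```

Keeping vLLM as the default key preserves existing behavior, which is consistent with the Migration note below.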

Testing

  • Tested with Ollama models: llama3.2, qwen3, phi3
  • Verified compatibility with existing QA, CoT, and summary generation
  • Confirmed model listing and availability checks
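The model listing and availability checks could be implemented along these lines. Ollama really does expose installed models at GET /api/tags; the helper name and the offline sample payload are illustrative, not the PR's actual code:

```python
# Sketch of a "list available Ollama models" check. Ollama serves installed
# models at GET /api/tags; `sample` lets the sketch run without a server.
import json
import urllib.request
from typing import Optional


def list_ollama_models(base_url: str = "http://localhost:11434",
                       sample: Optional[str] = None) -> list:
    """Return the names of installed Ollama models."""
    if sample is None:
        with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
            payload = json.load(resp)
    else:
        payload = json.loads(sample)
    return [m["name"] for m in payload.get("models", [])]


# Shape of a real /api/tags response, trimmed to the field used here.
SAMPLE = '{"models": [{"name": "llama3.2:latest"}, {"name": "phi3:latest"}]}'
```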

Documentation

  • Updated README with Ollama setup and usage instructions
  • Added backend-specific troubleshooting guides
  • Added example commands for both vLLM and Ollama workflows

Migration

No breaking changes. Existing vLLM users can continue using the default backend.
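A configuration along these lines would preserve that behavior; the PR only states that the Ollama settings live in YAML with model, API, and retry options, so the field names below are hypothetical:

```yaml
# Hypothetical layout; field names are illustrative, not the PR's schema.
llm:
  backend: vllm            # existing default, unchanged

ollama:
  model: llama3.2
  api_base: http://localhost:11434/v1
  max_retries: 3
```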

@facebook-github-bot (Contributor) commented:

Hi @rachittshah!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

@facebook-github-bot (Contributor) commented:

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@facebook-github-bot added the CLA Signed label (managed by the Meta Open Source bot) on Apr 30, 2025
@init27 (Contributor) commented Apr 30, 2025

Thank you SO MUCH for the instant PR, Rachit! I will test and review later today/tomorrow

@rachittshah (Author) commented:

Sounds good Sanyam!

Let me know if there are any changes needed, and we can pick up SGLang next :D

@rachittshah (Author) commented:

@init27 did you get a chance to review this PR?

@dhuebner commented:

@init27
vLLM installation might become very challenging on some platforms. It would be really nice to have Ollama support soon. Thanks!

@init27 (Contributor) commented May 16, 2025

Thanks for the follow up @rachittshah and @dhuebner!

I was planning to add CI/CD tests to main first, but that is taking a while, so let me review the PR now. Thanks again for the reminder!

@init27 (Contributor) left a comment:


This is looking great, I did a quick review and test:

  • We need to fix cli.py to work with both Ollama and vLLM so that the curate functions work
  • After that, we need to re-test the curate function

@arty-hlr (Contributor) commented:

Out of curiosity, why focus on individual backends when it could support OpenAI-compatible APIs instead, which would unlock a lot more backends, including Ollama?

@init27 (Contributor) commented May 18, 2025

@arty-hlr great question and a valid point. I think the main reason is that we want to focus on Llama model support.

IIRC, some of the model endpoints/providers are not OpenAI-compatible, and ideally we'd like to maximize support for them, hence this approach. But open to your suggestions, as always.

@dhuebner commented:

@init27 I just tried synthetic-data-kit with Ollama over api-endpoint and it worked well. Apart from some problems when running in a notebook, I was able to create and curate some data. Is the new version with api-endpoint already published?

@init27 (Contributor) commented May 28, 2025

@dhuebner thanks for the feedback and for trying it! Can you please open an issue or share the details of the notebook problems? I can take a look.

I've put out an alpha version: https://pypi.org/project/synthetic-data-kit/0.0.4a0/

@rachittshah (Author) commented:

@init27 what do you need from me to merge this PR?

Successfully merging this pull request may close these issues.

Feature request: support for SGLang and Ollama
5 participants