vessl-ai/llm-finetuning-agent-run-server

This repository contains the server code required for running the llm-finetuning-agent. It includes various components for evaluation, API serving, and chatbot functionality.

Prerequisites

  • Python 3.x
  • uv (Python package manager)

Setup

  1. Move into the application directory and install uv:

cd app
pip install uv

Running the Server

The server consists of multiple components that need to be run in sequence:

1. Evaluation

Run the evaluation script with specified metrics:

uv run eval.py --metrics relevancy correctness clarity professionalism --data-generation-method raft
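As an illustration of how the flags above fit together, the following sketch parses the same command line with argparse. This is not the repository's actual eval.py code; the metric names and the raft choice are taken from the command above, and everything else is an assumption.

```python
# Sketch of an argparse setup matching the eval.py invocation above.
# Illustrative only -- not the repository's actual implementation.
import argparse

parser = argparse.ArgumentParser(description="Evaluate a fine-tuned model")
parser.add_argument(
    "--metrics",
    nargs="+",  # accepts one or more metric names, as in the README command
    choices=["relevancy", "correctness", "clarity", "professionalism"],
    default=["relevancy"],
)
parser.add_argument(
    "--data-generation-method",
    choices=["raft"],  # "raft" is the only method shown in this README
    default="raft",
)

# Parse the exact arguments used in the command above.
args = parser.parse_args(
    ["--metrics", "relevancy", "correctness", "clarity", "professionalism",
     "--data-generation-method", "raft"]
)
print(args.metrics)                  # ['relevancy', 'correctness', 'clarity', 'professionalism']
print(args.data_generation_method)   # raft
```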

2. Run API Server

Start the main API server:

uv run uvicorn main:app --host 0.0.0.0 --port 8080 > run_server.log 2>&1 &
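Since the server is started in the background, a small readiness probe can confirm it is accepting connections before moving on. The sketch below uses only the Python standard library; the /docs path is an assumption (FastAPI's default interactive docs route), so adjust the URL if the app exposes a different endpoint.

```python
# Readiness probe for the API server started above.
# The /docs path assumes FastAPI's default docs route -- adjust as needed.
import urllib.request
import urllib.error


def is_server_up(url: str = "http://localhost:8080/docs",
                 timeout: float = 2.0) -> bool:
    """Return True if the server answers the probe URL with HTTP 2xx/3xx."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: server not ready.
        return False


if __name__ == "__main__":
    print("up" if is_server_up() else "down")
```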

3. Chatbot Serving

Start the chatbot server:

uv run python chatbot.py > chatbot.log 2>&1 &
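Both components above redirect their output to log files (run_server.log and chatbot.log), so a quick way to verify they started cleanly is to scan those logs for tracebacks. The helper below is a minimal sketch based only on the log file names used in the commands above.

```python
# Scan the log files written by the background processes above for
# obvious failures (Python tracebacks or ERROR lines).
from pathlib import Path


def log_has_errors(path: str) -> bool:
    """Return True if the log exists and contains a traceback or ERROR line."""
    p = Path(path)
    if not p.exists():
        return False
    text = p.read_text(errors="replace")
    return "Traceback" in text or "ERROR" in text


if __name__ == "__main__":
    for name in ("run_server.log", "chatbot.log"):
        status = "errors found" if log_has_errors(name) else "ok"
        print(f"{name}: {status}")
```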
