eventanilha82/myllmprojects

LLM & Embedding Parallel Pipeline

A robust, professional Python project for efficient parallel inference with LLMs (Large Language Models) and OpenAI embeddings, featuring automatic retry and execution statistics.


Features

  • Parallel calls for LLM (chat/completions) and embeddings
  • Automatic retry with backoff
  • Automatic time, throughput, token, and error statistics
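The repository's own implementation is not shown here, but the combination of features above can be sketched with the standard library alone. The function names (`with_retry`, `run_parallel`) and the stand-in workload are illustrative, not the project's actual API:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def with_retry(fn, attempts=3, base_delay=0.1):
    """Wrap fn so failures are retried with exponential backoff."""
    def wrapped(*args, **kwargs):
        for attempt in range(attempts):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if attempt == attempts - 1:
                    raise  # out of retries: surface the last error
                time.sleep(base_delay * 2 ** attempt)
    return wrapped

def run_parallel(fn, inputs, max_workers=8):
    """Apply fn to every input concurrently and collect basic stats."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(with_retry(fn), inputs))
    elapsed = time.perf_counter() - start
    stats = {
        "items": len(inputs),
        "seconds": elapsed,
        "throughput": len(inputs) / elapsed if elapsed else 0.0,
    }
    return results, stats

# Stand-in for a real OpenAI chat or embedding call:
results, stats = run_parallel(lambda text: text.upper(), ["a", "b", "c"])
```

In a real pipeline, the lambda would be replaced by a function that calls the OpenAI client, and the stats dictionary could additionally accumulate token counts and error totals per request.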

How to Use

  1. Create your .env file:

    OPENAI_API_KEY=your-openai-key-here
    
  2. Install dependencies:

    pip install -r requirements.txt

  3. Run the project:

    python main.py

The script main.py includes ready-to-run examples for parallel LLM and embedding functions.
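How the project reads the `.env` file internally is not shown (many projects use python-dotenv for this); as a hypothetical illustration, a minimal loader that lets a real environment variable take precedence over the file could look like:

```python
import os

def load_api_key(path=".env"):
    """Return OPENAI_API_KEY from the environment, else from a .env file.

    Minimal sketch only: real .env parsers also handle comments,
    quoting, and multiple variables.
    """
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        return key
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if line.startswith("OPENAI_API_KEY="):
                    return line.split("=", 1)[1]
    except FileNotFoundError:
        pass
    return None
```

If `load_api_key()` returns `None`, the script should fail fast with a clear message rather than let the OpenAI client raise an authentication error mid-run.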
