llm-fine-tuning
Here are 32 public repositories matching this topic...
- Collection of resources for fine-tuning Large Language Models (LLMs). (Updated Jan 12, 2025)
- Distributed reinforcement learning for LLM fine-tuning with multi-GPU utilization. (Updated Mar 12, 2025 - Python)
- Sustain-LC is a benchmarking environment for traditional, reinforcement-learning-based, and LLM-based control. (Updated Jun 19, 2025 - Jupyter Notebook)
- A sacred space for heartfelt conversations, where wisdom flows freely and memories gently fade like whispers at sunset. (Updated Apr 15, 2025 - HTML)
- Advanced data analysis with causality and reinforcement learning. (Updated Feb 27, 2025 - Jupyter Notebook)
- FlowerTune LLM on Coding Dataset. (Updated Feb 18, 2025 - Python)
- The Personal Knowledge Graph You Didn't Know You Already Wrote. (Updated May 12, 2025 - Python)
- ARC-Test-Time-Training (ARC-TTT). (Updated Jan 15, 2025 - Python)
- Chaining thoughts and LLMs to learn DNA structural biophysics. (Updated Mar 5, 2024 - Python)
- FlowerTune LLM on Medical Dataset. (Updated Dec 3, 2024 - Python)
- Clone your Discord friends with AI! (Updated May 27, 2025 - Python)
- (No description provided.) (Updated Mar 22, 2025 - Python)
- FlowerTune LLM on NLP Dataset. (Updated Jan 15, 2025 - Python)
- Comparing QLoRA, prompt tuning, and prefix tuning on Mistral-7B for medical instruction-following. (Updated Jun 28, 2025 - Jupyter Notebook)
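The techniques compared in the entry above differ in where the trainable parameters live. Prompt tuning, for instance, freezes the whole model and learns only a handful of virtual-token embeddings prepended to the input sequence (prefix tuning instead injects learned key/value prefixes into every attention layer). A minimal NumPy sketch of the prompt-tuning idea, with illustrative sizes that are not taken from the repository above:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_virtual, seq_len = 16, 4, 10  # illustrative sizes, not Mistral-7B's

# frozen input embeddings for a token sequence (stand-in for the model's embedder)
token_embeds = rng.normal(size=(seq_len, d_model))

# prompt tuning: these virtual-token embeddings are the ONLY trainable parameters
soft_prompt = rng.normal(size=(n_virtual, d_model)) * 0.02

def with_soft_prompt(embeds):
    # prepend the learned virtual tokens; the frozen model then processes
    # the longer sequence as if those tokens were real input
    return np.concatenate([soft_prompt, embeds], axis=0)

out = with_soft_prompt(token_embeds)
assert out.shape == (n_virtual + seq_len, d_model)
```

Because only `n_virtual * d_model` values are trained, the per-task storage cost is tiny compared to full fine-tuning.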
- Django implementation of CodeBERT for detecting vulnerable code. (Updated Dec 29, 2023 - Python)
- Schematic blueprint for fine-tuning an LLM (e.g. Qwen or Llama) for text classification using LoRA. The output model can keep the original head or use a modified one (e.g. for SequenceClassification). (Updated Jan 20, 2025 - Python)
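The LoRA technique named in the entry above can be sketched in plain NumPy: the frozen pretrained weight W is augmented with a scaled low-rank update (alpha/r) * B A, where B is zero-initialized so training starts exactly at the base model's behavior. This is an illustrative sketch of the math, not code from that repository:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 32, 4, 8  # illustrative sizes; rank r << d_in

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized

def lora_forward(x):
    # base path plus scaled low-rank update: W x + (alpha / r) * B (A x)
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# with B == 0, the adapted layer matches the frozen base layer exactly
assert np.allclose(lora_forward(x), W @ x)
```

Only A and B (about `r * (d_in + d_out)` values) are updated during training, which is why LoRA adapters are cheap to store and swap per task.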
- Notebooks, resources, and documentation used to develop and evaluate models for the Automated Essay Scoring (AES) Kaggle competition; the project aims to build an open-source solution for automated essay evaluation to support educators and provide timely feedback to students. (Updated Dec 30, 2024 - Jupyter Notebook)
- Showcase of a fine-tuned model trained for a specific task or audience. (Updated May 21, 2025 - Jupyter Notebook)
- Implemented and fine-tuned BERT for a custom sequence classification task, leveraging LoRA adapters for efficient parameter updates and 4-bit quantization to optimize performance and resource utilization. (Updated Dec 30, 2024 - Jupyter Notebook)
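The 4-bit quantization mentioned in the entry above can be illustrated with a minimal symmetric round-to-nearest scheme: each weight is mapped to one of 16 integer levels plus a per-tensor scale. This is a simplified sketch, not the NF4 format that libraries such as bitsandbytes typically use for 4-bit LoRA setups:

```python
import numpy as np

def quantize_4bit(w):
    # symmetric round-to-nearest into 16 levels: integers in [-8, 7]
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    # recover an approximation of the original weights
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8)).astype(np.float32)
q, scale = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale)
# round-to-nearest bounds the error by half a quantization step
assert np.abs(w - w_hat).max() <= scale / 2 + 1e-6
```

Production schemes quantize per block rather than per tensor and use non-uniform levels, but the storage win is the same idea: 4 bits per weight instead of 16 or 32.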