[ICML 2025] Official implementation of the paper "Token Cleaning: Fine-Grained Data Selection for LLM Supervised Fine-Tuning"


Token Cleaning: Fine-Grained Data Selection for LLM Supervised Fine-Tuning

Jinlong Pang, Na Di, Zhaowei Zhu, Jiaheng Wei, Hao Cheng, Chen Qian, Yang Liu.

University of California, Santa Cruz


Brief Introduction

This project investigates token quality from a noisy-label perspective and proposes a generic token cleaning pipeline for SFT tasks. Our method filters out uninformative tokens while preserving those carrying key task-specific information. Specifically, we first evaluate token quality by examining the influence of model updates on each token, then apply a threshold-based separation. Token influence can be measured in a single pass with a fixed reference model or iteratively with self-evolving reference models.
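
As a minimal sketch of the scoring step, assuming HuggingFace-style causal LMs: the excess-loss proxy, the function names, and the threshold tau below are illustrative, not the repository's exact API.

import torch
import torch.nn.functional as F

def token_losses(model, input_ids):
    # Per-token next-token cross-entropy under `model`; shape (batch, seq_len - 1).
    with torch.no_grad():
        logits = model(input_ids).logits[:, :-1]   # position t predicts token t + 1
    return F.cross_entropy(
        logits.transpose(1, 2),                    # (batch, vocab, seq_len - 1)
        input_ids[:, 1:],
        reduction="none",
    )

def clean_labels(ref_model, base_model, input_ids, tau=0.0):
    # Token-quality score: how much the reference model improves over the base
    # model on each token. Low-scoring tokens are masked out of the SFT loss
    # by setting their labels to -100 (ignored by HuggingFace loss functions).
    scores = token_losses(base_model, input_ids) - token_losses(ref_model, input_ids)
    labels = input_ids[:, 1:].clone()
    labels[scores <= tau] = -100
    return labels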

Overview of the Token Cleaning Pipelines

  • Fixed-Model Cleaning: a one-shot cleaning pass over the entire dataset with a fixed reference model.

  • Self-Evolving Cleaning: an iterative approach in which the reference model is updated between cleaning rounds (both pipelines are sketched below).
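
The control flow of the two pipelines can be sketched as follows. Here clean, train, and warmup_model are hypothetical stand-ins for the logic in the shell scripts under Get Started (clean could be the clean_labels sketch above); this is an illustration of the structure, not the repository's code.

def fixed_model_cleaning(ref_model, pool, tau, clean, train):
    # One-shot: score and clean every sequence once with the fixed reference
    # model, then fine-tune a model on the cleaned pool.
    return train([clean(ref_model, example, tau) for example in pool])

def self_evolving_cleaning(chunks, tau, clean, train, warmup_model):
    # Iterative: the model trained on the chunks cleaned so far becomes the
    # reference model for cleaning the next chunk.
    model = warmup_model
    for chunk in chunks:
        cleaned = [clean(model, example, tau) for example in chunk]
        model = train(cleaned)
    return model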

🎉🎉 News

  • [2025.05.01] 🚀🚀 Accepted by ICML 2025.
  • [2025.04.01] 🚀🚀 Code released.

Environment Setup

To run training, evaluation, or inference for fine-tuned models, install the required packages (after installing PyTorch):

pip install -r requirements.txt

Dataset Preparation

The data pool (50k samples) is built on DS2, a recent data curation pipeline that selects samples using quality rating scores generated by LLMs. For convenience, the 50k samples we use can be accessed from Hugging Face via the link.
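
For example, the pool can be loaded with the datasets library; the repo id below is a placeholder, so substitute the Hugging Face dataset linked above.

from datasets import load_dataset

# Placeholder repo id -- replace with the Hugging Face dataset linked above.
pool = load_dataset("UCSC-REAL/tokencleaning-data-pool", split="train")
print(pool)  # ~50k SFT samples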

Our selected evaluation and training data are listed below.

Category          Datasets
Evaluation data   MMLU, TruthfulQA, TydiQA, HellaSwag, BoolQ, ARC-C, LogiQA
Training data     Flan v2, OASST1, WizardLM, Dolly, Stanford Alpaca

🚀🚀 Get Started

Note that our cleaning pipeline comes in two variants, Fixed-Model Cleaning and Self-Evolving Cleaning. Run them as follows:

# Fixed-model cleaning
bash get_ref_model.sh
bash fixed_model_cleaning.sh

# Self-evolving cleaning
bash self_evolving_cleaning.sh

The implementations of our baselines (full, random, and rho) can be found in the baselines directory.

Model Evaluation

Task performance is evaluated with the lm-eval-harness repository. For convenience, run:

bash run_eval.sh

Note that the lm-eval-harness repo does not include the TydiQA task; for that, we follow the original TydiQA code repo. The TydiQA dataset can be downloaded via prepare_eval_data.sh.
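
Alternatively, the harness's Python entry point can be called directly; the checkpoint path and task selection below are illustrative.

import lm_eval

# Illustrative: evaluate a fine-tuned checkpoint on a few of the tasks above
# (TydiQA is handled separately, as noted).
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=path/to/finetuned-model",
    tasks=["mmlu", "truthfulqa_mc2", "hellaswag", "boolq", "arc_challenge"],
    batch_size=8,
)
print(results["results"])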

Results Presentation

The tabular results can be printed via the read_results.ipynb Jupyter notebook.
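
Outside the notebook, the harness's JSON output can also be flattened into a table along these lines; the results path below is hypothetical.

import json
import pandas as pd

# Hypothetical output path; lm-eval-harness writes one JSON results file per run.
with open("results/finetuned-model/results.json") as f:
    per_task = json.load(f)["results"]  # {task: {metric: value}}
print(pd.DataFrame(per_task).T)         # tasks as rows, metrics as columns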

Citation

If you use this repository, please cite our work:

@article{pang2025token,
  title={Token Cleaning: Fine-Grained Data Selection for LLM Supervised Fine-Tuning},
  author={Pang, Jinlong and Di, Na and Zhu, Zhaowei and Wei, Jiaheng and Cheng, Hao and Qian, Chen and Liu, Yang},
  journal={arXiv preprint arXiv:2502.01968},
  year={2025}
}
