Jinlong Pang, Na Di, Zhaowei Zhu, Jiaheng Wei, Hao Cheng, Chen Qian, Yang Liu.
University of California, Santa Cruz
This project investigates token quality from a noisy-label perspective and proposes a generic token-cleaning pipeline for SFT tasks. Our method filters out uninformative tokens while preserving those carrying key task-specific information. Specifically, we first evaluate token quality by examining the influence of model updates on each token, then apply a threshold-based separation. Token influence can be measured in a single pass with a fixed reference model or iteratively with self-evolving reference models.
- **Fixed-Model Cleaning**: This pipeline applies a one-shot cleaning process to the entire dataset.
- **Self-Evolving Cleaning**: This pipeline follows an iterative approach, updating the reference model as cleaning proceeds.
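Concretely, token influence can be scored as the gain in per-token log-likelihood of a (warmed-up) reference model over the base model, followed by a threshold-based separation that masks low-influence tokens out of the SFT loss. The snippet below is a minimal sketch of this idea, assuming Hugging Face causal LMs; `base_model`, `ref_model`, and `keep_ratio` are illustrative names, not the repository's exact API.

```python
# A minimal sketch of threshold-based token cleaning, assuming the single-pass
# (fixed reference model) setup. The influence definition and all names below
# are illustrative, not the repository's implementation.
import torch
import torch.nn.functional as F

@torch.no_grad()
def per_token_logprobs(model, input_ids):
    """Log-probability the model assigns to each next token (length T-1)."""
    logits = model(input_ids.unsqueeze(0)).logits[0]         # (T, V), HF causal LM
    logp = F.log_softmax(logits[:-1], dim=-1)                # position t predicts token t+1
    return logp.gather(-1, input_ids[1:, None]).squeeze(-1)  # (T-1,)

@torch.no_grad()
def token_influence(base_model, ref_model, input_ids):
    """One proxy for the influence of model updates on each token: the gain in
    log-likelihood under the reference model relative to the base model."""
    return per_token_logprobs(ref_model, input_ids) - per_token_logprobs(base_model, input_ids)

def clean_labels(input_ids, influence, keep_ratio=0.6):
    """Threshold-based separation: keep the top `keep_ratio` fraction of tokens
    in the SFT loss and mask the rest with the ignore index (-100)."""
    tau = torch.quantile(influence.float(), 1.0 - keep_ratio)
    labels = input_ids.clone()
    labels[0] = -100                     # first token has no prediction target
    labels[1:][influence < tau] = -100   # uninformative tokens are ignored by cross-entropy
    return labels
```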
- [2025.02.01] 🚀🚀 Released the code of Token-Cleaning.
To run training, evaluation, or inference for fine-tuned models, install the required packages (after installing PyTorch):

```bash
pip install -r requirements.txt
```
The data pool (50k samples) is constructed with the data curation pipeline proposed by DS2, which selects samples using quality rating scores generated by LLMs. For convenience, the 50k samples we use can be accessed on Hugging Face via the link.
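For illustration, the pool can be loaded with the Hugging Face `datasets` library; the dataset ID below is a placeholder for the dataset behind the link above.

```python
# Placeholder snippet: replace "<org>/<dataset-name>" with the actual
# Hugging Face dataset ID linked above.
from datasets import load_dataset

pool = load_dataset("<org>/<dataset-name>", split="train")  # 50k curated samples
print(pool)
```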
Our selected evaluation and training data are listed below.
| Category | Dataset |
|---|---|
| Evaluation Data | MMLU, TruthfulQA, TydiQA, HellaSwag, BoolQ, ARC-C, LogiQA |
| Training Data | Flan v2, OASST1, WizardLM, Dolly, Stanford Alpaca |
Note that our cleaning pipeline consists of Fixed-Model Cleaning and Self-Evolving Cleaning. Run either one via:

```bash
# Fixed-model cleaning
bash fixed_model_cleaning.sh

# Self-evolving cleaning
bash self_evolving_cleaning.sh
```
Task performance is evaluated with the lm-evaluation-harness repository. For convenience, you can run the evaluation via:

```bash
bash run_eval.sh
```
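Alternatively, recent versions of lm-evaluation-harness (v0.4+) expose a Python API; something like the sketch below should work. The checkpoint path and task list are illustrative, not values from this repository.

```python
# Illustrative only: "output/cleaned_model" is a hypothetical checkpoint path,
# and the task list is a subset of the benchmarks above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=output/cleaned_model",
    tasks=["arc_challenge", "boolq", "hellaswag"],
    batch_size=8,
)
print(results["results"])
```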
The tabular results can be printed via the `read_results.ipynb` Jupyter notebook.
If you find this repository useful, please cite our work:
@article{pang2025token,
title={Token Cleaning: Fine-Grained Data Selection for LLM Supervised Fine-Tuning},
author={Pang, Jinlong and Di, Na and Zhu, Zhaowei and Wei, Jiaheng and Cheng, Hao and Qian, Chen and Liu, Yang},
journal={arXiv preprint arXiv:2502.01968},
year={2025}
}