
Data Efficacy for Language Model Training


[📜 Paper][🐱 GitHub Code][🤗 HF Model]


Figure 1. Average results across 8 benchmarks for different methods. Higher performance at the same selection ratio indicates higher efficacy, while achieving similar performance with a smaller selection ratio indicates higher efficiency. Our method excels in both efficacy and efficiency.

🌟 Introduction

Data is fundamental to the training of language models (LMs). Recent research has focused on data efficiency, which aims to maximize performance by selecting a minimal or optimal subset of training data; techniques such as data filtering, sampling, and selection play a crucial role here. To complement it, we define data efficacy, which focuses on maximizing performance by optimizing the organization of training data, a dimension that remains relatively underexplored. This work introduces DELT, a general paradigm for considering data efficacy in LM training that highlights the significance of training data organization. DELT comprises three components: Data Scoring, Data Selection, and Data Ordering.


Figure 2. DELT paradigm.

For data scoring, we design the Learnability-Quality Scoring (LQS) method, which considers both the learnability and quality of each data sample from a gradient-consistency perspective.


Figure 3. Learnability-Quality Scoring (LQS).

For data ordering, we devise the Folding Ordering (FO) method, which addresses issues such as model forgetting and data distribution bias.


Figure 4. Folding Ordering (FO).

📢 News and Updates

Done

  • 2025/06/28: 💥The arXiv paper was released.
  • 2025/08/31: 💥The DELT code for pre-training on the general domain was released.

TBD

  • Release the LQS data scorer model for the general domain (CommonCrawl).
  • Release the DELT code for post-training on specific domains.

⚙️ Environment Installation

conda create -n data_efficacy python=3.10 -y
conda activate data_efficacy
pip install -r requirements.txt

💾 Preparation

Environment Variables
export HF_TOKEN="<your_huggingface_token>"
export WANDB_API_KEY="<your_wandb_apikey>"
Dataset
python utils.py --content=dataset --id=$HF_DATASET_ID --save-dir=$OUTPUT_DATA_PATH

# e.g. python utils.py --content=dataset --id=togethercomputer/RedPajama-Data-1T --save-dir=data/source-cc-1b.jsonl --data-name=common_crawl --split-name=train --sample-size=500000
# If you want to try the dataset used in the paper, please use the command line below:
# python utils.py --content=dataset --id=togethercomputer/RedPajama-Data-1T-Sample --save-dir=data/source-cc-1b.jsonl
# You can also replace it with your own dataset in JSONL format.
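For reference, each line of the JSONL file should be a standalone JSON object; the exact field names depend on the dataset, so the record shown below is only a hypothetical illustration.

head -n 1 data/source-cc-1b.jsonl
# e.g. {"text": "Sample document ...", "source": "common_crawl"}   <- hypothetical record; check your dataset for the actual fields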
Model
python utils.py --content=model --id=$HF_MODEL_ID --save-dir=$OUTPUT_MODEL_PATH

# e.g. python utils.py --content=model --id=Data-Selection/BSL-160M --save-dir=models/mistral-160m
# You can also replace it with your own model in Hugging Face format.
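As a quick sanity check, a model directory in Hugging Face format typically contains a config file, tokenizer files, and the model weights; the listing below is only illustrative and the exact files may vary.

ls models/mistral-160m
# e.g. config.json  tokenizer.json  tokenizer_config.json  model.safetensors (or pytorch_model.bin)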

⏩ Quick Start

Data Scoring

Existing scoring methods: Learnability-Quality Score (lqs) and Perplexity (kenlm). For more details about LQS, please refer to this guideline.

bash data_scoring/entry.sh $INPUT_DATA_PATH $OUTPUT_DATA_PATH $METHOD $CONFIG_PATH

# e.g. bash data_scoring/entry.sh data/source-cc-1b.jsonl data/source-cc-1b_scored-lqs.jsonl lqs data_scoring/config/lqs.yaml
# Please note that LQS involves downloading gated Hugging Face models/datasets, so you need to configure access to them first.
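If you have not configured gated access yet, one common way (assuming you have already requested access to the relevant gated models/datasets on the Hugging Face Hub) is to log in with your token before running the script:

huggingface-cli login --token "$HF_TOKEN"
# Keeping HF_TOKEN exported (see Preparation) is usually sufficient as well.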
Data Selection

Existing selection methods: Top-R (top-r), Threshold (threshold), and Top-K (top-k).

bash data_selection/entry.sh $INPUT_DATA_PATH $OUTPUT_DATA_PATH $METHOD $CONFIG_PATH

# e.g. bash data_selection/entry.sh data/source-cc-1b_scored-lqs.jsonl data/source-cc-1b_scored-lqs_selected-r1.0.jsonl top-r data_selection/config/top-r.yaml
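A simple way to verify that the selection ratio was applied (a quick check, assuming one sample per line as in the example files above) is to compare line counts before and after selection:

wc -l data/source-cc-1b_scored-lqs.jsonl data/source-cc-1b_scored-lqs_selected-r1.0.jsonl
# With top-r and a ratio of 1.0 the counts should match; smaller ratios keep proportionally fewer lines.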
Data Ordering

Existing ordering methods: Folding Ordering (FO) (folding), Shuffle (shuffle), and Sorting (sorting).

bash data_ordering/entry.sh $INPUT_DATA_PATH $OUTPUT_DATA_PATH $METHOD $CONFIG_PATH

# e.g. bash data_ordering/entry.sh data/source-cc-1b_scored-lqs_selected-r1.0.jsonl data/source-cc-1b_scored-lqs_selected-r1.0_ordered-folding-l3.jsonl folding data_ordering/config/folding.yaml
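Ordering only permutes the selected samples, so the line count should stay the same; the spot-check below is a sketch that reuses the example file names above.

wc -l data/source-cc-1b_scored-lqs_selected-r1.0.jsonl data/source-cc-1b_scored-lqs_selected-r1.0_ordered-folding-l3.jsonl
# Peek at the first and last records to see where high- and low-scored samples ended up after folding:
head -n 1 data/source-cc-1b_scored-lqs_selected-r1.0_ordered-folding-l3.jsonl
tail -n 1 data/source-cc-1b_scored-lqs_selected-r1.0_ordered-folding-l3.jsonl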
Model Training
bash model_train/entry.sh $INPUT_DATA_PATH $INPUT_MODEL_PATH $OUTPUT_MODEL_PATH $METHOD $CONFIG_PATH

# e.g. bash model_train/entry.sh data/source-cc-1b_scored-lqs_selected-r1.0_ordered-folding-l3.jsonl models/mistral-160m models/pretrain_mistral-160m_source-cc-1b_scored-lqs_selected-r1.0_ordered-folding-l3_src pretrain model_train/config/train.yaml
Model Evaluation
bash model_eval/entry.sh $INPUT_MODEL_PATH $OUTPUT_RESULT_PATH $METHOD $CONFIG_PATH

# e.g. bash model_eval/entry.sh models/pretrain_mistral-160m_source-cc-1b_scored-lqs_selected-r1.0_ordered-folding-l3_src models/pretrain_mistral-160m_source-cc-1b_scored-lqs_selected-r1.0_ordered-folding-l3_src/result.yaml lm_evaluation_harness model_eval/config/general.yaml
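Putting the pieces together, the five steps above chain into a single pipeline. The sketch below simply replays the example commands with the example paths and configs from the previous sections (the output model directory is shortened to models/pretrain_mistral-160m_delt here for readability; any path works), so adjust the names to your own setup.

#!/bin/bash
# End-to-end DELT pipeline (scoring -> selection -> ordering -> training -> evaluation), reusing the example paths above.
set -e
bash data_scoring/entry.sh data/source-cc-1b.jsonl data/source-cc-1b_scored-lqs.jsonl lqs data_scoring/config/lqs.yaml
bash data_selection/entry.sh data/source-cc-1b_scored-lqs.jsonl data/source-cc-1b_scored-lqs_selected-r1.0.jsonl top-r data_selection/config/top-r.yaml
bash data_ordering/entry.sh data/source-cc-1b_scored-lqs_selected-r1.0.jsonl data/source-cc-1b_scored-lqs_selected-r1.0_ordered-folding-l3.jsonl folding data_ordering/config/folding.yaml
bash model_train/entry.sh data/source-cc-1b_scored-lqs_selected-r1.0_ordered-folding-l3.jsonl models/mistral-160m models/pretrain_mistral-160m_delt pretrain model_train/config/train.yaml
bash model_eval/entry.sh models/pretrain_mistral-160m_delt models/pretrain_mistral-160m_delt/result.yaml lm_evaluation_harness model_eval/config/general.yaml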

🔗 Citation

@article{dai2025data,
  title={Data Efficacy for Language Model Training},
  author={Yalun Dai and Yangyu Huang and Xin Zhang and Wenshan Wu and Chong Li and Wenhui Lu and Shijie Cao and Li Dong and Scarlett Li},
  journal={arXiv preprint arXiv:2506.21545},
  year={2025}
}

👀 License

This repository is licensed under the MIT License.
