This is the official repository for the supporting code for our paper Calibrated Language Models and How to Find Them with Label Smoothing, presented at ICML 2025.
This repository is built on top of two public repositories.
Each is included in its own folder and contains its own requirements.txt file. If you run into any issues during installation or while running the code, please refer to the original repositories.
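For example, setting up the open-instruct component in its own environment might look something like the following (using a virtual environment is only a suggestion, not a requirement of the repo):

```bash
# Create and activate an isolated environment for this component (optional but recommended)
python -m venv .venv-open-instruct
source .venv-open-instruct/bin/activate

# Install this component's dependencies from its own requirements.txt
pip install -r open-instruct/requirements.txt
```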
Here are some issues we ran into and how we resolved them, which may be helpful.
We ran into this and our solution was to install the package directly from the pre-built wheels here. Just match your torch, gcc, and CUDA versions. We generally use the abiFALSE version of any wheel.
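An install from a pre-built wheel looks something like the sketch below; the release and version strings are illustrative, so substitute the wheel that matches your Python, torch, and CUDA versions from the flash-attention releases page:

```bash
# Illustrative example only: substitute the wheel matching your Python, torch, CUDA, and ABI versions
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.6.3/flash_attn-2.6.3+cu123torch2.3cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
```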
You may have to add some handling code in `open-instruct/open_instruct/dataset_transformation.py` if your tokenizer isn't directly supported.
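The exact structure of dataset_transformation.py differs across open-instruct versions, so the following is only a minimal, hypothetical sketch of the kind of special-casing you might add; the function name and the specific fixes shown are illustrative, not open-instruct's actual API:

```python
# Hypothetical sketch: patch up a tokenizer that isn't directly supported.
# The function name and branches below are illustrative, not open-instruct's API.
from transformers import AutoTokenizer

def load_and_patch_tokenizer(model_name_or_path: str):
    tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)

    # Some tokenizers ship without a pad token, which downstream collation code expects.
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token

    # Attach a simple chat template if the tokenizer does not define one.
    if tokenizer.chat_template is None:
        tokenizer.chat_template = (
            "{% for message in messages %}"
            "{{ message['role'] }}: {{ message['content'] }}\n"
            "{% endfor %}"
        )
    return tokenizer
```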
We generally suggest using at least 4 NVIDIA A100 80GB GPUs for training models. For testing/benchmarking, a single 80GB GPU is sufficient, but this can vary depending on the model (Gemma 2 does not use flash-attention and may therefore require more resources).
@inproceedings{huang2025calibrated,
  title={Calibrated Language Models and How to Find Them with Label Smoothing},
  author={Jerry Huang and Peng Lu and Qiuhao Zeng},
  booktitle={Forty-second International Conference on Machine Learning},
  year={2025},
  url={https://openreview.net/forum?id=soLNj4l2EL}
}