ChestX-Reasoner: Advancing Radiology Foundation Models with Reasoning through Step-by-Step Verification.
| 📑 Paper | 🐱 Github Repo | 🐱 ChestX-Reasoner-7B | 🐱 RadRBench |
Ziqing Fan1,2 , Cheng Liang1,2 , Chaoyi Wu1,2 , Ya Zhang1,2, Yanfeng Wang1,2, Weidi Xie1,2
1 Shanghai Jiao Tong University, 2 Shanghai AI Laboratory.
The official codes for "ChestX-Reasoner: Advancing Radiology Foundation Models with Reasoning through Step-by-Step Verification".
In the following, we provide an overview and detailed guidance on the code used to train our ChestX-Reasoner and its variants.
- Note that the SFT stage requires at least 4 A100 80GB GPUs and takes about 2 days of training.
- Note that the RL stage requires at least 8 A100 80GB GPUs and takes about 3 days of training.
- vLLM for inference and the verl training engine are essential for reducing training time.
You can install the environment used to train our model as follows. Our code is built on the verl engine (https://github.com/volcengine/verl); see its documentation for more detailed instructions. In addition, we provide a copy of our environment list in ./env.txt.
conda create -n env_name python==3.10
conda activate env_name
pip3 install torch torchvision
pip3 install flash-attn --no-build-isolation
git clone https://github.com/volcengine/verl.git
cd verl
pip3 install -e .[vllm]
- Python: Version >= 3.9
- CUDA: Version >= 12.1
- VLLM: Version >= 0.7
cd ChestXReasoner
bash run_SFT.sh
Note that before running the script, you need to set the configs and data paths for your device. Please see the details in ./ChestXReasoner/readme.md
To be continued.
In eval/data, we present our benchmark construction code and our data.
We provide:
- The evaluation code for both reasoning and accuracy in eval/
- The baseline inference code in eval/inference
- The evaluation results on both reasoning and accuracy of all baselines in eval/res
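To illustrate the accuracy side of the evaluation, here is a minimal sketch of an exact-match answer scorer. The record format and field names (`prediction`, `answer`) are our own invention for illustration, not the actual schema used under eval/res.

```python
# Hypothetical evaluation records; the real files under eval/res
# may use a different schema.
records = [
    {"question_id": 1, "prediction": "Pneumonia", "answer": "pneumonia"},
    {"question_id": 2, "prediction": "no finding", "answer": "pleural effusion"},
    {"question_id": 3, "prediction": "cardiomegaly", "answer": "cardiomegaly"},
]

def answer_accuracy(records):
    """Exact-match accuracy after lowercasing and whitespace stripping."""
    norm = lambda s: s.strip().lower()
    correct = sum(norm(r["prediction"]) == norm(r["answer"]) for r in records)
    return correct / len(records)

print(f"accuracy = {answer_accuracy(records):.3f}")  # accuracy = 0.667
```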
If you find this work relevant to your research or applications, please feel free to cite it!
@article{fan2025chestx,
  title={ChestX-Reasoner: Advancing Radiology Foundation Models with Reasoning through Step-by-Step Verification},
  author={Fan, Ziqing and Liang, Cheng and Wu, Chaoyi and Zhang, Ya and Wang, Yanfeng and Xie, Weidi},
  journal={arXiv preprint arXiv:2504.20930},
  year={2025}
}