Welcome to the artifact documentation for our paper, LightCTS: A Lightweight Framework for Correlated Time Series Forecasting. This documentation outlines the steps required to reproduce our work.
Authors: Zhichen Lai, Dalin Zhang*, Huan Li*, Christian S. Jensen, Hua Lu, Yan Zhao
Paper Link: LightCTS: A Lightweight Framework for Correlated Time Series Forecasting
We trained all models on a server with an NVIDIA Tesla P100 GPU. Additionally, we conducted some inference experiments on an x86 device with a 380 MHz CPU to emulate resource-restricted environments.
We developed the code for experiments using Python 3.7.13 and PyTorch 1.13.0. You can install PyTorch following the instructions on the PyTorch website, tailored to your specific operating system, CUDA version, and computing platform. For example:
pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 torchaudio==0.13.0 -f https://download.pytorch.org/whl/torch_stable.html
After successfully installing PyTorch, you can install the remaining dependencies using:
pip install -r requirements.txt
Please note: if you encounter issues installing fvcore directly using pip, you can install it from its GitHub repository using:
pip install git+https://github.com/facebookresearch/fvcore.git
We tested LightCTS on four public multi-step correlated time series forecasting datasets and two public single-step correlated time series forecasting datasets.
Multi-Step Datasets:
| Dataset | Data Type | Download Link |
|---|---|---|
| PEMS04 | Traffic Flow | download |
| PEMS08 | Traffic Flow | download |
| METR-LA | Traffic Speed | download |
| PEMS-BAY | Traffic Speed | download |
Single-Step Datasets:
| Dataset | Data Type | Download Link |
|---|---|---|
| Solar | Solar Power Production | download |
| Electricity | Electricity Consumption | download |
To download all the datasets in one run, please follow these instructions:
Install the download library gdown:
pip install gdown
Run the script to download all the datasets:
python data_downloading.py
After downloading the datasets, move them to the './data' directory. The directory structure should appear as follows:
data
├─METR-LA
├─PEMS-BAY
├─PEMS04
├─PEMS08
├─solar.txt
├─electricity.txt
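Before launching any experiments, it can be worth verifying that everything landed in the right place. A small script along these lines can check the layout (the check_layout helper is illustrative, not part of the codebase):

```python
import os

# Expected entries under the data directory, per the tree above.
EXPECTED = [
    "METR-LA",           # directory
    "PEMS-BAY",          # directory
    "PEMS04",            # directory
    "PEMS08",            # directory
    "solar.txt",         # file
    "electricity.txt",   # file
]

def check_layout(root):
    """Return the expected entries that are missing under `root`."""
    return [name for name in EXPECTED
            if not os.path.exists(os.path.join(root, name))]

missing = check_layout("data")
if missing:
    print("Missing under ./data:", ", ".join(missing))
else:
    print("Dataset layout looks complete.")
```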
This section provides detailed steps to reproduce the multi-step and single-step forecasting experiments from our paper.
To replicate the multi-step traffic flow forecasting experiments presented in Table 5 of the paper, follow these instructions:
python "Multi-step/Traffic Flow/$DATASET_NAME/train_$DATASET_NAME.py" --device='cuda:0'
#python "Multi-step/Traffic Flow/PEMS04/train_PEMS04.py" --device='cuda:0'
#python "Multi-step/Traffic Flow/PEMS08/train_PEMS08.py" --device='cuda:0'
After the training phase concludes, a log summarizing the best model's performance on the test set will appear:
On average: Test MAE: ..., Test MAPE: ..., Test RMSE: ...
python "Multi-step/Traffic Flow/$DATASET_NAME/test_$DATASET_NAME.py" --device='cuda:0' --checkpoint=$CKPT_PATH
#python "Multi-step/Traffic Flow/PEMS04/test_PEMS04.py" --device='cuda:0' --checkpoint='./checkpoint.pth'
#python "Multi-step/Traffic Flow/PEMS08/test_PEMS08.py" --device='cuda:0' --checkpoint='./checkpoint.pth'
After the testing phase concludes, a log summarizing the tested model's performance will appear:
On average: Test MAE: ..., Test MAPE: ..., Test RMSE: ...
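For reference, the three reported metrics can be sketched in plain Python. This is a simplified, single-series version of what the test scripts compute over the full test set; masking of missing values is omitted:

```python
import math

def mae(y, y_hat):
    # Mean absolute error.
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)

def mape(y, y_hat):
    # Mean absolute percentage error, in percent.
    return 100.0 * sum(abs(a - b) / abs(a) for a, b in zip(y, y_hat)) / len(y)

def rmse(y, y_hat):
    # Root mean squared error.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y))

# Toy traffic-flow values, for illustration only.
y_true = [100.0, 120.0, 80.0]
y_pred = [110.0, 114.0, 80.0]
print(mae(y_true, y_pred), mape(y_true, y_pred), rmse(y_true, y_pred))
```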
python "Multi-step/Traffic Flow/$DATASET_NAME/lightness_metrics_$DATASET_NAME.py"
#python "Multi-step/Traffic Flow/PEMS04/lightness_metrics_PEMS04.py"
#python "Multi-step/Traffic Flow/PEMS08/lightness_metrics_PEMS08.py"
Upon completion, a log like the following one will display the number of parameters and FLOPs:
| module | #parameters or shape | #flops |
|:-------------------------------------------------------- |:-----------------------|:----------|
| model | 0.185M | 0.147G |
| Filter_Convs | 8.448K | 23.892M |
| Filter_Convs.0 | 2.112K | 9.431M |
| Filter_Convs.0.weight | (64, 16, 1, 2) | |
| Filter_Convs.0.bias | (64,) | |
...
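The per-module parameter counts in this table can be verified by hand. For example, for Filter_Convs.0, a convolution with weight shape (64, 16, 1, 2) plus a 64-element bias yields the reported 2.112K parameters:

```python
from functools import reduce
from operator import mul

def n_params(*shapes):
    # Total number of elements across a set of tensor shapes.
    return sum(reduce(mul, shape, 1) for shape in shapes)

# Filter_Convs.0: weight (64, 16, 1, 2) + bias (64,)
total = n_params((64, 16, 1, 2), (64,))
print(total)  # 64*16*1*2 + 64 = 2112, i.e. 2.112K as in the table
```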
In the above commands, replace $DATASET_NAME with either PEMS04 or PEMS08, and $CKPT_PATH with the path to the desired saved checkpoint. Adjust the --device option in the command line according to your available hardware.
To reproduce the multi-step traffic speed forecasting experiments presented in Table 6 of the paper, follow these instructions:
python "Multi-step/Traffic Speed/$DATASET_NAME/train_$DATASET_NAME.py" --device='cuda:0'
#python "Multi-step/Traffic Speed/METR-LA/train_METR-LA.py" --device='cuda:0'
#python "Multi-step/Traffic Speed/PEMS-BAY/train_PEMS-BAY.py" --device='cuda:0'
After the training phase concludes, a log summarizing the best model's performance on the test set will appear:
Horizon 1, Test MAE: ..., Test MAPE: ..., Test RMSE: ...
Horizon 2, Test MAE: ..., Test MAPE: ..., Test RMSE: ...
Horizon 3, Test MAE: ..., Test MAPE: ..., Test RMSE: ...
...
...
Horizon 12, Test MAE: ..., Test MAPE: ..., Test RMSE: ...
On average: Test MAE: ..., Test MAPE: ..., Test RMSE: ...
Here, Horizon 3, Horizon 6, and Horizon 12 correspond to '15 mins', '30 mins', and '60 mins' in Table 6, respectively.
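This mapping follows from the datasets' 5-minute sampling interval (a standard property of METR-LA and PEMS-BAY): forecasting horizon h corresponds to h × 5 minutes ahead.

```python
SAMPLING_INTERVAL_MIN = 5  # METR-LA / PEMS-BAY readings are 5 minutes apart

def horizon_to_minutes(h):
    # A horizon of h steps looks h sampling intervals into the future.
    return h * SAMPLING_INTERVAL_MIN

for h in (3, 6, 12):
    print(f"Horizon {h} -> {horizon_to_minutes(h)} mins")
```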
python "Multi-step/Traffic Speed/$DATASET_NAME/test_$DATASET_NAME.py" --device='cuda:0' --checkpoint=$CKPT_PATH
#python "Multi-step/Traffic Speed/METR-LA/test_METR-LA.py" --device='cuda:0' --checkpoint='./checkpoint.pth'
#python "Multi-step/Traffic Speed/PEMS-BAY/test_PEMS-BAY.py" --device='cuda:0' --checkpoint='./checkpoint.pth'
After the testing phase concludes, a log summarizing the tested model's performance will appear:
Horizon 1, Test MAE: ..., Test MAPE: ..., Test RMSE: ...
Horizon 2, Test MAE: ..., Test MAPE: ..., Test RMSE: ...
Horizon 3, Test MAE: ..., Test MAPE: ..., Test RMSE: ...
...
...
Horizon 12, Test MAE: ..., Test MAPE: ..., Test RMSE: ...
On average: Test MAE: ..., Test MAPE: ..., Test RMSE: ...
Here, Horizon 3, Horizon 6, and Horizon 12 correspond to '15 mins', '30 mins', and '60 mins' in Table 6, respectively.
python "Multi-step/Traffic Speed/$DATASET_NAME/lightness_metrics_$DATASET_NAME.py"
#python "Multi-step/Traffic Speed/METR-LA/lightness_metrics_METR-LA.py"
#python "Multi-step/Traffic Speed/PEMS-BAY/lightness_metrics_PEMS-BAY.py"
A log similar to the traffic flow forecasting lightness metrics shown above will appear.
In the above commands, replace $DATASET_NAME with either METR-LA or PEMS-BAY, and $CKPT_PATH with the path to the desired saved checkpoint. Update --device in the command line according to your available hardware.
To replicate the single-step forecasting experiments presented in Table 7 of the paper, follow these instructions:
python Single-step/$DATASET_NAME/train_$DATASET_NAME.py --horizon=3 --device='cuda:0'
#python Single-step/Solar/train_Solar.py --horizon=3 --device='cuda:0'
#python Single-step/Electricity/train_Electricity.py --horizon=3 --device='cuda:0'
After the training phase concludes, a log summarizing the best model's performance on the test set will appear:
On average: Test RRSE: ..., Test CORR ...
python Single-step/$DATASET_NAME/test_$DATASET_NAME.py --horizon=3 --device='cuda:0' --checkpoint=$CKPT_PATH
#python Single-step/Solar/test_Solar.py --horizon=3 --device='cuda:0' --checkpoint='./save.pt'
#python Single-step/Electricity/test_Electricity.py --horizon=3 --device='cuda:0' --checkpoint='./save.pt'
After the testing phase concludes, a log summarizing the tested model's performance will appear:
On average: Test RRSE: ..., Test CORR ...
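As a reference, the two single-step metrics can be sketched in plain Python. This is a simplified single-variable version; the actual scripts compute them over all series, averaging CORR across variables:

```python
import math

def rrse(y, y_hat):
    # Root relative squared error: squared error normalized by the
    # deviation of y from its own mean.
    mean_y = sum(y) / len(y)
    num = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    den = sum((a - mean_y) ** 2 for a in y)
    return math.sqrt(num / den)

def corr(y, y_hat):
    # Pearson correlation coefficient between y and y_hat.
    my, mh = sum(y) / len(y), sum(y_hat) / len(y_hat)
    cov = sum((a - my) * (b - mh) for a, b in zip(y, y_hat))
    sy = math.sqrt(sum((a - my) ** 2 for a in y))
    sh = math.sqrt(sum((b - mh) ** 2 for b in y_hat))
    return cov / (sy * sh)

# Toy values, for illustration only.
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
print(rrse(y_true, y_pred), corr(y_true, y_pred))
```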
python Single-step/$DATASET_NAME/lightness_metrics_$DATASET_NAME.py
#python Single-step/Solar/lightness_metrics_Solar.py
#python Single-step/Electricity/lightness_metrics_Electricity.py
A log similar to the traffic flow forecasting lightness metrics shown above will appear.
In the above commands, replace $DATASET_NAME with either Solar or Electricity, and $CKPT_PATH with the path to the desired saved checkpoint. Update --horizon to the target future horizon, which takes values in [3, 6, 12, 24] in Table 7, and update --device in the command line to suit your hardware.
Please note that during the training process, the saved checkpoints are stored in the './logs' subdirectory within each dataset's directory.
After gathering the metrics results for each dataset, you can follow these instructions to draw the figures in the paper. First, install the Matplotlib library for figure drawing:
pip install matplotlib
Then, move to the Figure_drawing directory, modify the metrics results in the corresponding file, and run the code to draw the figure.
python Figure_drawing/Figure_${FIGURE}_${SUBFIGURE}_drawing.py
#python Figure_drawing/Figure_5_a_drawing.py
#python Figure_drawing/Figure_6_b_drawing.py
In the above commands, replace $FIGURE and $SUBFIGURE with the figure number and subfigure letter.
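As a minimal illustration of the drawing step, a script in this family typically hard-codes the gathered metrics and plots them with Matplotlib. The model names and metric values below are placeholders, not results from the paper:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Placeholder metrics: (model name, #parameters in M, MAE) -- illustrative only.
models = ["Model A", "Model B", "Model C"]
params_m = [1.2, 0.9, 0.185]
mae = [20.1, 19.8, 19.5]

fig, ax = plt.subplots(figsize=(4, 3))
ax.scatter(params_m, mae)
for name, x, y in zip(models, params_m, mae):
    ax.annotate(name, (x, y))
ax.set_xlabel("#Parameters (M)")
ax.set_ylabel("MAE")
fig.tight_layout()
fig.savefig("figure_sketch.png")
```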
If you prefer to skip the training process and directly access the pre-trained checkpoints for reproduction, we have provided a version of the codebase here. This version is optimized for better execution of the pre-trained checkpoints.
For any inquiries, please reach out to Zhichen Lai at zhla@cs.aau.dk.