repr-control is a toolbox for solving nonlinear stochastic control problems via representation learning. Users simply input the dynamics, rewards, and initial distributions of the nonlinear control problem (see the `sample_files` examples) and get an optimal controller parametrized by a neural network.
The optimal controller is trained with the Spectral Dynamics Embedding Control (SDEC) algorithm, which combines representation learning and reinforcement learning. For details of the SDEC algorithm, please check our papers.
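As a rough illustration, a problem definition has roughly the following shape. This is a minimal sketch: the function names (`dynamics`, `rewards`, `initial_distribution`) and their signatures are hypothetical placeholders, not the toolbox's exact API; refer to `sample_files` for the actual templates.

```python
import numpy as np

# Hypothetical sketch of the three ingredients the toolbox expects
# (names and signatures are illustrative, not the exact repr-control API).

def dynamics(state, action):
    # Nonlinear stochastic dynamics: next state from state, action, and noise
    # (pendulum-like example).
    theta, omega = state
    noise = 0.05 * np.random.randn()
    omega_next = omega + 0.05 * (-10.0 * np.sin(theta) + action) + noise
    theta_next = theta + 0.05 * omega_next
    return np.array([theta_next, omega_next])

def rewards(state, action):
    # Reward penalizing deviation from the upright position and control effort.
    theta, omega = state
    return -(theta ** 2 + 0.1 * omega ** 2 + 0.001 * action ** 2)

def initial_distribution(n_samples):
    # Samples drawn from the initial state distribution.
    return np.random.uniform(low=[-np.pi, -1.0], high=[np.pi, 1.0], size=(n_samples, 2))
```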
- Install Anaconda and git (if you haven't).
- Create a new environment. Windows: open the Anaconda Prompt; Mac or Linux: open a terminal. Then run:
    conda create -n repr-control python=3.10
    conda activate repr-control
- Install the PyTorch dependencies (a quick verification snippet follows this step).
  Windows or Linux, if you have CUDA-compatible GPUs:
    conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
  Windows or Linux, if you don't have CUDA-compatible GPUs:
    conda install pytorch torchvision torchaudio cpuonly -c pytorch
  Mac:
    conda install pytorch::pytorch torchvision torchaudio -c pytorch
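After installation, you can optionally confirm that PyTorch is importable and whether a CUDA device is visible. This check uses only standard PyTorch calls and is independent of the toolbox:

```python
import torch

# Confirm the PyTorch install and report whether a CUDA GPU is visible.
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```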
- Install the toolbox (a quick import check follows):
    git clone https://github.com/CoNG-harvard/repr_control.git
    cd repr_control
    pip install -e .
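To verify the editable install, you can try importing the package. The import name `repr_control` is assumed here from the repository name and may differ from the actual package layout:

```python
# Assumed import name based on the repository name; adjust if the package
# exposes a different top-level module.
import repr_control

print("repr_control imported from:", repr_control.__file__)
```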
Helpful resources:
Please refer to our documentation for how to train the controller.
If you find this toolbox useful, please cite our paper:
@article{ren2023stochastic,
  title={Stochastic Nonlinear Control via Finite-dimensional Spectral Dynamic Embedding},
  author={Ren, Tongzheng and Ren, Zhaolin and Ma, Haitong and Li, Na and Dai, Bo},
  year={2023},
  eprint={2304.03907},
  archivePrefix={arXiv}
}