This repository contains implementations of computational primitives for convolutional multi-hybrid models and layers: Hyena-[SE, MR, LI], StripedHyena 2, Evo 2.
For training, please refer to the savanna project.
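As a rough illustration of the kind of primitive involved, here is a minimal sketch of a causal 1-D convolution, the core operation behind Hyena-style long-convolution layers. This is pure Python for clarity only; the function name is our own, and the actual kernels in this repository use FFT-based and fused CUDA implementations:

```python
# Illustrative sketch only: real Hyena layers use FFT-based or fused
# CUDA kernels, implicit (parameterized) filters, and batched tensors.

def causal_conv1d(x, h):
    """Compute y[t] = sum_k h[k] * x[t - k], treating x[t - k] as 0 for t - k < 0."""
    y = []
    for t in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if t - k >= 0:  # causality: output at t depends only on past inputs
                acc += hk * x[t - k]
        y.append(acc)
    return y

signal = [1.0, 2.0, 3.0, 4.0]
kernel = [0.5, 0.25]  # a short filter; Hyena filters can span the full sequence
out = causal_conv1d(signal, kernel)
```

The causal structure is what allows these layers to be used autoregressively at inference time.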
There are two main ways to interface with `vortex`:
- Use `vortex` as the inference engine for pre-trained multi-hybrids such as Evo 2 40B. In this case, we recommend installing `vortex` in a new environment (see below).
- Import specific classes, kernels, or utilities from `vortex` to work with custom convolutional multi-hybrids, for example sourcing utilities from `hyena_ops.interface`.
The simplest way to install `vortex` is from PyPI or GitHub.
Vortex requires PyTorch and Transformer Engine, and we strongly recommend also using Flash Attention. For detailed instructions and compatibility information, refer to their respective GitHub repositories. Note that Transformer Engine requires Python 3.12 and has additional system requirements.
- PyTorch with CUDA: Ensure you have a CUDA-enabled PyTorch installation compatible with your NVIDIA drivers.
- Transformer Engine: NVIDIA's Transformer Engine.
- Flash Attention: For optimized attention operations.
Example of installing the prerequisites. We recommend using `conda` for easy installation of Transformer Engine:
conda install -c nvidia cuda-nvcc cuda-cudart-dev
conda install -c conda-forge transformer-engine-torch==2.3.0
pip install flash-attn==2.8.0.post2
After installing the requirements, you can install vortex:
pip install vtx
or you can install vortex after cloning the repository:
pip install .
make setup-vortex-ops
Note that this does not install all dependencies required to run autoregressive inference with larger pre-trained models.
Docker is one of the easiest ways to get started with Vortex (and Evo 2). The Docker environment does not depend on the currently installed CUDA version and ensures that major dependencies (such as PyTorch and Transformer Engine) are pinned to specific versions, which is beneficial for reproducibility.
- To run the Evo 2 40B generation sample: `./run`
- To run the Evo 2 7B generation sample: `sz=7 ./run`
- To run tests: `./run ./run_tests`
- To execute commands interactively in the Docker environment: `./run bash`
python3 generate.py \
--config_path <PATH_TO_CONFIG> \
--checkpoint_path <PATH_TO_CHECKPOINT> \
--input_file <PATH_TO_INPUT_FILE> \
--cached_generation
The `--cached_generation` flag activates KV-caching and custom caching for the different Hyena layer variants, reducing peak memory usage and latency.
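To illustrate why caching helps, here is a generic sketch of KV-caching during autoregressive decoding. This shows the concept only; the class and function names are hypothetical, and vortex's actual cache management for Hyena variants differs:

```python
# Generic KV-cache sketch (hypothetical names, not vortex's API).
# The point: cached keys/values let each decoding step reuse past work
# instead of re-running the model over the entire prefix.

class KVCache:
    """Stores key/value entries so past tokens are not recomputed."""

    def __init__(self):
        self.keys = []
        self.values = []

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)

    def __len__(self):
        return len(self.keys)


def decode_step(token, cache):
    # Stand-in projections: a real model applies learned matrices here.
    k, v = token * 2, token * 3
    cache.append(k, v)
    # Attention now reads all cached keys/values, so the incremental
    # cost per new token stays constant rather than growing with prefix length.
    return sum(cache.values) / len(cache)


cache = KVCache()
outputs = [decode_step(t, cache) for t in [1, 2, 3]]
```

Hyena layers admit an analogous trick: their convolutional state can be summarized in a fixed-size recurrent cache, which is what the custom caching above refers to.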
Vortex was developed by Michael Poli (Zymrael) and Garyk Brixi (garykbrixi). Vortex maintainers include Michael Poli (Zymrael), Garyk Brixi (garykbrixi), and Anton Vorontsov (antonvnv), with contributions from Amy Lu (amyxlu) and Jerome Ku (jeromeku).
If you find this project useful, consider citing the following references.