[bug] SparseXLA backend not implemented #6853
Environment setup:
conda create -n xla python=3.11 transformers diffusers datasets accelerate evaluate torchvision torchaudio bitsandbytes safetensors sentencepiece imageio scipy numpy pyglet gradio open3d fire rich -c conda-forge -c pytorch -y
conda activate xla
conda env config vars set LD_LIBRARY_PATH="$CONDA_PREFIX/lib"
conda env config vars set HF_HOME="/dev/shm"
conda env config vars set PJRT_DEVICE=TPU
# conda env config vars set XLA_USE_BF16=1
# conda env config vars set XLA_USE_SPMD=1
conda deactivate && conda activate xla
pip install 'torch~=2.2.0' --index-url https://download.pytorch.org/whl/cpu
pip install 'torch_xla[tpu]~=2.2.0' -f https://storage.googleapis.com/libtpu-releases/index.html
pip uninstall -y accelerate
pip install git+https://github.com/huggingface/accelerate
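For context, a minimal reproduction along these lines should surface the error (a hypothetical sketch; the original report does not include the failing script, and the exact sparse op that triggered it may differ):

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()

# Build a small COO sparse tensor on CPU.
indices = torch.tensor([[0, 1], [1, 0]])
values = torch.tensor([1.0, 2.0])
sparse = torch.sparse_coo_tensor(indices, values, (2, 2))

# Moving the sparse tensor to the XLA device (or running any sparse kernel
# there) should fail, since the SparseXLA dispatch key has no kernels behind it.
sparse_xla = sparse.to(device)
print(torch.sparse.mm(sparse_xla, torch.eye(2, device=device)))
```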
Seems like it got codegenned in https://github.com/pytorch/pytorch/blob/3243be7c3a7e871acfc9923eea817493f996da9a/torchgen/model.py#L166, but we didn't implement the corresponding sparse kernels. I can make this a feature request, but it is unlikely we will have the resources to work on sparse-related projects anytime soon.
Do we have an alternative solution for this?
Not that I am aware of; we haven't thought too much about sparsity yet.
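(One possible stopgap, not suggested in the thread: keep the sparse math on CPU and densify before moving to the XLA device, trading memory for compatibility, since dense XLA kernels do exist. A minimal sketch under that assumption:)

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
indices = torch.tensor([[0, 1], [1, 0]])
values = torch.tensor([1.0, 2.0])
sparse = torch.sparse_coo_tensor(indices, values, (2, 2))

# Densify on CPU first, then move the dense tensor to the XLA device.
dense_xla = sparse.to_dense().to(device)
print(dense_xla @ torch.eye(2, device=device))
```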
Related to #8719