
add Jetson Orin support #467


Open · wants to merge 1 commit into main

Conversation


thomas-hiddenpeak commented Jan 4, 2025

Motivation and Context

NVIDIA Jetson Orin devices have a compute capability of 8.7, which is not currently supported in the compute_cap_matching function. This PR ensures that these devices can be used with the library by adding the necessary support.

What does this PR do?

This PR adds support for NVIDIA Jetson Orin devices by including the compute capability 8.7 in the compute_cap_matching function and updating the tests to ensure the new capability is correctly supported.
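
A minimal sketch of the idea, not the actual TEI source (the real compute_cap_matching in backends/candle/src/compute_cap.rs has more cases); it only illustrates accepting SM 8.7 against kernels built for an Ampere-class target:

    // Illustrative sketch: accept SM 8.7 (Jetson Orin) when the kernels were
    // compiled for an Ampere-class target. TEI's real match arms differ.
    fn compute_cap_matching(runtime_compute_cap: usize, compile_compute_cap: usize) -> bool {
        match (runtime_compute_cap, compile_compute_cap) {
            // An exact match always works.
            (r, c) if r == c => true,
            // Ampere-family devices (8.0, 8.6, 8.7, 8.9) can run kernels
            // built for SM 8.0; 8.7 is the value this PR adds for Jetson Orin.
            (80 | 86 | 87 | 89, 80) => true,
            _ => false,
        }
    }

    fn main() {
        assert!(compute_cap_matching(87, 80));  // Jetson Orin vs. SM 8.0 build
        assert!(!compute_cap_matching(87, 90)); // a 9.0 build should not match
        println!("SM 8.7 accepted");
    }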

Fixes #466

Checklist

  • I have read the contributor guidelines.
  • I have added tests to verify my changes.
  • I have tagged the appropriate reviewers.

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

@OlivierDehaene OR @Narsil

add Jetson Orin support
r0kk commented Jan 16, 2025

@HiddenPeak I am wondering if you could share reproducible steps for how you were able to run text-embeddings-inference on a Jetson AGX Orin. It would be greatly appreciated 🙏.

Unfortunately, I don't have deep enough knowledge to review your PR.

thomas-hiddenpeak (Author) commented Jan 20, 2025

@r0kk
The Jetson Orin series uses the CUDA architecture SM8.7, which is part of the Ampere family. In theory it should be compatible with TEI, but in practice there are many incompatibilities, so it is not supported out of the box. While attempting to use it, I ran into the following issues:

  1. The compute_cap_matching() function does not support the SM87 architecture, so I modified the source code and recompiled it.
  2. It is necessary to ensure that the GPU driver, CUDA runtime, and CUDA compiler are correctly installed and available on the environment variable paths (on JetPack 6.1 with CUDA 12.6).
  3. The compilation process is extremely long, and peak memory usage exceeds 90% (about 60 GB).

I attempted to compile and deploy TEI on a Jetson AGX Orin 64G and found that it could not recognize SM87, so I modified the compute_cap_matching() function in backends/candle/src/compute_cap.rs to add support for the SM87 architecture. Such modifications don't always work, but fortunately, after these changes, TEI ran on the Jetson AGX Orin 64G without any errors and with excellent performance.

curl 127.0.0.1:8080/rerank \
    -X POST \
    -d '{"query": "What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}' \
    -H 'Content-Type: application/json'
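
For reference, the /rerank endpoint responds with a JSON array of index/score pairs ranking the texts against the query; the values below are illustrative, not taken from this run:

    [{"index":1,"score":0.9975},{"index":0,"score":0.0458}]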

Logs:

2025-01-04T20:38:18.706787Z  INFO text_embeddings_backend_candle: backends/candle/src/lib.rs:292: Starting FlashBert model on Cuda(CudaDevice(DeviceId(1)))
2025-01-04T20:38:31.539445Z  INFO text_embeddings_router: router/src/lib.rs:248: Warming up model
2025-01-04T20:38:32.189069Z  INFO text_embeddings_router::http::server: router/src/http/server.rs:1812: Starting HTTP server: 0.0.0.0:8080
2025-01-04T20:38:32.189098Z  INFO text_embeddings_router::http::server: router/src/http/server.rs:1813: Ready
2025-01-04T20:44:11.047170Z  INFO rerank{total_time="177.15121ms" tokenization_time="727.783µs" queue_time="79.024583ms" inference_time="87.618256ms"}: text_embeddings_router::http::server: router/src/http/server.rs:459: Success

(More screenshots were attached in the original comment.)

Therefore, I created a branch and added test code. After verifying it in my own application, I submitted this pull request.
I also tried other embedding and rerank models, which ran well.

r0kk commented Feb 6, 2025

@HiddenPeak
I can confirm that this is working on Jetson AGX 64GB. Thank you very much 🙏.

thomas-hiddenpeak (Author) commented

> @HiddenPeak I can confirm that this is working on Jetson AGX 64GB. Thank you very much 🙏.

It's very cool~

taresh18-ag commented Jul 8, 2025

Hi, great work.

How did you get it running on Jetson Orin? When I try to compile it, it throws an error:

(screenshot of the compile error attached in the original comment)

these are the steps I followed:

curl https://sh.rustup.rs/ -sSf | sh
sudo apt-get install libssl-dev gcc -y
git clone https://github.com/huggingface/text-embeddings-inference.git
cd text-embeddings-inference
cargo install --path router -F candle-cuda -F http --no-default-features # getting error here

Also, if CUDA inference is not possible, I would like to test using CPU only. What are the steps to run this library on an ARM CPU? I looked into the Dockerfiles; they all depend on Intel MKL libs.

thomas-hiddenpeak (Author) commented
add -F dynamic-linking
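
That is, assuming the same feature set as the failing command above, the install step becomes something like:

    cargo install --path router -F candle-cuda -F http -F dynamic-linking --no-default-features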

r0kk commented Jul 9, 2025

The following process worked for me:

Add NVCC to the path

NVIDIA's NVCC (NVIDIA CUDA Compiler) is a compiler driver used to compile CUDA (Compute Unified Device Architecture) code, which allows developers to write programs that run on NVIDIA GPUs. It translates CUDA code into executable binaries for GPU acceleration.

  1. Check if nvcc exists

      ls /usr/local/cuda/bin/nvcc

  2. Update environment variables

    • Open .bashrc

      nano ~/.bashrc
    • Add the nvcc paths

      export PATH=/usr/local/cuda/bin:$PATH
      export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
    • Reload the shell configuration

      source ~/.bashrc
    • Check the nvcc version

      nvcc --version

Build Process (you can skip if build exists)

We prepared a build, which can be found in the current repository. If it doesn't exist, you can follow the instructions below:

  1. Install Rust

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
    source $HOME/.cargo/env
  2. Clone the Project
    This is a fork of the original project; at the time of writing, no official release for the Jetson family existed.

    git clone https://github.com/HiddenPeak/text-embeddings-inference.git
    cd text-embeddings-inference
  3. Install OpenSSL (an OpenSSL problem might otherwise appear when building)

    sudo apt install libssl-dev
  4. Build

  • move into the router directory inside the project
  • to use less space on the Jetson, point --target-dir at an external disk
    cd router/
    cargo build --release --features=candle-cuda --target-dir <target dir for generated artifact>
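
Once the build finishes, the binary ends up under <target dir>/release. As a hedged example of starting it (the model ID here is only an illustration; --model-id and --port are standard text-embeddings-router flags):

    <target dir>/release/text-embeddings-router --model-id BAAI/bge-reranker-base --port 8080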


Successfully merging this pull request may close: "Could not start backend on Jetson AGX Orin" (#466)