Conversation

Contributor

@psiddh psiddh commented Oct 14, 2025

Summary

[PLEASE REMOVE] See CONTRIBUTING.md's Pull Requests for ExecuTorch PR guidelines.

[PLEASE REMOVE] If this PR closes an issue, please add a Fixes #<issue-id> line.

[PLEASE REMOVE] If this PR introduces a fix or feature that should be in the upcoming release notes, please add a "Release notes: " label. For a list of available release notes labels, check out CONTRIBUTING.md's Pull Requests.

Test plan

[PLEASE REMOVE] How did you test this PR? Please write down any manual commands you used and note down tests that you have written if applicable.

@psiddh psiddh requested a review from mergennachin as a code owner October 14, 2025 15:12

pytorch-bot bot commented Oct 14, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15109

Note: Links to docs will display an error until the docs builds have been completed.

❌ 10 New Failures

As of commit 5062655 with merge base fca0f38:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Oct 14, 2025

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.


- **Python 3.10-3.12** (ExecuTorch requirement)
- **conda** or **venv** for environment management
- **CMake 3.29.6+** for cross-compilation
Contributor

Suggested change
- **CMake 3.29.6+** for cross-compilation
- **CMake 3.29.6+**


### Target Device Requirements

**Supported Devices**: **Raspberry Pi 4** and **Raspberry Pi 5** with **64-bit OS**
Contributor

why do we restrict to 64-bit target os?

Contributor Author

We haven't tested on other hardware yet; it should mostly work.


- **Minimum 4GB RAM** (8GB recommended for larger models)
- **8GB+ storage** for model files and binaries
- **64-bit Raspberry Pi OS** (Bullseye or newer)
Contributor

same here, and also why is this under memory reqs?

Contributor

This is needed for llama I assume

Contributor Author

Since we haven't done exhaustive perf evals yet, it's better to leave specific details out of this section for now.
Let me know what you both think about something like the following:

  • RAM & Storage: varies by model size and optimization level

python3 --version # Should be 3.10-3.12

# Check required tools
which cmake git md5sum
Contributor

s/which/hash?
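A minimal sketch of the reviewer's suggestion: prefer the shell builtin `hash` (or the POSIX `command -v`) over the external `which` when checking that tools are present:

```shell
# Check required tools with a shell builtin instead of the external `which`.
# `command -v` (POSIX) resolves the tool and fails cleanly if it is missing.
for tool in cmake git md5sum; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```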

Comment on lines 60 to 64
mkdir ~/executorch-rpi && cd ~/executorch-rpi
# Clone ExecuTorch repository
git clone -b release/1.0 https://github.com/pytorch/executorch.git
cd executorch
Contributor

Make these all conditional, i.e. a && b && c && d

Contributor Author

Done
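The chained version the reviewer asked for might look like this (repo URL and branch taken from the snippet above; requires network access for the clone):

```shell
# Each step runs only if the previous one succeeded, so a failed clone
# cannot leave you running later commands in the wrong directory.
mkdir -p ~/executorch-rpi && cd ~/executorch-rpi \
  && git clone -b release/1.0 https://github.com/pytorch/executorch.git \
  && cd executorch
```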

cd executorch
```

### Create Conda Environment
Contributor

why are we repeating ET setup here? Just point to the main one?

Contributor Author

Just below this, I left the link to the main one

pip install --upgrade pip
```
Refer to → {doc}`getting-started` for more details.
Contributor

+1

## Cross-Compilation Toolchain Setup
Run the following automated cross compile script on your Linux host machine:
Contributor

this reads off..

Contributor Author

Fixed

## Model Preparation and Export
### Download Llama Models
Contributor

Again, can we point people to the standard llama tutorial to create an XNNPACK .pte, as opposed to repeating the steps here?

sudo apt update
# Install newer GLIBC packages
sudo apt-get -t sid install libc6 libstdc++6
Contributor

how do we know that will be compatible with our toolchain?
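One way to answer this on the device: compare the GLIBC version the target provides against the symbol versions the cross-compiled binary requires. The `./llama_main` path below is an assumption; point it at your actual build output.

```shell
# GLIBC version available on the target (assumes a glibc-based OS):
ldd --version | head -n 1
# Newest GLIBC symbol version the cross-compiled binary requires, if present
# (uses `strings` from binutils; skipped when the binary is absent):
if [ -f ./llama_main ]; then
  strings ./llama_main | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n 1
fi
```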

./llama_main --model_path ./llama3_2.pte --tokenizer_path ./tokenizer.model --seq_len 128 --prompt "What is the meaning of life?"
```
Happy Inferencing!
Contributor

What happened to your script? Why are we writing these steps out manually again? Maybe I missed something.

Contributor Author

There are two parts to this tutorial:

  1. Linux cross-compilation on the host: this is scripted with this PR.
  2. All the steps on the RPi: this involves moving the required files from the host machine onto the RPi, ensuring the basic environment is OK, then troubleshooting if needed. The tutorial gives step-by-step guidance for this second part.
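The second part typically starts with copying the build artifacts from the host to the Pi; a sketch, with hypothetical file names and host address:

```shell
# Run on the Linux host. PI_HOST and the artifact names are assumptions;
# substitute your Pi's SSH address and your actual build outputs.
PI_HOST=pi@raspberrypi.local
for f in llama_main llama3_2.pte tokenizer.model; do
  [ -f "$f" ] || { echo "missing artifact: $f" >&2; continue; }
  scp "$f" "$PI_HOST:~/"
done
```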

Contributor

@digantdesai digantdesai left a comment

Thanks Sid. I left some comments..


- {doc}`tutorial-arm-ethos-u` — Export a simple PyTorch model for the ExecuTorch Ethos-U backend

- {doc}`raspberry_pi_llama_tutorial` — Deploy a LLaMA model on a Raspberry Pi with the ExecuTorch Ethos-U backend
Contributor

There's no ethos-U backend on Raspberry Pi

Contributor

Also, this tutorial is not quite embedded, I suppose; it's running on a Cortex-A based Raspberry Pi.

So, I recommend also having another link in the https://docs-preview.pytorch.org/pytorch/executorch/15109/desktop-section.html

but also keep it here

Contributor Author

There's no ethos-U backend on Raspberry Pi

Fixed

# Run the Raspberry Pi setup script for Pi 5
examples/raspberry_pi/setup.sh pi5
Contributor

@psiddh

we should protect in our CI as a fast follow-on. please create an issue

Contributor Author


### Host Machine Requirements

**Operating System**: Linux x86_64 (Ubuntu 20.04+ or CentOS Stream 9+)
Contributor

Can the host machine be MacOS if you're cross compiling?

Contributor Author

We do not support macOS; unfortunately, the required Arm toolchain for the RPi is not supported on a Mac host.
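For a quick sanity check that a Linux host has a usable AArch64 cross toolchain on PATH (the triple below follows Arm's GNU toolchain naming and is an assumption; the setup script may install a differently named one):

```shell
# Look for an AArch64 Linux cross compiler; fall back to a clear message.
if command -v aarch64-none-linux-gnu-gcc >/dev/null 2>&1; then
  aarch64-none-linux-gnu-gcc --version | head -n 1
else
  echo "AArch64 cross compiler not found on PATH"
fi
```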

Contributor
cccclai commented Oct 14, 2025

There are many good comments from Digant and Mergen; overall this is reasonable.
