
Commit b63095b (1 parent: 0aa48ac)

Update installation and usage documentation for TritonParse

Summary:
- Revised prerequisites in README.md and the installation guide to specify GPU requirements for Triton.
- Enhanced installation instructions for both NVIDIA and AMD GPUs, including PyTorch installation steps.
- Updated usage examples to dynamically select the device based on GPU availability.
- Clarified the FAQ section regarding GPU necessity for generating traces.

These changes improve clarity and ensure users are well informed about the hardware requirements for TritonParse.

File tree

5 files changed: +137, −92 lines

README.md

Lines changed: 1 addition & 1 deletion
@@ -68,7 +68,7 @@ cd tritonparse
 pip install -e .
 ```
 
-**Prerequisites:** Python ≥ 3.10, Triton > 3.3.1 ([install from source](https://github.com/triton-lang/triton))
+**Prerequisites:** Python ≥ 3.10, Triton > 3.3.1 ([install from source](https://github.com/triton-lang/triton)), GPU required (NVIDIA/AMD)
 
 ## 📚 Complete Documentation

docs/wiki-pages/01.-Installation.md

Lines changed: 115 additions & 79 deletions
@@ -20,20 +20,20 @@ This guide covers all installation scenarios for TritonParse, from basic usage t
 ### System Requirements
 - **Python** >= 3.10
 - **Operating System**: Linux, macOS, or Windows (with WSL recommended)
-- **CUDA** (for GPU tracing): Compatible NVIDIA GPU with CUDA support
+- **GPU Required** (Triton depends on GPU):
+  - **NVIDIA GPUs**: CUDA 11.8+ or 12.x
+  - **AMD GPUs**: ROCm 5.0+ (supports MI100, MI200, MI300 series)
 - **Node.js** >= 18.0.0 (for website development only)
 
-### Required: Triton Installation
-**Important**: You need Triton > 3.3.1 or compiled from source.
+> ⚠️ **Important**: GPU is required to generate traces because Triton kernels can only run on GPU hardware. The web interface can view existing traces without GPU.
 
-```bash
-# Install Triton from source (required)
-git clone https://github.com/triton-lang/triton.git
-cd triton
-pip install -e .
-```
+### Required Dependencies
+- **PyTorch** with GPU support (we recommend PyTorch nightly for best compatibility)
+  - For NVIDIA GPUs: PyTorch with CUDA support
+  - For AMD GPUs: PyTorch with ROCm support
+- **Triton** > 3.3.1 (must be compiled from source for TritonParse compatibility)
 
-For detailed Triton installation instructions, see the [official Triton documentation](https://github.com/triton-lang/triton?tab=readme-ov-file#install-from-source).
+> 💡 **Note**: Detailed installation instructions for these dependencies are provided in each installation option below.
 
 ---

@@ -47,16 +47,63 @@ git clone https://github.com/pytorch-labs/tritonparse.git
 cd tritonparse
 ```
 
-### Step 2: Install TritonParse
+### Step 2: Install PyTorch with GPU Support
+
+#### For NVIDIA GPUs (CUDA)
+```bash
+# Install PyTorch nightly with CUDA 12.8 support (recommended)
+pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
+
+# Alternative: Install stable PyTorch with CUDA support
+pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
+
+# Verify PyTorch installation
+python -c "import torch; print(f'PyTorch version: {torch.__version__}')"
+python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}')"
+```
+
+#### For AMD GPUs (ROCm)
+```bash
+# Install PyTorch nightly with ROCm support (recommended)
+pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.2
+
+# Alternative: Install stable PyTorch with ROCm support
+pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1
+
+# Verify PyTorch installation
+python -c "import torch; print(f'PyTorch version: {torch.__version__}')"
+python -c "import torch; print(f'ROCm available: {torch.cuda.is_available()}')"
+python -c "import torch; print(f'GPU device: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"No GPU\"}')"
+```
+
+### Step 3: Install Triton from Source
 ```bash
+# First, uninstall any existing PyTorch-bundled Triton
+pip uninstall -y pytorch-triton triton || true
+
+# Install Triton from source (required)
+git clone https://github.com/triton-lang/triton.git
+cd triton
+pip install -e .
+
+# Verify Triton installation
+python -c "import triton; print(f'Triton version: {triton.__version__}')"
+python -c "import triton; print(f'Triton path: {triton.__file__}')"
+```
+
+### Step 4: Install TritonParse
+```bash
+# Go back to tritonparse directory
+cd ../tritonparse
+
 # Install in development mode
 pip install -e .
 
 # Or install with test dependencies
 pip install -e ".[test]"
 ```
 
-### Step 3: Verify Installation
+### Step 5: Verify Installation
 ```bash
 # Test with the included example
 cd tests
@@ -85,7 +132,7 @@ INFO:tritonparse:Copying parsed logs from /tmp/tmp1gan7zky to /scratch/findhao/t
 ================================================================================
 ```
 
-### Step 4: Use the Web Interface
+### Step 6: Use the Web Interface
 1. Generate trace files using the Python API
 2. Visit [https://pytorch-labs.github.io/tritonparse/](https://pytorch-labs.github.io/tritonparse/)
 3. Load your trace files (.ndjson or .gz format)
@@ -101,7 +148,7 @@ For contributors working on the React-based web interface.
 - npm (comes with Node.js)
 
 ### Step 1: Basic Installation
-Follow [Option 1](#option-1-basic-user-installation) first.
+Follow [Option 1: Basic User Installation](#-option-1-basic-user-installation) first to install PyTorch, Triton, and TritonParse.
 
 ### Step 2: Install Website Dependencies
 ```bash
@@ -141,7 +188,7 @@ npm run preview
 For core contributors working on Python code, including formatting and testing.
 
 ### Step 1: Basic Installation
-Follow [Option 1](#option-1-basic-user-installation) first.
+Follow [Option 1: Basic User Installation](#-option-1-basic-user-installation) first to install PyTorch, Triton, and TritonParse.
 
 ### Step 2: Install Development Dependencies
 ```bash
@@ -167,47 +214,7 @@ python -m unittest tests.test_tritonparse -v
 ```
 
 ### Step 4: Website Development (Optional)
-```bash
-cd website
-npm install
-npm run dev
-```
-
----
-
-## 🛠️ Development Commands
-
-### Python Development
-```bash
-# Format code
-make format
-
-# Check formatting
-make format-check
-
-# Run linting
-make lint-check
-
-# Run tests
-python -m unittest tests.test_tritonparse -v
-
-# Run specific test
-python -m unittest tests.test_tritonparse.TestTritonparseCUDA.test_whole_workflow -v
-```
-
-### Website Development
-```bash
-cd website
-
-# Development server
-npm run dev
-
-# Build for production
-npm run build
-
-# Lint frontend code
-npm run lint
-```
+If you also need to work on the web interface, follow [Option 2: Website Development Setup](#-option-2-website-development-setup) for additional setup.
 
 ---

@@ -218,20 +225,34 @@ npm run lint
 #### 1. Triton Installation Issues
 ```bash
 # Error: "No module named 'triton'"
-# Solution: Install Triton from source
+# Solution: Uninstall existing Triton and install from source
+pip uninstall -y pytorch-triton triton || true
 git clone https://github.com/triton-lang/triton.git
 cd triton
 pip install -e .
 ```
 
-#### 2. CUDA Not Available
+#### 2. GPU Not Available
 ```bash
-# Error: "CUDA not available"
-# Check CUDA installation
-python -c "import torch; print(torch.cuda.is_available())"
-
-# If False, install CUDA-enabled PyTorch
+# Error: "CUDA not available" or "ROCm not available"
+# Check GPU installation
+python -c "import torch; print(f'GPU available: {torch.cuda.is_available()}')"
+python -c "import torch; print(f'Device count: {torch.cuda.device_count()}')"
+
+# For NVIDIA GPUs - Install CUDA-enabled PyTorch
+pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
+# Alternative: stable version
 pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
+
+# For AMD GPUs - Install ROCm-enabled PyTorch
+pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.2
+# Alternative: stable version
+pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1
+
+# Verify GPU support after installation
+python -c "import torch; print(f'GPU available: {torch.cuda.is_available()}')"
+python -c "import torch; print(f'GPU device: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"No GPU\"}')"
+python -c "import torch; print(f'Backend version: {torch.version.cuda if torch.version.cuda else torch.version.hip}')"
 ```
 
 #### 3. Permission Issues
@@ -270,20 +291,23 @@ npm install
 
 Set these for development:
 ```bash
-# Enable debug logging
-export TRITONPARSE_DEBUG=1
-
-# Enable NDJSON output (default)
-export TRITONPARSE_NDJSON=1
-
-# Enable gzip compression
-export TRITON_TRACE_GZIP=1
-
-# Custom trace directory
-export TRITON_TRACE=/path/to/traces
-
-# Disable FX graph cache (for testing)
-export TORCHINDUCTOR_FX_GRAPH_CACHE=0
+# TritonParse specific
+export TRITONPARSE_DEBUG=1             # Enable debug logging
+export TRITONPARSE_NDJSON=1            # Enable NDJSON output (default)
+export TRITON_TRACE_GZIP=1             # Enable gzip compression
+export TRITON_TRACE=/path/to/traces    # Custom trace directory
+
+# PyTorch/TorchInductor related
+export TORCHINDUCTOR_FX_GRAPH_CACHE=0  # Disable FX graph cache (for testing)
+export TORCH_LOGS="+dynamo,+inductor"  # Enable PyTorch debug logs
+export CUDA_VISIBLE_DEVICES=0          # Limit to specific NVIDIA GPU
+export ROCR_VISIBLE_DEVICES=0          # Limit to specific AMD GPU (ROCm)
+export HIP_VISIBLE_DEVICES=0           # Alternative for AMD GPUs
+
+# GPU debugging (if needed)
+export CUDA_LAUNCH_BLOCKING=1          # Synchronous CUDA execution (NVIDIA)
+export HIP_LAUNCH_BLOCKING=1           # Synchronous HIP execution (AMD)
+export TORCH_USE_CUDA_DSA=1            # Enable CUDA device-side assertions (NVIDIA)
 ```
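
The flags above are ordinary environment variables. As an illustration only, here is a minimal stdlib sketch of how such boolean flags are typically read; the `env_flag` helper is hypothetical and is not TritonParse's actual parsing code:

```python
import os

def env_flag(name: str, default: bool = False) -> bool:
    """Interpret an environment variable as a boolean flag (hypothetical helper)."""
    val = os.environ.get(name)
    if val is None:
        return default
    return val.strip().lower() in ("1", "true", "yes", "on")

# Simulate the settings from the section above
os.environ["TRITONPARSE_DEBUG"] = "1"

debug = env_flag("TRITONPARSE_DEBUG")                      # True
ndjson = env_flag("TRITONPARSE_NDJSON", default=True)      # default applies when unset
trace_dir = os.environ.get("TRITON_TRACE", "/tmp/traces")  # path-valued, not boolean
print(debug, ndjson, trace_dir)
```

Treating unset as a default (rather than an error) matches how opt-in debug flags like these usually behave.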
 
 ### Getting Help
@@ -306,10 +330,22 @@ After installation, verify everything works:
 
 ### 1. Python API Test
 ```python
+# Test PyTorch installation
+import torch
+print(f"PyTorch version: {torch.__version__}")
+print(f"GPU available: {torch.cuda.is_available()}")
+print(f"GPU device count: {torch.cuda.device_count()}")
+if torch.cuda.is_available():
+    print(f"GPU device: {torch.cuda.get_device_name(0)}")
+    print(f"Backend version: {torch.version.cuda if torch.version.cuda else torch.version.hip}")
+
+# Test Triton installation
+import triton
+print(f"Triton version: {triton.__version__}")
+
+# Test TritonParse installation
 import tritonparse.structured_logging
 import tritonparse.utils
-
-# Should not raise any errors
 print("TritonParse installed successfully!")
 ```
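
The trace files the web interface loads are NDJSON: one JSON object per line. A minimal stdlib sketch of reading such a file follows; the event fields shown are illustrative assumptions, not TritonParse's actual trace schema:

```python
import io
import json

# Two made-up trace lines standing in for a real .ndjson file.
sample = io.StringIO(
    '{"event": "compilation", "kernel": "tensor_add"}\n'
    '{"event": "launch", "kernel": "tensor_add", "grid": [4096]}\n'
)

# Parse each non-empty line as an independent JSON object.
events = [json.loads(line) for line in sample if line.strip()]
print(len(events))        # 2
print(events[1]["grid"])  # [4096]
```

For the `.gz` variant mentioned above, `gzip.open(path, "rt")` can replace the in-memory file object.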

docs/wiki-pages/02.-Usage-Guide.md

Lines changed: 6 additions & 4 deletions
@@ -76,8 +76,9 @@ def tensor_add(a, b):
 # Example usage
 if __name__ == "__main__":
     # Create test tensors
-    a = torch.randn(1024, 1024, device="cuda", dtype=torch.float32)
-    b = torch.randn(1024, 1024, device="cuda", dtype=torch.float32)
+    device = "cuda" if torch.cuda.is_available() else "cpu"
+    a = torch.randn(1024, 1024, device=device, dtype=torch.float32)
+    b = torch.randn(1024, 1024, device=device, dtype=torch.float32)
 
     # Execute kernel (this will be traced)
     c = tensor_add(a, b)
@@ -110,8 +111,9 @@ def simple_add(a, b):
 compiled_add = torch.compile(simple_add)
 
 # Create test data
-a = torch.randn(1024, 1024, device="cuda", dtype=torch.float32)
-b = torch.randn(1024, 1024, device="cuda", dtype=torch.float32)
+device = "cuda"
+a = torch.randn(1024, 1024, device=device, dtype=torch.float32)
+b = torch.randn(1024, 1024, device=device, dtype=torch.float32)
 
 # Execute compiled function (this will be traced)
 result = compiled_add(a, b)
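
The device-selection pattern this commit introduces can be sketched on its own. The `try/except` guard is an addition for illustration only, so the sketch runs even on machines without PyTorch installed:

```python
# Prefer GPU when available, fall back to CPU. Triton kernels themselves still
# require a GPU; the fallback only keeps plain-PyTorch examples runnable.
try:
    import torch
    gpu_available = torch.cuda.is_available()
except ImportError:
    gpu_available = False  # no torch installed: behave like a CPU-only machine

device = "cuda" if gpu_available else "cpu"
print(f"Selected device: {device}")
```

Tensors created with `device=device` then land on whichever backend was detected, which is exactly what the updated examples above do.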

docs/wiki-pages/06.-FAQ.md

Lines changed: 14 additions & 7 deletions
@@ -26,7 +26,9 @@ This page addresses common questions and issues encountered when using TritonPar
 **A:**
 - **Python** >= 3.10
 - **Triton** > 3.3.1 (must be compiled from source)
-- **CUDA** (for GPU tracing) - compatible NVIDIA GPU
+- **GPU Support** (for GPU tracing):
+  - **NVIDIA GPUs**: CUDA 11.8+ or 12.x
+  - **AMD GPUs**: ROCm 5.0+ (MI100, MI200, MI300 series)
 - **Modern browser** (Chrome 90+, Firefox 88+, Safari 14+, Edge 90+)
 
 ## 🔧 Installation and Setup
@@ -36,12 +38,16 @@ This page addresses common questions and issues encountered when using TritonPar
 **A:** Triton must be compiled from source for TritonParse to work:
 
 ```bash
+# First, uninstall any existing PyTorch-bundled Triton
+pip uninstall -y pytorch-triton triton || true
+
+# Install Triton from source
 git clone https://github.com/triton-lang/triton.git
 cd triton
 pip install -e .
 ```
 
-For detailed instructions, see the [Triton installation guide](https://github.com/triton-lang/triton?tab=readme-ov-file#install-from-source).
+For detailed instructions, see our [Installation Guide](01.-Installation) or the [official Triton installation guide](https://github.com/triton-lang/triton?tab=readme-ov-file#install-from-source).
 
 ### Q: I'm getting "No module named 'triton'" errors. What's wrong?
 
@@ -50,12 +56,13 @@ For detailed instructions, see the [Triton installation guide](https://github.co
 2. **Wrong Python environment** - Make sure you're in the right virtual environment
 3. **Installation failed** - Check for compilation errors during Triton installation
 
-### Q: Do I need CUDA to use TritonParse?
+### Q: Do I need a GPU to use TritonParse?
 
-**A:**
-- **For CPU-only analysis**: No CUDA needed
-- **For GPU kernel tracing**: Yes, CUDA is required
-- **For web interface**: No CUDA needed (just to view existing traces)
+**A:** Yes, a GPU is required because Triton itself depends on GPU:
+- **For generating traces**: GPU is required (either NVIDIA with CUDA or AMD with ROCm)
+- **For web interface only**: No GPU needed (just to view existing trace files from others)
+
+Note: Triton kernels can only run on GPU, so you need GPU hardware to generate your own traces.
 
 ## 📊 Generating Traces
docs/wiki-pages/Home.md

Lines changed: 1 addition & 1 deletion
@@ -112,4 +112,4 @@ This project is licensed under the BSD-3 License. See the [LICENSE](https://gith
 
 ---
 
-**Note**: This tool is designed for developers working with Triton kernels and GPU computing. Basic familiarity with CUDA, GPU programming concepts, and the Triton language is recommended for effective use.
+**Note**: This tool is designed for developers working with Triton kernels and GPU computing. Basic familiarity with GPU programming concepts (CUDA for NVIDIA or ROCm/HIP for AMD), and the Triton language is recommended for effective use.
