If you would like to install the repository with [TensorRT](https://github.com/NVIDIA/TensorRT) support, you currently need to install a PyTorch image from NVIDIA instead. First install [enroot](https://github.com/NVIDIA/enroot), next follow the steps below:
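The enroot workflow typically looks like the following sketch (the image tag and container name here are illustrative, not the repository's official steps — pick a current NVIDIA PyTorch release):

```shell
# Import an NVIDIA PyTorch container image from NGC (tag is illustrative)
enroot import docker://nvcr.io#nvidia/pytorch:25.01-py3

# Create a container from the resulting squashfs image
enroot create --name flux-trt nvidia+pytorch+25.01-py3.sqsh

# Start the container with a writable filesystem, then install this repo inside it
enroot start --rw flux-trt
```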
We are offering an extensive suite of models.
The weights of the autoencoder are also released under [apache-2.0](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md) and can be found in the HuggingFace repos above.
We also offer a Gradio-based demo for an interactive experience. To run the Gradio demo:
```bash
python demo_gr.py --name flux-schnell --device cuda
```
Options:
- `--name`: Choose the model to use (options: "flux-schnell", "flux-dev")
- `--device`: Specify the device to use (default: "cuda" if available, otherwise "cpu")
- `--offload`: Offload model to CPU when not in use
- `--share`: Create a public link to your demo
To run the demo with the dev model and create a public link:
```bash
python demo_gr.py --name flux-dev --share
```
## Diffusers integration
`FLUX.1 [schnell]` and `FLUX.1 [dev]` are integrated with the [🧨 diffusers](https://github.com/huggingface/diffusers) library. To use them with diffusers, first install it:
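A minimal install via pip (the official diffusers examples also assume `torch` and `transformers` are available):

```shell
pip install -U diffusers
```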
where `name` is one of `flux-dev-canny`, `flux-dev-depth`, `flux-dev-canny-lora`, or `flux-dev-depth-lora`.
### TRT engine inference
You may also download ONNX exports of [FLUX.1 Depth \[dev\]](https://huggingface.co/black-forest-labs/FLUX.1-Depth-dev-onnx) and [FLUX.1 Canny \[dev\]](https://huggingface.co/black-forest-labs/FLUX.1-Canny-dev-onnx). We provide exports in BF16, FP8, and FP4 precision. Note that you need to install the repository with TensorRT support, as outlined [here](../README.md).
where `<precision>` is either `bf16`, `fp8`, or `fp4`. For `fp4`, you need an NVIDIA GPU based on the [Blackwell Architecture](https://www.nvidia.com/en-us/data-center/technologies/blackwell-architecture/).
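As a quick sanity check, the CUDA compute capability can tell you whether a GPU is Blackwell-class. The helper below is a hypothetical sketch, not part of this repository; Blackwell data-center GPUs report compute capability 10.x, and consumer Blackwell parts report 12.x:

```python
def is_blackwell(major: int, minor: int) -> bool:
    """Return True if a CUDA compute capability pair corresponds to a
    Blackwell-class GPU (required for the fp4 TensorRT engines)."""
    # Blackwell data-center GPUs (B100/B200) report compute capability 10.x;
    # consumer Blackwell (RTX 50-series) reports 12.x.
    return major in (10, 12)
```

On a machine with PyTorch installed, the running GPU can be checked with `is_blackwell(*torch.cuda.get_device_capability())`.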
## Diffusers usage
Flux Control (including the LoRAs) is also compatible with the `diffusers` Python library. Check out the [documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux) to learn more.
```bash
python -m flux --name <name> \
  --prompt "<prompt>"
```
### TRT engine inference
You may also download ONNX exports of [FLUX.1 \[dev\]](https://huggingface.co/black-forest-labs/FLUX.1-dev-onnx) and [FLUX.1 \[schnell\]](https://huggingface.co/black-forest-labs/FLUX.1-schnell-onnx). We provide exports in BF16, FP8, and FP4 precision. Note that you need to install the repository with TensorRT support, as outlined [here](../README.md).
where `<precision>` is either `bf16`, `fp8`, or `fp4`. For `fp4`, you need an NVIDIA GPU based on the [Blackwell Architecture](https://www.nvidia.com/en-us/data-center/technologies/blackwell-architecture/).
### Streamlit and Gradio
We also provide a Streamlit demo that does both text-to-image and image-to-image. The demo can be run via
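a typical invocation like the one below (the script name `demo_st.py` is an assumption; verify the filename in the repository):

```shell
streamlit run demo_st.py
```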