Describe the bug
Hello, I implemented my own custom pipeline, `RepDiffusionPipeline`, by referring to `StableDiffusionPipeline`, but I am running into an issue.
I call `accelerator.prepare` properly and move the models onto the device with `.to(accelerator.device)`.
However, when I call the pipeline (i.e., when its `__call__` method runs), I sometimes hit an error.
This is not only a multi-GPU problem; it also occurs when I use a single GPU.
For example, I defined my pipeline for my validation in training code like this:
```python
val_pipe = RepDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    unet=accelerator.unwrap_model(unet),
    rep_encoder=accelerator.unwrap_model(rep_encoder),
    vae=accelerator.unwrap_model(vae),
    revision=None,
    variant=None,
    torch_dtype=weight_dtype,
    safety_checker=None,
).to(accelerator.device)
```
Then, when I call `val_pipe` like this:
```python
model_pred = val_pipe(
    image=condition_original_image if args.val_mask_op else data["original_images"],
    representation=representation,
    prompt="",
    num_inference_steps=20,
    image_guidance_scale=1.5,
    guidance_scale=scale,
    generator=generator,
).images[0]
```
I sometimes get the error `RepDiffusionPipeline has no attribute '_execution_device'`. It does not happen every time; it occurs randomly.
How can I solve this issue, and which part of my code should I inspect and fix?
Thank you for reading:)
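For context on where I suspect the problem might be: as far as I understand, `_execution_device` is a property defined on the `DiffusionPipeline` base class, so the attribute can go missing if a custom pipeline's `__init__` skips `super().__init__()` or never calls `register_modules(...)`. Below is a plain-Python sketch (not actual diffusers code; the `Base`, `BrokenPipeline`, and `FixedPipeline` names are hypothetical) of how skipping base-class initialization produces exactly this kind of `AttributeError`:

```python
class Base:
    """Stand-in for DiffusionPipeline: sets up state in __init__."""

    def __init__(self):
        # DiffusionPipeline does something similar via register_modules().
        self._modules = {}

    @property
    def _execution_device(self):
        # Simplified stand-in for diffusers' device lookup: it reads
        # state that only exists if Base.__init__ ran.
        return self._modules.get("unet", "cpu")


class BrokenPipeline(Base):
    def __init__(self, unet):
        # Missing super().__init__(): self._modules is never created,
        # so accessing _execution_device raises AttributeError.
        self.unet = unet


class FixedPipeline(Base):
    def __init__(self, unet):
        super().__init__()  # initialize base-class state first
        self._modules["unet"] = unet


broken = BrokenPipeline(unet="cuda:0")
try:
    broken._execution_device
except AttributeError as exc:
    print("broken:", exc)  # 'BrokenPipeline' object has no attribute '_modules'

fixed = FixedPipeline(unet="cuda:0")
print("fixed:", fixed._execution_device)  # cuda:0
```

I double-checked that my `__init__` calls both, but I may be missing a code path where the modules are re-registered.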
Reproduction
The error occurs randomly when I call the pipeline defined above, so I have not found a deterministic way to reproduce it.
Logs
RepDiffusionPipeline has no attribute '_execution_device'
System Info
I have tested with various diffusers and Python versions, but the problem persists.
Currently I am running my code with diffusers 0.27.2 and Python 3.10.14.
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.2.2+cu121 with CUDA 1201 (you have 2.2.2+cu118)
Python 3.10.14 (you have 3.10.14)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
- diffusers version: 0.27.2
- Platform: Linux-5.4.0-132-generic-x86_64-with-glibc2.31
- Python version: 3.10.14
- PyTorch version (GPU?): 2.2.2+cu118 (True)
- Huggingface_hub version: 0.24.3
- Transformers version: 4.43.3
- Accelerate version: 0.33.0
- xFormers version: 0.0.25.post1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes (accelerate; the error also occurs on a single GPU)