
Error RuntimeError: Invalid buffer size #5894

@misteral

Description

Describe the bug

I got the error RuntimeError: Invalid buffer size: 11.25 GB while creating a simple GIF with AnimateDiffPipeline; the code is below.

Please help me. I'm using a Mac M1 Pro with 16 GB of RAM.

Reproduction

import torch
from diffusers import MotionAdapter, AnimateDiffPipeline, EulerAncestralDiscreteScheduler
from diffusers.utils import export_to_gif

# Load the motion adapter
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
adapter = adapter.to('mps')
# Load an SD 1.5 based fine-tuned model
model_id = "DiffCivit/epiCPhotoGasm_X_v2"
pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter,
                                           variant="fp16",
                                           torch_dtype=torch.float16,
                                           safety_checker=None,
                                           use_safetensors=True,
                                           cache_dir="./models_cache")
scheduler = EulerAncestralDiscreteScheduler.from_pretrained(
    model_id, subfolder="scheduler", clip_sample=False, timestep_spacing="linspace", steps_offset=1
)
pipe.scheduler = scheduler
# Move all pipeline components to the Apple 'mps' device
pipe = pipe.to('mps')

# enable memory savings
# pipe.unet.enable_forward_chunking(chunk_size=4, dim=4)
pipe.enable_vae_slicing()

pipe.enable_attention_slicing()
# pipe.enable_model_cpu_offload()

prompt = (
    "beautiful young looking woman, "
    "smiling, white teeth, deep blue eyes, dress, (looking at the camera:1.4), "
    "(highest quality), (best shadow), intricate details, interior, blonde hair:1.3, "
    "dark studio, muted colors, jewelry"
)
negative_prompt = "cartoon, cgi, render, illustration, painting, drawing"

generator = torch.Generator(device="mps").manual_seed(3358854173)

output = pipe(prompt=prompt,
    negative_prompt=negative_prompt,
    width=512,
    height=768,
    num_frames=10,
    guidance_scale=5,
    num_inference_steps=15,
    generator=generator
)
frames = output.frames[0]
export_to_gif(frames, "animation.gif")
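
For reference, a lower-memory variant of the same call that might fit in 16 GB. This is an untested sketch: both methods used are public pipeline API, but the 512x512 size and 8 frames are arbitrary values I picked to shrink the attention buffers, not recommendations from the library.

# Untested sketch: the attention score buffers grow with frames x heads x tokens^2,
# so a smaller resolution and fewer frames shrink them quadratically.
pipe.enable_attention_slicing("max")  # compute attention one slice at a time

output = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=512,
    height=512,          # 512x512 instead of 512x768 -> fewer latent tokens
    num_frames=8,        # fewer frames -> smaller batched attention
    guidance_scale=5,
    num_inference_steps=15,
    generator=generator,
)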

Logs

This is my console report:

The config attributes {'motion_activation_fn': 'geglu', 'motion_attention_bias': False, 'motion_cross_attention_dim': None} were passed to MotionAdapter, but are not expected and will be ignored. Please verify your config.json configuration file.
/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/diffusers/pipelines/pipeline_utils.py:1695: FutureWarning: You are trying to load the model files of the `variant=fp16`, but no such modeling files are available.The default model files: {'unet/diffusion_pytorch_model.safetensors', 'vae/diffusion_pytorch_model.safetensors', 'safety_checker/model.safetensors', 'text_encoder/model.safetensors'} will be loaded instead. Make sure to not load from `variant=fp16`if such variant modeling files are not available. Doing so will lead to an error in v0.24.0 as defaulting to non-variantmodeling files is deprecated.
  deprecate("no variant default", "0.24.0", deprecation_message, standard_warn=False)
Keyword arguments {'safety_checker': None} are not expected by AnimateDiffPipeline and will be ignored.
Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:03<00:00,  1.49it/s]
The config attributes {'center_input_sample': False, 'flip_sin_to_cos': True, 'freq_shift': 0, 'mid_block_type': 'UNetMidBlock2DCrossAttn', 'only_cross_attention': False, 'dropout': 0.0, 'transformer_layers_per_block': 1, 'encoder_hid_dim': None, 'encoder_hid_dim_type': None, 'attention_head_dim': 8, 'dual_cross_attention': False, 'class_embed_type': None, 'addition_embed_type': None, 'addition_time_embed_dim': None, 'num_class_embeds': None, 'upcast_attention': False, 'resnet_time_scale_shift': 'default', 'resnet_skip_time_act': False, 'resnet_out_scale_factor': 1.0, 'time_embedding_type': 'positional', 'time_embedding_dim': None, 'time_embedding_act_fn': None, 'timestep_post_act': None, 'time_cond_proj_dim': None, 'conv_in_kernel': 3, 'conv_out_kernel': 3, 'projection_class_embeddings_input_dim': None, 'attention_type': 'default', 'class_embeddings_concat': False, 'mid_block_only_cross_attention': None, 'cross_attention_norm': None, 'addition_embed_type_num_heads': 64} were passed to UNetMotionModel, but are not expected and will be ignored. Please verify your config.json configuration file.
  0%|                                                                                                                                                                                       | 0/15 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/Users/aleksandrbobrov/data/sd/sd-local/anima.py", line 45, in <module>
    output = pipe(prompt=prompt,
             ^^^^^^^^^^^^^^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/diffusers/pipelines/animatediff/pipeline_animatediff.py", line 661, in __call__
    noise_pred = self.unet(
                 ^^^^^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/diffusers/models/unet_motion_model.py", line 781, in forward
    sample, res_samples = downsample_block(
                          ^^^^^^^^^^^^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/diffusers/models/unet_3d_blocks.py", line 1083, in forward
    hidden_states = attn(
                    ^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/diffusers/models/transformer_2d.py", line 375, in forward
    hidden_states = block(
                    ^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/diffusers/models/attention.py", line 258, in forward
    attn_output = self.attn1(
                  ^^^^^^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/diffusers/models/attention_processor.py", line 522, in forward
    return self.processor(
           ^^^^^^^^^^^^^^^
  File "/Users/aleksandrbobrov/data/sd/sd-local/.venv/lib/python3.11/site-packages/diffusers/models/attention_processor.py", line 1231, in __call__
    hidden_states = F.scaled_dot_product_attention(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Invalid buffer size: 11.25 GB
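
As a back-of-the-envelope check (my own arithmetic, not taken from the diffusers source): 11.25 GiB is exactly the size of one fp16 self-attention score tensor for the first UNet down block at this resolution, frame count, and classifier-free-guidance batch, which suggests the sliced-attention path is not being used for this allocation on MPS.

latent_h, latent_w = 768 // 8, 512 // 8  # VAE downsamples by 8 -> 96 x 64
seq_len = latent_h * latent_w            # 6144 spatial tokens per frame
batch = 2 * 10                           # CFG doubles the 10-frame batch
heads = 8                                # heads in the first SD 1.5 down block
bytes_per_el = 2                         # fp16

scores = batch * heads * seq_len * seq_len * bytes_per_el
print(f"{scores / 2**30:.2f} GiB")       # prints 11.25 GiB -- matches the error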


System Info

  • diffusers version: 0.23.1
  • Platform: macOS-14.1.1-arm64-arm-64bit
  • Python version: 3.11.6
  • PyTorch version (GPU?): 2.2.0.dev20231121 (False)
  • Huggingface_hub version: 0.19.4
  • Transformers version: 4.35.2
  • Accelerate version: 0.24.1
  • xFormers version: not installed
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Who can help?

@DN6 @sayakpaul

Labels

bug (Something isn't working), stale (Issues that haven't received updates)
