
Questions regarding implementation #13

@a-r-r-o-w

Description


Hey 👋

I'm Aryan from the HuggingFace Diffusers team. I am working on integrating FasterCache into the library to make it available for all the video models we support. I had some questions regarding the implementation and was hoping to get some help.

In the paper, the section describing CFG Cache has the following:

These biases ensure that both high- and low-frequency differences are accurately captured and compensated during the reuse process. In the subsequent n timesteps (from t − 1 to t − n), we infer only the outputs of the conditional branches and compute the unconditional outputs using the cached ∆HF and ∆LF as follows:
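To make the described mechanism concrete, here is a minimal sketch of the frequency-split-and-reuse idea. This is not the FasterCache implementation; it is an illustrative reconstruction of the paper's description, using numpy's FFT (the cutoff fraction and all function names are assumptions for illustration):

```python
import numpy as np

def split_frequencies(x, cutoff=0.25):
    """Split a 2D feature map into low- and high-frequency components via FFT.
    `cutoff` (fraction of the spectrum treated as low frequency) is an
    illustrative choice, not a value from the paper."""
    freq = np.fft.fftshift(np.fft.fft2(x))
    h, w = x.shape[-2:]
    mask = np.zeros_like(freq, dtype=bool)
    ch, cw = max(1, int(h * cutoff)), max(1, int(w * cutoff))
    mask[..., h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw] = True
    low = np.fft.ifft2(np.fft.ifftshift(freq * mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(freq * ~mask)).real
    return low, high

def cache_deltas(uncond_out, cond_out):
    """At timestep t: run both branches once and cache the per-band deltas
    (ΔLF, ΔHF) between the unconditional and conditional outputs."""
    low_u, high_u = split_frequencies(uncond_out)
    low_c, high_c = split_frequencies(cond_out)
    return low_u - low_c, high_u - high_c

def approximate_uncond(cond_out, delta_lf, delta_hf):
    """At timesteps t-1 ... t-n: run only the conditional branch and
    approximate the unconditional output from the cached deltas."""
    low_c, high_c = split_frequencies(cond_out)
    return (low_c + delta_lf) + (high_c + delta_hf)
```

Under this reading, only one branch is evaluated per cached timestep, and the other is recovered additively in the frequency domain.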

It says that inference is run for the conditional branch, and the outputs of the unconditional branch are computed with the given equations. These are the relevant lines of code that seem to do what is described:

    single_output = self.fastercache_model_single_forward(
        hidden_states[:1],
        timestep[:1],
        encoder_hidden_states[:1],
        added_cond_kwargs,
        class_labels,
        cross_attention_kwargs,
        attention_mask,
        encoder_attention_mask,
        use_image_num,
        enable_temporal_attentions,
        return_dict,
    )[0]

However, the inputs are indexed as hidden_states[:1], timestep[:1], encoder_hidden_states[:1]. Doesn't this correspond to the unconditional inputs rather than the conditional ones? I believe [:1] selects the unconditional branch because the prompt embeds are concatenated in the order (negative_prompt_embeds, prompt_embeds) here.

Is this incorrect by any chance? Or is the unconditional branch being used to approximate the output of the conditional branch?
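To make the indexing question concrete, here is a tiny sketch of the batch layout I'm describing. The variable names and shapes are hypothetical, but the concatenation order matches the diffusers convention of negative (unconditional) embeds first:

```python
import numpy as np

# Hypothetical embeddings for a single prompt (batch size 1 per branch).
negative_prompt_embeds = np.zeros((1, 77, 768))  # unconditional branch
prompt_embeds = np.ones((1, 77, 768))            # conditional branch

# Classifier-free guidance batching: negative embeds are concatenated first.
encoder_hidden_states = np.concatenate([negative_prompt_embeds, prompt_embeds])

# Under this layout, [:1] selects the unconditional half
# and [1:] selects the conditional half.
uncond_half = encoder_hidden_states[:1]
cond_half = encoder_hidden_states[1:]
```

So if the forward call above passes encoder_hidden_states[:1], it would be running the unconditional branch, which is the opposite of what the paper's text says.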

Thank you for your time! 🤗

cc @cszy98 @ChenyangSi
