`Genie-Envisioner/models/ltx_models/transformer_ltx_multiview.py`, lines 100 to 110 at commit `4c874d9`:
```python
if image_rotary_emb is not None:  # for self attn, extend the sequence length according to the cross_view_attn param
    query = apply_rotary_emb(query, image_rotary_emb)
    key = apply_rotary_emb(key, image_rotary_emb)
    if cross_view_attn:
        query = rearrange(query, '(b v) l c -> b (v l) c', v=n_view)
        key = rearrange(key, '(b v) l c -> b (v l) c', v=n_view)
        value = rearrange(value, '(b v) l c -> b (v l) c', v=n_view)
else:  # for cross attn, extend the sequence length
    query = rearrange(query, '(b v) l c -> b (v l) c', v=n_view)
```
I noticed that you apply the rotary positional embeddings in self-attention but not in cross-attention. Is this a mistake, or did you do it intentionally?
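
For context, here is a minimal, runnable sketch of the pattern I am asking about: rotary embeddings are applied per view to query and key, and the einops rearrange then folds the view axis into the sequence axis so attention spans all views. The shapes, `n_view`, and the `toy_apply_rotary_emb` helper below are illustrative assumptions, not the repository's actual code.

```python
# Minimal sketch of the self-attention path shown above.
# Shapes, the toy rotary embedding, and n_view are assumptions for
# illustration only; they are not the repository's implementation.
import torch
from einops import rearrange


def toy_apply_rotary_emb(x: torch.Tensor, freqs: torch.Tensor) -> torch.Tensor:
    """Rotate pairs of channels by per-position angles (simplified RoPE)."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = freqs.cos(), freqs.sin()
    rotated = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
    return rotated.flatten(-2)


b, v, l, c = 2, 4, 16, 8          # batch, views, tokens per view, channels
n_view = v

# Per-view tensors are stacked along the batch axis: shape (b*v, l, c).
query = torch.randn(b * v, l, c)
key = torch.randn(b * v, l, c)
value = torch.randn(b * v, l, c)

image_rotary_emb = torch.randn(l, c // 2)   # one angle per channel pair
cross_view_attn = True

# Self-attention path: rotary embeddings are applied per view, then the
# view axis is folded into the sequence axis so tokens attend across views.
if image_rotary_emb is not None:
    query = toy_apply_rotary_emb(query, image_rotary_emb)
    key = toy_apply_rotary_emb(key, image_rotary_emb)
    if cross_view_attn:
        query = rearrange(query, '(b v) l c -> b (v l) c', v=n_view)
        key = rearrange(key, '(b v) l c -> b (v l) c', v=n_view)
        value = rearrange(value, '(b v) l c -> b (v l) c', v=n_view)

print(query.shape)  # torch.Size([2, 64, 8]): sequence length is now v * l
```

Running it shows the sequence length growing from `l` to `v * l` after the rearrange, which is what the "extend the sequence length" comments refer to.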