Commit a931b4c (1 parent: a0f8a79)

Remove Qwen Omni workaround that's no longer necessary (#21057)

Authored by Harry Mellor
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

File tree

1 file changed: +0 additions, −7 deletions

vllm/transformers_utils/config.py

Lines changed: 0 additions & 7 deletions

```diff
@@ -733,13 +733,6 @@ def get_hf_text_config(config: PretrainedConfig):
     """Get the "sub" config relevant to llm for multi modal models.
     No op for pure text models.
     """
-    # This block should be unnecessary after https://github.com/huggingface/transformers/pull/37517
-    if hasattr(config, "thinker_config"):
-        # TODO(suyang.fy): Refactor code.
-        # For Qwen2.5-Omni, change hf_text_config to
-        # thinker_config.text_config.
-        return config.thinker_config.text_config
-
     text_config = config.get_text_config()
 
     if text_config is not config:
```
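The deleted block special-cased Qwen2.5-Omni by reaching into `config.thinker_config.text_config` by hand; per the deleted comment, huggingface/transformers#37517 makes that unnecessary because `get_text_config()` is expected to resolve the nested sub-config itself. A minimal sketch of the old lookup pattern, using stand-in namespace objects instead of real `transformers` `PretrainedConfig` instances (the mock structure here is illustrative, not the actual config classes):

```python
from types import SimpleNamespace


def old_get_hf_text_config(config):
    # The removed workaround: if a Qwen2.5-Omni-style config exposes a
    # thinker_config, unwrap thinker_config.text_config by hand instead
    # of relying on config.get_text_config().
    if hasattr(config, "thinker_config"):
        return config.thinker_config.text_config
    return config.get_text_config()


# Stand-in objects; real configs are transformers PretrainedConfig instances.
text = SimpleNamespace(kind="text_config")
omni = SimpleNamespace(thinker_config=SimpleNamespace(text_config=text))
plain = SimpleNamespace(get_text_config=lambda: text)

# Both shapes resolve to the same inner text config.
assert old_get_hf_text_config(omni) is text
assert old_get_hf_text_config(plain) is text
```

After this commit, vLLM takes the `get_text_config()` path unconditionally and lets the upstream library handle the nesting.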
