
ValueError: The checkpoint you are trying to load has model type multi_modality but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date. #60


Description

@Y-PanC

Hello!
I downloaded this model to use with the LLaMA-Factory framework. When deploying it as an API, the following error is reported:
[2024-10-01 00:15:35,483] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[INFO|configuration_utils.py:670] 2024-10-01 00:15:38,538 >> loading configuration file /mnt/ssd2/models/deepseek-vl-7b-chat/config.json
Traceback (most recent call last):
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1023, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 725, in getitem
raise KeyError(key)
KeyError: 'multi_modality'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/bin/llamafactory-cli", line 8, in
sys.exit(main())
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/cli.py", line 79, in main
run_api()
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/api/app.py", line 129, in run_api
chat_model = ChatModel()
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/chat/chat_model.py", line 52, in init
self.engine: "BaseEngine" = HuggingfaceEngine(model_args, data_args, finetuning_args, generating_args)
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/chat/hf_engine.py", line 54, in init
tokenizer_module = load_tokenizer(model_args)
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/model/loader.py", line 69, in load_tokenizer
config = load_config(model_args)
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/model/loader.py", line 122, in load_config
return AutoConfig.from_pretrained(model_args.model_name_or_path, **init_kwargs)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1025, in from_pretrained
raise ValueError(
ValueError: The checkpoint you are trying to load has model type multi_modality but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
How should I fix this? My version is transformers==4.45.0; the full environment is shown in the screenshot below.
[environment screenshot]
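
For context, the multi_modality model type is not part of Transformers itself; it is defined by the deepseek_vl package in this repo, so a plain AutoConfig.from_pretrained call (which is what LLaMA-Factory's load_config does) cannot resolve it, regardless of the Transformers version. Below is a minimal sketch of the standalone loading path based on this repo's README, assuming the deepseek_vl package is installed; importing deepseek_vl.models is what registers the custom architecture with the auto classes:

```python
# Minimal sketch, assuming deepseek_vl is installed (e.g. `pip install -e .`
# from a clone of this repo).
import torch
from transformers import AutoModelForCausalLM

# Importing the deepseek_vl model classes registers the custom
# "multi_modality" model type with Transformers' auto classes; without this
# import, AutoConfig raises the KeyError shown in the traceback above.
from deepseek_vl.models import MultiModalityCausalLM, VLChatProcessor

model_path = "deepseek-ai/deepseek-vl-7b-chat"

# The processor bundles the tokenizer and the image preprocessing.
vl_chat_processor: VLChatProcessor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

vl_gpt: MultiModalityCausalLM = AutoModelForCausalLM.from_pretrained(
    model_path, trust_remote_code=True
)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
```

Getting LLaMA-Factory itself to load this checkpoint is a separate question, since its loader would also need the multi_modality architecture registered before it calls AutoConfig.from_pretrained.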
