TypeError when passing "tokenizer_name" to "FastVisionModel.from_pretrained" due to parameter conflict #3275

@AhmadAlmustadi

Description

Hi

I am trying to load a vision model with FastVisionModel.from_pretrained, where the model and the tokenizer/processor live in different Hugging Face repositories. When I pass the tokenizer_name parameter to point at the separate tokenizer repo, I get TypeError: got multiple values for keyword argument 'tokenizer_name'.

from unsloth import FastVisionModel

model, processor = FastVisionModel.from_pretrained(
    "unsloth/Qwen2.5-VL-3B-Instruct",
    load_in_4bit = False,
    use_gradient_checkpointing = "unsloth",
    tokenizer_name = "unsloth/Qwen2.5-VL-3B-Instruct",
)

The error (traceback abridged):

TypeError                                 Traceback (most recent call last)
/tmp/ipython-input-1738901268.py in <cell line: 0>()
----> 1 model, processor = FastVisionModel.from_pretrained(
...
--> 857 model, tokenizer = FastBaseModel.from_pretrained(
    858     model_name = model_name,
    859     max_seq_length = max_seq_length,
...
TypeError: unsloth.models.vision.FastBaseModel.from_pretrained() got multiple values for keyword argument 'tokenizer_name'

After checking the source code, I think this error occurs because the tokenizer_name parameter is passed twice internally: once from the user's kwargs and once by the internal logic in FastModel.from_pretrained.
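
Here is a minimal sketch of the Python mechanism I believe is at play (the function names below are hypothetical, not the actual unsloth source): if a wrapper forwards tokenizer_name explicitly while the same key is still sitting in **kwargs, Python raises exactly this TypeError at the call site.

def inner_from_pretrained(model_name, tokenizer_name=None, **kwargs):
    pass

def outer_from_pretrained(model_name, **kwargs):
    # Forwards tokenizer_name explicitly while it is ALSO still in kwargs.
    return inner_from_pretrained(
        model_name,
        tokenizer_name = kwargs.get("tokenizer_name"),
        **kwargs,
    )

# TypeError: inner_from_pretrained() got multiple values for
# keyword argument 'tokenizer_name'
outer_from_pretrained("repo/model", tokenizer_name = "repo/tokenizer")

If that is what is happening, the fix on the library side would presumably be to consume the argument before forwarding, e.g. tokenizer_name = kwargs.pop("tokenizer_name", None) or model_name.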

Environment:
Python: 3.12.11
OS: Linux 6.1.123+
PyTorch: 2.8.0+cu126
CUDA: 12.6
GPU: Tesla T4
Unsloth: 2025.9.1
Transformers: 4.55.4
Accelerate: 1.10.1
Bitsandbytes: 0.47.0
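
Until this is fixed, a possible workaround (an untested sketch on my side) is to drop tokenizer_name entirely and load the processor from the second repository directly with transformers. "your-org/your-processor-repo" below is a placeholder for the separate repo, and this assumes that processor is actually compatible with the model:

from unsloth import FastVisionModel
from transformers import AutoProcessor

# Load the model without tokenizer_name to avoid the conflicting kwarg.
model, _ = FastVisionModel.from_pretrained(
    "unsloth/Qwen2.5-VL-3B-Instruct",
    load_in_4bit = False,
    use_gradient_checkpointing = "unsloth",
)

# Load the processor/tokenizer from the separate repository.
processor = AutoProcessor.from_pretrained("your-org/your-processor-repo")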
