OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. #767

@Yeeeeeee123

Description

Has this issue been opened before?

  • It is not in the FAQ, I checked.
  • It is not in the issues, I searched.

Which UI

auto

Additional context

> docker compose --profile auto up --build
Compose can now delegate builds to bake for better performance.
 To do so, set COMPOSE_BAKE=true.
[+] Building 49.6s (28/28) FINISHED                                                                docker:desktop-linux
 => [auto internal] load build definition from Dockerfile                                                          0.0s
 => => transferring dockerfile: 3.19kB                                                                             0.0s
 => WARN: FromAsCasing: 'as' and 'FROM' keywords' casing do not match (line 1)                                     0.0s
 => [auto internal] load metadata for docker.io/pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime                     46.2s
 => [auto internal] load metadata for docker.io/alpine/git:2.36.2                                                 49.5s
 => [auto internal] load .dockerignore                                                                             0.0s
 => => transferring context: 2B                                                                                    0.0s
 => [auto internal] load build context                                                                             0.0s
 => => transferring context: 122B                                                                                  0.0s
 => [auto stage-1  1/13] FROM docker.io/pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime@sha256:0279f7aa29974bf64e61  0.0s
 => [auto download 1/9] FROM docker.io/alpine/git:2.36.2@sha256:ec491c893597b68c92b88023827faa771772cfd5e106b76c7  0.0s
 => CACHED [auto stage-1  2/13] RUN sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list       0.0s
 => CACHED [auto stage-1  3/13] RUN sed -i 's/security.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list      0.0s
 => CACHED [auto stage-1  4/13] RUN --mount=type=cache,target=/var/cache/apt   apt-get update &&   apt-get instal  0.0s
 => CACHED [auto stage-1  5/13] RUN --mount=type=cache,target=/root/.cache/pip   git clone https://github.com/AUT  0.0s
 => CACHED [auto download 2/9] COPY clone.sh /clone.sh                                                             0.0s
 => CACHED [auto download 3/9] RUN . /clone.sh stable-diffusion-webui-assets https://github.com/AUTOMATIC1111/sta  0.0s
 => CACHED [auto download 4/9] RUN . /clone.sh stable-diffusion-stability-ai https://github.com/Stability-AI/stab  0.0s
 => CACHED [auto download 5/9] RUN . /clone.sh BLIP https://github.com/salesforce/BLIP.git 48211a1594f1321b00f14c  0.0s
 => CACHED [auto download 6/9] RUN . /clone.sh k-diffusion https://github.com/crowsonkb/k-diffusion.git ab527a9a6  0.0s
 => CACHED [auto download 7/9] RUN . /clone.sh clip-interrogator https://github.com/pharmapsychotic/clip-interrog  0.0s
 => CACHED [auto download 8/9] RUN . /clone.sh generative-models https://github.com/Stability-AI/generative-model  0.0s
 => CACHED [auto download 9/9] RUN . /clone.sh stable-diffusion-webui-assets https://github.com/AUTOMATIC1111/sta  0.0s
 => CACHED [auto stage-1  6/13] COPY --from=download /repositories/ /stable-diffusion-webui/repositories/          0.0s
 => CACHED [auto stage-1  7/13] RUN mkdir /stable-diffusion-webui/interrogate && cp /stable-diffusion-webui/repos  0.0s
 => CACHED [auto stage-1  8/13] RUN --mount=type=cache,target=/root/.cache/pip   pip install pyngrok xformers==0.  0.0s
 => CACHED [auto stage-1  9/13] RUN apt-get -y install libgoogle-perftools-dev && apt-get clean                    0.0s
 => CACHED [auto stage-1 10/13] COPY . /docker                                                                     0.0s
 => CACHED [auto stage-1 11/13] RUN   sed -i 's/in_app_dir = .*/in_app_dir = True/g' /opt/conda/lib/python3.10/si  0.0s
 => CACHED [auto stage-1 12/13] WORKDIR /stable-diffusion-webui                                                    0.0s
 => [auto] exporting to image                                                                                      0.0s
 => => exporting layers                                                                                            0.0s
 => => writing image sha256:60d1ae86535a091e3bae94c70881a2822ff35b37dbbf5f9ccf375fdd0820e4a5                       0.0s
 => => naming to docker.io/library/sd-auto:78                                                                      0.0s
 => [auto] resolving provenance for metadata file                                                                  0.0s
[+] Running 2/2
 ✔ auto                           Built                                                                            0.0s
 ✔ Container webui-docker-auto-1  Created                                                                          0.0s
Attaching to auto-1
auto-1  | /stable-diffusion-webui
auto-1  | total 784K
auto-1  | drwxr-xr-x 1 root root 4.0K Jun  1 08:59 .
auto-1  | drwxr-xr-x 1 root root 4.0K May 31 12:12 ..
auto-1  | -rw-r--r-- 1 root root   48 May 31 12:07 .eslintignore
auto-1  | -rw-r--r-- 1 root root 3.4K May 31 12:07 .eslintrc.js
auto-1  | drwxr-xr-x 8 root root 4.0K May 31 12:07 .git
auto-1  | -rw-r--r-- 1 root root   55 May 31 12:07 .git-blame-ignore-revs
auto-1  | drwxr-xr-x 4 root root 4.0K May 31 12:07 .github
auto-1  | -rw-r--r-- 1 root root  521 May 31 12:07 .gitignore
auto-1  | -rw-r--r-- 1 root root  119 May 31 12:07 .pylintrc
auto-1  | -rw-r--r-- 1 root root  84K May 31 12:07 CHANGELOG.md
auto-1  | -rw-r--r-- 1 root root  243 May 31 12:07 CITATION.cff
auto-1  | -rw-r--r-- 1 root root  657 May 31 12:07 CODEOWNERS
auto-1  | -rw-r--r-- 1 root root  35K May 31 12:07 LICENSE.txt
auto-1  | -rw-r--r-- 1 root root  13K May 31 12:07 README.md
auto-1  | drwxr-xr-x 2 root root 4.0K May 31 12:12 __pycache__
auto-1  | -rw-r--r-- 1 root root  146 May 31 12:07 _typos.toml
auto-1  | drwxr-xr-x 4 root root 4.0K May 31 14:49 cache
auto-1  | lrwxrwxrwx 1 root root   29 Jun  1 08:59 config.json -> /data/config/auto/config.json
auto-1  | lrwxrwxrwx 1 root root   31 Jun  1 08:59 config_states -> /data/config/auto/config_states
auto-1  | drwxr-xr-x 2 root root 4.0K May 31 12:07 configs
auto-1  | lrwxrwxrwx 1 root root   16 Jun  1 08:59 embeddings -> /data/embeddings
auto-1  | -rw-r--r-- 1 root root  167 May 31 12:07 environment-wsl2.yaml
auto-1  | lrwxrwxrwx 1 root root   28 Jun  1 08:59 extensions -> /data/config/auto/extensions
auto-1  | drwxr-xr-x 1 root root 4.0K May 31 12:07 extensions-builtin
auto-1  | drwxr-xr-x 2 root root 4.0K May 31 12:07 html
auto-1  | drwxr-xr-x 2 root root 4.0K May 31 12:09 interrogate
auto-1  | drwxr-xr-x 2 root root 4.0K May 31 12:07 javascript
auto-1  | -rw-r--r-- 1 root root 1.3K May 31 12:07 launch.py
auto-1  | drwxr-xr-x 2 root root 4.0K May 31 12:07 localizations
auto-1  | lrwxrwxrwx 1 root root   12 Jun  1 08:59 models -> /data/models
auto-1  | drwxr-xr-x 1 root root 4.0K May 31 12:12 modules
auto-1  | -rw-r--r-- 1 root root  185 May 31 12:07 package.json
auto-1  | -rw-r--r-- 1 root root  841 May 31 12:07 pyproject.toml
auto-1  | drwxr-xr-x 1 root root 4.0K May 31 12:12 repositories
auto-1  | -rw-r--r-- 1 root root   49 May 31 12:07 requirements-test.txt
auto-1  | -rw-r--r-- 1 root root  371 May 31 12:07 requirements.txt
auto-1  | -rw-r--r-- 1 root root   42 May 31 12:07 requirements_npu.txt
auto-1  | -rw-r--r-- 1 root root  645 May 31 12:07 requirements_versions.txt
auto-1  | -rw-r--r-- 1 root root 411K May 31 12:07 screenshot.png
auto-1  | -rw-r--r-- 1 root root 6.1K May 31 12:07 script.js
auto-1  | drwxr-xr-x 1 root root 4.0K May 31 12:12 scripts
auto-1  | -rw-r--r-- 1 root root  43K May 31 12:07 style.css
auto-1  | lrwxrwxrwx 1 root root   28 Jun  1 08:59 styles.csv -> /data/config/auto/styles.csv
auto-1  | drwxr-xr-x 4 root root 4.0K May 31 12:07 test
auto-1  | drwxr-xr-x 2 root root 4.0K May 31 12:07 textual_inversion_templates
auto-1  | lrwxrwxrwx 1 root root   32 Jun  1 08:59 ui-config.json -> /data/config/auto/ui-config.json
auto-1  | -rw-r--r-- 1 root root  670 May 31 12:07 webui-macos-env.sh
auto-1  | -rw-r--r-- 1 root root   84 May 31 12:07 webui-user.bat
auto-1  | -rw-r--r-- 1 root root 1.4K May 31 12:07 webui-user.sh
auto-1  | -rw-r--r-- 1 root root 2.3K May 31 12:07 webui.bat
auto-1  | -rw-r--r-- 1 root root 5.3K May 31 12:07 webui.py
auto-1  | -rwxr-xr-x 1 root root  11K May 31 12:07 webui.sh
auto-1  | Mounted .cache
auto-1  | Mounted config_states
auto-1  | Mounted .cache
auto-1  | Mounted embeddings
auto-1  | Mounted config.json
auto-1  | Mounted models
auto-1  | Mounted styles.csv
auto-1  | Mounted ui-config.json
auto-1  | Mounted extensions
auto-1  | Installing extension dependencies (if any)
auto-1  | /opt/conda/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
auto-1  |   warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
auto-1  | Loading weights [c6bbc15e32] from /stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt
auto-1  | Running on local URL:  http://0.0.0.0:7860
auto-1  |
auto-1  | To create a public link, set `share=True` in `launch()`.
auto-1  | Startup time: 19.2s (import torch: 8.6s, import gradio: 3.2s, setup paths: 2.3s, import ldm: 0.2s, initialize shared: 0.8s, other imports: 0.9s, list SD models: 0.1s, load scripts: 0.8s, create ui: 1.1s, gradio launch: 0.5s, add APIs: 0.4s).
auto-1  | Creating model from config: /stable-diffusion-webui/configs/v1-inpainting-inference.yaml
auto-1  | /opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:943: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
auto-1  |   warnings.warn(
auto-1  | creating model quickly: OSError
auto-1  | Traceback (most recent call last):
auto-1  |   File "/opt/conda/lib/python3.10/threading.py", line 973, in _bootstrap
auto-1  |     self._bootstrap_inner()
auto-1  |   File "/opt/conda/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
auto-1  |     self.run()
auto-1  |   File "/opt/conda/lib/python3.10/threading.py", line 953, in run
auto-1  |     self._target(*self._args, **self._kwargs)
auto-1  |   File "/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
auto-1  |     shared.sd_model  # noqa: B018
auto-1  |   File "/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
auto-1  |     return modules.sd_models.model_data.get_sd_model()
auto-1  |   File "/stable-diffusion-webui/modules/sd_models.py", line 620, in get_sd_model
auto-1  |     load_model()
auto-1  |   File "/stable-diffusion-webui/modules/sd_models.py", line 723, in load_model
auto-1  |     sd_model = instantiate_from_config(sd_config.model)
auto-1  |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
auto-1  |     return get_obj_from_str(config["target"])(**config.get("params", dict()))
auto-1  |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1650, in __init__
auto-1  |     super().__init__(concat_keys, *args, **kwargs)
auto-1  |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1515, in __init__
auto-1  |     super().__init__(*args, **kwargs)
auto-1  |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
auto-1  |     self.instantiate_cond_stage(cond_stage_config)
auto-1  |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
auto-1  |     model = instantiate_from_config(config)
auto-1  |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
auto-1  |     return get_obj_from_str(config["target"])(**config.get("params", dict()))
auto-1  |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
auto-1  |     self.tokenizer = CLIPTokenizer.from_pretrained(version)
auto-1  |   File "/opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
auto-1  |     raise EnvironmentError(
auto-1  | OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
auto-1  |
auto-1  | Failed to create model quickly; will retry using slow method.
auto-1  | loading stable diffusion model: OSError
auto-1  | Traceback (most recent call last):
auto-1  |   File "/opt/conda/lib/python3.10/threading.py", line 973, in _bootstrap
auto-1  |     self._bootstrap_inner()
auto-1  |   File "/opt/conda/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
auto-1  |     self.run()
auto-1  |   File "/opt/conda/lib/python3.10/threading.py", line 953, in run
auto-1  |     self._target(*self._args, **self._kwargs)
auto-1  |   File "/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
auto-1  |     shared.sd_model  # noqa: B018
auto-1  |   File "/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
auto-1  |     return modules.sd_models.model_data.get_sd_model()
auto-1  |   File "/stable-diffusion-webui/modules/sd_models.py", line 620, in get_sd_model
auto-1  |     load_model()
auto-1  |   File "/stable-diffusion-webui/modules/sd_models.py", line 732, in load_model
auto-1  |     sd_model = instantiate_from_config(sd_config.model)
auto-1  |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
auto-1  |     return get_obj_from_str(config["target"])(**config.get("params", dict()))
auto-1  |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1650, in __init__
auto-1  |     super().__init__(concat_keys, *args, **kwargs)
auto-1  |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1515, in __init__
auto-1  |     super().__init__(*args, **kwargs)
auto-1  |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
auto-1  |     self.instantiate_cond_stage(cond_stage_config)
auto-1  |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
auto-1  |     model = instantiate_from_config(config)
auto-1  |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
auto-1  |     return get_obj_from_str(config["target"])(**config.get("params", dict()))
auto-1  |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
auto-1  |     self.tokenizer = CLIPTokenizer.from_pretrained(version)
auto-1  |   File "/opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
auto-1  |     raise EnvironmentError(
auto-1  | OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
auto-1  |
auto-1  |
auto-1  | Stable diffusion model failed to load
auto-1  | Applying attention optimization: xformers... done.

I have already seen other issues reporting the same error, but their solutions are to download those files directly, and I don't know where to put them.

This is my folder structure.

> tree ./
F:\AI\STABLE-DIFFUSION-WEBUI-DOCKER
├─.devscripts
├─.github
│  ├─ISSUE_TEMPLATE
│  └─workflows
├─.vscode
├─data
│  ├─embeddings
│  ├─config
│  │  └─auto
│  │      ├─scripts
│  │      ├─config_states
│  │      └─extensions
│  ├─models
│  │  ├─Stable-diffusion
│  │  ├─GFPGAN
│  │  ├─RealESRGAN
│  │  ├─LDSR
│  │  ├─VAE
│  │  ├─VAE-approx
│  │  ├─karlo
│  │  ├─hypernetworks
│  │  ├─Codeformer
│  │  └─Lora
│  └─.cache
│      ├─matplotlib
│      └─huggingface
│          └─hub
├─output
└─services
    ├─AUTOMATIC1111
    ├─comfy
    └─download
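For what it's worth, here is a minimal sketch of where the downloaded tokenizer files would presumably need to end up, assuming the standard Hugging Face hub cache layout (`models--<org>--<name>` directories under `hub/`) and this project's `data/.cache/huggingface` bind mount; the exact layout is an assumption on my part:

```python
from pathlib import Path

# Hugging Face caches each repo under <cache>/hub/models--<org>--<name>.
# With this project's bind mount (data/.cache -> container's ~/.cache),
# the CLIP tokenizer files would presumably live under this directory:
repo_id = "openai/clip-vit-large-patch14"
cache_root = Path("data/.cache/huggingface/hub")
repo_dir = cache_root / ("models--" + repo_id.replace("/", "--"))
print(repo_dir)
```

On a machine with working network access, running `CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")` once should populate that cache, which could then be copied into `data/.cache/huggingface` before starting the container (again, this is a guess based on the cache layout, not a confirmed fix).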

Metadata

Assignees: No one assigned
Labels: bug (Something isn't working)
Projects: No projects
Milestone: No milestone
Relationships: None yet
Development: No branches or pull requests