Replies: 2 comments
- Received a reply from Abhishek.
- Need a GPU, or remove quantization.
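As a sketch of that suggestion, the command quoted in the question below could be rerun with the int4 flag dropped, assuming this autotrain version treats --quantization as optional; without bitsandbytes quantization the model loads in full precision, so training a 7B model on CPU will still be very slow:

autotrain llm --train --project-name llm101 --model abhishek/llama-2-7b-hf-small-shards --data-path . --use-peft --lr 2e-4 --train-batch-size 12 --epochs 3 --trainer sft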
- When I run the command below, I get an error. Please help.
autotrain llm --train --project-name llm101 --model abhishek/llama-2-7b-hf-small-shards --data-path . --use-peft --quantization int4 --lr 2e-4 --train-batch-size 12 --epochs 3 --trainer sft
INFO | 2024-02-17 14:57:25 | main.process_input_data:83 - Valid data: None
ERROR | 2024-02-17 14:57:26 | autotrain.trainers.common.wrapper:91 - train has failed due to an exception: Traceback (most recent call last):
File "C:\Users\zau3\AppData\Roaming\Python\Python311\site-packages\autotrain\trainers\common.py", line 88, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zau3\AppData\Roaming\Python\Python311\site-packages\autotrain\trainers\clm_main_.py", line 186, in train
model = AutoModelForCausalLM.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zau3\AppData\Roaming\Python\Python311\site-packages\transformers\models\auto\auto_factory.py", line 566, in from_pretrained
return model_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zau3\AppData\Roaming\Python\Python311\site-packages\transformers\modeling_utils.py", line 3032, in from_pretrained
raise RuntimeError("No GPU found. A GPU is needed for quantization.")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: No GPU found. A GPU is needed for quantization.
ERROR | 2024-02-17 14:57:26 | autotrain.trainers.common.wrapper:92 - No GPU found. A GPU is needed for quantization.
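As a quick diagnostic (not part of autotrain itself), this one-liner reports whether PyTorch can see a CUDA GPU on the machine; if it prints False, bitsandbytes int4 quantization cannot be used, and the quantization flag should be removed as suggested above:

python -c "import torch; print(torch.cuda.is_available())"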