Closed
Description
I tried the minimal example from https://huggingface.co/Snowflake/snowflake-arctic-instruct and it did not work. Can you help me fix it?

I'm using the latest transformers release commit.

snowflake-arctic-instruct.py

```python
import os
# enable hf_transfer for faster ckpt download
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from deepspeed.linear.config import QuantizationConfig

tokenizer = AutoTokenizer.from_pretrained(
    "Snowflake/snowflake-arctic-instruct",
    trust_remote_code=True
)

quant_config = QuantizationConfig(q_bits=8)

model = AutoModelForCausalLM.from_pretrained(
    "Snowflake/snowflake-arctic-instruct",
    trust_remote_code=True,
    low_cpu_mem_usage=True,
    device_map="auto",
    ds_quantization_config=quant_config,
    max_memory={i: "150GiB" for i in range(8)},
    torch_dtype=torch.bfloat16,
)

content = "5x + 35 = 7x - 60 + 10. Solve for x"
messages = [{"role": "user", "content": content}]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")
outputs = model.generate(input_ids=input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
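For context, the `max_memory` argument in the snippet just builds a per-device cap that `device_map="auto"` respects when sharding the checkpoint. A minimal sketch of what that dict expands to (assuming an 8-GPU node, as `range(8)` implies):

```python
# max_memory maps CUDA device index -> memory-cap string understood by
# accelerate's auto device mapping; the model is sharded so that no single
# GPU is assigned more than this amount of weights.
num_gpus = 8  # assumption: 8 GPUs, matching range(8) in the reproducer
max_memory = {i: "150GiB" for i in range(num_gpus)}
print(max_memory[0], len(max_memory))
```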
requirements.txt

```text
annotated-types==0.6.0
certifi==2024.2.2
charset-normalizer==3.3.2
deepspeed==0.14.2
filelock==3.13.4
fsspec==2024.3.1
hf_transfer==0.1.6
hjson==3.1.0
huggingface-hub==0.22.2
idna==3.7
Jinja2==3.1.3
MarkupSafe==2.1.5
mpmath==1.3.0
networkx==3.3
ninja==1.11.1.1
numpy==1.26.4
packaging==24.0
psutil==5.9.8
py-cpuinfo==9.0.0
pydantic==2.7.1
pydantic_core==2.18.2
pynvml==11.5.0
PyYAML==6.0.1
regex==2024.4.28
requests==2.31.0
safetensors==0.4.3
sympy==1.12
tokenizers==0.19.1
torch==2.3.0
tqdm==4.66.2
transformers @ git+https://github.com/huggingface/transformers@9fe3f585bb4ea29f209dc705d269fbe292e1128f
typing_extensions==4.11.0
urllib3==2.2.1
```