How to ask a proper question to an AutoTrained LLM? #485
Unanswered · charles-123456 asked this question in Q&A
Hi,
If I follow the sample below, the model does not answer any question:
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned checkpoint and its tokenizer
model_path = "catai_bert-base-uncased_finetuning"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="cuda",
    torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Expected model response: "Hello! How can I assist you today?"
print(response)
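
Since generate() defaults to a budget of about 20 tokens when the checkpoint ships no generation config, I also tried passing explicit generation arguments. A minimal sketch of that variant, continuing from the snippet above; the max_new_tokens, temperature, and pad_token_id values are my own guesses, not from any official sample:

output_ids = model.generate(
    input_ids.to('cuda'),
    max_new_tokens=256,                    # give the model room to answer; the default budget is tiny
    do_sample=True,                        # sample instead of greedy decoding
    temperature=0.7,                       # arbitrary choice on my part
    pad_token_id=tokenizer.eos_token_id,   # silence the missing-pad-token warning, if an EOS token is set
)
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
print(response)

The slice output_ids[0][input_ids.shape[1]:] drops the prompt tokens so that only the newly generated text is decoded.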