Merged
2 changes: 1 addition & 1 deletion examples/models/llama/evaluate/eager_eval.py
@@ -31,7 +31,7 @@ def __init__(
         use_kv_cache: bool = False,
     ):
         device = "cuda" if torch.cuda.is_available() else "cpu"
-        super().__init__(device=device)
+        super().__init__(device=device, pretrained="gpt2")
Contributor:
Okay, this hack keeps the newer version happy by giving it a valid HF model_repo, even though that repo won't be used for eval at all. Maybe add a comment explaining this?

self._model = model
self._tokenizer = tokenizer
self._device = torch.device(device)
2 changes: 1 addition & 1 deletion examples/models/llama/install_requirements.sh
@@ -15,7 +15,7 @@ pip install --no-use-pep517 "git+https://github.com/pytorch/ao.git@${TORCHAO_VER
 
 # Install lm-eval for Model Evaluation with lm-evalution-harness
 # Install tiktoken for tokenizer
-pip install lm_eval==0.4.2
+pip install lm_eval==0.4.5
 pip install tiktoken blobfile
 
 # Call the install helper for further setup