Audit a big offline model like Llama 3 70B #2145
eivorwolfkissed started this conversation in General
Replies: 1 comment

I am able to run garak on a smaller model. How would I do it for a bigger model, say Llama 3 70B, that requires multiple GPUs to run inference?
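
One practical way to audit a model that size is to not load it inside garak at all: serve it with a multi-GPU inference server and point garak at the server's OpenAI-compatible endpoint. Below is a minimal sketch assuming vLLM with four GPUs, the meta-llama/Meta-Llama-3-70B-Instruct weights available locally, and a garak build that exposes an OpenAI-compatible generator (referred to here as openai.OpenAICompatible). The generator name, the environment variables, the port, and the probe choice are assumptions to verify against `garak --help` and the garak generator docs, not details confirmed in this thread.

```python
"""
Sketch: audit a large local model with garak by serving it across several
GPUs with vLLM and pointing garak at the resulting OpenAI-compatible
endpoint. Model id, GPU count, port, probe selection, and the garak
generator name are assumptions -- verify them against your installed
versions of vLLM and garak.
"""
import os
import subprocess
import time

import requests

MODEL = "meta-llama/Meta-Llama-3-70B-Instruct"  # assumed model id / local path
TP_SIZE = "4"                                   # shard the weights over 4 GPUs
BASE_URL = "http://localhost:8000/v1"

# 1. Start vLLM's OpenAI-compatible server; --tensor-parallel-size splits
#    the 70B weights across multiple GPUs so inference can start at all.
server = subprocess.Popen([
    "python", "-m", "vllm.entrypoints.openai.api_server",
    "--model", MODEL,
    "--tensor-parallel-size", TP_SIZE,
    "--port", "8000",
])

# 2. Wait until the server answers; loading 70B weights can take minutes.
for _ in range(120):
    try:
        if requests.get(BASE_URL + "/models", timeout=5).ok:
            break
    except requests.ConnectionError:
        pass
    time.sleep(10)

# 3. Run garak against the local endpoint. "openai.OpenAICompatible" and the
#    use of OPENAI_API_KEY / OPENAI_BASE_URL to reach a local server are
#    assumptions about the installed garak version -- check its generator
#    docs; it may instead want the endpoint in a generator options file.
env = dict(os.environ,
           OPENAI_API_KEY="not-needed-for-local-server",
           OPENAI_BASE_URL=BASE_URL)
try:
    subprocess.run([
        "python", "-m", "garak",
        "--model_type", "openai.OpenAICompatible",
        "--model_name", MODEL,
        "--probes", "promptinject",
    ], env=env, check=True)
finally:
    server.terminate()
```

The design choice here is that all of the multi-GPU work (tensor parallelism) stays in the serving layer, so garak only speaks HTTP, exactly as it would to a hosted model. If you would rather have garak load the weights in-process, its huggingface generator goes through transformers/accelerate, where multi-GPU sharding is usually done with device_map="auto"; whether and how a particular garak release exposes that option is version-dependent, so check the generator documentation before relying on it.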

Reply:
no longer required