Hosting validator models doc #1008

Conversation
@@ -0,0 +1,210 @@
# Hosting Validator Models

Validation using machine learning models is a highly effective way of detecting LLM hallucinations that are not easily detected, or not detectable at all, using traditional coding techniques. We’ve selected and tuned some ML models to run different validations. These models are wrapped within model validators, which makes the access pattern straightforward: you can use model-based validators the same way as any other [validator](https://www.guardrailsai.com/docs/concepts/validators).
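For instance, a model-backed check such as toxic language detection plugs into a guard the same way as any rule-based validator. Below is a minimal sketch, assuming the `ToxicLanguage` validator has been installed from the Guardrails Hub; the exact parameters shown are illustrative.

```python
# Assumes: guardrails hub install hub://guardrails/toxic_language
from guardrails import Guard
from guardrails.hub import ToxicLanguage

# A model-backed validator is attached like any other validator.
guard = Guard().use(ToxicLanguage(threshold=0.5, on_fail="exception"))

# Validation runs the underlying ML model behind the same interface.
guard.validate("The generated text to check goes here.")
```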
nit: links to https://www.guardrailsai.com/docs should be relative rather than hard-coded to this domain, so local and staging can mostly work
The standard inference request can be found [here](https://github.com/guardrails-ai/guardrails/blob/main/guardrails/validator_base.py#L258).
The standard input definition looks like this:
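For illustration, a request body for the toxicity check might look like the sketch below (shown as a Python dict; the field names follow the description that comes next, and the values are placeholders):

```python
# Illustrative inference input for a toxicity check (values are placeholders).
inference_input = {
    "text": "Text to screen for toxic language.",  # the text to check
    "threshold": 0.8,  # confidence threshold for flagging toxicity
}
```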
this is an example, not the definition
Here, `text` contains the text we want to check for toxic language and `threshold` contains the confidence threshold for determining toxicity. If the model predicts a toxicity score higher than the threshold, the text is considered toxic.
The standard output definition looks like this:
same
# Option 2 - Install only required dependencies (Validator Specific - From README)
pip install detoxify torch
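For context, a bare-bones local inference call with these dependencies might look like the sketch below. This illustrates the detoxify library directly rather than the validator's own code, and the `toxicity` score key is taken from detoxify's documented output.

```python
from detoxify import Detoxify

# Load a pretrained detoxify model (weights are downloaded on first use).
model = Detoxify("original")

# predict() returns a dict mapping labels (e.g. "toxicity") to probabilities.
scores = model.predict("Text to screen for toxic language.")

# Apply the same threshold logic described above.
threshold = 0.8
print(scores["toxicity"] > threshold)
```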
remove this option; mention it as part of #4 here: https://github.com/guardrails-ai/guardrails/pull/1008/files#r1717344125
This PR adds a doc explaining how to host validator models.