Commit a5ed593

Merge pull request #1157 from jayercule/whyuse-1106
Linking to related pages
2 parents e4f4d88 + caa1667 commit a5ed593

File tree

1 file changed: +13 −7 lines changed

docs/getting_started/why_use_guardrails.md

Lines changed: 13 additions & 7 deletions
@@ -5,7 +5,8 @@ Guardrails AI is a trusted framework for developing Generative AI applications,
 While users may find various reasons to integrate Guardrails AI into their projects, we believe its core strengths lie in simplifying LLM response validation, enhancing reusability, and providing robust operational features. These benefits can significantly reduce development time and improve the consistency of AI applications.
 
 
-## A Standard for LLM Response Validation
+## [A Standard for LLM Response Validation](/docs/concepts/validators)
+
 Guardrails AI provides a framework for creating reusable validators to check LLM outputs. This approach reduces code duplication and improves maintainability by allowing developers to create validators that can be integrated into multiple LLM calls. Using this approach, we're able to uplevel performance, LLM feature compatibility, and LLM app reliability.
 
 Here's an example of validation with and without Guardrails AI:
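The example itself falls outside this hunk. As a hedged stand-in, here is a minimal sketch of the "with Guardrails" pattern, assuming the hub's RegexMatch validator has been installed (`guardrails hub install hub://guardrails/regex_match`) and an OpenAI key is configured; names and parameters follow the 0.5.x-era API and may differ in your version:

```python
from guardrails import Guard
from guardrails.hub import RegexMatch  # available after the hub install above

# A reusable validator: the same check can be attached to any number of guards.
guard = Guard().use(RegexMatch(regex=r"^[A-Z].*", on_fail="exception"))

# The guard wraps the LLM call (routed through LiteLLM) and validates the response.
result = guard(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Name one U.S. state."}],
)
print(result.validated_output)
```

Without Guardrails, the equivalent logic is typically an ad-hoc check plus a hand-rolled retry loop around the raw client call, duplicated at every call site.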
@@ -46,10 +47,12 @@ Guardrails AI implements automatic retries and exponential backoff for common LL
 Providing a comprehensive set of tools for working with LLMs streamlines the development process and promotes the creation of more robust and reliable AI applications.
 
 
-## Streaming
+## [Streaming](/docs/concepts/streaming)
+
 Guardrails AI supports [streaming validation](/docs/how_to_guides/enable_streaming), and it's the only library to our knowledge that can *fix LLM responses in real-time*. This feature is particularly useful for applications that require immediate feedback or correction of LLM outputs, like chat bots.
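A minimal streaming sketch under the same 0.5.x-era API assumption: passing `stream=True` makes the guard yield validated chunks as they arrive rather than a single final outcome.

```python
from guardrails import Guard

guard = Guard()  # attach validators with .use(...) as needed

# Each chunk is a partial validation outcome; checks (and fixes) happen mid-stream.
for chunk in guard(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Tell me a two-sentence story."}],
    stream=True,
):
    print(chunk.validated_output or "", end="")
```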
 
-## The Biggest LLM Validation Library
+## [The Biggest LLM Validation Library](/docs/concepts/hub)
+
 [Guardrails Hub](https://hub.guardrailsai.com) is our centralized location for uploading validators that we and members of our community make available for other developers and companies.
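For illustration, a hedged sketch of composing two hub validators (CompetitorCheck and ToxicLanguage, each installed via `guardrails hub install hub://guardrails/...`); the parameters shown are assumptions based on the hub listings:

```python
from guardrails import Guard
from guardrails.hub import CompetitorCheck, ToxicLanguage  # installed from the hub

# use_many chains validators; each declares its own on_fail behavior.
guard = Guard().use_many(
    CompetitorCheck(["Acme Corp"], on_fail="fix"),
    ToxicLanguage(on_fail="filter"),
)
```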
 
 Validators are written using a few different methods:
@@ -62,19 +65,22 @@ Some of these validators require additional infrastructure, and Guardrails provi
 The Guardrails Hub is open for submissions, and we encourage you to contribute your own validators to help the community.
 
 
-## Supports All LLMs
+## [Supports All LLMs](/docs/how_to_guides/using_llms)
+
 Guardrails AI supports many major LLMs directly, as well as a host of other LLMs via our integrations with LangChain and Hugging Face. This means that you can use the same validators across multiple LLMs, making it easy to swap out LLMs based on performance and quality of responses.
 
 Supported models can be found in our [LiteLLM partner doc](https://docs.litellm.ai/docs/providers).
 
 Don't see your LLM? You can always write a thin wrapper using the [instructions in our docs](/docs/how_to_guides/using_llms#custom-llm-wrappers).
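As a hedged sketch of that thin-wrapper pattern (the wrapper name and body here are hypothetical; the linked guide is authoritative for the exact signature your version expects): any callable that accepts the messages plus keyword arguments and returns the response text can stand in for a supported LLM.

```python
from guardrails import Guard

# Hypothetical wrapper around an unsupported model's SDK or HTTP API.
def my_llm_api(messages=None, **kwargs) -> str:
    # ...call your model here and return its text response...
    return "stubbed response"

guard = Guard()
result = guard(my_llm_api, messages=[{"role": "user", "content": "Hi"}])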
 
-## Monitoring
+## [Monitoring](/docs/concepts/telemetry)
+
 Guardrails AI automatically keeps a log of all LLM calls and steps taken during processing, which you can access programmatically via a guard’s history. Additionally, Guardrails AI [supports OpenTelemetry for capturing metrics](/docs/concepts/telemetry), enabling easy integration with Grafana, Arize AI, iudex, OpenInference, and all major Application Performance Monitoring (APM) services.
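A small sketch of reading that history programmatically; the attribute names below match the 0.5.x-era call log and should be checked against your installed version:

```python
from guardrails import Guard

guard = Guard()
guard(model="gpt-4o-mini", messages=[{"role": "user", "content": "Hello"}])

last = guard.history.last        # most recent call handled by this guard
print(last.status)               # overall validation status of the call
print(last.tokens_consumed)      # token usage, when the LLM reports it
```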
 
-## Structured Data
+## [Structured Data](/docs/how_to_guides/generate_structured_data)
 Guardrails AI excels at [validating structured output](/docs/how_to_guides/generate_structured_data), returning data through a JSON-formatted response or generating synthetic structured data. Used in conjunction with Pydantic, you can define reusable models in Guardrails AI for verifying structured responses that you can then reuse across apps and teams.
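A hedged sketch of the Pydantic flow (model name and fields are invented for illustration; `Guard.for_pydantic` is the 0.5.x-era constructor, `Guard.from_pydantic` in older releases):

```python
from pydantic import BaseModel, Field
from guardrails import Guard

# Reusable Pydantic model describing the JSON we expect back.
class Patient(BaseModel):
    name: str = Field(description="Patient's full name")
    age: int = Field(description="Age in years")

guard = Guard.for_pydantic(Patient)

result = guard(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Generate a synthetic patient record as JSON."}],
)
print(result.validated_output)  # dict matching the Patient schema
```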
 
 
-## Used Widely in the Open-Source Community
+## [Used Widely in the Open-Source Community](/docs/getting_started/contributing)
+
 We’re honored and humbled that open-source projects that support AI application development are choosing to integrate Guardrails AI. Supporting guards provides open-source projects an easy way to ensure they’re processing the highest-quality LLM output possible.
