fujita-h/dify-plugin-azure-ai-content-safety


Azure AI Content Safety


This plugin provides a set of tools to enhance the safety of generative AI applications with advanced guardrails for responsible AI. Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services.

Tools provided by this plugin

Text Moderation

Scans text for sexual content, violence, hate, and self-harm, reporting multiple severity levels for each category.

Learn more about the text moderation categories here.
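As an illustration of what a text moderation call involves, the sketch below builds a request body for the Content Safety `text:analyze` REST operation and applies a simple severity threshold to the response. The request/response field names and the threshold are assumptions based on the public Azure API shape, not values mandated by this plugin.

```python
def build_text_request(text: str) -> dict:
    """Build the JSON body for a text:analyze call (assumed shape)."""
    return {
        "text": text,
        # The default output reports four severity levels: 0, 2, 4, 6.
        "outputType": "FourSeverityLevels",
    }

def is_blocked(categories_analysis: list, threshold: int = 4) -> bool:
    """Block when any category's severity meets the threshold."""
    return any(item.get("severity", 0) >= threshold for item in categories_analysis)

# Assumed response shape and a decision over it:
response = {"categoriesAnalysis": [
    {"category": "Hate", "severity": 0},
    {"category": "Violence", "severity": 4},
]}
blocked = is_blocked(response["categoriesAnalysis"])
```

A workflow can tune `threshold` per category; a stricter guardrail might block at severity 2, a permissive one only at 6.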

Image Moderation

Scans images for sexual content, violence, hate, and self-harm, reporting multiple severity levels for each category.

Learn more about the image moderation categories here.
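Image moderation works the same way, except the image is sent base64-encoded in the request body. The sketch below shows that encoding step and a small helper over the per-category severities; the field names follow the assumed public `image:analyze` API shape.

```python
import base64

def build_image_request(image_bytes: bytes) -> dict:
    """Build the JSON body for an image:analyze call (assumed shape):
    the raw image bytes travel base64-encoded in the request."""
    return {"image": {"content": base64.b64encode(image_bytes).decode("ascii")}}

def max_severity(categories_analysis: list) -> int:
    """Highest severity across all analyzed categories."""
    return max((item.get("severity", 0) for item in categories_analysis), default=0)

# Placeholder bytes stand in for real image data here.
payload = build_image_request(b"\x89PNG placeholder")
```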

Prompt Shields

Prompt Shields analyzes LLM input and detects adversarial user input attacks.

Learn more about Prompt Shields here.
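A Prompt Shields check takes the user prompt, and optionally any grounding documents, and reports whether an injection attack was detected in either. The sketch below builds such a request and interprets the response; the `userPrompt`/`documents` and `attackDetected` field names are assumptions based on the public `shieldPrompt` API shape.

```python
def build_shield_request(user_prompt: str, documents=()) -> dict:
    """Body for a shieldPrompt call (assumed shape): the user prompt plus
    any documents to scan for injected adversarial instructions."""
    return {"userPrompt": user_prompt, "documents": list(documents)}

def attack_detected(response: dict) -> bool:
    """True if either the prompt or any supplied document triggered detection."""
    if response.get("userPromptAnalysis", {}).get("attackDetected"):
        return True
    return any(d.get("attackDetected") for d in response.get("documentsAnalysis", []))
```

Checking documents as well as the prompt matters for RAG-style apps, where an attack can arrive hidden inside retrieved content rather than typed by the user.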

Configuration and Usage

See Plugin README for configuration and usage details.

Notes

Supported languages

See the official documentation for the supported languages.

Contributing

This plugin is open-source and contributions are welcome. Please visit the GitHub repository to contribute.
