We take security seriously. If you discover a security vulnerability in the Prompt Library, please report it responsibly:
- Email: Send details to i@xi-xu.me or create a private security advisory on GitHub
- Include:
- Description of the vulnerability
- Steps to reproduce the issue
- Potential impact assessment
- Suggested fix (if available)
 
- Acknowledgment: We'll acknowledge receipt within 48 hours
- Assessment: Initial assessment within 5 business days
- Disclosure: Coordinated disclosure after fix is available
While this is primarily a documentation repository, please be aware that:
- Prompt Injection: Some prompts may be vulnerable to injection attacks when used with AI systems
- Sensitive Data: Avoid including real sensitive data in examples
- Malicious Use: Consider how prompts might be misused and include appropriate warnings
- Always validate and sanitize inputs when using prompts in production
- Be cautious with prompts that handle user-generated content
- Review prompt outputs before using in sensitive contexts
- Follow your AI provider's security guidelines and terms of service
When contributing to or using the Prompt Library:
- No Secrets: Never include API keys, passwords, or sensitive information in prompts or examples
- Sanitize Examples: Use placeholder data or properly anonymized examples
- Consider Context: Think about how prompts might be misused or exploited
- Report Issues: If you notice potential security concerns in existing prompts, please report them
Thank you for helping keep the Prompt Library secure for everyone!