Skip to content
GitHub Universe 2025
Explore 100+ talks, demos, and workshops at Universe 2025. Choose your favorites.
#

prompt-injection-llm-security

Here are 10 public repositories matching this topic...

PromptMe is an educational project that showcases security vulnerabilities in large language models (LLMs) and their web integrations. It includes 10 hands-on challenges inspired by the OWASP LLM Top 10, demonstrating how these vulnerabilities can be discovered and exploited in real-world scenarios.

  • Updated Jun 29, 2025
  • Python

Proof of Concept (PoC) demonstrating prompt injection vulnerability in AI code assistants (like Copilot) using hidden Unicode characters within instruction files (copilot-instructions.md). Highlights risks of using untrusted instruction templates. For educational/research purposes only.

  • Updated May 10, 2025

Improve this page

Add a description, image, and links to the prompt-injection-llm-security topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the prompt-injection-llm-security topic, visit your repo's landing page and select "manage topics."

Learn more