A comprehensive prompt library with multiple interfaces: CLI tool, macOS menubar app, MCP server, and PWA. Easily access, copy, and use AI prompts across different platforms and AI tools.
This prompt library offers four different ways to access your prompts:
- macOS Menubar App - Quick access with click-to-copy functionality
- CLI Tool - Command-line interface for automation and scripting
- MCP Server - Standardized interface for AI assistants and tools
- PWA (Progressive Web App) - Cross-platform web app for mobile devices
Choose the interface that best fits your workflow!
- Make sure you have the `llm` command line tool installed
- Clone this repository
- Make the script executable: `chmod +x run-prompt`
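A minimal end-to-end setup might look like this (a sketch: it assumes you install `llm` with pip, and the repository URL is inferred from the PWA address later in this README):

```bash
# Install Simon Willison's llm CLI (pipx or Homebrew also work)
pip install llm

# Clone the repository and make the wrapper executable
git clone https://github.com/the-focus-ai/prompt-library.git
cd prompt-library
chmod +x run-prompt
```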
To enable zsh completion for run-prompt, add the following to your .zshrc (update PROMPT_LIBRARY_PATH to point at your checkout):
# Point at your local checkout of this repository
PROMPT_LIBRARY_PATH="/Users/wschenk/prompt-library"
# Make the completion functions and the run-prompt script discoverable
fpath=($PROMPT_LIBRARY_PATH $fpath)
export PATH="$PROMPT_LIBRARY_PATH:$PATH"
# Initialize zsh completion
autoload -Uz compinit
compinit
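After restarting your shell (or running `source ~/.zshrc`), tab completion should suggest prompt names (illustrative):

```bash
run-prompt co<TAB>   # offers completions such as code/... and content/...
```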
The macOS menubar app provides quick access to all your prompts with a simple click-to-copy interface.
Features:
- Brain icon in the macOS menubar
- Hierarchical menu organized by categories
- One-click copy to clipboard with notification
- Automatic file monitoring and updates
- Clean, formatted prompt names
Quick Start:
cd menubar
# Run the menubar app in the background, watching your prompt directory
swift pdfbar.swift --dir ~/prompt-library &
Usage:
- Click the brain icon (🧠) in your menubar
- Navigate through categories (Code, Content, Planning, etc.)
- Click any prompt name to copy its content to clipboard
- You'll see a notification confirming the copy
- Paste anywhere with Cmd+V
For detailed installation instructions and troubleshooting, see menubar/README.md.
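If you'd rather not launch the app through the Swift interpreter each time, you can compile it once. This is a sketch that assumes pdfbar.swift is self-contained with no extra build flags:

```bash
cd menubar
# Compile to a standalone binary, then run it in the background
swiftc pdfbar.swift -o pdfbar
./pdfbar --dir ~/prompt-library &
```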
Basic CLI usage:
run-prompt <prompt_file> <input_file>
You can optionally specify a different model using the MODEL environment variable:
MODEL=claude-3.7-sonnet run-prompt <prompt_file> <input_file>
The default model is claude-3.7-sonnet.
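For example (the model names here are illustrative and must be available to your `llm` installation):

```bash
# One-off override for a single run
MODEL=gpt-4o run-prompt content/summarize README.md

# Or set it for the rest of the shell session
export MODEL=gpt-4o
run-prompt code/lint.md src/main.py
```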
The Model Context Protocol (MCP) server provides a standardized interface for AI assistants to access your prompt library programmatically.
Features:
- List Prompts: Discover all available prompts with descriptions
- Load Prompts: Retrieve the full content of specific prompts
- Save Prompts: Create or update prompts with metadata
- Metadata Support: Add descriptions and usage information
- Standardized Interface: Compatible with MCP-enabled AI tools
Quick Start:
# Start MCP server in STDIO mode
./run-prompt mcp
# Or inspect with MCP Inspector
npx @modelcontextprotocol/inspector uv run run-prompt mcp
Available MCP Tools:
- mcp_prompt_library_list_prompts - List all available prompts
- mcp_prompt_library_load_prompt - Load a specific prompt's content
- mcp_prompt_library_save_prompt - Save a new prompt or update an existing one
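Under the hood these tools are exposed over MCP's JSON-RPC/STDIO transport. A rough sketch of the wire traffic (the protocol version and client info are illustrative; in practice a client like MCP Inspector handles this handshake for you):

```bash
printf '%s\n' \
  '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"probe","version":"0.1"}}}' \
  '{"jsonrpc":"2.0","method":"notifications/initialized"}' \
  '{"jsonrpc":"2.0","id":2,"method":"tools/list"}' \
  | ./run-prompt mcp
```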
Usage with AI Assistants: Configure your AI assistant (Claude Desktop, etc.) to use this MCP server by adding to your MCP configuration file:
{
"mcpServers": {
"prompt-library": {
"command": "uv",
"args": ["run", "/path/to/prompt-library/run-prompt", "mcp"],
"cwd": "/path/to/prompt-library"
}
}
}
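If you're using Claude Desktop on macOS, this configuration typically lives at ~/Library/Application Support/Claude/claude_desktop_config.json; restart the app after editing it.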
Documentation:
- MCP-SERVER.md - Complete MCP server user guide
- guides/mcp-implementation-guide.md - MCP development guide
Available prompts include:
- summarize.md - Generate 5 different two-sentence summaries to encourage readership
- key-themes.md - Extract key themes from the input text
- linkedin.md - Format content as an engaging LinkedIn post
- lint.md - Assess code quality and suggest improvements
- git-commit-message.md - Generate semantic commit messages from code diffs
- architecture-review.md - Review architectural patterns and decisions
- api-documentation.md - Generate API documentation
- performance-review.md - Analyze performance considerations
- security-review.md - Review security implications
- developer-guide.md - Create developer documentation
Summarize a README file:
./run-prompt content/summarize README.md
Extract key themes from a document:
./run-prompt content/key-themes document.txt
Format content for LinkedIn:
./run-prompt content/linkedin article.txt
Generate a commit message:
./run-prompt code/git-commit-message.md diff.txt
Review code quality:
./run-prompt code/lint.md source_code.py
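A typical end-to-end flow for the commit-message prompt might look like this (a sketch; the file names are arbitrary, and it assumes run-prompt writes its result to stdout as in the examples above):

```bash
# Capture the staged diff, generate a message, and commit with it
git diff --staged > diff.txt
./run-prompt code/git-commit-message.md diff.txt > msg.txt
git commit -F msg.txt
```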
Add new prompt files to the appropriate directory:
- content/ - For content analysis and formatting prompts
- code/ - For code-related prompts
- code/repomix/ - For repository analysis prompts
The prompt file should contain the instructions/prompt that will be sent to the LLM along with the content of your input file.
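For example, to add a hypothetical new content prompt (the file name and wording here are illustrative):

```bash
# Create a new prompt file under content/
cat > content/tweet.md <<'EOF'
Summarize the following text as a single tweet (280 characters or fewer),
preserving the key insight and a neutral tone.
EOF

# Then use it like any other prompt
./run-prompt content/tweet.md article.txt
```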
You can also run a prompt directly, without the run-prompt wrapper. With ollama:
cat repomix-output.txt | ollama run gemma3:12b "$(cat ~/prompts/code/repomix/developer-guide.md)"
Or with the llm CLI (the llm-ollama plugin is only needed for local ollama models):
llm install llm-ollama
MODEL=${MODEL:-claude-3.7-sonnet}
cat repomix-output.txt | \
  llm -m "$MODEL" \
  "$(cat ~/prompts/code/repomix/developer-guide.md)"
This repository also includes a Progressive Web App (PWA) for browsing and copying prompts on mobile devices.
- Install as a mobile app
- Browse all prompts with folder navigation
- One-click copy to clipboard
- Works offline
- Auto light/dark mode
The PWA is located in the pwa/ directory. To deploy it on GitHub Pages:
- Go to Settings → Pages in your GitHub repository
- Select "Deploy from a branch"
- Choose your main branch and the /pwa folder as the source
- Save the settings
After a few minutes, your PWA will be available at:
https://the-focus-ai.github.io/prompt-library/
- Visit the URL on your mobile device
- You'll see an "Add to Home Screen" prompt (or use browser menu)
- Once installed, it works like a native app
- Tap refresh to cache all prompts for offline use
For more details, see pwa/deployment.md.
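To preview the PWA locally before deploying, any static file server will do; for example:

```bash
cd pwa
# Serve the app at http://localhost:8080 (service workers work on localhost)
python3 -m http.server 8080
```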