A simple tool to run prompts against files using various LLM models.
- Make sure you have the `llm` command line tool installed
- Clone this repository
- Make the script executable: `chmod +x run-prompt`
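For example, a typical setup might look like this (a sketch; the clone URL and paths are assumptions, so adjust them for your machine):

```bash
# Install the llm CLI (pip shown here; other install methods exist)
pip install llm

# Clone the repository (assumed URL) and make the script executable
git clone https://github.com/The-Focus-AI/prompt-library.git ~/prompt-library
cd ~/prompt-library
chmod +x run-prompt
```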
To enable zsh completion for run-prompt, add this to your `.zshrc` (update the path to where you have this installed):

```zsh
PROMPT_LIBRARY_PATH="/Users/wschenk/prompt-library"
fpath=($PROMPT_LIBRARY_PATH $fpath)
export PATH="$PROMPT_LIBRARY_PATH:$PATH"
autoload -Uz compinit
compinit
```
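After reloading your shell, tab completion should be active; a quick check (what gets suggested depends on your prompt files and the completion script):

```zsh
# Reload the config, then try completing a prompt name
source ~/.zshrc
run-prompt <TAB>   # should suggest available prompt files
```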
Run a prompt against an input file:

```bash
run-prompt <prompt_file> <input_file>
```
You can optionally specify a different model using the `MODEL` environment variable:

```bash
MODEL=claude-3.7-sonnet run-prompt <prompt_file> <input_file>
```

The default model is `claude-3.7-sonnet`.
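Any model known to your `llm` installation should work here. A quick way to see what is available (the example model name below is an assumption; substitute one from your own list):

```bash
# List the models your llm CLI currently knows about
llm models

# Then pick one for a run (hypothetical model name)
MODEL=gpt-4o run-prompt content/summarize README.md
```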
To explore the MCP server with the MCP Inspector:

```bash
npx @modelcontextprotocol/inspector uv run run-prompt mcp
```
- `summarize.md` - Generate 5 different two-sentence summaries to encourage readership
- `key-themes.md` - Extract key themes from the input text
- `linkedin.md` - Format content as an engaging LinkedIn post
- `lint.md` - Assess code quality and suggest improvements
- `git-commit-message.md` - Generate semantic commit messages from code diffs
- `architecture-review.md` - Review architectural patterns and decisions
- `api-documentation.md` - Generate API documentation
- `performance-review.md` - Analyze performance considerations
- `security-review.md` - Review security implications
- `developer-guide.md` - Create developer documentation
Summarize a README file:

```bash
./run-prompt content/summarize README.md
```

Extract key themes from a document:

```bash
./run-prompt content/key-themes document.txt
```

Format content for LinkedIn:

```bash
./run-prompt content/linkedin article.txt
```

Generate a commit message:

```bash
./run-prompt code/git-commit-message.md diff.txt
```

Review code quality:

```bash
./run-prompt code/lint.md source_code.py
```
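The repomix prompts work the same way; for example, to generate a developer guide from a repository snapshot (assuming you have already produced a `repomix-output.txt`):

```bash
./run-prompt code/repomix/developer-guide.md repomix-output.txt
```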
Add new prompt files to the appropriate directory:

- `content/` - For content analysis and formatting prompts
- `code/` - For code-related prompts
- `code/repomix/` - For repository analysis prompts
The prompt file should contain the instructions/prompt that will be sent to the LLM along with the content of your input file.
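For instance, adding a hypothetical new content prompt might look like this (the `tweet.md` file and its wording are made up for illustration):

```bash
# Create a new prompt file in content/ (hypothetical example)
cat > content/tweet.md <<'EOF'
Rewrite the following content as a concise, engaging tweet.
Keep it under 280 characters.
EOF

# Use it like any other prompt
./run-prompt content/tweet article.txt
```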
You can also run a prompt directly, without the script. For example, piping a repomix output through Ollama:

```bash
cat repomix-output.txt | ollama run gemma3:12b "$(cat ~/prompts/code/repomix/developer-guide.md)"
```

Or through the `llm` CLI (install the Ollama plugin first if you want to use local models):

```bash
llm install llm-ollama
```

```bash
MODEL=${MODEL:-claude-3.7-sonnet}
cat repomix-output.txt | \
  llm -m $MODEL \
  "$(cat ~/prompts/code/repomix/developer-guide.md)"
```
This repository also includes a Progressive Web App (PWA) for browsing and copying prompts on mobile devices.
- 📱 Install as a mobile app
- 🔍 Browse all prompts with folder navigation
- 📋 One-click copy to clipboard
- 🌐 Works offline
- 🌓 Auto light/dark mode
The PWA is located in the `pwa/` directory. To deploy it on GitHub Pages:

- Go to Settings → Pages in your GitHub repository
- Select "Deploy from a branch"
- Choose your main branch and the `/pwa` folder as the source
- Save the settings
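If you prefer the command line, the same setting can be applied through the GitHub REST API; a sketch using the `gh` CLI (replace OWNER/REPO with your repository; this assumes Pages is not yet enabled):

```bash
# Enable GitHub Pages serving from the main branch's /pwa folder
gh api -X POST repos/OWNER/REPO/pages \
  -f 'source[branch]=main' \
  -f 'source[path]=/pwa'
```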
After a few minutes, your PWA will be available at:
https://the-focus-ai.github.io/prompt-library/
- Visit the URL on your mobile device
- You'll see an "Add to Home Screen" prompt (or use browser menu)
- Once installed, it works like a native app
- Click refresh to cache all prompts for offline use
For more details, see `pwa/deployment.md`.
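To preview the PWA locally before deploying, any static file server will do; for example, with Python 3:

```bash
# Serve the pwa/ directory locally (assumes Python 3 is installed)
cd pwa
python3 -m http.server 8000
# then open http://localhost:8000 in your browser
```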