
Clean UI for LLM development workflows with prompt versioning and model selection. Built for engineers, not hype. Streamlined prompt → model → tag → export workflow. Currently supports OpenAI, Claude, and Ollama.

A developer-first UI for testing, tagging, and exporting LLM prompts, with built-in support for OpenAI, Claude, and Ollama.



About

LLM Prompt Debugger is a playground for evaluating and labeling LLM outputs.

Features:

  • Prompt input + response viewing
  • Model selection (OpenAI, Claude, Ollama)
  • Tagging UI for prompt categorization
  • JSON + Markdown export support
  • Hotkey: Cmd+Enter or Ctrl+Enter to run
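The run hotkey above can be detected with a small keydown check. A minimal sketch (the event shape and function name are illustrative, not taken from the app's source):

```typescript
// Minimal event shape so the check stays framework-agnostic;
// a browser KeyboardEvent satisfies it directly.
interface KeyEventLike {
  metaKey: boolean; // Cmd on macOS
  ctrlKey: boolean; // Ctrl on Windows/Linux
  key: string;
}

// True when the event is Cmd+Enter or Ctrl+Enter, the "run prompt"
// shortcut listed above.
function isRunShortcut(e: KeyEventLike): boolean {
  return e.key === "Enter" && (e.metaKey || e.ctrlKey);
}
```

In a React component this would typically be called from an `onKeyDown` handler on the prompt textarea.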

Getting Started

ℹ️ Requires Node.js 18+ and pnpm

If you don’t have pnpm installed:

npm install -g pnpm

Then clone and run the project locally:

git clone https://github.com/Cre4T3Tiv3/llm-prompt-debugger.git
cd llm-prompt-debugger
pnpm install
pnpm dev

Visit: http://localhost:3000


Lockfile Strategy

This project uses a pnpm-lock.yaml file to ensure deterministic installs across contributors and CI environments.

  • Use pnpm to install dependencies and preserve the lockfile
  • If you prefer npm or yarn, delete pnpm-lock.yaml before installing (note that this forgoes deterministic installs)
  • Officially supported: pnpm (fast, efficient, and CI-friendly)

Tagging System

Apply semantic and stylistic tags to each prompt-response pair.

Built-in tags:

  • code, debug, refactor, summarization, technical, marketing, LLM, simulation
  • tone:professional, tone:casual, tone:funny, tone:neutral

Custom tags are supported via the tag input field.
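A tagged prompt-response pair might be modeled as below. This is a hypothetical sketch: the interface and the `addTag` helper are illustrative, not the app's actual data model.

```typescript
// Hypothetical record for one prompt-response pair; field names
// are assumptions, not taken from the app's source.
interface PromptRecord {
  prompt: string;
  response: string;
  model: string;
  tags: string[];
}

// Add a tag, normalizing case/whitespace and ignoring duplicates,
// so " Code " and "code" collapse to a single tag.
function addTag(record: PromptRecord, tag: string): PromptRecord {
  const normalized = tag.trim().toLowerCase();
  if (normalized === "" || record.tags.includes(normalized)) return record;
  return { ...record, tags: [...record.tags, normalized] };
}
```

Normalizing at insertion time keeps built-in tags like tone:casual and user-typed variants from diverging in exports.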


Exporting

Export history to:

  • JSON for programmatic analysis
  • Markdown for docs or knowledge sharing

ℹ️ Markdown output is grouped by model and time-stamped
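The Markdown grouping described above can be sketched as follows; the entry shape, function name, and heading layout are guesses at the export format, not the app's actual implementation.

```typescript
// Hypothetical shape of one history entry; field names are assumptions.
interface HistoryEntry {
  model: string;
  prompt: string;
  response: string;
  timestamp: string; // ISO 8601
  tags: string[];
}

// Group entries by model, then emit one Markdown section per model
// with a time-stamped subsection per entry.
function exportAsMarkdown(history: HistoryEntry[]): string {
  const byModel = new Map<string, HistoryEntry[]>();
  for (const entry of history) {
    const group = byModel.get(entry.model) ?? [];
    group.push(entry);
    byModel.set(entry.model, group);
  }
  const sections: string[] = [];
  for (const [model, entries] of byModel) {
    sections.push(`## ${model}`);
    for (const e of entries) {
      sections.push(`### ${e.timestamp}`);
      sections.push(`Tags: ${e.tags.join(", ")}`);
      sections.push(`Prompt:\n\n${e.prompt}`);
      sections.push(`Response:\n\n${e.response}`);
    }
  }
  return sections.join("\n\n");
}
```

The JSON export would simply be `JSON.stringify(history, null, 2)` over the same entries.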


Model Support

Provider    Example Model    Usage Notes
OpenAI      gpt-4, gpt-4o    Requires OPENAI_API_KEY
Anthropic   claude-3-opus    Requires CLAUDE_API_KEY
Ollama      llama3           Local model support

Set the required API keys in .env.local (Ollama runs locally and needs no key).
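A minimal .env.local might look like this. The key names come from the table above; the values are placeholders, not real keys:

```
OPENAI_API_KEY=your-openai-key
CLAUDE_API_KEY=your-anthropic-key
```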


End-to-End Usage Guide

Looking to test prompts from start to finish?

See the full walkthrough for testing, tagging, exporting, and sharing prompts across supported LLM providers:

E2E-GUIDE.md


Deployment

To build and serve a production bundle:

pnpm build
pnpm start

Supports Vercel, Netlify, Docker, and self-hosting.


Contributing

PRs are welcome! Open an issue or discussion to propose ideas.

See CONTRIBUTOR.md for setup and guidelines.


Maintainer

Built with ❤️ by @Cre4T3Tiv3 at ByteStack Labs


License

MIT – © 2025 @Cre4T3Tiv3


⚠️ Known Installation Warnings

This project includes some development dependencies with upstream deprecation warnings (e.g., eslint@8.x, node-domexception@2.0.2). These are non-breaking and safe to ignore.

For detailed context and updates:

KNOWN-WARNINGS.md

