SystemProbe is an AI-powered desktop application that helps users discover and optimize system prompts for Large Language Models (LLMs) without requiring complex fine-tuning or LoRA training. It guides users through an iterative process of refining a system prompt until it produces the desired outputs across varying inputs.
- Dual LLM Workflow: Uses two LLMs - one for testing prompts and another for refining them
- Iterative Refinement: Step-by-step process to refine system prompts based on user feedback
- Visual Scoring: Rate prompt effectiveness with an intuitive slider
- Custom Guidance: Provide specific guidance to the refiner LLM
- Session Management: Save and load your prompt optimization sessions
- Dark/Light Theme Support: Choose your preferred visual theme
- Groq API Integration: Leverages Groq's powerful LLM models
Start the application:

```shell
python main.py
```
1. Define Inputs and Examples
2. Set Initial System Prompt
3. Test Output and Score Results
4. Analyze and Refine
5. Final Optimized Prompt
Clone the repository:

```shell
git clone https://github.com/fernicar/SystemProbe.git
cd SystemProbe
```

Create and activate a virtual environment:

```shell
python -m venv .venv
.venv\Scripts\activate
```

Install dependencies:

```shell
pip install -r requirements.txt
```
Set up your free Groq API key:
- Create a `.env` file in the project root containing: `GROQ_API_KEY='your_groq_api_key_here'`
- Or enter it in the application settings
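The `.env` entry is a plain `KEY=VALUE` pair. A minimal stdlib loader is sketched below for illustration; the app itself may use the `python-dotenv` package or similar instead.

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    """Minimal .env loader (illustrative; not the app's actual code).
    Reads KEY=VALUE lines, skips comments, and strips surrounding quotes."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            # setdefault: an already-exported environment variable wins
            os.environ.setdefault(key.strip(), value.strip().strip("'\""))
```

After calling `load_env()`, the key is available as `os.environ["GROQ_API_KEY"]`.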
- API Key: Set your Groq API key in Settings
- Theme: Choose between Dark and Light themes
- LLM Model: Select from available Groq models
- Model Updates: Toggle automatic model list updates
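Groq exposes an OpenAI-compatible REST API, so a chat-completion call pairs the system prompt under test with a user input. The helper below only assembles the request (it does not send it); the helper name and the model name are illustrative, not taken from the app.

```python
import json
import urllib.request

# Groq's OpenAI-compatible chat completions endpoint
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_chat_request(api_key: str, model: str,
                       system_prompt: str, user_input: str) -> urllib.request.Request:
    """Assemble (but do not send) a Groq chat-completion request."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input},
        ],
    }
    return urllib.request.Request(
        GROQ_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
```

Sending the request with `urllib.request.urlopen` (or letting Langchain handle the transport, as the app does) returns a JSON body whose `choices[0].message.content` holds the model's output.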
- Python: Core programming language
- PySide6: Qt-based GUI framework
- Langchain: Framework for LLM application development
- Groq API: High-performance LLM provider
- QThread Workers: For non-blocking LLM operations
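The QThread workers keep the GUI responsive by running LLM calls off the main thread and handing results back. The same hand-off pattern is sketched below with the standard library standing in for Qt; in the real app a QThread worker would emit a Signal back to the GUI instead of using a queue.

```python
import queue
import threading

def run_in_worker(task, on_result):
    """Run `task` on a background thread and deliver its result to `on_result`,
    mirroring how a QThread worker emits a Signal back to the GUI thread."""
    results: queue.Queue = queue.Queue()

    def worker():
        results.put(task())  # e.g. a slow LLM call

    threading.Thread(target=worker, daemon=True).start()
    # A real GUI connects a slot and returns immediately; we block here
    # only to keep the sketch self-contained.
    on_result(results.get())
```

The key point is that the slow call never runs on the thread that owns the UI, so the window stays interactive while the LLM responds.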
This project is licensed under the MIT License - see the LICENSE file for details.
- Special thanks to ScuffedEpoch for the TINS methodology and the initial example.
- Thanks to the free tier AI assistant for its initial contribution to the project.
- Gratitude to the Groq team for their API and support.
- Thanks to the Langchain and PySide6 communities for their respective libraries and documentation.
- Augment extension for VS Code
- Tested with Gemini 2.5 Pro (free tier beta testing) from Google AI Studio