LSPRAG (Language Server Protocol-based AI Generation) is a cutting-edge VS Code extension that leverages Language Server Protocol (LSP) integration and Large Language Models (LLMs) to automatically generate high-quality unit tests in real-time. By combining semantic code analysis with AI-powered generation, LSPRAG delivers contextually accurate and comprehensive test suites across multiple programming languages.
- Generate unit tests instantly as you code
- Context-aware test creation based on function semantics
- Intelligent test case generation with edge case coverage
- Java: Full support with JUnit framework
- Python: Comprehensive pytest integration
- Go: Native Go testing framework support
- Extensible: Easy to add support for additional languages
- Semantic Analysis: Deep code understanding through LSP
- Dependency Resolution: Automatic import and mock generation
- Coverage Optimization: Generate tests for maximum code coverage
- Multiple LLM Providers: Support for OpenAI, DeepSeek, and Ollama
- Customizable Prompts: Multiple generation strategies available
| Language | Status | Framework | Features |
|---|---|---|---|
| Java | ✅ Production Ready | JUnit 4/5 | Full semantic analysis, mock generation |
| Python | ✅ Production Ready | pytest | Type hints, async support, fixtures |
| Go | ✅ Production Ready | Go testing | Package management, benchmarks |
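To illustrate the kind of output LSPRAG aims for in Python, here is a hand-written sketch (not actual tool output) of a small function and the pytest-style tests an edge-case-aware generator would produce; the `clamp` function and all test names are hypothetical examples.

```python
# Hypothetical function under test -- not part of LSPRAG itself.
def clamp(value: float, low: float, high: float) -> float:
    """Restrict value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))


# The kind of pytest tests a generator targeting edge-case coverage
# would emit: the normal path, both boundaries, and the error path.
def test_clamp_within_range():
    assert clamp(5.0, 0.0, 10.0) == 5.0

def test_clamp_at_boundaries():
    assert clamp(0.0, 0.0, 10.0) == 0.0
    assert clamp(10.0, 0.0, 10.0) == 10.0

def test_clamp_outside_range():
    assert clamp(-3.0, 0.0, 10.0) == 0.0
    assert clamp(42.0, 0.0, 10.0) == 10.0

def test_clamp_invalid_bounds():
    import pytest  # imported here so the rest of the file runs without pytest
    with pytest.raises(ValueError):
        clamp(1.0, 10.0, 0.0)
```

Run with `pytest` as usual; LSPRAG writes files of this shape into the configured save path.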
- VS Code: Version 1.95.0 or higher
- Node.js: Version 20 or higher
Note: Currently, LSPRAG is available only as source code. While we plan to publish it as a one-click extension in the future, we're maintaining source-only distribution to preserve anonymity. Please follow the steps below to set up the application.
- Download Source Code
  - Use `git clone` or download the ZIP file directly
- Setup Project
  - Navigate to the project's root directory, `LSPRAG`
  - Install dependencies: `npm install --force`
  - Compile the project: `npm run compile`
- Install Language Server Extensions
  - For Python: install a Python language server extension (e.g., "Pylance") from the VS Code Marketplace
  - For Java: install the "Oracle Java Extension Pack" from the VS Code Marketplace
  - For Go: install the "Go" extension, then enable semantic tokens in settings:

    ```json
    { "gopls": { "ui.semanticTokens": true } }
    ```
- Download Baseline Python Project
  - Navigate to the experiments directory: `cd experiments`
  - Create a projects folder: `mkdir projects`
  - Clone a sample project: `git clone https://github.com/psf/black.git`
- Activate Extension
  - Press `F5` in the LSPRAG project window to launch a new Extension Development Host window
- ⚠️ IMPORTANT: Configure LLM Settings in the NEW Editor

  Critical: you must configure your LLM settings in the newly opened VS Code editor (not the original one) for LSPRAG to work properly.
Option A: VS Code Settings UI
- Open VS Code Settings (`Ctrl/Cmd + ,`)
- Search for "LSPRAG" settings
- Configure provider, model, and API keys
Option B: Direct JSON Configuration

Add to your `settings.json`:

```json
{
  "LSPRAG": {
    "provider": "deepseek",
    "model": "deepseek-chat",
    "deepseekApiKey": "your-api-key",
    "openaiApiKey": "your-openai-key",
    "localLLMUrl": "http://localhost:11434",
    "savePath": "lsprag-tests",
    "promptType": "detailed",
    "generationType": "original",
    "maxRound": 3
  }
}
```
Test your configuration with `Ctrl+Shift+P` → `LSPRAG: Show Current Settings`.
- Open Your Project
  - Open your workspace in the new VS Code editor
  - Navigate to the black project: `LSPRAG/experiments/projects/black`
  - Ensure language servers are active for your target language
- Generate Tests
  - Select the target function and run `LSPRAG: Generate Unit Test` from the Command Palette
- Review & Deploy
  - Review the generated tests before adding them to your project

Available commands:

- `LSPRAG: Generate Unit Test` - Generate tests for the selected function
- `LSPRAG: Show Current Settings` - Display current configuration
- `LSPRAG: Test LLM` - Test LLM connectivity and configuration
| Setting | Type | Default | Description |
|---|---|---|---|
| `LSPRAG.provider` | string | `"deepseek"` | LLM provider (deepseek, openai, ollama) |
| `LSPRAG.model` | string | `"deepseek-chat"` | Model name for generation |
| `LSPRAG.savePath` | string | `"lsprag-tests"` | Output directory for generated tests |
| `LSPRAG.promptType` | string | `"basic"` | Prompt strategy for generation |
| `LSPRAG.generationType` | string | `"original"` | Generation approach |
| `LSPRAG.maxRound` | number | `3` | Maximum refinement rounds |
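As a minimal sketch of how these defaults compose with user overrides (the `effective_settings` helper is hypothetical; VS Code performs this merge internally):

```python
import json

# Defaults taken from the settings table above.
DEFAULTS = {
    "LSPRAG.provider": "deepseek",
    "LSPRAG.model": "deepseek-chat",
    "LSPRAG.savePath": "lsprag-tests",
    "LSPRAG.promptType": "basic",
    "LSPRAG.generationType": "original",
    "LSPRAG.maxRound": 3,
}

def effective_settings(user_json: str) -> dict:
    """Merge user settings.json content over the defaults (hypothetical helper)."""
    user = json.loads(user_json)
    return {**DEFAULTS, **{k: v for k, v in user.items() if k in DEFAULTS}}

# A user who only overrides provider and model keeps every other default.
merged = effective_settings('{"LSPRAG.provider": "openai", "LSPRAG.model": "gpt-4o-mini"}')
```

Any setting you leave out of `settings.json` falls back to the default in the table.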
DeepSeek:

```json
{
  "LSPRAG.provider": "deepseek",
  "LSPRAG.model": "deepseek-chat",
  "LSPRAG.deepseekApiKey": "your-api-key"
}
```
OpenAI:

```json
{
  "LSPRAG.provider": "openai",
  "LSPRAG.model": "gpt-4o-mini",
  "LSPRAG.openaiApiKey": "your-api-key"
}
```
Ollama (local):

```json
{
  "LSPRAG.provider": "ollama",
  "LSPRAG.model": "llama3-70b",
  "LSPRAG.localLLMUrl": "http://localhost:11434"
}
```
`LSPRAG.generationType` values:

- `naive`: Basic test generation without semantic analysis
- `original`: Standard LSP-aware generation (recommended)
- `agent`: Multi-step reasoning with iterative refinement
- `cfg`: Control flow graph-based generation
- `experimental`: Latest experimental features
- `fastest`: Optimized for speed
- `best`: Highest quality generation
`LSPRAG.promptType` values:

- `basic`: Minimal context, fast generation
- `detailed`: Comprehensive context analysis
- `concise`: Balanced approach
- `fastest`: Speed-optimized prompts
- `best`: Quality-optimized prompts
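Since both options are free-form strings in `settings.json`, a typo silently falls through to unexpected behavior. A sketch of a pre-flight check against the allowed values above (`validate_lsprag_options` is a hypothetical helper, not part of LSPRAG's API):

```python
# Allowed values taken from the two lists above.
GENERATION_TYPES = {"naive", "original", "agent", "cfg", "experimental", "fastest", "best"}
PROMPT_TYPES = {"basic", "detailed", "concise", "fastest", "best"}

def validate_lsprag_options(settings: dict) -> list:
    """Return error messages for unknown option values (hypothetical helper)."""
    errors = []
    gen = settings.get("LSPRAG.generationType", "original")  # table default
    prompt = settings.get("LSPRAG.promptType", "basic")      # table default
    if gen not in GENERATION_TYPES:
        errors.append(f"unknown generationType: {gen!r}")
    if prompt not in PROMPT_TYPES:
        errors.append(f"unknown promptType: {prompt!r}")
    return errors
```

An empty return list means both options are recognized.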
- Minimum: 8GB RAM, 4 CPU cores
- Recommended: 16GB RAM, 8 CPU cores
- GPU: Optional but recommended for local LLM inference
Ready to generate unit tests with LSPRAG! 🚀