A powerful tool to analyze and improve variable and function names in JavaScript/TypeScript projects using AI-powered suggestions and predefined naming rules.
1. Install the package:

   ```bash
   # With npm
   npm install --save-dev project-code-namer

   # With yarn
   yarn add --dev project-code-namer
   ```

2. Set up AI (recommended):

   ```bash
   # Install Ollama (free local AI)
   brew install ollama && brew services start ollama

   # Run the setup script
   yarn setup-ollama   # or: npm run setup-ollama
   ```

3. Start analyzing:

   ```bash
   yarn project-code-namer   # or: npx project-code-namer
   ```

That's it! The tool will guide you through the rest.
- 🧠 Smart Analysis: Analyzes code context to suggest more descriptive names
- 🤖 AI Integration: Support for multiple AI providers (OpenAI, Anthropic, Google Gemini, Ollama, GitHub Copilot)
- 📋 Predefined Rules: Rule-based system following naming best practices
- ⚛️ React Support: Specific suggestions for React components, hooks, and events
- 🧪 Context Detection: Recognizes different file types (testing, API, components, etc.)
- 📊 Detailed Statistics: Complete analysis reports
- 🎯 Easy to Use: Interactive and user-friendly CLI interface
- 🏗️ Clean Architecture: Built with SOLID principles and clean code practices
Global installation:

```bash
# With npm
npm install -g project-code-namer

# With yarn
yarn global add project-code-namer
```

Local installation (per project):

```bash
# With npm
npm install --save-dev project-code-namer

# With yarn
yarn add --dev project-code-namer
```
Ollama allows you to run AI models locally for free, ensuring privacy and no API costs.
macOS:

```bash
# Install Ollama using Homebrew
brew install ollama

# Start the Ollama service
brew services start ollama
```

Windows:

- Download the installer from https://ollama.ai/download/windows and run it
- Or use winget:

```bash
winget install Ollama.Ollama
```

Linux:

```bash
# Install using the official script
curl -fsSL https://ollama.ai/install.sh | sh

# Start and enable the service
sudo systemctl start ollama
sudo systemctl enable ollama
```
After installing Ollama, download a lightweight model:

```bash
# Download a small, fast model (~1.3 GB) - recommended
ollama pull llama3.2:1b

# Or a more capable model (~2 GB) - better quality, slower
ollama pull llama3.2:3b

# Verify downloaded models
ollama list
```

Test that everything is working:

```bash
# Test the model
ollama run llama3.2:1b "Hello, can you suggest better names for a variable called 'data'?"
```
In your project root, create a file called `.ai-config.json`:

```json
{
  "provider": "ollama",
  "ollama": {
    "endpoint": "http://localhost:11434/api/generate",
    "model": "llama3.2:1b"
  }
}
```
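For reference, the `endpoint` above is Ollama's text-generation API. A minimal TypeScript sketch of calling it directly (illustrative only, not this package's internals; requires Node 18+ for the global `fetch`):

```typescript
// Call Ollama's /api/generate endpoint. With `stream: false`, the reply
// arrives as a single JSON object whose `response` field holds the text.
async function suggestName(identifier: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2:1b",
      prompt: `Suggest a more descriptive name for a variable called '${identifier}'.`,
      stream: false,
    }),
  });
  const data = await res.json();
  return data.response; // the model's raw text reply
}

suggestName("data").then(console.log);
```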
If you prefer OpenAI's models:

- Get your API key from the OpenAI Platform
- Create a `.ai-config.json` file in your project root:

  ```json
  {
    "provider": "openai",
    "openai": {
      "apiKey": "your_actual_api_key_here",
      "model": "gpt-3.5-turbo"
    }
  }
  ```

- Or use environment variables by creating a `.env` file:

  ```
  OPENAI_API_KEY=your_actual_api_key_here
  ```
Run the automatic setup script to configure Ollama:

```bash
# With npm
npm run setup-ollama

# With yarn
yarn setup-ollama
```

Then run the tool:

```bash
# With npm
npx project-code-namer

# With yarn
yarn project-code-namer

# If installed globally
project-code-namer
```
You can also use the package programmatically:

```typescript
import { NamerSuggesterApp, CodeAnalyzer, SuggestionService } from 'project-code-namer';

// Use the complete application
const app = new NamerSuggesterApp();
await app.run();

// Or use individual components
const result = CodeAnalyzer.analyzeFile('./src/example.ts');

// `config` and `fileContext` are assumed to be defined elsewhere, e.g. your
// parsed .ai-config.json and the context produced by the analysis step
const suggestionService = new SuggestionService(config);
const suggestions = await suggestionService.getSuggestions(
  'data',      // the identifier to improve
  'variable',  // what kind of identifier it is
  '',          // surrounding code snippet
  fileContext
);
```
After installation, you need to configure an AI provider. The easiest way is:

- Run the setup script (automatically creates `.ai-config.json`):

  ```bash
  # With npm
  npm run setup-ollama

  # With yarn
  yarn setup-ollama
  ```

- Or manually create `.ai-config.json` in your project root.
Create a `.ai-config.json` file in your project root with one of these configurations:
Ollama (free, local - recommended):

```json
{
  "provider": "ollama",
  "ollama": {
    "endpoint": "http://localhost:11434/api/generate",
    "model": "llama3.2:1b"
  }
}
```

OpenAI:

```json
{
  "provider": "openai",
  "openai": {
    "apiKey": "your-api-key-here",
    "model": "gpt-3.5-turbo"
  }
}
```

Auto (tries all available providers):

```json
{
  "provider": "auto",
  "ollama": {
    "endpoint": "http://localhost:11434/api/generate",
    "model": "llama3.2:1b"
  },
  "openai": {
    "apiKey": "your-api-key-here",
    "model": "gpt-3.5-turbo"
  }
}
```

Rules only (no AI required):

```json
{
  "provider": "rules"
}
```
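All variants share the same shape: a top-level `provider` field plus optional per-provider settings. As a rough illustration (hypothetical code, not the package's actual loader), a tool could consume this file like so:

```typescript
import { readFileSync } from "node:fs";

// Hypothetical types mirroring the config examples above
interface AiConfig {
  provider: "ollama" | "openai" | "anthropic" | "gemini" | "copilot" | "auto" | "rules";
  ollama?: { endpoint: string; model: string };
  openai?: { apiKey: string; model: string };
}

const config: AiConfig = JSON.parse(readFileSync(".ai-config.json", "utf8"));

// Basic sanity check before wiring up a provider
if (config.provider === "ollama" && !config.ollama) {
  throw new Error("provider is 'ollama' but no ollama block was configured");
}
```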
💡 Tip: The setup script (`yarn setup-ollama`) automatically creates the Ollama configuration for you!
- 🆓 ollama: Free local models (recommended)
- 📋 rules: Predefined naming rules only
- 💰 openai: OpenAI GPT models (paid)
- 💰 anthropic: Anthropic Claude (paid)
- 💰 gemini: Google Gemini (paid)
- 🔄 auto: Tries all available providers
- 🤖 copilot: GitHub Copilot CLI
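As a rough sketch (hypothetical code, not the package's actual implementation), `auto` mode can be thought of as a simple fallback chain over whichever providers are configured:

```typescript
// Hypothetical provider interface: each provider either returns suggestions
// or throws (unavailable, misconfigured, rate-limited, etc.).
type Provider = (name: string) => Promise<string[]>;

async function suggestWithFallback(
  name: string,
  providers: Provider[], // e.g. [ollama, openai, anthropic]
): Promise<string[]> {
  for (const provider of providers) {
    try {
      return await provider(name);
    } catch {
      continue; // this provider failed; try the next one
    }
  }
  return []; // nothing available: caller can fall back to predefined rules
}
```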
The project follows Clean Code and SOLID principles:

```
src/
├── analyzers/   # Code analysis and context extraction
├── cli/         # Command line interface
├── config/      # Configuration management
├── providers/   # AI providers
├── services/    # Main business logic
├── types/       # TypeScript type definitions
├── utils/       # General utilities
├── bin/         # Executables
└── app.ts       # Main application
```
- SRP (Single Responsibility Principle): Each class has a specific responsibility
- DRY (Don't Repeat Yourself): Reusable code without duplication
- Clean Code: Descriptive names, small functions, clear structure
- Separation of Concerns: Clear separation between analysis, suggestions, CLI, and configuration
Functions:

- Event handlers (`handle*`, `on*`)
- API functions (`fetch*`, `load*`, `retrieve*`)
- Validators (`validate*`, `is*Valid`)
- Initializers (`initialize*`, `setup*`, `create*`)

Variables:

- States and flags (`is*`, `has*`, `should*`)
- Data and collections (`items`, `collection`, `payload`)
- Counters and indices (`counter`, `index`, `position`)

React:

- Components (`*Component`)
- Custom hooks (`use*`)
- Event handlers (`handle*`, `on*`)
- States (`*State`)
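For instance, a small React module following the patterns above might look like this (all names here, such as `useUserItems` and `UserListComponent`, are hypothetical):

```tsx
import { useEffect, useState } from "react";

// Custom hook: `use*` prefix; state flags use the `is*` pattern
function useUserItems() {
  const [userItems, setUserItems] = useState<string[]>([]);
  const [isLoading, setIsLoading] = useState(true);

  useEffect(() => {
    // API function: `fetch*` prefix
    async function fetchUserItems() {
      const response = await fetch("/api/users");
      setUserItems(await response.json());
      setIsLoading(false);
    }
    fetchUserItems();
  }, []);

  return { userItems, isLoading };
}

// Component: `*Component` suffix; event handler: `handle*` prefix
export function UserListComponent() {
  const { userItems, isLoading } = useUserItems();
  const handleRefreshClick = () => window.location.reload();

  if (isLoading) return <p>Loading…</p>;
  return (
    <ul onClick={handleRefreshClick}>
      {userItems.map((item) => (
        <li key={item}>{item}</li>
      ))}
    </ul>
  );
}
```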
An example of the kind of renaming the tool suggests:

Before:

```typescript
function getData() {
  const d = fetch('/api/users');
  return d;
}

const flag = true;
const arr = [1, 2, 3];
```

After:

```typescript
function fetchUserData() {
  const userData = fetch('/api/users');
  return userData;
}

const isVisible = true;
const userItems = [1, 2, 3];
```
When you run `project-code-namer`, you'll see an interactive menu:

```
🔍 Namer Suggester - Name Analyzer
---------------------------------------
🧰 Project detected: TYPESCRIPT
🛠️ Framework: NEXTJS
🤖 Suggestion engine: Automatic (tries all available)

🧠 What would you like to do?
❯ 🔍 Analyze files and get naming suggestions
  ⚙️ Configure AI providers
  ❓ View help
  ❌ Exit
```
- Improved Code Readability: More descriptive and meaningful names
- Consistency: Follows established naming conventions
- Team Alignment: Standardized naming across the team
- Learning Tool: Helps developers learn better naming practices
- Time Saving: Automated suggestions instead of manual thinking
- Context Awareness: Suggestions based on code context and purpose
- Node.js 16+
- TypeScript 4.5+
Build the project:

```bash
# With npm
npm run build

# With yarn
yarn build
```
The codebase is organized into specialized modules:
- analyzers/: AST parsing and context extraction
- cli/: Interactive user interface
- config/: Configuration management
- providers/: AI service integrations
- services/: Core business logic
- types/: TypeScript definitions
- utils/: Shared utilities
- Fork the project
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Follow TypeScript best practices
- Maintain test coverage
- Update documentation
- Follow existing code style
For most users, we recommend Ollama:
- ✅ Free: No API costs
- ✅ Private: Your code never leaves your machine
- ✅ Fast: No network latency
- ✅ Reliable: No rate limits or quotas
Use OpenAI if:
- You already have credits
- You need the absolute best quality suggestions
- You don't mind the API costs
Does it work offline?

With Ollama: Yes! Once you download a model, everything works completely offline.
With OpenAI: No. An internet connection is required for API calls.
How much disk space do the models use?

- `llama3.2:1b`: ~1.3 GB (recommended for fast suggestions)
- `llama3.2:3b`: ~2 GB (better quality, slower)
Is my code sent to external servers?

With Ollama: No. Everything stays on your local machine.
With OpenAI: Yes. Code snippets are sent to OpenAI's servers for analysis.
Yes! Use the `rules` provider for CI/CD environments:

```json
{
  "provider": "rules"
}
```
This uses only predefined rules without requiring AI models.
This project is licensed under the MIT License - see the LICENSE file for details.
- Inspired by code naming best practices
- Uses Babel for AST analysis
- Integration with multiple AI providers for advanced suggestions
- Built with modern TypeScript and Node.js
- 🐛 Report Issues
- 💬 Discussions
- 📧 Contact
- Unit and integration tests
- VS Code extension
- More AI providers
- Custom rule configuration
- Batch processing mode
- CI/CD integration
- Support for more languages
Made by Juan David Peña
Helping developers write better, more readable code, one name at a time.