🚀 Universal AI Development Assistant

Status: Early development (v0.3.0-alpha)
Goal: A security-focused development assistant with multi-provider AI support

Next-Generation AI-Powered Development Platform with Multi-Provider Support

🌟 Revolutionary Features

🤖 Multi-Provider AI Integration

  • OpenRouter - Access to 100+ AI models through one API
  • OpenAI - GPT-4o, GPT-4o-mini, GPT-3.5-turbo
  • Anthropic - Claude 3.5 Sonnet, Claude 3 Haiku
  • Google - Gemini Pro, Gemini Flash
  • Groq - Ultra-fast Llama 3.1, Mixtral models
  • Together AI - Open-source model hosting
  • Cohere - Command R+ models
  • Ollama - Local model execution (always-available fallback)

🎯 Intelligent Provider Routing

  • Priority-based routing - Use your preferred providers first (sketched after this list)
  • Cost optimization - Automatically select cheapest available provider
  • Performance optimization - Route to fastest responding provider
  • Automatic failover - Seamless fallback when providers fail
  • Load balancing - Distribute requests across providers
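
Combining these strategies is straightforward: sort providers by priority, try each in turn, and fall back when one fails. Below is a minimal Rust sketch of that idea; the Provider struct, its complete() method, and the simulated outage are illustrative assumptions, not the project's actual code.

// Minimal sketch of priority-based routing with automatic failover.
#[derive(Debug)]
struct Provider {
    name: &'static str,
    priority: u8, // higher = preferred
}

impl Provider {
    // Stand-in for a real API call; one provider fails to simulate an outage.
    fn complete(&self, prompt: &str) -> Result<String, String> {
        if self.name == "openai" {
            return Err("rate limited".to_string());
        }
        Ok(format!("[{}] completion for: {prompt}", self.name))
    }
}

fn route(providers: &mut [Provider], prompt: &str) -> Option<String> {
    // Try providers from highest to lowest priority until one succeeds.
    providers.sort_by(|a, b| b.priority.cmp(&a.priority));
    for p in providers.iter() {
        match p.complete(prompt) {
            Ok(text) => return Some(text),
            Err(e) => eprintln!("{} failed ({e}); falling back to next provider", p.name),
        }
    }
    None
}

fn main() {
    let mut providers = [
        Provider { name: "openrouter", priority: 9 },
        Provider { name: "openai", priority: 8 },
        Provider { name: "ollama", priority: 3 },
    ];
    println!("{:?}", route(&mut providers, "def fibonacci(n):"));
}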

💻 Advanced Code Capabilities

  • Code Completion - Context-aware suggestions in any language
  • Code Analysis - Security, performance, quality analysis
  • Documentation Generation - Auto-generate comprehensive docs
  • Test Generation - Create unit tests automatically
  • Code Explanation - Understand complex code instantly
  • Code Refactoring - Improve code structure and quality
  • Language Translation - Convert code between languages
  • Real-time Streaming - Live completion responses

📊 Enterprise Features

  • Cost Tracking - Monitor API usage and costs
  • Analytics Dashboard - Provider performance metrics
  • Rate Limiting - Control API usage
  • Authentication - JWT-based security
  • Caching - Reduce API calls and costs
  • Health Monitoring - Real-time provider status

🚀 Quick Start

1. Clone the Repository

git clone https://github.com/Tehlikeli107/universal-ai-dev-assistant.git
cd universal-ai-dev-assistant

2. Configure Environment

cp .env.example .env
# Edit .env with your API keys

3. Run with Docker (Recommended)

docker-compose up -d

4. Or Run Locally

# Backend
cd backend
cargo run

# Frontend (in another terminal)
cd frontend
npm install
npm start

🔧 Configuration

Environment Variables

# OpenRouter (Recommended - Access to 100+ models)
OPENROUTER_API_KEY=your_openrouter_key_here

# OpenAI
OPENAI_API_KEY=your_openai_key_here

# Anthropic Claude
ANTHROPIC_API_KEY=your_anthropic_key_here

# Google Gemini
GOOGLE_API_KEY=your_google_key_here

# Groq (Free tier available)
GROQ_API_KEY=your_groq_key_here

# Ollama (Local - No API key needed)
OLLAMA_BASE_URL=http://localhost:11434

Provider Priorities

# Higher number = higher priority
OPENROUTER_PRIORITY=9
OPENAI_PRIORITY=8
ANTHROPIC_PRIORITY=8
GROQ_PRIORITY=6
OLLAMA_PRIORITY=3
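
As a rough sketch of how such variables might be consumed (the real backend's configuration code may differ; the helper below is an assumption for illustration), an API key and its priority can be read straight from the environment:

use std::env;

// Read an API key and a priority for one provider; missing values fall back
// to a default so local-only setups (e.g. Ollama) still work.
fn provider_config(key_var: &str, priority_var: &str) -> (Option<String>, u8) {
    let api_key = env::var(key_var).ok();
    let priority = env::var(priority_var)
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(5); // assumed default priority
    (api_key, priority)
}

fn main() {
    let (key, priority) = provider_config("OPENROUTER_API_KEY", "OPENROUTER_PRIORITY");
    println!("OpenRouter configured: {}, priority {priority}", key.is_some());
}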

📚 API Documentation

Health Check

curl http://localhost:8080/health

Code Completion

curl -X POST http://localhost:8080/api/v1/complete \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "def fibonacci(n):",
    "language": "python",
    "max_tokens": 100
  }'
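
The same request can be issued from any HTTP client. The Rust sketch below uses the reqwest, tokio, and serde_json crates (chosen here for illustration, not necessarily what the project ships) and prints the raw response, since the exact response schema is not documented above:

use serde_json::json;

// Requires reqwest with its "json" feature, tokio, and serde_json in Cargo.toml.
#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let client = reqwest::Client::new();
    let body = json!({
        "prompt": "def fibonacci(n):",
        "language": "python",
        "max_tokens": 100
    });
    // POST the completion request shown above and print the raw JSON reply.
    let resp = client
        .post("http://localhost:8080/api/v1/complete")
        .json(&body)
        .send()
        .await?;
    println!("{}", resp.text().await?);
    Ok(())
}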

Code Analysis

curl -X POST http://localhost:8080/api/v1/analyze \
  -H "Content-Type: application/json" \
  -d '{
    "code": "function add(a, b) { return a + b; }",
    "language": "javascript",
    "analysis_type": "security"
  }'

Code Actions

# Generate documentation
curl -X POST http://localhost:8080/api/v1/code/action \
  -H "Content-Type: application/json" \
  -d '{
    "code": "def quicksort(arr): ...",
    "language": "python",
    "action": "document"
  }'

# Generate tests
curl -X POST http://localhost:8080/api/v1/code/action \
  -H "Content-Type: application/json" \
  -d '{
    "code": "function add(a, b) { return a + b; }",
    "language": "javascript",
    "action": "test"
  }'

# Translate code
curl -X POST http://localhost:8080/api/v1/code/action \
  -H "Content-Type: application/json" \
  -d '{
    "code": "def hello(): print(\"Hello\")",
    "language": "python",
    "action": "translate",
    "target_language": "rust"
  }'

Provider Management

# List available providers
curl http://localhost:8080/api/v1/providers

# List available models
curl http://localhost:8080/api/v1/models

# Get metrics
curl http://localhost:8080/api/v1/metrics

πŸ—οΈ Architecture

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Frontend      │    │   Backend       │    │   AI Providers  │
│   (React)       │◄──►│   (Rust)        │◄──►│   (Multiple)    │
│                 │    │                 │    │                 │
│ • Dashboard     │    │ • Provider      │    │ • OpenRouter    │
│ • Code Editor   │    │   Router        │    │ • OpenAI        │
│ • Analytics     │    │ • Load Balancer │    │ • Anthropic     │
│ • Settings      │    │ • Cost Tracker  │    │ • Google        │
└─────────────────┘    │ • Health Monitor│    │ • Groq          │
                       │ • Rate Limiter  │    │ • Ollama        │
┌─────────────────┐    │ • Caching       │    │ • Together      │
│   VSCode Ext    │◄──►│ • Analytics     │    │ • Cohere        │
│                 │    └─────────────────┘    └─────────────────┘
│ • Completions   │
│ • Code Actions  │
│ • Diagnostics   │
└─────────────────┘
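
The backend in this diagram talks to every AI service through a single abstraction, which is what keeps the router, load balancer, and cost tracker provider-agnostic. A minimal sketch of such an abstraction is shown below; the trait and method names are illustrative assumptions, not the project's real API.

// One trait capturing what the router needs; each provider implements it.
trait AiProvider {
    fn name(&self) -> &str;
    fn priority(&self) -> u8;
    fn is_healthy(&self) -> bool;
    fn complete(&self, prompt: &str, max_tokens: u32) -> Result<String, String>;
}

// Local fallback provider, assumed always available.
struct OllamaProvider;

impl AiProvider for OllamaProvider {
    fn name(&self) -> &str { "ollama" }
    fn priority(&self) -> u8 { 3 }
    fn is_healthy(&self) -> bool { true }
    fn complete(&self, prompt: &str, _max_tokens: u32) -> Result<String, String> {
        Ok(format!("local completion for: {prompt}"))
    }
}

fn main() {
    let provider = OllamaProvider;
    if provider.is_healthy() {
        println!("{:?}", provider.complete("def hello():", 64));
    }
}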

🎨 Frontend Features

  • Modern React Dashboard - Beautiful, responsive UI
  • Real-time Provider Status - Live health monitoring
  • Cost Analytics - Track usage and spending
  • Model Comparison - Compare provider performance
  • Code Playground - Test completions interactively
  • Settings Management - Configure providers and preferences

🔌 VSCode Extension

  • Intelligent Code Completion - Context-aware suggestions
  • Code Actions - Quick fixes and improvements
  • Hover Documentation - Instant code explanations
  • Diagnostics - Real-time code analysis
  • Multi-provider Support - Choose your preferred AI

🐳 Docker Deployment

# docker-compose.yml
version: '3.8'
services:
  backend:
    build: ./backend
    ports:
      - "8080:8080"
    environment:
      - OPENROUTER_API_KEY=${OPENROUTER_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - backend

📈 Performance & Scaling

  • Sub-100ms Response Times - Optimized for speed
  • Horizontal Scaling - Multiple backend instances
  • Intelligent Caching - Reduce API calls by 60%
  • Connection Pooling - Efficient resource usage
  • Rate Limiting - Prevent API abuse
  • Health Checks - Automatic failover

💰 Cost Optimization

  • Provider Cost Comparison - Always use cheapest option
  • Usage Analytics - Track spending per provider
  • Free Tier Maximization - Use free providers first
  • Caching Strategy - Avoid duplicate API calls (sketched after this list)
  • Token Optimization - Minimize prompt sizes
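
The caching strategy can be as simple as memoizing responses by a hash of the provider and prompt, so an identical request never hits a paid API twice. The in-memory sketch below illustrates the idea; the real project may use a different store (for example Redis), and the names here are assumptions.

use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct CompletionCache {
    entries: HashMap<u64, String>,
}

impl CompletionCache {
    // Cache key derived from the provider name and the prompt text.
    fn key(provider: &str, prompt: &str) -> u64 {
        let mut h = DefaultHasher::new();
        (provider, prompt).hash(&mut h);
        h.finish()
    }

    // Return a cached completion, or call the API closure and remember the result.
    fn get_or_call<F: FnOnce() -> String>(&mut self, provider: &str, prompt: &str, call_api: F) -> String {
        let key = Self::key(provider, prompt);
        self.entries.entry(key).or_insert_with(call_api).clone()
    }
}

fn main() {
    let mut cache = CompletionCache { entries: HashMap::new() };
    // The first call "hits the API" (here, a closure); the second is served from cache.
    let first = cache.get_or_call("openrouter", "def add(a, b):", || "    return a + b".to_string());
    let second = cache.get_or_call("openrouter", "def add(a, b):", || unreachable!());
    assert_eq!(first, second);
}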

🔒 Security Features

  • API Key Management - Secure credential storage
  • Rate Limiting - Prevent abuse (see the token-bucket sketch after this list)
  • Input Validation - Sanitize all inputs
  • CORS Protection - Secure cross-origin requests
  • JWT Authentication - Secure API access
  • Audit Logging - Track all API usage
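
Rate limiting appears in both the performance and security lists; a common implementation is a token bucket per client, sketched below. The capacity and refill rate are arbitrary example values, and the struct is an illustration rather than the project's actual limiter.

use std::time::Instant;

// Token bucket: each request consumes one token; tokens refill over time.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last_refill: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last_refill: Instant::now() }
    }

    fn allow(&mut self) -> bool {
        // Refill based on elapsed time, then spend one token if available.
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        self.last_refill = now;
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Example: bursts of up to 5 requests, refilling 2 tokens per second.
    let mut bucket = TokenBucket::new(5.0, 2.0);
    for i in 1..=7 {
        println!("request {i}: allowed = {}", bucket.allow());
    }
}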

🧪 Testing

# Backend tests
cd backend
cargo test

# Frontend tests
cd frontend
npm test

# Integration tests
npm run test:integration

# Load testing
npm run test:load

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Commit changes: git commit -m 'Add amazing feature'
  4. Push to branch: git push origin feature/amazing-feature
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • OpenRouter - For providing access to multiple AI models
  • Rust Community - For the amazing ecosystem
  • React Team - For the excellent frontend framework
  • All AI Providers - For making this possible

📞 Support

  • GitHub Issues - Bug reports and feature requests
  • Discussions - Community support and ideas
  • Documentation - Comprehensive guides and examples

Made with ❤️ by the Universal AI Development Assistant Team

Empowering developers with the best AI tools available
