AI-Powered Physics Simulations with CopilotKit-style Research Canvas UI
A complete, production-ready enterprise platform for Physics-Informed Neural Networks (PINNs) featuring a CopilotKit-inspired research canvas UI, RAG-powered AI code generation, and global serverless deployment.
🎨 Research Canvas UI: Try the Interactive Demo
📚 API Documentation: Explore the API
🚀 Production Deployment: Coming soon at api.ensimu.space
- Hybrid Serverless: Lambda for coordination, containers for computation
- Event-Driven: Asynchronous messaging between all components (see the coordination sketch after this list)
- GPU-Optimized: Intelligent GPU resource allocation for PINN training/inference
- Cost-Efficient: Pay-per-use with automatic scaling and resource optimization
- Fault-Tolerant: Graceful degradation and automatic retries
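A minimal sketch of the Lambda-to-container hand-off described above, assuming a hypothetical SQS queue in front of the GPU training tier; the queue URL and resource names are illustrative placeholders, not the names created by the deployed stack:

```python
import json
import uuid
import boto3

# Illustrative resource name; the real queue is provisioned by the
# Serverless/infrastructure stack, not by this sketch.
sqs = boto3.client("sqs")
TRAINING_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/pinn-training-queue"


def submit_handler(event, context):
    """Lambda entry point: validate the request, then hand the heavy PINN
    training work off to the container tier via an asynchronous message."""
    problem = json.loads(event.get("body", "{}"))
    workflow_id = str(uuid.uuid4())

    # Lambda only coordinates; the GPU containers (ECS/Batch) consume the
    # message and run the actual training job.
    sqs.send_message(
        QueueUrl=TRAINING_QUEUE_URL,
        MessageBody=json.dumps({"workflow_id": workflow_id, "problem": problem}),
    )

    return {
        "statusCode": 202,
        "body": json.dumps({"workflow_id": workflow_id, "status": "queued"}),
    }
```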
# Install dependencies
npm install -g serverless
pip install -r requirements.txt
# Deploy the platform
serverless deploy --stage prod
# Build and deploy training containers
./scripts/deploy-containers.sh
# Deploy infrastructure
./scripts/deploy-infrastructure.sh
- API Gateway & Orchestration - Main entry point and workflow coordination
- PINN Problem Analyzer - Intelligent problem analysis and architecture recommendation
- ECS Training Service - GPU-accelerated PINN training with DeepXDE
- Fast Inference Handler - Real-time inference with model caching (see the caching sketch after this list)
- Model Deployment - SageMaker integration for production inference
- Monitoring & Optimization - Cost optimization and performance monitoring
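A minimal sketch of the model-caching idea behind the Fast Inference Handler, assuming trained models are serialized to S3; the bucket name and key layout are assumptions for illustration only:

```python
import functools
import boto3

s3 = boto3.client("s3")
MODEL_BUCKET = "ensimu-pinn-models"  # illustrative bucket name


@functools.lru_cache(maxsize=32)
def load_model(workflow_id: str) -> bytes:
    """Download a trained model once per warm Lambda container; later
    requests for the same workflow reuse the in-memory copy."""
    obj = s3.get_object(Bucket=MODEL_BUCKET, Key=f"models/{workflow_id}/model.pt")
    return obj["Body"].read()


def inference_handler(event, context):
    workflow_id = event["pathParameters"]["workflow_id"]
    model_bytes = load_model(workflow_id)  # cache hit after the first call
    # ... deserialize model_bytes and evaluate the PINN at the requested points ...
    return {"statusCode": 200, "body": '{"status": "ok"}'}
```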
- Heat Transfer (Diffusion equations; see the DeepXDE example after this list)
- Fluid Dynamics (Navier-Stokes equations)
- Structural Mechanics (Elasticity equations)
- Electromagnetics (Maxwell equations)
- Wave Propagation (Wave equations)
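As an illustration of the simplest supported domain, here is a minimal DeepXDE (1.x API) script for a 1D heat/diffusion problem, u_t = alpha * u_xx on [0, 1] with zero Dirichlet boundaries and a sinusoidal initial condition. The diffusivity, network size, and iteration count are placeholder values, not what the Problem Analyzer would recommend:

```python
import numpy as np
import deepxde as dde

ALPHA = 0.4  # thermal diffusivity (placeholder value)


def pde(x, u):
    # Residual of u_t - alpha * u_xx = 0, where x = (x, t)
    du_t = dde.grad.jacobian(u, x, i=0, j=1)
    du_xx = dde.grad.hessian(u, x, i=0, j=0)
    return du_t - ALPHA * du_xx


geom = dde.geometry.Interval(0, 1)
timedomain = dde.geometry.TimeDomain(0, 1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

bc = dde.icbc.DirichletBC(geomtime, lambda x: 0, lambda _, on_boundary: on_boundary)
ic = dde.icbc.IC(
    geomtime, lambda x: np.sin(np.pi * x[:, 0:1]), lambda _, on_initial: on_initial
)

data = dde.data.TimePDE(
    geomtime, pde, [bc, ic], num_domain=2000, num_boundary=100, num_initial=100
)
net = dde.nn.FNN([2] + [32] * 3 + [1], "tanh", "Glorot normal")

model = dde.Model(data, net)
model.compile("adam", lr=1e-3)
model.train(iterations=10000)
```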
- Automatic PINN Architecture Selection: Based on problem complexity and physics domain (a heuristic sketch follows this list)
- Hybrid Compute Strategy: Lambda for coordination, ECS/Batch for heavy computation
- Real-time Inference: Sub-second inference with cached models
- Cost Optimization: Intelligent resource scaling and cleanup
- Production Monitoring: CloudWatch dashboards and custom metrics
- Multi-GPU Support: Automatic GPU allocation for training workloads
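As a rough illustration of architecture selection, the sketch below maps a physics domain and a 0-1 complexity score to network hyperparameters. The domains mirror the list above, but the baseline sizes and scaling rule are assumptions, not the analyzer's actual logic:

```python
from dataclasses import dataclass


@dataclass
class PINNArchitecture:
    hidden_layers: int
    neurons_per_layer: int
    activation: str


# Baseline depth/width per physics domain (illustrative values only).
DOMAIN_BASELINES = {
    "heat_transfer": PINNArchitecture(3, 32, "tanh"),
    "fluid_dynamics": PINNArchitecture(6, 64, "tanh"),
    "structural_mechanics": PINNArchitecture(4, 48, "tanh"),
    "electromagnetics": PINNArchitecture(5, 64, "sin"),
    "wave_propagation": PINNArchitecture(5, 48, "sin"),
}


def recommend_architecture(domain: str, complexity: float) -> PINNArchitecture:
    """Scale the domain baseline by a complexity score estimated from
    geometry, boundary conditions, and nonlinearity (crude rule for illustration)."""
    base = DOMAIN_BASELINES[domain]
    scale = 1.0 + complexity
    return PINNArchitecture(
        hidden_layers=round(base.hidden_layers * scale),
        neurons_per_layer=round(base.neurons_per_layer * scale),
        activation=base.activation,
    )
```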
- POST /pinn/solve - Submit physics problem for PINN solution
- GET /pinn/status/{workflow_id} - Check workflow status
- GET /pinn/results/{workflow_id} - Retrieve simulation results
- POST /pinn/inference/{workflow_id} - Real-time inference
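An end-to-end usage sketch of these endpoints with Python requests. The base URL comes from the deployment note above, and the request payload fields (domain, geometry, boundary_conditions, points) are illustrative assumptions; the exact schema is defined in the API documentation:

```python
import time
import requests

BASE_URL = "https://api.ensimu.space"  # production URL announced above

# Submit a heat-transfer problem (payload fields are illustrative).
resp = requests.post(
    f"{BASE_URL}/pinn/solve",
    json={
        "domain": "heat_transfer",
        "geometry": {"type": "interval", "bounds": [0.0, 1.0]},
        "boundary_conditions": [{"type": "dirichlet", "value": 0.0}],
    },
    timeout=30,
)
workflow_id = resp.json()["workflow_id"]

# Poll the workflow until training finishes.
while True:
    status = requests.get(f"{BASE_URL}/pinn/status/{workflow_id}", timeout=30).json()
    if status.get("status") in ("completed", "failed"):
        break
    time.sleep(10)

# Fetch results, then run a real-time inference request against the trained model.
results = requests.get(f"{BASE_URL}/pinn/results/{workflow_id}", timeout=30).json()
prediction = requests.post(
    f"{BASE_URL}/pinn/inference/{workflow_id}",
    json={"points": [[0.5, 0.1]]},
    timeout=30,
).json()
```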
Typical costs for different problem types:
- Simple heat transfer: $0.10 - $0.50 per solution
- Complex fluid dynamics: $2.00 - $10.00 per solution
- Real-time inference: $0.001 - $0.01 per request
MIT License - see LICENSE file for details.