A secure, AI-powered emotional assistant that enhances communication across text, audio, and video conversations.
Empowering better communication through real-time emotional intelligence, while maintaining user privacy and control.
- 🎯 Emotion-Aware Analysis: Real-time detection of emotional states, stress levels, and conversation dynamics
- 💡 Live Communication Guidance: Contextual suggestions for better communication
- 📝 Smart Summaries: Automated insights and action items from conversations
- 🔒 Privacy-First: Local-first architecture with user-controlled data sharing
- 🎥 Multi-Modal Support: Works with text, audio, and video communications
- Frontend: Next.js with TypeScript
- Backend: Node.js with Express
- AI Processing: TensorFlow.js for local processing
- Real-time Communication: WebRTC
- State Management: Redux Toolkit
- Styling: Tailwind CSS
- Testing: Jest and React Testing Library
- Clone the repository
- Install dependencies:
  ```bash
  npm install
  ```
- Set up environment variables:
  ```bash
  cp .env.example .env.local
  ```
- Start the development server:
  ```bash
  npm run dev
  ```
```
├── src/
│   ├── components/   # Reusable UI components
│   ├── pages/        # Next.js pages and API routes
│   ├── services/     # Core services (AI, WebRTC, etc.)
│   ├── store/        # Redux Toolkit store and slices
│   ├── styles/       # Global styles
│   └── types/        # TypeScript type definitions
├── public/           # Static assets
├── scripts/          # Utility scripts
├── tests/            # Test files
└── docs/             # Documentation
```
```mermaid
sequenceDiagram
    participant User
    participant UI
    participant AI
    participant Storage

    loop Text Analysis
        User->>UI: Types message
        UI->>AI: Analyze emotion
        AI->>Storage: Save results
        Storage->>UI: Update display
    end

    loop Audio Processing
        User->>UI: Speaks
        UI->>AI: Analyze tone
        AI->>Storage: Save metrics
        Storage->>UI: Show feedback
    end

    loop Video Analysis
        User->>UI: Video feed
        UI->>AI: Analyze expressions
        AI->>Storage: Save data
        Storage->>UI: Display insights
    end
```
- User Input → 🎤 Audio → 🧠 AI Analysis → 💡 Insights
- User Input → 📝 Text → 🧠 AI Analysis → 💡 Insights
- User Input → 🎥 Video → 🧠 AI Analysis → 💡 Insights
```mermaid
graph LR
    A[User] -->|Text Input| B[TextChat.tsx]
    B -->|storeText| C[conversationSlice.ts]
    B -->|analyze| D[TextAnalysisService.ts]
    D -->|process| E[TensorFlow.js]
    E -->|results| D
    D -->|update| C
    C -->|render| F[EmotionAnalysisVisualizer.tsx]
    F --> A
```
Key Components:
- `src/components/TextChat.tsx`: Text input interface
- `src/services/TextAnalysisService.ts`: Text processing service (sketched below)
- `src/store/slices/conversationSlice.ts`: Redux storage
- `src/components/EmotionAnalysisVisualizer.tsx`: Visualization component
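A minimal sketch of what `TextAnalysisService.ts` might look like, assuming a TensorFlow.js layers model served from a placeholder path; the label set and the toy encoder are illustrative, not the repo's actual tokenizer or model configuration:

```typescript
import * as tf from '@tensorflow/tfjs';

// Hypothetical emotion labels; the model's real output classes live in the repo.
const EMOTIONS = ['joy', 'sadness', 'anger', 'fear', 'neutral'] as const;

export class TextAnalysisService {
  private model: tf.LayersModel | null = null;

  // Load the emotion model once; '/models/emotion/model.json' is a placeholder path.
  async init(): Promise<void> {
    this.model = await tf.loadLayersModel('/models/emotion/model.json');
  }

  // Analyze a message and return a score per emotion label.
  async analyze(text: string): Promise<Record<string, number>> {
    if (!this.model) await this.init();
    const input = this.encode(text);                        // [1, maxLen] tensor
    const output = this.model!.predict(input) as tf.Tensor;
    const scores = await output.data();
    input.dispose();
    output.dispose();
    return Object.fromEntries(EMOTIONS.map((e, i) => [e, scores[i]]));
  }

  // Toy bag-of-characters encoding; the real service would use whatever
  // tokenizer the model was trained with.
  private encode(text: string, maxLen = 64): tf.Tensor2D {
    const codes = Array.from(text.slice(0, maxLen), (c) => c.charCodeAt(0) % 256);
    while (codes.length < maxLen) codes.push(0);
    return tf.tensor2d([codes], [1, maxLen]);
  }
}
```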
```mermaid
graph LR
    A[User] -->|Microphone| B[AudioChat.tsx]
    B -->|stream| C[WebRTCService.ts]
    B -->|chunk| D[audioProcessor.ts]
    D -->|analyze| E[TensorFlow.js]
    E -->|tone| D
    D -->|update| F[conversationSlice.ts]
    F -->|render| G[EmotionAnalysisVisualizer.tsx]
    G --> A
```
Key Components:
- `src/components/AudioChat.tsx`: Audio interface
- `src/services/WebRTCService.ts`: Real-time communication (sketched below)
- `src/scripts/audioProcessor.ts`: Audio processing script
- `src/services/AudioAnalysisService.ts`: Audio analysis service
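For orientation, a minimal `WebRTCService` sketch: it captures a local audio stream and attaches it to a peer connection. The STUN server is a placeholder, and signaling (offer/answer exchange) is omitted since both are deployment-specific:

```typescript
// Minimal WebRTCService sketch: capture local audio and attach it to a peer
// connection; the same stream can also feed the local analysis pipeline.
export class WebRTCService {
  private pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }], // placeholder STUN server
  });

  async startAudio(): Promise<MediaStream> {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    stream.getTracks().forEach((track) => this.pc.addTrack(track, stream));
    return stream; // hand this to the local analysis pipeline as well
  }

  async createOffer(): Promise<RTCSessionDescriptionInit> {
    const offer = await this.pc.createOffer();
    await this.pc.setLocalDescription(offer);
    return offer; // send via your signaling channel
  }
}
```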
```mermaid
graph LR
    A[User] -->|Camera| B[VideoChat.tsx]
    B -->|stream| C[WebRTCService.ts]
    B -->|frame| D[videoProcessor.ts]
    D -->|analyze| E[TensorFlow.js]
    E -->|expressions| D
    D -->|update| F[conversationSlice.ts]
    F -->|render| G[EmotionAnalysisVisualizer.tsx]
    G --> A
```
Key Components:
- `src/components/VideoChat.tsx`: Video interface
- `src/scripts/videoProcessor.ts`: Frame processing (sketched below)
- `src/services/VideoAnalysisService.ts`: Video analysis
- Shared `conversationSlice.ts` for state management
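A sketch of the frame-grabbing side of `videoProcessor.ts`, assuming frames are sampled from a `<video>` element on a timer; the sampling interval is an arbitrary choice:

```typescript
// Grab a single frame from a <video> element so it can be handed to the
// analysis service. ImageData keeps the pixels on the CPU; tf.browser.fromPixels
// can alternatively read the element directly.
export function captureFrame(video: HTMLVideoElement): ImageData {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext('2d');
  if (!ctx) throw new Error('2D canvas context unavailable');
  ctx.drawImage(video, 0, 0);
  return ctx.getImageData(0, 0, canvas.width, canvas.height);
}

// Sample frames on an interval rather than on every repaint to keep analysis cheap.
export function sampleFrames(
  video: HTMLVideoElement,
  onFrame: (frame: ImageData) => void,
  intervalMs = 500,
): () => void {
  const id = setInterval(() => onFrame(captureFrame(video)), intervalMs);
  return () => clearInterval(id); // call to stop sampling
}
```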
```mermaid
graph TD
    User[User Input] --> Interface{Communication Interface}
    Interface --> Processing[Data Processing]
    Processing --> AI[Local AI Analysis]
    AI --> Storage[State Management]
    Storage --> Feedback[Real-time Feedback]
    Feedback --> User
```
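One way to type the data flowing through this pipeline; the names below are illustrative, not the repo's actual definitions in `src/types/`:

```typescript
// Illustrative pipeline types; the real definitions live in src/types/.
export type Modality = 'text' | 'audio' | 'video';

export interface EmotionScores {
  [emotion: string]: number; // e.g. { joy: 0.7, anger: 0.1 }
}

export interface AnalysisResult {
  modality: Modality;
  scores: EmotionScores;
  stressLevel: number; // 0..1
  timestamp: number;   // ms since epoch
}

// If every analysis service implements the same contract, the UI and the store
// can treat text, audio, and video results uniformly.
export interface AnalysisService<Input> {
  analyze(input: Input): Promise<AnalysisResult>;
}
```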
- User Input: Text is entered into the UI via `src/components/TextChat.tsx`.
- Component Handling: The `TextChat` component dispatches a Redux action (`storeText`) to temporarily store the text in `src/store/slices/conversationSlice.ts` (see the slice sketch below).
- AI Processing: `src/services/TextAnalysisService.ts` sends the text to TensorFlow.js (`emotionDetectionModel`).
- Feedback: Results are stored in `conversationSlice` and displayed via `src/components/EmotionAnalysisVisualizer.tsx`.
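A minimal Redux Toolkit sketch of the slice behavior described above, reusing the hypothetical `AnalysisResult` type from the earlier sketch (the import path is an assumption):

```typescript
import { createSlice, PayloadAction } from '@reduxjs/toolkit';
import type { AnalysisResult } from '@/types/analysis'; // hypothetical path; see the types sketch above

// Hypothetical shape of the slice; the real one lives in
// src/store/slices/conversationSlice.ts.
interface ConversationState {
  pendingText: string;       // raw message waiting to be analyzed
  results: AnalysisResult[]; // finished analyses for the visualizer
}

const initialState: ConversationState = { pendingText: '', results: [] };

const conversationSlice = createSlice({
  name: 'conversation',
  initialState,
  reducers: {
    // Temporarily store the raw text before analysis.
    storeText(state, action: PayloadAction<string>) {
      state.pendingText = action.payload;
    },
    // Append a finished analysis so EmotionAnalysisVisualizer can render it.
    addResult(state, action: PayloadAction<AnalysisResult>) {
      state.results.push(action.payload);
      state.pendingText = '';
    },
  },
});

export const { storeText, addResult } = conversationSlice.actions;
export default conversationSlice.reducer;
```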
- User Input: Audio is captured via `src/components/AudioChat.tsx` (using the browser's `MediaRecorder` API).
- Component Handling: `src/scripts/audioProcessor.ts` processes the stream via `src/services/WebRTCService.ts` (`audioStreamModule`) or stores it locally (`localAudioStorage`); see the capture sketch below.
- AI Processing: `src/services/AudioAnalysisService.ts` sends data to TensorFlow.js (`toneAnalysisModel`).
- Feedback: Real-time suggestions are rendered by `src/components/EmotionAnalysisVisualizer.tsx`.
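A sketch of the capture step, assuming one-second `MediaRecorder` chunks in `audio/webm`; both values are assumptions, and the real `audioProcessor.ts` may chunk differently:

```typescript
// Record the microphone in short chunks and hand each chunk to a callback,
// e.g. AudioAnalysisService.analyze.
export async function startAudioCapture(
  onChunk: (chunk: Blob) => void,
  chunkMs = 1000,
): Promise<() => void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });

  recorder.ondataavailable = (event) => {
    if (event.data.size > 0) onChunk(event.data); // one ~1s audio chunk
  };

  recorder.start(chunkMs); // emit dataavailable every chunkMs milliseconds
  return () => {
    recorder.stop();
    stream.getTracks().forEach((t) => t.stop()); // release the microphone
  };
}
```

A caller would pass the analysis service's entry point as `onChunk` and invoke the returned function to stop recording and release the device.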
- User Input: Video is captured via `src/components/VideoChat.tsx` (using the browser's `getUserMedia` API).
- Component Handling: Frames are processed by `src/scripts/videoProcessor.ts` via `WebRTCService.ts` (`videoStreamModule`) or analyzed locally (`localFrameAnalysis`).
- AI Processing: `src/services/VideoAnalysisService.ts` uses TensorFlow.js (`facialExpressionModel`); see the inference sketch below.
- Feedback: Insights are displayed by `src/components/EmotionAnalysisVisualizer.tsx`.
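A sketch of the inference step with TensorFlow.js. The 48x48 grayscale input and the expression labels are assumptions modeled on common FER-style setups, not the repo's actual `facialExpressionModel` configuration:

```typescript
import * as tf from '@tensorflow/tfjs';

// Illustrative label order; the real model's classes live in the repo.
const EXPRESSIONS = ['happy', 'sad', 'angry', 'surprised', 'neutral'];

// Run one video frame through a facial-expression model and return label scores.
export async function analyzeFrame(
  model: tf.LayersModel,
  video: HTMLVideoElement,
): Promise<Record<string, number>> {
  const scores = tf.tidy(() => {
    const input = tf.browser
      .fromPixels(video)        // RGB frame from the <video> element
      .resizeBilinear([48, 48]) // match the model's expected input size
      .mean(2)                  // collapse RGB to grayscale
      .toFloat()
      .div(255)                 // normalize to [0, 1]
      .expandDims(0)            // batch dimension
      .expandDims(-1);          // channel dimension -> [1, 48, 48, 1]
    return model.predict(input) as tf.Tensor;
  });
  const data = await scores.data();
  scores.dispose();
  return Object.fromEntries(EXPRESSIONS.map((e, i) => [e, data[i]]));
}
```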
We welcome contributions! Please follow these steps:
- Fork and clone the repository.
- Install dependencies (`npm install`).
- Set up environment variables (`cp .env.example .env.local`).
- Make your changes and ensure they adhere to the project's coding standards.
- Test your changes thoroughly.
- Submit a pull request with a clear description of your changes.

Please refer to the CONTRIBUTING.md file for more detailed guidelines.
- Users interact with the application through its frontend interface.
- The application integrates with communication platforms to access text, audio, or video streams (details of integration mechanisms are handled within the services layer).
- Real-time data is fed to the local AI processing module (TensorFlow.js).
- Emotion analysis and communication dynamics are detected locally.
- The application provides real-time feedback and suggestions to the user based on the analysis.
- Optionally, conversations can be summarized and action items generated; both are stored locally and accessible via the UI.
- User settings control privacy preferences and data sharing options.
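One possible shape for those user-controlled settings, with illustrative field names and privacy-first defaults (the repo's actual settings model may differ):

```typescript
// Illustrative privacy settings; every field defaults to the most private option.
export interface PrivacySettings {
  storeConversationsLocally: boolean; // keep history on-device only
  shareAnalyticsWithPeers: boolean;   // opt-in sharing of emotion metrics
  enableSummaries: boolean;           // allow summary/action-item generation
  retentionDays: number;              // auto-delete local data after N days
}

export const defaultPrivacySettings: PrivacySettings = {
  storeConversationsLocally: true,
  shareAnalyticsWithPeers: false,
  enableSummaries: false,
  retentionDays: 30,
};
```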