Rutgers Health AI Feedback System

A comprehensive AI-powered feedback system for oral presentations in healthcare education, specifically designed for Summative OSCE (Objective Structured Clinical Examination) feedback. This system provides real-time analysis of medical presentations with detailed feedback on clinical reasoning, communication skills, and presentation structure.

🚀 Features

Core Functionality

  • 🎤 Speech-to-Text: Automatic transcription using OpenAI Whisper
  • 🧠 AI Analysis: AWS Bedrock with Claude Opus for clinical data extraction
  • 📊 Clinical Analysis: OPQRST analysis, gap detection, flow analysis, and communication metrics
  • 📝 Rubric Scoring: Automated scoring aligned with OSCE rubric criteria
  • 💬 Detailed Feedback: Comprehensive feedback generation with evidence references
  • 🌐 Web Interface: Modern React/Next.js frontend with real-time updates
  • 📱 Responsive Design: Mobile-friendly interface for various devices

Analysis Components

  • OPQRST Analysis: Systematic evaluation of medical history elements
  • Gap Detection: Identification of missing clinical information
  • Flow Analysis: Assessment of presentation structure and organization
  • Communication Metrics: Evaluation of communication effectiveness
  • Rubric Scoring: Multi-dimensional scoring across clinical domains
  • Teaching Points: Educational guidance for improvement

🏗️ Architecture

Backend (Python/FastAPI)

backend/
├── integrated_main.py          # Main FastAPI server
├── demo_pipeline_any_audio.py  # Analysis pipeline
├── logging_backend.py          # Logging utilities
└── temp_uploads/              # Temporary file storage

Frontend (React/Next.js)

rutgers-health-frontend/
├── src/
│   ├── app/                   # Next.js app router
│   ├── components/           # React components
│   │   ├── UploadTab.tsx     # File upload interface
│   │   ├── FullTranscriptDisplay.tsx  # Transcript viewer
│   │   ├── DetailedAnalysisTab.tsx    # Analysis results
│   │   ├── ResultsTab.tsx    # Scoring and metrics
│   │   └── DashboardTab.tsx  # System analytics
│   └── lib/                   # Utilities and stores
│       ├── store.ts          # Zustand state management
│       ├── api.ts            # API client
│       └── debugLogger.ts    # Debug logging

Analysis Pipeline

src/
├── opqrst_analyzer.py         # OPQRST analysis
├── gap_detector.py           # Gap detection
├── flow_analyzer.py          # Flow analysis
├── communication_metrics.py  # Communication evaluation
├── feedback_generator.py     # Feedback generation
└── bedrock_client.py         # AWS Bedrock integration
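
The modules above are composed into a single pipeline by `demo_pipeline_any_audio.py`. As an illustration only, with toy stand-ins for the analyzers (the real function signatures in `src/` may differ), the chaining looks roughly like this:

```python
# Illustrative sketch only: toy stand-ins for the real analyzers in src/,
# whose actual signatures may differ.

def opqrst_analyzer(transcript: str) -> dict:
    """Toy OPQRST check: did the presenter mention each history element?"""
    keywords = {
        "Onset": "began", "Provocation": "worse", "Quality": "sharp",
        "Radiation": "radiat", "Severity": "scale", "Timing": "hours",
    }
    text = transcript.lower()
    return {element: kw in text for element, kw in keywords.items()}

def gap_detector(opqrst: dict) -> list:
    """A gap is any OPQRST element the presenter never covered."""
    return [element for element, covered in opqrst.items() if not covered]

def run_pipeline(transcript: str) -> dict:
    """Chain the analysis stages over one transcript."""
    opqrst = opqrst_analyzer(transcript)
    return {"opqrst": opqrst, "gaps": gap_detector(opqrst)}

result = run_pipeline("Pain began three hours ago, sharp, 7 on a 10-point scale.")
print(result["gaps"])  # elements the toy heuristic did not find
```

The real pipeline performs the same stage-by-stage hand-off, but with Whisper transcription up front and Bedrock-backed analysis instead of keyword heuristics.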

🛠️ Installation & Setup

Prerequisites

  • Python 3.9+
  • Node.js 18+
  • AWS Account with Bedrock access
  • Audio files in WAV format

Backend Setup

  1. Clone the repository
git clone https://github.com/patelshrey40/rutgers-health-ai-feedback.git
cd rutgers-health-ai-feedback
  2. Install Python dependencies
pip install -r requirements.txt
  3. Configure AWS credentials by creating aws-config.json:
{
  "region": "us-east-1",
  "access_key_id": "YOUR_AWS_ACCESS_KEY_ID",
  "secret_access_key": "YOUR_AWS_SECRET_ACCESS_KEY",
  "session_token": "YOUR_AWS_SESSION_TOKEN"
}
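
To catch credential mistakes early, the config file can be validated before any Bedrock call. A minimal sketch (the real `bedrock_client.py` may load the file differently; `load_aws_config` is a hypothetical name):

```python
# Hypothetical config loader: validates aws-config.json before any AWS call.
import json

def load_aws_config(path: str = "aws-config.json") -> dict:
    with open(path) as f:
        cfg = json.load(f)
    required = {"region", "access_key_id", "secret_access_key"}
    missing = required - cfg.keys()
    if missing:
        raise ValueError(f"aws-config.json is missing keys: {sorted(missing)}")
    # session_token is optional: only needed for temporary credentials.
    return cfg
```

A `boto3.Session` would then consume these values when constructing the Bedrock client.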

Frontend Setup

  1. Navigate to the frontend directory
cd rutgers-health-frontend
  2. Install dependencies
npm install

🚀 How to Start the Project

Method 1: Manual Startup (Recommended for Development)

Step 1: Start the Backend Server

# Navigate to project root
cd /path/to/rutgers-health-ai-feedback

# Start the backend server
cd backend
python integrated_main.py

Expected Output:

INFO:     Started server process [12345]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)

Step 2: Start the Frontend Server (New Terminal)

# Open a new terminal window
cd /path/to/rutgers-health-ai-feedback/rutgers-health-frontend

# Start the frontend development server
npm run dev

Expected Output:

▲ Next.js 15.5.4
- Local:        http://localhost:3000
- Network:      http://192.168.1.100:3000

✓ Ready in 2.3s

Step 3: Verify Both Services

  • Backend: Visit http://localhost:8000 - should show API welcome message
  • Frontend: Visit http://localhost:3000 - should show the application interface
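
The same check can be scripted with the standard library alone (a hypothetical helper, not part of the repo):

```python
# Hypothetical readiness probe for the two local servers; stdlib only.
import urllib.error
import urllib.request

def is_up(url: str, timeout: float = 3.0) -> bool:
    """True if anything answers HTTP at `url`, even with an error status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True   # the server responded, just not with 2xx
    except OSError:
        return False  # connection refused, timeout, DNS failure

print("backend :", is_up("http://localhost:8000/"))
print("frontend:", is_up("http://localhost:3000/"))
```

Both lines should print `True` once the servers from Steps 1 and 2 are running.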

Method 2: Using Docker (Production)

Start with Docker Compose

# From project root
docker-compose up -d

Check Docker Status

docker-compose ps

Method 3: Development Scripts

Create startup scripts (optional)

Create start_backend.sh:

#!/bin/bash
cd backend
python integrated_main.py

Create start_frontend.sh:

#!/bin/bash
cd rutgers-health-frontend
npm run dev

Make them executable:

chmod +x start_backend.sh start_frontend.sh

🛑 How to Stop the Project

Method 1: Manual Stop

Stop Backend Server

# In the terminal running the backend
# Press Ctrl+C to stop the server

Stop Frontend Server

# In the terminal running the frontend
# Press Ctrl+C to stop the development server

Method 2: Using Docker

# Stop all services
docker-compose down

# Stop and remove volumes
docker-compose down -v

Method 3: Kill Processes by Port

Find and Kill Backend Process

# Find process using port 8000
lsof -ti:8000

# Kill the process
kill -9 $(lsof -ti:8000)

Find and Kill Frontend Process

# Find process using port 3000
lsof -ti:3000

# Kill the process
kill -9 $(lsof -ti:3000)

Method 4: System-wide Process Cleanup

# Kill all Python processes (be careful!)
pkill -f "python.*integrated_main"

# Kill all Node processes (be careful!)
pkill -f "node.*next"

🔄 Complete Workflow Example

Starting the Project

# Terminal 1: Start Backend
cd /path/to/rutgers-health-ai-feedback
cd backend
python integrated_main.py

# Terminal 2: Start Frontend (after backend is running)
cd /path/to/rutgers-health-ai-feedback
cd rutgers-health-frontend
npm run dev

Using the Application

  1. Open browser to http://localhost:3000
  2. Go to Upload tab
  3. Upload a WAV file
  4. Wait for processing
  5. View results in Transcript, Detailed Analysis, Results, and Dashboard tabs

Stopping the Project

# Terminal 1: Stop Backend
# Press Ctrl+C

# Terminal 2: Stop Frontend
# Press Ctrl+C

🔧 Development Workflow

Daily Development

# Morning: Start the project
cd /path/to/project
cd backend && python integrated_main.py &
cd ../rutgers-health-frontend && npm run dev &

# Evening: Stop the project
pkill -f "python.*integrated_main"
pkill -f "node.*next"

Testing Changes

# Make changes to code
# Backend: Restart with Ctrl+C and run again
# Frontend: Usually auto-reloads, or restart with Ctrl+C and npm run dev

Debugging

# Check if ports are in use
netstat -an | grep :8000
netstat -an | grep :3000

# Check running processes
ps aux | grep python
ps aux | grep node

📖 Usage

Web Interface

  1. Upload Audio File

    • Navigate to the Upload tab
    • Select a WAV audio file
    • Click "Upload and Analyze"
    • Wait for processing to complete
  2. View Results

    • Transcript Tab: View the full transcription
    • Detailed Analysis Tab: See comprehensive analysis results
    • Results Tab: View scoring and metrics
    • Dashboard Tab: See system analytics

Command Line Interface

Run the analysis pipeline directly:

python demo_pipeline_any_audio.py <case_id> [whisper_model]

Examples:

# Use default case with base model
python demo_pipeline_any_audio.py

# Use specific case with medium model
python demo_pipeline_any_audio.py 0042 medium

# Available Whisper models: tiny, base, small, medium, large

Audio File Structure

Place your audio files in:

data/shared-dataset/Oral_presentations_audio_out_anon/RUHH_Oral_{case_id}_bleeped_anon.wav
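
For scripting, that naming convention can be captured in a small helper (hypothetical, not part of the repo):

```python
# Hypothetical helper mirroring the audio file naming convention above.
from pathlib import Path

AUDIO_DIR = Path("data/shared-dataset/Oral_presentations_audio_out_anon")

def audio_path(case_id: str) -> Path:
    """Return the expected audio file path for a case ID such as '0042'."""
    return AUDIO_DIR / f"RUHH_Oral_{case_id}_bleeped_anon.wav"

print(audio_path("0042"))
```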

📊 Output & Results

Generated Files

The system creates comprehensive output files in demo_output_whisper/:

  • {case_id}_feedback.json - Complete analysis results
  • {case_id}_metadata.json - Processing metadata
  • {case_id}_structured.json - Structured clinical data
  • {case_id}_whisper_transcript.txt - Full transcription
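
Downstream scripts can read these back into memory, assuming the file names listed above (the loader itself is a sketch, not part of the repo):

```python
# Sketch: load a case's generated output files back into memory.
import json
from pathlib import Path

def load_results(case_id: str, out_dir: str = "demo_output_whisper") -> dict:
    out = Path(out_dir)
    return {
        "feedback":   json.loads((out / f"{case_id}_feedback.json").read_text()),
        "metadata":   json.loads((out / f"{case_id}_metadata.json").read_text()),
        "structured": json.loads((out / f"{case_id}_structured.json").read_text()),
        "transcript": (out / f"{case_id}_whisper_transcript.txt").read_text(),
    }
```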

Analysis Results

  • OPQRST Analysis: Coverage of medical history elements
  • Gap Detection: Missing clinical information identification
  • Flow Analysis: Presentation structure evaluation
  • Communication Metrics: Communication effectiveness scores
  • Rubric Scores: Multi-dimensional clinical scoring
  • Detailed Feedback: Comprehensive improvement recommendations

🔧 Configuration

Environment Variables

# AWS Configuration
export AWS_DEFAULT_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="your_access_key"
export AWS_SECRET_ACCESS_KEY="your_secret_key"
export AWS_SESSION_TOKEN="your_session_token"

# Optional: Custom model settings
export WHISPER_MODEL="base"  # tiny, base, small, medium, large
export BEDROCK_MODEL_ID="anthropic.claude-3-opus-20240229-v1:0"

Whisper Model Selection

  • tiny: Fastest, least accurate
  • base: Good balance (recommended)
  • small: Better accuracy
  • medium: High accuracy
  • large: Best accuracy, slowest
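
The WHISPER_MODEL environment variable from the configuration section maps directly onto these names. A validating selector might look like this (hypothetical helper; `whisper.load_model` is the openai-whisper entry point):

```python
# Hypothetical selector for the Whisper model names listed above.
import os

VALID_MODELS = ("tiny", "base", "small", "medium", "large")

def select_whisper_model(default: str = "base") -> str:
    name = os.environ.get("WHISPER_MODEL", default)
    if name not in VALID_MODELS:
        raise ValueError(f"WHISPER_MODEL must be one of {VALID_MODELS}, got {name!r}")
    return name

# model = whisper.load_model(select_whisper_model())  # openai-whisper
```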

🧪 Testing

Run Tests

# Test complete workflow
python test_complete_workflow.py

# Test frontend integration
python test_frontend_integration.py

# Test analysis modules
python test_analysis_modules.py

Debug Mode

Enable debug logging:

# Backend debug
python logging_backend.py

# Frontend debug
# Open browser console to view debug logs

🐛 Troubleshooting

Common Issues

  1. Whisper FP16 Error

    • Solution: System automatically uses FP32 on CPU
    • Ensure sufficient RAM for model loading
  2. AWS Bedrock Access

    • Verify AWS credentials in aws-config.json
    • Ensure Bedrock access is enabled in your AWS account
    • Check region compatibility
  3. Memory Issues

    • Use smaller Whisper models (tiny, base)
    • Ensure sufficient RAM (8GB+ recommended)
    • Close other applications during processing
  4. Audio Format Issues

    • Only WAV files are supported
    • Ensure audio quality is good
    • Check file size (large files may time out)
  5. Port Already in Use

    # Check what's using port 8000
    lsof -ti:8000
    
    # Check what's using port 3000
    lsof -ti:3000
    
    # Kill processes if needed
    kill -9 $(lsof -ti:8000)
    kill -9 $(lsof -ti:3000)
  6. Backend Won't Start

    # Check Python version
    python --version
    
    # Check if dependencies are installed
    pip list | grep fastapi
    
    # Reinstall dependencies
    pip install -r requirements.txt
  7. Frontend Won't Start

    # Check Node version
    node --version
    
    # Clear npm cache
    npm cache clean --force
    
    # Reinstall dependencies
    rm -rf node_modules package-lock.json
    npm install
  8. Analysis Not Working

    • Check AWS credentials are valid
    • Verify audio file is in WAV format
    • Check backend logs for error messages
    • Ensure sufficient disk space for output files

Debug Tools

  1. Debug Tab: Use the Debug tab in the frontend to test API connections
  2. Console Logs: Check browser console for detailed error messages
  3. Backend Logs: Monitor terminal output for processing status
  4. Network Tab: Use browser dev tools to inspect API calls

Quick Health Checks

# Check if backend is running
curl http://localhost:8000/

# Check if frontend is running
curl http://localhost:3000/

# Check system resources
top
htop

# Check disk space
df -h

# Check memory usage
free -h

📋 Quick Reference

Essential Commands

Start Everything

# Terminal 1: Backend
cd backend && python integrated_main.py

# Terminal 2: Frontend  
cd rutgers-health-frontend && npm run dev

Stop Everything

# Method 1: Ctrl+C in each terminal
# Method 2: Kill by port
kill -9 $(lsof -ti:8000) $(lsof -ti:3000)

Reset Everything

# Stop all processes
pkill -f "python.*integrated_main"
pkill -f "node.*next"

# Clear temporary files
rm -rf backend/temp_uploads/*
rm -rf demo_output_whisper/*

# Restart
cd backend && python integrated_main.py &
cd ../rutgers-health-frontend && npm run dev &

File Locations

Project Structure:
├── backend/
│   ├── integrated_main.py          # Main server
│   ├── temp_uploads/              # Uploaded files
│   └── demo_output_whisper/       # Analysis results
├── rutgers-health-frontend/
│   ├── src/components/            # React components
│   └── src/lib/                  # Utilities
├── src/                          # Analysis modules
├── data/                         # Sample audio files
├── aws-config.json              # AWS credentials
└── requirements.txt             # Python dependencies

Important URLs

  • Frontend: http://localhost:3000
  • Backend API: http://localhost:8000

Environment Variables

# Optional: Set these for custom configuration
export AWS_DEFAULT_REGION="us-east-1"
export WHISPER_MODEL="base"
export BEDROCK_MODEL_ID="anthropic.claude-3-opus-20240229-v1:0"

Log Locations

  • Backend Logs: Terminal output where you run python integrated_main.py
  • Frontend Logs: Browser console (F12 → Console)
  • Debug Logs: Check browser console for detailed API calls
  • Error Logs: Backend terminal and browser console

📈 Performance

Processing Times

  • Tiny Model: ~30 seconds
  • Base Model: ~60 seconds (recommended)
  • Small Model: ~90 seconds
  • Medium Model: ~2-3 minutes
  • Large Model: ~5-10 minutes

System Requirements

  • RAM: 8GB+ recommended
  • CPU: Multi-core processor
  • Storage: 2GB+ for models and dependencies
  • Network: Stable internet for AWS Bedrock access

🔒 Security

Data Handling

  • Audio files are processed locally
  • Transcripts are stored temporarily in memory
  • No persistent storage of sensitive medical data
  • AWS credentials should be kept secure

Best Practices

  • Use environment variables for credentials
  • Don't commit aws-config.json to version control
  • Regularly rotate AWS access keys
  • Use IAM roles with minimal required permissions
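
Following those practices, credentials can be resolved from the environment first, with aws-config.json only as a fallback (sketch; the function name and precedence order are assumptions):

```python
# Sketch: environment variables take precedence over aws-config.json.
import json
import os
from pathlib import Path

def resolve_credentials(config_path: str = "aws-config.json") -> dict:
    env = {
        "region": os.environ.get("AWS_DEFAULT_REGION"),
        "access_key_id": os.environ.get("AWS_ACCESS_KEY_ID"),
        "secret_access_key": os.environ.get("AWS_SECRET_ACCESS_KEY"),
    }
    if all(env.values()):
        return env  # complete set found in the environment
    path = Path(config_path)
    if path.exists():
        return json.loads(path.read_text())  # fall back to the config file
    raise RuntimeError("No AWS credentials in environment or aws-config.json")
```

This keeps aws-config.json out of the critical path on machines where the environment is already configured.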

🤝 Contributing

Development Setup

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Test thoroughly
  5. Submit a pull request

Code Style

  • Python: Follow PEP 8 guidelines
  • TypeScript: Use ESLint configuration
  • React: Follow React best practices
  • Documentation: Update README for new features

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • OpenAI Whisper for speech-to-text capabilities
  • AWS Bedrock for LLM analysis
  • Rutgers Health for medical education context
  • Open source community for various dependencies

📞 Support

For issues and questions:

  1. Check the troubleshooting section
  2. Review existing GitHub issues
  3. Create a new issue with detailed information
  4. Include system logs and error messages

🔄 Version History

  • v1.0.0: Initial release with basic functionality
  • v1.1.0: Added web interface and real-time processing
  • v1.2.0: Enhanced analysis pipeline and improved accuracy
  • v1.3.0: Added comprehensive debugging and error handling
  • v1.4.0: Full frontend-backend integration with production-ready features

🚨 Emergency Procedures

System Completely Broken

# Nuclear option: Reset everything
pkill -f python
pkill -f node
rm -rf backend/temp_uploads/*
rm -rf demo_output_whisper/*
rm -rf rutgers-health-frontend/node_modules
rm -rf rutgers-health-frontend/.next

# Reinstall everything
pip install -r requirements.txt
cd rutgers-health-frontend && npm install

Out of Memory

# Check memory usage
free -h
top

# Kill heavy processes
pkill -f whisper
pkill -f python

# Restart with smaller model
export WHISPER_MODEL="tiny"

AWS Issues

# Test AWS connection
aws sts get-caller-identity

# Check Bedrock access
aws bedrock list-foundation-models --region us-east-1

Database/Storage Issues

# Check disk space
df -h

# Clean up old files
find demo_output_whisper/ -name "*.json" -mtime +7 -delete
find backend/temp_uploads/ -name "*.wav" -mtime +1 -delete

🔧 Maintenance

Daily Tasks

  • Check system resources: htop
  • Monitor logs for errors
  • Clean temporary files if needed

Weekly Tasks

  • Update dependencies: pip install -r requirements.txt --upgrade
  • Clean old analysis files
  • Check AWS quota usage

Monthly Tasks

  • Review and rotate AWS credentials
  • Update documentation
  • Performance optimization review

📞 Support & Help

Getting Help

  1. Check this README first
  2. Review troubleshooting section
  3. Check GitHub issues: https://github.com/patelshrey40/rutgers-health-ai-feedback/issues
  4. Create new issue with:
    • System information (OS, Python version, Node version)
    • Error messages (full stack trace)
    • Steps to reproduce
    • Log files

System Information to Include

# When reporting issues, include:
python --version
node --version
npm --version
pip list | grep -E "(fastapi|whisper|boto3)"
npm list --depth=0

Log Files to Attach

  • Backend terminal output
  • Browser console logs
  • System logs if available
  • Error screenshots

Built with ❤️ for medical education and healthcare training

Last Updated: October 2024
Version: 1.4.0
Maintainer: Rutgers Health AI Team
