Real-time remote physiological sensing application using deep learning models for non-contact vital signs monitoring.
MMRPhys-Live is a web application that monitors vital signs (heart rate and respiratory rate) through facial video analysis. It currently deploys the MMRPhys model, trained on the SCAMPS dataset, to extract physiological signals from facial video in real time.
Try the application here: [MMRPhys-Live Demo](https://physiologicailab.github.io/mmrphys-live/)
- Real-time video capture with face detection
- Blood Volume Pulse (BVP) signal extraction
- Respiratory signal extraction
- Heart rate and respiratory rate monitoring
- Real-time signal visualization with Chart.js
- Data export functionality
- Cross-platform compatibility (works on desktop and mobile browsers)
- Web Worker-based processing for improved performance
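Web Worker-based processing means frame preprocessing and model inference run off the main thread, so the UI stays responsive. A typical pattern is for the worker to accumulate preprocessed frames into a fixed-size sliding window before each inference pass. As a rough illustration of that buffering pattern (not the project's actual code; the `FrameBuffer` class and its method names are hypothetical):

```typescript
// Hypothetical sketch: a fixed-size sliding window of frames, like the
// one an inference worker might fill before running the ONNX model.
// Names (FrameBuffer, push, takeWindow) are illustrative, not from the repo.
class FrameBuffer {
  private frames: Float32Array[] = [];

  constructor(private readonly capacity: number) {}

  // Append a preprocessed frame; drop the oldest when over capacity so
  // the buffer always holds the most recent `capacity` frames.
  push(frame: Float32Array): void {
    this.frames.push(frame);
    if (this.frames.length > this.capacity) {
      this.frames.shift();
    }
  }

  // Inference should run only once a full window is available.
  get isFull(): boolean {
    return this.frames.length === this.capacity;
  }

  // Return a copy of the current window to hand to the model.
  takeWindow(): Float32Array[] {
    return [...this.frames];
  }
}
```

The sliding window lets the app produce a fresh estimate on every new frame once warmed up, rather than waiting for a whole new batch.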
- React (TypeScript)
- ONNX Runtime Web for model inference
- Face-API.js for face detection
- Chart.js for real-time data visualization
- Vite for development and bundling
- Tailwind CSS for styling
- Node.js (v16 or higher)
- npm (v7 or higher)
- Modern web browser with camera access
- Clone the repository:

  ```bash
  git clone https://github.com/PhysiologicAILab/mmrphys-live.git
  cd mmrphys-live
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Set up models (face-api.js and ONNX models):

  ```bash
  npm run setup
  ```
Start the development server:

```bash
npm run dev
```

Build the application:

```bash
npm run build
```

Preview the production build:

```bash
npm run preview
```

The application can be deployed to GitHub Pages using:

```bash
npm run deploy
```
The current deployment is available at: https://physiologicailab.github.io/mmrphys-live/
```
mmrphys-live/
├── public/                          # Static assets
│   ├── models/                      # Model files
│   │   ├── face-api/                # Face detection models
│   │   └── rphys/                   # Physiological sensing models
│   └── ort/                         # ONNX Runtime Web assets
├── src/
│   ├── components/                  # React components
│   │   ├── Controls/                # Capture control components
│   │   ├── StatusMessage/           # Status notifications
│   │   ├── VideoDisplay/            # Video feed display
│   │   └── VitalSignsChart/         # Charts for vital signs
│   ├── hooks/                       # Custom React hooks
│   ├── services/                    # Service layer
│   ├── styles/                      # CSS and styling
│   ├── types/                       # TypeScript type definitions
│   ├── utils/                       # Utility functions
│   ├── workers/                     # Web Workers
│   │   ├── inferenceWorker.ts       # ONNX model inference
│   │   └── videoProcessingWorker.ts # Video frame processing
│   ├── App.tsx                      # Main application component
│   └── main.tsx                     # Entry point
├── scripts/                         # Build and setup scripts
├── python_scripts/                  # Python utilities to read and process the acquired data
└── torch2onnx/                      # PyTorch to ONNX conversion tools
```
The physiological sensing models are configured through `public/models/rphys/config.json`. The key parameters include:
- Frame buffer size
- Sampling rate
- Physiological signal parameters (min/max rates)
- Model input/output specifications
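For illustration only, a configuration covering these parameters might look like the fragment below. The field names and values are assumptions for the sketch, not the repository's actual schema; check `public/models/rphys/config.json` for the real keys.

```json
{
  "frame_buffer_size": 181,
  "sampling_rate": 30,
  "min_heart_rate_bpm": 40,
  "max_heart_rate_bpm": 180,
  "min_resp_rate_bpm": 6,
  "max_resp_rate_bpm": 30,
  "model_input_shape": [1, 3, 181, 72, 72],
  "model_outputs": ["bvp", "resp"]
}
```

The min/max rate bounds matter because rate estimation typically searches for a spectral peak only within the plausible physiological band, which suppresses motion and lighting artifacts outside it.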
- Chrome (recommended)
- Firefox
- Safari
- Edge
The application works best on devices with good camera quality and processing power.
The application allows you to export vital signs data. After recording a session and exporting the data, you can use the included Python script to analyze and visualize the results:
1. Prerequisites:
   - Python 3.7+
   - Required packages:

     ```bash
     pip install numpy scipy matplotlib
     ```

2. Using the processing script:

   ```bash
   python python_scripts/process_data.py path/to/exported_data.json --sampling_rate 30
   ```

   This will:
   - Load the exported vital signs data
   - Filter the signals and compute heart rate and respiratory rate
   - Generate and save visualization plots of the signals and their frequency spectra
   - Compare computed values with the recorded values

3. Output:
   - Visual plots of the BVP (blood volume pulse) and respiratory signals
   - Frequency spectrum analysis
   - Heart rate and respiratory rate calculations
   - Plots saved to the same directory as the input file
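The core of the rate computation is the same for heart rate and respiratory rate: find the dominant frequency of the filtered signal within a physiological band and convert it to cycles per minute. The following is a simplified TypeScript sketch of that idea, not the Python script's actual implementation (which uses NumPy/SciPy); the function name and band limits are illustrative:

```typescript
// Simplified sketch of frequency-domain rate estimation: scan the DFT
// magnitude over a physiological band [minHz, maxHz] and report the
// peak frequency in cycles per minute. Illustrative only.
function estimateRateBpm(
  signal: number[],
  samplingRate: number,
  minHz: number,
  maxHz: number,
): number {
  const n = signal.length;
  const mean = signal.reduce((a, b) => a + b, 0) / n;
  const loBin = Math.ceil((minHz * n) / samplingRate);
  const hiBin = Math.floor((maxHz * n) / samplingRate);
  let bestBin = loBin;
  let bestPower = -Infinity;
  for (let k = loBin; k <= hiBin; k++) {
    // Naive DFT at bin k (O(n) per bin; fine for short windows).
    let re = 0;
    let im = 0;
    for (let t = 0; t < n; t++) {
      const angle = (-2 * Math.PI * k * t) / n;
      const x = signal[t] - mean; // remove DC offset
      re += x * Math.cos(angle);
      im += x * Math.sin(angle);
    }
    const power = re * re + im * im;
    if (power > bestPower) {
      bestPower = power;
      bestBin = k;
    }
  }
  // Convert the peak bin to a frequency (Hz), then to per-minute.
  return (bestBin * samplingRate * 60) / n;
}
```

For example, a clean 1.2 Hz sine sampled at 30 Hz over 30 seconds peaks at bin 36 of a 900-point DFT, giving 72 beats per minute. Restricting the search to the band (e.g. roughly 0.7–3 Hz for heart rate) is what makes the estimate robust to low-frequency drift and high-frequency noise.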
(To be updated)
Jitesh Joshi and Youngjun Cho, "Efficient and Robust Multidimensional Attention in Remote Physiological Sensing through Target Signal Constrained Factorization", In Review, 2025