ENGAGE-HF-AI-Voice is a Vapor server that integrates Twilio with OpenAI's real-time API (GPT-4o) to enable voice-based conversations for healthcare data collection.
- **Twilio + OpenAI Integration**: Receives and streams audio to and from Twilio, relaying it in real time to OpenAI's API.
- **Conversational AI on FHIR Questionnaires**: Configures GPT-4o to conduct voice conversations based on FHIR Questionnaires and records user responses in FHIR format on disk (encrypted at rest).
- **Customizable Conversation Flow**: Configure the voice assistant with multiple questionnaires, custom system prompts, and flexible session settings to tailor the conversation flow and data handling.
The ENGAGE-HF-AI-Voice assistant is configured by default with 3 questionnaires that are processed sequentially:
- Vital Signs - Collects blood pressure, heart rate, and weight (4 questions)
- KCCQ12 - Kansas City Cardiomyopathy Questionnaire (12 questions)
- Q17 - A final question on how the patient feels compared to three months ago (1 question)
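Each questionnaire's responses are stored in its own directory under the server's data directory. As a rough sketch of the resulting layout (the three directory names are taken from the decryption instructions below; the top-level path depends on `dataDirectory` in `Sources/App/constants.swift`):

```
<dataDirectory>/
├── vital_signs/            # Vital Signs responses
├── kccq12_questionnairs/   # KCCQ12 responses
└── q17/                    # Q17 responses
```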
To customize the conversation flow and questions, you can replace or modify these services:
1. Create a new service class that subclasses `BaseQuestionnaireService` and conforms to `Sendable`:

   ```swift
   @MainActor
   class YourCustomService: BaseQuestionnaireService, Sendable {
       init(phoneNumber: String, logger: Logger) {
           super.init(
               questionnaireName: "yourQuestionnaire",
               directoryPath: Constants.yourQuestionnaireDirectoryPath,
               phoneNumber: phoneNumber,
               logger: logger
           )
       }
   }
   ```
2. Add your FHIR R4 questionnaire JSON file to `Sources/App/Resources/` (e.g., `yourQuestionnaire.json`). A minimal example of such a file is sketched after these steps.
3. Add the directory path to `Sources/App/constants.swift`:

   ```swift
   static let yourQuestionnaireDirectoryPath = "\(dataDirectory)/yourQuestionnaire/"
   ```
4. Add questionnaire instructions to `Sources/App/constants.swift`. Here is an example of what they could look like:

   ```swift
   static let yourQuestionnaireInstructions = """
       Your Questionnaire Instructions:
       1. Inform the patient about this section of questions. Before you start, use the count_answered_questions function to count the number of questions that have already been answered. If the number is not 0, inform the user about the progress and that you will continue with the remaining questions. If the number is 0, inform the user that you will start with the first/initial question.
       2. For each question:
          - Ask the question from the question text clearly to the patient, start by reading the current progress, then read the question
          - Listen to the patient's response
          - Confirm their answer
          - After the answer is confirmed, save the question's linkId and answer using the save_response function
          - Move to the next question

       IMPORTANT:
       - Call save_response after each response is confirmed
       - Don't let the user end the call before ALL answers are collected
       - The function will show you progress (e.g., "Question 1 of 3") to help track completion
       """
   ```
5. Update the `getSystemMessageForService` function in `Sources/App/constants.swift` to include your service:

   ```swift
   static func getSystemMessageForService(_ service: QuestionnaireService, initialQuestion: String) -> String? {
       switch service {
       // ... other cases ...
       case is YourCustomService: // add a case for your service
           return initialSystemMessage + yourQuestionnaireInstructions + "Initial Question: \(initialQuestion)"
       default:
           return nil
       }
   }
   ```
6. Inject your service into the `ServiceState` in `Sources/App/routes.swift`:

   ```swift
   let serviceState = await ServiceState(services: [
       // ... other services ...
       YourCustomService(phoneNumber: callerPhoneNumber, logger: req.logger), // Add your service
       // ... other services ...
   ])
   ```
- **Replace a service**: Replace the service in the array with your custom implementation.
- **Reorder services**: Change the order in the array to change the conversation flow.
- **Remove services**: Remove services from the array to skip them.
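For reference, here is a minimal sketch of what a FHIR R4 Questionnaire file (step 2 above) could look like. This example is illustrative rather than a file shipped with the project; the question text and `linkId` values are placeholders, and the `linkId` is what the `save_response` function records alongside the answer:

```json
{
  "resourceType": "Questionnaire",
  "id": "yourQuestionnaire",
  "title": "Your Questionnaire",
  "status": "active",
  "item": [
    {
      "linkId": "q1",
      "text": "How are you feeling today compared to three months ago?",
      "type": "choice",
      "answerOption": [
        { "valueCoding": { "code": "better", "display": "Better" } },
        { "valueCoding": { "code": "same", "display": "About the same" } },
        { "valueCoding": { "code": "worse", "display": "Worse" } }
      ]
    }
  ]
}
```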
- **System Message (AI Behavior)**: Edit the `systemMessage` or `instruction` constants in `Sources/App/constants.swift` to customize AI behavior for each questionnaire.
- **Session Configuration (Voice, Functions, etc.)**: Modify `sessionConfig.json` in `Sources/App/Resources/` to control OpenAI-specific parameters such as:
  - Which voice model to use
  - The available function calls (e.g., saving responses)
  - Other OpenAI session settings
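For illustration, a session configuration for OpenAI's Realtime API could look roughly like the following. Treat this as a hedged sketch rather than the project's actual file: the exact keys and values are defined by the shipped `sessionConfig.json`. The `save_response` and `count_answered_questions` tool names come from the questionnaire instructions above, but the parameter schema here is an assumption:

```json
{
  "voice": "alloy",
  "modalities": ["text", "audio"],
  "input_audio_format": "g711_ulaw",
  "output_audio_format": "g711_ulaw",
  "tools": [
    {
      "type": "function",
      "name": "save_response",
      "description": "Save a confirmed answer for a questionnaire item",
      "parameters": {
        "type": "object",
        "properties": {
          "linkId": { "type": "string", "description": "The linkId of the answered question" },
          "answer": { "type": "string", "description": "The patient's confirmed answer" }
        },
        "required": ["linkId", "answer"]
      }
    },
    {
      "type": "function",
      "name": "count_answered_questions",
      "description": "Count how many questions have already been answered",
      "parameters": { "type": "object", "properties": {} }
    }
  ]
}
```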
You can run the server either in Xcode or using Docker.
To run the server locally using Xcode:
1. Add your OpenAI API key as an environment variable:
   - Open the Scheme Editor (`Product > Scheme > Edit Scheme…`).
   - Select the Run section and go to the Arguments tab.
   - Add a new environment variable: `OPENAI_API_KEY=your_key_here`
2. Build and run the server in Xcode.
3. Start ngrok to expose the local server:

   ```sh
   ngrok http http://127.0.0.1:5000
   ```

4. In your Twilio Console, update the "A call comes in" webhook URL to match the forwarding address from ngrok, appending `/incoming-call`. Example: `https://your-ngrok-url.ngrok-free.app/incoming-call`
5. Call your Twilio number and talk to the AI.
To run the server using Docker:
1. Copy the example environment file:

   ```sh
   cp .env.example .env
   ```

2. Open the `.env` file and insert your OpenAI API key and, optionally, an encryption key if you wish to encrypt the response files (you can generate one using `openssl rand -base64 32`). A minimal example is sketched after these steps.

   Optional: For internal testing, you can also set `INTERNAL_TESTING_MODE=true`, which allows completing the survey multiple times per day and serves a reduced KCCQ12 section with only three questions for faster testing.
Build and start the server:
docker compose build docker compose up app
-
Start ngrok to expose the local server:
ngrok http http://127.0.0.1:8080
-
In your Twilio Console, update the "A call comes in" webhook URL to match the forwarding address from ngrok, appending
/incoming-call
. Example:https://your-ngrok-url.ngrok-free.app/incoming-call
-
Call your Twilio number and talk to the AI.
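For reference (step 2 above), a filled-in `.env` could look like the following. `OPENAI_API_KEY` and `INTERNAL_TESTING_MODE` are named in this guide, but the name of the encryption key variable is an assumption here, so check `.env.example` for the exact variable names:

```
OPENAI_API_KEY=your_key_here
# Optional: generate with `openssl rand -base64 32` (variable name assumed; see .env.example)
ENCRYPTION_KEY=your_base64_key_here
# Optional: allow multiple surveys per day and a shortened KCCQ12 for testing
INTERNAL_TESTING_MODE=true
```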
To deploy the service in a production environment, follow these steps:
1. **Prerequisites**: Have Docker and Docker Compose installed.
2. **Prepare the Deployment Directory**
   - Create a new directory on your target machine (e.g., `engage-hf-ai-voice`).
   - Copy the following files to this directory (e.g., with `scp`, or by creating an empty file and copying over the content):
     - `docker-compose.prod.yml`
     - `nginx.conf`
3. **Configure Environment Variables**
   - Create a `.env` file in the deployment directory.
   - Add your OpenAI API key like this: `OPENAI_API_KEY=<your-api-key>`
4. **Set Up SSL Certificates**
   1. Create the required SSL certificate directories:

      ```sh
      sudo mkdir -p ./certs
      sudo mkdir -p ./private
      ```

   2. Add your SSL certificates:
      - Place your certificate file, which requires a full certificate chain (e.g., `certificate.pem`), in `./certs`.
      - Place your private key file (e.g., `certificate.key`) in `./private`.
   3. Ensure proper permissions:

      ```sh
      sudo chmod 644 ./certs/certificate.pem
      sudo chmod 600 ./private/certificate.key
      ```

   4. Update the file names in `nginx.conf`: depending on how your certificate and private key files are named, adjust the following lines in the `nginx.conf` file:

      ```nginx
      # SSL configuration with Stanford certificates
      ssl_certificate /etc/ssl/certs/voiceai-engagehf.stanford.edu/certificate.pem;
      ssl_certificate_key /etc/ssl/private/voiceai-engagehf.stanford.edu/certificate.key;
      ```
5. **Start the Service**
   - Navigate to your deployment directory.
   - Run the following command to start the service in detached mode:

     ```sh
     docker compose -f docker-compose.prod.yml up -d
     ```
The service should now be running and accessible via your configured domain. You can test the health check endpoint, e.g., via curl:

```sh
curl -I https://voiceai-engagehf.stanford.edu/health
```
To decrypt questionnaire response files for analysis:
1. Install the Python cryptography library:

   ```sh
   pip3 install cryptography
   ```

2. Run the decryption script (make sure you're in the directory containing the `vital_signs`, `kccq12_questionnairs`, and `q17` folders):

   ```sh
   chmod +x decrypt_files.sh  # make it executable
   ./decrypt_files.sh <your-base64-encryption-key>
   ```
The script will decrypt all files from the `./vital_signs/`, `./kccq12_questionnairs/`, and `./q17/` directories and save them to `./decrypted/`.
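If you need to decrypt a single file programmatically, a minimal Python sketch could look like the following. It assumes the files are Fernet-encrypted with the base64 key passed to `decrypt_files.sh`, which would match the key format and the `cryptography` dependency above; that script is the authoritative reference for the actual scheme, and the file path here is only an example:

```python
import sys

from cryptography.fernet import Fernet

# Assumption: response files are Fernet-encrypted with the base64 key
# passed to decrypt_files.sh; check that script for the actual scheme.
key = sys.argv[1]   # your base64 encryption key
path = sys.argv[2]  # e.g., a file inside ./vital_signs/

with open(path, "rb") as f:
    token = f.read()

# Decrypt and print the FHIR JSON response
plaintext = Fernet(key).decrypt(token)
print(plaintext.decode("utf-8"))
```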
This project is licensed under the MIT License. See Licenses for more information.
This project is developed as part of the Stanford Byers Center for Biodesign at Stanford University. See CONTRIBUTORS.md for a full list of all ENGAGE-HF-AI-Voice contributors.