This app sends a Recall.ai bot into a Zoom call, streams real-time transcripts to the browser, and—when the call ends—fetches the final MP4 video, MP3 audio, and full transcript, then displays download links as well as players for the audio and video in the web app.
Stack: Next.js API routes, Prisma + PostgreSQL, a tiny Node WebSocket relay, ngrok (static domain), Recall.ai Meeting Bot API.
Live features: Real-time `transcript.data` → Webhook → WS → Browser.

Post-call features: Handle `bot.status_change` (`call_ended` / `done`) → resolve `recording_id` → fetch media & transcript → persist → UI polling shows links.
- A Recall.ai workspace + API key (free to start). Create a workspace and generate an API key in the dashboard, then store the key somewhere safe (you'll need to add it to your `.env` file later).
- A static domain (ngrok is what we use here). For instructions, see the ngrok section in the appendix.
- Node.js LTS (18+ recommended)
  - macOS: `brew install node` (or use the Node installer)
  - Windows: install from https://nodejs.org or `choco install nodejs-lts`
- PostgreSQL (14+)
  - macOS: `brew install postgresql@16 && brew services start postgresql@16`
  - Windows: use the official installer or `choco install postgresql`
- ngrok (with a reserved/static domain)
  - macOS: `brew install ngrok/ngrok/ngrok`
  - Windows: download the installer or `choco install ngrok`
- pnpm (optional but recommended): `npm i -g pnpm`
- TypeScript & type definitions: already included in `devDependencies`, but if you're setting up manually:
  pnpm add -D typescript ts-node @types/node @types/express @types/ws @types/react @types/react-dom
You can swap `brew`/`choco` for GUI installers if you prefer.
After installing PostgreSQL, make sure it's running and that the target database exists.
# Start PostgreSQL (macOS)
brew services start postgresql@16
# Create the database manually if it doesn't exist yet
createdb recall_demo
On Windows, you can use pgAdmin or the command line to ensure the `recall_demo` database exists.
git clone <this-repo>
cd <this-repo>
npm install
# or
pnpm install
Create a `.env` file in the project root and copy the following into it:
# .env
# Postgres: adjust user, password, db name as needed
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/recall_demo?schema=public"
# Recall: IMPORTANT — raw key, no “Bearer ”
RECALL_API_KEY="<YOUR_RECALL_API_KEY>"
Run Prisma (only after PostgreSQL is running and the database exists):
npx prisma generate
# This creates and applies the initial migration (and runs the seed script, if one is configured)
npx prisma migrate dev -n init
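If you're recreating the schema from scratch, here's a rough sketch of what `prisma/schema.prisma` could look like, assuming only the columns referenced elsewhere in this README (`externalId`, `meetingUrl`, `botId`, `recordingId`, `videoUrl`, `audioUrl`, `text`, `speaker`, `timestamp`). Field types and optionality are illustrative, not the repo's exact definitions:

```prisma
// Illustrative sketch; the repo's actual schema.prisma is authoritative.
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model Meeting {
  id          String       @id @default(cuid())
  externalId  String       @unique // tracks the Recall bot session
  meetingUrl  String       // the original Zoom link
  botId       String?
  recordingId String?      // populated once the call ends
  videoUrl    String?      // MP4 (mixed video)
  audioUrl    String?      // MP3 (mixed audio)
  createdAt   DateTime     @default(now())
  transcripts Transcript[]
}

model Transcript {
  id        String   @id @default(cuid())
  meetingId String
  text      String   // what was said
  speaker   String?  // who said it (if available)
  timestamp DateTime // when it was spoken
  meeting   Meeting  @relation(fields: [meetingId], references: [id])
}
```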
Create or edit `~/.config/ngrok/ngrok.yml` (macOS/Linux) or `%UserProfile%\.config\ngrok\ngrok.yml` (Windows). If the file doesn't exist yet, just create it manually:
version: 2
authtoken: <YOUR_NGROK_AUTHTOKEN>
tunnels:
  web:
    proto: http
    addr: 3000
    domain: <your-static-domain>.ngrok-free.app
  # (optional) expose the WS relay too if you don't proxy it via Next
  ws:
    proto: http
    addr: 4000
    domain: <your-static-ws-domain>.ngrok-free.app
You'll reference https://<your-static-domain>.ngrok-free.app as the public base URL for webhooks.
- `pages/api/startRecall.ts` — creates the bot with `meeting_url` (your Zoom link), `webhook_url: "https://<ngrok-domain>/api/webhook"`, and `recording_config.realtime_endpoints` for real-time transcripts.
  Docs: Real-time Webhook Endpoints
- `pages/api/webhook.ts` — ACKs fast, then handles: `transcript.data` → save + broadcast to WS; `bot.status_change` → on `call_ended` / `done`, resolve `recording_id` and fetch media/transcripts (a rough sketch appears after this list).
  Docs: Real-time Webhook Endpoints, Bot status change events
- `lib/recall-media.ts` — calls the Recall API: `GET /bot/{id}` until `recordings[]` appears, prefers `media_shortcuts` (direct download URLs), falls back to `video_mixed` / `audio_mixed`, and fetches the full structured transcript.
  Docs: Bot status change events, Mixed Audio, Mixed Video
- `pages/api/userData.ts` — returns the latest meeting's transcript + `videoUrl` / `audioUrl` for the UI poller.
- `pages/api/manualRetrieve.ts` — manual "fetch artifacts now" endpoint.
- `ws-server.ts` — tiny WebSocket relay (`/recall` + `/send`).
- `prisma/schema.prisma` — schema for `Meeting` and `Transcript`.
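For orientation, here's a minimal sketch of the webhook handler's shape. The helpers `saveTranscriptLine`, `broadcastToWs`, and `fetchAndPersistArtifacts` are hypothetical stand-ins for the Prisma writes, the WS relay push, and `lib/recall-media.ts`, and the payload field names are assumptions; the real handler in this repo may differ:

```ts
// pages/api/webhook.ts — illustrative sketch only
import type { NextApiRequest, NextApiResponse } from "next";

// Hypothetical helpers standing in for this repo's actual modules.
import { saveTranscriptLine, broadcastToWs, fetchAndPersistArtifacts } from "../../lib/handlers";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.method !== "POST") return res.status(405).end();

  // ACK fast so Recall/Svix doesn't retry while we do slower work below.
  res.status(200).json({ ok: true });

  const { event, data } = req.body ?? {};

  try {
    if (event === "transcript.data" || event === "transcript.partial_data") {
      // Save the line, then push it to connected browsers via the WS relay.
      await saveTranscriptLine(data);
      broadcastToWs(data);
    }

    if (event === "bot.status_change") {
      const code = data?.status?.code;
      // When the call ends, resolve the recording and pull MP4 / MP3 / transcript.
      if (code === "call_ended" || code === "done") {
        await fetchAndPersistArtifacts(data?.bot_id ?? data?.bot?.id);
      }
    }
  } catch (err) {
    // We've already ACKed, so just log; a retry/queue would go here in production.
    console.error("webhook handling failed:", err);
  }
}
```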
You’ll need to run three terminals side-by-side:
- Terminal A: Starts the Next.js web app (UI + API routes)
- Terminal B: Starts the WebSocket relay (real-time transcript updates)
- Terminal C: Starts ngrok to expose your local server to Recall’s webhook system
Terminal A — Next.js
npm run dev
# or
pnpm dev
Terminal B — WebSocket relay
# if compiled JS exists
node ws-server.js
# or run TypeScript directly
npx ts-node ws-server.ts
Terminal C — ngrok
ngrok start --all
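If you need a starting point for the relay in Terminal B, here's a minimal sketch using the `ws` package. It assumes `/send` is where the webhook handler pushes transcript lines and `/recall` is where browsers subscribe; the actual `ws-server.ts` in this repo may be wired differently:

```ts
// ws-server.ts — illustrative relay sketch (not necessarily the repo's implementation)
import { createServer } from "http";
import { WebSocketServer, WebSocket } from "ws";

const server = createServer();
const recallWss = new WebSocketServer({ noServer: true }); // browsers subscribe here
const sendWss = new WebSocketServer({ noServer: true });   // webhook handler pushes here

// Route WebSocket upgrade requests by path.
server.on("upgrade", (req, socket, head) => {
  if (req.url === "/recall") {
    recallWss.handleUpgrade(req, socket, head, (ws) => recallWss.emit("connection", ws, req));
  } else if (req.url === "/send") {
    sendWss.handleUpgrade(req, socket, head, (ws) => sendWss.emit("connection", ws, req));
  } else {
    socket.destroy();
  }
});

// Anything received on /send is broadcast to every connected /recall client.
sendWss.on("connection", (ws) => {
  ws.on("message", (message) => {
    for (const client of recallWss.clients) {
      if (client.readyState === WebSocket.OPEN) client.send(message.toString());
    }
  });
});

server.listen(4000, () => console.log("WS relay listening on :4000"));
```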
- Open a Zoom meeting you control (so you can admit the bot).
- Visit the app at http://localhost:3000 (or via your ngrok domain).
- Paste the Zoom link and click Start Bot.
  – `startRecall.ts` creates the bot and stores `{ externalId, botId }`.
- Admit the bot in the Zoom UI.
- Talk for a bit — you'll see transcript lines appear in real time.
  – Those are `transcript.data` webhook events → DB → WS → browser.
  Docs: https://docs.recall.ai
- End the call — watch the server logs:
  – You'll see `bot.status_change` with `code: call_ended`, then `code: done`.
  Docs: https://docs.recall.ai
- The server resolves `recording_id` and fetches:
  - MP4 (mixed video),
  - MP3 (mixed audio),
  - the full structured transcript.
  Docs: Bot status change events, Mixed Audio, Transcription
- Wait 5–10 s — the UI polls `/api/userData`; Video and Audio links appear (see the polling sketch below).
- (Optional) Click Get Async Transcript & Video to force retrieval.
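The client-side polling can be as simple as a `setInterval` around `fetch`. A sketch, assuming `/api/userData` returns `{ transcript, videoUrl, audioUrl }` as described above; the real component in this repo may use React state/hooks instead:

```ts
// Client-side polling sketch; the response shape is an assumption, adjust to the real /api/userData payload.
type UserData = {
  transcript?: { text: string; speaker?: string; timestamp?: string }[];
  videoUrl?: string;
  audioUrl?: string;
};

async function pollUserData(): Promise<void> {
  const res = await fetch("/api/userData");
  if (!res.ok) return;
  const data: UserData = await res.json();

  if (data.videoUrl) console.log("MP4 ready:", data.videoUrl);
  if (data.audioUrl) console.log("MP3 ready:", data.audioUrl);
}

// Poll every 5 seconds; matches the 5–10 s window mentioned above.
setInterval(pollUserData, 5000);
```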
Once your app is running and has received a real-time transcript or finished a call, you can inspect the saved data in Postgres directly.
Use the `psql` CLI to open a connection to your local Postgres instance:
psql -h localhost -U postgres -d recall_demo

- `-h localhost`: connect to the local DB server
- `-U postgres`: use the default Postgres user
- `-d recall_demo`: use the same database as in `.env` (`DATABASE_URL`)
If prompted for a password, use the one configured for your local Postgres setup (e.g. `postgres` by default if unchanged).
\dt
This will show all tables — you should see "Meeting" and "Transcript" if migrations ran correctly.
SELECT * FROM "Meeting" ORDER BY "createdAt" DESC LIMIT 5;
This will show the latest meetings. Useful columns to check:
- `externalId`: used to track Recall bot sessions
- `meetingUrl`: the original Zoom link
- `botId`, `recordingId`: populated once the call ends
- `createdAt`: when the meeting entry was saved
First, find the `id` of the meeting you want to inspect (from the `"Meeting"` table), then run:
SELECT * FROM "Transcript" WHERE "meetingId" = '<YOUR_MEETING_ID>' ORDER BY "timestamp" ASC;
This shows all transcript lines tied to that meeting. You’ll see:
- `text`: what was said
- `speaker`: who said it (if available)
- `timestamp`: when it was spoken
Type `\q` and press Enter to quit the Postgres session.
You can use this to confirm that:
- Real-time transcripts are being saved correctly
- Post-call artifacts like `videoUrl`, `audioUrl`, and `recordingId` are being set after `bot.status_change` events (see the query sketch below)
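For example, a quick check on the latest meeting's artifact columns (assuming, per the columns above, that `videoUrl`, `audioUrl`, and `recordingId` live on the `"Meeting"` table):

```sql
SELECT "externalId", "recordingId", "videoUrl", "audioUrl"
FROM "Meeting"
ORDER BY "createdAt" DESC
LIMIT 1;
```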
- Authorization header must be the raw key (no "Bearer "): `Authorization: $RECALLAI_API_KEY`
- Webhooks
  - Real-time transcripts are configured in `recording_config.realtime_endpoints` and hit your `/api/webhook`.
    Docs: Real-time WebSocket
  - Bot status change webhooks are delivered via Svix; you can receive them at the `webhook_url` you pass when creating the bot, or configure endpoints in your dashboard.
    Docs: Svix
- Artifacts availability
  - After `done`, `GET /bot/{BOT_ID}` will include `recordings[]`. Use that `recording_id` to fetch media or read `media_shortcuts` (see the sketch below).
    Docs: https://docs.recall.ai
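A sketch of how that resolution might look in code (roughly what `lib/recall-media.ts` does). The function name `resolveArtifacts`, the base URL, the endpoint path, and the exact `media_shortcuts` shape are assumptions; verify them against Recall's API reference for your region:

```ts
// Illustrative sketch; endpoint path and response shape are assumptions, check Recall's API reference.
const RECALL_BASE = "https://us-east-1.recall.ai/api/v1"; // adjust to your region

export async function resolveArtifacts(botId: string) {
  // Raw key, no "Bearer " prefix.
  const res = await fetch(`${RECALL_BASE}/bot/${botId}/`, {
    headers: { Authorization: process.env.RECALL_API_KEY ?? "" },
  });
  if (!res.ok) throw new Error(`GET /bot/${botId} failed: ${res.status}`);

  const bot = await res.json();
  const recording = bot.recordings?.[0];
  if (!recording) return null; // not ready yet; poll again after `done`

  // Prefer media_shortcuts (direct download URLs) when present.
  const shortcuts = recording.media_shortcuts ?? {};
  return {
    recordingId: recording.id,
    videoUrl: shortcuts.video_mixed?.data?.download_url ?? null,
    audioUrl: shortcuts.audio_mixed?.data?.download_url ?? null,
    transcriptUrl: shortcuts.transcript?.data?.download_url ?? null,
  };
}
```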
No transcripts?
Ensure you enabled transcription when creating the bot:
"recording_config": {
"transcript": { "provider": { "meeting_captions": {} } },
"realtime_endpoints": [
{
"type": "webhook",
"url": "https://<your-ngrok-domain>/api/webhook",
"events": ["transcript.data", "transcript.partial_data"]
}
]
}
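For context, the create-bot call in `pages/api/startRecall.ts` might look roughly like the following, with the config above slotted into `recording_config` and the `webhook_url` described in the Notes. The base URL, endpoint path, and response fields are assumptions; check Recall's API reference:

```ts
// Illustrative sketch of bot creation; verify endpoint and fields against Recall's API reference.
const RECALL_BASE = "https://us-east-1.recall.ai/api/v1"; // adjust to your region

export async function startBot(meetingUrl: string) {
  const res = await fetch(`${RECALL_BASE}/bot/`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: process.env.RECALL_API_KEY ?? "", // raw key, no "Bearer "
    },
    body: JSON.stringify({
      meeting_url: meetingUrl,
      webhook_url: "https://<your-ngrok-domain>/api/webhook",
      recording_config: {
        transcript: { provider: { meeting_captions: {} } },
        realtime_endpoints: [
          {
            type: "webhook",
            url: "https://<your-ngrok-domain>/api/webhook",
            events: ["transcript.data", "transcript.partial_data"],
          },
        ],
      },
    }),
  });
  if (!res.ok) throw new Error(`Bot creation failed: ${res.status}`);
  return res.json(); // contains the bot id used later to resolve recordings
}
```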
- Real-time transcription must be explicitly enabled.
- Check server logs around `bot.status_change → done`.
- Confirm your `RECALL_API_KEY` and that the `Authorization` header does not include "Bearer".
- Verify `/api/webhook` is publicly reachable at your ngrok domain.
- Either expose port 4000 with a second ngrok tunnel, or
- Proxy `/recall` through Next.js so the browser connects to the same domain.
- Wrong header or region. Check the header and base URL (e.g., `us-east-1`).
  Docs: Errors
To receive webhook events from Recall.ai, your app must be accessible via a public, static domain. This requires:
- A free ngrok account
- A reserved (static) domain
macOS:
brew install ngrok/ngrok/ngrok
Windows:
choco install ngrok
Grab your auth token from the ngrok dashboard, then run:
ngrok config add-authtoken <YOUR_AUTHTOKEN>
- Go to the ngrok Reserved Domains dashboard
- Click "+ Reserve Domain"
- Choose something like: `zoom-bot.ngrok-free.app`
You’ll use this domain when configuring webhooks and writing your ngrok.yml (see Step 4 in the README).
A static domain ensures Recall.ai can consistently reach your app with real-time events.
- Three terminals: `npm run dev`, `ts-node ws-server.ts`, `ngrok start --all`.
- Show `.env` with `DATABASE_URL` and `RECALL_API_KEY` (no "Bearer ").
- Open a Zoom meeting.
- In the app: paste the Zoom link → Start Bot.
- Admit the bot in Zoom; speak → see the real-time transcript appear.
- End the call → watch `bot.status_change` logs → links appear → click MP4/MP3.
Get started & docs (Recall.ai): home page, Quickstart, Authentication
Status & recording webhooks, Real-time transcript webhooks
Fetching recordings/transcripts