cloud and ai team #3
Isackskyla announced in Announcements
AI Integration Team
Objective: Simulate and later implement the core AI response engine of our assistant app.
Task:
Set up a temporary AI handler using the Dad Jokes API as a mock AI model.
Create a local module or microservice that takes a message input and returns a joke in response.
Ensure your code exposes a simple function that other services can call (a minimal sketch follows this list):
getAIResponse(message: string): Promise<string>
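Here is one way the handler could look, assuming the public icanhazdadjoke.com service as the Dad Jokes API (the file name mockAi.ts and the error handling are illustrative choices, not decided yet):

```ts
// mockAi.ts — temporary AI handler backed by icanhazdadjoke.com
// (assumed here as "the Dad Jokes API"; swap the URL if we pick a different one).
const JOKE_API_URL = "https://icanhazdadjoke.com/";

export async function getAIResponse(message: string): Promise<string> {
  // The mock ignores the message content for now; a real model would use it.
  const res = await fetch(JOKE_API_URL, {
    headers: { Accept: "application/json" },
  });
  if (!res.ok) {
    throw new Error(`Joke API returned ${res.status}`);
  }
  const data = (await res.json()) as { joke: string };
  return data.joke;
}
```

Callers just `await getAIResponse("hello")` and get a string back, which is the same shape a real model response will have later.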
Next Steps:
Plan how this will later connect to OpenAI, DeepSeek, or a custom LLM.
Define a clear interface that backend/cloud can call when ready (one possible shape is sketched below).
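One possible shape for that interface, so the joke mock and a future real provider stay interchangeable (the AIProvider name and file layout are assumptions for illustration):

```ts
// aiProvider.ts — hypothetical provider contract; backend/cloud code depends
// on this interface instead of any concrete model.
import { getAIResponse as jokeResponse } from "./mockAi";

export interface AIProvider {
  getAIResponse(message: string): Promise<string>;
}

// Today's mock satisfies the contract; an OpenAI/DeepSeek adapter would too.
export const mockProvider: AIProvider = {
  getAIResponse: jokeResponse,
};
```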
How it connects:
The backend team will call this module as the AI brain.
Once cloud deploys it, you’ll need to adjust it for online serving.
Cloud Team
Objective: Prepare cloud infrastructure to host our AI services and backend endpoints.
Task:
Choose a deployment platform (e.g., AWS Lambda, Google Cloud Functions, Vercel, or Render).
Create a cloud function that accepts a message and returns the AI mock result.
Set up a /chat endpoint with basic logging that returns a test response from the AI Integration logic (sketched after this list).
Ensure it's deployed and accessible via HTTPS.
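A sketch of the /chat endpoint, here using Express for readability (any of the platforms above works the same way; the import from ./mockAi assumes the AI Integration module sketched earlier):

```ts
// chat.ts — minimal /chat endpoint with basic logging.
import express from "express";
import { getAIResponse } from "./mockAi";

const app = express();
app.use(express.json());

app.post("/chat", async (req, res) => {
  const message = String(req.body?.message ?? "");
  console.log(`[chat] received: ${message}`); // basic request logging
  try {
    const reply = await getAIResponse(message);
    res.json({ reply });
  } catch (err) {
    console.error("[chat] AI handler failed:", err);
    res.status(502).json({ error: "AI service unavailable" });
  }
});

const port = Number(process.env.PORT ?? 3000);
app.listen(port, () => console.log(`[chat] listening on ${port}`));
```

On a serverless platform the handler body stays the same; only the listen/export wiring changes.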
Next Steps:
Set up an authentication layer for production (one possible shape is sketched below).
Start container setup or serverless runtime configs for future scaling.
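One possible shape for that auth layer: a hypothetical shared-secret API key check as Express middleware (the header name and env var are placeholders, not agreed on yet):

```ts
// auth.ts — hypothetical API-key middleware for the /chat endpoint.
import type { Request, Response, NextFunction } from "express";

export function requireApiKey(req: Request, res: Response, next: NextFunction) {
  const key = req.header("x-api-key");
  if (!key || key !== process.env.CHAT_API_KEY) {
    return res.status(401).json({ error: "Missing or invalid API key" });
  }
  next();
}
```

Wired in as `app.post("/chat", requireApiKey, handler)` once we move past the mock.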
How it connects:
You will host and expose the AI Integration team’s logic to the frontend and backend.
Frontend will consume your endpoint to show chat replies (example call below).
Backend may also need access as we scale logic and add database logging or sessions.
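For reference, the frontend call could be as simple as this (the deployment URL is a placeholder until the cloud team publishes the real one):

```ts
// Example frontend call to the deployed /chat endpoint.
async function sendChat(message: string): Promise<string> {
  const res = await fetch("https://<your-deployment>/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  const { reply } = (await res.json()) as { reply: string };
  return reply;
}
```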