This is the ML server for owldo, built with FastAPI. It handles routing and calls the different ML models the application needs to run.
- Install Git
- Install Python 3.7+
Set up and activate the environment:

$ virtualenv env
$ source env/Scripts/activate

(On Linux/macOS the activation script is at env/bin/activate instead.)
Install the dependencies:

$ pip install -r requirements.txt
Save a local copy of the trained models in models/:

- question-generation-model (t5-large-best-model)
- mcq-generation-model (t5-race-qa-2)

Your folder structure should now look like:

.
├── models
│   ├── t5-large-best-model
│   └── t5-race-qa-2
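To keep these paths in one place, a small helper on the server side can resolve each model's checkpoint directory from the tree above. This is a minimal sketch, not code from the repo: the role names and the `model_path` helper are assumptions; only the directory names come from the README.

```python
from pathlib import PurePosixPath

# Local directory where the trained models are saved (matches the tree above).
MODELS_DIR = PurePosixPath("models")

# Hypothetical mapping from a model's role to its checkpoint directory.
# The directory names are from the README; the role keys are assumed.
MODEL_DIRS = {
    "question-generation": "t5-large-best-model",
    "mcq-generation": "t5-race-qa-2",
}

def model_path(role: str) -> PurePosixPath:
    """Return the on-disk checkpoint directory for the given model role."""
    return MODELS_DIR / MODEL_DIRS[role]
```

For example, `model_path("question-generation")` resolves to `models/t5-large-best-model`, the directory a loader such as Hugging Face's `from_pretrained` would be pointed at.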
Start the server:

$ uvicorn app:app --reload
The server runs on port 8000 by default (http://localhost:8000).
Happy coding 🎉