
owldo-ml-server

This is the ML server for owldo, built with FastAPI. It handles routing and calls the different ML models the application needs.

Local development

Dependencies

Setting up

  • Set up and activate the virtual environment.

    $ virtualenv env
    $ source env/Scripts/activate   # Windows; on Linux/macOS: source env/bin/activate
  • Install the dependencies

    $ pip install -r requirements.txt
  • Save a local copy of the trained models in models/ from:

    Your folder structure should now look like:

    .
    ├── models
    │   ├── t5-large-best-model
    │   └── t5-race-qa-2

  • Start the server

    $ uvicorn app:app --reload
  • By default the server listens on port 8000.

  • Happy coding 🎉
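The t5-* names above suggest the checkpoints in models/ are Hugging Face Transformers folders, though that is an assumption. One way to resolve them and cache loads lazily could be sketched like this (the loader callable is injected, e.g. something like transformers.T5ForConditionalGeneration.from_pretrained, so the sketch itself stays dependency-free):

```python
from functools import lru_cache
from pathlib import Path

MODELS_DIR = Path("models")

def model_path(name: str) -> Path:
    """Resolve a model name to its folder under models/."""
    return MODELS_DIR / name

@lru_cache(maxsize=None)
def load_model(name: str, loader=None):
    # `loader` stands in for a real checkpoint loader (an assumption,
    # e.g. a Transformers from_pretrained); caching by name means each
    # model is loaded from disk at most once per process.
    if loader is None:
        raise ValueError("a loader callable is required")
    return loader(str(model_path(name)))
```

Caching matters here because large T5 checkpoints are expensive to load, and a --reload dev server should not pay that cost on every request.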
