
FaaS Project

Description

This project aims to build and deploy a basic Functions-as-a-Service platform capable of handling on-demand computation of arbitrary functions. It is served as a REST API (documentation here) with simple user services, monitoring and auto-scaling.

Implementation

In order to develop a distributed system capable of handling variable loads, the full FaaS platform is divided into 4 separate components. Communication between the different components takes place over ZeroMQ sockets.

Frontend

Frontend design

The frontend component is in charge of receiving users' requests and routing them through the system accordingly. It is implemented as a basic ExpressJS server. On startup it receives initialization data through environment variables:

  • DB_URL Specifies the location and protocol of the database manager (DBManager) service. Must be a valid ZeroMQ connection string.

  • JQ_URL Specifies the location and protocol of the job-queue manager (JQManager) service. Must be a valid ZeroMQ connection string.

  • PORT Specifies the port on which the frontend will listen.
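
As an illustration, a minimal sketch of how such a frontend might wire itself up on startup follows. The route name and message shape are hypothetical, and the zeromq npm package (v6 API) is assumed.

// Frontend sketch: an ExpressJS server that forwards execution requests
// to the JQManager over a ZeroMQ Request socket.
const express = require("express");
const zmq = require("zeromq");

const app = express();
app.use(express.json());

// REQ socket towards the job-queue manager (JQ_URL is a ZeroMQ connection string).
// DB_URL would be consumed in the same way, through the DBManager proxy.
const jq = new zmq.Request();
jq.connect(process.env.JQ_URL);

// Hypothetical route: enqueue a function execution request.
// (A real implementation would serialize access to the REQ socket,
//  e.g. through the JQManager proxy described below.)
app.post("/execute", async (req, res) => {
  await jq.send(JSON.stringify({ type: "enqueue", payload: req.body }));
  const [reply] = await jq.receive();
  res.json(JSON.parse(reply.toString()));
});

app.listen(process.env.PORT);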

Database Manager (DBManager)

Database manager design

The DBManager component is in charge of offering an interface to a MongoDB database. On startup it receives initialization data through environment variables:

  • DB_URL Specifies the location of the (MongoDB) database service. Must be a valid MongoDB connection string.

  • PORT Specifies the port on which the database manager will listen for ZeroMQ connections.
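
A rough sketch of the request/reply loop such a component might run, assuming the zeromq v6 API and the official MongoDB NodeJS driver (the database, collection and operation names are illustrative):

// DBManager sketch: a ZeroMQ Reply socket serving simple database operations.
const zmq = require("zeromq");
const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient(process.env.DB_URL);
  await client.connect();
  const db = client.db("faas"); // illustrative database name

  const sock = new zmq.Reply();
  await sock.bind(`tcp://*:${process.env.PORT}`);

  // Serve one request at a time, in strict request/reply order.
  for await (const [msg] of sock) {
    const { op, collection, doc } = JSON.parse(msg.toString());
    let result;
    if (op === "insert") result = await db.collection(collection).insertOne(doc);
    else if (op === "find") result = await db.collection(collection).findOne(doc);
    await sock.send(JSON.stringify(result));
  }
}

main();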

Along with the DBManager implementation, a DBManager proxy abstracts communication and provides a common interface for all other components to use.
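
The proxy can be thought of as a thin wrapper around a ZeroMQ Request socket, roughly along these lines (the class and method names are hypothetical):

// Hypothetical DBManager proxy: hides the ZeroMQ request/reply exchange
// behind ordinary async method calls.
const zmq = require("zeromq");

class DBManagerProxy {
  constructor(url) {
    this.sock = new zmq.Request();
    this.sock.connect(url);
  }

  // A real proxy would serialize calls, since a REQ socket
  // must alternate strictly between send and receive.
  async request(op, collection, doc) {
    await this.sock.send(JSON.stringify({ op, collection, doc }));
    const [reply] = await this.sock.receive();
    return JSON.parse(reply.toString());
  }
}

// Usage from any other component:
// const db = new DBManagerProxy(process.env.DB_URL);
// await db.request("find", "users", { name: "alice" });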

Job-queue Manager (JQManager)

Job-queue manager design

The JQManager component is in charge of managing the function calls and distributing them among the workers that will execute them. On startup it receives initialization data through environment variables:

  • DB_URL Specifies the location and protocol of the database manager (DBManager) service. Must be a valid ZeroMQ connection string.

  • PORT Specifies the port on which the job-queue manager will listen for ZeroMQ connections.
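
Conceptually, the JQManager maintains a backlog of pending jobs and hands them out to workers that ask for work. A simplified in-memory version could look like this (the message types are illustrative):

// Simplified JQManager sketch: an in-memory job backlog served over ZeroMQ.
const zmq = require("zeromq");

async function main() {
  const backlog = []; // pending function calls, FIFO

  const sock = new zmq.Reply();
  await sock.bind(`tcp://*:${process.env.PORT}`);

  for await (const [msg] of sock) {
    const req = JSON.parse(msg.toString());
    let reply;
    if (req.type === "enqueue") {        // frontend pushes a new job
      backlog.push(req.payload);
      reply = { queued: backlog.length };
    } else if (req.type === "dequeue") { // a worker asks for the next job
      reply = { job: backlog.shift() ?? null };
    } else if (req.type === "size") {    // the scaler polls the backlog size
      reply = { size: backlog.length };
    }
    await sock.send(JSON.stringify(reply));
  }
}

main();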

Along with the JQManager implementation, a JQManager proxy abstracts communication and provides a common interface for all other components to use.

Worker

Worker design

The worker component executes the requested function calls. Called functions are run in a sandboxed (although not completely safe) environment via NodeJS's vm module.

On startup it receives initialization data through environment variables:

  • DB_URL Specifies the location and protocol of the database manager (DBManager) service. Must be a valid ZeroMQ connection string.

  • JQ_URL Specifies the location and protocol of the job-queue manager (JQManager) service. Must be a valid ZeroMQ connection string.

  • PORT Specifies the port on which the worker will listen.
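
A minimal sketch of the sandboxed execution step using NodeJS's built-in vm module (the job shape and the exposed globals are illustrative):

// Worker sketch: evaluate the submitted function source inside a vm sandbox
// and invoke it with the supplied arguments.
const vm = require("vm");

async function runJob(job) {
  // job = { code: "function add(a,b){ return a+b; }", args: [1, 2] } (illustrative shape)
  const sandbox = vm.createContext({ setTimeout }); // only expose what is needed
  // Wrap the source in parentheses so the evaluation yields the function value.
  const fn = vm.runInContext(`(${job.code})`, sandbox, { timeout: 1000 });
  return await fn(...job.args); // works for both sync and async functions
}

// runJob({ code: "function add(a,b){ return a+b; }", args: [1, 2] }).then(console.log); // 3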

Auto-scaling

Scaler design

In order to auto-scale the service depending on the current workload, a scaler component is implemented. It queries the JQManager for the size of the current job backlog: if the backlog grows too large, the number of workers is increased (up to a maximum), and if the workload diminishes, the number of workers is decreased (down to a minimum).
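
A simplified version of that control loop might look as follows. The polling interval, thresholds and the scaleWorkersTo helper are hypothetical; the real scaler talks to the platform to change the number of worker instances.

// Scaler sketch: poll the JQManager backlog and adjust the worker count.
const MIN_WORKERS = 1; // hypothetical bounds
const MAX_WORKERS = 8;
let workers = MIN_WORKERS;

function scaleLoop(jq /* JQManager proxy */, scaleWorkersTo /* platform call */) {
  setInterval(async () => {
    const { size } = await jq.request("size");      // current job backlog
    if (size > workers * 10 && workers < MAX_WORKERS) {
      workers = Math.min(workers * 2, MAX_WORKERS); // grow when the backlog builds up
    } else if (size === 0 && workers > MIN_WORKERS) {
      workers = Math.max(Math.floor(workers / 2), MIN_WORKERS); // shrink when idle
    }
    await scaleWorkersTo(workers);
  }, 5000); // poll every few seconds
}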

IMPORTANT NOTE: The credentials of the service must be given to the scaler component on deployment. A file called KUMORI_CREDENTIALS must be located in the component's working directory and it must contain the username and the password, separated by a newline:

<username>
<password>

For example, the credentials file could be given to the component this way:

> kumorictl exec -it $DEPLOYMENT scaler $INSTANCE -- bash
> echo -e 'USERNAME\nPASSWORD' >> KUMORI_CREDENTIALS
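
On startup the scaler can then read the two lines back, for instance (a minimal sketch):

// Read the credentials file from the working directory (sketch).
const fs = require("fs");

const [username, password] = fs
  .readFileSync("KUMORI_CREDENTIALS", "utf8")
  .trim()
  .split("\n");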

Deployment

Deployment

The deployment of the service is described as a CUE module and requires 7 components.

Each component module describes, among other things, the Docker image it uses (example) and the environment variables it will use on startup (example). For more information about how the deployment is described, check the platform documentation here. A sample deployment script can be found here, and a script that describes how the Docker images must be built here.

F.A.Q.

How are functions specified?

The functions must be specified as NodeJS functions. They can be either asynchronous or synchronous, and must be executable in a sandboxed vm environment. The precise method of function execution can be found here. For example, the following would be valid functions:

function add(a,b) { return a + b; }
async function add_with_delay(a,b) {
  await (new Promise((res,_) => setTimeout(res, 1000))); // Wait 1 second
  return a + b;
}

How are functions registered in the system?

How are execution requests sent to the system?

Execution requests are sent as described in the REST API.

How are execution results gathered?

After sending an execution request, an execution ID will be received in response. One can poll the job status and, once the state changes to 1, request the execution result.
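
As an illustration, polling from a client could look roughly like this. The endpoint paths and response fields are hypothetical; the real routes are listed in the REST API documentation.

// Client-side sketch: submit a job, poll its status, then fetch the result.
// Assumes Node 18+ (global fetch); BASE_URL and the paths are hypothetical.
async function execute(BASE_URL, code, args) {
  const { id } = await (await fetch(`${BASE_URL}/execute`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ code, args }),
  })).json();

  // Poll until the job state changes to 1 (finished).
  while (true) {
    const { state } = await (await fetch(`${BASE_URL}/status/${id}`)).json();
    if (state === 1) break;
    await new Promise((res) => setTimeout(res, 500));
  }

  return (await fetch(`${BASE_URL}/result/${id}`)).json();
}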

How does the service auto-scale?

Every couple of seconds, the scaler component polls the JQManager's job queue length and, if it grows too much, the number of workers is doubled (up to a maximum). The exact implementation can be found here.

How are execution resources measured for each user?

The workers measure each function's execution time individually and add it to the owner's total execution time in the database.
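
In the worker this amounts to timing each call and incrementing a per-user counter in the database, roughly as follows (the accounting call through the DBManager proxy is hypothetical):

// Measure a single function execution and charge it to the owner (sketch).
async function runAndAccount(db /* DBManager proxy */, job) {
  const start = process.hrtime.bigint();
  const result = await runJob(job); // sandboxed execution (see the Worker section)
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;

  // Hypothetical accounting update: add the elapsed time to the owner's total.
  await db.request("incrementExecutionTime", "users", { owner: job.owner, ms: elapsedMs });

  return result;
}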
