
Module of the Viam `mlmodel` service that allows inference on a TensorFlow model in the SavedModel format.


# tensorflow-cpu

Viam provides a `tensorflow-cpu` model of the ML model service that allows CPU-based inference on a TensorFlow model in the SavedModel format.

Configure this ML model service as a modular resource on your robot to take advantage of TensorFlow on the Viam platform, including pre-existing or user-trained models.

## Getting started

The first step is to prepare a valid TensorFlow model. A valid TensorFlow model is a directory (which can have any name) containing at least a `saved_model.pb` file and a subdirectory named `variables`, which itself contains two files: `variables.index` and `variables.data-00000-of-00001`. The model directory may also include other files (such as `keras_metadata.pb`), but those are not needed here. Note the path to the model directory; you will need it during configuration.

> **Note:** Before adding or configuring your module, you must create a robot.

## Configuration

Navigate to the **Config** tab of your robot's page in the Viam app. Click the **Services** subtab and click **Create service**. Select the `mlmodel` type, then select the `tensorflow-cpu` model. Enter a name for your service and click **Create**.

### Example Configuration

```json
{
  "modules": [
    {
      "type": "registry",
      "name": "viam_tensorflow-cpu",
      "module_id": "viam:tensorflow-cpu",
      "version": "latest"
    }
  ],
  "services": [
    {
      "model": "viam:mlmodel:tensorflow-cpu",
      "attributes": {
        "package_reference": null,
        "model_path": "/home/kj/Resnet50/",
        "label_path": "/home/kj/imagenetlabels.txt"
      },
      "name": "myTFModel",
      "type": "mlmodel",
      "namespace": "rdk"
    }
  ]
}
```

> **Note:** For more information, see Configure a Robot.

## Attributes

The following attributes are available for `viam:mlmodel:tensorflow-cpu` services:

| Name | Type | Inclusion | Description |
| ---- | ---- | --------- | ----------- |
| `model_path` | string | **Required** | The full path (on the robot) to a valid TensorFlow model directory. |
| `label_path` | string | Optional | The full path (on the robot) to a text file with class labels. |

## Usage

This module is made for use with the following methods of the ML model service API: `Metadata()` and `Infer()`.

A call to `Metadata()` returns relevant information about the shape, type, and size of the input and output tensors. The `Infer()` method accepts a struct of NumPy arrays representing input tensors; the number and dimensionality of the input tensors depend on the included TensorFlow model. It returns a struct of NumPy arrays representing output tensors.
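To illustrate the tensor "struct" that `Infer()` consumes and returns, here is a plain NumPy sketch. The key name `"input_1"` and the `(1, 224, 224, 3)` shape are assumptions for a ResNet-50-style image model like the one in the example configuration; call `Metadata()` on your own service to learn the real tensor names and shapes:

```python
import numpy as np

# Build the input "struct": a dict mapping tensor names to NumPy arrays.
# The key name and shape here are assumptions for a ResNet-50-style model
# (batch of 1, 224x224 pixels, 3 channels).
input_tensors = {
    "input_1": np.zeros((1, 224, 224, 3), dtype=np.float32),
}

# Infer() would return a similar dict of output tensors, e.g. a
# (1, 1000)-shaped array of class scores for an ImageNet classifier.
for name, tensor in input_tensors.items():
    print(name, tensor.shape, tensor.dtype)
```

The dict-of-arrays shape mirrors how SavedModel signatures name their inputs and outputs, which is why both `Infer()`'s argument and its return value take this form.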
