This repository provides a detailed guide for Network Optix AI model developers to integrate their models with the Network Optix VMS, specifically the Nx AI Manager.
For more information about the Nx AI Manager and its capabilities, please visit the Nx AI Manager documentation website.
As a worked example, this guide covers the training and deployment of an object detection model based on the YOLOv11 architecture. The model is trained on a custom dataset to detect chickens and eggs, and is then integrated with the Nx AI Manager.
- x86_64 based Linux machine, preferably Ubuntu 20.04 or later.
- Python 3.10 or later (3.11 recommended; see the ONNX version notes below).
- Repository cloned:
git clone https://github.com/scailable/ultralytics-support
- Fresh virtual environment created:
python3 -m venv venv
- Virtual environment activated:
source venv/bin/activate
- Requirements installed:
pip install -r requirements.txt
- Network Optix AI Manager installed on the target machine.
- Roboflow account to download the dataset.
To meet the compatibility requirements of the Nx AI Manager XPU runtimes, we advise using or exporting only ONNX versions up to 1.15.0, which is compatible with Python 3.11. We refresh our runtimes at least every six months, so check back periodically for updates on the latest supported ONNX version. To install ONNX 1.15.0, use the following pip command:
pip install onnx==1.15.0
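To avoid accidentally exporting with a newer ONNX package than the runtimes accept, you can guard your environment with a small version check. This is a sketch: the 1.15.0 ceiling comes from the advice above, and the comparison only looks at the numeric `major.minor.patch` components.

```python
def parse_version(version):
    """Split a 'major.minor.patch' string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split(".")[:3])

def is_supported(installed, ceiling="1.15.0"):
    """Return True if the installed ONNX version is at or below the ceiling."""
    return parse_version(installed) <= parse_version(ceiling)

# Example: compare a few version strings against the 1.15.0 ceiling.
print(is_supported("1.15.0"))  # True
print(is_supported("1.16.2"))  # False
```

In practice you would pass `onnx.__version__` to `is_supported` at the top of your export script and fail fast with a clear message if it returns `False`.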
For inference purposes, ONNX models are typically executed using the ONNX Runtime. The ONNX Runtime version 1.17.0 supports ONNX opset version 20 and is compatible with Python 3.11. To install the ONNX Runtime for CPU execution, use:
pip install onnxruntime==1.17.0
If you require GPU support, you can install the GPU version of ONNX Runtime:
pip install onnxruntime-gpu==1.17.0
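With both packages available, ONNX Runtime picks among its execution providers in preference order; the selection logic amounts to the sketch below. The provider names are the standard ONNX Runtime identifiers, and in a real script you would obtain the available set from `onnxruntime.get_available_providers()` rather than hard-coding it.

```python
def pick_provider(available, preferred=("CUDAExecutionProvider", "CPUExecutionProvider")):
    """Return the first preferred execution provider that is available."""
    for provider in preferred:
        if provider in available:
            return provider
    raise RuntimeError("No suitable execution provider found")

# Example: a CPU-only machine falls back to the CPU provider.
print(pick_provider({"CPUExecutionProvider"}))  # CPUExecutionProvider
```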
Always ensure your development environment aligns with the above versions to maintain compatibility with your Nx AI Manager XPU runtimes.
The dataset used for training the model can be downloaded from the following link: Chicken and Egg Dataset.
You can download the dataset in the YOLOv11 format directly from the Roboflow platform, or by using the following command:
export ROBOFLOW_API_KEY=<replace-api-key>
python3 src/download_data.py
The script will download the dataset in the YOLOv11 format and extract it to the src/eggs-dataset directory.
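Before training, it can be useful to confirm the extraction produced the layout the training script expects. The sketch below assumes the standard Roboflow YOLO export structure (`data.yaml` plus `train/` and `valid/` image and label folders); the actual archive contents may differ.

```python
from pathlib import Path

# Expected YOLO-format layout after extraction (assumed, not verified
# against the actual archive).
EXPECTED = [
    "data.yaml",
    "train/images", "train/labels",
    "valid/images", "valid/labels",
]

def missing_entries(root):
    """Return the expected dataset entries that are absent under root."""
    root = Path(root)
    return [entry for entry in EXPECTED if not (root / entry).exists()]
```

Running `missing_entries("src/eggs-dataset")` should return an empty list; any names it returns point at what failed to extract.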
The model will be developed using the YOLOv11n architecture. A generic training script is provided in the src/train.py file. To customize it, please refer to these examples.
To train the model, run the following command:
python3 src/train.py --model yolo11n.pt --data src/eggs-dataset/data.yaml --epochs 100 --imgsz 416 --device cpu --batch_size 8
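The flags in the command above are typically consumed by an argparse parser. The sketch below is hypothetical, mirroring only the flags and defaults shown in this guide; the real src/train.py may name or default them differently.

```python
import argparse

def build_parser():
    # Hypothetical parser mirroring the training command above.
    parser = argparse.ArgumentParser(description="Train a YOLOv11 model")
    parser.add_argument("--model", default="yolo11n.pt", help="base checkpoint")
    parser.add_argument("--data", required=True, help="path to data.yaml")
    parser.add_argument("--epochs", type=int, default=100)
    parser.add_argument("--imgsz", type=int, default=416, help="input image size")
    parser.add_argument("--device", default="cpu", help="cpu or cuda device id")
    parser.add_argument("--batch_size", type=int, default=8)
    return parser

# Example: unspecified flags fall back to their defaults.
args = build_parser().parse_args(["--data", "src/eggs-dataset/data.yaml"])
print(args.model, args.epochs)
```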
The model training might take minutes to hours, depending on the machine's computational power; use a GPU for faster training.
After training, the model will be saved in the runs/detect/train directory. Feel free to examine the model's performance by reviewing the training logs and the generated images and charts, which are saved in the same directory.
If the model's performance is satisfactory, proceed to the next step. Otherwise, consider retraining the model with different hyperparameters.
To deploy with the Nx AI Manager, the model must be converted to the ONNX format. The src/export.sh script will convert the model to ONNX and save it in the runs/detect/train directory.
bash src/export.sh ./runs/detect/train/weights/best.pt 416 B-eggs W-eggs
The general syntax for the script is:
bash src/export.sh <pt_path> <imgsz> <class1> <class2> ... <classN>
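The positional arguments follow the pattern above: a checkpoint path, an image size, and one or more class names. A minimal sketch of how such arguments could be validated (hypothetical; the actual export.sh may handle them differently):

```python
import sys

def parse_export_args(argv):
    """Validate arguments of the form: <pt_path> <imgsz> <class1> ... <classN>."""
    if len(argv) < 3:
        raise SystemExit("usage: export.sh <pt_path> <imgsz> <class1> ... <classN>")
    pt_path = argv[0]
    imgsz = int(argv[1])          # raises ValueError if not an integer
    classes = list(argv[2:])      # at least one class name is required
    return pt_path, imgsz, classes

# Example mirroring the command shown above.
print(parse_export_args(["./runs/detect/train/weights/best.pt", "416", "B-eggs", "W-eggs"]))
```

Note that the class names must be listed in the same order as in the dataset's data.yaml, since they are matched to class indices by position.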
To integrate the model with the Nx AI Manager, the ONNX model must be uploaded to the Nx AI Cloud, as shown in the image below:
If you don't have an account, you can create one for free by following these instructions: Nx AI Cloud Account Creation.
After uploading, the model takes a couple of minutes to become ready to run. The model page will show a green ok status when it is ready, as shown in the image below:
To use the trained model, you need the Nx Meta VMS installed on your machine. Additionally, the Nx AI Manager must be installed and running.
You can follow the instructions in the Nx AI Manager documentation to install and configure the Nx AI Manager.
The model can then be deployed by following the steps mentioned here.
We recommend testing the AI Manager with one of the default models before integrating your custom model, to ensure that the AI Manager is working correctly on your machine.
The model's output should look something like this:
For any questions or issues, please check out this support page.