"Edge AI Vision + Sensor Gateway" for Vehicle / Factory / City Use
The Sintrones Edge AI Starter Kit is a production-ready, open-source framework designed to accelerate real-world deployments of AI-enhanced sensor fusion across vehicles, factories, and smart cities.
Built on rugged industrial-grade hardware, it enables seamless integration of vision AI, sensor telemetry, edge dashboards, and protocol adapters (MQTT, Modbus, CANbus) to create deployable proof-of-concepts and real-time systems.
Ideal for system integrators, smart factory teams, and urban solution architects, this repo provides all core modules and examples to quickly demonstrate AI value at the edge.
Sales + Collaboration: Use this as a customer-facing PoC and R&D starter kit. Ideal for OEMs, system integrators, and smart infrastructure pilots in Thailand or SEA deployments.
- Multi-Modal Sensor Input – Real camera streams + industrial signals (USB, PoE, RS232, GPIO)
- AI Model Inference – Supports YOLOv5, OpenVINO, or ONNX for object detection and event logic
- Dashboards – Visualize detections and sensor states via Streamlit (lightweight) or Grafana (pro)
- Industrial Protocol Support – Communicates via MQTT, Modbus RTU/TCP, and CANbus for machine/vehicle data
- Mobility-Ready – Integrates 5G modules, GNSS/GPS, and CANbus for use in transportation/fleet systems
- OTA Management – Update devices in the field via JSON-controlled OTA agent
- AI Agent Framework – Add-on agents include:
  - System Recovery Agent for fault detection and recovery
  - Adapter Auto-Gen Agent to auto-generate configs for new devices (MQTT/OPC-UA)
  - Release Agent to run readiness tests and publish release notes
- Vision Inspection Demos – ONNX model generator + camera inferencing pipeline
- Repo Healthcheck – Lint and structure audit via `tools/healthcheck.py`
This starter kit aligns with Sintrones' efforts to:

- Support System Integrators and SMEs with demo-ready tools
- Collaborate on R&D and Proof-of-Concepts
- Promote industrial AI adoption across Thailand & SEA markets
Use it as a base to build your own PoC, integrate with IIoT, or contribute modules. This repo helps you accelerate time-to-demo and validate value at the edge.
Feature | Open-Source Starter Kit | Commercial Offering |
---|---|---|
Real-time AI Inference (YOLO, etc.) | ✅ Yes | ✅ Yes |
Dashboard UI (Streamlit/Grafana) | ✅ Yes | ✅ Yes |
OTA Agent | ✅ Yes | ✅ Enhanced |
Health Monitoring | ✅ CLI Tool | ✅ Web Dashboard |
AI Agent Automation (Recovery, Adapter) | ✅ Yes | ✅ Advanced |
Odoo / Cloud / AWS Integration | 🟡 Manual | ✅ Plug-in Ready |
Hardware Acceleration Support | 🟡 Generic | ✅ Tuned Drivers |
Long-term Support + SLA | ❌ | ✅ Yes |
Turnkey Packaging (VM/Image) | ❌ | ✅ Yes |
Mode | Description |
---|---|
Standalone | Fully offline dashboard & sensor integration |
Edge-to-Cloud | MQTT to Odoo, AWS, or other IoT platforms |
Vehicle AI | Add GPS/CANbus for on-road deployments |
- Smart Logistics – Detect vehicles or goods, monitor temperature/vibration
- Factory Automation – Visual inspection + machine health monitoring
- Smart Cities – Public space detection, traffic analytics, air quality
- Use Cases: Real-world Edge AI applications in factories, vehicles, and smart cities
- Contributing Guide: How to get involved and contribute to this project
This repository integrates an AI Agents Add-on with three useful agents to enhance reliability, adaptability, and release workflows.
- Purpose: Monitors MQTT heartbeat topics (e.g., `factory/health/#`).
- Behavior: If a device misses heartbeats for a configured timeout, it triggers recovery actions (e.g., restart services or notify operators).
- Usage:
  python -m src.agents.system_recovery_agent --config agents/system_recovery.yaml
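The heartbeat-timeout idea can be illustrated with a minimal, broker-free sketch; the class name and `on_stale` callback are hypothetical, and the real agent wires this logic to MQTT subscriptions:

```python
import time

class HeartbeatWatchdog:
    """Tracks last-seen heartbeat times per device and flags stale ones.

    Illustrative sketch of the timeout logic applied to topics like
    factory/health/<device>; names and the recovery callback are
    hypothetical, not the shipped agent's API.
    """

    def __init__(self, timeout_s, on_stale):
        self.timeout_s = timeout_s
        self.on_stale = on_stale          # called with each stale device id
        self.last_seen = {}               # device id -> last heartbeat time

    def heartbeat(self, device_id, now=None):
        self.last_seen[device_id] = now if now is not None else time.time()

    def check(self, now=None):
        """Return devices whose last heartbeat is older than the timeout."""
        now = now if now is not None else time.time()
        stale = [d for d, t in self.last_seen.items() if now - t > self.timeout_s]
        for device in stale:
            self.on_stale(device)         # e.g., restart a service, alert an operator
        return stale

# Example: "cam-01" goes silent while "plc-02" keeps reporting.
alerts = []
wd = HeartbeatWatchdog(timeout_s=30, on_stale=alerts.append)
wd.heartbeat("cam-01", now=0)
wd.heartbeat("plc-02", now=0)
wd.heartbeat("plc-02", now=40)            # plc-02 refreshed, cam-01 did not
wd.check(now=45)                          # flags "cam-01"
```

In the real agent, `heartbeat()` would be driven by the MQTT message callback and `check()` by a periodic timer.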
- Purpose: Inspects new devices and automatically generates adapter configuration snippets.
- Modes:
- MQTT sniff mode: listens to wildcard topics and infers field mappings.
- OPC UA browse mode: enumerates nodeIds and proposes mappings.
- Usage:
  # MQTT mode
  python -m src.agents.adapter_autogen_agent --mode mqtt --host localhost --topic factory/# --samples 30 --timeout 20
  # OPC UA mode
  python -m src.agents.adapter_autogen_agent --mode opcua --endpoint opc.tcp://192.168.10.20:4840
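The core of MQTT sniff mode, inferring field mappings from sampled payloads, can be sketched as a small pure function; the output shape here is illustrative and the real agent's generated config (`dist/config.autogen.yaml`) may differ:

```python
import json

def infer_field_types(payloads):
    """Infer a field -> type mapping from sampled JSON payloads.

    Sketch of what an MQTT sniff pass might do after collecting
    --samples messages on a wildcard topic; illustrative only.
    """
    mapping = {}
    for raw in payloads:
        doc = json.loads(raw)
        for key, value in doc.items():
            if isinstance(value, bool):        # bool before number: bool is an int subclass
                t = "bool"
            elif isinstance(value, (int, float)):
                t = "number"
            else:
                t = "string"
            mapping.setdefault(key, t)         # first observed type wins
    return mapping

samples = [
    '{"temp": 21.5, "running": true, "line": "A1"}',
    '{"temp": 22.0, "running": false, "line": "A1", "rpm": 1450}',
]
print(infer_field_types(samples))
# {'temp': 'number', 'running': 'bool', 'line': 'string', 'rpm': 'number'}
```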
- Purpose: Automates readiness checks and drafts GitHub release notes.
- Behavior: Runs `tools/healthcheck.py`, verifies syntax & dependencies, and generates release notes into `dist/release_notes.md`.
- Usage:
  python -m src.agents.release_agent --tag v0.3.0 --notes "Adapters + Vision QA"
- `factory/vision/detections` – raw detections (per frame)
- `factory/vision/events` – filtered/decided events (if you wire through the decision engine)
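For orientation, a per-frame message on the detections topic might be built like this; the field names (`frame_id`, `ts`, `detections`, `bbox`, `score`) are an assumed schema, so check the collector config for what your deployment actually expects:

```python
import json
import time

def make_detection_message(frame_id, detections):
    """Build a per-frame detection payload for the vision topic.

    The schema here is an assumption for illustration, not the
    repo's canonical message format.
    """
    return json.dumps({
        "frame_id": frame_id,
        "ts": time.time(),
        "detections": [
            {"label": label, "score": score, "bbox": list(bbox)}
            for label, score, bbox in detections
        ],
    })

msg = make_detection_message(42, [("defect", 0.91, (10, 20, 64, 64))])
# A client would then publish it, e.g.:
# client.publish("factory/vision/detections", msg)
```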
- Model not found: ensure `models/defect_detector.onnx` exists or pass an absolute path with `--model`.
- Unsupported IR version: upgrade `onnxruntime` or re-generate the model with IR=10.
- No camera: use `--video` with a test clip.
- Broker connection: start Mosquitto locally or point to your broker in `examples/vision_inspection/camera_infer.py` (MQTT_HOST/PORT).
Install additional dependencies with:
python -m pip install -r requirements-addon.txt
- Recovery logs: console output
- Auto-generated configs: `dist/config.autogen.yaml`
- Release notes: `dist/release_notes.md`

For more details, see `docs/AGENTS.md`.
sintrones-edge-ai-starter-kit/
├── agents/                  # Agent configs (e.g., system_recovery.yaml)
├── ai_models/               # YOLOv5 or OpenVINO model files
├── app/                     # Core dashboard + logic
│   └── main.py
├── configs/                 # System & sensor configuration files
│   └── config.yaml
├── dashboard/               # Streamlit and Grafana dashboard configs
├── dist/                    # Auto-generated configs and release notes
├── docker/                  # Dockerfile + docker-compose.yml
├── docs/                    # Wiring diagrams, ABOX-5220 architecture
│   ├── index.md
│   └── AGENTS.md            # Documentation for AI Agents
├── examples/                # Application-specific integration (vehicle, factory, city)
│   └── vision_inspection/...
├── models/
│   └── defect_detector.onnx
├── ota/                     # OTA update agent and JSON control
├── sensor_drivers/          # CANbus, Modbus, GPIO, MQTT handlers
├── src/
│   ├── agents/              # AI Agents (system recovery, adapter autogen, release agent)
│   ├── collector.py
│   ├── batcher.py
│   ├── cli.py
│   └── decision_engine/
│       └── engine.py
├── tools/
│   └── healthcheck.py       # Repo healthcheck tool
├── requirements.txt
├── requirements-addon.txt   # Dependencies for AI Agents
├── INSTALL.md
├── README.md
├── LICENSE
└── .gitignore
1. Clone the repository:
   git clone https://github.com/sintrones/edge-ai-starter-kit.git
   cd edge-ai-starter-kit
2. Install Python dependencies:
   pip install -r requirements.txt
3. Run the dashboard demo:
   python app/main.py
This example publishes per-frame detections to MQTT for the collector to ingest. It supports a real ONNX model or a mock fallback.
# Core
python -m pip install onnxruntime opencv-python paho-mqtt
# Apple Silicon (M1/M2/M3): use the silicon wheel
# python -m pip install onnxruntime-silicon opencv-python paho-mqtt
If you already have an ONNX model (e.g., a YOLO export), put it in `models/defect_detector.onnx`.
Otherwise, generate a tiny test model (always outputs one detection):
python models/onnx-model-generator/generate_dummy_onnx_with_onnx.py
# -> writes models/defect_detector.onnx
Note: If you see an error like "Unsupported model IR version: 11, max supported IR version: 10", either upgrade onnxruntime (`pip install --upgrade onnxruntime` or `onnxruntime-silicon`) or regenerate the model with IR=10.
Use a webcam:
python examples/vision_inspection/camera_infer.py \
--model models/defect_detector.onnx --camera 0
Or a sample video:
python examples/vision_inspection/camera_infer.py \
--model models/defect_detector.onnx --video path/to/sample.mp4
If you don't have a model yet, run the mock-fallback script (publishes synthetic detections periodically):
python onnx-model-generator-ready/camera_infer_mock_fallback.py --camera 0
# or
python onnx-model-generator-ready/camera_infer_mock_fallback.py --video path/to/sample.mp4
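The mock-fallback idea, generating plausible detections so the downstream pipeline (MQTT, collector, dashboard) can be exercised without a model or camera, can be sketched as a generator; the payload field names are illustrative, not the script's exact format:

```python
import json
import random
import time

def mock_detections(n_frames, seed=0):
    """Yield synthetic per-frame detection payloads.

    Sketch of the mock fallback: when no ONNX model or camera is
    available, emit plausible-looking detections. Field names are
    illustrative assumptions.
    """
    rng = random.Random(seed)
    for frame_id in range(n_frames):
        yield json.dumps({
            "frame_id": frame_id,
            "ts": time.time(),
            "detections": [{
                "label": "defect",
                "score": round(rng.uniform(0.5, 0.99), 2),
                "bbox": [rng.randint(0, 200), rng.randint(0, 200), 64, 64],
            }],
        })

for payload in mock_detections(3):
    print(payload)  # in the real script these would be published to the MQTT broker
```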
# Collector should already be running to write JSONL
python -m src.cli collect --config configs/config.yaml
# Batch to Parquet
python -m src.cli batch --config configs/config.yaml
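Conceptually, the collector appends one JSON event per line and the batcher later loads those rows for columnar export. A dependency-free sketch of that handoff (the real batcher writes Parquet, which is elided here):

```python
import json
from pathlib import Path

def append_jsonl(path, record):
    """Collector side: append one event per line (JSON Lines)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def batch_jsonl(path):
    """Batcher side: load all collected events.

    The shipped batcher converts these rows to Parquet; this sketch
    just materializes them so the collect -> batch handoff is visible.
    """
    return [json.loads(line) for line in Path(path).read_text().splitlines()]
```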
If you encounter errors like:
Unsupported model IR version: 11, max supported IR version: 10
You can either:

- Upgrade your ONNX runtime:
  pip install --upgrade onnxruntime
- Re-export the model with ir_version=10
- System Recovery Agent
  python -m src.agents.system_recovery_agent --config agents/system_recovery.yaml

- Adapter Auto-Gen Agent
  # MQTT mode
  python -m src.agents.adapter_autogen_agent --mode mqtt --host localhost --topic factory/# --samples 30 --timeout 20
  # OPC UA mode
  python -m src.agents.adapter_autogen_agent --mode opcua --endpoint opc.tcp://192.168.10.20:4840

- Release Agent
  python -m src.agents.release_agent --tag v0.3.0 --notes "Adapters + Vision QA"
Install add-on dependencies:
pip install -r requirements-addon.txt
Outputs include:

- `dist/config.autogen.yaml`
- `dist/release_notes.md`
Run locally:
# macOS (brew)
brew install mosquitto
brew services start mosquitto
# Ubuntu/Debian
sudo apt update && sudo apt install -y mosquitto mosquitto-clients
mosquitto -v # foreground mode with verbose logs
Or via Docker:
docker run -it -p 1883:1883 -p 9001:9001 eclipse-mosquitto
Test:
mosquitto_sub -h localhost -t "test/topic"
mosquitto_pub -h localhost -t "test/topic" -m "hello"
This patch includes additional AI/vision inspection modules for anomaly detection, logging, and RCA.
Feature | Module | Description |
---|---|---|
Anomaly Detection | anomaly/padim_infer.py | Basic visual anomaly scoring (extendable to PaDiM, DRAEM) |
Frame Logger | logger/frame_logger.py | Save inspection snapshots with metadata |
Root Cause Analysis (RCA) | clustering/image_cluster.py | Cluster visually similar defects using PCA + k-means |
OTA Model Switch | ota/model_switcher.py + update_control.json | Swap ONNX models by version or line config |
Log a Frame and Metadata
from logger.frame_logger import save_frame_with_metadata
save_frame_with_metadata(image, {"line": "A1", "status": "PASS"})
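The pairing convention behind the logger (one image file plus a JSON sidecar with the same basename) can be sketched dependency-free; note this stand-in takes raw bytes, whereas the shipped `logger.frame_logger` takes an OpenCV image array:

```python
import json
import time
from pathlib import Path

def save_frame_with_metadata(image_bytes, metadata, log_dir="logs"):
    """Write a frame and a JSON sidecar that share one basename.

    Stand-in sketch for logger.frame_logger: raw JPEG bytes are
    written as-is so the logs/*.jpg + logs/*.json pairing is visible
    without OpenCV.
    """
    Path(log_dir).mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    base = Path(log_dir) / f"frame-{stamp}"
    base.with_suffix(".jpg").write_bytes(image_bytes)    # the snapshot
    base.with_suffix(".json").write_text(json.dumps(dict(metadata, saved_at=stamp)))
    return base                                          # shared path, minus extension
```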
Run Anomaly Detection
from anomaly.padim_infer import detect_anomalies
result = detect_anomalies(image)
print(result["is_anomaly"], result["anomaly_score"])
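The returned dictionary shape can be illustrated with a toy scorer; the shipped `padim_infer` wraps a learned PaDiM-style model, while this sketch just thresholds a brightness statistic to stay self-contained:

```python
def detect_anomalies(pixels, ref_mean=128.0, threshold=40.0):
    """Toy anomaly scorer: distance of mean brightness from a reference.

    Shape-compatible sketch only; the real module computes a learned
    anomaly score, not a brightness statistic.
    """
    mean = sum(pixels) / len(pixels)
    score = abs(mean - ref_mean)
    return {"anomaly_score": score, "is_anomaly": score > threshold}

normal = detect_anomalies([120, 130, 125, 135])   # near the reference -> not anomalous
burned = detect_anomalies([250, 255, 248, 252])   # far from it -> anomalous
```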
Switch Model via OTA Config
from ota.model_switcher import get_model_path_from_ota
model_path = get_model_path_from_ota() # uses ota/update_control.json
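A minimal version of the resolver might look like this; the control-file schema shown (`active_version` plus a `models` map) is an assumption for illustration, and the shipped `ota/model_switcher.py` may use different keys:

```python
import json
from pathlib import Path

def get_model_path_from_ota(control_file="ota/update_control.json"):
    """Resolve the active ONNX model path from the OTA control file.

    Assumed control-file shape, for illustration only:
    {"active_version": "v2", "models": {"v1": "...", "v2": "..."}}
    """
    control = json.loads(Path(control_file).read_text())
    version = control["active_version"]
    return control["models"][version]    # swap models by editing the JSON, no redeploy
```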
Cluster Image Features (RCA)
from clustering.image_cluster import cluster_features
labels = cluster_features(feature_array, n_clusters=4)
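The k-means half of the RCA step can be sketched in pure Python; the shipped `clustering/image_cluster.py` runs PCA before k-means (typically via scikit-learn), and that PCA step is elided here:

```python
import random

def cluster_features(features, n_clusters=2, iters=10, seed=0):
    """Minimal k-means over feature vectors, returning one label per row.

    Pure-Python stand-in for the PCA + k-means RCA module; PCA elided.
    """
    rng = random.Random(seed)
    centers = rng.sample(list(features), n_clusters)      # init from data points

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    labels = [0] * len(features)
    for _ in range(iters):
        # assignment step: nearest center wins
        labels = [min(range(n_clusters), key=lambda k: dist2(f, centers[k]))
                  for f in features]
        # update step: move each center to the mean of its members
        for k in range(n_clusters):
            members = [f for f, l in zip(features, labels) if l == k]
            if members:
                centers[k] = [sum(col) / len(members) for col in zip(*members)]
    return labels
```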
These modules are designed to be extendable and can be linked with camera inference pipelines, OTA update agents, or retraining workflows.
Use the built-in dashboard to browse visual inspection logs.
streamlit run dashboard/log_viewer.py
- Displays image logs from `logs/*.jpg`
- Shows associated metadata from `logs/*.json`
- Sidebar shows count of PASS/FAIL units
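The sidebar tally can be reproduced with a few lines that scan the JSON sidecars; this mirrors what the viewer displays, assuming the `"status"` key used in the logger examples:

```python
import json
from pathlib import Path

def count_pass_fail(log_dir="logs"):
    """Count PASS/FAIL units from the JSON sidecars the logger writes.

    Sketch of the Streamlit sidebar's tally; assumes each logs/*.json
    carries a "status" field as in the logger examples.
    """
    counts = {"PASS": 0, "FAIL": 0}
    for meta_file in Path(log_dir).glob("*.json"):
        status = json.loads(meta_file.read_text()).get("status")
        if status in counts:
            counts[status] += 1
    return counts
```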
Run all tests using:
pytest tests/
Tests include:
- Logger: saves annotated frame + JSON
- OTA model switch: reads ONNX path from control JSON
This project includes a growing suite of pytest-based unit tests found in the `/tests` folder.
- `logger.py` – Logs each inference and frame snapshot
- `ota/model_switcher.py` – OTA JSON-based model switch controller
- `src/agents/system_recovery_agent.py` – Simple heartbeat-based recovery agent
- `dashboard/log_viewer.py` – Streamlit app to view inference logs and anomaly images
GitHub Actions automatically runs tests on every push or pull request to `main`.
The test workflow includes:
- Python 3.10 setup
- Dependency install (`requirements.txt`, `requirements-addon.txt`)
- CI environment with `PYTHONPATH` for clean imports
- Full pytest run on `/tests`

See `.github/workflows/python-ci.yml` for the CI config.
Want a hardware demo kit? Contact Sintrones.
MIT License β open for research, testing, and pilot deployment.