The SNO Agent-Based Installer is a comprehensive toolkit for deploying and managing Single Node OpenShift (SNO) clusters using the OpenShift Agent-Based Installer. This repository provides automated scripts for ISO generation, cluster deployment, and configuration management, specifically optimized for Telco RAN workloads.
Note: This repository requires OpenShift 4.12 or later. For multi-node deployments, see the sister repository: Multiple Nodes OpenShift
- 📦 Automated ISO Generation: Generate bootable ISO images with pre-configured operators and tunings
- 🚀 Automated Deployment: Deploy SNO clusters via BMC/Redfish integration
- ⚙️ Day-1 Operations: Pre-configure operators and system tunings during installation
- 🔧 Day-2 Operations: Post-deployment configuration and operator management
- ✅ Validation Framework: Comprehensive cluster validation and health checks
- 🏗️ Telco RAN Ready: Optimized for vDU applications with performance tunings
- 🌐 Multi-Platform Support: Works with HPE, ZT Systems, Dell, and KVM environments
The toolkit consists of four main components:
| Script | Purpose | Phase |
|---|---|---|
| `sno-iso.sh` | Generate bootable ISO with operators and tunings | Pre-deployment |
| `sno-install.sh` | Deploy SNO via BMC/Redfish integration | Deployment |
| `sno-day2.sh` | Apply post-deployment configurations | Post-deployment |
| `sno-ready.sh` | Validate cluster configuration and health | Validation |
- OpenShift 4.12 or later
- Linux system with internet access
- BMC/Redfish access to target hardware
- HTTP server for ISO hosting
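Any static HTTP server can host the generated ISO. For lab setups, a minimal sketch using Python's built-in server (the path and port here are illustrative):

```bash
# Serve the ISO directory over HTTP on port 8080 (adjust path/port to your environment)
python3 -m http.server 8080 --directory /var/www/html/iso
```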
Install the following tools before running the scripts:
# Install nmstatectl
sudo dnf install /usr/bin/nmstatectl -y
# Install yq (YAML processor)
# See: https://github.com/mikefarah/yq#install
sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
sudo chmod +x /usr/local/bin/yq
# Install jinja2 CLI with YAML support
pip3 install 'jinja2-cli[yaml]'
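To confirm the tools are installed and on PATH before proceeding:

```bash
# Report any missing prerequisite tool
for tool in nmstatectl yq jinja2; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done
```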
Create a configuration file based on the sample:
cp config.yaml.sample config-mysno.yaml
Edit the configuration file with your environment details:
cluster:
  domain: example.com
  name: mysno
  ntps:
    - pool.ntp.org
host:
  hostname: mysno.example.com
  interface: ens1f0
  mac: b4:96:91:b4:9d:f0
  ipv4:
    enabled: true
    dhcp: false
    ip: 192.168.1.100
    dns:
      - 192.168.1.1
    gateway: 192.168.1.1
    prefix: 24
    machine_network_cidr: 192.168.1.0/24
  disk: /dev/nvme0n1
  cpu:
    isolated: 2-31,34-63
    reserved: 0-1,32-33
bmc:
  address: 192.168.1.200
  username: admin
  password: password
iso:
  address: http://192.168.1.10/iso/mysno.iso
pull_secret: ./pull-secret.json
ssh_key: /root/.ssh/id_rsa.pub
# 1. Generate the bootable ISO
./sno-iso.sh config-mysno.yaml
# 2. Deploy the cluster via BMC/Redfish
./sno-install.sh mysno
# 3. Apply post-deployment configurations
./sno-day2.sh mysno
# 4. Validate the cluster
./sno-ready.sh mysno
Generate a bootable ISO image with pre-configured operators and tunings:
# Basic usage
./sno-iso.sh config-mysno.yaml
# Specify OpenShift version
./sno-iso.sh config-mysno.yaml 4.14.33
# Use specific release channel
./sno-iso.sh config-mysno.yaml stable-4.14
Available Options:
- `config file`: Path to the configuration file (optional, defaults to `config.yaml`)
- `ocp version`: OpenShift version or channel (optional, defaults to `stable-4.14`)
Deploy SNO using BMC/Redfish integration:
# Deploy latest generated cluster
./sno-install.sh
# Deploy specific cluster
./sno-install.sh mysno
Supported Platforms:
- HPE iLO
- ZT Systems
- Dell iDRAC
- KVM with Sushy tools
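Under the hood, deployment attaches the hosted ISO as virtual media and boots the node from it over Redfish. A hedged sketch of the kind of request involved, using the BMC and ISO addresses from the sample config (manager and virtual-media paths vary by vendor, so treat this endpoint as illustrative):

```bash
# Attach the hosted ISO as Redfish virtual media (endpoint path differs per vendor)
curl -k -u admin:password \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"Image": "http://192.168.1.10/iso/mysno.iso"}' \
  https://192.168.1.200/redfish/v1/Managers/1/VirtualMedia/Cd/Actions/VirtualMedia.InsertMedia
```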
Apply post-deployment configurations:
# Apply to latest cluster
./sno-day2.sh
# Apply to specific cluster
./sno-day2.sh mysno
Validate cluster configuration and health:
# Validate latest cluster
./sno-ready.sh
# Validate specific cluster
./sno-ready.sh mysno
cluster:
  domain: example.com              # Cluster domain
  name: mysno                      # Cluster name
  ntps:                            # NTP servers (optional)
    - pool.ntp.org
host:
  hostname: mysno.example.com      # Node hostname
  interface: ens1f0                # Primary network interface
  mac: b4:96:91:b4:9d:f0           # MAC address
  disk: /dev/nvme0n1               # Installation disk
  cpu:
    isolated: 2-31,34-63           # Isolated CPUs for workloads
    reserved: 0-1,32-33            # Reserved CPUs for system
IPv4 configuration:

host:
  ipv4:
    enabled: true
    dhcp: false
    ip: 192.168.1.100
    dns:
      - 192.168.1.1
    gateway: 192.168.1.1
    prefix: 24
    machine_network_cidr: 192.168.1.0/24
IPv6 configuration:

host:
  ipv6:
    enabled: true
    dhcp: false
    ip: 2001:db8::100
    dns:
      - 2001:db8::1
    gateway: 2001:db8::1
    prefix: 64
    machine_network_cidr: 2001:db8::/64
VLAN configuration:

host:
  vlan:
    enabled: true
    name: ens1f0.100
    id: 100
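Since the scripts read these values with `yq`, the same tool is handy for spot-checking a config before generating an ISO; for example:

```bash
# Print the effective host network settings from the sample config
yq '.host.ipv4' config-mysno.yaml
yq '.host.vlan' config-mysno.yaml
```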
Configure operators and system tunings during installation:
day1:
  workload_partition: true    # Enable workload partitioning
  kdump:
    enabled: true
    blacklist_ice: false      # Set true for HPE servers
  boot_accelerate: true       # Enable boot acceleration
  ztp_hub: false              # Enable ZTP hub components
  crun: true                  # Use crun container runtime
  rcu_normal: true            # Enable RCU normal mode
  sriov_kernel: true          # Enable SR-IOV kernel optimizations
  sync_time_once: true        # Enable time synchronization
  cgv1: true                  # Use cgroup v1 (false for v2)
  container_storage:
    enabled: false
    device: /dev/nvme0n1
    startMiB: 250000
    sizeMiB: 0
  operators:
    ptp:
      enabled: true
      version: ptp-operator.4.14.0-202405070741   # Optional version lock
    sriov:
      enabled: true
      version: sriov-network-operator.v4.14.0-202405070741
    local-storage:
      enabled: true
      provision: true         # Create logical volumes
    lca:
      enabled: true           # Lifecycle Agent
    oadp:
      enabled: true           # Backup and restore
    cluster-logging:
      enabled: true
    metallb:
      enabled: true
    nmstate:
      enabled: true
    nfd:
      enabled: true           # Node Feature Discovery
    gpu:
      enabled: false
    kubevirt:
      enabled: false
    fec:
      enabled: false          # SR-IOV FEC
    adp:
      enabled: false          # Accelerated Data Processing
    mce:
      enabled: false          # Multi-cluster Engine
    rhacm:
      enabled: false          # Red Hat Advanced Cluster Management
    gitops:
      enabled: false          # OpenShift GitOps
    talm:
      enabled: false          # Topology Aware Lifecycle Manager
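To see at a glance which day-1 operators a config enables, a small query with the `yq` installed above (mikefarah v4 syntax) can help:

```bash
# List the day-1 operators that are enabled in the config
yq '.day1.operators | to_entries | map(select(.value.enabled == true)) | .[].key' config-mysno.yaml
```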
Configure post-deployment settings:
day2:
  operators:
    ptp:
      enabled: true
      ptpconfig:
        enabled: true
        mode: boundary        # ordinary, boundary, or mixed
        logLevel: 2
        summary_interval: -4
        logReduce: true
        enableEventPublisher: true
    sriov:
      enabled: true
      sriovOperatorConfig:
        enabled: true
        enableInjector: true
        enableOperatorWebhook: true
        logLevel: 2
    cluster-logging:
      enabled: true
      clusterlogforwarder:
        enabled: true
        outputs:
          - name: remote-syslog
            type: syslog
            url: tcp://192.168.1.10:514
    metallb:
      enabled: true
      metallbConfig:
        enabled: true
        addressPools:
          - name: main-pool
            protocol: layer2
            addresses:
              - 192.168.1.200-192.168.1.220
  performance:
    profile:
      enabled: true
      name: sno-perfprofile
      realTimeKernel: true
      userLevelNetworking: true
      hugepages:
        enabled: true
        size: 1G
        count: 32
      hardwareTuning:
        isolatedCpuFreq: 2700000
        reservedCpuFreq: 2700000
    tuned:
      enabled: true
      scaling_max_freq: 2700000
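After `sno-day2.sh` completes, the resulting objects can be inspected with standard `oc` commands, for example:

```bash
# Spot-check day-2 results on the cluster
oc get performanceprofile
oc get ptpconfig -n openshift-ptp
oc get sriovoperatorconfig -n openshift-sriov-network-operator
```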
Include custom resources during installation:
day1:
  extra_manifests:
    - ${HOME}/custom-manifests
    - ./additional-configs
    - ${OCP_Y_VERSION}/version-specific
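Any valid Kubernetes manifest placed in these directories is included in the installation. A minimal hypothetical example of such a manifest (a MachineConfig that adds a kernel argument):

```yaml
# custom-manifests/99-example-kargs.yaml (hypothetical example)
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-example-kargs
  labels:
    machineconfiguration.openshift.io/role: master
spec:
  kernelArguments:
    - nmi_watchdog=0
```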
Configure proxy settings:
proxy:
  enabled: true
  http: http://proxy.example.com:8080
  https: https://proxy.example.com:8080
  noproxy: localhost,127.0.0.1,.example.com
Configure disconnected/mirrored registry:
mirror:
  enabled: true
  registry: registry.example.com:5000
  namespace: ocp4/openshift4
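A disconnected install assumes the release payload is already present in the mirror registry. One way to populate it is `oc adm release mirror`; a sketch with illustrative version and paths matching the config above:

```bash
# Mirror an OpenShift release into the local registry (version/paths are illustrative)
oc adm release mirror \
  --from=quay.io/openshift-release-dev/ocp-release:4.14.33-x86_64 \
  --to=registry.example.com:5000/ocp4/openshift4 \
  --to-release-image=registry.example.com:5000/ocp4/openshift4:4.14.33-x86_64
```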
The repository includes comprehensive test configurations:
# Run basic test
./test/test.sh
# Test specific configuration
./test/test-sno130.sh
# Test KVM environment
./test/test-kvm.sh
# Test hub cluster
./test/test-hub.sh
The `sno-ready.sh` script validates:
- ✅ Cluster Health: Node status, operator health, pod status
- ✅ Machine Configs: CPU partitioning, kdump, performance settings
- ✅ Performance Profile: Isolated/reserved CPUs, real-time kernel
- ✅ Operators: PTP, SR-IOV, Local Storage, Cluster Logging
- ✅ Network: SR-IOV node state, network diagnostics
- ✅ System: Kernel parameters, cgroup configuration, container runtime
- ✅ Monitoring: AlertManager, Prometheus, Telemetry settings
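If a single check needs a closer look, a few of the same signals can be pulled manually with `oc`:

```bash
# Manual equivalents of a few sno-ready.sh checks
oc get nodes    # node status
oc get co       # cluster operator health
oc get mcp      # machine config pool state
```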