Data networks still matter! And it's still tricky to keep them happy.
Given the increasing complexity of modern networks, the rise of SDN (Software-Defined Networking) and Intent-Based Networking, and the need for faster deployment cycles and reduced maintenance downtime, it is increasingly critical to adopt an automated, DevOps-oriented approach to managing our data networks.
- Manage network configurations as code with automated workflows
- Validate and deploy network changes using pipelines
- Enable dynamic, business-integrated networks using APIs and programmability
The purpose of this project is to demonstrate a DevOps approach towards BGP network configurations using GitHub Actions and the Cisco Crosswork Network Services Orchestrator (NSO).
This is the definition of the CI pipeline which GitHub will trigger everytime a commit is done to any branch. You can explore the different stages and the scripts invoked on each. Everything is based on bash scripts for ease of runner portability (no need to have anything else than bare Linux) and execution speed.
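As a rough illustration only (the job names and script paths below are assumptions, not the repository's actual workflow definition), a bash-script-driven pipeline of this shape could look like:

```yaml
# Hypothetical sketch of the CI workflow; stage names and
# script paths are illustrative, not the repo's real ones.
name: netdevops-ci
on:
  push:
    branches: ['**']

jobs:
  ci:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - name: Lint inventory
        run: bash pipeline/lint.sh
      - name: Build staging NSO container
        run: bash pipeline/setup/build_staging.sh
      - name: Run Robot tests
        run: bash pipeline/test/run_robot.sh
      - name: Clean staging environment
        if: always()
        run: bash pipeline/setup/clean_staging.sh
```

Keeping every step as a bash script means any bare Linux runner can execute the pipeline without extra tooling.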
`inventory/bgp-inventory.yaml`
This file contains the BGP configurations, in YAML format, that we want to apply to our data network.
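The file's exact schema isn't reproduced here, but as an illustration (the key names, device names, and ASNs below are assumptions, not the repo's actual schema), a BGP inventory of this kind typically looks like:

```yaml
# Hypothetical inventory shape -- the real keys in
# inventory/bgp-inventory.yaml may differ.
devices:
  - name: xr-router-1
    asn: 65001
    neighbors:
      - address: 10.0.0.2
        remote-asn: 65002
  - name: xr-router-2
    asn: 65002
    neighbors:
      - address: 10.0.0.1
        remote-asn: 65001
```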
All the resources for configuring, running and stopping the pipeline live here. This includes all the bash scripts for the different CI stages, the NSO preconfigs, and some utilities for enabling functionality. Everything inside is self-explanatory.
Our very basic NSO services, for testing purposes. Here you would version your real-life, far more complex service packages.
`services/tests/environments.yaml`
Every service contains its own `tests/` folder with its corresponding Robot test suite, as well as the definition of its environments for testing and provisioning. For the purposes of this demo there must be at least two environments: `test` and `production`.
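The shape of that environments file is not shown in this doc, but as a hedged sketch (key names and hosts below are assumptions, not the repo's actual contents), it could look like:

```yaml
# Hypothetical shape of services/tests/environments.yaml;
# keys, hostnames and ports are illustrative only.
environments:
  test:
    nso_host: 127.0.0.1
    nso_port: 8080
  production:
    nso_host: nso-prod.example.com
    nso_port: 8080
```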
In a nutshell, the CI/CD pipeline follows this approach:
- Every time the inventory file is updated (either via a new branch or a pull request against the `main` branch), a CI pipeline is triggered
- The file is linted for proper YAML formatting
- A staging environment is built. It consists of a Cisco NSO container loaded with NEDs (Network Element Drivers) that support the Cisco IOS XR devices targeted by the inventory file
- The repository contains a custom service called "devopsproeu-bgp" written in Python and YANG which is mounted on the NSO container
- Dummy virtual devices are created to simulate the provisioning of the BGP configurations of the inventory file
- A Robot test for BGP configurations is run in the NSO container
✅ A dry-run of the inventory configs in JSON format is performed on the NSO container using RESTCONF. If the return status is `200`, the orchestrator indicates that the configurations are valid, and the test passes.
✅ A commit of the inventory configs in JSON format is performed afterwards. If the return status is `204`, the orchestrator indicates that the configurations raise no conflicts, and the test passes.
🔥 Any other return code marks the test as failed.
- The test reports are bundled into an artifact which is uploaded as a zip file by this job
- The staging environment is cleaned up. That is, the NSO container is removed and any additional resources are wiped away
- If all tests pass and the target branch is `main`, the same testing procedure is applied, this time against the designated production NSO server.
✅ A dry-run of the inventory configs in JSON format is performed on the production NSO node using RESTCONF. If the return status is `200`, the orchestrator indicates that the configurations are valid, and the test passes.
✅ A commit of the inventory configs in JSON format is performed afterwards. If the return status is `204`, the orchestrator indicates that the configurations raise no conflicts with any existing configurations, and the test passes. This is a final commit to the target devices of a production environment.
🔥 Any other return code marks the test as failed, and provisioning goes no further.
- The test reports are bundled into an artifact which is uploaded as a zip file by this job
- A release is published in the repository, including the latest inventory file and the Robot test reports
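The dry-run-then-commit pattern above can be sketched with a small RESTCONF helper. This is a minimal illustration, not the repo's actual scripts: the NSO address, credentials, and the `devopsproeu-bgp:bgp` payload key are assumptions, while `dry-run=native` is NSO's RESTCONF query parameter for previewing device config without committing it.

```python
import base64
import json
import urllib.request

NSO = "http://127.0.0.1:8080"  # assumption: NSO RESTCONF on its default port

def restconf_request(path: str, payload: dict, user: str, password: str,
                     dry_run: bool = False) -> urllib.request.Request:
    """Build a RESTCONF PATCH request against NSO. With dry_run=True the
    'dry-run=native' query parameter asks the orchestrator to validate the
    change and show the would-be device config instead of committing it."""
    url = f"{NSO}/restconf/data/{path}"
    if dry_run:
        url += "?dry-run=native"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        method="PATCH",
        headers={
            "Content-Type": "application/yang-data+json",
            "Accept": "application/yang-data+json",
            "Authorization": f"Basic {token}",
        },
    )

# Dry-run first (the pipeline expects HTTP 200), then the same request
# without dry_run for the real commit (expecting HTTP 204).
req = restconf_request("tailf-ncs:services", {"devopsproeu-bgp:bgp": []},
                       "admin", "admin", dry_run=True)
print(req.full_url)
# http://127.0.0.1:8080/restconf/data/tailf-ncs:services?dry-run=native
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) is only done inside the pipeline scripts, where the HTTP status code decides whether the test passes.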
To make this demo yours, you can simply fork the project into your own repository and then change the files to suit your needs. Once forked, you can download the project to your self-hosted runner by using the following command:
git clone https://github.com/<your_github_user>/devopsproeu-netdevops-demo.git
This demo uses Python 3.x to render the file `pipeline/setup/docker-compose.js`. It is the only requirement for the self-hosted runner. Navigate to the root directory of this repo and run the following command:
pip install -r requirements.txt
Afterwards, download the official NSO Docker image and the free NEDs available at this link. At the time of writing, the available versions are for NSO v6.4.
Once downloaded, install the Docker image in your self-hosted runner using the following command:
docker load < <your_nso_docker_image.tar>
For this demo we will use the production image, as it allows us to compile packages, create netsims, and run NSO in the same container. As for the architecture (ARM, x86), choose the one that suits your self-hosted runner. I have tested this demo with both, and both image releases are stable and solid.
Afterwards, navigate to the file `pipeline/setup/config.yaml` and populate the required fields. Each section explains the information needed.
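Rendering the compose file from those fields can be done with simple templating. The snippet below is only a sketch of the idea: the key names and image tag are invented for illustration, and the repo's actual rendering script presumably reads `pipeline/setup/config.yaml` rather than inlining the values.

```python
from string import Template

# Hypothetical values that pipeline/setup/config.yaml might provide;
# the real keys and the repo's actual rendering logic may differ.
config = {"nso_image": "cisco-nso-prod:6.4", "nso_port": "8080"}

COMPOSE_TEMPLATE = Template("""\
services:
  nso:
    image: $nso_image
    ports:
      - "$nso_port:8080"
""")

# Substitute the config values into the compose template.
rendered = COMPOSE_TEMPLATE.substitute(config)
print(rendered)
```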
We now have all our files in place. The only thing left to do is to set up and run your own self-hosted GitHub runner. For that, follow the instructions at this link.
You can save the self-hosted runner folder in the root directory of this repo under the name `actions-runner`. It will be ignored by git in future commits. Now, activate the runner with the following command:
./run.sh
If you see the following output, your self-hosted runner is ready for showtime!
√ Connected to GitHub
2019-10-24 05:45:56Z: Listening for Jobs
Now, commit your changes to any branch, and you will see the progress of the pipeline in the Actions tab of your repository.
Made with lots and lots of ❤️ by Alfonso (Poncho) Sandoval