This project leverages the capabilities of Containerlab to build and run container-based labs and uses torero as a unified execution layer across your automation tools - Ansible, OpenTofu, Python, and more. It is meant to explore the art of the possible when scaling automation across on-premises and public-cloud infrastructure.
This project contains the following components:
- library contains ready-to-use automations separated by tool (Ansible, OpenTofu, and Python).
- labs holds purpose-built Containerlab topologies to run automations against; each lab has a 1:1 relationship with an import file.
- imports are .yml files that inventory a set of automations pertaining to a given lab; these are referenced in Containerlab topology files and get imported to the lab at runtime.
Finding examples of automation is great, but it doesn't get you very far unless you have purpose-built environments to run the automation against. To get the maximum value out of this project, you will need to set up the following:
- Containerlab - an open-source tool for building ephemeral, container-based networking labs with multi-vendor topologies. Its ephemeral nature means that labs can be easily deployed, tested, and then destroyed, making it ideal for short-term experimentation and continuous integration workflows.
- Network OS Images - network operating system images (e.g., Cisco, Arista, Juniper, Nokia) required to run Containerlab topologies. You must obtain these images directly from the respective vendors, as they are not provided by Containerlab or Itential due to licensing restrictions. You will need to follow different steps for downloading and importing these images depending on the vendor.
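For Arista cEOS, for example, the image is distributed as a tarball that you import into your local container image store before Containerlab can reference it. A rough sketch of the steps is below; the filename is an example and depends on the version you download from arista.com:

```
# Import the downloaded cEOS tarball as a local container image
# (use the actual filename of the archive you downloaded)
docker import cEOS64-lab-4.34.1F.tar.xz ceos:4.34.1F

# Verify the image tag matches what the topology file expects
docker images ceos
```

Other vendors ship images in different formats (qcow2, container images, etc.), so check the Containerlab documentation for the kind you are deploying.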
Let's start out by running a basic lab that demonstrates backing up configuration from an Arista device. Now, we could just configure a single Arista node and back up its configuration, but where is the fun in that? This lab provisions a Layer 3 Spine-Leaf topology with a BGP EVPN-VXLAN overlay for multi-tenant network segmentation.
- 2 spine switches and 4 leaf switches
- Layer 3 routed links between spine and leaf switches with /31 subnets
- BGP for underlay (IPv4) and overlay (EVPN) routing
- VXLAN configuration with VNIs for VLAN bridging and L3 VRF routing
- ECMP load balancing across spine switches
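To make the bullets above concrete, here is an abbreviated, illustrative sketch of the underlay side of a leaf's configuration in EOS syntax. The interface names, addresses, and AS numbers here are made up for illustration; the actual per-device configs live in the lab's config/ directory:

```
! illustrative only -- the real configs are in config/leaf1.cfg, etc.
interface Ethernet1
   description uplink-to-spine1
   no switchport
   ip address 10.0.0.1/31
!
router bgp 65101
   router-id 10.255.0.11
   maximum-paths 2          ! ECMP across both spine switches
   neighbor 10.0.0.0 remote-as 65100
   address-family ipv4
      neighbor 10.0.0.0 activate
```

Each routed spine-leaf link consumes a single /31, and BGP ECMP lets every leaf load-balance across both spines.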
Follow the documentation here to download and install Arista cEOS. Arista requires you to create an account at https://arista.com prior to downloading any images.
Clone the showtime repository to the machine where you have Containerlab installed and change directories to the backup-arista lab:
git clone https://github.com/torerodev/showtime.git \
&& cd showtime/labs/ansible/backup-arista \
&& ls -l

A topology file in Containerlab is a .yml-based definition of your virtual network setup, including nodes (e.g., routers, switches, or hosts), their properties, and the links between them. This allows you to describe complex scenarios in a declarative way. Each topology will launch torero as an automation gateway node and import the automations in scope for a given lab at runtime. To make this easy to track, import file names match lab + topology file names minus the required .clab suffix.
---
name: backup-arista  # Lab name matches imports/ansible/backup-arista.yml
mgmt:
  network: backup-arista
  ipv4-subnet: 198.18.1.0/24
topology:
  kinds:
    arista_ceos:
      image: ceos:${CEOS_VERSION:=4.34.1F}  # Use ENV variable to pass version else use default
  nodes:
    agw:
      kind: linux
      image: ghcr.io/torerodev/torero-container:${TORERO_VERSION:=latest}
      mgmt-ipv4: 198.18.1.5
      ports:
        - "8001:8000"
        - "8080:8080"
      env:
        ENABLE_SSH_ADMIN: "true"  # Enable simple ssh login with admin:admin
        ENABLE_API: "true"        # Enable API at runtime
        ENABLE_MCP: "true"        # Enable MCP at runtime
      binds:
        - $PWD/data:/home/admin/data
      # Import automations that are in scope for this lab at runtime
      exec:
        - "runuser -u admin -- torero db import --repository https://github.com/torerodev/showtime.git imports/ansible/backup-arista.yml"
    # spines
    spine1:
      kind: arista_ceos
      startup-config: config/spine1.cfg  # Base config applied to device at startup
      mgmt-ipv4: 198.18.1.11             # Management IP; assigned based on inventory file
    spine2:
      kind: arista_ceos
      startup-config: config/spine2.cfg
      mgmt-ipv4: 198.18.1.12
    # leafs
    leaf1:
      kind: arista_ceos
      startup-config: config/leaf1.cfg
      mgmt-ipv4: 198.18.1.21
    leaf2:
      kind: arista_ceos
      startup-config: config/leaf2.cfg
      mgmt-ipv4: 198.18.1.22
    leaf3:
      kind: arista_ceos
      startup-config: config/leaf3.cfg
      mgmt-ipv4: 198.18.1.23
    leaf4:
      kind: arista_ceos
      startup-config: config/leaf4.cfg
      mgmt-ipv4: 198.18.1.24
  links:
    # spine1 to leaf connections
    - endpoints: ["spine1:eth1", "leaf1:eth1"]
    - endpoints: ["spine1:eth2", "leaf2:eth1"]
    - endpoints: ["spine1:eth3", "leaf3:eth1"]
    - endpoints: ["spine1:eth4", "leaf4:eth1"]
    # spine2 to leaf connections
    - endpoints: ["spine2:eth1", "leaf1:eth2"]
    - endpoints: ["spine2:eth2", "leaf2:eth2"]
    - endpoints: ["spine2:eth3", "leaf3:eth2"]
    - endpoints: ["spine2:eth4", "leaf4:eth2"]

Use the following command to deploy the topology:
export CEOS_VERSION=4.34.1F  # Set environment variable to the version of the image you imported
export TORERO_VERSION=latest # Set environment variable to pull a specific version from Docker Hub
clab deploy -t backup-arista.clab.yml

Note: Be sure to set environment variables for the images you have imported in your environment. You can do this prior to running the clab deploy command. Example: export CEOS_VERSION=4.34.1F. Deploying a topology without setting environment variables will run the topology with the default values set in the topology file.
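The ${VAR:=default} syntax used in the topology file is standard shell parameter expansion: if the variable is unset or empty, the default after := is substituted (and assigned); otherwise your exported value wins. A quick way to see the behavior for yourself:

```shell
# unset -> the default after ':=' is substituted
unset CEOS_VERSION
echo "ceos:${CEOS_VERSION:=4.34.1F}"   # prints ceos:4.34.1F

# exported value -> the default is ignored
export CEOS_VERSION=4.33.1F
echo "ceos:${CEOS_VERSION:=4.34.1F}"   # prints ceos:4.33.1F
```

This is why deploying without exporting anything still works: the topology falls back to the defaults baked into the file.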
First, let's connect to the torero node via SSH with the default login 'admin:admin':

ssh admin@198.18.1.5

Next, we can run the service:

torero run service ansible-playbook backup-arista

Thanks for exploring! This project is designed to empower network engineers and automation enthusiasts to experiment with scalable, container-based automation workflows using Containerlab and torero. By combining ephemeral network labs with a unified automation execution layer, the possibilities are endless. We will continue to add automation examples and labs to the project over time.
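One last housekeeping step: because Containerlab labs are ephemeral, cleanup is a single command once you are done. The destroy subcommand mirrors deploy and takes the same topology file:

```
# Tear the lab down when finished
clab destroy -t backup-arista.clab.yml
```

This removes the lab's containers and links, leaving your host ready for the next experiment.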




