Setting up a VX lab environment in GNS3 1.4


I've long been a big fan of GNS3, the open-source virtual network simulation environment that enables folks to run a live multivendor network topology with a nice graphical front end. I have been using GNS3 since the 0.7.x days to lab out various network scenarios, and have used it to prepare for a few certification exams in the process. Believe me when I say that things have certainly come a long way since the old days 🙂

I was also very excited when, a few years ago, I heard there was a new vendor on the block offering a pure Linux distribution as the OS for switching platforms - a vendor that has since taken the networking world by storm. Yes, I'm referring to our kind hosts here, Cumulus Networks, with their Cumulus Linux offering. I remember pleading back then with the powers that be to release a VM-based Cumulus Linux, so we could lab and learn in our own environments. (Cumulus has long offered their "Workbench" remote labs for signup, but those have a pre-set topology and aren't really available "on demand" to the student...) So of course I was very pleased when, in August 2015, they released Cumulus VX VMs for VirtualBox, VMware and KVM.

Now, I have used Vagrant for a while to spin up Linux platforms for test purposes, and have certainly also used Vagrant to construct multi-host topologies with Cumulus VX for various labs. This has been made even more powerful with the recent release of the Cumulus Topology Converter, which uses "dot" files (Graphviz graph files, as used with PTM) to construct Vagrantfiles that instantiate multi-host labs. That's all very good, but sometimes you want to see a diagram of what you are working on and be able to interact with it while you work, whether that's making changes to the topology graphically or working with its elements. That's exactly what GNS3 brings to the table. And so GNS3 remains my lab platform of choice - I guess I'm just a visual learner 🙂 Over the last few months, I have gotten a decent GNS3 lab platform working with Cumulus VX, and decided to share what I've learned along the way. Hopefully this will be helpful to other GNS3 users looking to create labs with Cumulus VX.

Since the 1.2 version of GNS3 was released, it has been possible to separate the GUI portion of the app from the backend ("server") portion that actually orchestrates the VMs (Dynamips for legacy Cisco router emulation, IOS-on-Linux for the newer Cisco platform emulations, and VMware, VirtualBox and QEMU/KVM for more traditional VMs, such as Linux or network vendor offerings like Juniper's vMX and vSRX, Arista's vEOS, and of course Cumulus' VX). So I run the GUI on either my Linux laptop or my MacBook, and the backend machine is a nice Dell PowerEdge server loaded with RAM, running CentOS 7 Linux. This lets me use a lightweight laptop to construct the topology and control the devices, while running far bigger topologies and more resource-intensive VMs than I ever could on a laptop alone. (Of course, this also means I cannot run topologies when I'm offline, but as I'm not a road warrior, that hasn't been a problem for me.)

You can find the server installation instructions at the GNS3 project's site - I chose to install it via pip3, since they helpfully publish the software to PyPI (I'm currently running v1.4.0). I then run the server "in the foreground" by typing "gns3server" at the bash prompt in a tmux session: this lets me watch the server's log messages in real time as it starts up and as I start/modify/stop a topology, but also lets me detach the session and log out of the server when I'm done (with the gns3server process still running). They also have instructions on how to run the server as a daemon, so it starts automatically when the system boots and runs in the background as usual.
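For reference, the install-and-run sequence on the server boils down to something like this (a minimal sketch - the package on PyPI is named gns3-server):

sudo pip3 install gns3-server
tmux new-session -s gns3     # start a detachable terminal session
gns3server                   # runs in the foreground, streaming its logs
# detach with Ctrl-b d; reattach later with: tmux attach -t gns3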

On my GNS3 server, I created a user account ("gns3user") to run the GNS3 server process. In its home directory, /home/gns3user, I created a "GNS3" directory to hold the server-side files, and under that made two sub-directories: "images" and "projects". In "images" live my various IOS, Juniper, and VX image/VM-disk files; "projects" is the directory where all of the instantiated GNS3 project files will live on the server.
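Replicating that layout is just a couple of commands (a sketch, using the names above; adjust to taste):

sudo useradd -m gns3user
sudo -u gns3user mkdir -p /home/gns3user/GNS3/images /home/gns3user/GNS3/projects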

Once that is established, I turn to my laptop, where I have installed the UI portion (GNS3-gui), and make the following settings changes under Edit > Preferences, in the Server preferences pane:
  • "Local server" tab - UNcheck "Enable local server"
  • "Local GNS3 VM" tab - UNcheck "Enable the local GNS3 VM"
  • "Remote servers" tab - Set up the remote host parameters based on your remote GNS3 server's details (the important parts for me were the remote host IP, and the remote username [gns3user] and password.)
I then took the Cumulus VX Qcow2 image that I had downloaded (I currently use v2.5.5) and uploaded it to my gns3server via scp, into /home/gns3user/GNS3/images. The first time I tried to use the image with GNS3, I found some gotchas[1][2] and devised some workarounds. So before setting it up in GNS3, I ran it directly from the server's bash prompt in order to make the edits needed to enable the serial console in the VM (which the GNS3 "Console" menu option requires). The command I used to run it was:
/usr/bin/qemu-system-x86_64 -curses -name CumulusVX-2.5.5-base -m 256M -smp cpus=1 -enable-kvm -boot order=c -drive file=/home/gns3user/GNS3/images/CumulusVX-2.5.5-cc665123486ac43d.qcow2,if=ide,index=0,media=disk
Log in on the console that pops up, "sudo -i" to root, and make the edits specified in [2] below; when done, issue "/sbin/shutdown -h now" to power off the VM.
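In case that second link ever goes stale, the gist of the edits - my paraphrase for a Debian-wheezy-based image like VX 2.5.x, not the verbatim contents of [2] - is to put a serial console on the kernel command line and have a getty answer on ttyS0:

# In /etc/default/grub, add the serial console to the kernel command line,
# then run 'update-grub' to regenerate the boot config:
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"

# In /etc/inittab, uncomment (or add) the serial getty line:
T0:23:respawn:/sbin/getty -L ttyS0 115200 vt100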

With that done, back in the GNS3 GUI under Edit > Preferences, QEMU VMs pane, I set up the parameters for my VX VM by clicking the "New" button below the VM list section. Most of the settings are self-explanatory, but a few points worth noting:
  • In the "Disk Image" screen, you must type/paste in the relevant path on the remote server of the .qcow2 file to be used; the "Browse" button (unsurprisingly) only works for local paths. (On my system, the value I used was "/home/gns3user/GNS3/images/CumulusVX-2.5.5-cc665123486ac43d.qcow2" - yours probably will be different.)
  • After the disk is entered, click the "Finish" button, which will create the entry for the VM template, then click to highlight the VM you just created, and click the "Edit" button.
  • Click on the "Network" tab in the resulting window, then drop down the "Type" box and select "Paravirtualized Network I/O (virtio-net-pci)". Also ensure that the checkbox for "Use the legacy networking mode" is UNchecked. Finally, change the "Adapters:" box to the value "4" (this will create a VX VM with one eth0 management interface and three front-panel "swp" ports [1-3].)
  • On the next tab, "Advanced settings", ensure that the checkbox for "Use as a linked base VM" is checked (it should be by default, but it doesn't hurt to verify.)
When all of the above settings are done, click the "OK" button to save the VM configuration.
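For the curious: those GUI settings just shape the QEMU command line that the server builds for each linked clone. Conceptually it ends up resembling the manual invocation from earlier, plus one paravirtualized NIC per adapter - the below is purely illustrative (the exact flags GNS3 generates, and the clone's file path, will differ):

/usr/bin/qemu-system-x86_64 -name CumulusVX-1 -m 256M -smp cpus=1 -enable-kvm \
    -drive file=<linked-clone>.qcow2,if=ide,index=0,media=disk \
    -device virtio-net-pci,netdev=gns3-0 -netdev <gns3-managed-link>,id=gns3-0
(...plus three more -device/-netdev pairs, one per adapter)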

In the GNS3 GUI, I then instantiated a few virtual switches by dragging the new QEMU VM object (I named it "CumulusVX") onto the canvas a few times, and started them up. I verified that console access was working by right-clicking on each VX switch immediately after starting it and choosing "Console" (I could also have clicked the ">_" button on the top toolbar to bring up consoles to all devices at once). I watched them boot and finally land at the "cumulus login:" prompt. That being done, I logged in to each in turn and gracefully powered them off (since QEMU VMs can only be interconnected in GNS3 while in the down state.)

Now, to provide an interface for connecting to the management interfaces of the VX VMs, I created a "tap" interface on the server via the command: sudo ip tuntap add tap0 mode tap user gns3user
And then gave it an IP address thusly: sudo ip addr add 192.168.0.254/24 dev tap0
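Depending on how GNS3 attaches to the tap, you may also need to bring the link up yourself; either way, a quick sanity check doesn't hurt:

sudo ip link set tap0 up
ip addr show tap0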

I later set this up as a boot-time initialized interface in CentOS by creating a /etc/sysconfig/network-scripts/ifcfg-tap0 file with the contents:
DEVICE=tap0
ONBOOT=yes
TYPE=Tap
USERCTL=yes
BOOTPROTO=none
NETWORK=192.168.0.0
IPADDR=192.168.0.254
NETMASK=255.255.255.0
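With that file in place the interface comes up at boot; to bring it up immediately without rebooting, the stock CentOS initscripts helper works:

sudo ifup tap0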
That being done, I then went back into the GNS3 GUI, dragged out a "Cloud" object, then right-clicked it and chose "Configure". I clicked on the "TAP" tab, entered "tap0" in the TAP interface field, and clicked "Add"; this created an entry "nio_tap:tap0" in the box below. I clicked "OK" to save the configured Cloud object. Then I dragged out an "Ethernet switch" object (think of it as a virtual Netgear switch 🙂) and connected each VX VM's "Ethernet0" port (this is the VX's 'eth0' management port) to a port on the Ethernet switch object. Finally, I connected another port on the Ethernet switch object to the Cloud's "nio_tap:tap0" interface. Together these compose the "out-of-band" management network that I use to interact with the VX switches from the GNS3 server's command line. After this was all done, I also interconnected one VX switch to another as desired with links between their "Ethernet1" thru "Ethernet3" ports (these are the VX's 'swp1' thru 'swp3' interfaces.)

Finally, to provide DHCP service thru the management network, I installed the 'dnsmasq' package, and created a "dnsmasq" subdirectory in the /home/gns3user directory for the dnsmasq configuration and leases files. In the configuration file 'default.conf' I put the following directives:
domain-needed
bogus-priv
no-resolv
no-poll
server=192.168.1.254
server=8.8.8.8
no-hosts
addn-hosts=/etc/dnsmasq.d/hosts.conf
expand-hosts
domain=test.local
bind-dynamic
interface=tap0
dhcp-authoritative
dhcp-range=tap0,192.168.0.2,192.168.0.99,4h
dhcp-leasefile=/home/gns3user/dnsmasq/default.leases
And then started up dnsmasq via the command: sudo dnsmasq --conf-file=/home/gns3user/dnsmasq/default.conf
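Later on, once the VMs are up and have pulled leases, a quick way to see what dnsmasq handed out is the leases file it keeps:

cat /home/gns3user/dnsmasq/default.leases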

Then the moment of truth - I started all of the VX VMs via the GNS3 GUI, logged in via the console, and saw that they all had obtained DHCP addresses on my 192.168.0.0/24 management network 🙂 I could then SSH to each VX VM directly from a shell on my GNS3 server, and also use tools such as Ansible from the server to manage and configure them.
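To make that last point concrete, here's a tiny sketch of an ad-hoc Ansible check run from the server - the inventory path and switch addresses are hypothetical stand-ins for whatever your lab actually leased, and VX's stock login is the "cumulus" user:

# /home/gns3user/vx-lab/hosts - a throwaway inventory for the lab
[vx_switches]
192.168.0.2
192.168.0.3
192.168.0.4

# ad-hoc connectivity check (-k prompts for the SSH password)
ansible -i /home/gns3user/vx-lab/hosts vx_switches -m ping -u cumulus -k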

Since I've written most of a small book here, I think I'll end at this point 🙂 If you have any questions, leave a comment here and I'll do my best to check in and answer questions as they come... I can also be found at @willarddennis on the Twitters, or @wdennis on the Cumulus Slack community.

Happy Labbing!

[1] https://community.cumulusnetworks.com/cumulus/topics/cannot-successfully-connect-vx-vms-in-gns3
[2] https://community.cumulusnetworks.com/cumulus/topics/enable

2 replies

This is a great post! I was not aware GNS3 could decouple the frontend from the backend. This could be a useful option for virtual interop testing, where VMs from Cisco/Arista/Juniper don't support Vagrant. I'll have to poke and prod at this at some point!
I didn't seem to be able to get a VNC console using Cumulus VX 3.0.0 on Ubuntu 16.04 - looked like it was doing something funky with the VGA console.

Anyway, I added '-vga cirrus' to the advanced options and that fixed it, in case anyone has the same issue.
