The MLAG demo is in the vagrant/demos/mlag directory. It uses Vagrant with VirtualBox to create a large layer 2 network of 8 switches and 10 hosts. For simplicity, the whole configuration is contained in the single Vagrantfile. No other provisioning tools, such as Ansible, Puppet, or Chef, are used.
The network is a collection of hosts that are dual-connected to a pair of top-of-rack (ToR) switches, forming a rack. Each rack is then dual-cross-connected to a pair of pod switches, forming a pod. Each pod is in turn dual-cross-connected to a pair of data center switches, which tie together the entire data center. This is shown graphically below:
The file properties.yml defines the number of hosts per rack (default is 5), the number of racks per pod (default is 2), and the number of pods per data center (default is 1). The defaults create a total of 18 virtual machines: 8 switches (4 ToR, 2 pod, and 2 data center) and 10 hosts. Note that this is different from the number of hosts and switches shown in the figure above.
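For reference, a sketch of what properties.yml might contain — the key names here are hypothetical, but the values are the documented defaults:

```yaml
# Hypothetical sketch of properties.yml; the actual key names used by
# the demo may differ. Values shown are the documented defaults.
num_hosts_per_rack: 5   # hosts dual-connected to each ToR pair
num_racks_per_pod: 2    # ToR pairs per pod-switch pair
num_pods: 1             # pod-switch pairs per data-center pair
```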
The switches, of course, run Cumulus VX, and the hosts run Ubuntu Precise Pangolin (12.04). You can change the size of the "deployment" by changing the values in the properties.yml file. But beware: increasing the values in this file can cause a HUGE number of VMs to be created.
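The relationship between the three properties and the resulting VM count can be sketched in a few lines of Python (this helper is illustrative, not part of the demo):

```python
# Illustrative helper, not part of the demo: computes how many VMs a
# given properties.yml sizing would create.
def vm_count(hosts_per_rack=5, racks_per_pod=2, pods=1):
    tor_switches = 2 * racks_per_pod * pods   # one ToR pair per rack
    pod_switches = 2 * pods                   # one pod-switch pair per pod
    dc_switches = 2                           # one data-center pair
    hosts = hosts_per_rack * racks_per_pod * pods
    return tor_switches + pod_switches + dc_switches + hosts

print(vm_count())        # the defaults yield 18 VMs
print(vm_count(5, 4, 2)) # modest growth already yields 62 VMs
```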
The Vagrantfile defines the properties of the virtual machines, such as the number of network interfaces and the amount of memory, as well as the connections (called "networks" in VirtualBox terminology) between the virtual machines. The Vagrantfile also provisions each virtual machine, loading whatever configuration is necessary. For the switches, this means creating an /etc/network/interfaces file and loading it. For the hosts, the bonding driver must be installed and loaded, and then the /etc/network/interfaces file is loaded.
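As an illustration of what the host-side provisioning produces, a minimal Debian-style bond stanza in /etc/network/interfaces might look like the following. The interface names, address, and bond mode here are assumptions for illustration, not taken from the demo:

```
auto bond0
iface bond0 inet static
    # illustrative address; the demo assigns its own addressing scheme
    address 10.99.0.1
    netmask 255.255.255.0
    # enslave the two NICs that connect to the rack's ToR pair
    bond-slaves eth1 eth2
    bond-mode 802.3ad
    bond-miimon 100
```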
NOTE: When running MLAG on a virtual machine, you must include the "--vm" option to clagd. Include this line with the other clagd configuration options on the peer link in /etc/network/interfaces:
    clagd-args --vm

To use this virtual environment, make sure that you have VirtualBox and Vagrant installed on your system. Then run:
    vagrant up

This will spin up all 18 virtual machines. Vagrant does this serially, so be patient. Make sure you have plenty of available memory, since each virtual machine takes about 192MB. You can check that the entire virtual network is up and running by executing the ServerSpec tests supplied with the demo. Make sure you have ServerSpec installed on your system, and then run:
    rake spec

Once the VMs have been created, provisioned, and verified with ServerSpec, you can log into them with:
    vagrant ssh hostname

where hostname is the name of the host or switch. The machines are named:
- s1 and s2, for the two data center switches
- pNs1 and pNs2 for the two pod switches at the top of each pod, where N is the pod number, starting from 1. For example, p1s1 or p1s2.
- pNrMs1 and pNrMs2 for the two top of rack switches, where N is the pod number, starting from 1, and M is the rack number, starting from 1. For example, p1r1s1, p1r1s2, p1r2s1, or p1r2s2.
- pNrMhH for the hosts, where N is the pod number, starting from 1, and M is the rack number, starting from 1, and H is the host number, starting from, you guessed it, 1. For example, p1r1h1, p1r1h5, p1r2h2, or p1r2h4.
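The naming scheme above is regular enough to generate programmatically. A small illustrative Python sketch (not part of the demo):

```python
# Illustrative sketch of the demo's naming scheme; not part of the demo itself.
def machine_names(pods=1, racks_per_pod=2, hosts_per_rack=5):
    names = ["s1", "s2"]                             # data center switches
    for p in range(1, pods + 1):
        names += [f"p{p}s1", f"p{p}s2"]              # pod switches
        for r in range(1, racks_per_pod + 1):
            names += [f"p{p}r{r}s1", f"p{p}r{r}s2"]  # top-of-rack switches
            names += [f"p{p}r{r}h{h}"                # hosts in this rack
                      for h in range(1, hosts_per_rack + 1)]
    return names

names = machine_names()
print(len(names))   # 18 with the default sizing
print(names[:4])    # ['s1', 's2', 'p1s1', 'p1s2']
```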
Once logged into a switch, you can check the status of MLAG:

    $ clagctl

Other useful commands for inspecting a switch's layer 2 configuration include:

    $ ip link show
    $ brctl show
    $ sudo vi /etc/network/interfaces

On the hosts, you can ping any of the other hosts:

    $ ping 10.99.0.3

Once you are done using your virtual MLAG environment, you can get rid of everything with:
    vagrant destroy -f

After using the demo, try modifying it to include other features or to more closely match your environment. For example, you could place the L2/L3 boundary at the pod switches by using VRR and OSPF.