Routing on the host, Quagga in Docker

Hi folks

I recently saw the new Quagga in Docker on the Cumulus GitHub.
I like the idea of going L3 and running ECMP from the host towards the leaf switches.
However, I'm trying to understand how practical the deployment is.

In most environments, the hosts and compute nodes are either:
- VMware ESX
- OpenStack compute node
- hyper-converged appliances like Nutanix
- other virtualization nodes (Xen, Proxmox, oVirt, Red Hat, etc.)

On such nodes, how can we deploy the Quagga Docker container? I think it only works on a physical host running Linux, right?


1 reply


- VMware ESX

You could do something like a Virtual Router (e.g. use Cumulus VX as a Virtual Router), but this has not been tested, and my concern is that a Virtual Router still relies on the underlying hypervisor for correct hashing. This is roughly how I draw a Virtual Router (it sits between the VMs and the NICs).

Now imagine 4 NICs. The hypervisor might hash per packet rather than per flow, so this would require testing. I think a lot of Virtual Routers on the market are really virtual switches and don't think about this concern 😕 Let me know if you test this.
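For comparison, on a plain Linux host the per-flow vs. per-packet question is under your control. A rough sketch of what per-flow ECMP towards two leaf uplinks looks like (interface names and addresses here are made up for illustration, and the routes would normally come from Quagga via BGP rather than be static):

```shell
# On kernels that support it, prefer L4 (5-tuple) hashing so ECMP
# is per flow rather than per packet
sysctl -w net.ipv4.fib_multipath_hash_policy=1

# Static stand-in for what BGP would install: one default route with
# two equal-cost next hops, one towards each leaf switch
ip route add default \
    nexthop via 10.1.1.1 dev eth0 weight 1 \
    nexthop via 10.1.2.1 dev eth1 weight 1
```

With a hypervisor's virtual switch in the path, you don't get to set that hashing behavior yourself, which is exactly the concern above.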

- OpenStack compute node

This works and we have a demo 🙂

- hyper-converged appliances like Nutanix

I think Nutanix would be a prime candidate. However, a lot of their designs require L2: the Nutanix boxes actually peer via L2 to create a mesh of storage. I think an L3 solution + VXLAN would be great for scaling Nutanix, and I expect to see more on this soon. I am thinking of doing a blog article on hyper-converged architecture. Think of the new 4 x 10Gb blades that Nutanix has: 4 nodes per 2RU means 16 x 10Gbps per 2RU. Imagine a rack with 4 ToRs (42 - 4 = 38 spare RUs for Nutanix), so you can fit 19 Nutanix 2RU appliances.

16 x 10Gbps ports * 19 Nutanix appliances = 304 x 10Gbps ports

A Trident 2 breaks out to 104 ports, so even with 3 ToRs we barely have enough ports to cover the Nutanix nodes (104 * 3 = 312 > 304).
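The port budget above works out like this (assuming 42RU racks, 2RU appliances with 16 x 10Gbps each, and 104-port Trident 2 ToRs, per the numbers in this thread):

```shell
# Rack space left for appliances after the ToRs
echo $(( 42 - 4 ))    # 38 spare RUs
# 2RU per Nutanix appliance
echo $(( 38 / 2 ))    # 19 appliances per rack
# 16 x 10Gbps ports per appliance
echo $(( 16 * 19 ))   # 304 x 10Gbps host-facing ports needed
# Three Trident 2 ToRs at 104 ports each
echo $(( 104 * 3 ))   # 312 switch ports, just barely enough
```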

The oversubscription of high-density hyper-converged nodes starts making you want more switches, which makes white box even more compelling (automation + cost reduction). Again, I need to draw this out for it to make more sense... look for a blog post soon.

- other virtualization nodes (Xen, Proxmox, oVirt, Red Hat, etc.)

Well, any Linux host running Docker makes this easy, which is why we are focusing on containers in the short term. Containers just make it really easy for us to package and distribute Quagga onto the host. Expect to see more on this soon.
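As a rough sketch of what "Quagga in a container on the host" means in practice (the image name and config path below are illustrative assumptions, not the official ones):

```shell
# Run the Quagga container in the host's network namespace so its
# BGP/OSPF sessions run over the host's NICs and the routes it learns
# are installed into the host's own routing table.
# NOTE: image name and config mount are assumptions for illustration.
docker run -d \
    --name quagga \
    --net=host \
    --privileged \
    -v /etc/quagga:/etc/quagga \
    cumulusnetworks/quagga
```

The key part is `--net=host`: the whole point of routing on the host is that the daemon manipulates the host's real routing table, not an isolated container namespace.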