Routing on the Host user guide


Here is the Routing on the Host user guide.

https://docs.cumulusnetworks.com/display/ROH/Routing+on+the+Host

In a typical data center, connections between servers and the leaf or top-of-rack switches are often made at layer 2. To build more resilient data centers, many Cumulus Networks customers are leveraging the Linux ecosystem and running layer 3 routing protocols like OSPF (Open Shortest Path First) or BGP (Border Gateway Protocol) directly on their hosts. This is often referred to as Routing on the Host. In Cumulus Linux 3.0, Routing on the Host works on server hosts in a number of different environments:
  • Ubuntu 12.04, 14.04 and 16.04
  • Red Hat Enterprise Linux 7
  • Docker containers
Routing on the Host provides you with:
  • Simplified, modern data center design
  • Subnet freedom and mobility
  • Enhanced redundancy
  • Stateless services with Anycast

33 replies

Simon Woodhead wrote:

This is a good guide but can you possibly point me to one for the containerisation side of things...

I'll check it out. Thanks!
Hi Simon, one of our sales engineers wrote up an anycast design guide that I published just the other day. Let me know if this helps you at all.

https://docs.cumulusnetworks.com/display/DOCS/Anycast+Design+Guide
Thanks Eric. This is about where we've ended up, with a slight difference. Because we want a single prefix to span multiple hosts, we need to ensure that any container seeking to reach addresses in that prefix is routed away from the host rather than flooded out the relevant interface, given there is no layer 2 adjacency between them. The primary application for this is anycast, but also the kind of mobility you describe, without any specific host configuration.

We found this plugin (https://github.com/medallia/cnm-routed-plugin) which, together with a privileged container, looks like it'll do the job. As you say, we'll use the host IP as the next hop rather than routing through Quagga.
Thanks again,
Simon
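The routing arrangement described above can be sketched in a single command (an illustrative fragment, not a tested configuration; the prefix and host address are placeholders): inside each container, a route for the shared prefix points at the host, so traffic is forwarded through the host's routing table rather than flooded out an interface:

```
# Inside the container; <anycast-prefix> and <host-ip> are placeholders,
# not literal values. Forces traffic for the shared prefix back via the host.
ip route add <anycast-prefix> via <host-ip>
```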

What I see is that the server's management IP address is typically already applied, which is how the Quagga installation is initially deployed/configured (if the server image is not deployed with a base Quagga config on initial turn-up). Most people run a single instance of Quagga in a privileged container, giving it access to the kernel/server interfaces to set up routing adjacencies and install routes into the kernel. This container will advertise all the other bridges/layer 3 subnets/host routes running for the containers on that server, as the case may be.

Using tech like BGP unnumbered gets you away from having to worry about the IP addresses on interfaces, and allows the server to be picked up and moved across the DC arbitrarily, as the BGP relationship will re-establish anywhere it happens to get plugged in. Complicated bridge configs are not needed here unless you prefer that method or have some other constraint.

The important thing to remember (which I always forget to mention) is that we're not using Quagga as a vrouter with namespaces or anything like that; external traffic leaving/entering the containers does not pass through the Quagga container first.
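As a rough illustration of the BGP unnumbered approach described above (a sketch, not an official configuration: the ASN, router ID, and interface names are assumptions, and exact syntax varies by Quagga version), the privileged container's bgpd config might look something like:

```
router bgp 65101
 bgp router-id 10.1.1.1
 ! BGP unnumbered: peer over the interface itself, no neighbor IPs to manage
 neighbor eth1 interface remote-as external
 neighbor eth2 interface remote-as external
 ! advertise the container bridges/subnets/host routes on this server
 redistribute connected
```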
The container networks on the host would be advertised to its neighbors via OSPF or BGP. Your host has L3 connectivity, instead of L2, to your Cumulus switch.
This is a good guide, but can you possibly point me to one for the containerisation side of things? As a network guy, getting a packet to the host is easy, and running Quagga in a container to make the announcements makes perfect sense. The bit I'm struggling to articulate to the DevOps guys is how that packet gets to a container once it lands on the host, without the host having the IP address configured on it beforehand! Should we, for example, be running Quagga in every container, with the next hop being a loopback address configured in the container?

The DevOps-y way of solving all this seems to be bridges and ridiculously complicated (to me!) sequences of commands, which just feels wrong. We're using RancherOS, so anything Docker-like we should be able to adapt and hopefully find a common language!! Thanks!
Mr Roger wrote:

how does this integrate with KVM using openvswitch, would like to find out how to inject the rout...

I have not looked at that particular integration; however, it should be possible. If you can add a hook to Open vSwitch to add the route via a CLI command, that would work. The other option that comes to mind is to write a little daemon that looks at the configured bridges at some predefined interval and adds/removes them from the routing fabric.
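The polling-daemon idea can be sketched as follows (an editorial sketch; the function name and the vtysh command strings are illustrative assumptions, not from the guide). It diffs the subnets currently seen on the bridges against what was last advertised and emits the commands that would add or withdraw them:

```python
def sync_commands(current, advertised, asn=65101):
    """Return the vtysh commands needed to reconcile advertised prefixes
    with the bridge subnets that actually exist right now.

    current    -- prefixes found on the host's bridges this polling cycle
    advertised -- prefixes we injected into BGP last cycle
    asn        -- the local AS number (65101 is a placeholder)
    """
    to_add = sorted(set(current) - set(advertised))      # new bridges
    to_remove = sorted(set(advertised) - set(current))   # deleted bridges
    cmds = []
    for prefix in to_add:
        cmds.append(f"vtysh -c 'conf t' -c 'router bgp {asn}' -c 'network {prefix}'")
    for prefix in to_remove:
        cmds.append(f"vtysh -c 'conf t' -c 'router bgp {asn}' -c 'no network {prefix}'")
    return cmds
```

A real daemon would discover `current` from the kernel each interval and execute the returned commands; here they are only generated, so the reconciliation logic can be seen on its own.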
How does this integrate with KVM using Open vSwitch? I would like to find out how to inject the routes when using bridges on VLANs.
Gabriel Stoicea wrote:

Hi,

Is there any chance to use "routing on a host"/Cumulus Quagga on RHEL/CentOS 6.8?
Routing on a...

Hi Donald,

Thanks a lot for your help.
First, I didn't cut-and-paste everything because it was too much, but I can provide the entire output if necessary.
While trying to debug my RPM build problem, I noticed myself that the spec file and the package are for systemd (CentOS/RHEL 7). And that was my next question...
Then I'll wait for a 6.8 version.
Thanks again.

Best regards,
Gabriel
Gabriel, your compile errors are centered around the HAVE_POLL define, which doesn't show up in your configure line as cut-and-pasted above. Additionally, the quagga.spec that we provide with the source does not have SNMP enabled. This sure looks like a compile that got broken and was started over without cleaning up properly; I would guess that we need to completely clean out the build system and start over. One problem with the quagga.spec file we provide, though, is that it assumes systemd. Let me build up a CentOS 6.8 system and see what I can get for you.
OK. Thanks a lot.
Gabriel,

I am reaching out to the team to have someone give you some assistance.

Thank you for your help. I'll keep you posted about that.
You will need to install from source. The link I posted above should help you get it set up.
I tried the CentOS 7 RPM on 6.8, but installation is not working because of the glibc 2.14 dependency and, most importantly, because CentOS 7 uses systemd and not SysV init like CentOS 6.8.
Gabriel,

If the CentOS 7 packages do not work, here is the source for it as well:

https://github.com/CumulusNetworks/quagga
OK. I'll try to install the CentOS 7 RPM on 6.8 and see if it works.
You don't need to deploy "Routing on the Host" inside of a container; you can deploy it directly on the bare-metal host via an RPM install. That RPM is built for CentOS 7, but you might want to try it on 6.8. The RPM is on this page --> https://cumulusnetworks.com/routing-on-the-host/ under the "Download" section.
Hi,

Is there any chance to use "routing on a host"/Cumulus Quagga on RHEL/CentOS 6.8?
Routing on a host seems like a nice solution for me. Unfortunately, I'm bound to use CentOS 6.x on my servers in the data centre for at least another year. And from what I've seen, Docker is available only for RHEL/CentOS 7.
Thank you in advance.

Cheers,
Gabriel
Ryan wrote:

Looking at the user guide, it appears RoH within a hypervisor (VMware, for example) is done via a...

Not at the moment, no. Have you looked into redistribute neighbor?

https://docs.cumulusnetworks.com/display/DOCS/Redistribute+Neighbor
Is Cumulus Quagga supported on an ESXi server? I understand that it is not run within the hypervisor.
Hi Ryan, that image is wrong, sorry about the confusion. RoH is not within a hypervisor; Quagga is indeed installed and run on the servers. I'll update that image now.
Looking at the user guide, it appears RoH within a hypervisor (VMware, for example) is done via a router within the hypervisor, and not Quagga at the guest level. Am I interpreting that correctly? See the Subnet Freedom and Mobility diagram.
Sergei Hanus wrote:

In the user guide I see, that VM IP addresses are being redistributed into routing protocol.
Coul...

Eric, I totally agree with the flexibility point.
As you said, for Docker there are options out of the box (like adding a /32 to the loopback).
As for traditional VMs, there's a working solution from Cumulus (redistribute neighbor), which just needs to be "blessed", like you did for Quagga, in order to be ported to the hypervisor, and we get an out-of-the-box solution for traditional VMs as well. That's what I meant to point out in my post.

Sergei.
There is a lot of flexibility in what can be advertised, and how. In the case of Docker, you may want to redistribute entire bridge subnets while disabling the NAT component; or you may want to add the newly created /32 Docker IPs to a loopback and simply redistribute the loopback, so that as new Docker containers are provisioned, they each add an IP to the loopback. Or, in the case of traditional VMs, you may want to build a script to inspect the ARP entries on a bridge using the "arp -n" command and advertise those into BGP with a /32 network statement, or into a separate kernel routing table that can be redistributed into Quagga. There are truly a ton of different options; I think our goal is to enable people to explore new ways of deploying applications, with routing that can be tailored to your precise needs.
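The "arp -n" approach mentioned above can be sketched like this (an editorial sketch; the function name, the column handling, and the sample output are illustrative assumptions). It extracts the neighbor IPs learned on one bridge and turns them into /32 network statements:

```python
def arp_to_networks(arp_output, iface):
    """Extract IPs learned on `iface` from `arp -n` output as /32 prefixes.

    arp_output -- the raw text printed by "arp -n"
    iface      -- the bridge whose neighbors should be advertised (e.g. "br0")
    """
    networks = []
    for line in arp_output.splitlines()[1:]:          # skip the header row
        fields = line.split()
        # typical "arp -n" columns: Address HWtype HWaddress Flags Mask Iface
        if fields and fields[-1] == iface and "incomplete" not in line:
            networks.append(f"network {fields[0]}/32")
    return networks

# Sample "arp -n" output (fabricated for illustration):
sample = """Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.10.5             ether   52:54:00:aa:bb:cc   C                     br0
192.168.10.9             ether   52:54:00:dd:ee:ff   C                     br0
10.0.0.1                 ether   52:54:00:11:22:33   C                     eth0"""
```

A wrapper would run `arp -n`, feed its output through this function, and push the resulting statements into bgpd; only the parsing step is shown here since that is the part people usually script themselves.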
