When will Cumulus Linux support PIM and VRF-lite?


Thomas,

Great question. Both items are projects we are actively working on. Right now we do not have a specific time frame for their release, but please reach out to sales@cumulusnetworks.com and they can help answer specific questions.
OK, thank you. In most cases these features are showstoppers for us.
Thomas - just out of curiosity, and so that we can better understand requests like this, could you elaborate a bit more on your use case for both asks?
We use multicast a lot in the campus network and in the datacenter. We have video/audio sources in the campus network and receivers in the datacenter, and vice versa. We try to route as much as possible to avoid spanning tree: we route at the wiring-closet and server-access switches. We also virtualise/segment our network, so we need PIM for multicast routing and VRF-lite for segmentation.
Thanks for the insight, Thomas. Going a bit deeper, would you mind providing a bit more detail on these features in your use case?

Multicast (PIM-SM):
What scale do you intend to run at? (# of groups, # of sources per group)
Proposed Topology (3-tier, CLOS etc.)
Interoperability with other systems
IPv4, IPv6, Both
Rate of Multicast streams creation
Rate of receivers joining/leaving groups
Failover time requirements for streams in progress

VRF:
Number of VRFs
Number of Routes
IPv4, IPv6, Both
Routing Protocol (BGP, OSPF)
Multicast within a VRF table (Y/N)
Mohit Mehta wrote:

Thanks for the insight Thomas. Going a bit deeper, would you mind providing bit more details for ...

Hi,

About the multicast scaling questions: I honestly have no idea at this moment...

VRF: in the datacenter we would have around 2-4 VRFs.
In the campus we have 5 VRFs in some places.

All new equipment is required to support IPv4 and IPv6.

Thomas,

It is currently possible to do PIM-SM routing with Cumulus. On our campus network we have multiple multicast sources and receivers. Our DC is a spine-leaf L2 CLAG setup (for VMware), and in the campus we have a Layer 3 routed-access design to the building closets. We use OSPF for our interior routing.

PIM-SM is a requirement for us also, so we needed to make it work. We really love our Cumulus devices and were determined to make them fit our requirements. The following text is the result of trial and error to find what worked for us. This setup has been in production for close to a year now and has served us very well. Please be advised that this may not be a Cumulus-supported method or design.

To do PIM routing with the Cumulus devices you need the pimd package. We originally installed it from the Debian repositories; however, I believe it may since have been made available from the Cumulus repos.
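
For reference, here is a minimal sketch of pulling the package in (assuming the stock Debian package name pimd; whether it lives in the Debian or Cumulus repos may vary by release):

# Install pimd from whichever apt repositories the switch is configured with.
sudo apt-get update
sudo apt-get install pimd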

Here is /etc/pimd.conf from our spine RP:
phyint eth0 disable
phyint peer.3100 disable
cand_rp time 5 priority 0 # candidate RP, advertised every 5 seconds
cand_bootstrap_router priority 5 # candidate bootstrap router
group_prefix 224.0.0.0 masklen 4 # advertise RP candidacy for all of 224.0.0.0/4
switch_data_threshold rate 50000 interval 20 # 50kbps (approx.)
switch_register_threshold rate 50000 interval 20 # 50kbps (approx.)
We have eth0 and peer.3100 disabled to prevent multicast on the management network and CLAG peer links.
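
If you want to sanity-check the daemon after editing the config, something like this should work (a rough sketch; service management and log locations can differ between releases):

# Restart pimd so it re-reads /etc/pimd.conf, then confirm it is running.
sudo service pimd restart
sudo service pimd status
# pimd logs to syslog; look for PIM hello/neighbor activity on the swp interfaces.
sudo tail -f /var/log/syslog | grep -i pim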

The key to making pimd work is having unicast routes that allow the multicast packets to traverse, so that the pimd daemon can route and prune the multicast traffic. A unicast route for the multicast range needs to be added for every routed interface you want to multicast-route on. Without it, packets will only exit interfaces that have a default route. The exception is link-local traffic like 224.0.0.0/24, which is not routed.

The quagga config to add these routes looks like this:
ip route 224.0.0.0/4 swp9
ip route 224.0.0.0/4 swp10
ip route 224.0.0.0/4 swp11s0
ip route 224.0.0.0/4 swp13
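
To verify the routes took effect, you can check both Quagga and the kernel (again a sketch; the interface names are just the ones from our config above):

# From Quagga: the static multicast-range routes should show up here.
sudo vtysh -c "show ip route 224.0.0.0/4"
# From the kernel: the multicast forwarding entries that pimd has installed.
ip mroute show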
At any given point we have around 25-50 multicast groups with up to hundreds of sources and receivers all over campus. Most of our traffic is short, bursty mice like zone pages or SSDP/UPnP; however, we do occasionally get long elephants like disk-imaging a lab. This setup has worked great for us: we ran it in testing for months, and both then and in production it has held up well. We have seen little to no hit on CPU or memory usage on our devices, and packet delay, jitter, and bandwidth were all good in our tests.

We also tested xorp and did manage to get it to multicast route as well. Performance was subpar compared to pimd. It was also difficult to keep xorp from interfering with quagga, and we found it a bigger pain to manage config-wise (even when using Puppet).

We are also anxiously awaiting VRF. It would greatly simplify our management network setup, especially for ZTP. Thank you, Cumulus team, for working to provide this.

Joshua Hash wrote:

Thomas,

It is currently possible to do PIM-SM routing with Cumulus. On our campus network we ha...

Thank you!
I will try this in my VX VM.
Is there any update on this feature?
Thomas,

Thanks for following up. As of right now VRF is in beta, with a general release in the coming weeks. PIM is still on the roadmap, and I can tell you we will be shouting its release from the rooftops when it is ready.
