VXLAN and multiple leaves


Hi all,

Just starting my Cumulus config (2 spines and 4 leaves) and was wondering how to handle VXLANs over more than 2 leaves, as the only examples I found here are with 2 leaves.

Thanks a lot.


Is this a Cumulus VX deployment or Cumulus Linux on physical switches? How many tenants do you have? What's the hypervisor? Is this LNV?
Hi Sean,

this is Cumulus Linux on physical switches.
Regarding tenancy, so far I can't tell, as this would be the basic setup for a Mirantis OpenStack-based architecture, so I was just looking for a way to get the basic VLANs transported across my 2 racks (4 VLANs so far).
OSPF is already set up between all the switches.
Hypervisor will be KVM.

Cheers

Hi David,

FYI: we should have a Cumulus VX laptop demo available shortly using VXLAN and L3 in the entire network (Quagga to the host), if that helps. We already have a VLAN-based Cumulus ML2 mechanism driver demo available as well:

https://support.cumulusnetworks.com/hc/en-us/articles/215832697

Andrius Benokraitis wrote:

Hi David,

FYI: we should have a Cumulus VX laptop demo made available shortly using VXLAN and L...

Hi,

good to know.
Does this mean that for bare-metal hosts we have to rely on standard VLAN extension between ToRs?

Or should this be built by hand, with an (n-1) VXLAN setup per leaf per VLAN?

Thanks

Andrius Benokraitis wrote:

Hi David,

FYI: we should have a Cumulus VX laptop demo made available shortly using VXLAN and L...

David, your choice on this - it depends on whether you want to use the VLAN type driver or the VXLAN type driver for OpenStack. If you use the VXLAN type driver, it's best to use VXLAN in the whole rack (either L3 addressing on the host, or BGP unnumbered with a Quagga package on each host). If you use the VLAN type driver, you can do MLAG everywhere per our Validated Design Guide on our website, or L3 between the leaf and spine. Does this help? Too many options, yes!
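As a rough illustration of the hand-built "(n-1) VXLAN per leaf per VLAN" option discussed above, here is a sketch of what one VLAN-to-VNI mapping could look like on a single leaf using plain iproute2, with head-end replication to the other three leaves. All names, VNIs, and addresses below are hypothetical examples, not values from this thread:

```shell
# Map VLAN 10 to VNI 10010 on this leaf (names/addresses are examples).
# The VXLAN device sources from this leaf's loopback VTEP address; the
# all-zeros FDB entries flood BUM traffic to each remote leaf VTEP
# (head-end replication), one entry per other leaf -- the "(n-1)" part.
ip link add vxlan10010 type vxlan id 10010 local 10.0.0.11 dstport 4789 nolearning
bridge fdb append 00:00:00:00:00:00 dev vxlan10010 dst 10.0.0.12
bridge fdb append 00:00:00:00:00:00 dev vxlan10010 dst 10.0.0.13
bridge fdb append 00:00:00:00:00:00 dev vxlan10010 dst 10.0.0.14

# Bridge the VXLAN device with the server-facing VLAN 10 subinterface.
ip link add br10 type bridge
ip link set vxlan10010 master br10
ip link set swp1.10 master br10
ip link set br10 up
ip link set vxlan10010 up
```

With 4 leaves and 4 VLANs that is 4 bridges and 12 remote-VTEP entries per leaf to maintain by hand, which is why an automated control plane (LNV, or the OpenStack driver managing VTEPs on the hosts) is usually preferable.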
David, the physical switch topology's job is to provide VTEP reachability. As long as you can ping from one VTEP to another, your Cumulus IP fabric is configured correctly. According to Mirantis documentation, if you selected "Neutron with tunneling segmentation", it assigns the "br-mesh" interface on the server as the VTEP. So in your connectivity tests, make sure that each br-mesh interface can ping another br-mesh on a different server.

Regarding how MAC learning occurs in a VXLAN environment: this is managed by OpenStack, not by the switch IP fabric. Mirantis provides the ability to turn on OpenStack L2 population for managing this. The option is found in the "Settings" section under "Neutron Advanced Configuration". If you did indeed turn on "Neutron with tunneling segmentation", Mirantis recommends you enable l2population.

One thing that may be missing, after looking at a default installation of Mirantis 7 with Neutron tunneling segmentation enabled, is the VXLAN TTL setting. The VXLAN packet may be sent with a TTL of 1, and thus will not cross an IP fabric. Please verify this as you perform your VXLAN tests. Use sniffer captures if necessary to verify the TTL setting.
Stanley Karunditu wrote:

David, The physical switch topology job is to provide VTEP reachability. So long as you can ping ...

Hi Stanley,

Thanks for your inputs.
The question I have is: at the start (no OpenStack set up yet), do I have to create VTEPs between all my leaves to transport the basic VLANs needed for a Mirantis OpenStack setup?
My IP fabric is OK so far using OSPF, but ideally the goal would be to automate VLAN creation for the VMs as they are created, and to automate the corresponding VXLAN creation on the ToRs so that the tenants can communicate with each other.
Don't know if this is clear/possible.

David