Inter-vlan routing on a L2 leaf-spine topology

Hi everyone, in an L2 leaf-spine topology, as proposed in various Cumulus PDFs, what is the best practice for inter-VLAN routing? Suppose I have 3 different VLANs (VLAN 10, 20, and 30) on a VMware ESXi cluster; what is the best practice for letting the VMs in VLAN 10 communicate directly with the VMs in VLAN 20? An SVI on the spine switches? I know about Cumulus routing on the host, which is indeed very interesting, but for now I'm curious about inter-VLAN routing on the classic leaf-spine model like the one presented here, for example: Thank you very much for your help. Alberto


Hi Alberto

We put SVIs on the spine switches and then use ACL rules to deny and allow communication between the SVIs on the spines.

I hope that helps point you in the direction you want to go.
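To make Kyle's approach concrete: on Cumulus Linux an SVI is just a VLAN subinterface of the VLAN-aware bridge in /etc/network/interfaces. A minimal sketch (the VLAN IDs, port names, and addresses here are assumptions for illustration, not from the thread):

```
# /etc/network/interfaces (spine) -- SVIs for VLANs 10 and 20
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports swp1 swp2        # downlinks to the leafs (example ports)
    bridge-vids 10 20

auto vlan10
iface vlan10
    address 10.0.10.1/24          # gateway for VLAN 10 (example address)
    vlan-id 10
    vlan-raw-device bridge

auto vlan20
iface vlan20
    address 10.0.20.1/24          # gateway for VLAN 20 (example address)
    vlan-id 20
    vlan-raw-device bridge
```

With both SVIs up, the spine routes between the two VLANs; ACL rules then control which SVIs are allowed to talk to each other.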
Hey Alberto,

which is the best practice

It really depends on the requirements. When we do a jumpstart with a customer we go over the requirements of their applications and hosts to figure out what they need. I think the best way to illustrate this is with some examples:

Customer A:
  1. 8 ToR switches, 2 Spines
  2. Every VLAN needs to exist on every switch, but we want no VLAN to route to any other VLAN unless it reaches a firewall
Possible Solution:
  • What Kyle listed above is pretty much what I would recommend. You can use ACLs to enforce VLAN tenant separation. The difference is that instead of VRR or SVIs you could just give the firewall an address on each VLAN, so everything has to communicate through the FW (for this particular customer). If this network grew beyond 8 ToRs I would start exploring VXLAN with LNV or Midokura. See Customer C below.
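As a sketch of the ACL side of this: on Cumulus Linux, ACL rules live in files under /etc/cumulus/acl/policy.d/ and are installed with cl-acltool. The file name, interface names, and the choice to block VLAN 10↔20 below are assumptions for illustration:

```
# /etc/cumulus/acl/policy.d/50_tenant_separation.rules (example file name)
[iptables]
# Drop traffic routed directly between the tenant VLANs, so inter-tenant
# traffic can only flow via the firewall's per-VLAN addresses.
-A FORWARD -i vlan10 -o vlan20 -j DROP
-A FORWARD -i vlan20 -o vlan10 -j DROP
```

Rules are applied (and programmed into hardware) with `cl-acltool -i`.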
Customer B:
  1. 8 ToR Switches, 4 Spines
  2. VLAN separation is just for broadcast domains, not tenant separation (we don't care if VLANs talk to each other when they need to; in fact, the fastest path between them is encouraged).
  3. VLANs don't have to exist on every rack (no L2 requirement across racks or rows)
Possible Solution:
  • Now we have 4 spines. This is when I start encouraging shrinking the L2 domain greatly. For this particular customer there is no requirement for L2 across racks, so we can shrink MLAG to the rack. I would highly recommend looking at the application and host OS requirements to see if RoH (Routing on the Host) makes sense here; sometimes applications can't bind to multiple IPs, or unnumbered on a host (the ability to put the same IP on multiple interfaces) may not work in your environment. If there is an L2 requirement for the host OS, I would use MLAG with VRR on every pair of ToRs, with the spines just routing between the ToRs.
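A minimal sketch of the MLAG-with-VRR piece on one ToR of a pair (the addresses are assumptions; the virtual MAC comes from the 44:38:39:ff:xx:xx range Cumulus reserves for VRR):

```
# /etc/network/interfaces (one ToR of the MLAG pair)
auto vlan10
iface vlan10
    address 10.1.10.2/24                            # this switch's own SVI address
    address-virtual 44:38:39:ff:00:01 10.1.10.1/24  # shared VRR gateway, identical on both ToRs
    vlan-id 10
    vlan-raw-device bridge
```

Hosts in the rack point their default gateway at 10.1.10.1; both ToRs in the pair answer for that address, so either can forward off-rack traffic toward the routed spine layer.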
Customer C:
  • 32 ToR switches, 6 Spine Switches.
  • VLAN separation is used for Tenant Separation.
  • VLAN can exist on any rack.
  • L2 requirements across racks.
Possible Solution:
  • Now we have larger scale (6 spine switches) and L2 requirements. These don't mix easily, because MLAG requires a pair of switches to share state, so we would need multiple L2 layers, which is not fun to troubleshoot. This is a prime example of where a VXLAN solution makes a ton of sense. We can turn each rack into a VTEP (VXLAN Tunnel End Point) or turn the hosts into VTEPs (VXLAN encap/decap on the host). Both are possible solutions depending on your host OS, applications, etc. This allows the spines to route, so you can use all 6 spines at the same time, while still getting L2 adjacency across racks.
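A minimal sketch of a rack ToR acting as a VTEP (the VNI, loopback address, and port names are assumptions for illustration):

```
# /etc/network/interfaces (leaf/ToR as VTEP)
auto lo
iface lo inet loopback
    address 10.0.0.11/32          # VTEP tunnel source, reachable over the routed fabric

auto vxlan10010
iface vxlan10010
    vxlan-id 10010                # VNI carrying VLAN 10 across racks (example)
    vxlan-local-tunnelip 10.0.0.11
    bridge-access 10              # map the VNI into VLAN 10 on the bridge

auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports swp1 vxlan10010  # host-facing port plus the VXLAN tunnel
    bridge-vids 10
```

With LNV, the registration daemon (vxrd) pointing at the service nodes is configured separately; the spines stay pure L3 and just route the VXLAN-encapsulated traffic between loopbacks.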