Time to load bridges when > 100 VLANs


Hi,

I'd like my IP fabric to be able to terminate (VTEP) and transport about 100 VXLANs.
So I created my 100+ VLANs/VNIs, but issuing an "ifreload -a" takes more than 5 minutes to return the prompt.
Commands like netshow are really slow too.

cumulus@leaf4$ time netshow bridges
real 0m12.635s
user 0m9.000s
sys 0m3.523s

cumulus@leaf4$ time netshow l2
real 1m40.361s
user 1m29.908s
sys 0m9.557s

Is there a way to speed this up?
I'm running HP Altoline switches.

7 replies

David,

Can you post the configuration for the setup? You should be able to run this without the slowdown you are seeing.

Also, take a look at the VLAN-aware bridge model. We are adding VTEP support to it in the upcoming release.
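Roughly, instead of declaring one bridge per VLAN, the VLAN-aware model declares a single bridge with a VLAN filter. A minimal sketch, assuming the standard ifupdown2 keywords; the port list here is a placeholder, not your actual config:

auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports peerlink bond1 bond2
    bridge-vids 424-428 1000-1100

This keeps the kernel interface count close to the physical port count instead of scaling with the number of VLANs.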
Hi,

Here's the config I'm using on my leaf (the same on the others).
Regarding VLAN-aware support for VTEPs, I'm waiting for it, as it will simplify the config files.
# The primary network interface VRF admin
auto eth0
iface eth0 inet static
    address 172.30.244.131/24
    post-up ip route add 172.30.244.0/24 dev eth0 table mgmt
    post-up ip route add default via 172.30.244.127 dev eth0 table mgmt
    post-up ip route del 172.30.244.0/24 dev eth0 table main
    post-down ip route del 172.30.244.0/24 dev eth0 table mgmt
    post-down ip route del default via 172.30.244.127 dev eth0 table mgmt

# The loopback network interface
auto lo
iface lo inet loopback
    address 10.30.30.4
    vxrd-svcnode-ip 10.10.10.10
    vxrd-src-ip 10.30.30.4
    clagd-vxlan-anycast-ip 10.20.20.10

#Unnumbered intf
auto swp49
iface swp49
    address 10.30.30.4
    mtu 9000

auto swp50
iface swp50
    address 10.30.30.4
    mtu 9000

auto swp51
iface swp51
    address 10.30.30.4
    mtu 9000

#Inter-switch Link
auto swp52
iface swp52
    mtu 9000

auto swp53
iface swp53
    mtu 9000

#MLAG Setup
auto peerlink
iface peerlink
    bond-slaves swp52 swp53
    bond-mode 802.3ad
    bond-miimon 100
    bond-use-carrier 1
    bond-lacp-rate 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    mtu 9000

auto peerlink.4094
iface peerlink.4094
    address 169.254.255.1/30
    clagd-priority 8192
    clagd-enable yes
    clagd-peer-ip 169.254.255.2
    clagd-backup-ip 172.30.244.132
    clagd-sys-mac 44:38:39:ff:01:02
    mtu 9000

########## Macro configs #######
######
#BOND Setup (each if is mirrored on the clag partner)
%for port in range(1,49):
#####
auto bond${port}
iface bond${port}
    bond-slaves swp${port}
    bond-mode 802.3ad
    bond-miimon 100
    bond-use-carrier 1
    bond-lacp-rate 1
    bond-min-links 1
    bond-lacp-bypass-allow 1
    bond-xmit-hash-policy layer3+4
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id ${port}
%endfor

#VNI / TAGGED if set-up / VLAN set-up
%for v in [424,425,426,427,428]:
###
auto vni-${v}
iface vni-${v}
    vxlan-id ${v}
    vxlan-local-tunnelip 10.30.30.4
%endfor

#### HOSTS ports
%for v in [424,425,427,428]:
###
%for host in range(1,49):
auto bond${host}.${v}
iface bond${host}.${v}

%endfor
%endfor

%for v in range(1000,1101):
###
auto vni-${v}
iface vni-${v}
    vxlan-id ${v}
    vxlan-local-tunnelip 10.30.30.4
%for host in range(1,49):
auto bond${host}.${v}
iface bond${host}.${v}

%endfor
%endfor

auto vlan426
iface vlan426
    bridge-ports vni-426 peerlink.426 \
%for i in range (1,49):
bond${i} \
%endfor
####

%for v in [424,425,427,428]:
###
auto vlan${v}
iface vlan${v}
    bridge-ports vni-${v} peerlink.${v} \
%for i in range (1,49):
bond${i}.${v} \
%endfor

%endfor

%for v in range(1000,1101):
###
auto vlan${v}
iface vlan${v}
    bridge-ports vni-${v} peerlink.${v} \
%for i in range (1,49):
bond${i}.${v} \
%endfor

%endfor
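For scale, expanding the loops above gives 48 bonds, 5 + 101 = 106 VNIs, 106 VLAN bridges, and (4 + 101) x 48 = 5,040 bond subinterfaces, so ifupdown2 has well over 5,000 logical interfaces to build on every reload.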
Thank you for posting this. From what I can see it looks good. Let me ask around internally; I will let you know something as soon as possible.
The team that works on ifupdown2 is aware of this; it is a known issue that is being worked on for inclusion in an upcoming release.
Do you have an ETA for this?

Cheers
You are currently running 2.5.6, correct? The 3.x release is tentatively scheduled for the end of the month; it may possibly be pushed a week.
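To confirm the version on the switch, the release file should show it (assuming the standard image layout):

cumulus@leaf4$ cat /etc/lsb-release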
Hi,

No, 2.5.7 I think.

David
