Assign IPv4 address to bond via Ansible playbook


Hi Everyone,
I am using Cumulus VX in a Vagrant environment with VirtualBox.
I am trying to push configuration to my small topology through an Ansible playbook.
My topology has 2 spines paired through MLAG and 2 leaves paired through MLAG, and then bond interfaces b/w spine and leaf.

When I run my playbook with the cl_bond module, the configuration is pushed correctly to all switches and I can see that all switches have the interface configuration in /etc/network/interfaces.d.
However, the IPv4 addresses don't show up in the configuration.
From the play I can see they are pushed successfully, but on the switches I don't see them.

Since I am running CLAG, I have also assigned a different clag-id to each pair of bond interfaces.

Everything comes up except the IPv4 addresses...

Any idea what is missing here?

11 replies

Userlevel 5
So do you see any errors in the system logs on the VMs? What happens if you try to bring them up manually? Do you have your playbooks somewhere we can see them?

Well, I can see in ifconfig that they are up... it's just that I don't see the IPs. Below are some outputs.

vagrant@Spine-112:~$ clagctl
The peer is alive
Peer Priority, ID, and Role: 4096 08:00:27:03:77:71 primary
Our Priority, ID, and Role: 8192 08:00:27:fc:f8:55 secondary
Peer Interface and IP: peerlink.4094 169.254.1.1
Backup IP: - (inactive)
System MAC: 00:00:00:00:00:01

Dual Attached Ports
Our Interface Peer Interface CLAG Id
---------------- ---------------- -------
L-S-bond-2 L-S-bond-2 4
L-S-bond-1 L-S-bond-1 2

vagrant@Spine-112:~$ cat /etc/network/interfaces.d/L-S-bond-1
auto L-S-bond-1
iface L-S-bond-1
clag-id 2
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-slaves swp2 swp3
alias Downlink bond to Leaf 131 & 132
bond-mode 802.3ad
bond-xmit-hash-policy layer3+4

vagrant@Spine-112:~$ cat /etc/network/interfaces.d/L-S-bond-2
auto L-S-bond-2
iface L-S-bond-2
clag-id 4
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-slaves swp4 swp5
alias Downlink bond to Leaf 141 & 142
bond-mode 802.3ad
bond-xmit-hash-policy layer3+4

cat .\leaf-spine-bond.yml
---
- name: leaf spine bond configuration
  cl_bond:
    name: "{{ item.key }}"
    slaves: "{{ item.value.slaves }}"
    clag_id: "{{ item.value.clag_id|default(omit) }}"
    ipv4: "{{ item.value.ipv4|default(omit) }}"
    alias_name: "{{ item.value.alias_name|default(omit) }}"
  with_dict: cl_bonds[inventory_hostname]
  notify: reload networking
  tags: L-S-bond

AND EXTRACT FROM GROUP_VAR FILE:
spine112:
  peerlink:
    alias_name: 'Peerlink'
    slaves: ['swp6', 'swp7']
    clag_id: 1
  L-S-bond-1:
    slaves: ['swp2', 'swp3']
    clag_id: 2
    ipv4: '10.1.1.1/24'
    alias_name: 'Downlink bond to Leaf 131 & 132'
  L-S-bond-2:
    slaves: ['swp4', 'swp5']
    clag_id: 4
    ipv4: '10.1.2.1/24'
    alias_name: 'Downlink bond to Leaf 141 & 142'
Userlevel 4
@Mirza Waqas Ahmed

This one confused me for way longer than I care to admit... (drum roll please...) An L3 address can't be on an L2 bond, and I know these are L2 bonds because they are marked as MLAG bonds with the clag-id. I imagine if brctl show were used we would see L-S-bond-1 and L-S-bond-2 as part of a bridge (most likely a vlan-aware bridge?). So my conclusion ->
  • The ansible config is fine, everything works as expected!
  • ifupdown is removing the l3 ip address from the bond because it is a layer2 bond (MLAG)
  • Sean needs more coffee :)
Also, to tell if bonds are L2/L3 it might be easier to use the netshow tool (which will be installed by default in 2.5.5): https://support.cumulusnetworks.com/hc/en-us/articles/204075083-Installing-and-Using-the-cl-show-net...
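
For what it's worth, a quick way to check on the VMs themselves, using only stock Linux tools, is something along these lines:

brctl show                # lists bridges and their member ports; an L2 bond shows up here
ip addr show L-S-bond-1   # an L3 bond shows its IPv4 address here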
Userlevel 3
In addition to what Sean said, if you do want to assign an IP address, you would assign it to the bridge interface (for a VLAN-unaware bridge) or to a VLAN sub-interface of the bridge (for a VLAN-aware bridge).
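
For instance, a minimal sketch of the VLAN-unaware case (the interface names and address below are placeholders, not taken from this thread) puts the bond into the bridge and the address on the bridge:

# br0, bond0 and 10.10.10.1/24 are placeholder names/values
auto br0
iface br0
bridge-ports bond0
address 10.10.10.1/24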
Hi Sean & Scott, Thanks for coming back.
First of all I am following the guide below:
http://docs.cumulusnetworks.com/display/CL25/Configuring+and+Managing+Network+Interfaces

Secondly let me just explain... Please see image below.


The configuration you see for the bond "peerlink" is for the MLAG link b/w ports 6 and 7, and that is working fine. I have created the "peerlink.4094" bridge to make CLAG communicate.
However, what I want is to put swp2,3 of each switch in one bond and swp4,5 in another bond. And since I want to run OSPF b/w leaf and spine, I want the bonds b/w leaf and spine to be configured as L3... just like we can have L3 LACP interfaces in Cisco....
The above is what I am trying to achieve. I used the configuration from the above URL to create that bond, where the dependencies should all work together.

auto bond1
iface bond1
address 100.0.0.2/16
bond-slaves swp29 swp30
bond-mode 802.3ad
bond-miimon 100
bond-use-carrier 1
bond-lacp-rate 1
bond-min-links 1
bond-xmit-hash-policy layer3+4

Any suggestions....
Thanks in advance....

Regards,

Waqas
Well, it's working now... I have done two things, however I am still a bit unclear about the root cause.
1) I have installed the netshow package on all switches
2) I have combined multiple plays into one and grouped all the variables into one file (before, I was using a different play for the peerlink, a different play for the bonds and a different play for the interfaces).

Though the above is up, and through netshow I can see that the links are up, the bonds are L3 and the IPs are there, I am still unable to ping the other side of the bond link.

The mission keeps going on.
Userlevel 4

The configuration you see for the bond "peerlink" is for the MLAG link b/w ports 6 and 7, and that is working fine. I have created the "peerlink.4094" bridge to make CLAG communicate.


This is not an L3 bond; that is an L3 logical sub-interface that uses the bond. This is a corner case for MLAG; do not use this in any other scenario. As Scott Emery suggested, you want to use SVIs (Switch VLAN Interfaces) for every other L3 address on an L2 bond. E.g. you do something like this (this example shows an SVI for VLAN 100):

auto bridge.100
iface bridge.100
address 10.10.10.10/24
You also need to use VRR for gateway redundancy with MLAG -> http://docs.cumulusnetworks.com/display/DOCS/Virtual+Router+Redundancy+-+VRR
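
As a rough sketch of that, following the address-virtual syntax described in the VRR doc linked above (the MAC and addresses are placeholders), the SVI carries both a real address and a shared virtual one:

# MAC and IP addresses below are placeholders
auto bridge.100
iface bridge.100
address 10.10.10.2/24
address-virtual 00:00:5e:00:01:01 10.10.10.1/24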

However, what I want is to put swp2,3 of each switch in one bond and swp4,5 in another bond. And since I want to run OSPF b/w leaf and spine, I want the bonds b/w leaf and spine to be configured as L3... just like we can have L3 LACP interfaces in Cisco....

Yes, this works exactly like Cisco... To make L3 bonds/etherchannels (a short sketch follows this list):
  • do not use clag-id, MLAG/clag is only for L2
  • do not use OSPF on MLAG bonds/etherchannels
  • do not have the bond as a member of the bridge
  • add an IP address
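
Tying this back to the original playbook, in the group_vars layout shown earlier an L3 spine-leaf bond would then just carry an ipv4 entry and no clag_id, e.g. (a sketch reusing the variable names already in this thread):

L-S-bond-1:
  slaves: ['swp2', 'swp3']
  ipv4: '10.1.1.1/24'
  alias_name: 'Downlink bond to Leaf 131 & 132'
  # no clag_id here - clag-id is only for L2 MLAG bonds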

The above is what I am trying to achieve. I used the configuration from the above URL to create that bond, where the dependencies should all work together.

A bond cannot be part of a bridge AND have an IP address. This is true for Cisco, Juniper, Extreme, or any vendor; it comes from the LACP standard, 802.3ad.

Well, it's working now... I have done two things, however I am still a bit unclear about the root cause.
1) I have installed the netshow package on all switches
2) I have combined multiple plays into one and grouped all the variables into one file (before, I was using a different play for the peerlink, a different play for the bonds and a different play for the interfaces).

The bonds are probably not part of the bridge anymore; somehow it is in a broken state 😞. Bonds/etherchannels can't have L3 IP addresses and be part of a bridge. This is how 802.3ad works.

Though the above is up, and through netshow I can see that the links are up, the bonds are L3 and the IPs are there, I am still unable to ping the other side of the bond link. The mission keeps going on.

As I predicted, it will not work. The MLAG bonds (southbound to your hosts) need to be L2... they have been made L3. This means the IP address is up and works, but they are no longer part of the bridge. There are ways to build L3 to the host w/o MLAG, but you have chosen an L2 design. Does this make sense?

Bonds/etherchannels that go north of the ToR (Top of Rack / leaf) switch need to be L3 only (see the sketch after these lists):
  • not part of a bridge (i.e. not listed under the bridge's bridge-ports, e.g. bridge-ports bond0)
  • no clag-id (will allow it but not needed, clag-id is only for L2)
  • have a L3 ip address on it
Southbound bonds/etherchannels (towards hosts) must be L2 only:
  • cannot have an IP address on it
  • must be part of bridge
  • must have clag-id
  • SVIs are used as gateways per VLAN, VRR needs to be used for Active/Active.
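
Putting those two lists into interface terms, a rough sketch from a leaf's point of view might look like the following (interface names, VLAN IDs and addresses are illustrative only):

# Northbound (leaf -> spine): L3 bond, no clag-id, not in the bridge
auto uplink-bond
iface uplink-bond
bond-slaves swp2 swp3
bond-mode 802.3ad
address 10.1.1.3/24

# Southbound (leaf -> host): L2 MLAG bond, member of the bridge, no address
auto host-bond
iface host-bond
bond-slaves swp10 swp11
bond-mode 802.3ad
clag-id 10

auto bridge
iface bridge
bridge-vlan-aware yes
bridge-ports host-bond peerlink
bridge-vids 100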
Hi Sean,
Let me first of all thank you for your time and effort in responding. It is really appreciated.

Moving on, I believe I got your message. However, I believe that I am doing what you are saying... apart from 2 things only.
1) I am giving a clag-id to the bond interfaces southbound from the SPINE (i.e. the L3 bonds b/w spine and leaf)
2) I am not YET configuring any host MLAG...

Now to my confusion again 🙂 if you don't mind!

My configuration for CLAG links (b/w spine111 & spine112) and (b/w leaf131 & leaf132) is below. THESE ARE EAST WEST LINKS and PORTS USED ARE swp6 and swp7 ON EACH SWITCH.

**** These links are fine. CLAG is up and everything seems normal to me as per the show commands... ******

### Secondly, I am not using it for any traffic other than CLAG, as per the suggested best practice. So in short, I have created the "peerlink" and "peerlink.4094" interfaces to configure CLAG b/w the spine pair and the leaf pair. (Configuration below)

vagrant@Spine-111:~$ cat /etc/network/interfaces.d/peerlink
auto peerlink
iface peerlink
clag-id 1
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-slaves swp6 swp7
alias Peerlink
bond-mode 802.3ad
bond-xmit-hash-policy layer3+4
vagrant@Spine-111:~$ cat /etc/network/interfaces.d/peerlink.4094
auto peerlink.4094
iface peerlink.4094
alias Clag_PeerLink
address 169.254.1.1/30

vagrant@Spine-112:~$ cat /etc/network/interfaces.d/peerlink
auto peerlink
iface peerlink
clag-id 1
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-slaves swp6 swp7
alias Peerlink
bond-mode 802.3ad
bond-xmit-hash-policy layer3+4
vagrant@Spine-112:~$ cat /etc/network/interfaces.d/peerlink.4094
auto peerlink.4094
iface peerlink.4094
alias Clag_PeerLink
address 169.254.1.2/30

vagrant@Leaf-131:~$ cat /etc/network/interfaces.d/peerlink
auto peerlink
iface peerlink
clag-id 1
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-slaves swp6 swp7
alias Peerlink
bond-mode 802.3ad
bond-xmit-hash-policy layer3+4
vagrant@Leaf-131:~$ cat /etc/network/interfaces.d/peerlink.4094
auto peerlink.4094
iface peerlink.4094
alias Clag_PeerLink
address 169.254.1.1/30

vagrant@Leaf-132:~$ cat /etc/network/interfaces.d/peerlink
auto peerlink
iface peerlink
clag-id 1
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-slaves swp6 swp7
alias Peerlink
bond-mode 802.3ad
bond-xmit-hash-policy layer3+4
vagrant@Leaf-132:~$ cat /etc/network/interfaces.d/peerlink.4094
auto peerlink.4094
iface peerlink.4094
alias Clag_PeerLink
address 169.254.1.2/30

#######################
# So I believe this configuration is done as per your recommendation... Correct me if I am wrong.
#########################
********************************************************************************
NOW COMING TO THE PROBLEM, IF THE ABOVE IS CORRECT.
********************************************************************************

*******Since Spine111 and Spine112 are logically one switch, and Leaf131 and Leaf132 are logically one switch, I WOULD LIKE TO CREATE A BOND B/W THEM USING swp2 and swp3 ON EACH SWITCH.
These bonds are southbound from the spine switches.

AND I WOULD LIKE TO HAVE THEM AS L3, so that I can run point-to-point OSPF b/w spine and leaf.

For that I have created the below configuration.

#######################
# For information: the bonds do come up and show as L3 mode in netshow. However, communication doesn't get fully established.
########################

vagrant@Spine-111:~$ cat /etc/network/interfaces.d/L-S-bond-1
auto L-S-bond-1
iface L-S-bond-1
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-slaves swp2 swp3
alias Downlink bond to Leaf 131 & 132
bond-mode 802.3ad
address 10.1.1.1/24
bond-xmit-hash-policy layer3+4

vagrant@Spine-112:~$ cat /etc/network/interfaces.d/L-S-bond-1
auto L-S-bond-1
iface L-S-bond-1
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-slaves swp2 swp3
alias Downlink bond to Leaf 131 & 132
bond-mode 802.3ad
address 10.1.1.2/24
bond-xmit-hash-policy layer3+4

vagrant@Leaf-131:~$ cat /etc/network/interfaces.d/L-S-bond-1
auto L-S-bond-1
iface L-S-bond-1
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-slaves swp2 swp3
alias Uplink bond to Spine 111 & 112
bond-mode 802.3ad
address 10.1.1.3/24
bond-xmit-hash-policy layer3+4

vagrant@Leaf-132:~$ cat /etc/network/interfaces.d/L-S-bond-1
auto L-S-bond-1
iface L-S-bond-1
bond-miimon 100
bond-lacp-rate 1
bond-min-links 1
bond-slaves swp2 swp3
alias Uplink bond to Spine 111 & 112
bond-mode 802.3ad
address 10.1.1.4/24
bond-xmit-hash-policy layer3+4

#############################################
I have tried it with and without clag-id with no success.
I have also tried assigning separate IPs on each bond interface, as you can see above, but with no success.

As per your suggestion, the bonds are not part of a bridge, have no clag-id, and have an L3 address assigned.

Do you see a solution here?

PS: One funny thing I have observed this evening is that I am able to ping 10.1.1.3 (which is Leaf131's bond IP) from Spine111 and vice versa.
However, I am unable to ping 10.1.1.4 (which is Leaf132's bond IP) and vice versa.
Finally, from Spine112 and Leaf141 I am unable to ping any IP, either 10.1.1.1 or 10.1.1.2.

Thanks once again for clarifying.

Regards.
To add more information, below are the outputs of the netshow bonds all command for the Spine111, Spine112, Leaf131 and Leaf132 switches.

vagrant@Spine-111:~$ netshow bonds all --oneline
--------------------------------------------------------------------
To view the legend, rerun "netshow" cmd with the "--legend" option
--------------------------------------------------------------------
Name Speed MTU Mode Summary
-- -------------------------------------------- ------- ----- ------- ---------------------------------------------
UP L-S-bond-1 (Downlink bond to Leaf 131 & 132) 20G 1500 Bond/L3 Bond Members: swp2(UP), swp3(UN), 10.1.1.1/24
UP L-S-bond-2 (Downlink bond to Leaf 141 & 142) 20G 1500 Bond/L3 Bond Members: swp4(UP), swp5(UP), 10.1.2.1/24
UP peerlink (Peerlink) 20G 1500 Bond Bond Members: swp6(UP), swp7(UP)

vagrant@Spine-112:~$ netshow bonds all --oneline
--------------------------------------------------------------------
To view the legend, rerun "netshow" cmd with the "--legend" option
--------------------------------------------------------------------
Name Speed MTU Mode Summary
-- -------------------------------------------- ------- ----- ------- ---------------------------------------------
UP L-S-bond-1 (Downlink bond to Leaf 131 & 132) 20G 1500 Bond/L3 Bond Members: swp2(UP), swp3(UN), 10.1.1.2/24
UP L-S-bond-2 (Downlink bond to Leaf 141 & 142) 20G 1500 Bond/L3 Bond Members: swp4(UP), swp5(UP), 10.1.2.2/24
UP peerlink (Peerlink) 20G 1500 Bond Bond Members: swp6(UP), swp7(UP)

vagrant@Leaf-131:~$ netshow bonds all --oneline
--------------------------------------------------------------------
To view the legend, rerun "netshow" cmd with the "--legend" option
--------------------------------------------------------------------
Name Speed MTU Mode Summary
-- ------------------------------------------- ------- ----- ------- ---------------------------------------------
UP L-S-bond-1 (Uplink bond to Spine 111 & 112) 20G 1500 Bond/L3 Bond Members: swp2(UP), swp3(UP), 10.1.1.3/24
UP L-S-bond-2 (Uplink bond to Spine 121 & 122) 20G 1500 Bond/L3 Bond Members: swp4(UP), swp5(UP), 10.1.4.3/24
UP peerlink (Peerlink) 20G 1500 Bond Bond Members: swp6(UP), swp7(UP)

vagrant@Leaf-132:~$ netshow bonds all --oneline
--------------------------------------------------------------------
To view the legend, rerun "netshow" cmd with the "--legend" option
--------------------------------------------------------------------
Name Speed MTU Mode Summary
-- ------------------------------------------- ------- ----- ------- ---------------------------------------------
DN L-S-bond-1 (Uplink bond to Spine 111 & 112) 20G 1500 Bond/L3 Bond Members: swp2(UP), swp3(UP), 10.1.1.4/24
DN L-S-bond-2 (Uplink bond to Spine 121 & 122) 20G 1500 Bond/L3 Bond Members: swp4(UP), swp5(UP), 10.1.4.4/24
UP peerlink (Peerlink) 20G 1500 Bond Bond Members: swp6(UP), swp7(UP)

Userlevel 4
@Mirza Waqas Ahmed,

My worry with this thread is that the original question was about Ansible and now we are talking about how to configure MLAG. I highly recommend looking at one of our 'Boot Camps' at some point; I think you could gain a lot of value (http://cumulusnetworks.eventbrite.com/). I think each question needs to be separate to simplify this process for community support. Anyway, let me try to answer ->

So what is the goal with this topology? Do you want VLAN connectivity between host-1 and host-2? You have three choices; you can ->
  • Do MLAG all the way up to the Spine, VRR runs on one of your Spine pairs, or both
  • Do HOST MLAG, MLAG stops at Leaf layer, VRR is running on Leaf layer
  • Do no MLAG, route the whole way <-usually what I prefer to do
If you want MLAG all the way to the spine (e.g. VLAN10 on Leaf-131, Spine-112, Leaf-141 gives access to VLAN10 on any device), the way you set it up is incorrect; you will want:
  • No OSPF, literally turn quagga off, it will not be needed in the diagram you provided
  • clag-id on every bond
  • no IP address on any bond
  • every bond IS a member of the bridge on its respective switch
  • The pairs are setup correctly in your diagram, Spine-111 and Spine-112, Leaf-131 and Leaf-132, Spine-121 and Spine-122, Leaf-141 and Leaf-142
This creates a huge layer 2 domain, but many customers do this and have no problems.
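
On each switch that can be as simple as one vlan-aware bridge holding every bond (a sketch; the VLAN ID is a placeholder, the bond names are the ones from this thread):

# bridge-vids value is a placeholder
auto bridge
iface bridge
bridge-vlan-aware yes
bridge-ports peerlink L-S-bond-1 L-S-bond-2
bridge-vids 10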

If you want just host MLAG (layer 2 to the leaf layer):
  • no clag-id on spine->leaf bonds/etherchannels
  • clag-id on leaf->host bonds/etherchannels
  • IP address on spine->leaf bonds
  • no MLAG on spine switches at all
  • no bridge containing bonds/etherchannels
  • VRR / MLAG is running on leaf-131 and leaf-132 as a pair, and leaf-141 and leaf-142 as a pair
If you do no MLAG (a small quagga sketch follows this list):
  • literally delete every swp6,7 config
  • turn quagga on every device
  • turn all bonds/etherchannels into l3 bonds
  • remove all bridges
  • either manually add OSPF network statements for each host subnet on the leafs, or run quagga on host-1 and host-2 (routing on the host).
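
For the routed option, a minimal quagga sketch would be something like the following (the router-id is a placeholder; the network statement reuses the 10.1.1.0/24 spine-leaf subnet from this thread, and host subnets would get their own statements):

# /etc/quagga/daemons
zebra=yes
ospfd=yes

# /etc/quagga/Quagga.conf (or configure via vtysh); router-id is a placeholder
router ospf
 ospf router-id 10.0.0.111
 network 10.1.1.0/24 area 0.0.0.0
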
If you have an L2 requirement between host-1 and host-2 you have two options:
  1. Use MLAG, VLANs have to exist everywhere
  2. Do not use MLAG; use VXLAN to tunnel across the L3 network so the hosts have L2 connectivity
If you use VXLAN consider looking at LNV:
http://docs.cumulusnetworks.com/display/DOCS/LNV+Full+Example

OK, thank you Sean for your extended answer. I do understand that the discussion drifted away from the original question.

I will look deeper into this myself and see where I get.

However, my chosen option is the one below, which I am struggling to get running, even though I am setting all the parameters.

Host MLAG (layer 2 to the leaf layer):
  • no clag-id on spine->leaf bonds/etherchannels
  • clag-id on leaf->host bonds/etherchannels
  • IP address on spine->leaf bonds
  • MLAG on spine switches
  • OSPF b/w leaf and spine
  • no bridge containing bonds/etherchannels
  • VRR / MLAG is running on leaf-131 and leaf-132 as a pair, and leaf-141 and leaf-142 as a pair
Thank you once again and apologies for the long thread.