Cumulus VX OVS VTEP ARP handling


I'm trying to configure openvswitch-vtep to work with the L2GW OpenStack plugin. I've got to the point where the L2GW agent can successfully populate the hardware VTEP database of the Cumulus VX, and I've got ping working, but only one way. What fails is the ARP request from the baremetal server to a VM. I've checked the ovsdb database and can't see any inconsistencies there. I've tried this with another vendor's hardware VTEP (e.g. HP) and it worked fine, so it looks like there's something specific to VX that breaks it.

These are the relevant parts of the ovsdb dump. 10.0.0.5 is the tunnel IP of the Cumulus VX; 10.0.0.3 is the overlay IP of the VM I'm trying to ping.

Ucast_Macs_Remote table
MAC                  _uuid                                 ipaddr      locator                               logical_switch
-------------------  ------------------------------------  ----------  ------------------------------------  ------------------------------------
"fa:16:3e:3c:51:d7"  53fade4c-6083-4e7a-adcf-43852422e18e  "10.0.0.3"  ceea04d3-360a-4cc8-ae6a-4abcb34f0b94  fe516b37-b4d0-4501-9487-f6c94462d63b

Mcast_Macs_Local table
MAC          _uuid                                 ipaddr  locator_set                           logical_switch
-----------  ------------------------------------  ------  ------------------------------------  ------------------------------------
unknown-dst  2b15b45d-bcfc-4ba3-8f85-119ea964335a  ""      febe0ca8-b718-455f-a145-031667a26bc4  fe516b37-b4d0-4501-9487-f6c94462d63b

Physical_Locator_Set table
_uuid                                 locators
------------------------------------  --------------------------------------
febe0ca8-b718-455f-a145-031667a26bc4  [9f01e1c8-524b-49da-a6a1-b06571a7f310]

Physical_Locator table
_uuid                                 dst_ip      encapsulation_type
------------------------------------  ----------  ------------------
9f01e1c8-524b-49da-a6a1-b06571a7f310  "10.0.0.5"  "vxlan_over_ipv4"
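For reference, a dump like the one above can be pulled straight from the hardware_vtep database. This is roughly what I ran; the socket path is the stock OVS default and the logical switch name is a placeholder, so both may differ on a VX install:

    # dump the whole hardware_vtep database in table form
    ovsdb-client dump --format=table unix:/var/run/openvswitch/db.sock hardware_vtep

    # per-logical-switch views of the learned/programmed MACs
    vtep-ctl list-remote-macs <logical-switch>
    vtep-ctl list-local-macs <logical-switch>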


Michael,

What version of VX are you using? What tests other than ping have you done on the link? Do you have any results you can share?
Hi Scott, I'm using VX 2.5.6 and have only done ping tests. The first test was to ping a baremetal IP from a VM. In that case ARP replication is handled by the source compute host. This test worked fine, and I was able to ping both ways, since the ARP cache of the baremetal device had been populated with the VM's MAC. However, when I clear the baremetal's ARP cache and try pinging a VM, I can see the ARP request leaving the baremetal server and hitting the interface of the VX (swp3 in my case); after that it simply disappears (see the capture commands below).

Let me rephrase my question: the official Cumulus LNV configuration guide states that head-end replication is the default option, and it's definitely supported by the Trident 2 chipset. Has this functionality been ported to VX? Can a VX do head-end replication without having to rely on either a service node or multicast?
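For completeness, these are roughly the captures I used to watch the ARP request stop at the VX. The uplink interface name is a placeholder from my setup, and 4789 is the standard VXLAN UDP port:

    # ARP request from the baremetal host arriving on the access port
    tcpdump -nnei swp3 arp

    # if the VX were head-end replicating, the same request should appear
    # VXLAN-encapsulated on the uplink towards the compute hosts
    tcpdump -nnei <uplink> udp port 4789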
I think I may have found the answer to my problem. Until recently, the only BUM replication mode supported by the OVS hardware VTEP implementation was service-node replication. Head-end replication support was only introduced upstream on the 10th of May 2016, so it'll be a while until this update trickles down to the Cumulus repos.
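If that's right, the interim workaround would presumably be service-node replication: point unknown-dst traffic at a replication node by hand, something like the following. The logical switch name and service node IP are placeholders, and I haven't verified how this interacts with the entries the L2GW agent manages:

    # add a remote unknown-dst entry so BUM traffic is tunnelled to a service node
    vtep-ctl add-mcast-remote <logical-switch> unknown-dst <service-node-ip>

    # confirm the entry landed in Mcast_Macs_Remote
    vtep-ctl list-remote-macs <logical-switch>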
