OpenBGPd to Quagga - IPv4 over IPv6 BGP

Userlevel 1
I’ve been trying to get a pfSense firewall running OpenBGPd to advertise an IPv4 route over an IPv6 peering session, with an IPv6 next hop, to a Cumulus Linux switch (currently 3.1). I have come across a few unexpected things which I was hoping somebody could clear up for me.
Firstly, from what I can gather, OpenBGPd doesn’t support RFC 5549, so it won’t automagically send an IPv6 address as a valid next hop for an IPv4 route. However, as long as the OpenBGPd IPv4 next hop is on the same subnet as the Cumulus switch, I can successfully peer over an IPv6 BGP session and advertise an IPv4 route using the IPv4 next hop from pfSense.
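For reference, the working setup on the pfSense side looks roughly like the sketch below: an IPv6 BGP session that still announces IPv4 routes with an on-link IPv4 next hop. All addresses and AS numbers are placeholders, and the exact syntax varies between OpenBGPD versions:

```
# bgpd.conf sketch on pfSense (placeholder values)
AS 65001
router-id 192.0.2.10

# advertise the tenant's public /32 from a loopback alias
network 198.51.100.10/32

# peer with the Cumulus leaf over IPv6, but announce IPv4 unicast;
# the IPv4 next hop must be on the same subnet as the switch
neighbor 2001:db8:1::1 {
        remote-as 65000
        descr "cumulus-leaf"
        announce IPv4 unicast
}
```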
After doing a bit more research, I was hoping to use route maps in Quagga on the Cumulus Linux switch to manipulate the incoming route and add an IPv6 next-hop address via the “set ipv6 next-hop peer-address” command. This is where things start to get interesting. Firstly, using that command, I start getting errors in the Quagga log saying “DENIED due to: martian or self next-hop”. While hunting through the Quagga source to work out why this wasn’t working, it became apparent that this line is not in the Cumulus Quagga git repo. In fact, the only place I managed to find it was here:
Committed by Donald Sharp from Cumulus Networks?
So I’m a little confused as to why the version of Quagga shipping with Cumulus Linux differs from your git repo, and also why adding the IPv6 peer-address results in this error, as the comments in the code at the above link suggest that an IPv6 link-local address shouldn’t trigger it. Any ideas?
So, accepting that I wasn’t getting anywhere with the “set ipv6 next-hop peer-address” route map, I then tried configuring an IPv6 global address on the switch interface and created a route map containing “set ipv6 next-hop global xx::xx”. This gets applied without any errors, but unfortunately doesn’t actually change or add the next hop on the incoming route. I’m guessing that there must also be an attribute on the route which needs updating to reflect that it can support an IPv6 next hop?
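For anyone following along, the two route-map attempts described above look roughly like this on the Quagga side (addresses and AS numbers are placeholders):

```
! attempt 1: rewrite the next hop to the IPv6 peer address
! (rejected with "DENIED due to: martian or self next-hop")
route-map V6-NH-PEER permit 10
 set ipv6 next-hop peer-address
!
! attempt 2: rewrite the next hop to a global IPv6 address on the
! switch interface (accepted, but the incoming route is unchanged)
route-map V6-NH-GLOBAL permit 10
 set ipv6 next-hop global 2001:db8:1::1
!
router bgp 65000
 neighbor 2001:db8:1::2 remote-as 65001
 neighbor 2001:db8:1::2 route-map V6-NH-PEER in
```

Note that “set ipv6 next-hop” is normally intended for IPv6 prefixes, which may be why it has no effect when applied to an IPv4 route here.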
So I think after this I am left with two questions:
1. Is what I am trying to do possible in any way?
2. Are there any thoughts on fixing the route-map behaviour so that I can achieve what I’m trying to do?

5 replies

Userlevel 4
Hey Nick,

I have not tested OpenBGPd, but unless it supports RFC 5549, I highly doubt that will work. There are config enhancements we have made in Cumulus Linux, but the ability to do IPv4 over IPv6 is RFC 5549. I would try reaching out to the OpenBGPd community and seeing what they say. It might be as easy as using our Quagga on the pfSense box, if it’s an x86 machine, to get the desired outcome.
Userlevel 4
To support that on the Cumulus side, you need to enable the "capability extended-nexthop" command under the neighbor. Trying to mimic the behavior via other means will run up against built-in protections against misconfiguration. I agree with Sean: this needs proper support on the remote side. That being said, I'm curious to understand what your end goal is with this configuration. Is it simplicity, in not needing to explicitly configure an IP address for a BGP peer on one side of the link? Is it general experimentation, or something else?
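Assuming both ends supported RFC 5549, the Cumulus side would look roughly like this (placeholder addresses and AS numbers):

```
router bgp 65000
 neighbor 2001:db8:1::2 remote-as 65001
 ! advertise/accept IPv4 routes with an IPv6 next hop (RFC 5549)
 neighbor 2001:db8:1::2 capability extended-nexthop
 address-family ipv4 unicast
  neighbor 2001:db8:1::2 activate
```

The capability is only negotiated if the peer also advertises it, which is the missing piece on the OpenBGPd side.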
Userlevel 1
Thanks for your responses, guys. You're pretty spot on, Eric, in that the goal is to simplify configuration. We will have several racks with multi-tenant hypervisors in them. Each tenant has a virtual pfSense firewall cloned from a template. Instead of splitting the public IP range into /29s etc. and running VRRP on the leaf switches for each tenant's public range, which would lead to a lot of wasted addresses and also restrict mobility between leaf switches, I would like to run BGP on the firewalls and advertise public /32 IPs via loopback aliases.

As each pfSense is cloned, it would be much simpler if the BGP config could stay the same across all firewalls; it also means that our front-line support staff won't have to deal with BGP configuration. We also want to be in a position to start using IPv6, so being able to advertise both address families over a single BGP session would be advantageous.

I completely get that what I am trying to do is outside the scope of RFC 5549, but I was originally hoping that route maps might let me achieve the goal, considering most of the support for manipulating the next hop to use the IPv6 peer address is already there. If worst comes to worst, I guess I can create a /24 IPv4 subnet on each leaf switch to be used as the next hop for the pfSense firewalls, but it doesn't feel as elegant as using link-local IPv6.

I will look into building the Cumulus Quagga packages for pfSense, but it would be nice to be able to use the built-in BGP solution, as it is more visible than a third-party option. Could you also confirm my query about the Cumulus Quagga repository?

Userlevel 4
I'm not aware of all of the constraints of your environment; however, I've used IPv4 link-local addresses in the past for slightly different constraints, and that technique might have some use for you too. You can use the same IPv4 link-local (169.254.x.x) addresses on each pair of leafs/ToRs. There is no need to advertise them into the fabric; they're only present to establish the BGP adjacency. All routing is performed on top of the BGP session: a default route (or whatever else) is advertised down to the hosts/VMs/containers, and the individual host/container/VM IPs are advertised up into the fabric as host routes. Because you're using the same IP addressing everywhere, it is more readily stampable. The official Cumulus Quagga repository is:
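A minimal sketch of that design on a leaf, assuming a 169.254.x.x /30 per downlink (the same block can be reused on every leaf, since these addresses never enter the fabric; all values are placeholders):

```
interface swp1
 ip address 169.254.0.1/30
!
router bgp 65000
 neighbor 169.254.0.2 remote-as 65001
 address-family ipv4 unicast
  neighbor 169.254.0.2 activate
  ! hand the firewall a default route; it advertises its
  ! public /32s back up as host routes
  neighbor 169.254.0.2 default-originate
```

The pfSense side would then carry an identical static config on every clone, peering from 169.254.0.2 back to 169.254.0.1.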
Userlevel 1
Thanks for confirming the git repository. After cloning it and doing a git grep, it appears that GitHub's search is broken on bgp_route.c, which is why I could not find where that error was coming from. And thanks for the link-local IPv4 suggestion; that might do the trick.

I'm still going to look through the code and see if I can work out whether there is something I can do with IPv6; I will post back if I manage to get anywhere.