Clarification on FIB / forwarding table size for Cumulus on Broadcom

  • 10 January 2017
  • 3 replies

There seems to be a discrepancy in the Number of Supported Route Entries, by Platform section for Broadcom Trident switches. The docs here show a maximum of 128K IPv4 or 20K IPv6 routes. Some switch vendor docs differ: e.g. the edge-core AS6701-32X (data sheet), with a BCM56850 Trident II, shows 64K IPv4 routes and 20K IPv6 routes, i.e. fewer IPv4 routes than the Cumulus docs. Other Trident II switches get more complicated: the edge-core AS6712-32X (data sheet), also with a BCM56850 Trident II (though with an Intel Atom rather than a Freescale CPU), shows:
  • 16K IPv4 routes (LPM) in TCAM
  • 112K max. host entries
  • 8K IPv6 routes (LPM) in TCAM
  • 56K max. host entries
This seems to relate to a Broadcom feature in Trident II and later that can keep host routes in CAM rather than TCAM (UFT?). Combining the IPv4 LPM and host entries gives 128K, matching the Cumulus docs, but the combined IPv6 LPM and host entries add up to 64K, which is much higher than the 20K listed in the Cumulus docs.
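Working through the datasheet arithmetic as a quick sanity check (the figures below are the ones quoted above, expressed in Python):

```python
# AS6712-32X datasheet figures (BCM56850 Trident II), as quoted above.
ipv4_lpm_tcam = 16 * 1024   # IPv4 LPM routes in TCAM
ipv4_host_cam = 112 * 1024  # IPv4 exact-match host entries
ipv6_lpm_tcam = 8 * 1024    # IPv6 LPM routes in TCAM
ipv6_host_cam = 56 * 1024   # IPv6 exact-match host entries

ipv4_total = ipv4_lpm_tcam + ipv4_host_cam  # 128K, matching the Cumulus docs
ipv6_total = ipv6_lpm_tcam + ipv6_host_cam  # 64K, well above the 20K listed

print(f"IPv4: {ipv4_total // 1024}K, IPv6: {ipv6_total // 1024}K")
```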
Any guidance on which info is correct? (There even seems to be a conflict between the data sheets of the two different edge-core switches, both using a BCM56850 Trident II chipset.) Aside from the CAM-carving profiles described here, can Cumulus take advantage of UFT?
Also: I'm assuming that the exact-match routing in UFT applies to any /32 or /128 route, not just direct adjacencies. As an example: in a container deployment on a regular routed/L3 network using something like Calico, a /32 and/or /128 route is advertised per container. Am I correct in saying that those /32 and /128 routes should fit in the CAM via UFT rather than going into the more limited TCAM used for longest prefix matching (LPM)?
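To make the assumption in my question concrete, here's a tiny sketch of the classification I have in mind: full-length prefixes (/32 IPv4, /128 IPv6) land in the exact-match host CAM, everything shorter needs the LPM TCAM. (This is just my mental model, not how the chip or NOS necessarily carves things.)

```python
import ipaddress

def table_for(prefix: str) -> str:
    """Classify a route by where it could live under UFT-style partitioning:
    full-length host routes in the exact-match CAM, shorter prefixes in the
    LPM TCAM. This mirrors the assumption in the question above; actual chip
    behavior depends on the NOS and the forwarding-table profile in use."""
    net = ipaddress.ip_network(prefix)
    return "host (CAM)" if net.prefixlen == net.max_prefixlen else "lpm (TCAM)"

print(table_for("10.0.0.5/32"))  # host (CAM) -- e.g. a per-container route
print(table_for("10.0.0.0/24"))  # lpm (TCAM)
print(table_for("fd00::1/128"))  # host (CAM)
print(table_for("fd00::/64"))    # lpm (TCAM)
```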


Userlevel 2
Hi Hugo,

Some of the specs provided for the chipset may not reflect a real-world situation. The values in our tables are tested on the different platforms we support, so in general it's better to rely on the numbers we have published.

The values that you can get with changing the routing profiles can be found here:

The shared TCAM also explains why we have less IPv6 space available than the hardware specs suggest: we've dedicated more of it to IPv4.
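For reference, on the Broadcom platforms the forwarding table profile is selected in `/etc/cumulus/datapath/traffic.conf`. This is only a sketch; the exact key name and the set of supported profile values depend on your Cumulus Linux release, so check the routing docs for your version:

```ini
# /etc/cumulus/datapath/traffic.conf  (sketch -- verify key and values
# against the routing docs for your release)
#
# forwarding table size profile, e.g. default, l2-heavy, v4-max, v6-max
forwarding_table.profile = v6-max
```

After changing the profile you'd restart switchd for it to take effect.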
Userlevel 4
Thanks for the concise answer here, Attilla. Hugo, I updated the documentation and added a note that hopefully explains this.
Thanks for the info; that is quite helpful. I just noticed in e.g. the edge-core AS6712-32X datasheet that the FIB numbers are prefixed with "Subject to NOS", which can help explain the discrepancy.

Is there any possibility of adding profiles with different biases, e.g. more balance between IPv4 and IPv6 entries, or exact-match-heavy rather than LPM-heavy? In other words: is there still a hard split between the v4 and v6 tables such that we can't "borrow" more exact-match entries from the v4 side to beef up capacity on the v6 side, or can that 128K/20K split slide further toward the v6 side? I'm thinking specifically of fully dual-stack L3 routed deployments that use host routes per VM/container, like Calico, and this bit from Facebook re: the Trident II piqued my interest:
We found two features on the Trident 2 - Unified forwarding tables (UFT) and Algorithmic LPM (ALPM) - that could be of help here, and we choose UFT. This allowed us to partition the CAM and TCAM memories in a way that suited us. We leveraged the fact that Trident 2 lets you put host routes in CAM tables, leaving precious TCAM space to be utilized solely for prefixes that really need longest prefix matching.
Also: Is there any more info on ALPM? I only see it mentioned in passing in the routing docs and can't really find much about it elsewhere online.

We may end up stuffing VXLAN on top of it and protecting the FIB that way, but are starting to do some homework on this at the moment.

Cheers & thanks!