Cumulus VX - VirtualBox - virtio NIC "slow"


Userlevel 1
Hi All,

I am using VirtualBox v4.3.30 rc10 running the Cumulus VX image. NIC 1 (eth0) is the virtio-net type. When I do a 100 MB file transfer with wget from a local web server, it takes about 5 minutes (very slow). The same wget run directly from my Mac to the same web server takes about 5 seconds.

I am wondering if there is a way to improve the speed of NIC 1. I tried changing the NIC 1 type to the various other options, like Intel PRO/1000, but when I did, the eth0 link did not come up. I also searched around on this topic but did not find anything I could do within the bounds of using the Cumulus VX image.

Any help or suggestions would be appreciated.

Thank you!
-- Jeremy

21 replies

Hi Jeremy,

Is the eth0 NIC in NAT mode from the VirtualBox perspective? One of the things that slows down transfers to any locally hosted VM is NAT having to rewrite every incoming packet with new L2 and L3 headers.

Would you be able to test with the NIC in bridged mode?
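If it helps, switching NIC 1 to bridged mode can also be done from the command line. A minimal sketch, where the VM name "CumulusVX" and host adapter "en0" are placeholders for your setup (run with the VM powered off):

```shell
# Placeholders: adjust the VM name and host interface to your setup.
VM="CumulusVX"
HOST_IF="en0"

# Print the VBoxManage commands rather than running them;
# pipe the output to `sh` to actually apply.
bridge_cmds() {
  echo "VBoxManage modifyvm $VM --nic1 bridged"
  echo "VBoxManage modifyvm $VM --bridgeadapter1 $HOST_IF"
}
bridge_cmds
```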



It should be noted, though, that this will put the VM on the same subnet as your laptop. With virtual environments, NAT is sometimes truly necessary, depending on your security requirements and use case.

If eth0 is configured in /etc/network/interfaces to receive a DHCP offer, you may need to run "dhclient eth0" to get it to renew its lease.

Thanks,

Scott
Userlevel 1
Hi Scott,

Thank you for your prompt reply. I can confirm that the NIC is in *bridged* mode; see the attached screenshot.



I'd appreciate any other suggestions or ideas you might have. Or perhaps the NIC is operating at the "correct" speed; this is the first time I am using the virtio-net type. So my "ask" is: could you please reproduce the setup by doing a wget from your local web server and see what speed you get? That would be very helpful, thank you.

Cheers,
-- Jeremy

Userlevel 3
This is interesting. I'm not seeing any slowness. I've got the Cumulus VX image running on VirtualBox 4.3.26_Ubuntur98988. I did a wget of the CumulusLinux-2.5.3 box file and it transferred at a rate of 61.1M/s.

Length: 510749520 (487M)
Saving to: 'CumulusLinux-2.5.3.box'

100%[================================================================>] 510,749,520  61.1M/s   in 8.2s

2015-08-12 23:08:48 (59.5 MB/s) - 'CumulusLinux-2.5.3.box' saved [510749520/510749520]
I did a similar test using scp, which was slower but still pretty fast:
CumulusLinux-2.5.3.box                                                  100%  487MB  34.8MB/s   00:14    
Transferred: sent 126728, received 511251264 bytes, in 13.7 seconds
Bytes per second: sent 9235.9, received 37259841.4
Perhaps there's an issue with the virtio driver on the Mac.

Userlevel 1
Hi Scott,

Thank you for running the test; it's good to know that it's not a function of the Cumulus VX image!

Also, good suggestion on checking the virtio drivers on my Mac; I hadn't thought of that. When I installed VirtualBox, I just assumed that the virtio drivers would be the latest. I'll look into it and see what I find. If you have any experience checking or upgrading these sorts of things, any pointers would be greatly appreciated.

Cheers,
-- Jeremy

Userlevel 1
Scott,

If you happen to have a Mac handy and can check, that would be helpful, or if anyone else following this thread could, that would be great. I didn't find anything useful from searching on host driver upgrades. I also upgraded to VirtualBox 5.0 just for testing and see the same results.

Cheers,
-- Jeremy
Userlevel 1
Scott,

Did some additional testing with my Cumulus VX and discovered something of interest. I neglected to mention that my Cumulus VX image has 32 NIC ports enabled 🙂 With that configuration I generally see about 400K/s.

When I re-tested using the "stock" Cumulus VX image with 8 NIC ports, I did see better performance: about 2.8M/s.

Interesting.

Cheers,
-- Jeremy
Userlevel 3
Jeremy,

When you had 32 NIC ports enabled, were the corresponding swp ports up (ip link show)? And were they all connected to the same internal network name (the default is intnet)? If so, that internal network is acting like a bridge: when one swp port sends out a packet, as LLDP normally does, it gets broadcast to, and received by, every other swp interface. That could be chewing up lots of CPU on your Mac, reducing the effective bandwidth on eth0.
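One quick way to spot this is to count how many NICs share each internal network name in the `VBoxManage showvminfo` output; any count above 1 is a shared segment. A sketch, using sample lines in place of real output:

```shell
# Sample lines standing in for real `VBoxManage showvminfo <vm>` output.
sample="NIC 2: MAC: ..., Attachment: Internal Network 'intnet', Type: virtio
NIC 3: MAC: ..., Attachment: Internal Network 'intnet', Type: virtio
NIC 4: MAC: ..., Attachment: Internal Network 'intnet_4', Type: virtio"

# Count NICs per internal network; a count > 1 means those ports
# sit on one bridged segment and see each other's broadcasts.
count_networks() {
  printf '%s\n' "$sample" | grep -o "Internal Network '[^']*'" | sort | uniq -c
}
count_networks
```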
Userlevel 1
Hi Scott,

I was careful to put each NIC on a separate internal network so as to mimic individual ports. See the attached snapshot (it only shows 8 ports, but they are all configured like this).

Hope this helps.

Cheers,
-- Jeremy

Userlevel 3
Jeremy,

I tried the same configuration as yours. I configured 32 NICs, all using virtio, each on its own internal network:
$ vboxmanage showvminfo CumulusVX-2.5.3-NICs | grep ^NIC
NIC 1: MAC: 0800270F6B21, Attachment: Bridged Interface 'eth0', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 2: MAC: 0800271315EB, Attachment: Internal Network 'intnet_2', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 3: MAC: 08002704B034, Attachment: Internal Network 'intnet_3', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 4: MAC: 080027098CD4, Attachment: Internal Network 'intnet_4', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 5: MAC: 080027797294, Attachment: Internal Network 'intnet_5', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 6: MAC: 080027B999C3, Attachment: Internal Network 'intnet_6', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 7: MAC: 080027BEC946, Attachment: Internal Network 'intnet_7', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 8: MAC: 0800273E2B3D, Attachment: Internal Network 'intnet_8', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 9: MAC: 0800278F09A5, Attachment: Internal Network 'intnet_9', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 10: MAC: 080027DC87F7, Attachment: Internal Network 'intnet_10', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 11: MAC: 08002740B3C2, Attachment: Internal Network 'intnet_11', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 12: MAC: 08002788FFE0, Attachment: Internal Network 'intnet_12', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 13: MAC: 080027051B29, Attachment: Internal Network 'intnet_13', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 14: MAC: 080027B1FC9E, Attachment: Internal Network 'intnet_14', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 15: MAC: 080027AD2448, Attachment: Internal Network 'intnet_15', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 16: MAC: 0800278A156D, Attachment: Internal Network 'intnet_16', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 17: MAC: 0800276B529A, Attachment: Internal Network 'intnet_17', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 18: MAC: 08002712C7ED, Attachment: Internal Network 'intnet_18', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 19: MAC: 080027BE7FAC, Attachment: Internal Network 'intnet_19', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 20: MAC: 08002736B8CD, Attachment: Internal Network 'intnet_20', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 21: MAC: 08002788EF51, Attachment: Internal Network 'intnet_21', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 22: MAC: 0800276BC905, Attachment: Internal Network 'intnet_22', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 23: MAC: 080027C2E618, Attachment: Internal Network 'intnet_23', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 24: MAC: 0800278D145E, Attachment: Internal Network 'intnet_24', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 25: MAC: 0800278F0474, Attachment: Internal Network 'intnet_25', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 26: MAC: 080027F74B14, Attachment: Internal Network 'intnet_26', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 27: MAC: 080027745146, Attachment: Internal Network 'intnet_27', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 28: MAC: 0800276B2D08, Attachment: Internal Network 'intnet_28', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 29: MAC: 0800272CFC19, Attachment: Internal Network 'intnet_29', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 30: MAC: 08002723B184, Attachment: Internal Network 'intnet_30', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 31: MAC: 0800271154E9, Attachment: Internal Network 'intnet_31', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 32: MAC: 0800277B0096, Attachment: Internal Network 'intnet_32', Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 33: disabled
NIC 34: disabled
NIC 35: disabled
NIC 36: disabled
I then booted the VM and brought up all of the interfaces:

cumulus@cumulus$ ip link show 
1: lo: mtu 16436 qdisc noqueue state UNKNOWN mode DEFAULT
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:0f:6b:21 brd ff:ff:ff:ff:ff:ff
3: swp1: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:13:15:eb brd ff:ff:ff:ff:ff:ff
4: swp2: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:04:b0:34 brd ff:ff:ff:ff:ff:ff
5: swp3: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:09:8c:d4 brd ff:ff:ff:ff:ff:ff
6: swp4: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:79:72:94 brd ff:ff:ff:ff:ff:ff
7: swp5: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:b9:99:c3 brd ff:ff:ff:ff:ff:ff
8: swp6: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:be:c9:46 brd ff:ff:ff:ff:ff:ff
9: swp7: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:3e:2b:3d brd ff:ff:ff:ff:ff:ff
10: swp8: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:8f:09:a5 brd ff:ff:ff:ff:ff:ff
11: swp9: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:dc:87:f7 brd ff:ff:ff:ff:ff:ff
12: swp10: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:40:b3:c2 brd ff:ff:ff:ff:ff:ff
13: swp11: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:88:ff:e0 brd ff:ff:ff:ff:ff:ff
14: swp12: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:05:1b:29 brd ff:ff:ff:ff:ff:ff
15: swp13: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:b1:fc:9e brd ff:ff:ff:ff:ff:ff
16: swp14: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:ad:24:48 brd ff:ff:ff:ff:ff:ff
17: swp15: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:8a:15:6d brd ff:ff:ff:ff:ff:ff
18: swp16: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:6b:52:9a brd ff:ff:ff:ff:ff:ff
19: swp17: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:12:c7:ed brd ff:ff:ff:ff:ff:ff
20: swp18: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:be:7f:ac brd ff:ff:ff:ff:ff:ff
21: swp19: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:36:b8:cd brd ff:ff:ff:ff:ff:ff
22: swp20: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:88:ef:51 brd ff:ff:ff:ff:ff:ff
23: swp21: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:6b:c9:05 brd ff:ff:ff:ff:ff:ff
24: swp22: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:c2:e6:18 brd ff:ff:ff:ff:ff:ff
25: swp23: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:8d:14:5e brd ff:ff:ff:ff:ff:ff
26: swp24: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:8f:04:74 brd ff:ff:ff:ff:ff:ff
27: swp25: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:f7:4b:14 brd ff:ff:ff:ff:ff:ff
28: swp26: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:74:51:46 brd ff:ff:ff:ff:ff:ff
29: swp27: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:6b:2d:08 brd ff:ff:ff:ff:ff:ff
30: swp28: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:2c:fc:19 brd ff:ff:ff:ff:ff:ff
31: swp29: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:23:b1:84 brd ff:ff:ff:ff:ff:ff
32: swp30: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:11:54:e9 brd ff:ff:ff:ff:ff:ff
33: swp31: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 08:00:27:7b:00:96 brd ff:ff:ff:ff:ff:ff
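For reference, bringing swp1 through swp31 up can be done in one loop. A sketch that prints the commands (pipe the output to `sudo sh` on the VX VM to actually run them):

```shell
# Print the `ip link set ... up` command for each switch port.
# swp1..swp31 matches the 31 enabled swp interfaces above.
up_cmds() {
  for i in $(seq 1 31); do
    echo "ip link set swp$i up"
  done
}
up_cmds
```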
I then did a wget and it was still the same speed:

Length: 510749520 (487M)
Saving to: 'CumulusLinux-2.5.3.box'

100%[==============================================>] 510,749,520 59.1M/s in 8.2s

2015-08-13 16:09:54 (59.1 MB/s) - 'CumulusLinux-2.5.3.box' saved [510749520/510749520]

Are you sure you were using the en5: USB Ethernet interface when you were doing the wget outside the VM on your Mac? I'm really grasping at straws here, since I can't reproduce what you are seeing.
Userlevel 1
Hi Scott,

I believe it's good enough that you've tried testing this on your end. Even though you're getting different and better results, I think we can wrap up this topic. It's gotta be something on my end, and I'll noodle through it with some friends who are savvy with both macOS and VirtualBox. If I find anything of interest, I will definitely post it here.

Thank you again for all your help!

Cheers,
-- Jeremy
I tried the same thing on VirtualBox / OS X:

I had the same exact results as Jeremy. VM -> Cumulus transfers were slow, Cumulus -> anything was fast, and OS X to/from Cumulus was fast. I'll try this again later on Linux.
Userlevel 1
@John - thank you for testing this out and verifying the same results I am seeing.
Userlevel 4
Just for kicks... can you provide the output of "sudo iptables -L"?

Wondering if a policer is on or something weird... cl-acltool is not installed on VX (it's a hardware-dependent tool), and there shouldn't be any iptables rules...
Userlevel 1
@sean - looks like iptables is clear:


Userlevel 4
Thanks, that's what I get too... 😞

For kicks... can you use something other than the USB NIC? I'm curious whether the USB bus is saturated... how many USB NICs are there? E.g., attach it to the wireless or a real Ethernet port (Thunderbolt?) and see what happens. I have a feeling either the USB bus or the adapter is flaking out (not meeting expectations).

(I tested this with both curl and wget under Vagrant against a remote server, and I am seeing results similar to Scott's; I can't recreate the slowness.) I am on OS X 10.10.4.
Userlevel 1
@sean - I did try using the "Host only adapter", and got the same slow results.
Userlevel 1
@sean - FWIW, I am *not* using Vagrant; I am using straight VirtualBox. I cannot see how this would be a factor, but it is a difference in our setups.
Userlevel 1
@sean - any chance you could run a test where you have the Cumulus VX eth0 on a "host-only" network type?
Userlevel 1
Hi All,

I finally got a working solution where the Cumulus VX does work as expected. YEA!

The key here is that the NICs on all of the virtual machines had to *ALL* be of the same network adapter type, in this case paravirtualized.

When I did my original testing, I had all of the VMs on a bridged network. The ZTP server was using an Intel PRO adapter type, while the Cumulus VX had to be paravirtualized.

So I took everything off of the bridged network and onto a "host-only" network, and changed all the network adapter types to paravirtualized. Now it works super fast.
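For anyone following along, the adapter-type change can be scripted with VBoxManage. A sketch that prints the commands for a VM with 32 NICs ("CumulusVX" is a placeholder name; pipe the output to `sh` with the VM powered off to apply):

```shell
VM="CumulusVX"   # placeholder VM name; adjust to yours

# Print a `VBoxManage modifyvm ... --nictypeN virtio` command per NIC.
virtio_cmds() {
  for n in $(seq 1 32); do
    echo "VBoxManage modifyvm $VM --nictype$n virtio"
  done
}
virtio_cmds
```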

Really appreciate everyone's help on this. Feeling this right now:
http://despair.com/collections/posters/products/mistakes

😃

Cheers,
-- Jeremy
Userlevel 1
Jeremy Schulman wrote:

Hi All,

I finally got a working solution where the Cumulus VX does work as expected. YEA!

The ...

Thanks for figuring this out!

I built a 4-switch leaf/spine network running under VirtualBox on a Mac Pro and was getting poor performance. The out-of-band management ports use a Bridged Adapter so that the switches can be accessed by external systems; the links between switches are created using Internal Networks.

A few Ubuntu 14.04.3-server VMs act as load generators, and VirtualBox had selected Intel as the default adapter type for them. Switching the VMs to paravirtualized adapters increased iperf throughput across the fabric from 1 Mbit/sec to 500 Mbits/sec!
Good Job Jeremy! You should be feeling this instead: http://despair.com/collections/posters/products/achievement 😉

Reply