Accelerated packet pushers, e.g. XDP vs DPDK - what do you use, and why?



consummate homelabber
Mar 17, 2017
Near Seattle

I have a VM running VyOS that I'm using as a packet pusher for a storage network. I'd like to offload it to an old E3 board I'm using as an OPNsense router. But my storage network is 40 Gbps, and it's impossible for even a fast board to get that kind of throughput, let alone a dual-core E3 clocked at 2.0 GHz (it's the 1220L v2). For reference, my E5s might hit 17 Gbps doing VM migration on a good day when everything else is idle.

So I was wondering if anyone is using one of these newfangled packet-pushing technologies for 10+ Gbps networks. Unfortunately I just found out TNSR doesn't support my ConnectX-3s, so my final solution might have to be vanilla, but I'm curious to hear about anyone's all-in solutions as well.

Which one do you like? What are the advantages / disadvantages? How can I get started? Is any software actually using XDP yet, or is it still primarily in the dev realm?

Edit: It looks like the XDP stuff is still rather DIY. However, it appears to be documented rather well. I found some links that might be useful:

How to build a BGP router leveraging XDP:

XDP hands on programming tutorial:

BPF functions included by kernel version (with complete list of functions):

XDP supported drivers:
Last edited:


Dec 12, 2019
I think I read somewhere on this site that DANOS, based on AT&T's dNOS (itself based on Vyatta, like VyOS), uses DPDK to good effect.

It's something I've been wanting to look at out of general curiosity, but personally I can't think of any use case where I would need that kind of speed plus anything more than the super-basic ACLs a switch with ASICs can do at that rate. I.e. I can't justify the NICs and software versus just using a switch with 40Gb+ interfaces, so I'm curious what the use case is.


Jan 17, 2020
I am a bit lost as to whether you're pursuing BGP peering or storage network design.

For 10 Gbps+, it depends how much "+". At just 10 Gbps, a 2667 v2 in a VM with a passed-through X520 and Suricata with a reasonable number of rules does full inline IPS under pfSense, which has none of that magic.

VyOS, TNSR, etc. are all products built on top of a specific software stack. Performance-wise (and this is my opinion only) they don't deliver anything on top of DPDK+VPP. You get nicer management, support, a CLI command subsystem, dynamic routing protocols and whatnot in a package, but that package will not outperform the stack it is based on. FRR delivers BGP, and there's a manual on how to get it working with DPDK+VPP.

Since you referenced a VM, it's not clear whether you passed through or SR-IOV'd the NICs; otherwise virtual switches face the same software restrictions. I have no experience with XDP, but the concept is very likely similar to DPDK: remove the interface from the kernel's packet-processing path to increase speed and lower latency. With the CPU being the bottleneck, results should be comparable.

For a storage network it's also latency and pps that play the critical role, rather than raw Gbps. VM migration is like p2p compared to hundreds of endpoints accessing random-sized data at random times on a regular storage server. Then there's whatever talks to it (iSCSI, NFS, SMB), how well multithreaded those are, and how fast the client CPU is. Between two 2667 v2s via an SX6012 and ConnectX-3 I was able to push 36 Gbps in multiple threads; upload to a NAS VM is more like 9 Gbps.
Last edited: