Stateless Packet Generation with VPP
This post is a HOWTO/notes-to-self about setting up VPP as a traffic generator to test other equipment. It’s longer than I initially intended, but I hope it serves as a useful reference. I plan to post some test results from live equipment soon!
Prologue
(If you just want to get to installation, skip to the next section.)
Over the years I’ve built several software firewalls, as I love open-source solutions. As I’ve started to dabble with NAT64, I’ve wanted to stress-test the devices to determine their throughput.
Without boring you too much with background, I wanted to create a packet generating box with the following properties:
- Capable of sending line-rate 10Gbps bidirectional traffic
- Capable of sending minimum-size frames, as most devices are limited by packets-per-second more so than bits-per-second
- Ability to specify frame sizes, address ranges, ports, etc
- Cheap / free
There are several software options available for Linux that generate packets. The one I liked for a while was mausezahn, though it is a bit old. It could saturate a 1Gbps connection with small packets and had a nice packet definition language. Unfortunately, I wasn’t able to get much beyond 1Gbps due to limitations of the kernel and the software.
That led me to the DPDK project, which is a pure-userspace networking library that allows for very fast packet processing. It’s more of a hassle to set up (it uses its own drivers), but once it’s going you can get very impressive performance.
Cisco maintains an open-source packet generator called TRex that is built on top of DPDK. It replays pcap files and substitutes new address ranges so that you can replay real-life traffic patterns against a test device. I tried it out and it works pretty well. There is a fair amount of setup, but the documentation is decent and it does pump out the packets (I saturated 2 10Gbps interfaces with minimum-sized packets). It also has a nice console that shows live traffic statistics.
Unfortunately, TRex doesn’t do as good a job with IPv6. While it can generate v6 packets, some of the fancier features (like NAT traversal) are IPv4-only. Additionally, the generator doesn’t do any “normal” networking prior to its test (like IPv6 neighbor discovery), so getting v6 flows to the device you’re testing can be difficult and require a lot of static neighbor discovery entries. I finally paused my evaluation of TRex to look for something else.
I was also looking at another Cisco-donated project called VPP (Vector Packet Processing) to act as a router/NAT44/NAT64 box. It is also built on DPDK and boasts very high performance. While reading the documentation, I saw that it features a packet generator mode. Though it generates only stateless traffic (i.e., it doesn’t process the replies), it’s capable of generating a huge volume of traffic, which is all I needed for stress testing.
Installing VPP
I’m using a refurbished Dell r620 1U server with 16GB of RAM and 2 Intel E5-2637 v3 (8 hyperthreaded 3.50GHz cores). Also included is Dell’s 2x 10Gb + 2x 1Gb network daughter card which shows up as 2x 82599ES (10Gb) and 2x I350 (1Gb) devices under Linux. These are supported by DPDK and while they are not the highest performance, they ended up working fine for my goal of 10Gb traffic. Total server cost was under $1,000.
Normally I’m a pure Debian guy, but looking at the VPP package repository showed that they were out of date on their Debian builds (they had packages for Bullseye, while Bookworm is current). The Ubuntu packages were more current, so I opted to build an Ubuntu Jammy box as the path of least resistance.
DPDK
First thing was to get DPDK on the box:
apt install dpdk dpdk-kmods-dkms python3-pip
That will build the DPDK kernel modules, which we’ll need in a bit.
From my adventures with TRex I knew of a few other changes that I should make right up front that will help with DPDK performance down the line.
I had to edit /etc/default/grub and change the kernel command line to enable IOMMU, CPU isolation, and hugepages:

GRUB_CMDLINE_LINUX_DEFAULT="iommu=pt intel_iommu=on default_hugepagesz=1G hugepagesz=1G hugepages=8 isolcpus=2,3,4,5,6,7"

After that, running update-grub builds everything out for the next reboot.
I also needed to enable the vfio-pci kernel module, so I created a drop-in file named /etc/modules-load.d/sa-dpdk.conf that contained the following:
# suffield: enable for dpdk
vfio-pci
With that in place, I rebooted the machine. In the BIOS setup I made sure the following were set:
- Virtualization
- IO Virtualization (look for DMA, VT-d, anything mentioning virtual I/O)
VFIO depends on those items, so that’s why they need to be on.
Then let the machine boot into Linux. As a quick reality check, you can see if DPDK is able to see your network cards by doing the following:
dpdk-devbind.py --status
You should get a list of network interfaces on your machine. Mine, with the Dell 4-port daughter card, looks like:
Network devices using kernel driver
===================================
0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=eno1 drv=ixgbe unused=
0000:01:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=eno2 drv=ixgbe unused=
0000:07:00.0 'I350 Gigabit Network Connection 1521' if=eno3 drv=igb unused=
0000:07:00.1 'I350 Gigabit Network Connection 1521' if=eno4 drv=igb unused= *Active*
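While you’re at it, this is a good point to confirm that the kernel command-line and module changes from earlier actually took effect. A few quick checks (these paths are standard on recent Ubuntu kernels, but treat this as a sketch for your own system):

# confirm the 1G hugepages were reserved at boot
grep -i hugepages /proc/meminfo
# confirm the isolated cores match the isolcpus= setting
cat /sys/devices/system/cpu/isolated
# confirm the vfio-pci module was loaded from the drop-in file
lsmod | grep vfio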
In order for DPDK to use a NIC you need to “unbind” it from the kernel and move it over to the DPDK drivers. You can do this by using the PCI address (first column in the output above). Wildcards are supported, so to adopt both 10Gb interfaces, I ran:
dpdk-devbind.py --bind=vfio-pci 01:00.*
Running --status again showed the NICs now listed under DPDK-compatible driver, so I was good to go!
VPP
Next up is to install VPP itself. You can download a shell script fragment from https://packagecloud.io/fdio/release that will set up an APT repository for the packages. Then, you can follow the official project documentation to install all the necessary packages. In my case, it was:
apt install vpp vpp-plugin-core vpp-plugin-dpdk python3-vpp-api vpp-dbg vpp-dev
A few quick notes about where files are kept, since the documentation isn’t super clear about this. VPP is started as a daemon using systemctl. The unit file includes some basic setup, and points the daemon to use /etc/vpp/startup.conf as its configuration file. It’s even possible to run multiple instances of VPP with different config files and then connect them together in a virtual network for testing.
Many of the tutorials for VPP involve running it in a virtual mode with kernel device drivers. This is super-simple to set up, but performance is limited. We want to use real devices via DPDK for performance.
Edit /etc/vpp/startup.conf. The Ubuntu package already has a file with many options set. The two things we need to do are set the number of workers, and adopt our interfaces for DPDK.

In the cpu section of the file, you need to set the main-core, and then either the worker cores or the number of workers. When we edited our kernel command line with grub above we reserved several cores from the Linux scheduler, so we’re going to opt to use those explicitly:
cpu {
main-core 1
corelist-workers 2-7
}
In the dpdk section we need to tell VPP which interfaces and driver to use. Assuming the default config (where most items are commented out), you can just add dev entries for your PCI devices (you can use dpdk-devbind.py --status if you don’t remember what the addresses are), and specify the vfio driver:
dpdk {
dev 0000:01:00.0
dev 0000:01:00.1
uio-driver vfio-pci
}
Finally, you need to make sure that the devices are not bound to the kernel so that VPP can adopt them. Run dpdk-devbind.py --status and confirm your devices are not listed under “kernel driver”. If they are, you should “unbind” them:
dpdk-devbind.py --unbind 01:00.*
You’re now ready to launch VPP:
systemctl restart vpp.service
Check the output of systemctl status vpp.service to make sure it is up and running, and then you’re ready to launch the vppctl shell:

vppctl

(Note that this should be possible for any member of the vpp group, but in my default install the permissions weren’t correct and I had to be root.)

That should drop you into the config shell for VPP.
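As an aside, vppctl also accepts a single command as arguments and then exits, which is handy for quick checks or scripting without entering the interactive shell:

vppctl show version
vppctl show hardware-interfaces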
VPP Configuration
Hardware configuration check
Before we do anything else, let’s make sure the interfaces are there. At the vppctl prompt, run:
show hardware-interfaces
If your dpdk config settings are correct, the interfaces should be available and shown here. They will have a different name (e.g., TenGigabitEthernet...), but they should correspond to your system. On mine it looks like the numbering follows the PCI address, so my cards at 01:00.0 and 01:00.1 show up as TenGigabitEthernet1/0/0 and TenGigabitEthernet1/0/1.
Saving configurations
I’m still very new to this, but it appears to me that there is no obvious way to save your configuration changes in the vppctl shell. The interface looks like a Cisco router (some tab completion, show ..., etc.), but there is no write memory or show running-config. Thus, if you plan to enter a lot of commands you’re probably best off saving them in a file so you can easily reload it after making changes.

If you have a text file with commands in it, you can run it from the vppctl shell with:

exec /path/to/file.txt
Commands are entered one per line. Most importantly, you must use a backslash as a line-continuation character for multi-line commands in the config file. This may sound obvious, but many of the documentation examples online do not show the backslashes, and the error messages are extremely unhelpful. It wasn’t until I dug into the source tree that I realized backslashes were necessary.
There is a comment “command” you can use to annotate your file, so a simple configuration file might look like:
comment { A simple router with two interfaces; assign addresses and bring up }
set int ip address TenGigabitEthernet1/0/0 2001:db8:dead:a::2/64
set int ip address TenGigabitEthernet1/0/1 2001:db8:dead:b::2/64
set int state TenGigabitEthernet1/0/0 up
set int state TenGigabitEthernet1/0/1 up
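If you’d like a file like this applied automatically every time the VPP service starts, the unix stanza in /etc/vpp/startup.conf takes a startup-config option pointing at an exec-style file; the path below is just an example:

unix {
  # ... existing settings from the Ubuntu package ...
  startup-config /etc/vpp/local.conf
}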
You should set up your interfaces and then try to ping them from an external device. The interface commands above automatically enable IPv6 neighbor discovery, so once run you should have a functioning interface that you can ping. You may need to set static routes in VPP and/or your external equipment to reach addresses that aren’t on the same subnet.
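For instance, a static route in VPP toward a (hypothetical) network that lives behind the router might look like this, using the router’s side of the A handoff above as the next hop:

ip route add 2001:db8:cafe::/48 via 2001:db8:dead:a::1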
Packet generation
Topology
I’m using an old Juniper layer-3 switch to connect all my equipment. I could cable my VPP box and the device under test (DUT) directly together, but as I plan to test several boxes this is easier (I can just assign interfaces to different VRFs without having to move cables). Also, the switch can show me interface statistics so I can see where I’m losing traffic (and I fully intend to lose traffic during testing).
So, while there are many potential ways to set this up, I am using a pure layer-3 setup that looks like this:
There’s a lot going on here, but the TL;DR is that our test traffic with addresses 2001:db8:feed:a:: and 2001:db8:feed:b:: will be forced through the DUT before being returned to the VPP. Here’s a breakdown:
- We assume that each piece of equipment is logically divided into an A side and a B side. Traffic flows from the A side of the VPP into the A side of the DUT, flows through the DUT, and then leaves the B side of the DUT and returns to the B side of the VPP. If there is bidirectional traffic, it follows the reverse path (sourced from the B side, flowing through the DUT, returning to the A side).
- To help with consistency, everything on the A side has A in the name, and the B side has B in the name. Note that the v6 addresses have a or b in the network portion of the address and the VRFs are named A and B.
- The router is connected to each piece of equipment via a dedicated L3 link, shown between boxes. The local interface names are shown on each end (using VPP, Juniper, and OpenBSD conventions). The addresses of these links aren’t important for the test traffic, but are necessary to route the traffic through the boxes. The v6 addresses have d05 (aka “DoS”, meaning the packet generator) in the network portion when connected to the VPP, and 7e57 (aka “test”) when connected to the DUT.
- The router always gets the 1 address on the link, and the other device (VPP or DUT) gets 2, as shown below the local interface name. Combined with all the other network info, the complete v6 address tells you which piece of equipment, which handoff, and which “side” it’s connected to.
- The test traffic uses a range outside of any of the handoff networks, identified by feed in the network portion of the address. As this is used for the source and destination of the test traffic, all three devices must have routes to ensure the traffic flows through the devices (instead of taking a shortcut):
  - The VPP doesn’t have a traditional route table entry for the test traffic; it has a default L2 next hop that you’ll see below (i.e., all traffic is dumped on a specific interface). It does contain a discard route for the test traffic so that when it is returned (after having passed through the DUT), it does not leave the box again.
  - The DUT needs to have two routes for the A and B test traffic pointing to the correct router interface so that the traffic flows in the correct direction as it leaves the device.
  - The router needs to have two VRFs so that traffic can be forced through the DUT instead of being directly routed back to the VPP. Each VRF has two routes to send the A and B traffic to the correct device. Which device depends on the VRF: the A VRF sends A traffic to the VPP and B traffic to the DUT, while the B VRF does the opposite. Without the two VRFs the router would simply return the VPP traffic back to itself rather than sending it through the DUT.
IP Configuration
VPP
We set up the handoff network for each interface, as well as discard routes for the test traffic:
comment { A side interface - handoff to router }
set int ip address TenGigabitEthernet1/0/0 2001:db8:d05:a::2/64
set int state TenGigabitEthernet1/0/0 up
comment { B side interface - handoff to router }
set int ip address TenGigabitEthernet1/0/1 2001:db8:d05:b::2/64
set int state TenGigabitEthernet1/0/1 up
comment { Discard routes for test traffic }
ip route add 2001:db8:feed:a::/64 via drop
ip route add 2001:db8:feed:b::/64 via drop
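After reloading the file, you can confirm the addresses and discard routes took effect from the vppctl shell (abbreviations work in the VPP CLI):

show int addr
show ip6 fib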
We’ll cover the packet generation config later.
DUT
Here you’ll need to set up both the handoff networks as well as a static route to send the A and B traffic to the correct router interface. The exact configuration will depend on the device/system (e.g., OpenBSD, Linux, VPP, etc).
OpenBSD
OpenBSD uses per-interface config files, so the settings appear in the hostname files shown below (with comments):
For the A side:
# /etc/hostname.ix0
# A side interface - handoff to router
inet6 2001:db8:7e57:a::2 64
# Static route for test traffic destined for A
!route add 2001:db8:feed:a::/64 2001:db8:7e57:a::1
up
For the B side:
# /etc/hostname.ix1
# B side interface - handoff to router
inet6 2001:db8:7e57:b::2 64
# Static route for test traffic destined for B
!route add 2001:db8:feed:b::/64 2001:db8:7e57:b::1
up
Router
The router is the most complicated, as it needs two separate routing tables, usually accomplished with separate VRFs (though terminology/technique may vary by system). Here is what the relevant portions of our Juniper config look like (with some comments to help):
# Set up physical interfaces with L3 handoff IPv6 addresses
interfaces {
    xe-0/0/4 {
        description "Handoff VPP A ixgbe-mdio-0000:01:00.0";
        unit 0 {
            family inet6 {
                address 2001:db8:d05:a::1/64;
            }
        }
    }
    xe-0/0/5 {
        description "Handoff VPP B ixgbe-mdio-0000:01:00.1";
        unit 0 {
            family inet6 {
                address 2001:db8:d05:b::1/64;
            }
        }
    }
    xe-0/0/8 {
        description "Handoff DUT A obsd ix0";
        unit 0 {
            family inet6 {
                address 2001:db8:7e57:a::1/64;
            }
        }
    }
    xe-0/0/9 {
        description "Handoff DUT B obsd ix1";
        unit 0 {
            family inet6 {
                address 2001:db8:7e57:b::1/64;
            }
        }
    }
}
# Establish VRFs with separate routing tables
routing-instances {
    VRF-A {
        instance-type virtual-router;
        interface xe-0/0/4.0; # VPP A handoff
        interface xe-0/0/8.0; # DUT A handoff
        routing-options {
            # static routes just for this VRF
            rib VRF-A.inet6.0 {
                static {
                    # Traffic to A goes to the VPP
                    route 2001:db8:feed:a::/64 next-hop 2001:db8:d05:a::2;
                    # Traffic to B goes to the DUT
                    route 2001:db8:feed:b::/64 next-hop 2001:db8:7e57:a::2;
                }
            }
        }
    }
    VRF-B {
        instance-type virtual-router;
        interface xe-0/0/5.0; # VPP B handoff
        interface xe-0/0/9.0; # DUT B handoff
        routing-options {
            # static routes just for this VRF
            rib VRF-B.inet6.0 {
                static {
                    # Traffic to B goes to the VPP
                    route 2001:db8:feed:b::/64 next-hop 2001:db8:d05:b::2;
                    # Traffic to A goes to the DUT
                    route 2001:db8:feed:a::/64 next-hop 2001:db8:7e57:b::2;
                }
            }
        }
    }
}
Verification
Once everything is all set up, make sure that you can ping from each end of the handoff addresses, and that the route tables look correct for the test traffic (ping may not work as there is no box actually answering for the test addresses).
Now is also a good time to make sure your DUT is configured to forward packets. OpenBSD and Linux do not necessarily forward packets between interfaces, so you may need to enable forwarding with sysctl, possibly in combination with a filtering engine like pf or nftables.
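For reference, here is roughly what that looks like on my two candidate platforms (the file names used to persist the setting are up to you):

# OpenBSD: enable IPv6 forwarding now, and persist it across reboots
sysctl net.inet6.ip6.forwarding=1
echo 'net.inet6.ip6.forwarding=1' >> /etc/sysctl.conf

# Linux: enable IPv6 forwarding now, and persist it across reboots
sysctl -w net.ipv6.conf.all.forwarding=1
echo 'net.ipv6.conf.all.forwarding=1' > /etc/sysctl.d/99-forwarding.conf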
Packet Generator Configuration
With all the network plumbing set up, the last thing to do is configure the packet generator. We’ll start with a simple setup that we can use to test if packets are flowing correctly, then move to a full line-rate example.
Simple Test
Assuming you already have a file that contains the interface configuration and handoff network definitions, add the following to the bottom of it. Note: you must replace the MAC addresses on the IP6 line with those of the VPP and router interfaces. The sender (dead.beef.f00d) should be replaced with the VPP MAC and the receiver (cafe.face.feed) should be that of the router (next hop):
packet-generator new { \
name test \
worker 0 \
limit 0 \
rate 1.0e2 \
size 64-64 \
tx-interface TenGigabitEthernet1/0/0 \
node TenGigabitEthernet1/0/0-output \
data { IP6: dead.beef.f00d -> cafe.face.feed \
UDP: 2001:db8:feed:a::1 -> 2001:db8:feed:b::1 \
UDP: 34567 -> 34890 \
incrementing 0 \
} \
}
Reload your configuration and confirm that there are no errors. The config creates a packet generator, but it does not begin sending packets on the wire until you issue this command in the vppctl shell (or config file):
packet-generator enable-stream
Once you issue that command, you should see a stream of 100 packets per second (corresponding to rate 1.0e2 in the config) from the listed interface coming into your router. If you check your router interface counters (e.g., monitor interface traffic on Juniper), you should see the packet counters going up at roughly 100 packets per second on each of the four interfaces that are involved. Each pair of interfaces (VPP and DUT) should have a pairing of “input” and “output” counters incrementing:
Interface   Link   Input packets (pps)   Output packets (pps)
xe-0/0/4    Up     14900 (100)           6 (0)
xe-0/0/5    Up     0 (0)                 14906 (100)
xe-0/0/8    Up     2 (0)                 14807 (100)
xe-0/0/9    Up     14804 (100)           10 (0)
If that all looks good, then the packet path is correct and you’re ready to ramp things up!
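Two other commands are worth knowing while you experiment: you can stop the stream from the vppctl shell, and you can inspect the generator and interface counters from the VPP side as well (command names as I understand them in current releases):

packet-generator disable-stream
show packet-generator
show interface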
Line Rate
With the path validated (and I do recommend you validate well, lest you flood your network with garbage), we’re ready to step things up.
The packet-generator stanza has several configurable items. The most obvious knob is rate, which determines how quickly packets are sent on the interface. On a 10Gbps interface with 60-byte packets, the theoretical transmit rate is about 14.88 million pps. However, we’re using IPv6 with UDP, so the minimum packet size is actually 62 bytes (14B ethernet + 40B IPv6 + 8B UDP). Once you tack on the ethernet framing overhead (8B preamble + 4B CRC + 12B inter-frame gap) that bloats up to 86 bytes (688 bits) on the wire. So our theoretical throughput is 10.0e9 / 688, or 14,534,883 pps. Thus, you can try cranking the rate up to 14.53e6 and reloading the configuration.
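If you want to sanity-check that arithmetic, or redo it for a different frame size, it’s just wire bytes per packet divided into 10Gbps; a quick shell version:

# 86 bytes on the wire per minimum IPv6+UDP packet at 10Gbps
echo $((10000000000 / (86 * 8)))     # -> 14534883 pps
# classic 64-byte-frame figure (84 bytes on the wire) for comparison
echo $((10000000000 / (84 * 8)))     # -> 14880952 pps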
According to my router, I’m within a few hundred pps (99.9978%) of line rate with this config!
Interface   Link   Input packets (pps)      Output packets (pps)
xe-0/0/4    Up     10077664468 (14534577)   197 (0)
Multiple Workers
You may want to generate “bidirectional” traffic (I put that in quotes because it’s stateless, so there is no request-response in the traffic).
Fortunately, you can spawn multiple worker threads in VPP, and each one can generate packets independently. Just create a second packet generator on a different worker thread.
Here I have two workers, attached to two interfaces, generating traffic in two directions (if you copy these, don’t forget to change the MAC of the IP6 handoff):
packet-generator new { \
name A-to-B-01 \
worker 0 \
limit 0 \
rate 14.53e6 \
size 62-62 \
tx-interface TenGigabitEthernet1/0/0 \
node TenGigabitEthernet1/0/0-output \
data { IP6: dead.beef.f00d -> cafe.face.feed \
UDP: 2001:db8:feed:a::1 -> 2001:db8:feed:b::1 \
UDP: 34567 -> 34890 \
incrementing 0 \
} \
}
packet-generator new { \
name B-to-A-01 \
worker 1 \
limit 0 \
rate 14.53e6 \
size 62-62 \
tx-interface TenGigabitEthernet1/0/1 \
node TenGigabitEthernet1/0/1-output \
data { IP6: dead.beef.f00d -> cafe.face.feed \
UDP: 2001:db8:feed:b::1 -> 2001:db8:feed:a::1 \
UDP: 34890 -> 34567 \
incrementing 0 \
} \
}
I’m now generating/sinking 10Gbps of traffic in each direction (in marketing-speak, this would be “40Gbps”, since it’s 10 in both directions on two interfaces). Well, almost 10Gbps. It looks like my hardware isn’t quite enough to handle full line rate in both directions, even with two workers.
Fortunately our next section has a solution…
Distributed Load
Each worker is assigned to its own core, so you can keep scaling up your workers so long as you have cores to run them on. You can generate multiple streams on the same device with different workers, allowing you to get more throughput (or a mix of packet sizes, etc).
Note that if you’re using multiple threads in the same direction you should set each worker to a proportional amount of the desired target rate (so they add up to the desired rate).
Here’s a working config that my machine uses to get bidirectional line-rate traffic. Note that each worker uses a different source/destination address pair just to help me keep the traffic straight:
packet-generator new { \
name A-to-B-01 \
worker 0 \
limit 0 \
rate 7.3e6 \
size 62-62 \
tx-interface TenGigabitEthernet1/0/0 \
node TenGigabitEthernet1/0/0-output \
data { IP6: dead.beef.f00d -> cafe.face.feed \
UDP: 2001:db8:feed:a::1 -> 2001:db8:feed:b::1 \
UDP: 34567 -> 34890 \
incrementing 0 \
} \
}
packet-generator new { \
name A-to-B-02 \
worker 1 \
limit 0 \
rate 7.3e6 \
size 62-62 \
tx-interface TenGigabitEthernet1/0/0 \
node TenGigabitEthernet1/0/0-output \
data { IP6: dead.beef.f00d -> cafe.face.feed \
UDP: 2001:db8:feed:a::2 -> 2001:db8:feed:b::2 \
UDP: 34567 -> 34890 \
incrementing 0 \
} \
}
packet-generator new { \
name B-to-A-01 \
worker 2 \
limit 0 \
rate 7.3e6 \
size 62-62 \
tx-interface TenGigabitEthernet1/0/1 \
node TenGigabitEthernet1/0/1-output \
data { IP6: dead.beef.f00d -> cafe.face.feed \
UDP: 2001:db8:feed:b::1 -> 2001:db8:feed:a::1 \
UDP: 34890 -> 34567 \
incrementing 0 \
} \
}
packet-generator new { \
name B-to-A-02 \
worker 3 \
limit 0 \
rate 7.3e6 \
size 62-62 \
tx-interface TenGigabitEthernet1/0/1 \
node TenGigabitEthernet1/0/1-output \
data { IP6: dead.beef.f00d -> cafe.face.feed \
UDP: 2001:db8:feed:b::2 -> 2001:db8:feed:a::2 \
UDP: 34890 -> 34567 \
incrementing 0 \
} \
}
Under this setup I’m able to sustain 99% of line rate, which should be enough for my testing purposes. My guess is that the 1% loss is contention for the device between the threads as it didn’t improve even if I increased the packet generation rate.
Interface   Link   Input packets (pps)      Output packets (pps)
xe-0/0/4    Up     31761851079 (14397365)   10337719603 (14399756)
xe-0/0/5    Up     17756153321 (14399748)   23590769625 (14397361)
(Note that the output above only shows two interfaces. I performed this test using a simplified reflector setup to ensure that no other devices were dropping traffic.)
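If you suspect the generator threads themselves (rather than the NIC) are the bottleneck, show runtime in the vppctl shell breaks statistics out per worker thread, and show threads confirms the core assignments; my understanding is that average vector sizes up near 256 indicate a thread running flat out:

show threads
show runtime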
Varying Traffic
The final knob to tweak is to vary the traffic being sent on the wire, using different addresses and ports. The VPP documentation was severely lacking here, though the syntax is straightforward. You can simply add a hyphen to the port options and it will automatically be used as a range:
data { IP6: dead.beef.f00d -> cafe.face.feed \
UDP: 2001:db8:feed:a::1 -> 2001:db8:feed:b::1 \
UDP: 1000-1100 -> 2000-2100 \
incrementing 0 \
} \
Be aware that each ranged item appears to be incremented every packet. Thus, the sequence of port numbers from the definition above would be:
- 1000 -> 2000
- 1001 -> 2001
- 1002 -> 2002
- …
- 1099 -> 2099
- 1100 -> 2100
- 1000 -> 2000
If you want to generate a lot of “flows” with different tuple values, my best guess is to use port ranges that are coprime so that they don’t “line up” when they increment. Something like this should prevent packets from having the same flow data very often:
data { IP6: dead.beef.f00d -> cafe.face.feed \
UDP: 2001:db8:feed:a::1 -> 2001:db8:feed:b::1 \
UDP: 1025-65535 -> 1027-65535 \
incrementing 0 \
} \
The port range sizes are 64511 and 64509, which are coprime, so their combinations will only repeat after 4,161,540,099 packets. At line rate, that’s every 286 seconds, meaning that each “flow” will see a packet about once every 5 minutes. This is great if you’re trying to exhaust states in a DUT, and you can tweak the ranges to get a healthier mix of repeated flows if you’re trying to dial down the torture a little bit.
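If you want to double-check a pair of ranges before committing to them, a quick gcd/lcm check confirms they’re coprime and how many packets the pattern takes to repeat (here using the range sizes from the example above):

python3 -c 'import math; a, b = 64511, 64509; g = math.gcd(a, b); print(g, a * b // g)'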
Unfortunately, while there are examples of using ranges with addresses, I wasn’t able to get it to work for IPv6. IPv4 address ranges (e.g., 192.168.0.1 - 192.168.0.255) work and generate sequential packets, but IPv6 ranges produce packets with an all-zeros address (no errors are thrown), making me think there’s a parse error somewhere. I couldn’t find any open bugs, so for now I’m going to chalk this up to a lack of IPv6 parity.
Still, with port ranges I can get enough uniqueness to torture my stateful equipment. Stay tuned for another article where I generate some benchmarks!
Epilogue: Simplified Test VRF (Reflect)
The setup described above involves three devices: VPP, router, and DUT. In some cases, you may want to have only the VPP and the router, for example:

- You don’t have a DUT yet
- You want to verify that you can achieve line-rate performance when the DUT is not in the packet path
- You are having problems with the DUT and need to verify your setup
You can configure the router to simply hairpin packets from VPP back to itself, completing the circuit without passing traffic through the DUT. This is a much simpler setup as two routing tables are no longer needed to force the traffic through another device. Thus, you can replace the dual VRFs described above with a single VRF as shown below.
The router interface and handoff addresses do not change. Additionally, the VPP configuration does not change from above. You need to deactivate VRF-A and VRF-B and replace them with this single VRF that includes both the VPP interfaces:
routing-instances {
    # You must deactivate "VRF-A" and "VRF-B" from the previous config
    # before enabling this one or the interface assignments will clash
    VRF-Reflect {
        instance-type virtual-router;
        interface xe-0/0/4.0; # VPP A handoff
        interface xe-0/0/5.0; # VPP B handoff
        routing-options {
            # static routes just for this VRF, forward all traffic back to VPP
            rib VRF-Reflect.inet6.0 {
                static {
                    route 2001:db8:feed:a::/64 next-hop 2001:db8:d05:a::2;
                    route 2001:db8:feed:b::/64 next-hop 2001:db8:d05:b::2;
                }
            }
        }
    }
}
With this configuration in place, once you start the packet generation on the VPP you should see the interface counters between the VPP and router begin to increment (both input and output should go up on both interfaces unless you are only testing unidirectional traffic):
Interface   Link   Input packets (pps)     Output packets (pps)
xe-0/0/4    Up     2645167941034 (100)     21472412255 (0)
xe-0/0/5    Up     22220393605 (0)         2641229762834 (100)