6WIND Virtual Accelerator Demo

Contents

1   Performance Testing
2   Add F5 BIG-IP LTM VE Image
3   F5 BIG-IP LTM VE Use Case
3.1 Configuring F5 BIG-IP LTM VE
4   Usage
5   Results

1   Performance Testing

6WIND's Virtual Accelerator product dramatically boosts the network performance of Virtual Network Functions (VNFs). In this simple test, the F5® BIG-IP® Local Traffic Manager™ Virtual Edition (F5 BIG-IP LTM® VE) VNF performance is increased by 5x. The operator downloads the F5 BIG-IP LTM VE image from the F5 download website, installs it on two hypervisors (hosts) and configures both instances identically. The difference between the two F5 BIG-IP LTM VE VNFs is that one is hosted on a hypervisor running Linux KVM with Virtual Accelerator, while the other is hosted on a hypervisor running Linux KVM only. In this demonstration the network bandwidth measurement tool iperf3 is used to show the performance boost. iperf3 is a widely used tool that transmits and receives TCP or UDP data streams and measures the net throughput of a network device or system.

2   Add F5 BIG-IP LTM VE Image

The software can be downloaded at:

https://downloads.f5.com/esd/eula.sv?sw=BIG-IP&pro=big-ip_v11.x&ver=11.6.0&container=Virtual-Edition&path=&file=&B1=I+Accept

Register for an account (if you do not have one already). Once you receive the welcome email, follow the instructions. If applicable, on the welcome page select "Download" from the left-side navigation pane. Select "Find a Download," then select "BIG-IP v12.x/Virtual Edition" (the latest version is recommended). Select "Virtual Edition" from the table, then select the "qcow2" version for KVM, BIGIP-12.0.0.0.0.606.ALL.qcow2.zip.

To add the image to the Glance image pool:

glance image-create --name 'F5 BigIP 12.0' --disk-format qcow2 --container-format bare \
--file BIGIP-12.0.0.0.0.606.ALL.qcow2 --progress
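Once the upload completes, you can confirm that the image is available:

glance image-list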

If you see the message,

localhost emerg logger: Re-starting chmand

edit the /PLATFORM file to add:

platform=Z100
family=0xC0000000
host=Z100
systype=0x71

(source: https://devcentral.f5.com/questions/big-ip-ltm-ve-on-kvm)
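If you prefer to patch the image before uploading it to Glance, the libguestfs tools offer one option. This is a minimal sketch, assuming libguestfs-tools is installed and that guest inspection finds the filesystem holding /PLATFORM:

# Open /PLATFORM from the qcow2 image in your $EDITOR without booting the VM
virt-edit -a BIGIP-12.0.0.0.0.606.ALL.qcow2 /PLATFORM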

3   F5 BIG-IP LTM VE Use Case

In this test the F5 BIG-IP LTM VE will forward packets from VM1 to VM2.

img_F5/F5netopo.png

The OpenStack steps for creating the networks and VMs, and for associating floating IP addresses, are not detailed here. The net_management subnet (3.0.0.0/24) and the public subnet (10.168.215.0/24) are connected via a router instance. Without the router instance, floating IP addresses cannot be assigned to the F5 BIG-IP LTM VE interface in the net_management subnet. Please see the 'Advantech Remote Evaluation Service Portal Tutorial' for help on creating VMs and associating floating IP addresses.

Once all the networks are created and the VMs spawned, associate a floating IP address with the F5 BIG-IP LTM VE VM.
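With the same legacy OpenStack clients used for the image upload, this can be done from the command line. A minimal sketch, where bigip-ve is a hypothetical VM name and 10.168.215.50 is a placeholder address allocated from the public pool:

# Allocate a floating IP from the public pool, then attach it to the VM
nova floating-ip-create public
nova floating-ip-associate bigip-ve 10.168.215.50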

3.1   Configuring F5 BIG-IP LTM VE

Access the web interface of the F5 BIG-IP LTM VE VM using <Portal_url:Port>, where <Portal_url:Port> is the portal URL and port associated with the floating IP address of the F5 BIG-IP LTM VE. These values are listed in the "Virtual Machines with external access" table on the login splash page.

The credentials to connect to the web interface are admin/admin.

Note: the credentials to connect to the F5 BIG-IP LTM VE VM via the console are root/default.

Create 2 VLANs in Network/VLANs:

For each VLAN in Network/VLANs, select "Create...". Fill in the Name field, add one interface, and set the MTU to 1400.

Note: if the MTU size is not set, the demo will fail.

img_F5/BIG-IP_vlan-list.png img_F5/BIG-IP_vlan-net1.png
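The same VLANs can also be created from the console with tmsh. A sketch, assuming the two data interfaces of the VM appear as 1.1 and 1.2:

tmsh create net vlan net1 interfaces add { 1.1 } mtu 1400
tmsh create net vlan net2 interfaces add { 1.2 } mtu 1400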

Then associate the IP addresses with each interface in the Network/Self IPs tab: fill in the Name field, add the IP address of the interface and the netmask, and select the VLAN associated with the interface.

Select None in "Traffic Group".

img_F5/BIG-IP_self-ip.png img_F5/BIG-IP_self-ip-net1.png
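Alternatively, the Self IPs can be created with tmsh, matching the addresses used in this demo:

tmsh create net self net1 address 1.0.0.4/24 vlan net1 allow-service all
tmsh create net self net2 address 2.0.0.4/24 vlan net2 allow-service all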

You can check the configuration from the command line using the tmsh command:

[root@host-3-0-0-2:Active:Standalone] config #  tmsh list net self
net self net2 {
    address 2.0.0.4/24
    allow-service all
    traffic-group none
    vlan net2
}
net self net1 {
    address 1.0.0.4/24
    allow-service all
    traffic-group none
    vlan net1
}

Following "Configuring a Simple Intranet" from F5 BIG-IP ® Local Traffic Manager: Implementations 11.1 document, create a pool list and a virtual server.

Create a pool list:

In Local Traffic/Pools, select "Create...".

Fill in the Name field.

Add VM2 under New Members: fill in the name, IP address and service port, then click Add.

img_F5/F5_pool-creation.png
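For reference, an equivalent pool can be created from tmsh. A sketch using the member shown in the listing further below:

tmsh create ltm pool Test_VA members add { 2.0.0.2:any }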

Now create a Virtual Server in Local Traffic/Virtual Servers:

Select "Performance (Layer 4)" as the Type and the pool you just created as the Default Pool.

Choose a name, an IP address and the service port.

The IP address can be any address. The F5 BIG-IP LTM VE will redirect traffic sent to this address to the nodes in the pool you selected.

You can select "Auto Map" in Source Address Translation.

img_F5/F5_virtualServer-creation.png

Again, you can verify the configuration with tmsh:
[root@host-3-0-0-2:Active:Standalone] config # tmsh list ltm pool
ltm pool Test_VA {
    members {
        2.0.0.2:any {
            address 2.0.0.2
        }
    }
}
[root@host-3-0-0-2:Active:Standalone] config # tmsh list ltm virtual
ltm virtual Test_VA {
    destination 1.0.0.6:any
    mask 255.255.255.255
    profiles {
        fastL4 { }
    }
    source 0.0.0.0/0
    translate-address disabled
    translate-port disabled
    vs-index 8
}
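For reference, a similar virtual server could be created in a single tmsh command. A sketch based on the listing above; the fastL4 profile and pool attachment are taken from the GUI steps:

tmsh create ltm virtual Test_VA destination 1.0.0.6:any mask 255.255.255.255 profiles add { fastL4 } pool Test_VA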

At the F5 BIG-IP LTM VE console, check that TSO is enabled:

tmsh list sys db tm.tcpsegmentationoffload

Check that LRO is enabled:

tmsh list sys db tm.tcplargereceiveoffload

For example, to enable LRO:

tmsh modify sys db tm.tcplargereceiveoffload value enable
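If the offload is enabled, the listing should resemble the following (exact formatting may vary between versions):

sys db tm.tcplargereceiveoffload {
    value "enable"
}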

4   Usage

On VM2, launch the iperf3 server:

iperf3 -s

Then, on VM1, launch the iperf3 client using the IP address configured on the virtual server:

iperf3 -c 1.0.0.6
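To reproduce the 20-flow measurement shown in the results below, add the -P option:

iperf3 -c 1.0.0.6 -P 20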

5   Results

The flavor used for the tests has 2 vCPUs and 4 GB RAM.

iperf3 was used to run the performance tests:

                     with VA        without VA
one flow             3.4 Gbits/s    0.9 Gbits/s
20 flows (-P 20)     5.2 Gbits/s    0.9 Gbits/s