====== 3 DPDK Version Configuration ======
{{indexmenu_n>3}}

This page describes the configuration of the DPDK-based version of Stingray SG ([[https://www.dpdk.org/|DPDK]], the Data Plane Development Kit).

==== System Preparation ====
The first step in working with DPDK is to take the network cards out of the control of the operating system. DPDK works with PCI devices, which can be listed with the command:
<code>
> lspci | grep Eth
41:00.0 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
41:00.1 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
c6:00.0 Ethernet controller: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller (rev 01)
c6:00.1 Ethernet controller: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller (rev 01)
>
</code>
This command lists all PCI Ethernet devices. Each line starts with the system PCI device identifier; this PCI identifier uniquely identifies the network card in DPDK.

Transferring a card to DPDK mode (disconnecting it from the system network driver) is done with the dpdk-devbind.py utility supplied with DPDK:

<code>
# Example - transferring devices 41:00.0 and 41:00.1 to DPDK mode

> insmod $RTE/kmod/igb_uio.ko

# 25G NICs
> $RTE/usertools/dpdk-devbind.py --bind=igb_uio 41:00.0
> $RTE/usertools/dpdk-devbind.py --bind=igb_uio 41:00.1
</code>

here, igb_uio is the UIO kernel module supplied with DPDK that gives user-space applications direct access to the NIC.

To check whether a card is properly initialized to work with DPDK, use the command:
<code>
> $RTE/usertools/dpdk-devbind.py --status
</code>
If the cards are in DPDK mode, you will see them in the ''Network devices using DPDK-compatible driver'' section of the output:
<code>
> $RTE/usertools/dpdk-devbind.py --status

Network devices using DPDK-compatible driver
============================================
0000:41:00.0 'Ethernet Controller XXV710 for 25GbE SFP28' drv=igb_uio unused=
0000:41:00.1 'Ethernet Controller XXV710 for 25GbE SFP28' drv=igb_uio unused=
....
</code>
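When several cards are installed, it can be convenient to pull just the PCI addresses out of this status report. A small sketch (illustrative only, not part of Stingray SG or DPDK; here the status output is fed from a saved string so the snippet is self-contained):

```shell
#!/bin/bash
# Illustrative: print only the PCI addresses listed in the
# "Network devices using DPDK-compatible driver" section of
# dpdk-devbind.py --status output (saved here as a string).
status='Network devices using DPDK-compatible driver
============================================
0000:41:00.0 drv=igb_uio unused=
0000:41:00.1 drv=igb_uio unused=

Network devices using kernel driver
===================================
0000:c6:00.0 drv=bnxt_en unused=igb_uio'

# f=1 inside the DPDK section; an empty line ends the section;
# device lines start with a digit.
echo "$status" | awk '/DPDK-compatible/ {f=1; next} /^$/ {f=0} f && /^[0-9]/ {print $1}'
```

In live use, replace the saved string with the actual command output: ''$RTE/usertools/dpdk-devbind.py --status | awk …''.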

You also have to reserve huge pages:
<code>
#!/bin/bash

# Reserve 4 1G-pages - 4 GB in total:
HUGEPAGES_NUM=4
HUGEPAGES_PATH=/dev/hugepages
sync && echo 3 > /proc/sys/vm/drop_caches
echo $HUGEPAGES_NUM > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
HUGEPAGES_AVAIL=$(grep HugePages_Total /proc/meminfo | awk '{print $2}')
if [ $HUGEPAGES_AVAIL -ne $HUGEPAGES_NUM ]; then
    printf "Failed to reserve huge pages: requested %d, available %d\n" $HUGEPAGES_NUM $HUGEPAGES_AVAIL
fi
</code>
Usually 2-4 GB of huge pages is enough for the normal functioning of Stingray SG. If it is not enough, Stingray SG will log a critical error in fastdpi_alert.log and will not start. All the memory required by Stingray SG is allocated from huge pages at startup, so if the SSG has started with the current settings, it will not request additional huge-page memory later. In case of startup errors caused by a shortage of huge pages, increase the number of reserved pages in the script above and start Stingray SG again.
<note important>
All these actions - transferring cards into DPDK mode and reserving huge pages - must be performed at OS startup.
</note>
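The result of the reservation can be checked at any time via /proc/meminfo; for example:

```shell
# Show how many huge pages are reserved and free, and the default
# huge page size (HugePages_Total should match the reserved number):
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
```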

==== Stingray SG Configuration ====
When the system is configured to work with DPDK, you can start configuring the Stingray SG.
The interfaces are configured in «in»-«out» pairs (for convenience, such a pair is further called a bridge):
<code>
# In - port 41:00.0
in_dev=41-00.0
# Out - port 41:00.1
out_dev=41-00.1
</code>
This configuration sets up a single bridge 41-00.0 ←→ 41-00.1 \\
You can specify a group of interfaces using ':' as a separator:
<code>
in_dev=41-00.0:01-00.0:05-00.0
out_dev=41-00.1:01-00.1:05-00.1
</code>
This group forms the following pairs (bridges): \\
41-00.0 ←→ 41-00.1 \\
01-00.0 ←→ 01-00.1 \\
05-00.0 ←→ 05-00.1 \\
Each pair must consist of devices of the same speed; it is unacceptable to pair a 10G card with a 40G card. However, a group may contain pairs of different speeds, for example, one 10G pair and one 40G pair.
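Note the naming convention used in these examples: in fastdpi.conf, the ':' of the PCI address printed by lspci is replaced with '-'. A trivial helper (illustrative only, not part of Stingray SG) performing this conversion:

```shell
#!/bin/bash
# Convert a PCI address as printed by lspci (e.g. 41:00.0)
# to the device name format used in fastdpi.conf (41-00.0).
pci_to_dev() { echo "${1//:/-}"; }

pci_to_dev "41:00.0"   # prints 41-00.0
```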

==== Clusters ====
The DPDK version of Stingray SG supports clustering: you can specify which interfaces belong to each cluster. Clusters are separated with the '|' character:
<code>
in_dev=41-00.0|01-00.0:05-00.0
out_dev=41-00.1|01-00.1:05-00.1
</code>
This example creates two clusters:
  * a cluster with the bridge 41-00.0 ←→ 41-00.1
  * a cluster with the bridges 01-00.0 ←→ 01-00.1 and 05-00.0 ←→ 05-00.1
Clusters are a legacy of the pf_ring version of Stingray SG, where each cluster processes its traffic in isolation from the others.

In DPDK, clusters are also isolated from each other, but unlike pf_ring, here a cluster is rather a logical concept inherited from pf_ring. DPDK is much more flexible than pf_ring and allows you to build complex multi-bridge configurations with many dispatchers without using clusters. In fact, the only thing clusters add in the DPDK version is a separate set of dispatcher threads for each cluster.
<note tip>Tip: instead of using clusters, consider switching to a different ''dpdk_engine'' mode, see below.</note>
The following descriptions of configurations assume that there is only one cluster (no clustering).

==== Number of Cores (Threads) ====
CPU cores are perhaps the most critical resource for the Stingray SG: the more physical cores there are in the system, the more traffic the SSG can process.
Stingray SG needs the following threads to operate:
  * processing threads - process incoming packets and write them to the TX queues of the cards;
  * dispatcher threads - read the cards' RX queues and distribute incoming packets among the processing threads;
  * service threads - perform deferred (time-consuming) actions, receive and process fdpi_ctrl and CLI requests, maintain the connection with the PCRF, send netflow;
  * a system core - dedicated to the operating system.
Processing and dispatcher threads cannot share a core. At start, Stingray SG binds threads to cores.
By default, Stingray SG selects the number of processing threads depending on the interface speed: \\
10G - 4 threads \\
25G - 8 threads \\
40G, 50G, 56G - 16 threads \\
100G - 32 threads \\
For a group, the number of threads is the sum of the thread counts for each pair; e.g., for the cards:
<code>
# 41-00.x - 25G NIC
# 01-00.x - 10G NIC
in_dev=41-00.0:01-00.0
out_dev=41-00.1:01-00.1
</code>
12 processing threads will be created (8 for the 25G card and 4 for the 10G card).
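The defaults above can be expressed as a small lookup; the following sketch is illustrative only (the function name is ours, not part of Stingray SG) and just restates the speed table:

```shell
#!/bin/bash
# Illustrative: default number of processing threads per interface pair,
# by link speed in Gbit/s, following the table above.
threads_for_speed() {
    case "$1" in
        10)        echo 4 ;;
        25)        echo 8 ;;
        40|50|56)  echo 16 ;;
        100)       echo 32 ;;
        *)         echo "unknown speed: $1" >&2; return 1 ;;
    esac
}

# A group of one 25G pair and one 10G pair:
echo $(( $(threads_for_speed 25) + $(threads_for_speed 10) ))   # prints 12
```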

In fastdpi.conf, the number of processing threads per bridge can be set explicitly with the ''num_threads'' parameter:
<code>
# 41-00.x - 25G NIC
# 01-00.x - 10G NIC
in_dev=41-00.0:01-00.0
out_dev=41-00.1:01-00.1

num_threads=4
</code>

This configuration will create 8 processing threads (num_threads=4 * 2 bridges).

In addition to the processing threads, Stingray SG also needs at least one dispatcher thread (and therefore at least one more core) that reads the RX queues of the interfaces.

The internal architecture of working with one dispatcher differs strikingly from that with many, therefore Stingray provides several engines, selected by the ''dpdk_engine'' parameter in fastdpi.conf:
  * ''dpdk_engine=0'' - the default engine: a single dispatcher thread serving all interfaces;
  * ''dpdk_engine=1'' - an engine with a dispatcher thread per direction (in and out);
  * ''dpdk_engine=2'' and ''dpdk_engine=3'' - engines with several dispatcher threads, intended for higher loads.

All these engines, their configuration features and areas of application, are described in detail below.
=== Explicit Binding to Cores ===
You can explicitly bind threads to cores in fastdpi.conf with the following parameters:
  * ''engine_bind_cores'' - the list of cores for the processing threads;
  * ''rx_bind_core'' - the list of cores for the dispatcher threads.

The format for specifying these lists is the same:
<code>
# 10G cards - 4 processing threads, 1 dispatcher per cluster
in_dev=01-00.0|02-00.0
out_dev=01-00.1|02-00.1

# Bind processing threads of cluster #1 to cores 2-5, its dispatcher to core 1;
# for cluster #2 - to cores 7-10, its dispatcher to core 6
engine_bind_cores=2:3:4:5|7:8:9:10
rx_bind_core=1|6
</code>
Without clustering:
<code>
# 10G cards - 4 processing threads per card
in_dev=01-00.0:02-00.0
out_dev=01-00.1:02-00.1
# 2 dispatchers (one per direction)
dpdk_engine=1

# Bind processing threads and dispatcher threads
engine_bind_cores=3:4:5:6:7:8:9:10
rx_bind_core=1:2
</code>
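Since overlapping core lists prevent startup, it can be handy to check a configuration before applying it. A sketch (illustrative only, not part of Stingray SG; the core lists are taken from the example above):

```shell
#!/bin/bash
# Illustrative check: make sure engine_bind_cores and rx_bind_core
# do not share any core.
engine_bind_cores="3:4:5:6:7:8:9:10"
rx_bind_core="1:2"

# comm -12 prints lines common to both (sorted) lists
overlap=$(comm -12 \
    <(echo "$engine_bind_cores" | tr ':' '\n' | sort) \
    <(echo "$rx_bind_core" | tr ':' '\n' | sort))

if [ -n "$overlap" ]; then
    echo "cores used twice: $overlap"
else
    echo "no overlap"
fi
```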

As noted above, the processing and dispatcher threads must have dedicated cores; binding several threads to one core is not allowed - Stingray SG will log an error in fastdpi_alert.log and will not start.

==== The Dispatcher Thread Load ====