DPDK Interfaces Configuration [VAS Experts Documentation]

en:dpi:dpi_components:platform:dpi_config [2024/09/26 15:29] – external edit 127.0.0.1
en:dpi:dpi_components:platform:dpi_config [2026/03/02 14:28] (current) – [Ports configuration] elena.krasnobryzh
  
===== Ports configuration =====
<note important>
Please note that Mellanox network interface cards cannot be switched to DPDK with the driverctl utility: their driver is installed in a different way. Mellanox cards also remain under operating system control, so their interfaces will still appear in the output of the ip/ifconfig utilities.
If you need to install a driver with DPDK support for Mellanox network cards, please [[en:dpi:techsupport_info|contact technical support]].
</note>
  
The network cards that Stingray will work with are removed from the control of the operating system and are therefore not visible to it as Ethernet devices.
  * cluster with bridge 41-00.0 ←→ 41-00.1
  * cluster with bridges 01-00.0 ←→ 01-00.1 and 05-00.0 ←→ 05-00.1
Clusters are a legacy of the pf_ring version of Stingray SG: in pf_ring, the cluster ("one dispatcher thread + RSS handler threads") is the basic concept and almost the only way to scale. The disadvantage of the cluster approach is that clusters are physically isolated from each other: it is impossible to forward a packet from interface X of cluster #1 to interface Y of cluster #2. This can be a significant obstacle in the SSG L2 BRAS mode.
  
In DPDK, clusters are also isolated from each other, but here a cluster is a logical concept inherited from pf_ring rather than a physical constraint. DPDK is much more flexible than pf_ring and allows you to build complex multi-bridge configurations with many dispatchers without using clusters. In fact, the only argument for clustering in the DPDK version of Stingray is the case when two independent networks A and B are connected to the Stingray SG and must not interact with each other in any way.
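As a sketch of such a non-clustered multi-bridge setup, two bridges 01-00.0 ←→ 01-00.1 and 05-00.0 ←→ 05-00.1 can be written as positionally paired ''in_dev''/''out_dev'' lists (the PCI addresses here are placeholders; adjust them to your hardware):

<code>
in_dev=01-00.0:05-00.0
out_dev=01-00.1:05-00.1
</code>

With this configuration both bridges are served by a single Stingray instance, without the physical isolation imposed by clusters.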
  * RX queue count = 1
  * TX queue count = number of processing threads; each processing thread writes directly to its own TX queue on the card.

==== dpdk_engine=6: RSS dispatchers per bridge ====
<note important>This ''dpdk_engine'' is available starting from version 14.0!</note>
This engine is intended for configurations with multiple bridges (dev1:dev2:dev3:...) on 100G+ cards.

<code>
in_dev=41-00.0:02-00.0:c3-00.0:c1-00.0:04-00.0:04-00.1
out_dev=41-00.1:41-00.1:02-00.1:02-00.1:c3-00.1:c3-00.1

dpdk_engine=6
dpdk_rss=4
num_threads=64

dpdk_mempool_size=256000
mem_tracking_flow=40000000
mem_tracking_ip=40000000
dpdk_emit_mempool_size=256000
mem_ssl_parsers=18000000
mem_http_parsers=512000
</code>
This example creates 24 dispatcher threads: 4 dispatchers for each of the 6 bridges.

<note tip>
Total number of dispatchers = ''dpdk_rss'' * number of bridges.\\
For 100G+ NICs, with a ratio of one dispatcher per 10G, the minimum number of dispatchers is 10.\\
Starting from version 14.0, the maximum number of dispatchers is 32.
</note>
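To illustrate the arithmetic from the tip above: with two 100G bridges and a target of one dispatcher per 10G, setting ''dpdk_rss=10'' yields 2 * 10 = 20 dispatchers in total, which is within the limit of 32. A sketch of such a configuration (PCI addresses are placeholders):

<code>
in_dev=41-00.0:02-00.0
out_dev=41-00.1:02-00.1

dpdk_engine=6
dpdk_rss=10
</code>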


On-stick devices are supported.\\
SSG configures the cards as follows:
  * RX queue count = ''dpdk_rss''
  * TX queue count = number of processing threads; each processing thread writes directly to its own TX queue on the card.