===== Minimum Requirements =====
SSG software runs on general-purpose x86 servers installed in a 19-inch rack, with redundant AC/DC power and cooling fans. Due to the high degree of code optimization and integration with the hardware, the server configuration must meet several special requirements:

<note important>The CPU and RAM parameters are determined by the required bandwidth. We advise you to review the [[en:dpi:dpi_brief:dpi_requirements#recommended_requirements|Recommended Requirements]] and agree on the choice of server with VAS Experts representatives or our partners before installing the software.</note>
| CPU | **One CPU** supporting **SSE 4.2**, starting from [[http://en.wikipedia.org/wiki/Nehalem_(microarchitecture)|Intel Nehalem]] and [[https://en.wikipedia.org/wiki/Zen_2|AMD EPYC Zen 2]], with **4 or more processor cores** and a clock speed of **2.5 GHz or above**.\\ **SSG only works with one processor!** A quick flag check is sketched below the table. |
| RAM | At least 8 GB; memory modules must be installed in all processor memory channels on the motherboard |
| SSD Disks | To host the OS and SSG software, use two disks of 256 GB or more combined in RAID 1 (mirror) on a hardware RAID controller. NVMe SSDs (in M.2 or U.2 form factor, or as PCI Express expansion cards) are preferred. If the platform does not support this type of media, we recommend SATA/SAS SSDs (DWPD >= 1) instead of HDDs |
| Number of network ports | At least **3 ports** are required: **one** for remote management over SSH (any chipset), and **the other two** to process network traffic ([[https://core.dpdk.org/supported/nics/|network cards with DPDK support]]) |
| Supported network cards | It is recommended to use **only tested cards** on **Intel** chipsets ((if your card is not on the tested list, software adaptation, development, and additional testing will be required)) with 2, 4, or 6 ports ((a specific model list is not provided, as there is a very large selection of manufacturers for these cards: from Intel itself to branded options like Huawei, HP, Dell, Silicom, Advantech, Lanner, and Supermicro, and dozens of others, as well as cards built into motherboards or integrated in an SoC)). The most popular models: \\ **1GbE interfaces:** \\ - e1000 (82540, 82545, 82546) \\ - e1000e (82571, 82572, 82573, 82574, 82583, ICH8, ICH9, ICH10, PCH, PCH2, I217, I218, I219) \\ - igb (82573, 82576, 82580, I210, I211, I350, I354, DH89xx) \\ - igc (I225) \\ \\ **10GbE interfaces:** \\ - ixgbe (82598, 82599, X520, X540, X550) \\ - i40e (X710, XL710, X722, XXV710) \\ - mlx5 \\ \\ **25GbE interfaces:** \\ - i40e (XXV710) \\ - mlx5 \\ \\ **Many server platforms have bandwidth limitations on 40G/100G ports; we recommend purchasing equipment from our partners for such installations.** \\ \\ **40GbE interfaces** (an x8 PCIe 3.0 card has a maximum bandwidth of 64 Gbps, so a 2x40GbE card can handle no more than 32 Gbps in + 32 Gbps out in inline mode; in on-stick mode it can handle no more than 64 Gbps in+out across both ports. To avoid these limitations, it is recommended to use only one port of a two-port 40GbE card; see the bandwidth sketch below the table): \\ - i40e (X710, XL710, X722, XXV710) \\ \\ **100GbE interfaces, a motherboard with PCIe 4.0 x16 support is required:** a 2x100GbE card can handle no more than 50 Gbps in + 50 Gbps out per port in inline mode; in on-stick mode it can handle no more than 128 Gbps in+out across both ports. To avoid these limitations, it is recommended to use only one port of a two-port 100GbE card. \\ - mlx5 (Mellanox ConnectX-4, ConnectX-5 (MCX516A-CDAT), ConnectX-6) \\ - ice (Intel E810, E810-CQDA2) - //make sure the latest firmware is installed on the Intel card: earlier firmware versions did not support GRE tunnels// \\ **For BRAS PPPoE, only 100G Intel E810 cards should be used (Mellanox cards do not support RSS for PPPoE traffic)** |
| Bypass support | Bypass is supported for Silicom cards: [[https://www.silicom-usa.com/pr/server-adapters/networking-bypass-adapters/100-gigabit-ethernet-bypass-networking-server-adapters/p4cg2bpi81-bypass-server-adapter/|100GbE]], [[https://www.silicom-usa.com/pr/server-adapters/networking-bypass-adapters/40-gigabit-ethernet-bypass-networking-server-adapters/pe340g2bpi71-server-adapter/|40GbE]], [[http://www.silicom-usa.com/pr/server-adapters/networking-bypass-adapters/10-gigabit-ethernet-bypass-networking-server-adapters/pe210g2bpi9-ethernet-bypass/|10GbE]], and [[http://www.silicom-usa.com/cats/server-adapters/networking-bypass-adapters/gigabit-ethernet-bypass-networking-server-adapters/|1GbE]] |
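The SSE 4.2 requirement from the CPU row can be verified before purchase. Below is a minimal sketch (not part of the SSG distribution), assuming a Linux host where the kernel reports CPU features in ''/proc/cpuinfo'':

<code python>
# Minimal sketch: check whether the host CPU advertises SSE 4.2.
# Assumes a Linux host; the kernel reports this instruction set as
# the "sse4_2" entry in the "flags" line of /proc/cpuinfo.
def has_sse42(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "sse4_2" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    print("SSE 4.2 supported:", has_sse42())
</code>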
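The bandwidth sketch referenced in the table: a rough illustration of the PCIe arithmetic behind the 40G/100G limits. It models only the 128b/130b line encoding overhead, so real throughput is somewhat lower; note that the table's 100GbE figures also reflect card-internal limits, not just the slot.

<code python>
# Rough sketch of the PCIe arithmetic behind the 40G/100G limits in
# the table above. Only 128b/130b line encoding overhead is modeled.

GT_PER_LANE = {"3.0": 8.0, "4.0": 16.0}  # transfer rate per lane, GT/s
ENCODING = 128 / 130                     # 128b/130b line encoding

def slot_gbps(gen: str, lanes: int) -> float:
    """Theoretical per-direction slot bandwidth in Gbps (PCIe is full duplex)."""
    return GT_PER_LANE[gen] * lanes * ENCODING

# PCIe 3.0 x8 (typical 2x40GbE card): ~63 Gbps per direction, i.e. the
# "64 Gbps" ceiling that caps a 2x40GbE card at 32 Gbps in + 32 Gbps out.
print(f"PCIe 3.0 x8 : {slot_gbps('3.0', 8):5.0f} Gbps per direction")

# PCIe 4.0 x16 (required for 100GbE): ~252 Gbps per direction.
print(f"PCIe 4.0 x16: {slot_gbps('4.0', 16):5.0f} Gbps per direction")
</code>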
<note important>The SSG platform operates only under the control of [[en:veos:installation|VEOS (VAS Experts Operating System)]]</note>
4. **The use of 100G interfaces** is possible only when the platform is delivered through a partner, in order to control the server specification.\\
5. **Using the option [[en:dpi:dpi_options:opt_shaping|]]** involves additional internal locks, which reduces system performance to 40G of total traffic, regardless of the number of cores.\\
6. **Every 256 public IP addresses in a NAT pool (a /24 subnet) consume 5 GB of RAM: /23 = 10 GB, /22 = 20 GB, /21 = 40 GB, /20 = 80 GB, /19 = 160 GB** (a sizing sketch follows this note).</note>
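The sizing sketch for item 6: a minimal illustration of the stated rule, assuming RAM simply doubles with each additional prefix bit, starting from 5 GB for a /24.

<code python>
# Sketch of the NAT pool RAM rule from item 6: a /24 (256 public IPs)
# consumes 5 GB, and each extra prefix bit doubles both the address
# count and the RAM consumption.

def nat_pool_ram_gb(prefix_len: int) -> int:
    """RAM in GB for a NAT pool with the given prefix length (/24 or shorter)."""
    if not 0 < prefix_len <= 24:
        raise ValueError("rule is stated for /24 and larger pools")
    return 5 * 2 ** (24 - prefix_len)

for p in (24, 23, 22, 21, 20, 19):
    print(f"/{p}: {2 ** (32 - p):6d} addresses -> {nat_pool_ram_gb(p)} GB RAM")
</code>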
===== Requirements for Installation on a Virtual Machine =====