en:dpi:dpi_components:platform:faq:net_points:start (revision of 2021/07/27 11:29 by edrudichgmailcom; removed 2024/07/29 12:41 by elena.krasnobryzh)

====== 3 Networking ======
===== Does STP pass through transparently? =====
Yes.

===== The server has one 10G network interface. Will your solution allow passing traffic through the SSG by organizing two VLANs on this interface (input and output)? =====
No, and no support for this scheme is planned.

===== Can your system establish a BGP session with a border router to export the prefixes whose traffic needs to be routed to the SSG? =====
Yes, it can. [[en:

===== We connected an internal LAN to try it out, and the ping time did not change. Is there supposed to be a delay? =====
If the equipment complies with our recommendations, the added latency is only tens of microseconds, which is below the resolution of a typical ping.

===== If mirroring is implemented and traffic with different VLAN tags arrives on in_dev=dna1, which tag will the response carry? =====
The SSG sends the response with the original packet's tag if no [[en:

===== What is the Stingray Service Gateway? A router, a NAT, a transparent proxy? Or is it transparent to network devices? =====
The Stingray Service Gateway is a DPI device, analogous to the Cisco SCE. It works as a bridge, without assigning IP addresses, so it is not visible in the network.
Latency is no more than 30 microseconds (measured at 16 microseconds in tests).
[[en:

===== How is aggregated traffic sent? Can ports be grouped via LACP? =====
Yes, you can use LACP or LAGG for traffic aggregation.\\
[[en:

===== At what point should the complex be connected: before or after termination on the BRAS (in other words, on L2 or L3)? =====
It depends on the task: if the platform is connected as a DPI, it is placed after the termination point (L3); if you need BRAS and NAT functionality, it is connected before termination (L2).\\
[[en:

===== The WEB server network stack optimization =====

<code bash>
# WEB server network stack optimization (sysctl settings)
net.core.netdev_max_backlog = 10000
net.core.somaxconn = 262144
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_max_tw_buckets = 720000
# Note: tcp_tw_recycle breaks clients behind NAT and was removed in Linux 4.12
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_keepalive_probes = 7
net.ipv4.tcp_keepalive_intvl = 30
net.core.wmem_max = 33554432
net.core.rmem_max = 33554432
net.core.rmem_default = 8388608
net.core.wmem_default = 4194304
net.ipv4.tcp_rmem = 4096 8388608 16777216
net.ipv4.tcp_wmem = 4096 4194304 16777216
</code>

===== Why does one BGP session connect while another does not? =====

Look at the tcpdump output on both sides. \\
In the SYN of one session we see mss 1480, \\
and in the second session we see mss 8500. \\
This means that one peer's interface MTU is the standard 1500, while the other's is oversized (jumbo frames). \\
The session whose MSS is higher than 1480 cannot get its full-size packets across the standard-MTU path, so it does not establish. \\
The fix is to clamp the MSS on the MX:
<code bash>
# MX BGP configuration fragment (the group name and neighbor address were
# lost from the original; GROUP-NAME and 192.0.2.1 are placeholders)
traceoptions {
    file as12389.log size 1m files 3;
}
group GROUP-NAME {
    neighbor 192.0.2.1 {
        peer-as 12389;
        tcp-mss 1460;
    }
}
</code>

[[en:
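The MTU/MSS arithmetic behind this can be sketched as follows. A basic IPv4 TCP segment spends 40 bytes on IP and TCP headers, so the MSS is roughly the MTU minus 40 (observed values vary with TCP options); the jumbo MTU below is an assumption chosen to match the observed mss 8500.

<code bash>
# MSS ~= interface MTU - 40 bytes (20 IPv4 + 20 TCP header bytes)
mtu_standard=1500
mtu_jumbo=8540    # assumed jumbo MTU matching the observed mss 8500
echo "standard-path MSS: $((mtu_standard - 40))"
echo "jumbo MSS:         $((mtu_jumbo - 40))"
</code>

A session negotiating an MSS above 1460 cannot push full-size segments through a 1500-byte path, which is why the MX clamps tcp-mss to 1460.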