For testing, the component can be installed on a VM with minimal requirements.
Example of a QoE server receiving IPFIX from a DPI for 100 Gbps of peak traffic (in + out): 2U server platform, AMD EPYC 7713 (64 cores), 512 GB RAM, hardware RAID controller, 2 × 960 GB SSD in RAID1 for the OS, 4 × 3.84 TB NVMe SSD in a RAID0 stripe for the default disks, plus HDD/SSD RAID50 for storage sized according to volume, 2 network adapters (2 × 25 GbE each), and 2 PSUs.
It is assumed that the average daily traffic is 60% of the peak total (in + out) traffic.
To calculate the required storage size, change the traffic value in the provided calculator.
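As an illustration of the arithmetic behind the calculator, here is a minimal Python sketch. The 60% peak-to-average ratio comes from the text above; the IPFIX-bytes-per-traffic coefficient is a hypothetical placeholder, not a value from this document — take the real figure from the provided calculator.

```python
# Hedged storage-sizing sketch. AVG_RATIO comes from the 60% assumption
# above; IPFIX_GB_PER_TB is a HYPOTHETICAL coefficient (GB of stored
# IPFIX per TB of user traffic) -- use the provided calculator for
# real numbers.
PEAK_GBPS = 100            # peak in+out traffic, Gbit/s
AVG_RATIO = 0.6            # average daily traffic = 60% of peak
IPFIX_GB_PER_TB = 3        # assumed coefficient, not from the source

avg_gbps = PEAK_GBPS * AVG_RATIO
traffic_tb_per_day = avg_gbps / 8 * 86_400 / 1_000    # Gbit/s -> TB/day
ipfix_gb_per_day = traffic_tb_per_day * IPFIX_GB_PER_TB

print(f"average traffic : {avg_gbps:.0f} Gbit/s")
print(f"traffic volume  : {traffic_tb_per_day:.0f} TB/day")
print(f"IPFIX storage   : ~{ipfix_gb_per_day:.0f} GB/day (assumed coefficient)")
```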
| CPU | A single processor with SSE 4.2 support (Intel Nehalem or newer, AMD EPYC Zen 2 or newer), 4 or more cores, and a base clock of 2.5 GHz or higher. Prefer more cores over higher clock speed: for example, 16 cores at 2600 MHz perform better than 8 cores at 3600 MHz. Do not disable Hyper-Threading or Turbo Boost. |
| RAM | 16 GB or more; install memory modules in all CPU memory channels on the motherboard. The amount of memory should not be less than the volume of data being queried: the more memory, the faster report generation and the lower the disk load. Always disable the swap file. |
| Disks | To optimize storage cost, several disk tiers are used: 1. default — fast disks for data ingestion and aggregation; NVMe SSDs in RAID0 are recommended. 2. hot — disks for data likely to be queried (usually up to 3 months); SSDs in RAID-10, RAID-5, RAID-6, or RAID-50. 3. cold — high-capacity slow disks for long-term storage; HDDs in RAID-10, RAID-5, RAID-6, or RAID-50 are recommended. The retention period for each tier is configured via the GUI; data migration and cleanup run automatically according to these settings, and an overflow-protection mechanism is provided (see the tier-sizing sketch after this table). The main data volume is stored in /var/lib/clickhouse; temporary data (IPFIX dumps) is stored in /var/qoestor/backend/dump. For best performance, place these directories on a separate disk or array; see Disk space configuration. For the OS and QoE Stor software, use two drives of at least 256 GB combined in RAID1 (mirror). A hardware RAID controller is required. |
| QoE Cluster (Sharding) | It is better to create several nodes and combine them into a cluster: the GUI can optimize queries so that all nodes build reports in parallel. An IPFIX balancer distributes data evenly across the nodes (round-robin), which significantly improves performance; if a node fails, the balancer automatically redirects its data to the remaining nodes (see the balancer sketch after this table). The general recommendation is more nodes with smaller data portions per node. This provides: 1. high performance; 2. fault tolerance; 3. scalability (by adding nodes to the cluster). |
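To make the disk-tier layout concrete, here is a minimal sketch that turns a daily ingest figure into per-tier capacity. The retention horizons and the ingest value are placeholder assumptions (actual retention is configured in the GUI, and real ingest comes from the calculator); the tier names follow the Disks row above.

```python
# Hedged per-tier capacity sketch. INGEST_TB_PER_DAY and the retention
# horizons are placeholder assumptions; real retention is configured in
# the GUI and real ingest comes from the calculator.
INGEST_TB_PER_DAY = 2.0                  # assumed daily IPFIX volume

TIERS = [                                # (tier, retention horizon in days)
    ("default (NVMe SSD RAID0)", 7),     # ingestion/aggregation working set
    ("hot (SSD RAID-10/50)", 90),        # data likely to be queried (~3 months)
    ("cold (HDD RAID-10/50)", 365),      # long-term storage
]

prev = 0
for name, horizon in TIERS:
    capacity_tb = (horizon - prev) * INGEST_TB_PER_DAY
    print(f"{name:26s}: ~{capacity_tb:5.0f} TB (days {prev + 1}-{horizon})")
    prev = horizon
```

Under these assumed numbers, the default tier (~14 TB) would fit the 4 × 3.84 TB NVMe RAID0 stripe from the example configuration above; your own retention settings and ingest rate will change the result.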
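The round-robin distribution with failover described in the sharding row can be sketched as follows. The node addresses and the is_alive() health check are hypothetical stand-ins; the real IPFIX balancer is a separate component with its own health tracking.

```python
# Minimal round-robin-with-failover sketch, illustrating the balancing
# behaviour described above. NODES and is_alive() are hypothetical; the
# real IPFIX balancer is a separate component.
from itertools import cycle

NODES = ["qoe-node-1:1500", "qoe-node-2:1500", "qoe-node-3:1500"]
_ring = cycle(NODES)

def is_alive(node: str) -> bool:
    """Placeholder health check; a real balancer tracks node state."""
    return True

def next_node() -> str:
    """Return the next healthy node, skipping failed ones automatically."""
    for _ in range(len(NODES)):
        node = next(_ring)
        if is_alive(node):
            return node
    raise RuntimeError("no healthy QoE nodes available")

# Each incoming IPFIX record would be forwarded to next_node().
```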
Operations tips from Yandex ClickHouse are available at https://clickhouse.yandex/docs/ru/operations/tips/.