====== 5 Cluster GUI ======
{{indexmenu_n>5}}

Clustering improves system availability by propagating changes to several servers. If one of the servers fails, the others remain available for operation.

dpiui2 clustering is implemented through database and file system replication.

Clustering capability is available starting from version [[en:dpi:dpi_components:dpiui:install_and_update:version_information#version_v2259_08_29_2022|dpiui2-2.25.9]].

===== Database Replication (DB) =====

Database replication is implemented using MariaDB Galera Cluster.

Galera is a database clustering solution that lets you set up multi-master clusters with synchronous replication. Galera automatically handles the placement of data across the nodes while allowing read and write requests to be sent to any node in the cluster.
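
For example, because every node accepts writes, a record inserted through one node can immediately be read back from any other node (a toy illustration with a hypothetical test table, not part of the dpiui2 setup):

<code># on the first node: write a row
mysql -e "INSERT INTO test.demo (id) VALUES (1);"
# on any other node: the row is already there, since replication is synchronous
mysql -e "SELECT * FROM test.demo WHERE id = 1;"</code>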

More information about Galera can be found in the [[https://mariadb.com/kb/en/what-is-mariadb-galera-cluster/|official documentation]].


===== File system replication (FS) =====

File system replication is implemented using GlusterFS.

GlusterFS is a distributed, parallel, linearly scalable, fault-tolerant file system. It combines data stores located on different servers into one parallel network file system. GlusterFS runs in user space using FUSE, so it does not require operating system kernel support and works on top of existing file systems (ext3, ext4, XFS, reiserfs, etc.).

More information about GlusterFS can be found in the [[https://docs.gluster.org/en/latest/|official documentation]].
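
Because the volume is mounted through FUSE, it appears on each node as an ordinary mounted file system. For example, once the cluster is set up (see below), the mount can be inspected with:

<code>mount -t fuse.glusterfs
# or
df -h -t fuse.glusterfs</code>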


===== Installation and setup =====

==== Settings ====

All settings can be made in the dpiui2 .env file or in the GUI under the Configuration > Cluster Settings section.

{{ :dpi:dpi_components:dpiui:user_guide:admin_section:cluster:dpiui2_cluster_setup.png?direct&640 |}}

The following settings are available:

**GALERA_PEER_HOSTS** is a comma-separated list of Galera cluster hosts. The parameter determines which nodes will be available to the Galera cluster.

<note important>! Important: the main (master) node of the cluster must be placed at the beginning of the list. This matters for the initial cluster deployment.</note>

**CLUSTER_FS_PEER_HOSTS** is a comma-separated list of GlusterFS cluster hosts. The parameter determines which nodes will be available to the GlusterFS cluster.

<note important>! Important: the main (master) node of the cluster must be placed at the beginning of the list. This matters for the initial cluster deployment.</note>

**CLUSTER_PRIMARY_HOST** is the master node for both Galera and GlusterFS. The parameter defines which node is currently the main one and can be changed during operation if the main node fails for some reason.
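
For example, for a three-node cluster the .env entries could look like this (the IP addresses are hypothetical and should be replaced with the addresses of your nodes):

<code>GALERA_PEER_HOSTS=192.168.0.1,192.168.0.2,192.168.0.3
CLUSTER_FS_PEER_HOSTS=192.168.0.1,192.168.0.2,192.168.0.3
CLUSTER_PRIMARY_HOST=192.168.0.1</code>

Note that 192.168.0.1, the master node, comes first in both lists, as required above.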


==== Installing and running Galera ====

To install and start the Galera cluster, run the following script as the root user on all cluster nodes, starting with the master node:

<code>sh "/var/www/html/dpiui2/backend/app_bash/galera_setup.sh" -a init_cluster</code>

<note warning>!!! Important: before running the script on the master node, back up the database.</note>

<note important>! Important: before running the script, you must fill in the [[en:dpi:dpi_components:dpiui:user_guide:admin_section:cluster:start#settings|settings]].</note>

<note important>! Important: there must be IP connectivity between the cluster nodes.</note>

<note important>! Important: the script must be run as root.</note>

<note important>! Important: the script must be run on the master node first.</note>

<note important>! Important: wait for the script to finish on one node before running it on the next.</note>
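
After the script has finished on all nodes, you can verify the cluster from any node (a minimal check, assuming local root access to MariaDB; on a healthy three-node cluster the value should be 3):

<code>mysql -u root -e "SHOW STATUS LIKE 'wsrep_cluster_size';"</code>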


==== Installing and running GlusterFS ====

To install and run the GlusterFS cluster, perform the following steps as the root user:

1. Run the script on all nodes, one at a time:

<code>sh "/var/www/html/dpiui2/backend/app_bash/glusterfs_setup.sh" -a init_gluster</code>

The script performs the initial installation of GlusterFS. It must be run on all cluster nodes.

2. On the main (master) node, run the script:

<code>sh "/var/www/html/dpiui2/backend/app_bash/glusterfs_setup.sh" -a init_peers</code>

The script configures all cluster nodes. It needs to be run only on the master node; there is no need to run it on the other nodes.
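
To verify that the peers have joined, you can run the following on the master node; all other nodes should be listed with the state "Peer in Cluster (Connected)":

<code>gluster peer status</code>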

3. On the main (master) node, run the script:

<code>sh "/var/www/html/dpiui2/backend/app_bash/glusterfs_setup.sh" -a init_volume</code>

The script configures the distributed storage and file system. It needs to be run only on the master node; there is no need to run it on the other nodes.
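
To check the resulting volume configuration (replica count, bricks), you can run the following on the master node; the volume name depends on the script:

<code>gluster volume info</code>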

4. Run the script on all nodes, one at a time:

<code>sh "/var/www/html/dpiui2/backend/app_bash/glusterfs_setup.sh" -a mount</code>

The script mounts the replicated directories onto the distributed file system. It must be run on all cluster nodes.
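
As a quick smoke test, you can check that a file created in a replicated directory on one node appears on the others (assuming, as the backup note below suggests, that /var/www/html/dpiui2/backend/storage/ is among the replicated directories):

<code># on any node: create a test file
touch /var/www/html/dpiui2/backend/storage/replication_test
# on another node: the file should be visible
ls -l /var/www/html/dpiui2/backend/storage/replication_test
# clean up
rm /var/www/html/dpiui2/backend/storage/replication_test</code>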

<note warning>!!! Important: before running the scripts on the master node, be sure to back up the /var/www/html/dpiui2/backend/storage/ directory.</note>

<note important>! Important: before running the scripts, you must fill in the [[en:dpi:dpi_components:dpiui:user_guide:admin_section:cluster:start#settings|settings]].</note>

<note important>! Important: there must be IP connectivity between the cluster nodes.</note>

<note important>! Important: the scripts must be run as root.</note>

<note important>! Important: wait for the script to finish on one node before running it on the next.</note>


===== Master server =====

The main (master) server plays an important role in the cluster.

The master server is selected by the [[en:dpi:dpi_components:dpiui:user_guide:admin_section:cluster:start#settings|CLUSTER_PRIMARY_HOST]] setting.

The master server performs all the background work of dpiui2: interaction with equipment, synchronization of subscribers, services, tariffs, and so on.

The remaining (slave) nodes do not perform any background activity and are on standby. At the same time, these nodes are available for work: users can work with them in the same way as with the master server and will not notice any difference. This can be used for load balancing as well as for providing more secure access.

If the master server fails, change the CLUSTER_PRIMARY_HOST setting to make another server the master.
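
For example, if the master at 192.168.0.1 fails, you could promote 192.168.0.2 by editing the dpiui2 .env file on the surviving nodes (a hypothetical sketch: the addresses and the .env path are assumptions, and the same change can be made in the GUI under Configuration > Cluster Settings):

<code># assumed .env location; adjust to your installation
sed -i 's/^CLUSTER_PRIMARY_HOST=.*/CLUSTER_PRIMARY_HOST=192.168.0.2/' /var/www/html/dpiui2/backend/.env</code>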


===== Number of nodes =====

Normal operation of the cluster requires 3 nodes (3 servers or virtual machines).

If you start the cluster on only 2 nodes, there will be problems with restarting the nodes.

<note warning>!!! Important: do not try to run GlusterFS on only 2 nodes. The cluster requires a 3rd server, an arbiter. If you restart either of the 2 nodes, you will lose data.</note>

===== Restarting nodes =====

In normal mode, you can stop or restart 1 or 2 servers at the same time without consequences.

If you need to stop all 3 servers, do it sequentially. It is advisable to stop the master node last. When starting up again, start first the server that was stopped last.

If all 3 servers were stopped, you will need to initialize the Galera cluster manually:

1. Stop the database server on all nodes. To do this, run the following command:

<code>systemctl stop mariadb</code>

2. Determine which server was stopped last ([[https://galeracluster.com/library/documentation/crash-recovery.html|more information]]):

<code>cat /var/lib/mysql/grastate.dat</code>
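
The output looks something like this (an illustrative example; the uuid and seqno values will differ):

<code># GALERA saved state
version: 2.1
uuid:    f7a3c8d2-1111-2222-3333-444455556666
seqno:   1432
safe_to_bootstrap: 1</code>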

Find the node that has safe_to_bootstrap = 1 or the highest seqno. On that node, run:

<code>galera_new_cluster</code>

On the rest of the nodes, run:

<code>systemctl start mariadb</code>