====== 5 Cluster GUI ======
{{indexmenu_n>5}}

Clustering improves system availability by propagating changes to several servers. If one of the servers fails, the others remain available for operation.

Clustering of dpiui2 is implemented through database and file system replication.

Clustering capability is available from version [[en:

===== Database Replication (DB) =====

Database replication is implemented using MariaDB Galera Cluster.

Galera is a database clustering solution that allows you to set up multi-master clusters using synchronous replication. Galera automatically handles the placement of data on the different nodes, while allowing you to send read and write requests to any node in the cluster.

More information about Galera can be found at [[https://
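As an illustration of the multi-master model, any node accepts both reads and writes. The sketch below assumes hypothetical node addresses (192.0.2.x) and a hypothetical table test.t1:

<code bash>
# On any node: create a table and write a row (test.t1 is hypothetical)
mysql -h 192.0.2.1 -e "CREATE TABLE test.t1 (id INT PRIMARY KEY)"
mysql -h 192.0.2.1 -e "INSERT INTO test.t1 VALUES (1)"
# Synchronous replication: the row is immediately visible through any other node
mysql -h 192.0.2.2 -e "SELECT * FROM test.t1"
</code>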
- | |||
- | |||
- | ===== File system replication (FS) ===== | ||
- | |||
- | File system replication is implemented using GlusterFS. | ||
- | |||
- | GlusterFS is a distributed, | ||
- | |||
- | More information about GlusterFS can be found at [[https:// | ||
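As an illustration, with a replicated GlusterFS volume mounted at the same path on every node (the mount point below is hypothetical):

<code bash>
# On node 1: write a file under the replicated mount point
echo "hello" > /mnt/cluster_fs/test.txt
# On node 2: the same file is readable at the same path
cat /mnt/cluster_fs/test.txt
</code>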
- | |||
- | |||
- | ===== Installation and setup ===== | ||
- | |||
- | ==== Settings ==== | ||
- | |||
- | All settings can be made in the dpiui2 .env file or in the GUI Configuration > Cluster Settings section. | ||
- | |||
- | {{ : | ||
- | |||
- | Settings options: | ||
- | |||
- | **GALERA_PEER_HOSTS** is comma-separated list of Galera cluster hosts. The parameter determines which nodes will be available to the Galera cluster. | ||
- | |||
- | <note important> | ||
- | </ | ||
- | |||
- | **CLUSTER_FS_PEER_HOSTS** is Comma-separated list of GlusterFS cluster hosts. The parameter determines which nodes will be available to the GlusterFS cluster. | ||
- | |||
- | <note important> | ||
- | |||
- | **CLUSTER_PRIMARY_HOST** is the master node for Galera and GlusterFS. The parameter defines the main node at the current moment. This parameter can be changed during operation if the main unit fails for some reason. | ||
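A minimal sketch of the corresponding .env fragment (the addresses are hypothetical; substitute your own node addresses):

<code ini>
# All nodes of the Galera cluster
GALERA_PEER_HOSTS=192.0.2.1,192.0.2.2,192.0.2.3
# All nodes of the GlusterFS cluster
CLUSTER_FS_PEER_HOSTS=192.0.2.1,192.0.2.2,192.0.2.3
# The current master node
CLUSTER_PRIMARY_HOST=192.0.2.1
</code>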
- | |||
- | |||
- | ==== Installing and running Galera ==== | ||
- | |||
- | To install and start the Galera cluster, you need to run the following script under the root user on all nodes of the cluster, starting from the master node: | ||
- | |||
- | < | ||
- | |||
- | <note warning> | ||
- | |||
- | <note important> | ||
- | |||
- | <note important> | ||
- | |||
- | <note important> | ||
- | |||
- | <note important> | ||
- | |||
- | <note important> | ||
- | |||
- | <note important> | ||
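After the script has finished on all nodes, the state of the cluster can be checked with Galera's standard status variables (a sketch; it assumes the mysql client can connect locally on a node):

<code bash>
# With 3 nodes, wsrep_cluster_size should report 3
mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
# Each node should report its state as 'Synced'
mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
</code>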
==== Installing and running GlusterFS ====

To install and run the GlusterFS cluster, perform the following steps as the root user:

1. Execute the script on all nodes in sequence:

<

The script performs the initial installation of GlusterFS. It must be run on all cluster nodes.

2. On the main (master) node, execute the script:

<

The script configures all cluster nodes. It must be run only on the master node; you do not need to run it on the other nodes.

3. On the main (master) node, execute the script:

<

The script configures the distributed storage and file system. It must be run only on the master node; you do not need to run it on the other nodes.

4. Execute the script on all nodes in sequence:

<

The script mounts the replicated directories to the distributed file system. It must be run on all cluster nodes.
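Once the directories are mounted, the result can be verified with the standard GlusterFS tools (a sketch; volume and brick names depend on the installation):

<code bash>
# Every peer should be shown as 'Peer in Cluster (Connected)'
gluster peer status
# Shows the volume, its type (replicated) and its bricks
gluster volume info
# The replicated directories should be mounted as type fuse.glusterfs
mount -t fuse.glusterfs
</code>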
- | |||
- | |||
- | <note warning> | ||
- | |||
- | <note important> | ||
- | |||
- | <note important> | ||
- | |||
- | <note important> | ||
- | |||
- | <note important> | ||
- | |||
- | |||
- | |||
- | ===== Master server ===== | ||
- | |||
- | Важную роль в кластере играет Главный (мастер) сервер. | ||
- | |||
- | Мастер сервер устанавливается настройкой [[en: | ||
- | |||
- | Мастер сервер выполняет всю фоновою работу dpiui2: взаимодействие с оборудованием, | ||
- | |||
- | Остальные (slave) узлы не выполняют никаких фоновых действий и находятся в режиме ожидания. При этом к эти узлы доступны для работы: | ||
- | |||
- | При выходе из строя мастер сервера, | ||
- | |||
- | |||
The main (master) server plays an important role in the cluster.

The master server is set by the **CLUSTER_PRIMARY_HOST** setting.

The master server performs all the background work of dpiui2, such as interaction with the equipment and synchronization of subscribers.

The remaining (slave) nodes do not perform any background work and are in standby mode. At the same time, these nodes remain available for use: users can work with them in the same way as with the master server and will not notice any difference. This can be used for load balancing as well as for providing more secure access.

If the master server fails, change the **CLUSTER_PRIMARY_HOST** setting to make another server the master.
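A minimal sketch of such a failover, assuming hypothetical addresses 192.0.2.1 (the failed master), 192.0.2.2 and 192.0.2.3:

<code ini>
# dpiui2 .env on the remaining nodes: promote 192.0.2.2 to master
CLUSTER_PRIMARY_HOST=192.0.2.2
</code>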
- | |||
- | |||
- | ===== Number of nodes ===== | ||
- | |||
- | For normal operation of the cluster, 3 nodes (3 servers or virtual machines) are required. | ||
- | |||
- | If you start the cluster on only 2 nodes, there will be problems with restarting the nodes. | ||
- | |||
- | <note warning> | ||
- | |||
- | ===== Restart nodes ===== | ||
- | |||
- | In normal mode, you can stop / restart 1 or 2 servers at the same time without consequences. | ||
- | |||
- | If you need to stop all 3 servers, you need to do it sequentially. It is advisable to stop the master node last. You must first start the server that was stopped last. | ||
- | |||
- | If all 3 servers were stopped, you will need to initialize the Galera cluster manually: | ||
- | |||
- | 1 | ||
- | |||
- | Stop the database server on all nodes. To do this, run the following command | ||
- | |||
- | < | ||
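For example, if MariaDB is managed by systemd (the default for MariaDB Galera packages):

<code bash>
systemctl stop mariadb
</code>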
- | |||
- | 2 | ||
- | |||
- | Determine which server was stopped last ([[https:// | ||
- | |||
- | < | ||
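Galera records the local state, including the seqno and safe_to_bootstrap fields mentioned below, in the grastate.dat file. Assuming the default MariaDB data directory, check it on each node:

<code bash>
cat /var/lib/mysql/grastate.dat
</code>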
- | |||
- | Find the node that has safe_to_bootstrap = 1 or the highest seqno. For this node, run: | ||
- | |||
- | < | ||
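On MariaDB this is normally done with the galera_new_cluster helper (a sketch, assuming a systemd-based installation):

<code bash>
galera_new_cluster
</code>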
- | |||
- | For the rest of the nodes, do: | ||
- | |||
- | < | ||
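For example, again assuming systemd:

<code bash>
systemctl start mariadb
</code>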
- | |||