
How to Set Up Plesk in a High Availability Cluster

HA Cluster: Configuring Software

Hosting-related system services

Because the software responsible for the HA cluster will manage system services such as Plesk, nginx, and so on, we should disable these services so that they are not started automatically with the system. The following command, which must be executed on both nodes, disables auto-starting for these services.

ha-node1 and ha-node2# for i in \
plesk-ip-remapping \
plesk-php74-fpm.service \
plesk-php82-fpm.service \
plesk-web-socket.service \
plesk-task-manager.service \
plesk-ssh-terminal.service \
plesk-repaird.socket \
sw-engine.service \
sw-cp-server.service \
psa.service \
cron.service \
xinetd.service \
nginx.service \
apache2.service httpd.service \
mariadb.service mysql.service postgresql.service \
named.service bind9.service named-chroot.service \
postfix.service; \
do systemctl disable $i && systemctl stop $i; \
done

In the output, you may see lines like “Failed to disable unit: Unit file bind9.service does not exist”. This is not a critical error, because the command lists the names that the same services have on different OSes, such as CentOS and Ubuntu, as well as the names of different services that provide similar functionality (like MySQL and MariaDB). If you have installed any additional component with Plesk, such as “php80”, you also need to disable the service for that component if the component added a service to the server.
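One way to spot such component services is to list the installed unit files and filter for Plesk-related names; the grep pattern below is an assumption and may need adjusting for the components on your server:

```shell
# List installed systemd unit files whose names look Plesk-related,
# e.g. plesk-php80-fpm.service from an extra PHP component.
systemctl list-unit-files | grep -E 'plesk|sw-engine|sw-cp|psa'
```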

You may run `ps ax` to double-check that there are no remaining running services related to Plesk or any of its components.
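For example, a quick filter over the process list (the pattern is an assumption; adjust it to the components installed on your server):

```shell
# An empty result means no Plesk-related processes are still running.
ps ax | grep -E 'plesk|sw-engine|sw-cp|psa' | grep -v grep
```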

Plesk Files

In the previous blog post, you could see how to copy the Plesk “vhosts” directory to the NFS storage. For the HA cluster, we need to do the same, plus a few additional steps for the rest of the Plesk directories, to make them available to the nodes of the HA cluster.

On the NFS server, configure exporting of “/var/nfs/plesk-ha/plesk_files” the same way as “/var/nfs/plesk-ha/vhosts”. After configuring, you should see that the directories are available for remote mounting from the internal network.
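As a sketch, the plesk_files entry in /etc/exports on the NFS server can mirror the vhosts line; the exact export options shown here are an assumption, so match them to your existing vhosts entry:

```shell
# /etc/exports on ha-nfs — the plesk_files export mirrors vhosts:
/var/nfs/plesk-ha/vhosts      10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)
/var/nfs/plesk-ha/plesk_files 10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)
```

After editing the file, `exportfs -ra` re-exports all directories without restarting the NFS server.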

ha-nfs# exportfs -v
/var/nfs/plesk-ha/vhosts
        10.0.0.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/var/nfs/plesk-ha/plesk_files
        10.0.0.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)

When exporting is configured, we will need to copy the Plesk files to the NFS storage. For that, we have to pre-create a directory on each node for mounting the NFS storage:

ha-node1 and ha-node2# mkdir -p /nfs/plesk_files

Files: vhosts

As we previously decided to consider ha-node1 the active node, the following commands should be executed on ha-node1. To copy the existing vhosts directory, run:

ha-node1# mount -t nfs -o "hard,timeo=600,retrans=2,_netdev" 10.0.0.12:/var/nfs/plesk-ha/vhosts /mnt
ha-node1# cp -aRv /var/www/vhosts/* /mnt
ha-node1# umount /mnt
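To sanity-check the copy, one option is to compare the trees between the cp and umount steps above:

```shell
# Recursively compare source and copy; no differences reported and a
# zero exit status mean the trees match.
diff -r /var/www/vhosts /mnt && echo "vhosts copy verified"
```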

Files: Plesk-related

Again, as we previously decided to consider ha-node1 the active node, the following commands should be executed on ha-node1.

ha-node1# mount -t nfs -o "hard,timeo=600,retrans=2,_netdev" 10.0.0.12:/var/nfs/plesk-ha/plesk_files /nfs/plesk_files

ha-node1# mkdir -p /nfs/plesk_files/etc/{apache2,nginx,psa,sw,sw-cp-server,sw-engine,domainkeys,psa-webmail}
ha-node1# cp -a /etc/passwd /nfs/plesk_files/etc/
ha-node1# cp -aR /etc/apache2/. /nfs/plesk_files/etc/apache2
ha-node1# cp -aR /etc/nginx/. /nfs/plesk_files/etc/nginx
ha-node1# cp -aR /etc/psa/. /nfs/plesk_files/etc/psa
ha-node1# cp -aR /etc/sw/. /nfs/plesk_files/etc/sw
ha-node1# cp -aR /etc/sw-cp-server/. /nfs/plesk_files/etc/sw-cp-server
ha-node1# cp -aR /etc/sw-engine/. /nfs/plesk_files/etc/sw-engine
ha-node1# cp -aR /etc/domainkeys/. /nfs/plesk_files/etc/domainkeys
ha-node1# cp -aR /etc/psa-webmail/. /nfs/plesk_files/etc/psa-webmail

ha-node1# mkdir -p /nfs/plesk_files/var/{spool,named}
ha-node1# cp -aR /var/named/. /nfs/plesk_files/var/named
ha-node1# cp -aR /var/spool/. /nfs/plesk_files/var/spool

ha-node1# mkdir -p /nfs/plesk_files/opt/plesk/php/{7.4,8.2}/etc
ha-node1# cp -aR /opt/plesk/php/7.4/etc/. /nfs/plesk_files/opt/plesk/php/7.4/etc
ha-node1# cp -aR /opt/plesk/php/8.2/etc/. /nfs/plesk_files/opt/plesk/php/8.2/etc

ha-node1# mkdir -p /nfs/plesk_files/usr/local/psa/{admin/conf,admin/plib/modules,etc/modules,var/modules,var/certificates}
ha-node1# cp -aR /usr/local/psa/admin/conf/. /nfs/plesk_files/usr/local/psa/admin/conf
ha-node1# cp -aR /usr/local/psa/admin/plib/modules/. /nfs/plesk_files/usr/local/psa/admin/plib/modules
ha-node1# cp -aR /usr/local/psa/etc/modules/. /nfs/plesk_files/usr/local/psa/etc/modules
ha-node1# cp -aR /usr/local/psa/var/modules/. /nfs/plesk_files/usr/local/psa/var/modules
ha-node1# cp -aR /usr/local/psa/var/certificates/. /nfs/plesk_files/usr/local/psa/var/certificates

ha-node1# umount /nfs/plesk_files

Event Handler to Keep /etc/passwd up2date

We need to update the passwd and group files on the NFS storage every time Plesk updates the system users. For that, we will create a number of event handlers for events such as domain creation, subscription updates, and so on. Event handlers are stored in the Plesk database, which means we need to run the following commands only on the active node.

As we previously decided to consider ha-node1 the active node, the following commands should be executed on ha-node1.

ha-node1# plesk bin event_handler --create -command "/bin/cp /and so forth/passwd /nfs/plesk_files/and so forth/passwd" -priority 50 -user root -event phys_hosting_create
ha-node1# plesk bin event_handler --create -command "/bin/cp /and so forth/passwd /nfs/plesk_files/and so forth/passwd" -priority 50 -user root -event phys_hosting_update
ha-node1# plesk bin event_handler --create -command "/bin/cp /and so forth/passwd /nfs/plesk_files/and so forth/passwd" -priority 50 -user root -event phys_hosting_delete

ha-node1# plesk bin event_handler --create -command "/bin/cp /and so forth/passwd /nfs/plesk_files/and so forth/passwd" -priority 50 -user root -event ftpuser_create
ha-node1# plesk bin event_handler --create -command "/bin/cp /and so forth/passwd /nfs/plesk_files/and so forth/passwd" -priority 50 -user root -event ftpuser_update
ha-node1# plesk bin event_handler --create -command "/bin/cp /and so forth/passwd /nfs/plesk_files/and so forth/passwd" -priority 50 -user root -event ftpuser_delete

ha-node1# plesk bin event_handler --create -command "/bin/cp /and so forth/passwd /nfs/plesk_files/and so forth/passwd" -priority 50 -user root -event site_subdomain_create
ha-node1# plesk bin event_handler --create -command "/bin/cp /and so forth/passwd /nfs/plesk_files/and so forth/passwd" -priority 50 -user root -event site_subdomain_update
ha-node1# plesk bin event_handler --create -command "/bin/cp /and so forth/passwd /nfs/plesk_files/and so forth/passwd" -priority 50 -user root -event site_subdomain_delete
ha-node1# plesk bin event_handler --create -command "/bin/cp /and so forth/passwd /nfs/plesk_files/and so forth/passwd" -priority 50 -user root -event site_subdomain_move
