High Availability Using DRBD

Let’s take a look at the first option, which creates a high availability environment between two VitalPBX instances.

In this High Availability environment, we will be using DRBD, or Distributed Replicated Block Device. High availability means keeping critical systems and services available with minimal downtime in case of a failure, and DRBD enables real-time data replication between nodes to ensure data availability and integrity.
First, let’s look into the requirements for this type of High Availability.

  • Physical Infrastructure and Networking – Two or more identical nodes (servers) to
    implement redundancy. These need to have the same hardware specifications and
    VitalPBX licensing. This means that if you are using a Carrier Plus license, each server
    will need its own license. This will ensure that both servers have the same permissions
    when the configurations are being replicated. We also need a reliable and low-latency
    network connection between the nodes. This can be a dedicated replication network
    (preferred) or a shared network if it’s of high quality.
  • Operating System / VitalPBX version – Nodes should run the same operating
    system using the same version and the same version of VitalPBX.
  • Disk Partitioning – The storage device to be replicated should be partitioned and
    accessible on both nodes. Each node should have sufficient storage space to
    accommodate replication and data.
  • Network Configuration – Each node should have static IP addresses and resolve
    correctly in the local DNS system or in the /etc/hosts file of the other node. Host
    names should be consistent across nodes.
  • DRBD – Install and configure DRBD on both nodes. Configure DRBD resources that
    define which devices will be replicated and how replication will be established. At
    installation time, leave the largest portion of the disk unallocated on both servers to
    store the variable data that will be replicated. Define the node roles: primary and
    secondary nodes.

With these requirements met and understood, we can start by installing Debian and VitalPBX on two servers. You can start by following the Installation Section for this guide. When you get to the partitions part of the installation, you must select Guided – use the entire disk.

Next, select the option All files in one partition (recommended for new users).

On the next screen select the #1 Primary partition to delete it.

Delete the #1 Primary partition to create Free Space.

With the partition deleted, we select the pri/log FREE SPACE option.

You will now select how to use the free space. Select the Create a new partition option.

Now, change the capacity of this partition to 20GB. This partition is solely for the OS and its applications. We make sure that it has enough space for the future. As a rule of thumb, this partition must be at least 20GB or 10% of your total storage space. So if you have a 1TB drive, you would allocate 100GB, for example.

We then define this partition as a Primary Partition. Afterward, we will select the location for this partition to be the Beginning.

With this set, we will be shown a summary of the changes to this partition. Select the option Done setting up the partition.

Next, we are shown the partitions to be set on the drive. Select the option Finish partitioning and write changes to disk.

Later we will be using the rest of the FREE SPACE that is available.

Finally, we are shown a summary of the changes to be made on the drive. Select Yes to the question: Write the changes to disks.

You can then proceed with the installation as normal, following the steps in the Installation Section for this guide. This includes the installation of VitalPBX using the VPS installation script.

Remember, the installation process with the partitioning needs to be done twice. One for each server in our high-availability environment.

With the installation done, we can start configuring our servers. It is a good idea to write down the networking information beforehand so we can work more orderly in our high-availability environment. For this guide, we will be using the following information.

Name             Primary Server              Secondary Server
Hostname         vitalpbx-primary.local      vitalpbx-secondary.local
IP Address       192.168.10.31               192.168.10.32
Netmask          255.255.255.0               255.255.255.0
Gateway          192.168.10.1                192.168.10.1
Primary DNS      8.8.8.8                     8.8.8.8
Secondary DNS    8.8.4.4                     8.8.4.4

Next, we will allow remote access using the root user on both servers. This will allow us to SSH login with the root user.
From the CLI, we will use nano to edit the sshd_config file.

Change the following line.

With the following.
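The lines themselves are not shown above; on a stock Debian install, the change in /etc/ssh/sshd_config is typically from the commented default:

#PermitRootLogin prohibit-password

to:

PermitRootLogin yes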

Save the changes and exit nano. Then, restart the sshd service

root@debian:~# systemctl restart sshd

With this, you can now SSH login with the root user and password. This will make it easier to copy and paste the commands from this guide. Remember, this has to be done on both servers.

Once you are logged in with an SSH connection, we will set the static IP addresses for both servers. For this, we will use nano to modify the interfaces configuration file.

root@debian:~# nano /etc/network/interfaces

Here, change the following lines.

For the Primary Server, enter the following.

For the Secondary Server, enter the following.

Note: Your installation may have a different name for the primary network interface. Make sure that you are using the correct name for your interface.
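As a reference, a minimal static stanza for the Primary Server, assuming the interface is named eth0 and using the addresses from the table above, might look like this (use 192.168.10.32 on the Secondary Server; the DNS servers may also need to be set in /etc/resolv.conf):

auto eth0
iface eth0 inet static
    address 192.168.10.31
    netmask 255.255.255.0
    gateway 192.168.10.1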

Next, we will install dependencies on both servers.
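The package list is not shown here; based on the tools used later in this lesson, the set installed on Debian would be something along these lines (the exact package names are an assumption):

root@debian:~# apt -y install drbd-utils corosync pacemaker pcs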

With the dependencies installed, we must set the hostnames for both VitalPBX servers. For this, we go to Admin > System Settings > Network Settings in the VitalPBX Web UI.

After setting the hostname, click the green Save button. With the hostname set from the Web UI, we will now configure the hostnames in the hosts file for each server.

Set the hostname on the Primary Server with the following command.

And in the Secondary Server as follows.
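Neither command is shown here; one way to do this is with hostnamectl, using the hostnames from the table above:

root@debian:~# hostnamectl set-hostname vitalpbx-primary.local

and on the Secondary Server:

root@debian:~# hostnamectl set-hostname vitalpbx-secondary.local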

Afterward, on both servers modify the hosts file using nano.

Add the following lines.
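Based on the addressing table above, the entries would be:

192.168.10.31    vitalpbx-primary.local
192.168.10.32    vitalpbx-secondary.local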

This way, both servers will be able to see each other using their hostnames.
Now, we will create a new partition to allocate the rest of the available space for both servers.
For this, we will use the fdisk command.

Answer as follows on the presented prompts.
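The individual prompts are not reproduced here. As a rough sketch, creating a third primary partition that uses all remaining space on /dev/sda (the disk name is an assumption, consistent with the sda3 partition referenced later in this guide) goes like this:

root@debian:~# fdisk /dev/sda
  n        (create a new partition)
  p        (make it a primary partition)
  3        (partition number)
  Enter    (accept the default first sector)
  Enter    (accept the default last sector, using all remaining space)
  w        (write the partition table and exit)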

Then restart both servers so that the new table is available.

With the servers rebooted, we will proceed with the HA (High Availability) cluster configuration.

Now, we will create an authorization key for the access between both servers. This way, we can access both servers without entering credentials every time.

Create an authorization key in the Primary Server.

Next, create an authorization key in the Secondary Server.
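The commands are not shown here; a common approach is ssh-keygen plus ssh-copy-id, for example on the Primary Server (and the mirror of it, pointing at 192.168.10.31, on the Secondary Server):

root@debian:~# ssh-keygen -t rsa
root@debian:~# ssh-copy-id root@192.168.10.32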

Now, we can proceed in two ways: using a script we made, or following the manual steps. If you proceed with the script, you can skip all the steps until you reach the add-on installation in the next lesson.

Afterward, you can download and run the following script from the Primary Server, using these commands.

You will then be prompted to enter the information for the servers in the cluster.

Note: The hacluster password can be anything of your liking. It does not have to be an existing password for any user in any node.

Note: Before doing any high-availability testing, make sure that the data has finished synchronizing. To do this, use the cat /proc/drbd command.

The script will start configuring the HA cluster for you. CONGRATULATIONS! You now have a high availability environment with VitalPBX 4!

The following steps are if you want to proceed with the cluster configuration manually, rather than using the provided script. You can skip these steps if you decide to use the script and proceed to the add-on installation in the next lesson.

To configure the HA cluster manually, first, we need to configure the Firewall. This can be done by adding the services and rules from the VitalPBX Web UI. Here is the list of services we will configure. This needs to be configured in both servers.

  • TCP 2224 – This port is needed by the pcsd Web UI and required for node-to-node communication. It is crucial to open port 2224 in such a way that pcs from any node can talk to all nodes in the cluster, including itself. When using the Booth cluster ticket manager or a quorum device you must open port 2224 on all related hosts, such as Booth arbiters or the quorum device host.
  • TCP 3121 – Pacemaker’s crmd daemon on the full cluster nodes will contact the pacemaker_remoted daemon on Pacemaker Remote nodes at port 3121. If a separate interface is used for cluster communication, the port only needs to be open on that interface. At a minimum, the port should open on Pacemaker Remote nodes to full cluster nodes. Because users may convert a host between a full node and a remote node, or run a remote node inside a container using the host’s network, it can be useful to open the port to all nodes. It is not necessary to open the port to any hosts other than nodes.
  • TCP 5403 – Required on the quorum device host when using a quorum device with corosync-qnetd. The default value can be changed with the -p option of the corosync-qnetd command.
  • UDP 5404 – Required on corosync nodes if corosync is configured for multicast UDP.
  • UDP 5405 – Required on all corosync nodes (needed by corosync).
  • TCP 21064 – Required on all nodes if the cluster contains any resources requiring DLM (such as clvm or GFS2).
  • TCP, UDP 9929 – Required to be open on all cluster nodes and booth arbitrator nodes to connections from any of those same nodes when the Booth ticket manager is used to establish a multi-site cluster.
  • TCP 7789 – Required by DRBD to synchronize information.

In the VitalPBX Web UI for both servers, go to Admin > Firewall > Services. Add the services listed above by clicking the Add Service button.

With all the services added, Apply Changes.

Next, we go to Admin > Firewall > Rules to add the rules to ACCEPT all the services we just created.

With all the rules added, Apply Changes. Remember, you need to add the services and rules to both servers’ firewalls.

Now, let’s create a directory where we are going to mount the volume with all the information to be replicated in both servers.

Afterward, we will format the new partition we made in both servers using the following commands.

With all of this done, we can proceed to configure DRBD on both servers. Start by loading the module and enabling the service in both nodes using the following command.
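The command itself is not shown; with drbd-utils on Debian this would typically be:

root@debian:~# modprobe drbd
root@debian:~# systemctl enable drbd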

Then, create a new global_common.conf file in both servers.

Add the following content.

Save and Exit nano. Next, create a new configuration file called drbd0.res for the new resource named drbd0 in both servers using nano.

Add the following content.
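The file contents are not reproduced here. A minimal sketch of drbd0.res, assuming the replicated partition is /dev/sda3 (the partition created earlier with fdisk), the DRBD port 7789 from the firewall table, and the hostnames and addresses defined above, would be:

resource drbd0 {
  protocol C;
  on vitalpbx-primary.local {
    device /dev/drbd0;
    disk /dev/sda3;
    address 192.168.10.31:7789;
    meta-disk internal;
  }
  on vitalpbx-secondary.local {
    device /dev/drbd0;
    disk /dev/sda3;
    address 192.168.10.32:7789;
    meta-disk internal;
  }
}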

Save and Exit nano.

Note: Although the access interface (in this case, eth0) can be used, it is recommended to use a dedicated interface (e.g., eth1) for synchronization. This interface must be directly connected between both servers.

Now, initialize the metadata storage in each node by executing the following command in both servers.
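With drbd-utils, initializing the metadata for the drbd0 resource is done with:

root@debian:~# drbdadm create-md drbd0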

Afterward, define the Primary Server as the DRBD primary node first.
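A typical sequence on the Primary Server is to bring the resource up and force it into the primary role for the initial synchronization:

root@debian:~# drbdadm up drbd0
root@debian:~# drbdadm primary --force drbd0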

Then, on the Secondary Server, run the following command to start the drbd0.
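On the Secondary Server this is simply:

root@debian:~# drbdadm up drbd0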

You can check the current status of the synchronization while it is being performed, using the following command.

Here is an example of the output of this command.

In order to test the DRBD functionality, we must create a file system, mount the volume, write some data in the Primary Server, and finally switch the primary node to the Secondary Server.

Run the following commands in the Primary Server to create an XFS file system on the /dev/drbd0 device, and mount it to the /vpbx_data directory.
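Assuming xfsprogs is installed, this amounts to:

root@debian:~# mkfs.xfs /dev/drbd0
root@debian:~# mount /dev/drbd0 /vpbx_data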

Create some data using the following command in the Primary Server.

Run the following command to list the content of the /vpbx_data directory.

The command will return the following list.

Now, let’s switch the primary node “Primary Server” to the secondary node “Secondary Server” to check if the data replication works.

We will need to unmount the volume drbd0 in the Primary Server and change it from the primary node to the secondary node, and we will turn the Secondary Server into the primary node.

In the Primary Server, run the following commands.
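That is, unmount the volume and demote the node:

root@debian:~# umount /vpbx_data
root@debian:~# drbdadm secondary drbd0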

Change the secondary node to the primary node, by running this command on the Secondary Server.
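That is:

root@debian:~# drbdadm primary drbd0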

In the Secondary Server, mount the volume and check if the data is available with the following command.

The command should return something like this.

As you can see the data is being replicated, since these files were created in the Primary Server, and we are seeing them in the Secondary Server.

Now, let’s normalize the Secondary Server. Unmount the volume drbd0 and set it as the secondary node. In the Secondary Server, run the following commands.

Then, normalize the Primary Server. Turn it into the primary node, and mount the drbd0 volume to the /vpbx_data directory. In the Primary Server, run the following commands.

With the replication working, let’s configure the cluster for high availability. Create a password for the hacluster user on both servers.

Note: The hacluster password can be anything of your liking. This does not have to be a password for the root or any other user.

Then, start the PCS service on both servers, using the following command.

We must enable the PCS, Corosync, and Pacemaker services to start on both servers, with the following commands.
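The commands are not shown above; with systemd these two steps would be:

root@debian:~# systemctl start pcsd
root@debian:~# systemctl enable pcsd corosync pacemaker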

Now, let’s authenticate as the hacluster user using PCS Auth in the Primary Server. Enter the following commands.
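With the pcs version shipped in current Debian (0.10 or later), the authentication would look something like this, where the password is the hacluster password set earlier:

root@debian:~# pcs host auth vitalpbx-primary.local vitalpbx-secondary.local -u hacluster -p YourHAClusterPassword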

The command should return the following.

Next, use the PCS cluster setup command in the Primary Server to generate and synchronize the corosync configuration.
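A sketch of this step, assuming an arbitrary cluster name of cluster_vpbx:

root@debian:~# pcs cluster setup cluster_vpbx vitalpbx-primary.local vitalpbx-secondary.local --force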

Start the cluster in the Primary Server, with the following commands.
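With pcs this is typically:

root@debian:~# pcs cluster start --all
root@debian:~# pcs cluster enable --all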

Note: It’s recommended to prevent resources from moving after recovery. In most circumstances, it is highly desirable to prevent healthy resources from being moved around the cluster. Moving resources always requires a period of downtime. For complex services such as databases, this period can be quite
long.

To prevent resources from moving after recovery, run this command in the Primary Server.

Now, create the resource for the use of a Floating IP Address, with the following commands in the Primary Server.
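The floating IP address itself is not defined in the table above, so purely as an illustration, assuming 192.168.10.30 as the shared address on a /24 network and virtual_ip as the resource name, the resource could be created with the IPaddr2 agent like this:

root@debian:~# pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.10.30 cidr_netmask=24 op monitor interval=30s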

Then, create the resource to use DRBD, using the following commands in the Primary Server.

Next, create the file system for the automated mount point, using the following commands in the Primary Server.

Stop and disable all services on both servers, using the following commands.

Create the resource for the use of MariaDB in the Primary Server, using the following commands.

Change the MariaDB Path on the Secondary Server as well, using the following command.

Now, run the following commands in the Primary Server to create the MariaDB resource.

Set the paths for the Asterisk service in both servers, using the following commands.

Next, create the resource for Asterisk in the Primary Server, using the following commands.

Copy the Asterisk and VitalPBX folders and files to the DRBD partition in the Primary Server using the following commands.

Now, configure the symbolic links on the Secondary Server with the following commands.

Then, create the VitalPBX Service in the Primary Server, using the following commands.

Create the Fail2Ban Service in the Primary Server, using the following commands.

Initialize the Corosync and Pacemaker services in the Secondary Server with the following commands.

Note: All configurations are stored in the /var/lib/pacemaker/cib/cib.xml file.

Now let’s see the cluster status by running the following command in the Primary Server.

This command will return the following.

Note: Before doing any high availability testing, make sure that the data has finished synchronizing. To do this, use the cat /proc/drbd command.

With our cluster configured, we now must configure the bind address. Managing the bind address is critical when using multiple IP addresses on the same NIC (Network Interface Card).
This is our case when using a Floating IP address in this HA cluster. In this circumstance, Asterisk has a habit of listening for SIP/IAX on the virtual IP address, but replying on the base address of the NIC, causing phones and trunks to fail to register.

In the Primary Server, go to Settings > Technology Settings > PJSIP Settings, and configure the Floating IP address in the Bind and TLS Bind fields.

Now that the Bind address is set, we will create the bascul command in both servers. This command will allow us to easily move the services between nodes, essentially allowing us to switch between the Primary and Secondary Servers.

To start creating the bascul command, we can begin by downloading the following file using wget on both servers.

Or we can create it from scratch using nano on both servers.

And add the following content.

Save and Exit nano. Next, add permissions and move to the /usr/local/bin directory using the following commands in both servers.

Now, create the Role command in both servers. You can download the following file using wget.

Or you can create the file using nano on both servers.

Add the following content.

Save and Exit nano. Next, we copy it to the /etc/profile.d/ directory and set its permissions, using the following commands on both servers.

Now, add execution permissions and move to the /usr/local/bin directory using the following commands on both servers.

Afterward, we will create the drbdsplit command in both servers. Split-brain occurs when both high-availability nodes switch into the primary role while disconnected, which can be caused by intervention from cluster management software or by human error during a failure of the network links between cluster nodes. This behavior allows data to be modified on either node without being replicated on the peer, leading to two diverging sets of data that can be difficult to merge. The drbdsplit command allows us to recover from split-brain in case it ever happens to us. To create the drbdsplit command, we can download the following file using the wget command on both servers.

Or we can create it from scratch using nano on both servers.

Add the following content.

Save and Exit nano. Now, add permissions and move it to the /usr/local/bin directory using the following commands on both servers.

With this, you have a full high availability environment! CONGRATULATIONS, you now have high availability with VitalPBX 4.

Installing Add-Ons in High Availability

Now we can see how to install add-ons in a high-availability environment. We will take an in-depth look into the process of adding add-ons that have a service running on the servers. This includes the Sonata Suite, VitXi, OpenVPN, and Maintenance add-on modules. These add-ons require special steps for installation in high-availability environments.

Let’s begin with the Sonata Suite. The first application of Sonata Suite we will look into installing is Sonata Switchboard.

Before proceeding, make sure that the Primary Server is set as a Master. You can use the role command to verify this.

First, go to the Floating IP Address on your browser. There, install Sonata Switchboard as normal from Admin > Add-Ons > Add-Ons. Then, go to the Sonata Switchboard URL, which is https://FloatingIPAddress/switchboard by default. Follow the installation wizard and finish installing Sonata Switchboard.

Next, copy the files that we are going to replicate from the Primary Server to the Secondary Server. Run the following commands in the Primary Server.

Afterward, run the bascul command on the Primary Server to turn the Secondary Server into the Master.

Now, go back to the Floating IP Address on your browser. You will now be navigating on the Secondary Server. Here, install Sonata Switchboard as you did in the Primary Server by installing the add-on and following the Wizard.

Then, create the symbolic link for the settings.conf file in the Secondary Server.

Finally, execute the bascul command once again from the Primary Server to return the cluster back to normal where the Primary Server is set as the Master.

With this, you now have Sonata Switchboard installed in your high-availability environment.

Let’s move on to Sonata Recordings. The process will be very similar to Sonata Switchboard, with the difference of adding the Sonata Recordings service automation at the end.

First, go to the Floating IP Address on your browser. Here, install Sonata Recordings from the add-ons module under Admin > Add-Ons > Add-Ons. Once installed, go to the Sonata Recordings URL, which is https://FloatingIPAddress/recordings by default. There, follow the wizard to finish with the installation.

Next, copy the files that we are going to replicate from the Primary Server to the Secondary Server. Run the following commands on the Primary Server.

Now, execute the bascul command on the Primary Server to turn the Secondary Server into the Master.

Afterward, go back to the Floating IP Address on your browser. You will now be navigating the Secondary Server. Here, install Sonata Recordings as you did on the Primary Server.

Then, create the symbolic link for the settings.conf file on the Secondary Server.

Now, execute the bascul command on the Primary Server to return the cluster to normal where the Primary Server is the Master.

Finally, add the Sonata Recordings service automation. Run the following commands on both servers.

Then, run the following commands on the Primary Server.

With this, you now have Sonata Recordings installed in your high-availability environment.

Let’s move towards Sonata Billing. The process will be exactly the same as the Sonata Switchboard.

First, go to the Floating IP Address and install Sonata Billing from the add-ons module under Admin > Add-Ons > Add-Ons. Once installed, go to the Sonata Billing URL, which is https://FloatingIPAddress/billing by default. There, follow the wizard and finish the installation.

Next, copy the files that we are going to replicate from the Primary Server to the Secondary Server. Run the following commands on the Primary Server.

Now, execute the bascul command on the Primary Server to turn the Secondary Server into the Master.

Go back to the Floating IP Address on your browser. You will now be navigating the Secondary Server. Here, install Sonata Billing as you did in the Primary Server.

Then, create the symbolic link for the settings.conf file on the Secondary Server.

Finally, execute the bascul command once again from the Primary Server to return the cluster back to normal where the Primary Server is set as the Master.

With this, you now have Sonata Billing installed in your high-availability environment.

Let’s continue with Sonata Stats. This process is similar to Sonata Recordings but has some additional files to copy.

First, go to the Floating IP Address on your browser and install Sonata Stats from the add-ons module under Admin > Add-Ons > Add-Ons. Then, go to the Sonata Stats URL which is https://FloatingIPAddress/stats by default. There, follow the wizard to finish with the installation. Next, copy the files that we are going to replicate from the Primary Server to the Secondary Server. Run the following commands on the Primary Server.

Now, execute the bascul command on the Primary Server to turn the Secondary Server into the Master.

Go back to the Floating IP Address on your browser. You will now be navigating the Secondary Server. Here, install Sonata Stats as you did in the Primary Server.

Then, create the symbolic link for the .env file on the Secondary Server.

Now, execute the bascul command on the Primary Server to return the cluster to normal where the Primary Server is the Master.

Finally, add the Sonata Stats service automation. Run the following commands on both servers.

Then, run the following commands in the Primary Server.

With this, you now have Sonata Stats installed in your high-availability environment.

Let’s finish the Sonata Suite with Sonata Dialer. The process is similar to Sonata Stats.

First, go to the Floating IP Address on your browser, and install Sonata Dialer from the add-ons module under Admin > Add-Ons > Add-Ons. Once installed, go to the Sonata Dialer URL which is https://FloatingIPAddress/dialer by default. There, follow the wizard to finish the installation.

Next, copy the files that we are going to replicate from the Primary Server to the Secondary Server. Run the following commands on the Primary Server.

Now, execute the bascul command on the Primary Server to turn the Secondary Server into the Master.

Go back to the Floating IP Address on your browser. You will now be navigating the Secondary Server. Here, install Sonata Dialer as you did in the Primary Server.

Then, create the symbolic links for the .env file on the Secondary Server.

Now, execute the bascul command on the Primary Server to return the cluster to normal where the Primary Server is the Master.

Finally, add the Sonata Dialer service automation. Run the following commands on both servers.

Then, run the following commands in the Primary Server.

With this, you now have Sonata Dialer installed in your high-availability environment. Now all of the Sonata Suite is up and running in your high-availability environment!

Let’s move towards VitXi, our WebRTC solution.

Beforehand, make sure that your cluster is normalized with the Primary Server set as the Master. You can use the role command to verify this.

First, go to the Floating IP Address on your browser, and follow the installation procedure for VitXi from VitXi’s manual. VitXi installation requires you to install the add-on from Admin > Add-Ons > Add-ons, make various VitalPBX configurations, and follow the installation wizard from VitXi’s URL which is https://FloatingIPAddress/webrtc by default. This process is detailed in VitXi’s manual.

With VitXi completely installed and VitalPBX configured, copy the files that we are going to replicate from the Primary Server to the Secondary Server. Run the following commands on the Primary Server.

Run the bascul command on the Primary Server to turn the Secondary Server into the Master.

Go back to the Floating IP Address on your browser. You will now be navigating the Secondary Server. Here, install VitXi as you did in the Primary Server.

Then, create the symbolic links for the .env file on the Secondary Server.

Now, execute the bascul command on the Primary Server to return the cluster to normal where the Primary Server is the Master.

Finally, add the VitXi service automation. Run the following commands on both servers.

Then, run the following commands in the Primary Server.

With this, you now have VitXi installed in your high-availability environment!

We can now move towards the OpenVPN add-on module.

First, go to the Floating IP Address on your browser, and install the OpenVPN add-on from the add-ons module under Admin > Add-Ons > Add-Ons.

Once installed, copy the files that we are going to replicate from the Primary Server to the Secondary Server. Run the following commands on the Primary Server.

Run the bascul command on the Primary Server to turn the Secondary Server into the Master.

Go back to the Floating IP Address on your browser. You will now be navigating the Secondary Server. Here, install the OpenVPN add-on as you did in the Primary Server.

Then, create the symbolic link for the OpenVPN Path on the Secondary Server.

Finally, add the OpenVPN service automation. Run the following commands on both servers.

Then, run the following commands in the Primary Server.

With this, you now have OpenVPN installed in your high-availability environment!

Let’s move toward the Maintenance add-on. This is the final add-on that requires special steps for installation in a high-availability environment.

First, go to the Floating IP Address on your browser and install the Maintenance add-on module from the add-ons module under Admin > Add-Ons > Add-Ons.

Once installed, execute the bascul command on the Primary Server to turn the Secondary server into the Master.

Go back to the Floating IP Address on your browser. You will now be navigating the Secondary Server. Install the Maintenance add-on as you did on the Primary Server.

Afterward, run the bascul command on the Primary Server once again to return the cluster to normal, where the Primary Server is set as the Master.

Because the Maintenance cron file is copied to a shared directory at /etc/cron.d/vpbx_maintenace that has other system cron services active, we recommend first creating the maintenance tasks on the Primary Server, then using the bascul command to switch the server roles, and finally creating the same maintenance tasks on the Secondary Server.

Since the databases are synchronized, it is only necessary to go to Admin > Tools > Maintenance and click on Save and Apply Changes.

With this, you now have the Maintenance add-on installed in a high-availability environment.

These are all the add-ons that require special attention and steps for installation in a High Availability environment. The rest of the add-ons can be installed similarly to how the Maintenance add-on was installed. The general steps for the rest of the add-ons are as follows.

  1. Verify that the server cluster is normalized where the Primary Server is set as the Master. This can be verified with the role command.
  2. Go to the Floating IP Address on your browser.
  3. Install the add-on on the Primary Server from the add-ons module under Admin > Add-Ons > Add-Ons.
  4. Use the bascul command to switch the servers’ roles.
  5. Install the add-on on the Secondary Server from the add-ons module under Admin > Add-Ons > Add-Ons.
  6. Use the bascul command again to normalize the server cluster and have the Primary Server as the Master.

With this done, you now have a full high-availability environment with add-ons in VitalPBX 4!

DRBD High-Availability Monitoring and Updates

Now, let’s take a look into additional configurations and processes you can do to troubleshoot your high-availability environment using DRBD.

Let’s look into a custom context that will allow us to monitor the current state of the high-availability environment from any registered extension.

First, we will create a new context using nano. Run the following command from the Primary Server.

Add the following content.
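The original context is not reproduced here. As a minimal sketch of the idea (answer the call and spell out which node is currently active), it could look like the following; the context name and announcement method are assumptions, and the context must be reachable from (included in) the dialplan used by your extensions:

[custom-ha-status]
exten => *777,1,Answer()
 same => n,Wait(1)
 same => n,SayAlpha(${SHELL(hostname -s | tr -d '\n')})
 same => n,Hangup()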

Instead of *777, you can use any other code of your liking that doesn’t interfere with your numbering plan.

For us to be able to use this context, we must restart the Asterisk Dial Plan. Run the following command from the Primary Server.
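This is usually a dialplan reload from the Asterisk CLI:

root@debian:~# asterisk -rx "dialplan reload"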

To test the custom context, dial *777 or your code of choosing from any registered extension.

Note: If you must turn off the servers for any reason, always try to start with the Primary Server when turning them back on. Wait for the Primary Server to fully start, and then turn on the Secondary Server.

Let’s look into updating VitalPBX or any Add-On. The steps for updating VitalPBX or any add-on are as follows.

  1. Go to the Floating IP Address on your browser, and log in as the administrator.
  2. Update VitalPBX from the Web UI. Under the user menu in the upper right-hand corner, click on Check for Updates.
  3. Run the bascul command on the Primary Server. This will turn the Secondary Server into the Master.
  4. Go back to the Floating IP Address on your browser, and log in as the administrator.
  5. Update VitalPBX from the Web UI.
  6. Run the bascul command again to normalize the server cluster where the Primary Server is the Master.

With this, you will have updated both servers. Basically, you update the Primary Server first, then switch to the Secondary Server and update it next. Then, you normalize the cluster.

Changing One of the Servers in DRBD High Availability

There could be occasions where you need to change one of the servers. This can be due to one of them getting damaged or hardware modification.

Let’s look into changing the Secondary Server. First, we will proceed to destroy the cluster on the Primary Server.

Next, if we still have access to the Secondary Server, we destroy the cluster as well on the Secondary Server.

Since the DRBD volume is unmounted when the cluster is destroyed, we must mount it again manually on the Primary Server to avoid interrupting the normal operation of our services. Run the following commands on the Primary Server.

Now, enable the services on the Primary Server to make sure our server continues to work as normal.

At this moment, the Primary Server is now working as normal. We must now configure the replica with our new server.

First, prepare the new server following the steps for the installation in Section 13, Lesson 1, with the disk partition and the free space. Then enable remote access with the root user. Change the IP address to a static IP address. Install the dependencies. Change the hostname in the Web UI and CLI using the same hostname as the old Secondary Server. Create the sda3 partition using the fdisk command. And finally, configure the firewall with the ports established in the table from Lesson 1. These are the steps for the manual HA setup presented in Lesson 1. This should only be configured in the new Secondary Server.

Now, we will proceed to format the new partition on the new Secondary Server using the following commands.

Load the DRBD module and enable the service on the new Secondary Server using the following commands.

Next, we must create a new configuration file called drbd0.res in the /etc/drbd.d directory on the new Secondary Server for the new resource named drbd0.

Add the following content.

Note: Although the access interface (in this case, eth0) can be used, it is recommended to use a dedicated interface (eth1) for synchronization. This interface must be directly connected between both servers.

Initialize the metadata storage on each server by executing the following command on the new Secondary Server.

Run the following command on the new Secondary Server to start the drbd0.

You can check the current status of the synchronization using the cat /proc/drbd command.

Once the synchronization is done, you can follow the steps from Section 13, Lesson 1 for configuring the cluster starting with the hacluster user and password up to the creation of the fail2ban service. Then, follow the steps for creating the bascul command and the creation of the role command. All of this is for the new Secondary Server.

After this, you will now have replaced the old Secondary Server with a new one!

Now, let’s look into changing the Primary Server. This method differs from changing the Secondary Server.

First, we proceed to destroy the cluster in the Primary Server if we still can. Run the following commands in the Primary Server.

Next, we do the same in the Secondary Server with the following commands.

Since the DRBD unit is unmounted when the cluster is destroyed, we must mount it again manually on the Secondary Server to avoid interrupting the normal operation of our services. Run the following commands in the Secondary Server.

Now, enable the services on the Secondary Server to make sure that our server continues to work as normal.

With this, our Secondary Server is now working as normal. We now proceed to configure the replica with our new server.

First, prepare the new server following the steps for the installation in Section 13, Lesson 1, with the disk partition and the free space. Then enable remote access with the root user. Change the IP address to a static IP address. Install the dependencies. Change the hostname in the Web UI and CLI using the same hostname as the old Primary Server. Create the sda3 partition using the fdisk command. And finally, configure the firewall with the ports established in the table from lesson 1. These are the steps for the manual HA setup presented in lesson 1. This should only be configured in the new Primary Server.

Now, proceed to format the new partition on the new Primary Server using these commands.

Then, load the module on the new Primary Server and enable the service with the following commands.

Next, back up the original global_common.conf file, and create a new one using nano on the Primary Server. Use the following commands on the new Primary Server.

Add the following content.

Next, we must create a new configuration file called drbd0.res in the /etc/drbd.d directory on the new Primary Server for the new resource named drbd0.

Add the following content.

Note: Although the access interface (in this case, eth0) can be used, it is recommended to use a dedicated interface (eth1) for synchronization. This interface must be directly connected between both servers.

Initialize the metadata storage on each server by executing the following command on the new Primary Server.

On the new Primary Server, run the following command to start the drbd0.

You can check the current status of the synchronization while it’s being performed. The cat /proc/drbd command displays the creation and synchronization progress of the resource.

Once the synchronization is done, you can follow the steps from Section 13, Lesson 1 for configuring the cluster starting with the hacluster user and password up to the creation of the fail2ban service. Then, follow the steps for creating the bascul command and the creation of the role command. All of this is for the new Primary Server.

After this, you will now have replaced the old Primary Server with a new one!

With this, you now have the necessary tools to run VitalPBX in a high-availability environment. Allowing you to rest assured that you have two servers working together for minimal to no downtime.

DRBD High Availability Useful Commands

root@debian:~# bascul

This command is used to change roles between the high-availability servers. If all is well, a confirmation question should appear asking if we wish to execute the action. The bascul command permanently moves services from one server to another. If you want to return the services to the main server, you must execute the command again.

This command shows the status of the current server.

This command is used to poll all resources even if the status is unknown.

This command stops the server node where it is running. The node status is stopped.

root@debian:~# pcs node unstandby host

In some cases, the bascul command does not finish the switchover, which leaves one of the servers on standby (stopped); this command restores the node to its normal state.

This command removes the resource so it can be created again.

This command creates the resource.

root@debian:~# corosync-cfgtool -s

This command is used to check whether cluster communication is happy.

root@debian:~# ps axf

This command confirms that Corosync is functional, so we can check the rest of the stack. Since Pacemaker has already been started, verify that the necessary processes are running.

This command allows you to check the pcs status output.

This command allows you to check the validity of the configuration.

This command shows the integrity status of the disks that are being shared between both servers in high availability. If for some reason the status shows Connecting or StandAlone, wait a while; if the state remains, there are synchronization problems between both servers, and you should execute the drbdsplit command.

This command shows you the state of your device, which is kept in /proc/drbd.

This command shows you the status on the other node (the split-brain survivor). If its connection state is also StandAlone, you will need to enter the corresponding recovery command on that node as well.

This command is another way to check the role of the block device.

This command allows you to switch the DRBD block device to Primary using drbdadm.

This command allows you to switch the DRBD block device to Secondary using drbdadm.

This script creates the cluster automatically.

This script will completely destroy the cluster, leaving the DRBD intact.

This script recreates the cluster starting from the fact that the DRBD is already configured on both servers.

High Availability using Proxmox™

Another way to create a high-availability environment is by using Proxmox™ VE. Proxmox VE is a complete open-source server management platform for enterprise virtualization. It tightly integrates the KVM hypervisor and Linux Containers (LXC), software-defined storage, and networking functionality into a single platform. With the built-in web-based user interface, you can manage virtual machines and containers, high availability for clusters, or the built-in disaster recovery tools with ease. You can explore Proxmox as an option to create your VitalPBX instances in your local data center infrastructure.

To start using Proxmox VE for an HA environment, there are some prerequisites that need to be fulfilled.

  • System Requirements
  • For production servers, high-quality server equipment is needed. Proxmox VE supports clustering, which means that multiple installations of Proxmox VE can be centrally managed thanks to the built-in cluster functionality. Proxmox VE can use local storage (DAS), SAN, and NAS, as well as shared and distributed storage (Ceph).
  • Recommended Hardware
  • CPU – Intel EMT64 or AMD64 with Intel VT/AMD-V CPUs running at 3.5 GHz.
  • Memory – At least 4 GB for OS services and Proxmox VE. More memory needs to be designated for guests. For Ceph or ZFS additional memory is required, approximately 1GB of memory for every TB of storage used.
  • Fast and redundant storage – Better results with SSD drives.
  • OS storage Hardware RAID with battery-protected write cache (“BBU”) or non-RAID with ZFS cache and SSD.
  • VM storage – For local storage, use hardware RAID with battery-backed write cache (BBU) or no RAID for ZFS. Neither ZFS nor Ceph supports a hardware RAID controller. Shared and distributed storage is also possible.
  • Redundant Gigabit NICs – Additional NICs depending on preferred storage technology and cluster configuration: 10 Gbit and higher are also supported.
  • For PCI(e) pass-through – a CPU with CPU flag VT-d/AMD-d is required.
  • For Testing (Minimal Hardware for Testing Only)
  • CPU 64-bit (Intel EMT64 or AMD64)
  • Motherboard – Intel VT/AMD-V compatible CPU/motherboard (for full KVM virtualization support)
  • Memory – Minimum 2 GB of RAM
  • Storage – 500GB HDD
  • Network – One Gigabit NIC

Additionally, we are going to need an NFS storage system. In this example, we are going to use another server working as a NAS server.

With these prerequisites fulfilled, we can proceed to install Proxmox VE.

Proxmox needs a clean hard drive because it will remove all partitions and data from the hard drive during installation.

First, download Proxmox VE’s ISO image from Proxmox’s official website, https://proxmox.com/en/downloads/category/iso-images-pve.

If you are going to install it on physical hardware, you must flash the ISO image on a USB drive of at least 8GB. We recommend the Balena Etcher software, https://www.balena.io/etcher/.

Afterward, we can install Proxmox VE on dedicated hardware using the USB drive. Once we boot into the image we will be greeted with the following screen.

Select the Install Proxmox VE option, and press Enter. After a few seconds, the EULA will appear. Click on the I agree button.

Afterward, you must select the hard disk where you want to install Proxmox. Once selected,
click on Next.

On the next screen, enter your location, time zone, and language. Then click on Next.

Now you must enter the password for the root user which is used both in the Proxmox web interface and in the server console via SSH. Since this is a very sensitive user, we recommend a fairly complex password.

For the Email address, you can use any email address of your liking. Click on Next.
Afterward, we must enter our Networking configurations.

  • Management Interface – Select the interface through which we will manage
    Proxmox.
  • Hostname (FQDN) – A valid FQDN is recommended. For local testing, we will use
    a .local domain.
  • IP Address (CIDR) – Enter the IP address for the Proxmox server, using the CIDR
    format for the netmask.
  • Gateway – Enter the default gateway for the Proxmox server.
  • DNS Server – Enter the IP address for a DNS server to solve for server names. i.e.
    8.8.8.8

Once the network has been configured, click on Next.
Finally, you will be presented with a summary of your installation configurations. Verify the information and click on Install.

The installation will proceed and may take a few minutes depending on your hardware.

Once the installation is done, enter the IP Address plus the port 8006 on your browser, i.e. 192.168.20.220:8006.

Another option to install Proxmox is by installing a minimal installation of Debian. For this, you can follow the Debian installation instructions in Section 1, Lesson 1. During the installation, set the hostname for your Proxmox installation. Later we will show you the scheme we used for our example.

Once you have Debian installed, make sure you have SSH and WGET installed.

Then, make the root user available to log in via SSH, using nano.

Go to the following line.

Change it to

Save and Exit nano.

Next, we must set the static IP address for this server using nano.

Go to the following line.

Save and Exit nano.

Note: Your interface name and IP Address will vary from these instructions. This information is based on the scheme we present later. Make sure that you are using the interface name and appropriate IP Address for your environment.

Now, we must make changes to the hosts file so the hostname resolves the server IP Address. Use nano to modify the hosts file.

Go to the following line.

Change it to the following.

Save and Exit nano. You can check your configurations with the following command.

With this setup, we must add the Proxmox VE repositories. Run the following command.

Next, add the Proxmox repository key as root.

For Debian 11.

For Debian 12.

Afterward, update the repository and system by running the following command.

Then, install the Proxmox kernel.

For Debian 11.

For Debian 12.

Reboot the system.

Once the system has rebooted, install the Proxmox packages using the following command.

For Debian 11.

For Debian 12.

Then, remove the original Linux Kernel.

For Debian 11.

For Debian 12.

Afterward, update the GRUB.

root@debian:~# update-grub

The os-prober package will scan all the partitions to create dual-boot GRUB entries. This can include partitions assigned to virtual machines, which we don’t want to add boot entries for. If you didn’t install Proxmox as a dual-boot with another OS, it is recommended to remove the os-prober package using the following command.

Finally, reboot the system using the following command.

Once the system reboots, go to the Proxmox IP Address using port 8006 on your browser, i.e. 192.168.20.220:8006. Here, login using the root user and password.

Go to Datacenter > Your Proxmox node > System > Network, and click Create. Here, create a Linux Bridge and call it vmbr0. Add the IP Address of your server with the netmask in CIDR format, the gateway, and enter the network interface name under the Bridge port. Then click OK.

Note: During this step, you may have to remove the IP address from the Network interface listed here as well. For this, you must click the interface, and then click on Edit. There, you can remove the IP Address and Gateway. Then, click OK.

After the installation, you will see a message saying “You do not have a valid subscription for this server.” To remove this message, you will need to purchase a valid subscription. For this, you can go to the following link, https://www.proxmox.com/en/proxmox-ve/pricing.

For High Availability, you must complete this process three times on three separate servers. Ideally with similar hardware specifications. This is due to us needing what is called a quorum. The cluster of Proxmox servers will vote on which server the virtual machines will be transferred to.

For the latest instructions on how to install Proxmox in Debian, you can check the Proxmox wiki at https://pve.proxmox.com/wiki/Category:Installation.

For this example, we will be creating and using the following.

  • 3 Physical Servers with similar hardware specifications.
  • 16GB of Memory
  • 256GB of Storage
  • 4-Core CPU
  • A NAS Server with enough capacity to store all the Virtual Machines we are going to create.

We will be using the following scheme.

  • Server 1
  • Name: prox1.vitalpbx.local
  • IP Address: 192.168.20.220

  • Server 2
  • Name: prox2.vitalpbx.local
  • IP Address: 192.168.20.221

  • Server 3
  • Name: prox3.vitalpbx.local
  • IP Address: 192.168.20.222

  • NFS Server
  • Name: truenas.vitalpbx.local

This diagram shows the infrastructure we are going to be building. We have the three Proxmox servers and an NFS server where we are going to be storing our Virtual Machines.

The next step is to configure our NFS storage. For this example, we will be using a TrueNAS server to create our NFS shares. You can use any NAS server that allows you to create an NFS share.

Since this part will vary based on the NAS or storage server you choose to use, we will only go over briefly what needs to be configured.

On our TrueNAS installation, we will first create a new user and password that we are going to use to log into our NFS share. We go to Accounts > Users.

Here, a new group has been created as well that we have named proxmox. You can verify this under Accounts > Groups.

With the user and group created, we then created a new Pool with a Dataset where we are going to be storing our Virtual Machines. For this, we go to Storage > Pools.

Then, we edit the permissions for the dataset we named data, add our group, and modify the Access Mode to allow us to read and write data to the dataset. Remember to check the Apply Group box before clicking on Save.

Next, we go to Sharing > Unix Shares (NFS) to create our NFS share. We point this share to the dataset we created previously.

Finally, remember to have the networking configured so you can reach this server from our Proxmox nodes. Under Network > Global Configurations, we configure our DNS servers, hostname, and Default Gateway.

And under Network > Interfaces, we configure the static IP Address for our server.

With this, you now have an NFS share available to store your Virtual Machines in Proxmox.

Now that we have three Proxmox servers and our NAS server with the NFS shares configured, we can configure our connection to the NAS server and create our cluster for High Availability using Proxmox.

Go to the first Proxmox node and go to Datacenter > Storage. Click Add and select NFS. Here, we configure the ID to identify this NFS share, add the server IP address, set the path for our data set under Export, and select the content we want to include in this storage.

Under node, you will only see the current proxmox node available. You can go to every node and add the NFS share using the same ID. You can also come back to this Share on the first node after we create the cluster, and add the other two nodes afterward.

Next, we configure our Cluster. On the first node, go to Datacenter > Cluster and click on Create Cluster. Once you enter the Cluster Name, click on Create.

Go back to Datacenter > Cluster, and click on Join Information. This will show you the information we need to add to the other two nodes. Click on Copy Information.

Go to the second Proxmox node, go to Datacenter > Cluster, and click Join Cluster. Here, you will need to enter the Join Information for the first node.

You will need to enter the root password for the first Proxmox node here. Click on Join ‘proxmox-cluster’. After a few minutes, the second node is added to the cluster.

Repeat these two steps on the third Proxmox node, by adding the Join Information and root password from the first node. Then click on Join ‘proxmox-cluster’ and wait a few minutes.

Now, we will create a VitalPBX instance. On the first Proxmox node, go to Datacenter > prox1 > truenas (prox1). Here, click on CT Templates and then click on debian-11-standard.  Then, click on Download.

With this, we can now create a container where we are going to install VitalPBX 4. Go back to Datacenter > prox1 > truenas (prox1) and click on Create CT in the upper right-hand corner.

First, in the General tab, we are going to add the Hostname and root Password.

Next, on the Template tab, select the NFS Storage we created and the Template we downloaded.

Then, on the Disks tab, select the NFS Storage and enter the Disk size in GiB for the Virtual Machine.

Afterward, on the CPU tab, enter the number of cores you want to give to this Virtual Machine.

Under the Memory tab, enter the amount of Memory and Swap in MiB you wish to give to this Virtual Machine.

Then, on the Network tab, enter a static IP address and Gateway for the Virtual Machine.

Under the DNS tab, you can configure the DNS servers the Virtual Machine will use. You can leave these fields blank to use the host settings.

Finally, you can verify the configurations under the Confirm tab.

Click on Finish, and wait a few minutes. The instance with Debian 11 is now ready.

With the Debian 11 instance created, we can install VitalPBX 4 on it. On the left navigation menu, you will now see the instance listed. Right-click the instance, and click on Start.

Once started, a console window will pop up. Run the following commands to install VitalPBX.

After a few minutes, VitalPBX will have been installed and the Virtual Machine will reboot. You now have a VitalPBX instance on your Proxmox server!

With the instance created, you can now migrate it around the cluster between nodes. You can do this Hot (With the server ON) or Cold (With the server OFF).

To perform a Cold Migration, turn off the VitalPBX instance, by running a poweroff command from the console.

Right-click the instance, and click Migrate. You will be presented with a prompt selecting where you want to migrate the instance.

Once the migration is complete, you will see the following message.

The instance will now be in the second Proxmox node.
For a Hot Migration, while the VitalPBX instance is turned on in the first Proxmox node, right-click on it and click Migrate.

You will see that the mode now says Restart Mode. This means that the server will be restarted once it has migrated.
After the migration, you will see the following message.

With the migration done, the instance is now on the second Proxmox node already started.

Now that we have a VitalPBX instance that we can migrate between nodes, we can set up our cluster to work in High Availability in Proxmox.

Go to Datacenter > HA > Resources and click Add. Here, enter the VitalPBX instance’s ID number and click Add.

You will see that the cluster has the quorum OK and that the VitalPBX instance is now a resource for High Availability.

To test the High Availability, you can turn off the first Proxmox node and after some time the VitalPBX instance will automatically move from the first node to another one.

The time it takes to move the instance will depend on your server hardware and network speed.

When turning the first Proxmox node off, you will need to access the cluster from the second or third node to manage the cluster.

With this done, you now have a full High Availability Environment using Proxmox!

One thing that differs between HA with Proxmox and using DRBD is that with Proxmox, there is no need to have multiple VitalPBX licenses. Since in this case, it is the same machine that is migrated over multiple Proxmox nodes, the licensing is moved alongside the virtual machine.

Multi-Instance with Proxmox

We saw previously how you can have a Multi-Tenant environment with VitalPBX. Now, let’s look at the other side of the coin, deploying a Multi-Instance server using Proxmox. This is quite simple as Proxmox VE is a virtual machine environment.

For this, you can follow the steps from the previous lesson to create a container. Once you have your container, install VitalPBX. With VitalPBX installed, you can now power off the container/virtual machine, and right-click the instance on the Proxmox navigation menu on the left. This will show you the Create Template action. By creating a template, you can easily deploy a new VitalPBX instance to your liking and as your server supports it.

Now, you can have many VitalPBX instances running. When running multiple instances, keep in mind the resources allocated to each one. Each VitalPBX instance will be completely independent, and each will need its own license. As such, each instance will need to be updated on its own.

With the template created, to deploy a new instance, you can right-click the instance again and click Clone.

When you create a CT template, you can no longer start that machine, so this first instance can only be used for cloning into new instances. Once cloned, a new instance will have the IP address that was set on the original template, so we recommend setting the network address for the newly cloned CT or VM as the first thing you do, and using a different hostname for each new instance you create.
