8.0 Re-imaging a HARC System to UCX 7.0
Section Contents
8.1 Split the HARC Cluster
8.2 Temporarily assign the Cluster IP to one of the nodes
8.3 Create a backup on an external drive
8.4 Re-image the "inactive" system
8.5 Assign the Cluster IP to the re-imaged Release 7.0 system
8.6 Re-image the remaining UCX Release 6.0 system
8.7 Release the Cluster IP address from the active system
8.8 Re-configure HARC on one of the two re-imaged Release 7.0 systems
8.9 Update InfinityOne (if deployed)
8.0 Re-imaging a UCX Release 6.0 system running HARC to UCX Release 7.0
Steps to re-image a UCX Release 6.0 operating with HARC (High Availability) to UCX Release 7.0 are as follows:
8.1 Split the HARC Cluster
A UCX Release 6.0 system running in HARC mode cannot be re-imaged directly to UCX Release 7.0. Follow the steps for Splitting a Cluster during initial configuration as documented in High Availability - Web-based Configuration Utility Rls 6.0.
After the cluster is split, both systems will be accessible via their originally assigned IP addresses, and the Cluster IP address will not be assigned to either system.
8.2 Temporarily assign the Cluster IP to one of the nodes
Devices such as IP telephones, DSM16s, FXS16s, PRI cards, IP Trunks, and third-party gateways are programmed to connect to the Cluster IP address. Once you split the cluster, communication with these devices and interfaces will be temporarily interrupted. E-MetroTel therefore recommends reassigning the Cluster IP address to one of the nodes still running UCX Release 6.0 to maintain service during the maintenance window.
This system will become the "active" system.
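After reassigning the Cluster IP, it is worth confirming that devices can reach it again before proceeding. A minimal sketch of such a check is below; it simply tests whether a TCP service answers at a given address and port. The address and port you would test are site-specific, and this generic check is not part of the UCX tooling.

```python
import socket


def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unreachable hosts.
        return False


# Example (hypothetical Cluster IP and HTTPS port):
# tcp_reachable("192.168.1.100", 443)
```

Run this from a machine on the same network segment as the cluster; a False result immediately after reassignment may simply mean the service is still starting.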
8.3 Create a backup on an external drive
You only need to do this on one system, as both systems are identical after running in HARC mode. Note that if your system has redundant hard drives, you may temporarily remove one of the drives, as it already contains a full system backup.
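Before relying on the copy on the external drive, it is prudent to verify that it matches the original backup file. A simple way, sketched below, is to compare SHA-256 checksums; the file paths shown are hypothetical, and the backup itself is created through the UCX web interface, not by this script.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()


def verify_copy(original: Path, copy: Path) -> bool:
    """Return True if both files have identical contents."""
    return sha256_of(original) == sha256_of(copy)


# Example (hypothetical paths):
# verify_copy(Path("/tmp/ucx-backup.tar"), Path("/media/usb/ucx-backup.tar"))
```

A checksum mismatch means the copy is unusable for the restore in step 5.6 and the backup should be copied again.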
8.4 Re-image the "inactive" system
Follow the standard update procedures outlined in sections 1 to 6 of this document (Release 7.0 Update Process) to update the "inactive" system (the one without the Cluster IP address) to UCX Release 7.0. This includes updating the system with the latest updates, retrieving the new Release 7.0 licenses, and installing any required add-ons (including the High Availability add-on).
After verifying that the update is complete, you may wish to make another backup of the active system and use this newly created Release 6.0 backup when restoring the database on the inactive system, as part of step 5.6 Restore the original Telephony Configuration described in 5.0 Completing the Reimage to UCX 7.0 Process.
8.5 Assign the Cluster IP to the re-imaged Release 7.0 system
You can now temporarily assign the Cluster IP address to the newly re-imaged Release 7.0 system. This maintains service while the second system is re-imaged.
8.6 Re-image the remaining UCX Release 6.0 system
Follow the standard re-image procedures outlined in sections 1 to 6 of this document (Release 7.0 Update Process) to re-image the remaining UCX Release 6.0 system (the one without the Cluster IP address) to UCX Release 7.0. This includes updating the system with the latest updates, retrieving the new Release 7.0 licenses, and installing any required add-ons (including the High Availability add-on).
On this system, however, DO NOT restore the database.
8.7 Release the Cluster IP address from the active system
Before re-configuring HARC, release the Cluster IP address from the system currently holding it, so that the address is once again unassigned on both systems.
8.8 Re-configure HARC on one of the two re-imaged Release 7.0 systems
At this point, both systems are re-imaged and ready to be re-configured for High Availability (HARC). To configure HARC, follow the steps identified in High Availability - Installation and Configuration Rls 7.0.
Once the cluster is created, the two systems will synchronize and the HARC configuration is complete.
8.9 Update InfinityOne (if deployed)
Refer to 6.0 Completing the Update to InfinityOne 4.0 for steps to update InfinityOne to Release 4.0. This can be completed while the two nodes are synchronizing.