
Operating System Patch Management

This page provides guidance for applying security patches to the Windows, CentOS, and Rocky Linux operating systems used for the LogRhythm SIEM and NetMon. Each section describes how to patch both online (internet-connected) and offline systems.

With the exception of NetMon appliances, LogRhythm recommends applying the latest security patches from the OS vendor. Patch the underlying OSs as part of your regular patching cycle, per your internal patching policy.

Patch Standard Deployments (XM or PM)

Microsoft Windows Operating System

To run online Windows Updates from the Microsoft repositories, follow the steps in this Microsoft article: https://support.microsoft.com/en-us/windows/get-the-latest-windows-update-7d20e88c-0568-483a-37bc-c3885390d212

To download updates from the Windows catalog and distribute them manually, follow the steps in this Microsoft article: https://support.microsoft.com/en-gb/help/323166/how-to-download-updates-that-include-drivers-and-hotfixes-from-the-win

SQL Server

The latest SQL Server service packs and SQL Server Management Studio (SSMS) versions are fully tested and supported by LogRhythm.

For the latest updates per SQL Server version, see https://learn.microsoft.com/en-us/troubleshoot/sql/releases/download-and-install-latest-updates.

For the latest SSMS versions, see https://learn.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms.

To update a DR or HA deployment, see the Patch Disaster Recovery-Enabled Deployments or Patch High Availability-Enabled Deployments sections below.

To update HA+DR combined solutions, see the Patch HA+DR Combined Solution Deployments section below.

Rocky Linux Data Indexer (DX)

  • LogRhythm DX servers should be patched with Rocky "baseos" repositories only. Other repositories should not be created or enabled on DX servers as they can interfere with the DX or Common Services installers. Other packages, including Elasticsearch, must only be updated as part of a LogRhythm upgrade. If you detect any vulnerabilities after following these instructions, please contact LogRhythm Support.
  • Currently, LogRhythm supports Rocky 9.2 and 9.3. Future versions of 9.x are expected to continue to be supported.

Update Online Systems

To update online systems with Rocky Linux repositories, update DX servers against the Rocky official base repository. Additional repositories should not be used for DX patching.

BASH
sudo yum --disablerepo=* --enablerepo=baseos update

Before confirming the transaction, validate that no LogRhythm packages are listed for upgrade (see the sketch below). If a kernel update is installed, a reboot may be necessary.
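
The following pre-flight check is a minimal sketch. It assumes the standard baseos repository ID and that LogRhythm and Elasticsearch package names contain "logrhythm" or "elasticsearch"; adjust the patterns to your deployment.

BASH
# List pending updates from the Rocky baseos repository only (exit code 100 means updates are available)
sudo yum --disablerepo=* --enablerepo=baseos check-update

# Confirm no LogRhythm or Elasticsearch packages are pending (assumed package-name patterns)
sudo yum --disablerepo=* --enablerepo=baseos check-update | grep -iE 'logrhythm|elasticsearch' || echo "No LogRhythm packages pending"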

Update Offline Systems

To update offline systems, choose one of these three methods:

  • Locally mount the latest Rocky 9.x official image file (updated quarterly) and update from it (see the sketch after this list).
  • Copy repository/package files you need to update to the DX Servers.
  • Configure your own local repository.
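
A minimal sketch of the ISO-based approach follows. It assumes the Rocky 9 DVD image has been copied to /home/logrhythm/ (the filename shown is a placeholder), is mounted at /media/Rocky, exposes a BaseOS directory, and that the standard Rocky 9 GPG key is installed on the appliance; verify the paths, filename, and repo ID against your image before running it.

BASH
# Create a mount point and mount the Rocky 9 DVD image (assumed path and filename)
sudo mkdir -p /media/Rocky
sudo mount -o loop /home/logrhythm/Rocky-9.x-x86_64-dvd.iso /media/Rocky/

# Define a temporary repository pointing at the BaseOS directory on the mounted image
sudo tee /etc/yum.repos.d/Rocky-Media.repo <<'EOF'
[rocky-media-baseos]
name=Rocky Linux - Media BaseOS
baseurl=file:///media/Rocky/BaseOS
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Rocky-9
enabled=0
EOF

# Update from the media repository only, then unmount
sudo yum --disablerepo=* --enablerepo=rocky-media-baseos update
sudo umount /media/Rocky/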

CentOS-Based Data Indexer (DX)

  • LogRhythm DX servers should be patched with CentOS base repositories only. Other repositories should not be created or enabled on DX servers. Other packages, including Elasticsearch, must only be updated as part of a LogRhythm upgrade. If you detect any vulnerabilities after following these instructions, please contact LogRhythm Support.
  • Currently, LogRhythm only supports CentOS version 7. DXs should not be upgraded to CentOS version 8.
  • Before patching using CentOS repositories, LogRhythm 7.1.x must be upgraded to 7.2.x or later.

Because the CentOS yum repositories lack the necessary metadata, "yum-plugin-security" is non-functional on CentOS. Instead, run yum update, which updates all packages, including security patches.

Update Online Systems

To update online systems with CentOS repositories, update DX servers against the CentOS official base repository. Additional repositories should not be used for DX patching.

BASH
sudo yum --disablerepo=* --enablerepo=base,updates update

Update Offline Systems

To update offline systems, choose one of these three methods:

  • Use the CentOS official image file (see the Use the CentOS Official Image File section below).
  • Copy repository/package files to the DX servers.
  • Configure your own local repository.

There are multiple ways to install and configure a local repository for CentOS Linux. The following instructions are provided as an example.

Use the CentOS Official Image File

Updating CentOS using an .iso file is a fast way to patch DX servers, as it does not require creating a local repository.

Compared to the CentOS online repositories, these images may not contain the most recent versions of packages, because CentOS releases a new ISO only about every eight weeks.
  1. Download the latest official CentOS 7.x minimal .iso file. The mirror links for CentOS images can be found at http://isoredirect.centos.org/centos/7/isos/x86_64/.
  2. Copy the .iso file to the DX appliance.
  3. If it does not already exist, create the directory /media/CentOS:

    BASH
    mkdir -p /media/CentOS
  4. Use the following command to mount the .iso locally, replacing "$FilePath.iso" with the full path of the CentOS .iso file:

    BASH
    mount -t iso9660 -o loop $FilePath.iso /media/CentOS/

    For example, if the downloaded .iso filename is "CentOS-7-x86_64-Minimal-2003.iso" and it is copied to /home/logrhythm/, the mount command would be:

    BASH
    mount -t iso9660 -o loop /home/logrhythm/CentOS-7-x86_64-Minimal-2003.iso /media/CentOS/
  5. Create or edit a local yum repository file named "/etc/yum.repos.d/CentOS-Media.repo" and update the baseurl to point to the directory where the .iso file is mounted:

    BASH
    # CentOS-Media.repo
    #
    #  This repo can be used with mounted DVD media, verify the mount point for
    #  CentOS-7.  You can use this repo and yum to install items directly off the
    #  DVD ISO that we release.
    #
    # To use this repo, put in your DVD and use it with the other repos too:
    #  yum --enablerepo=c7-media [command]
    #
    # or for ONLY the media repo, do this:
    #
    #  yum --disablerepo=\* --enablerepo=c7-media [command]
    
    [c7-media]
    name=CentOS-$releasever - Media
    baseurl=file:///media/CentOS/
            file:///media/cdrom/
            file:///media/cdrecorder/
    gpgcheck=1
    enabled=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
    
    
  6. Run the yum update command:

    BASH
    sudo yum clean all
    sudo yum --disablerepo=* --enablerepo=c7-media update
    
  7. When the update is complete, disable the repository by editing the file "/etc/yum.repos.d/CentOS-Media.repo" and changing the enabled value to 0:

    BASH
    # CentOS-Media.repo
    #
    #  This repo can be used with mounted DVD media, verify the mount point for
    #  CentOS-7.  You can use this repo and yum to install items directly off the
    #  DVD ISO that we release.
    #
    # To use this repo, put in your DVD and use it with the other repos too:
    #  yum --enablerepo=c7-media [command]
    #
    # or for ONLY the media repo, do this:
    #
    #  yum --disablerepo=\* --enablerepo=c7-media [command]
    
    [c7-media]
    name=CentOS-$releasever - Media
    baseurl=file:///media/CentOS/
            file:///media/cdrom/
            file:///media/cdrecorder/
    gpgcheck=1
    enabled=0
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
    
  8. Unmount the .iso file:

    BASH
    umount /media/CentOS/
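
As an optional sanity check, the repolist command can confirm which repositories yum sees at each stage. This is a minimal sketch using the repo ID from the file above; run the first command while the media repository is still enabled (before step 6), and the second after disabling it and unmounting.

BASH
# While the media repository is enabled, confirm yum can see it
yum --disablerepo=\* --enablerepo=c7-media repolist

# After disabling the repo and unmounting, confirm only the expected repositories remain enabled
yum repolist enabled
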
Copy Repository Files to DX Servers and Run the Update Command Against the Repository on the Server's Local Storage

Instead of accessing a repository server over the local network, a repository can be created and copied to each DX server manually. The servers are then updated from the repository on their local disk.

  1. Install a new CentOS 7.x server with internet access. To verify that the correct version of CentOS is installed, run the following command on the repository server, and then run the same command on the DX servers to confirm that the values for "arch," "releasever," "infra," and "basearch" are identical.

    BASH
    sudo python -c 'import yum, json; yb = yum.YumBase(); print json.dumps(yb.conf.yumvar,indent=2)'
  2. On the repository server, install the following packages:

    BASH
    sudo yum install createrepo yum-utils 
  3. Create a directory named repository on volume "var":

    BASH
    sudo mkdir -p /var/repository
  4. Run the following four commands separately to download and update the local repository for the base, centosplus, extras, and updates repositories. To download the most recent updates from the CentOS repository prior to any subsequent patching run, repeat these commands.

    BASH
    sudo reposync -g -l -d -m --repoid=base --newest-only --download-metadata --download_path=/var/repository/
    sudo reposync -g -l -d -m --repoid=centosplus --newest-only --download-metadata --download_path=/var/repository/
    sudo reposync -g -l -d -m --repoid=extras --newest-only --download-metadata --download_path=/var/repository/
    sudo reposync -g -l -d -m --repoid=updates --newest-only --download-metadata --download_path=/var/repository/
  5. When all packages have synced, use the createrepo command to create or update the repository metadata (repodata). Run the following command each time the reposync commands are re-run for subsequent patching.

    BASH
    sudo createrepo /var/repository/
  6. Create a tar.gz file from the repository directory. For example, create the tar file in the /home/logrhythm/ directory.

    BASH
    sudo tar czf /home/logrhythm/repository.tar.gz  /var/repository/ 
  7. Copy repository.tar.gz to the /home/logrhythm/ directory on the DX server, and then extract the tar file using the following command:

    The volume "/var/" has limited space. If the repository size is larger than the available space on the volume, extract the tar file to a different location.
    BASH
    sudo tar xzf /home/logrhythm/repository.tar.gz -C /

    The repository files should be extracted to /var/repository/ on the DX server.

  8. On the DX servers, create a new yum repository file:

    BASH
     sudo vi /etc/yum.repos.d/CentOS-Local.repo
  9. Copy the following content into CentOS-Local.repo, where the baseurl value /var/repository is the location to which repository.tar.gz was extracted.

    BASH
    [LocalRepository]
    name=LocalRepository
    baseurl=file:///var/repository
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
    enabled=1
  10. Run the yum update command on the DX server:

    BASH
    sudo yum clean all 
    sudo yum update 
  11. When the yum update is complete, edit the repo file /etc/yum.repos.d/CentOS-Local.repo again:

    BASH
     sudo vi /etc/yum.repos.d/CentOS-Local.repo
  12. To disable the repository, set the value of enabled to 0:

    BASH
    [LocalRepository]
    name=LocalRepository
    baseurl=file:///var/repository
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
    enabled=0
  13. After the update is complete, remove the repository files (see the cleanup sketch below).
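
The following cleanup is a minimal sketch, assuming the repository was extracted to /var/repository/ and the archive was copied to /home/logrhythm/; adjust the paths if you extracted elsewhere.

BASH
# Remove the extracted repository and the copied archive to reclaim space
sudo rm -rf /var/repository/
sudo rm -f /home/logrhythm/repository.tar.gz

# Optionally remove the local repo definition (it can also be left in place with enabled=0)
sudo rm -f /etc/yum.repos.d/CentOS-Local.repo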

Restart DX Servers

A new Linux kernel is only loaded into memory during the boot process. To verify whether a newly installed kernel is running, check the version of the running kernel and compare it to the list of installed versions.

Check If an OS Reboot Is Required
  1. To show the version of the currently running kernel, run the following command:

    BASH
    sudo uname -r 
  2. To show kernel versions installed on the system, run the following command:

    BASH
    sudo rpm -qa kernel
  3. If the rpm -qa output lists a newer kernel version than the running kernel, restart the OS to load the new kernel (a compact comparison sketch follows this procedure).

    Restarting a single-node DX introduces downtime for the DX, and it can take a significant amount of time to recover and resume indexing.
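
The following one-liners are a minimal sketch for comparing the running kernel against the newest installed kernel; note that rpm -q kernel --last sorts by install time rather than strictly by version, so verify the output manually if in doubt.

BASH
# Currently running kernel
uname -r

# Most recently installed kernel package (assumes the newest install is the newest version)
rpm -q kernel --last | head -n 1
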
Restart a Single-Node Cluster
  1. Stop indexing and perform a synced flush to speed up shard recovery:

    BASH
    curl -XPOST "localhost:9200/_flush/synced?pretty"
  2. Check the response to make sure there are no failures. Synced flush operations that fail due to pending indexing operations are listed in the response body, although the request itself still returns a 200 OK status. If there are failures, reissue the request (see the check sketch after this procedure).

  3. Shut down all DX services on the server:

    BASH
    sudo /usr/local/logrhythm/tools/stop-all-services-linux.sh
  4. Restart the DX server:

    Before restarting the servers, verify iDRAC access to the server.
    BASH
     sudo systemctl --force reboot
  5. When the node restarts, verify that the cluster is stable (green). This can take a significant amount of time. To monitor the cluster, use the following command:

    BASH
     watch 'curl -s "localhost:9200/_cluster/health?pretty"'
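
To spot synced flush failures without reading the entire response body, the following is a minimal sketch that uses only grep and assumes the compact (non-pretty) JSON output; the first "failed" counter is the cluster-wide total and should be 0.

BASH
# Reissue the synced flush and extract the top-level failed-shards counter (should be 0)
curl -s -XPOST "localhost:9200/_flush/synced" | grep -o '"failed":[0-9]*' | head -n 1
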
Rolling Restart a Multi-Node Cluster

The following steps should be run on one node at a time. For example, if there are three nodes in a DX cluster, run these steps on each node sequentially.

  1. Disable shard allocation:

    BASH
    curl -XPUT localhost:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d '{"transient":{"cluster.routing.allocation.enable":"none"}}'
  2. Stop indexing and perform a synced flush to speed up shard recovery:

    BASH
    curl -XPOST "localhost:9200/_flush/synced?pretty"
  3. Check the response to make sure there are no failures. Synced flush operations that fail due to pending indexing operations are listed in the response body, although the request itself still returns a 200 OK status. If there are failures, reissue the request.

  4. Stop all DX services on one server at a time:

    BASH
    sudo /usr/local/logrhythm/tools/stop-all-services-linux.sh
  5. Restart the node:

    Before restarting the node, verify iDRAC access to the server.
    BASH
     sudo systemctl --force reboot
  6. Confirm the node you restarted joins the cluster by checking the log file or by submitting a _cat/nodes request on any other node in the cluster:

    BASH
     watch 'curl -s  -XGET "localhost:9200/_cat/nodes?pretty&s=name&v"'
  7. After the node has joined the cluster, re-enable shard allocation. To enable shard allocation and start using the node, remove the cluster.routing.allocation.enable setting:

    BASH
    curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'{"transient": { "cluster.routing.allocation.enable": "all" }}'
  8. (Optional) To speed up the recovery process, increase the limit of total inbound and outbound recovery traffic and concurrent incoming shard recoveries allowed on a node:

    BASH
    curl -XPUT localhost:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d'{"transient":{"cluster.routing.allocation.node_concurrent_recoveries": 10}}'
    curl -XPUT localhost:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d '{"transient":{"indices.recovery.max_bytes_per_sec": "1000mb"}}'

    These limit increases persist unless they are removed at the end of the procedure, or all nodes are fully stopped and then started again.

    To remove these limit increases, see optional step 11.

  9. When the node has recovered and the cluster is green, repeat these steps for each node that needs to be restarted.

  10. When the node has restarted, monitor the cluster until it is stable (green), using the following command (a polling sketch also follows this procedure):

    BASH
     watch 'curl -s "localhost:9200/_cluster/health?pretty"'
  11. (Optional, recommended if step 8 was taken) Once the full rolling restart is completed, query for all currently used cluster settings, filtered for transient settings only:

    BASH
    curl -sX GET "http://localhost:9200/_cluster/settings?flat_settings=true&filter_path=transient&pretty"

    Revert all transient cluster settings:

    BASH
    curl -sX PUT "http://localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d '{"transient":{"*":null}}'
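
Rather than watching the health endpoint manually between nodes, a small loop can block until the cluster reports green. This is a minimal sketch; adjust the poll interval as needed.

BASH
# Poll cluster health every 30 seconds until the status is green
until curl -s "localhost:9200/_cluster/health" | grep -q '"status":"green"'; do sleep 30; done
echo "Cluster is green"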

LogRhythm NetMon

Patches are applied with NetMon version updates. It is recommended to update NetMon to its latest version. If additional packages need to be updated, open a ticket with LogRhythm Support to have the package added to the next available NetMon release.

Yum Update

Running a yum update on a NetMon system is not supported.


Update Firmware and Drivers on Dell EMC PowerEdge Servers

To access the most recent firmware from Dell's website, visit Dell Support (Dell service tag required): https://www.dell.com/support/home/en-us?app=products

To run firmware and driver updates for Dell servers, see https://www.dell.com/support/article/en-us/sln300662/updating-firmware-and-drivers-on-dell-emc-poweredge-servers#WhatarethedifferentsmethodstoupdateaPowerEdge.

For information on updating iDRAC and BIOS firmware using the iDRAC interface, see https://www.dell.com/support/article/en-us/sln292363/dell-poweredge-update-the-firmware-of-single-system-components-remotely-using-the-idrac?lang=en.

Patch High Availability-Enabled Deployments (XM or PM)

Microsoft Windows Operating System

Patch the Windows Server OS following your regular internal patching policy.

SQL Server

The LogRhythm HA solution employs full-disk mirroring between protected nodes to synchronize the drives that contain the PM SQL databases (both system and LogRhythm). The system databases reside on the mirrored volume for the D: drive. When a SQL Server service pack or cumulative update is applied to an instance of SQL Server, the system (and LogRhythm) databases are also updated with the build number. When the master database version is modified with the build number from the SQL Server update, that change is mirrored to the inactive nodes via DataKeeper. If the inactive node were brought online in the middle of the patching process, the SQL Server executable would recognize that the master database build number is not consistent with the version that the service executable expects. This would cause the database engine to fail to start.

It is strongly recommended to exclude all HA nodes from automatic SQL Server service packs and cumulative updates. This recommendation is applicable to SQL Server updates only—Windows security patches should still be applied per internal security policy.

On the currently inactive HA node:

  1. Open DataKeeper.

  2. Ensure that all mirror jobs are in a status of mirroring & synchronized.

  3. Right-click each mirror job, and then click Pause and Unlock All Mirrors.

  4. Start SQL Server services on the inactive HA node.

  5. Install the SQL Server updates.

  6. If a reboot is prompted:

    Reboot is recommended.
    1. Reboot the node, and then log in again.

    2. Open DataKeeper.

    3. Right-click each mirror job, and then click Pause and Unlock All Mirrors.
    4. Start SQL Server services on the inactive HA node.
    5. Proceed to step 8.

  7. If no reboot is prompted, proceed to step 8.

  8. Confirm successful SQL Server startup via the ERRORLOG and login availability via SQL Server Management Studio.

  9. Manually stop the SQL Server service on the node.

  10. (Optional) Repeat these steps for any additional inactive HA nodes.

On the currently active HA node:

  1. Open DataKeeper.

  2. Confirm that all mirrors are in a status of paused & unlocked. If necessary, right-click each mirror job, and then click Pause and Unlock All Mirrors.

  3. In the LifeKeeper GUI, set the top-level ResTag to Out of Service.

  4. Manually stop all LogRhythm Services on the node.

    Leave the SQL Server services running. They are required for the SQL Server update.
    After this step, LogRhythm services will be non-functional.
  5. Install the SQL Server updates.

  6. If reboot is prompted:

    Reboot is recommended.
    1. Reboot the node, and then log in again.

    2. Wait for SQL and all LogRhythm services to start. This could take 10-15 minutes.

    3. Proceed to step 8.

  7. If no reboot is prompted, proceed to step 8.

  8. Confirm successful SQL Server startup via the ERRORLOG and login availability via SQL Server Management Studio.

  9. Confirm that all LogRhythm services are started and operational by checking the various component logs.

  10. Open DataKeeper on the node.

  11. Right-click each mirror job, and then click Continue and Lock All Mirrors.

  12. In the LifeKeeper GUI, set the top-level ResTag to In Service.

  13. Wait for the disk mirror jobs to synchronize fully.

  14. In DataKeeper, verify that all mirror jobs are fully synchronized.

  15. (Optional) Perform a full switchover of resources to the secondary node, and then a full switchback to the primary node, to verify HA resource protection.

Patch Disaster Recovery-Enabled Deployments (XM or PM)

Microsoft Windows Operating System

Patch the Windows Server OS following your regular internal patching policy.

SQL Server

This procedure assumes that the DR failover process has been successfully tested in your environment. If this is not the case, remediate the DR failover process, and then proceed with the steps below.

DR-enabled SQL Servers should not be subject to automatic updates (such as Service Packs and Cumulative Updates) on the SQL Server instance. To avoid database corruption, these updates should be applied in a specific pattern. It is highly recommended to exclude DR-enabled LogRhythm systems from automatic patches to the SQL Server, though OS updates are still advised.

To patch SQL Server 2016:

  1. Perform the service pack or cumulative update on the secondary replica.

  2. Use the DR Control utility to perform a cluster failover of all database availability groups from the primary replica to the secondary replica.

  3. Perform the service pack or cumulative update on the primary replica.

  4. Use the DR Control utility to perform a cluster failback of all database availability groups from the secondary replica to the primary replica.

    It is recommended to perform a system reboot immediately after performing the service pack or cumulative update on the SQL Server instance, even if the installer does not call for one.

For further detail on patching SQL Server Always On availability groups, see https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/upgrading-always-on-availability-group-replica-instances?view=sql-server-2016.

Patch HA+DR Combined Solution Deployments (XM or PM)

Microsoft Windows Operating System

Before patching the Windows OS on an HA+DR combined solution, ensure the following prerequisites are met:

  • Ensure all databases are backed up.
  • Reboot each of the servers one at a time (first the secondary HA server, then the DR server) prior to any patching being applied.
  • This procedure assumes that the DR failover process has been successfully tested in your environment. If this is not the case, remediate the DR failover process, and then proceed with the steps below.
  • Prior to patching the third node (the passive primary HA server), ensure the other two servers are able to restore operational service.

On the currently inactive HA node:

  1. Open DataKeeper.

  2. Ensure that all mirror jobs are in a status of mirroring & synchronized.

  3. Right-click each mirror job, and then click Pause and Unlock All Mirrors.

  4. Check for Windows Updates.
  5. Apply updates as necessary.
  6. Reboot the inactive HA node.
  7. Repeat steps 4 through 6 if necessary.
  8. (Optional) Repeat these steps for any additional inactive HA nodes.

On the currently active HA node:

Do not make any changes to the services.

  1. Open DataKeeper.

  2. Confirm that all mirrors are in a status of paused & unlocked. If necessary, right-click each mirror job, and then click Pause and Unlock All Mirrors.

  3. In the LifeKeeper GUI, set the top-level ResTag to Out of Service on the primary server.

  4. Apply Windows patches to the HA secondary node.
  5. Apply Windows patches to the DR server.

    At this point, mirrors should be paused on the HA pair, and the patches installed on the secondary HA node and the DR server.

  6. Restart the HA Secondary Node.
  7. Once the Secondary HA node is back up, restart the DR node.
  8. Check the Client Console Deployment Monitor to ensure all heartbeats are still updating on the active server, checking the various component logs for affected services if required.

  9. Open DataKeeper on the node.

  10. Right-click each mirror job, and then click Continue and Lock All Mirrors.

  11. In the LifeKeeper GUI, set the top-level ResTag to In Service.

  12. Wait for the disk mirror jobs to synchronize fully.

  13. In DataKeeper, verify that all mirror jobs are fully synchronized.

    At this point the Secondary and DR site are patched and LK and DK should be active. The mirror must be fully synced before proceeding to the next step.

  14. Once the mirror is synced, fail over from the primary HA node to the secondary HA node.
  15. When the secondary is active, verify that the components are all checking in and there are no errors in the logs.

    Now the Secondary Node is the Active node. Repeat the above steps 8 through 13 to pause DK and LK (pause mirrors, take out ResTag) and apply the patches onto the Primary Passive node.

  16. Once the patches are applied, reboot the node and bring the DK/LK mirrors/ResTags back online.
    The HA/DR pair is restored, and you can now fail back to the primary.

SQL Server

Before patching the SQL Server on an HA+DR combined solution, ensure the following prerequisites are met:

  • Ensure all databases are backed up.
  • Reboot each of the servers one at a time (first the secondary HA server, then the DR server) prior to any patching being applied.
  • This procedure assumes that the DR failover process has been successfully tested in your environment. If this is not the case, remediate the DR failover process, and then proceed with the steps below.
  • Prior to patching the third node (the passive primary HA server), ensure the other two servers are able to restore operational service.

On the currently inactive HA node:

  1. Open DataKeeper.

  2. Ensure that all mirror jobs are in a status of mirroring & synchronized.

  3. Right-click each mirror job, and then click Pause and Unlock All Mirrors.

  4. Check for SQL Server Updates.
  5. Apply updates as necessary.
  6. Reboot the inactive HA node.
  7. Repeat steps 4 through 6 if necessary.
  8. (Optional) Repeat these steps for any additional inactive HA nodes.

On the currently active HA node:

Do not make any changes to the services.

  1. Open DataKeeper.

  2. Confirm that all mirrors are in a status of paused & unlocked. If necessary, right-click each mirror job, and then click Pause and Unlock All Mirrors.

  3. In the LifeKeeper GUI, set the top-level ResTag to Out of Service on the primary server.

  4. On the HA secondary node, stop all LogRhythm services by running the following admin PowerShell command:

    CODE
    gsv -displayname 'LogRhythm*' | stop-service
  5. Apply SQL patches (service pack/cumulative updates) to the secondary HA server.
  6. Access the DR node, and stop all LogRhythm services by running the following admin PowerShell command:

    CODE
    gsv -displayname 'LogRhythm*' | stop-service
  7. Apply SQL patches (service pack/cumulative updates) to the DR server.
  8. Reboot to bring services back online.

    At this point, mirrors should be paused on the HA pair, and the patches should be installed on the secondary HA node and the DR server.

  9. Restart the HA secondary node.
  10. Once the Secondary HA node is back up, restart the DR node, ensuring all services come back online.

    If necessary, services can be restarted using the following admin PowerShell command:

    CODE
    gsv -displayname 'LogRhythm*' | start-service
  11. Check the Client Console Deployment Monitor to ensure all heartbeats are still updating on the active server, checking the various component logs for affected services if required.

  12. Open DataKeeper on the node.

  13. Right-click each mirror job, and then click Continue and Lock All Mirrors.

  14. In the LifeKeeper GUI, set the top-level ResTag to In Service.

  15. Wait for the disk mirror jobs to synchronize fully.

  16. In DataKeeper, verify that all mirror jobs are fully synchronized.

    At this point the Secondary and DR site are patched and LK and DK should be active. The mirror must be fully synced before proceeding to the next step.

  17. Once the mirror is synced, fail over from the primary HA node to the secondary HA node.
  18. When the secondary is active, verify that the components are all checking in and there are no errors in the logs.

    Now the Secondary Node is the Active node. Repeat the above steps 11 through 16 to pause DK and LK (pause mirrors, take out ResTag) and apply the patches onto the Primary Passive node.

  19. Once the patches are applied, reboot the node and bring the DK/LK mirrors/ResTags back online.
    The HA/DR pair is restored, and you can now fail back to the primary.

