Operating System Patch Management
This page provides guidance for applying security patches to the Windows and CentOS-based operating systems used by the LogRhythm SIEM, NetMon, and Open Collector. Each section describes how to patch both online (internet-connected) and offline systems.
With the exception of NetMon appliances, LogRhythm recommends applying the latest security patches from the OS vendor. Patch the underlying operating systems as part of your regular patching cycle, per your internal patching policy.
Patch Standard Deployments (XM or PM)
Microsoft Windows Operating System
To run online Windows Updates from the Microsoft repositories, follow the steps in this Microsoft article: https://support.managed.com/kb/a2071/how-to-install-windows-updates-on-a-windows-server.aspx
To download updates from the Windows catalog and distribute them manually, follow the steps in this Microsoft article: https://support.microsoft.com/en-gb/help/323166/how-to-download-updates-that-include-drivers-and-hotfixes-from-the-win
SQL Server
The latest SQL Server service packs and SQL Server Management Studio (SSMS) versions are fully tested and supported by LogRhythm.
For the latest updates per SQL Server version, see https://docs.microsoft.com/en-us/sql/database-engine/install-windows/latest-updates-for-microsoft-sql-server?view=sql-server-ver15.
For the latest SSMS releases, see https://learn.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms?view=sql-server-ver16.
To update a DR or HA deployment, see Patch Disaster Recovery-Enabled Deployments or Patch High Availability-Enabled Deployments.
To update HA+DR combined solutions, see Patch HA+DR Combined Solution Deployments.
CentOS-Based Data Indexer (DX)
- LogRhythm DX servers must be patched with CentOS base repositories only. Other repositories should not be created or enabled on DX servers. Other packages, including Elasticsearch, must only be updated as part of a LogRhythm upgrade. If you detect any vulnerabilities after following these instructions, please contact LogRhythm Support.
- LogRhythm currently supports only CentOS version 7. Do not upgrade DXs to CentOS version 8.
- Before patching using CentOS repositories, LogRhythm 7.1.x must be upgraded to 7.2.x or later.
Because the CentOS yum repositories lack the necessary security metadata, "yum-plugin-security" is non-functional on CentOS. Instead, run yum update, which updates all packages, including security patches.
Update Online Systems
To update online systems, update the DX servers against the official CentOS base repositories only. Do not use additional repositories for DX patching:
sudo yum --disablerepo=* --enablerepo=base,updates update
Update Offline Systems
To update offline systems, choose one of these three methods:
- Using the CentOS official image file
- Creating a local repository and updating the DX servers using an HTTP server
- Copying repository files to the DX servers
The OS on LogRhythm DX appliances can also be updated from a local yum server with locally mirrored repositories. If the DX appliances do not have internet access, LogRhythm recommends creating a local repository to update the DX servers.
- To create local mirrors for updates or installs from the CentOS repositories, see https://wiki.centos.org/HowTos/CreateLocalMirror.
- To create local repositories, see https://wiki.centos.org/HowTos/CreateLocalRepos.
- When the local repository is created and added to each DX appliance, all packages can be updated.
Use the CentOS Official Image File
Updating CentOS using an .iso file is a fast way to patch DX servers, as it does not require creating a local repository.
1. Download the latest official CentOS 7.x minimal .iso file. The mirror links for CentOS images can be found at http://isoredirect.centos.org/centos/7/isos/x86_64/.
2. Copy the .iso file to the DX appliance.
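Optionally, verify the integrity of the image by comparing its checksum against the value published in the mirror's checksum file (the filename below is illustrative; match it to the image you downloaded):
# Compute the local checksum and compare it to the matching entry in the
# sha256sum.txt file published in the same mirror directory.
sha256sum /home/logrhythm/CentOS-7-x86_64-Minimal-2003.iso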
3. If it does not already exist, create the directory /media/CentOS:
sudo mkdir -p /media/CentOS
4. Mount the .iso locally with the following command, replacing "$FilePath.iso" with the full path of the CentOS .iso file:
sudo mount -t iso9660 -o loop $FilePath.iso /media/CentOS/
For example, if the downloaded .iso file is named "CentOS-7-x86_64-Minimal-2003.iso" and it is copied to /home/logrhythm/, the mount command would be:
sudo mount -t iso9660 -o loop /home/logrhythm/CentOS-7-x86_64-Minimal-2003.iso /media/CentOS/
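To confirm the image mounted correctly, you can check that the standard CentOS media layout (the repodata metadata and the Packages directory) is visible at the mount point:
# Both repodata/ and Packages/ should be listed.
ls /media/CentOS/
ls /media/CentOS/Packages | head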
5. Create or edit a local yum repository file named /etc/yum.repos.d/CentOS-Media.repo, and update the baseurl to point to the directory where the .iso file is mounted:
# CentOS-Media.repo
#
# This repo can be used with mounted DVD media, verify the mount point for
# CentOS-7. You can use this repo and yum to install items directly off the
# DVD ISO that we release.
#
# To use this repo, put in your DVD and use it with the other repos too:
#  yum --enablerepo=c7-media [command]
#
# or for ONLY the media repo, do this:
#
#  yum --disablerepo=\* --enablerepo=c7-media [command]

[c7-media]
name=CentOS-$releasever - Media
baseurl=file:///media/CentOS/
        file:///media/cdrom/
        file:///media/cdrecorder/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
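Optionally, before applying anything, you can confirm that yum sees the media repository and preview the available updates; this uses only standard yum commands against the c7-media repo ID defined above:
# Verify the c7-media repository is enabled and reachable.
sudo yum --disablerepo=\* --enablerepo=c7-media repolist
# List the packages that would be updated, without installing them.
sudo yum --disablerepo=\* --enablerepo=c7-media list updates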
6. Run the yum update command:
sudo yum clean all
sudo yum --disablerepo=* --enablerepo=c7-media update
7. When the update is complete, disable the repository by editing /etc/yum.repos.d/CentOS-Media.repo and changing the enabled value to 0:
# CentOS-Media.repo
#
# This repo can be used with mounted DVD media, verify the mount point for
# CentOS-7. You can use this repo and yum to install items directly off the
# DVD ISO that we release.
#
# To use this repo, put in your DVD and use it with the other repos too:
#  yum --enablerepo=c7-media [command]
#
# or for ONLY the media repo, do this:
#
#  yum --disablerepo=\* --enablerepo=c7-media [command]

[c7-media]
name=CentOS-$releasever - Media
baseurl=file:///media/CentOS/
        file:///media/cdrom/
        file:///media/cdrecorder/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
8. Unmount the .iso file:
sudo umount /media/CentOS/
Create a Local Repository and Update the DX Servers Using an HTTP Server
1. Install a new CentOS 7.x server with internet access. To verify that the repository server and the DX servers run the same CentOS release, run the following command on the repository server, then run the same command on the DX servers and confirm that the values for "arch," "releasever," "infra," and "basearch" are identical.
sudo python -c 'import yum, json; yb = yum.YumBase(); print json.dumps(yb.conf.yumvar,indent=2)'
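The output resembles the following (the values shown are illustrative; what matters is that they are identical across the repository server and the DX servers):
{
  "uuid": "...",
  "releasever": "7",
  "basearch": "x86_64",
  "arch": "x86_64",
  "infra": "stock"
}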
2. Enable the EPEL repository on the server so that the nginx package can be installed:
sudo yum install epel-release
3. On the repository server, install the createrepo, yum-utils, and nginx packages:
sudo yum install createrepo yum-utils nginx
4. Set the OS firewall to allow HTTP. As part of the installation, the HTTP port should be allowed by default.
sudo firewall-cmd --zone=public --permanent --add-service=http
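Note that a rule added with --permanent does not take effect until the firewall configuration is reloaded, so if the port was not already open, reload the firewall:
sudo firewall-cmd --reload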
5. Create a directory named "repository":
sudo mkdir -p /var/www/repository
6. Run the following four commands separately to download and update the local repository content for the base, centosplus, extras, and updates repositories. To download the most recent updates from the CentOS repositories before any subsequent patching run, repeat these commands.
sudo reposync -g -l -d -m --repoid=base --newest-only --download-metadata --download_path=/var/www/repository/
sudo reposync -g -l -d -m --repoid=centosplus --newest-only --download-metadata --download_path=/var/www/repository/
sudo reposync -g -l -d -m --repoid=extras --newest-only --download-metadata --download_path=/var/www/repository/
sudo reposync -g -l -d -m --repoid=updates --newest-only --download-metadata --download_path=/var/www/repository/
7. When all packages are synchronized, use the createrepo command to create or update the repository metadata (repodata). Run this command each time the reposync commands are run.
sudo createrepo /var/www/repository/
8. Rename the /etc/nginx/nginx.conf file to /etc/nginx/nginx.conf.oem:
sudo mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.oem
9. Create a new /etc/nginx/nginx.conf file:
sudo vi /etc/nginx/nginx.conf
10. Add the following content:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /var/www/repository;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            allow all;
            sendfile on;
            sendfile_max_chunk 1m;
            autoindex on;
            autoindex_exact_size off;
            autoindex_format html;
            autoindex_localtime on;
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
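The steps above write the configuration but do not start the web server. A minimal sketch for validating and starting nginx follows; the restorecon command is only relevant if SELinux is enforcing and blocks nginx from reading the new web root:
# Validate the configuration syntax before starting the service.
sudo nginx -t
# Start nginx now and enable it at boot.
sudo systemctl start nginx
sudo systemctl enable nginx
# If SELinux is enforcing, restore the default web-content context on /var/www.
sudo restorecon -Rv /var/www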
11. On the DX servers, create a new yum repository file:
sudo vi /etc/yum.repos.d/CentOS-Local.repo
12. Copy the following content into the CentOS-Local.repo file, replacing $IP in the "baseurl" value with the IP address of the offline repository server:
[LocalRepository]
name=LocalRepository
baseurl=http://$IP/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1
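Before updating, you can confirm that the DX server can reach the new repository; LocalRepository should appear in the output:
sudo yum repolist enabled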
13. Run the yum update command on the DX server:
sudo yum clean all
sudo yum update
Copy Repository Files to DX Servers and Run the Update Command Against the Repository on the Server's Local Storage
Instead of using the local network to access the repository server, a repository can be created and copied to each DX server manually. Each server is then updated from the repository files on its local disk.
1. Install a new CentOS 7.x server with internet access. To verify that the repository server and the DX servers run the same CentOS release, run the following command on the repository server, then run the same command on the DX servers and confirm that the values for "arch," "releasever," "infra," and "basearch" are identical.
sudo python -c 'import yum, json; yb = yum.YumBase(); print json.dumps(yb.conf.yumvar,indent=2)'
2. On the repository server, install the following packages:
sudo yum install createrepo yum-utils
3. Create a directory named "repository" on the /var volume:
sudo mkdir -p /var/repository
4. Run the following four commands separately to download and update the local repository content for the base, centosplus, extras, and updates repositories. To download the most recent updates from the CentOS repositories before any subsequent patching run, repeat these commands.
sudo reposync -g -l -d -m --repoid=base --newest-only --download-metadata --download_path=/var/repository/
sudo reposync -g -l -d -m --repoid=centosplus --newest-only --download-metadata --download_path=/var/repository/
sudo reposync -g -l -d -m --repoid=extras --newest-only --download-metadata --download_path=/var/repository/
sudo reposync -g -l -d -m --repoid=updates --newest-only --download-metadata --download_path=/var/repository/
5. When all packages are synchronized, use the createrepo command to create or update the repository metadata (repodata). Run this command each time the reposync commands are re-run for subsequent patching.
sudo createrepo /var/repository/
6. Create a tar.gz file from the repository directory. For example, create the tar file in the /home/logrhythm/ directory:
sudo tar czf /home/logrhythm/repository.tar.gz /var/repository/
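Optionally, list the first few entries in the archive to confirm it contains the expected paths (tar strips the leading slash, so entries appear as var/repository/...):
tar tzf /home/logrhythm/repository.tar.gz | head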
7. Copy repository.tar.gz to the /home/logrhythm/ directory on the DX server, and then extract the tar file using the following command. The /var volume has limited space; if the repository is larger than the space available on the volume, extract the tar file to a different location.
sudo tar xzf /home/logrhythm/repository.tar.gz -C /
The repository files are extracted to /var/repository/ on the DX server.
8. On the DX servers, create a new yum repository file:
sudo vi /etc/yum.repos.d/CentOS-Local.repo
9. Copy the following content into the CentOS-Local.repo file, where the "baseurl" value /var/repository is the location to which repository.tar.gz was extracted:
[LocalRepository]
name=LocalRepository
baseurl=file:///var/repository
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1
10. Run the yum update command on the DX server:
sudo yum clean all
sudo yum update
11. When the yum update is complete, edit the repo file /etc/yum.repos.d/CentOS-Local.repo again:
sudo vi /etc/yum.repos.d/CentOS-Local.repo
12. To disable the repository, set the enabled value to 0:
[LocalRepository]
name=LocalRepository
baseurl=file:///var/repository
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=0
13. After the update is complete, remove the repository files.
Restart DX Servers
The Linux kernel is loaded into memory only during the boot process. To verify whether a newly installed kernel is actually running, check the version of the running kernel and compare it to the list of installed kernel versions.
Check If an OS Reboot Is Required
To show the version of the currently running kernel, run the following command:
sudo uname -r
To show the kernel versions installed on the system, run the following command:
sudo rpm -qa kernel
If there is a newer kernel version available in the rpm -qa output, restart the OS to load the new kernel.
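For example (the kernel versions shown are illustrative), a reboot is required in the following case because a newer kernel is installed than the one currently running:
sudo uname -r
# -> 3.10.0-1127.el7.x86_64          (currently running kernel)
sudo rpm -qa kernel
# -> kernel-3.10.0-1127.el7.x86_64
# -> kernel-3.10.0-1160.el7.x86_64   (newer installed kernel; reboot to load it)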
Restarting a single-node DX introduces downtime for the DX, and it can take a significant amount of time for the cluster to recover and resume indexing.
Restart a Single-Node Cluster
1. Stop indexing and perform a synced flush to speed up shard recovery:
curl -XPOST "localhost:9200/_flush/synced?pretty"
2. Check the response to make sure there are no failures. Synced flush operations that fail due to pending indexing operations are listed in the response body, although the request itself still returns a 200 OK status. If there are failures, reissue the request.
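One quick way to spot failures is to filter the pretty-printed response for the per-index "failed" counters and confirm they are all 0:
curl -s -XPOST "localhost:9200/_flush/synced?pretty" | grep '"failed"'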
3. Shut down all DX services on the server:
sudo /usr/local/logrhythm/tools/stop-all-services-linux.sh
4. Restart the DX server. Before restarting the server, verify iDRAC access to it.
sudo systemctl --force reboot
5. When the node restarts, verify that the cluster is stable (green). This can take a significant amount of time. To monitor the cluster, use the following command:
watch 'curl -s "localhost:9200/_cluster/health?pretty"'
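An abridged example of a healthy response follows (field values are illustrative); proceed once "status" is "green" and the shard-recovery counters have drained to zero:
{
  "status" : "green",
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0
}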
Rolling Restart a Multi-Node Cluster
The following steps should be run on one node at a time. For example, if there are three nodes in a DX cluster, run these steps on each node sequentially.
1. Disable shard allocation:
curl -XPUT localhost:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d '{"transient":{"cluster.routing.allocation.enable":"none"}}'
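To confirm the setting took effect, query the cluster settings and check the transient section:
curl -s "localhost:9200/_cluster/settings?pretty"
# Expect, under "transient":
#   "cluster" : { "routing" : { "allocation" : { "enable" : "none" } } }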
2. Stop indexing and perform a synced flush to speed up shard recovery:
curl -XPOST "localhost:9200/_flush/synced?pretty"
3. Check the response to make sure there are no failures. Synced flush operations that fail due to pending indexing operations are listed in the response body, although the request itself still returns a 200 OK status. If there are failures, reissue the request.
4. Stop all DX services on one server at a time:
sudo /usr/local/logrhythm/tools/stop-all-services-linux.sh
5. Restart the node. Before restarting the node, verify iDRAC access to the server.
sudo systemctl --force reboot
6. Confirm that the restarted node joins the cluster by checking the log file or by submitting a _cat/nodes request on any other node in the cluster:
watch 'curl -s -XGET "localhost:9200/_cat/nodes?pretty&s=name&v"'
7. After the node has joined the cluster, re-enable shard allocation by removing the cluster.routing.allocation.enable setting:
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d '{"transient": { "cluster.routing.allocation.enable": "all" }}'
8. (Optional) To speed up the recovery process, increase the limits on total inbound and outbound recovery traffic and on the number of concurrent incoming shard recoveries allowed on a node:
curl -XPUT localhost:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d '{"transient":{"cluster.routing.allocation.node_concurrent_recoveries": 10}}'
curl -XPUT localhost:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d '{"transient":{"indices.recovery.max_bytes_per_sec": "1000mb"}}'
These limit increases persist unless they are removed at the end of the procedure, or unless all nodes are fully stopped and then started again. To remove them, see optional step 11.
9. When the node has recovered and the cluster is green, repeat these steps for each node that needs to be restarted.
10. After each restart, verify the cluster until it is stable (green). To monitor the cluster, use the following command:
watch 'curl -s "localhost:9200/_cluster/health?pretty"'
11. (Optional, recommended if step 8 was taken) Once the full rolling restart is complete, query all currently used cluster settings, filtered for transient settings only:
curl -sX GET "http://localhost:9200/_cluster/settings?flat_settings=true&filter_path=transient&pretty"
Then revert all transient cluster settings:
curl -sX PUT "http://localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d '{"transient":{"*":null}}'
LogRhythm NetMon
Patches are applied through NetMon version updates, so LogRhythm recommends updating NetMon to the latest version. If additional packages need to be updated, open a ticket with LogRhythm Support to request that the package be added to the next available NetMon release.
Yum Update
Running a yum update on a NetMon system is not supported.
CentOS-Based LogRhythm Open Collector
To update an online system with CentOS repositories, run the following commands:
sudo yum clean all
sudo yum update
To update an offline system with local repositories, see the following reference materials:
- To create local mirrors for updates or installs from the CentOS repositories, see https://wiki.centos.org/HowTos/CreateLocalMirror.
- To create local repositories, see https://wiki.centos.org/HowTos/CreateLocalRepos.
- For more information on creating an offline repository, see CentOS-Based Data Indexer.
Update Firmware and Drivers on Dell EMC PowerEdge Servers
To access the most recent firmware from Dell's website, go to the Dell support site (Dell service tag required): https://www.dell.com/support/home/en-us?app=products
To run firmware and driver updates for Dell servers, see https://www.dell.com/support/article/en-us/sln300662/updating-firmware-and-drivers-on-dell-emc-poweredge-servers#WhatarethedifferentsmethodstoupdateaPowerEdge.
For information on updating iDRAC and BIOS firmware through the iDRAC interface, see https://www.dell.com/support/article/en-us/sln292363/dell-poweredge-update-the-firmware-of-single-system-components-remotely-using-the-idrac?lang=en.
Patch High Availability-Enabled Deployments (XM or PM)
Microsoft Windows Operating System
Patch the Windows Server OS following your regular internal patching policy.
SQL Server
The LogRhythm HA solution employs full-disk mirroring between protected nodes to synchronize the drives that contain the PM SQL databases (both system and LogRhythm). The system databases reside on the mirrored volume for the D: drive. When a SQL Server service pack or cumulative update is applied to an instance of SQL Server, the system (and LogRhythm) databases are also updated with the new build number. When the master database version is modified with the build number from the SQL Server update, that change is mirrored to the inactive nodes via DataKeeper. If an inactive node were brought online in the middle of the patching process, the SQL Server executable would detect that the master database build number is not consistent with the version the service executable expects, and the database engine would fail to start.
On the currently inactive HA node:
1. Open DataKeeper.
2. Ensure that all mirror jobs are in a status of mirroring & synchronized.
3. Right-click each mirror job, and then click Pause and Unlock All Mirrors.
4. Start SQL Server services on the inactive HA node.
5. Install the SQL Server updates.
6. If a reboot is prompted (rebooting is recommended):
   a. Reboot the node, and then log in again.
   b. Open DataKeeper.
   c. Right-click each mirror job, and then click Pause and Unlock All Mirrors.
   d. Start SQL Server services on the inactive HA node.
   e. Proceed to step 8.
7. If no reboot is prompted, proceed to step 8.
8. Confirm successful SQL Server startup via the ERRORLOG, and confirm login availability via SQL Server Management Studio.
9. Manually stop the SQL Server service on the node.
10. (Optional) Repeat these steps for any additional inactive HA nodes.
On the currently active HA node:
1. Open DataKeeper.
2. Confirm that all mirrors are in a status of paused & unlocked. If necessary, right-click each mirror job, and then click Pause and Unlock All Mirrors.
3. In the LifeKeeper GUI, set the top-level ResTag to Out of Service.
4. Manually stop all LogRhythm services on the node. Leave the SQL Server services running; they are required for the SQL Server update. After this step, LogRhythm services will be non-functional.
5. Install the SQL Server updates.
6. If a reboot is prompted (rebooting is recommended):
   a. Reboot the node, and then log in again.
   b. Wait for SQL Server and all LogRhythm services to start. This can take 10-15 minutes.
   c. Proceed to step 8.
7. If no reboot is prompted, proceed to step 8.
8. Confirm successful SQL Server startup via the ERRORLOG, and confirm login availability via SQL Server Management Studio.
9. Confirm that all LogRhythm services are started and operational by checking the various component logs.
10. Open DataKeeper on the node.
11. Right-click each mirror job, and then click Continue and Lock All Mirrors.
12. In the LifeKeeper GUI, set the top-level ResTag to In Service.
13. Wait for the disk mirror jobs to synchronize fully.
14. In DataKeeper, verify that all mirror jobs are fully synchronized.
15. (Optional) Perform a full switchover of resources to the secondary node, and then a full switchback to the primary node, to verify HA resource protection.
Patch Disaster Recovery-Enabled Deployments (XM or PM)
Microsoft Windows Operating System
Patch the Windows Server OS following your regular internal patching policy.
SQL Server
DR-enabled SQL Server instances should not receive automatic updates (such as service packs and cumulative updates). To avoid database corruption, these updates must be applied in a specific order. It is highly recommended to exclude DR-enabled LogRhythm systems from automatic SQL Server patching; OS updates are still advised.
To patch SQL Server 2016:
1. Perform the service pack or cumulative update on the secondary replica.
2. Use the DR Control utility to perform a cluster failover of all database availability groups from the primary replica to the secondary replica.
3. Perform the service pack or cumulative update on the primary replica.
4. Use the DR Control utility to perform a cluster failback of all database availability groups from the secondary replica to the primary replica.
It is recommended to perform a system reboot immediately after performing the service pack or cumulative update on the SQL Server instance, even if the installer does not call for one.
For further detail on patching SQL Server Always On availability groups, see https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/upgrading-always-on-availability-group-replica-instances?view=sql-server-2016.
Patch HA+DR Combined Solution Deployments (XM or PM)
Microsoft Windows Operating System
Before patching the Windows OS on an HA+DR combined solution, ensure the following prerequisites are met:
- Ensure all databases are backed up.
- Reboot each of the servers one at a time (first the secondary HA server, then the DR server) before applying any patches.
- This procedure assumes that the DR failover process has been successfully tested in your environment. If it has not, remediate the DR failover process, and then proceed with the steps below.
- Before patching the third node (the passive primary HA box), ensure the other two boxes are able to restore operational service.
On the currently inactive HA node:
1. Open DataKeeper.
2. Ensure that all mirror jobs are in a status of mirroring & synchronized.
3. Right-click each mirror job, and then click Pause and Unlock All Mirrors.
4. Check for Windows updates.
5. Apply updates as necessary.
6. Reboot the inactive HA node.
7. Repeat steps 4 through 6 if necessary.
8. (Optional) Repeat these steps for any additional inactive HA nodes.
On the currently active HA node:
Do not make any changes to the services.
9. Open DataKeeper.
10. Confirm that all mirrors are in a status of paused & unlocked. If necessary, right-click each mirror job, and then click Pause and Unlock All Mirrors.
11. In the LifeKeeper GUI, set the top-level ResTag to Out of Service on the primary server.
12. Apply Windows patches to the HA secondary node.
13. Apply Windows patches to the DR server.
At this point, the mirrors should be paused on the HA pairing, and the patches installed on the secondary HA node and the DR box.
14. Restart the HA secondary node.
15. Once the secondary HA node is back up, restart the DR node.
16. Check the Client Console Deployment Monitor to ensure all heartbeats are still updating on the active server, checking the various component logs for affected services if required.
17. Open DataKeeper on the node.
18. Right-click each mirror job, and then click Continue and Lock All Mirrors.
19. In the LifeKeeper GUI, set the top-level ResTag to In Service.
20. Wait for the disk mirror jobs to synchronize fully.
21. In DataKeeper, verify that all mirror jobs are fully synchronized.
At this point, the secondary and DR sites are patched, and LifeKeeper and DataKeeper should be active. The mirror must be fully synced before proceeding to the next step.
22. Once the mirror is synced, fail over from the primary HA node to the secondary HA node.
23. When the secondary is active, verify that the components are all checking in and that there are no errors in the logs.
24. The secondary node is now the active node. Repeat steps 9 through 13 to pause DataKeeper and LifeKeeper (pause the mirrors, take the ResTag out of service) and apply the patches on the passive primary node.
25. Once the patches are applied, reboot the node, and then bring the DataKeeper mirrors and the LifeKeeper ResTag back online.
The HA/DR pair is now restored, and you can fail back to the primary node.
SQL Server
Before patching the SQL Server on an HA+DR combined solution, ensure the following prerequisites are met:
- Ensure all databases are backed up.
- Reboot each of the servers one at a time (first the secondary HA server, then the DR server) before applying any patches.
- This procedure assumes that the DR failover process has been successfully tested in your environment. If it has not, remediate the DR failover process, and then proceed with the steps below.
- Before patching the third node (the passive primary HA box), ensure the other two boxes are able to restore operational service.
On the currently inactive HA node:
1. Open DataKeeper.
2. Ensure that all mirror jobs are in a status of mirroring & synchronized.
3. Right-click each mirror job, and then click Pause and Unlock All Mirrors.
4. Check for SQL Server updates.
5. Apply updates as necessary.
6. Reboot the inactive HA node.
7. Repeat steps 4 through 6 if necessary.
8. (Optional) Repeat these steps for any additional inactive HA nodes.
On the currently active HA node:
Do not make any changes to the services.
9. Open DataKeeper.
10. Confirm that all mirrors are in a status of paused & unlocked. If necessary, right-click each mirror job, and then click Pause and Unlock All Mirrors.
11. In the LifeKeeper GUI, set the top-level ResTag to Out of Service on the primary server.
12. On the HA secondary node, stop all LogRhythm services by running the following admin PowerShell command:
gsv -displayname 'LogRhythm*' | stop-service
13. Apply SQL patches (service pack/cumulative updates) to the HA secondary node.
14. Access the DR node, and stop all LogRhythm services by running the following admin PowerShell command:
gsv -displayname 'LogRhythm*' | stop-service
15. Apply SQL patches (service pack/cumulative updates) to the DR server.
16. Reboot to bring services back online.
At this point, the mirrors should be paused on the HA pairing, and the patches installed on the secondary HA node and the DR box.
17. Restart the HA secondary node.
18. Once the secondary HA node is back up, restart the DR node, ensuring all services come back online. If necessary, services can be restarted using the following admin PowerShell command:
gsv -displayname 'LogRhythm*' | start-service
19. Check the Client Console Deployment Monitor to ensure all heartbeats are still updating on the active server, checking the various component logs for affected services if required.
20. Open DataKeeper on the node.
21. Right-click each mirror job, and then click Continue and Lock All Mirrors.
22. In the LifeKeeper GUI, set the top-level ResTag to In Service.
23. Wait for the disk mirror jobs to synchronize fully.
24. In DataKeeper, verify that all mirror jobs are fully synchronized.
At this point, the secondary and DR sites are patched, and LifeKeeper and DataKeeper should be active. The mirror must be fully synced before proceeding to the next step.
25. Once the mirror is synced, fail over from the primary HA node to the secondary HA node.
26. When the secondary is active, verify that the components are all checking in and that there are no errors in the logs.
27. The secondary node is now the active node. Repeat steps 11 through 16 to pause DataKeeper and LifeKeeper (pause the mirrors, take the ResTag out of service) and apply the patches on the passive primary node.
28. Once the patches are applied, reboot the node, and then bring the DataKeeper mirrors and the LifeKeeper ResTag back online.
The HA/DR pair is now restored, and you can fail back to the primary node.