Operating System Patch Management
This page provides guidance for applying security patches to the Windows, CentOS, and Rocky Linux operating systems used for the LogRhythm SIEM and NetMon. Each section describes how to patch both online (internet-connected) and offline systems.
With the exception of NetMon appliances, LogRhythm recommends applying the latest security patches from the OS vendor. Patch the underlying operating systems as part of your regular patching cycle, per your internal patching policy.
Patch Standard Deployments (XM or PM)
Microsoft Windows Operating System
To run online Windows Updates from the Microsoft repositories, follow the steps in this Microsoft article: https://support.microsoft.com/en-us/windows/get-the-latest-windows-update-7d20e88c-0568-483a-37bc-c3885390d212
To download updates from the Windows catalog and distribute them manually, follow the steps in this Microsoft article: https://support.microsoft.com/en-gb/help/323166/how-to-download-updates-that-include-drivers-and-hotfixes-from-the-win
SQL Server
The latest SQL Server service packs and Microsoft SQL Server Management Studio (SSMS) versions are fully tested and supported by LogRhythm.
For the latest updates per SQL server version, see https://learn.microsoft.com/en-us/troubleshoot/sql/releases/download-and-install-latest-updates.
For the latest updates per SSMS version, see https://learn.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms.
To update a DR or HA deployment, see the Patch Disaster Recovery-Enabled Deployments or Patch High Availability-Enabled Deployments sections below.
To update HA+DR combined solutions, see the Patch HA+DR Combined Solution Deployments section below.
Rocky Linux Data Indexer (DX)
- LogRhythm DX servers should be patched with Rocky "baseos" and "appstream" repositories only. Other repositories should not be created or enabled on DX servers as they can interfere with the DX or Common Services installers. Other packages, including Elasticsearch, must only be updated as part of a LogRhythm upgrade. If you detect any vulnerabilities after following these instructions, please contact LogRhythm Support.
- Currently, LogRhythm supports Rocky 9.2 and 9.3. Future 9.x versions are expected to continue to be supported. To confirm the installed release, see the check below.
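To confirm which Rocky release a DX server is running before patching, you can check the release file (a quick, optional check):
# Display the installed Rocky Linux release
cat /etc/rocky-release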
Update Online Systems
To update online systems from the Rocky Linux repositories, update DX servers against the official Rocky baseos and appstream repositories only. Additional repositories should not be used for DX patching.
sudo yum --disablerepo=* --enablerepo=baseos,appstream update
Validate that no LogRhythm packages are listed in the "Upgrading" section of the transaction summary before confirming. If a kernel update is applied, a reboot may be necessary.
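Before confirming the transaction, you can also preview the pending updates and verify that nothing LogRhythm-managed is included. A minimal sketch; the grep pattern below is only illustrative:
# List updates available from the Rocky base repositories only
sudo yum --disablerepo=* --enablerepo=baseos,appstream check-update
# Flag any LogRhythm-managed packages (for example, Elasticsearch) that would be touched
sudo yum --disablerepo=* --enablerepo=baseos,appstream check-update | grep -iE 'logrhythm|elasticsearch' || echo "No LogRhythm-managed packages pending"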
Update Offline Systems
To update offline systems, choose one of these three methods:
- Locally mount the latest Rocky 9.x official image file (updated quarterly). Instructions and examples follow the same pattern as the CentOS image-file procedure later on this page; see the sketch after this list.
- Copy repository/package files you need to update to the DX Servers.
- Configure your own local repository.
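For the image-file method, the flow mirrors the CentOS procedure later on this page: mount the image, point a temporary repository definition at it, update, and unmount. A minimal sketch, assuming the Rocky DVD image has been copied to /home/logrhythm/; the filename, mount point, and repository IDs are illustrative, and the BaseOS/AppStream paths should be verified against the image you downloaded:
# Mount the Rocky DVD image locally
sudo mkdir -p /media/Rocky
sudo mount -t iso9660 -o loop /home/logrhythm/Rocky-9.x-x86_64-dvd.iso /media/Rocky/
# Define temporary repositories that point at the BaseOS and AppStream trees on the mounted image
sudo tee /etc/yum.repos.d/Rocky-Media.repo <<'EOF'
[media-baseos]
name=Rocky Media - BaseOS
baseurl=file:///media/Rocky/BaseOS
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Rocky-9

[media-appstream]
name=Rocky Media - AppStream
baseurl=file:///media/Rocky/AppStream
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Rocky-9
EOF
# Update against the media repositories only, then clean up
sudo yum --disablerepo=* --enablerepo=media-baseos,media-appstream update
sudo umount /media/Rocky/
As with the CentOS procedure, remove or disable the temporary repo file (/etc/yum.repos.d/Rocky-Media.repo) once patching is complete.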
CentOS-Based Data Indexer (DX)
- LogRhythm DX servers should be patched with CentOS base repositories only. Other repositories should not be created or enabled on DX servers. Other packages, including Elasticsearch, must only be updated as part of a LogRhythm upgrade. If you detect any vulnerabilities after following these instructions, please contact LogRhythm Support.
- Currently, LogRhythm only supports CentOS version 7. DXs should not be upgraded to CentOS version 8.
- Before patching using CentOS repositories, LogRhythm 7.1.x must be upgraded to 7.2.x or later.
Because the necessary metadata is missing from the CentOS yum repositories, "yum-plugin-security" is non-functional on CentOS. Instead, run yum update, which updates all packages, including security patches.
Update Online Systems
To update online systems from the CentOS repositories, update DX servers against the official CentOS base and updates repositories only. Additional repositories should not be used for DX patching.
sudo yum --disablerepo=* --enablerepo=base,updates update
Update Offline Systems
To update offline systems, choose one of these three methods:
- Use the CentOS official image file (see the Use the CentOS Official Image File section below).
- Copy repository/package files to the DX servers.
- Configure your own local repository.
Use the CentOS Official Image File
Updating CentOS using an .iso file is a fast way to patch DX servers, as it does not require creating a local repository.
- Download the latest official CentOS 7.x minimal .iso file. The mirror links for CentOS images can be found at http://isoredirect.centos.org/centos/7/isos/x86_64/.
- Copy the .iso file to the DX appliance.
If it does not already exist, create the directory /media/CentOS:
sudo mkdir -p /media/CentOS
Use the following command to mount the .iso locally, replacing $FilePath.iso with the full path of the CentOS .iso file:
sudo mount -t iso9660 -o loop $FilePath.iso /media/CentOS/
For example, if the downloaded .iso file is named CentOS-7-x86_64-Minimal-2003.iso and it is copied to /home/logrhythm/, the mount command would be:
sudo mount -t iso9660 -o loop /home/logrhythm/CentOS-7-x86_64-Minimal-2003.iso /media/CentOS/
Create or edit a local yum repository file named /etc/yum.repos.d/CentOS-Media.repo and update the baseurl to point to the directory where the .iso file is mounted:
# CentOS-Media.repo
#
# This repo can be used with mounted DVD media, verify the mount point for
# CentOS-7. You can use this repo and yum to install items directly off the
# DVD ISO that we release.
#
# To use this repo, put in your DVD and use it with the other repos too:
#  yum --enablerepo=c7-media [command]
#
# or for ONLY the media repo, do this:
#
#  yum --disablerepo=\* --enablerepo=c7-media [command]

[c7-media]
name=CentOS-$releasever - Media
baseurl=file:///media/CentOS/
        file:///media/cdrom/
        file:///media/cdrecorder/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Run the yum update command:
sudo yum clean all
sudo yum --disablerepo=* --enablerepo=c7-media update
When the update is complete, disable the repository by editing the file "/etc/yum.repos.d/CentOS-Media.repo" and changing the enabled value to 0:
# CentOS-Media.repo
#
# This repo can be used with mounted DVD media, verify the mount point for
# CentOS-7. You can use this repo and yum to install items directly off the
# DVD ISO that we release.
#
# To use this repo, put in your DVD and use it with the other repos too:
#  yum --enablerepo=c7-media [command]
#
# or for ONLY the media repo, do this:
#
#  yum --disablerepo=\* --enablerepo=c7-media [command]

[c7-media]
name=CentOS-$releasever - Media
baseurl=file:///media/CentOS/
        file:///media/cdrom/
        file:///media/cdrecorder/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Unmount the .iso file:
sudo umount /media/CentOS/
Copy Repository Files to DX Servers and Update from Local Storage
Instead of accessing a repository server over the local network, a repository can be created and copied to each DX server manually. The servers are then updated from the repository on their local disk.
Install a new CentOS 7.x server with internet access to act as the repository server. To verify that the correct version of CentOS is installed, run the following command on the repository server, and then run the same command on the DX servers to confirm that the values for "arch," "releasever," "infra," and "basearch" are identical.
sudo python -c 'import yum, json; yb = yum.YumBase(); print json.dumps(yb.conf.yumvar,indent=2)'
On the repository server, install the following packages:
sudo yum install createrepo yum-utils
Create a directory named repository on the /var volume:
sudo mkdir -p /var/repository
Run the following four commands separately to download and update the local repository for the base, centosplus, extras, and updates repositories. To pull the most recent updates from the CentOS repositories before any subsequent patching run, repeat these commands.
sudo reposync -g -l -d -m --repoid=base --newest-only --download-metadata --download_path=/var/repository/
sudo reposync -g -l -d -m --repoid=centosplus --newest-only --download-metadata --download_path=/var/repository/
sudo reposync -g -l -d -m --repoid=extras --newest-only --download-metadata --download_path=/var/repository/
sudo reposync -g -l -d -m --repoid=updates --newest-only --download-metadata --download_path=/var/repository/
When all packages have synced, use the createrepo command to create or update the repository metadata (repodata). Run this command each time the reposync commands are re-run for subsequent patching.
sudo createrepo /var/repository/
Create a tar.gz file from the repository directory. For example, to create the tar file in the /home/logrhythm/ directory:
sudo tar czf /home/logrhythm/repository.tar.gz /var/repository/
Copy repository.tar.gz to the /home/logrhythm/ directory on the DX server, and then extract the tar file using the following command:
The volume "/var/" has limited space. If the repository size is larger than the available space on the volume, extract the tar file to a different location.BASHsudo tar xzf /home/logrhythm/repository.tar.gz -C /
The repository files should be extracted to /var/repository/ on the DX server.
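Before creating the repo file, you can optionally confirm that the repository metadata extracted correctly (assuming the default extraction path):
# The repomd.xml file should exist if the repository extracted cleanly
ls -l /var/repository/repodata/repomd.xml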
On the DX servers, create a new yum repository file:
sudo vi /etc/yum.repos.d/CentOS-Local.repo
Copy the following content into CentOS-Local.repo, where the baseurl value file:///var/repository points to the location to which repository.tar.gz was extracted.
[LocalRepository]
name=LocalRepository
baseurl=file:///var/repository
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1
Run the yum update command on the DX server:
sudo yum clean all
sudo yum update
When the yum update is complete, edit the repo file /etc/yum.repos.d/CentOS-Local.repo again:
sudo vi /etc/yum.repos.d/CentOS-Local.repo
To disable the repository, set the value of enabled to 0:
[LocalRepository]
name=LocalRepository
baseurl=file:///var/repository
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=0
After the update is complete, remove the repository files.
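For example, assuming the default paths used in this procedure, the cleanup might look like the following; adjust the paths if you extracted the repository elsewhere, and remove the repo definition only if you no longer need it:
# Remove the copied repository data, the tarball, and (optionally) the disabled repo definition
sudo rm -rf /var/repository/
sudo rm -f /home/logrhythm/repository.tar.gz
sudo rm -f /etc/yum.repos.d/CentOS-Local.repo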
Restart DX Servers
The Linux kernel is only loaded into memory during the boot process. To verify whether a newer kernel has been installed but is not yet running, check the version of the running kernel and compare it to the list of installed kernel versions.
Check If an OS Reboot Is Required
To show the version of the currently running kernel, run the following command:
sudo uname -r
To show kernel versions installed on the system, run the following command:
sudo rpm -qa kernel
If a newer kernel version appears in the rpm -qa output than the one currently running, restart the OS to load the new kernel.
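For example, the following sketch compares the running kernel against the most recently installed kernel package; it assumes the newest install is the highest version, which is typically the case after a yum update:
# Compare the running kernel with the most recently installed kernel package
running="$(uname -r)"
newest="$(rpm -q kernel --last | head -n 1 | awk '{print $1}' | sed 's/^kernel-//')"
if [ "$running" = "$newest" ]; then
  echo "Running the newest installed kernel ($running); no kernel reboot needed."
else
  echo "Newest installed kernel is $newest but $running is running; a reboot is required."
fi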
Restarting a single-node DX introduces downtime, and it can take a significant amount of time for the cluster to recover and resume indexing.
Restart a Single-Node Cluster
Stop indexing and perform a synced flush to speed up shard recovery:
curl -XPOST "localhost:9200/_flush/synced?pretty"
Check the response to make sure there are no failures. Synced flush operations that fail due to pending indexing operations are listed in the response body, although the request itself still returns a 200 OK status. If there are failures, reissue the request.
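For example, when reissuing the request you can filter the response down to the shard counters; every index should report a failed count of 0 (a minimal sketch):
# Show only the shard counters from the synced flush response
curl -s -XPOST "localhost:9200/_flush/synced?pretty" | grep -E '"(total|successful|failed)"'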
Shut down all DX services on the server:
sudo /usr/local/logrhythm/tools/stop-all-services-linux.sh
Restart the DX server:
Before restarting the servers, verify iDRAC access to the server.
sudo systemctl --force reboot
When the node restarts, verify that the cluster is stable (green). This can take a significant amount of time. To monitor the cluster, use the following command:
watch 'curl -s "localhost:9200/_cluster/health?pretty"'
Rolling Restart a Multi-Node Cluster
The following steps should be run on one node at a time. For example, if there are three nodes in a DX cluster, run these steps on each node sequentially.
Disable shard allocation:
curl -XPUT localhost:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d '{"transient":{"cluster.routing.allocation.enable":"none"}}'
Stop indexing and perform a synced flush to speed up shard recovery:
curl -XPOST "localhost:9200/_flush/synced?pretty"
Check the response to make sure there are no failures. Synced flush operations that fail due to pending indexing operations are listed in the response body, although the request itself still returns a 200 OK status. If there are failures, reissue the request.
Stop all DX services on one server at a time:
sudo /usr/local/logrhythm/tools/stop-all-services-linux.sh
Restart the node:
Before restarting the node, verify iDRAC access to the server.
sudo systemctl --force reboot
Confirm the node you restarted joins the cluster by checking the log file or by submitting a _cat/nodes request on any other node in the cluster:
watch 'curl -s -XGET "localhost:9200/_cat/nodes?pretty&s=name&v"'
After the node has joined the cluster, re-enable shard allocation. To enable shard allocation and start using the node, remove the cluster.routing.allocation.enable setting:
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'{"transient": { "cluster.routing.allocation.enable": "all" }}'
(Optional) To speed up the recovery process, increase the limit of total inbound and outbound recovery traffic and concurrent incoming shard recoveries allowed on a node:
curl -XPUT localhost:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d '{"transient":{"cluster.routing.allocation.node_concurrent_recoveries": 10}}'
curl -XPUT localhost:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d '{"transient":{"indices.recovery.max_bytes_per_sec": "1000mb"}}'
These limit increases persist unless they are removed at the end of the procedure, or all nodes are fully stopped and then started again.
To remove these limit increases, see optional step 11.
When the node has recovered and the cluster is green, repeat these steps for each node that needs to be restarted.
When the node has restarted, monitor the cluster until it is stable (green) using the following command:
watch 'curl -s "localhost:9200/_cluster/health?pretty"'
(Optional, recommended if step 8 was taken) Once the full rolling restart is completed, query for all currently used cluster settings, filtered for transient settings only:
curl -sX GET "http://localhost:9200/_cluster/settings?flat_settings=true&filter_path=transient&pretty"
Revert all transient cluster settings:
curl -sX PUT "http://localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d '{"transient":{"*":null}}'
LogRhythm NetMon
Patches are applied with NetMon version updates. It is recommended to update NetMon to its latest version. If additional packages need to be updated, open a ticket with LogRhythm Support to request that the package be added to the next available NetMon release.
Yum Update
Running a yum update on a NetMon system is not supported.
Update Firmware and Drivers on Dell EMC PowerEdge Servers
To access the most recent firmware from Dell's website, visit Dell Support (Dell service tag required): https://www.dell.com/support/home/en-us?app=products
To run firmware and drivers updates for Dell servers, see https://www.dell.com/support/article/en-us/sln300662/updating-firmware-and-drivers-on-dell-emc-poweredge-servers#WhatarethedifferentsmethodstoupdateaPowerEdge.
For information on updating the iDRAC and BIOS firmware using the iDRAC interface, see https://www.dell.com/support/article/en-us/sln292363/dell-poweredge-update-the-firmware-of-single-system-components-remotely-using-the-idrac?lang=en.
Patch High Availability-Enabled Deployments (XM or PM)
Microsoft Windows Operating System
Patch the Windows Server OS following your regular internal patching policy.
SQL Server
The LogRhythm HA solution employs full-disk mirroring between protected nodes to synchronize the drives that contain the PM SQL databases (both system and LogRhythm). The system databases reside on the mirrored volume for the D: drive. When a SQL Server service pack or cumulative update is applied to an instance of SQL Server, the system (and LogRhythm) databases are also updated with the build number. When the master database version is modified with the build number from the SQL Server update, that change is mirrored to the inactive nodes via DataKeeper. If the inactive node were brought online in the middle of the patching process, the SQL Server executable would recognize that the master database build number is not consistent with the version that the service executable expects. This would cause the database engine to fail to start.
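Before patching, it can be useful to record the current SQL Server build so that you can confirm both nodes end up on the same build afterward. A minimal check from an administrative prompt, assuming the sqlcmd utility is installed and Windows authentication is used:
sqlcmd -S localhost -E -Q "SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion, SERVERPROPERTY('ProductLevel') AS ProductLevel;"
Run the same query again on each node after patching (once its SQL Server services are running) and confirm that the build numbers match.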
On the currently inactive HA node:
Open DataKeeper.
Ensure that all mirror jobs are in a status of mirroring & synchronized.
Right-click each mirror job, and then click Pause and Unlock All Mirrors.
Start SQL Server services on the inactive HA node.
Install the SQL Server updates.
If a reboot is prompted:
A reboot is recommended. Reboot the node, and then log in again.
Open DataKeeper.
- Right-click each mirror job, and then click Pause and Unlock All Mirrors.
- Start SQL Server services on the inactive HA node.
Proceed to step 8.
If no reboot is prompted, proceed to step 8.
Confirm successful SQL Server startup via the ERRORLOG and login availability via SQL Server Management Studio.
Manually stop the SQL Server service on the node.
(Optional) Repeat these steps for any additional inactive HA nodes.
On the currently active HA node:
Open DataKeeper.
Confirm that all mirrors are in a status of paused & unlocked. If necessary, right-click each mirror job, and then click Pause and Unlock All Mirrors.
Click the LifeKeeper GUI, and then set the top-level ResTag to Out of Service.
Manually stop all LogRhythm Services on the node.
Leave the SQL Server services running; they are required for the SQL Server update. After this step, LogRhythm services will be non-functional. Install the SQL Server updates.
If a reboot is prompted:
A reboot is recommended. Reboot the node, and then log in again.
Wait for SQL and all LogRhythm services to start. This could take 10-15 minutes.
Proceed to step 8.
If no reboot is prompted, proceed to step 8.
Confirm successful SQL Server startup via the ERRORLOG and login availability via SQL Server Management Studio.
Confirm that all LogRhythm services are started and operational by checking the various component logs.
Open DataKeeper on the node.
Right-click each mirror job, and then click Continue and Lock All Mirrors.
Click the LifeKeeper GUI, and then set the top-level ResTag to In Service.
Wait for the disk mirror jobs to synchronize fully.
In DataKeeper, verify that all mirror jobs are fully synchronized.
(Optional) Perform a full switchover of resources to the secondary node to verify HA resource protection, and then perform a full switchback of resources to the primary node.
Patch Disaster Recovery-Enabled Deployments (XM or PM)
Microsoft Windows Operating System
Patch the Windows Server OS following your regular internal patching policy.
SQL Server
DR-enabled SQL Servers should not be subject to automatic updates (such as Service Packs and Cumulative Updates) on the SQL Server instance. To avoid database corruption, these updates should be applied in a specific pattern. It is highly recommended to exclude DR-enabled LogRhythm systems from automatic patches to the SQL Server, though OS updates are still advised.
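Before and after each failover in the steps below, you can confirm that the availability group replicas are healthy and synchronized. A minimal sketch using sqlcmd (assumes the utility is installed and Windows authentication is used); all replicas should report HEALTHY before you fail over:
sqlcmd -S localhost -E -Q "SELECT ar.replica_server_name, ars.role_desc, ars.synchronization_health_desc FROM sys.dm_hadr_availability_replica_states ars JOIN sys.availability_replicas ar ON ars.replica_id = ar.replica_id;"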
To patch SQL Server 2016:
Perform the service pack or cumulative update on the secondary replica.
Use the DR Control utility to perform a cluster failover of all database availability groups from the primary replica to the secondary replica.
Perform the service pack or cumulative update on the primary replica.
Use the DR Control utility to perform a cluster failback of all database availability groups from the secondary replica to the primary replica.
It is recommended to perform a system reboot immediately after performing the service pack or cumulative update on the SQL Server instance, even if the installer does not call for one.
For further detail on SQL patching Always On Availability Groups, see https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/upgrading-always-on-availability-group-replica-instances?view=sql-server-2016.
Patch HA+DR Combined Solution Deployments (XM or PM)
Microsoft Windows Operating System
Before patching the Windows OS on an HA+DR combined solution, ensure the following prerequisites are met:
- Ensure all databases are backed up.
- Reboot each of the servers one at a time (first the secondary HA server, then the DR server) prior to any patching being applied.
- This procedure assumes that the DR failover process has been successfully tested in your environment. If this is not the case, remediate the DR failover process, and then proceed with the steps below.
- Prior to patching the third node (the passive primary HA server), ensure the other two servers are able to restore operational service.
On the currently inactive HA node:
Open DataKeeper.
Ensure that all mirror jobs are in a status of mirroring & synchronized.
Right-click each mirror job, and then click Pause and Unlock All Mirrors.
- Check for Windows Updates.
- Apply updates as necessary.
- Reboot the inactive HA node.
- Repeat steps 4 through 6 if necessary.
- (Optional.) Repeat these steps for any additional inactive HA nodes.
On the currently active HA node:
Do not make any changes to the services.
Open DataKeeper.
Confirm that all mirrors are in a status of paused & unlocked. If necessary, right-click each mirror job, and then click Pause and Unlock All Mirrors.
Click the LifeKeeper GUI, and then set the top-level ResTag to Out of Service on the primary server.
- Apply Windows patches to the HA secondary node.
Apply Windows patches to the DR server.
At this point, mirrors should be paused on the HA pair, and the patches should be installed on the secondary HA node and the DR server.
- Restart the HA Secondary Node.
- Once the Secondary HA node is back up, restart the DR node.
Check the Client Console Deployment Monitor to ensure all heartbeats are still updating on the active server, checking the various component logs for affected services if required.
Open DataKeeper on the node.
Right-click each mirror job, and then click Continue and Lock All Mirrors.
Click the LifeKeeper GUI, and then set the top-level ResTag to In Service.
Wait for the disk mirror jobs to synchronize fully.
In DataKeeper, verify that all mirror jobs are fully synchronized.
At this point, the secondary node and the DR site are patched, and LifeKeeper and DataKeeper should be active. The mirror must be fully synced before proceeding to the next step.
- Once the mirror is synced, fail over from the primary HA node to the secondary HA node.
When the secondary is active, verify that the components are all checking in and there are no errors in the logs.
The secondary node is now the active node. Repeat steps 8 through 13 above to pause DataKeeper and LifeKeeper (pause the mirrors and take the ResTag out of service), and apply the patches to the now-passive primary node.
- Once the patches are applied, reboot the node, and then bring the DataKeeper mirrors and LifeKeeper ResTag back in service.
The HA/DR pair is restored, and you can now fail over back to the primary.
SQL Server
Before patching the SQL Server on an HA+DR combined solution, ensure the following prerequisites are met:
- Ensure all databases are backed up.
- Reboot each of the servers one at a time (first the secondary HA server, then the DR server) prior to any patching being applied.
- This procedure assumes that the DR failover process has been successfully tested in your environment. If this is not the case, remediate the DR failover process, and then proceed with the steps below.
- Prior to patching the third node (the passive primary HA server), ensure the other two servers are able to restore operational service.
On the currently inactive HA node:
Open DataKeeper.
Ensure that all mirror jobs are in a status of mirroring & synchronized.
Right-click each mirror job, and then click Pause and Unlock All Mirrors.
- Check for SQL Server Updates.
- Apply updates as necessary.
- Reboot the inactive HA node.
- Repeat steps 4 through 6 if necessary.
- (Optional.) Repeat these steps for any additional inactive HA nodes.
On the currently active HA node:
Do not make any changes to the services.
Open DataKeeper.
Confirm that all mirrors are in a status of paused & unlocked. If necessary, right-click each mirror job, and then click Pause and Unlock All Mirrors.
Click the LifeKeeper GUI, and then set the top-level ResTag to Out of Service on the primary server.
On the HA secondary node, stop all LogRhythm services by running the following admin PowerShell command:
gsv -displayname 'LogRhythm*' | stop-service
- Apply SQL patches (service pack/cumulative updates) to the HA secondary node.
Access the DR node, and stop all LogRhythm services by running the following admin PowerShell command:
gsv -displayname 'LogRhythm*' | stop-service
- Apply SQL patches (service pack/cumulative updates) to the DR server.
Reboot to bring services back online.
At this point, mirrors should be paused on the HA pair, and the patches should be installed on the secondary HA node and the DR server.
- Restart the HA secondary node.
Once the Secondary HA node is back up, restart the DR node, ensuring all services come back online.
If necessary, services can be restarted using the following admin PowerShell command:
gsv -displayname 'LogRhythm*' | start-service
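To confirm the services came back up, you can list their status before checking the Deployment Monitor (a quick check):
# All LogRhythm services should show a Status of Running
gsv -displayname 'LogRhythm*'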
Check the Client Console Deployment Monitor to ensure all heartbeats are still updating on the active server, checking the various component logs for affected services if required.
Open DataKeeper on the node.
Right-click each mirror job, and then click Continue and Lock All Mirrors.
Click the LifeKeeper GUI, and then set the top-level ResTag to In Service.
Wait for the disk mirror jobs to synchronize fully.
In DataKeeper, verify that all mirror jobs are fully synchronized.
At this point, the secondary node and the DR site are patched, and LifeKeeper and DataKeeper should be active. The mirror must be fully synced before proceeding to the next step.
- Once the mirror is synced, fail over from the primary HA node to the secondary HA node.
When the secondary is active, verify that the components are all checking in and there are no errors in the logs.
The secondary node is now the active node. Repeat steps 11 through 16 above to pause DataKeeper and LifeKeeper (pause the mirrors and take the ResTag out of service), and apply the patches to the now-passive primary node.
- Once the patches are applied, reboot the node, and then bring the DataKeeper mirrors and LifeKeeper ResTag back in service.
The HA/DR pair is restored, and you can now fail over back to the primary.