Gen5/6 Windows Appliance Software Configuration
After you complete the hardware installation of your LogRhythm Windows Appliance, this document guides you through the initial configuration of your LogRhythm deployment.
Work with your LogRhythm Professional Services Consultant or an authorized LogRhythm Deployment Certified Partner to complete the procedures outlined in this guide.
Prerequisites
Before starting your configuration, you need:
The LogRhythm license file (.LIC), provided via email to the technical point of contact at purchase; it can also be obtained by contacting LogRhythm Support
The factory default password for the LogRhythm Database accounts (contact support or professional services if you do not have this)
The Platform Manager or XM (if all-in-one) Hostname or IP Address
Configure and Start LogRhythm Components
Configure the Platform Manager Service
On the Start Menu, click the LogRhythm folder, and then click Platform Manager Configuration Manager.
On the Job Manager tab, complete the following fields:
Server. The name or IP address of the Platform Manager database server
Password. The factory default password
On the Alarming and Response Manager tab, complete the following fields:
Server. The name or IP address of the Platform Manager database server
Password. The factory default password
Click OK.
Configure the Data Processor Service
On the Start Menu, click the LogRhythm folder, and then click Data Processor Configuration Manager.
On the General tab, complete the following fields:
Server. The name or IP address of the Platform Manager database server
Password. The factory default password
Click OK.
Configure the AI Engine Service
On the Start Menu, click the LogRhythm folder, and then click AIEngine Configuration Manager.
On the General tab, complete the following fields:
Server. The name or IP address of the Platform Manager database server
Password. The factory default password
Click OK.
Configure the System Monitor Agent Service
On the Start Menu, click the LogRhythm folder, and then click System Monitor Configuration Manager.
On the General tab, complete the following fields:
Data Processor Address. The hostname or IP address of the Data Processor server
System Monitor IP Address. The IP address of the System Monitor
Host Entity ID. The default is zero for system assigned ID
Click OK.
Log in to the Client Console
On the Start Menu, click the LogRhythm folder, and then click LogRhythm Console.
Complete the following fields:
EMDB Server. The hostname or IP address of the Platform Manager server
User ID. logrhythmadmin
Password. The factory default password
Click OK.
Complete New Deployment Wizard
Enter the following information in the New Deployment Wizard:
Windows host name of the Platform Manager
Enter the host name where the Platform Manager is located. To find the host name, start File Explorer, right-click This PC, and then click Properties. Under Computer name, domain, and workgroup settings, get the Full computer name up to the first period where the domain name starts.
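As a quicker alternative to File Explorer, the host name can also be printed from a command prompt. This is a general operating-system command, not a LogRhythm-specific tool; `hostname` exists in both Windows cmd and Linux shells:

```shell
# Print the computer's host name. On a domain-joined Windows server this
# returns the short name (the Full computer name minus the domain suffix).
hostname
```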
If the appliance type is XM, all LogRhythm components are contained in a single appliance.
IP address of the Platform Manager
Enter the IP address where the Platform Manager is located. Appliances are shipped with two Network Interface Cards (NICs). Typically, one NIC is used for Console connections, while the other NIC is used for database intercommunications. The IP address entered here will serve as a Console connection interface.
The Platform Manager is also a Data Processor (e.g., an XM appliance)
If this is an XM Appliance, which has all LogRhythm components contained in a single appliance, select this check box.
The Platform Manager is also an AI Engine Server
If AI Engine is installed on the Platform Manager (that is, it is not deployed as a standalone appliance), select this check box.
LogMart DB Server Override
If the LogMart database is installed on a different host, enter that host's IP address here.
LogRhythm License file
This file is provided by LogRhythm Support after purchase and shipment of the appliance(s), and it is required to access and configure LogRhythm. Navigate to the location of the license file (*.lic) by clicking the ellipses at the far right.
Locate and select the master license file and click Open. The path and file name are listed in the License File text box.
Click OK.
When prompted, select the appropriate Data Processor licensing mode from the available, valid options:
Software (n available licenses). Select this option for a software-only purchase.
Appliance Mode for software and appliance purchase. Select this option for a combined software and appliance purchase.
Data Processor MPS mode for software and appliance purchase. Select this option to use a Messages Per Second (MPS) license.
Click Next.
The license options vary according to the option selected above. Select the appropriate licensing mode, and then click OK. The Initialize Knowledge Base window appears.
Complete Knowledge Base Import Wizard
After completing the New Deployment Wizard, the New Knowledge Base Deployment Wizard appears.
Deploy the Knowledge Base by selecting one of the three following options:
I have Internet access and want to automatically download the KB (recommended).
Proxy Server Address. Enter the Proxy Server Address for the KB Download.
Proxy Server Port. Enter the port number for the server.
Select the Proxy Server Requires Authentication check box
Enter the appropriate credentials and Host name, if necessary.
Click OK. The Knowledge Base is downloaded.
Click OK. Proceed to the Knowledge Base Importer Wizard section.
I do not have Internet access or want to manually download the KB.
The Manual Knowledge Base Download window appears. Perform one of the following steps:
Export Knowledge Base Request File. Select this option to export a Knowledge Base request file and upload it to the Support Portal:
Click OK and download the file to your drive.
The Export Successful page appears. Click OK.
The Knowledge Base Not Loaded page appears. Click OK, and the Console closes.
Contact Customer Support. Select this option to obtain the Knowledge Base file from Customer Support:
From a computer with Internet access, log into the Support Portal at https://support.logrhythm.com.
Go to the Downloads section to access the latest version of the Knowledge Base. The request screen displays.
Choose from the following:
Upload the Request File downloaded from the Console.
Enter the License ID, the Deployment ID, and the Product Version.
Click Get Knowledge Base.
Save the Knowledge Base file and transfer it to the computer on which you are loading the Console.
Restart the Console and follow the instructions in the I have already manually downloaded the KB section.
I have already manually downloaded the KB. Select this option to manually import the Knowledge Base file.
The Knowledge Base Import Wizard appears and starts unpacking and validating the Knowledge Base file. The file is checked for compatibility with your current deployment and is prepared for import. This may take several minutes.
Upon completion, the message Knowledge Base unpacked appears in the status. Click Next to import the Knowledge Base.
When the Knowledge Base Updated message appears, click OK.
On the Knowledge Base Import Wizard, click Close.
Configure the Platform
After completing the Knowledge Base import, the Missing Platform Manager Platform message is displayed.
Click OK.
In the Platform Manager Properties dialog box, click the browse icon next to the Platform box.
In the Platform Selector table, select the row corresponding to your appliance, and then click OK.
Enter the Email From Address, and then click OK.
The Missing Data Processor Platform error message appears. Click OK.
In the Data Processor Properties dialog box, click the browse icon next to the Platform box.
In the Platform Selector table, select the row corresponding to your appliance, and then click OK.
Click the list box under Cluster Name, then click the default cluster (logrhythm).
In the lower-left corner of the Data Processor Properties dialog box, click Advanced.
The Data Processor Advanced Properties dialog box appears. Change the ActiveArchivePath value from C:\LogRhythmArchives\Active to D:\LogRhythmArchives\Active.
Change the InactiveArchivePath value from C:\LogRhythmArchives\Inactive to D:\LogRhythmArchives\Inactive.
Click OK.
The Restart Component message appears. Click OK.
Configure and Start Core Services
Click the Start menu, and then click Windows Administrative Tools.
Double-click the Services shortcut.
Double-click each of the following services, set the startup type to Automatic, and click Start under Service status.
On an XM appliance, all of these services run on the single appliance. In a distributed deployment, start these services on the appliance where each one is installed.
LogRhythm AI Engine
LogRhythm AI Engine Communication Manager
LogRhythm Alarming and Response Manager
LogRhythm Job Manager
LogRhythm Mediator Server Service
LogRhythm System Monitor Service
Associate the Default System Monitor Agent
In the LogRhythm Client Console, click Deployment Manager, and then click the System Monitors tab.
In the top pane, under New System Monitor Agents, select the Action box next to the pending Agent.
Right-click the selected Agent, and then click Associate.
The Associate New System Monitor Agent with an Existing Agent message appears. Select the Agent and click OK.
The Associate Successful message appears. Click OK.
Configure the LogRhythm Data Indexer
Configuration of the Data Indexer for Windows and Linux has moved from the individual clusters to the Configuration Manager on the Platform Manager. You can configure all Data Indexers using the LogRhythm Configuration Manager installed on the Platform Manager.
Cluster Name configuration is currently done through environment settings. Before configuring the Data Indexer on Windows, verify that the DX_ES_CLUSTER_NAME environment variable is set on both DX servers.
LogRhythm Service Registry, LogRhythm API Gateway, and LogRhythm Windows Authentication API Service must be running before you open the LogRhythm Configuration Manager.
If you are configuring multiple Data Indexers, all of them can be configured from the primary Platform Manager because the configuration is centralized between servers.
In an MSSP environment, DX Cluster names are visible to all users of a Web Console, regardless of Entity segregation. For privacy reasons, avoid using cluster names that could be used to identify clients. Data and data privacy are still maintained; only the cluster name is visible.
Do not attempt to modify consul configurations manually. If you have any issues, contact LogRhythm Customer Support.
To configure the Data Indexer:
Open the Configuration Manager from programs on the Platform Manager.
From the menu on the left, select the Data Indexers tab.
Each installed Data Indexer has its own section that looks like this:
Data Indexer - Cluster Name: <ClusterName> Cluster Id: <ClusterID>
The Cluster Name and Cluster ID come from the Environment variables, DX_ES_CLUSTER_NAME and DXCLUSTERID on each server. The Cluster Name can be modified in the Configuration Manager. If you change the Cluster Name, the name should be less than 50 characters long to ensure it displays properly in drop-down menus. The DXCLUSTERID is automatically set by the software and should not be modified.
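As a sketch, the variable and the 50-character limit can be checked from a shell on a Linux DX node (on Windows, inspect the variable in System Properties or with `echo %DX_ES_CLUSTER_NAME%` in cmd instead). The cluster name used here is a hypothetical example, not a value from this guide:

```shell
# Hypothetical value for illustration; on a real node the installer sets this.
export DX_ES_CLUSTER_NAME="logrhythm-dx1"

# Verify the variable is set and short enough to display in drop-down menus.
if [ -n "${DX_ES_CLUSTER_NAME:-}" ] && [ "${#DX_ES_CLUSTER_NAME}" -lt 50 ]; then
  echo "OK: $DX_ES_CLUSTER_NAME"
else
  echo "DX_ES_CLUSTER_NAME missing or too long" >&2
fi
```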
Verify or update the following Data Indexer settings:
Setting | Default | Description |
|---|---|---|
Database User ID | LogRhythmNGLM | Username the DX services will use to connect to the EMDB database. When in FIPS mode, Windows authentication is required (local or domain). When using a domain account, the Database Username must be in domain\username format. |
Database Password | <LogRhythm Default> | Password used by the DX services to connect to the EMDB database. It is highly recommended, and LogRhythm best practice, to change all MS SQL account passwords when setting up a deployment. After you change the LogRhythmNGLM password in Microsoft SQL Server Management Studio, you must set the Database Password to the same value. You should change the password in Microsoft SQL Server Management Studio first, then change it on the Data Indexer page. |
GoMaintain ForceMerge | Disabled | Enables/Disables maintenance Force Merging. This can be left at the default value. |
Integrated Security | Disabled | This should be enabled when FIPS is enabled on the operating system. |
Click Show or Hide in Advanced View to toggle the view for Advanced Settings.
Advanced View Settings:
Setting | Default | Description |
|---|---|---|
Transporter Max Log Size (bytes) | 1000000 | Maximum log size in bytes that can be indexed. This can be left at the default value. |
Transporter Web Server Port | 16000 | Port that the Transporter service listens on. This can be left at the default value. |
Transporter Route Handler Timer (sec) | 10 | Indexing log batch timeout setting. This can be left at the default value. |
Elasticsearch Data Path | Windows: D:\LRIndexer\data Linux: /usr/local/logrhythm/db/data | Path where Data Indexer data is stored. The path is created if it does not already exist. Modifying this path after the Data Indexer is installed does not move existing indices; they must be moved manually if the path is changed. |
GoMaintain TTL Logs (#indices) | -1 | Number of indices kept by the DX. This should be left at the default value. |
GoMaintain IndexManage Elasticsearch Sample Interval (sec) | 10 | Number of seconds between resource usage samples. This can be left at the default value. |
GoMaintain Elasticsearch Samples (#Samples) | 60 | Total number of samples taken, before GoMaintain decides to take action, when resource HWMs are reached. |
GoMaintain IndexManager Disk HWM (%diskutil) | 90 | Maximum disk utilization percentage for the drive where the data path is configured. This defaults to 90% in LR 7.21+; the recommended values are 80% for HDD-based DX clusters and 90% for SSD-based DX clusters. |
GoMaintain IndexManage Elasticsearch Heap HWM (%esheap) | 85 | Maximum heap usage percentage before GoMaintain closes an index to release resources. This can be left at the default value. |
Carpenter SQL Paging Size (#records) | 10000 | Number of records to pull from EMDB at one time when syncing EMDB indices. This can be left at the default value. |
Carpenter EMDB Sync Interval (#minutes) | 5 | Interval of how often Carpenter service will sync EMDB indices. This can be left at the default value. |
Enable Warm Replicas | Disabled | Turn replicas on for Warm Indices. This setting will only affect Linux Data Indexer clusters that contain warm nodes. This can be left at the default value. |
Columbo Warm Tier Search Cycle Days | 20 | Number of Indexes (days) which will be opened in each Warm Tier search cycle. Valid Values: 5 - 30 (prior to version 7.19, the fixed value is 5) |
Columbo Ultra-Warm Tier Open Days | 30 | Number of Indexes (days) in Ultra-Warm Tier open for search. Valid Values: 0 (disabled) - 182 (SIEM version 7.21 or later) |
Columbo Warm Tier Locking | Enabled | Enable to use Warm Tier Locking to prevent concurrent overlapping searches in the Warm-Closed tier (SIEM version 7.19 or later). Note that this is an experimental feature undergoing additional testing. |
Columbo Query Mode | Fast | Elasticsearch query mode for Columbo searches. Fast uses QUERY_THEN_FETCH (lower latency, uses per-shard term frequencies). Precise uses DFS_QUERY_THEN_FETCH (global term frequencies, more accurate relevance scoring but higher latency). Fast vs. Precise affects search results on multi-node clusters only. Changing this setting directly impacts the IOPS consumption for search requests; Precise is only recommended on SSDs or high-IOPS storage on multi-node clusters. On storage systems under stress, changing from Fast to Precise may negatively impact indexing rates (DXRP). |
Click Submit.
Do not modify any settings from their defaults unless you fully understand their impact. Modifying a setting incorrectly can negatively impact Data Indexer function and performance.
Automatic Maintenance
Automatic maintenance is performed by the GoMaintain service and governed by several of the settings above. On startup, GoMaintain continuously samples Elasticsearch statistics, including disk and heap utilization, over the configured time frame.
GoMaintain automatically performs maintenance when High Water Mark (HWM) settings are reached. Samples are taken over a period of time and analyzed before GoMaintain takes action on an index; the length of this period depends on the Sample Interval and #Samples settings. By default, 60 samples are taken, one every 10 seconds, for a total of 10 minutes. If a High Water Mark was exceeded for an extended portion of that sample period, indices are closed, deleted, or moved to warm nodes, depending on the Data Indexer configuration. After an action is taken and completed, the sample period begins again.
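The default sampling window works out as simple arithmetic on the two defaults above (samples multiplied by interval):

```shell
# 60 samples, one every 10 seconds, before GoMaintain decides whether to act.
samples=60
interval_sec=10
window_sec=$(( samples * interval_sec ))
echo "${window_sec} seconds ($(( window_sec / 60 )) minutes)"   # 600 seconds (10 minutes)
```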
The DX monitors Elasticsearch memory and DX storage capacity. GoMaintain tracks heap pressure on the nodes. If the pressure constantly crosses the threshold, GoMaintain decreases the number of days of indices by closing the index. Closing the index removes the resource needs of managing that data and relieves the heap pressure on Elasticsearch. GoMaintain continues to close days until the memory is under the warning threshold, and continues to delete days based on the default disk utilization setting of 80%.
Logging of configuration and results for force merge can be found in C:\Program Files\LogRhythm\DataIndexer\logs\GoMaintain.log.
GoMaintain TTL Logs (#Indices)
The default configuration value is -1, which causes the Data Indexer to monitor system resources and manage the time-to-live (TTL) automatically. You can configure a lower TTL by changing this number. If that TTL is no longer achievable, the Data Indexer sends a diagnostic warning and starts closing indices. In versions after 7.9.x, indices that have been closed by GoMaintain are not actively searchable, but they are maintained for reference purposes.
To show closed indices, run a curl command such as:
curl -s -XGET 'http://localhost:9200/_cat/indices?h=status,index' | awk '$1 == "close" {print $2}'
To show both open and closed indices, open a browser to http://localhost:9200/_cat/indices?v.
Indices can be reopened with the following query, provided you have enough heap memory and disk space to support the index; if you do not, it immediately closes again.
curl -XPOST 'localhost:9200/<index>/_open?pretty'
After you open the index in this way, you can investigate the data in either the Web Console or Client Console.
Disk Utilization Limit
IndexManager Disk HWM (%diskUtil) indicates the percentage of disk utilization that triggers maintenance. The default is 80, which means that maintenance starts when the Elasticsearch data disk is 80% full.
If Warm nodes are present, the disk utilization for combined Hot and Warm nodes will be tracked separately.
Do not set the value for %diskUtil higher than 80. Doing so can impair the ability of Elasticsearch to store replica shards for the purpose of failover.
If Warm nodes are present, the oldest index will be moved to the Warm node(s) if the Disk HWM is reached.
Maintenance is applied to the active repository as well as to archive repositories created by Second Look. When the Disk Usage Limit is reached, active logs are trimmed once "max indices" is reached; at that point, GoMaintain deletes completed restored repositories, starting with the oldest date.
The default settings prioritize restored repositories above the active log repository; restored archived logs are maintained at the expense of active logs. If you want to keep your active logs and delete archives to free space, set your min indices equal to your max indices. This forces the maintenance process to delete restored repositories first.
Heap Utilization Limit
IndexManager Heap HWM (%esheap) indicates the percentage of Elasticsearch (Java) heap utilization that triggers maintenance. The default is 85, which means that maintenance starts when Elasticsearch heap utilization reaches 85%.
Do not set the value for %esheap higher than 85. Doing so can impair Elasticsearch searching and indexing and degrade overall Elasticsearch performance.
If the Heap HWM is reached, GoMaintain will automatically close the oldest index in the cluster to release memory resources used by the cluster. If warm nodes are present in the cluster, the index will automatically be moved to the warm nodes before the index is closed.
Closed Indices on Hot nodes cannot be searched and will remain in a closed state on the data indexer until the Utilization Limit is reached.
Force Merge Configuration
Do not modify any of the configuration options under Force Merge Config without the assistance of LogRhythm Support or Professional Services.
The force merge configuration combines index segments to improve search performance. In larger deployments, search performance can degrade over time due to a large number of segments. Force merge can alleviate this issue by optimizing older indices and reducing heap usage.
Enabling Force Merge will show these additional ForceMerge Settings:
Parameter | Description |
|---|---|
GoMaintain ForceMerge Hour (UTC Hour of day) | The hour of the day, in UTC, when the merge operation should begin. If Only Merge Periodically is set to false, GoMaintain merges segments continuously, and this setting is not used. |
GoMaintain ForceMerge Days to Exclude | Force merging skips the most recent X indices, moving backward in time, and runs only on the older indices. |
Only Merge Periodically | If set to true, GoMaintain only merges segments once per day, at the hour specified by Hour Of Day For Periodic Merge. If set to false, GoMaintain merges segments on a continuous basis. |
Warm Tier Configuration
Beginning with LogRhythm SIEM version 7.19, a number of new configurable values have been added to optimize the Warm Tier search experience. These settings only apply to customers with Warm Tier Linux Data Indexer clusters, and can be configured independently for each DX cluster with different values.
Columbo Warm Tier Search Cycle Days
This value was designed to prevent over-subscription of the Elasticsearch Heap Segment Terms memory by opening a controlled number of indexes during a given search cycle, then proceeding to paginate through the indexes to cover the duration of a search.
For example, a company has a DX Cluster with 365 days of data, and today is January 1st, 2025.
30d Hot (Dec 2024)
60d Ultra-Warm (Oct/Nov 2024)
275d Warm-Closed (Jan-Sept 2024)
When an analyst runs a search for IP Address 10.10.10.10 for the month of April (30 days), if the "Columbo Warm Tier Search Cycle Days" option is configured for 10 days/indexes, the search cycles through three batches of opening, searching, and closing indexes before the search is completed. Following completion of the three batches, the Warm-Tier lock will be released.
When an analyst runs a search for IP Address 5.5.5.5 for February 10-16th, this only crosses seven days of indexes and, therefore, is completed in one batch.
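The number of open/search/close batches follows from a ceiling division of the search span by the configured cycle length. A small sketch of the two examples above:

```shell
# batches = ceil(search_days / cycle_days), done with integer arithmetic.
cycle_days=10

search_days=30   # April example: a 30-day search
echo $(( (search_days + cycle_days - 1) / cycle_days ))   # 3 batches

search_days=7    # February 10-16 example: a 7-day search
echo $(( (search_days + cycle_days - 1) / cycle_days ))   # 1 batch
```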
The default value in SIEM versions 7.19 and later is 20 days/indexes. Values over 30 days/indexes are considered experimental and may carry some risk of an out-of-memory (OOM) condition on Warm Tier Indexers when combined with Ultra-Warm or disabled locking.
The default value in versions prior to 7.19 is hard-coded at 5 days/indexes and cannot be modified.
Columbo Ultra-Warm Tier Open Days
Introduced in LogRhythm SIEM version 7.19, the Ultra-Warm Tier offers a much improved search experience over warm-closed while still taking advantage of cost-effective storage tiers for indexes which are not being actively written to. Ultra-Warm stores indexes on the Warm tier nodes, but leaves them open, taking advantage of the local heap memory available on each warm node for faster searching.
Each open Ultra-Warm index consumes resources in Heap Memory and counts against max shards per node for the warm node on which the index resides. The more warm nodes present in your cluster, the more ultra-warm days you can safely open.
For customers with fewer than 60 days of data in Warm-Tier, we recommend setting Ultra-Warm to the full 60 days and disabling locking.
Customers with more than 60 days of data in Warm-Tier can experiment with longer values; however, some absolute maximums exist, and locking should be enabled in these environments if a high number of concurrent users exist.
Shard Maximums apply to Ultra-Warm Indexes/Nodes where each Warm node can hold a max of 2500 shards:
(Hot Node Count * 2 * Index Count) * 2 if warm-replica is enabled = Shard Count
For example, if a DX Cluster has 10 Hot + 1 Warm node without warm-replica (20 shards per index/2,500 max shards in 1 warm node), the absolute maximum value for Ultra-Warm is 125 days (values this high may negatively impact the stability of your warm nodes and are not recommended).
In another example, if a DX Cluster has 6 Hot + 4 Warm nodes with warm-replica (24 shards per index/10,000 max shards across 4 warm nodes), the absolute maximum value for Ultra-Warm is 416 days (values this high are very experimental and untested; use at your own discretion).
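The arithmetic behind both examples can be sketched as follows, assuming (per the formula above) two primary shards per hot node per index and a 2,500-shard cap per warm node:

```shell
# Compute the maximum Ultra-Warm days given hot node count, warm node
# count, and whether warm-replica is enabled (1) or not (0).
max_ultra_warm_days () {
  hot=$1; warm=$2; replica=$3
  shards_per_index=$(( hot * 2 * (1 + replica) ))   # doubled if replica enabled
  echo $(( warm * 2500 / shards_per_index ))        # integer (floor) division
}

max_ultra_warm_days 10 1 0   # 10 Hot + 1 Warm, no replica  -> 125
max_ultra_warm_days 6 4 1    # 6 Hot + 4 Warm, with replica -> 416
```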
Columbo Warm Tier Locking
Introduced in LogRhythm SIEM version 7.19, this setting allows you to control the use of "Locking" in Warm Tier searching. In versions prior to LogRhythm SIEM 7.19, all warm tier search requests require an available lock to be run. These locks restricted warm tier to only a single concurrent search for each DX Cluster. With the introduction of Ultra-Warm tiers, the locking feature of warm-tier searching may not be applicable to some environments and can be disabled at the discretion of the user.
We recommend disabling locking in these scenarios:
All Warm Data in the environment will be in Ultra-Warm (typically <60 days) and therefore always open.
Searches against Warm-Closed data are exceptionally rare and would rarely be performed by multiple users at the same time.
We recommend enabling locking in these scenarios:
Data in the Warm-Closed tier is frequently accessed many times per hour.
Data in the Warm-Closed tier is accessed by many users concurrently, who may be searching overlapping time ranges.
Configure the Data Indexer on Linux
Install a Single-node Cluster
If you have more than one node in your cluster, follow the instructions in the Install a Multi-node Cluster section.
Before starting the Data Indexer installation, ensure that firewalld is running on all cluster nodes. To do this, log on to each node and run: sudo systemctl start firewalld
Log on to your Indexer appliance or server as logrhythm.
Go to the /home/logrhythm/Soft directory where you copied the updated installation or upgrade script.
If this is an upgrade, you should have a file named hosts in the /home/logrhythm/Soft directory that was used during the original installation.
The contents of the file might look like the following:
10.1.23.65 LRLinux1 hot
If you need to create a hosts file, use vi to create a file in /home/logrhythm/Soft called hosts. The following command sequence illustrates how to create and modify a file with vi:
To create the hosts file and open it for editing, type vi hosts.
To enter INSERT mode, type i.
Enter the IPv4 address, hostname to use for the Indexer, and box type, separated by a space.
Press Esc.
To exit and save your hosts file, type :wq.
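If you prefer not to use vi, the same file can be written non-interactively with a heredoc. This sketch uses a temporary directory; on the appliance, write to /home/logrhythm/Soft/hosts instead. The IP, hostname, and box type are the example values from above:

```shell
dir=$(mktemp -d)              # stand-in for /home/logrhythm/Soft

# One line per node: IPv4 address, Indexer hostname, box type.
cat > "$dir/hosts" <<'EOF'
10.1.23.65 LRLinux1 hot
EOF

cat "$dir/hosts"              # prints: 10.1.23.65 LRLinux1 hot
```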
To install DX and make the machine accessible without a password, download the DataIndexerLinux.zip file from the Documentation & Downloads section of the LogRhythm Community, extract the PreInstall.sh file to /home/logrhythm and execute the script.
If you are installing with the LogRhythm ISO, these files will already be in place; however, we recommend verifying that you have the version of the files that matches your LogRhythm deployment version.
sh ./PreInstall.sh
Generate a plan file from the LogRhythm XM/PM using the infrastructure installer, including the IP of the Linux DX system, and copy plan.yml from the newly created LRDeploymentPackage folder on the XM to the node from which the DX installation will be run.
Run the installer with the hosts file argument:
sudo sh LRDataIndexer-<version>.x86_64.run --hosts <absolute path to .hosts file> --plan /home/logrhythm/Soft/plan.yml --es-cluster-name <cluster_name>
Press Tab after you start typing the installer name, and the filename autocompletes for you.
If prompted for the SSH password, enter the password for the logrhythm user.
The script installs or upgrades the Data Indexer. Common components are installed at /usr/local/logrhythm/.
LogRhythm Common Components (API Gateway and Service Registry) logs can be exported with:
sudo journalctl -u LogRhythmAPIGateway > lrapigateway.log
sudo journalctl -u LogRhythmServiceRegistry > lrserviceregistry.log
When the installation or upgrade is complete, a confirmation message appears.
Check the status of services by typing sudo systemctl at the prompt, and look for failed services.
If the installation or upgrade fails with the error — failed to connect to the firewalld daemon — ensure that firewalld is running on all cluster nodes and start this procedure again. To do this, log on to each node and run: sudo systemctl start firewalld
Install a Multi-node Cluster
Run the installer once for each cluster; the package installer installs a Data Indexer on each node. Run it on the same machine where you ran the original installer.
Before starting the Data Indexer installation or upgrade, ensure that firewalld is running on all cluster nodes. To do this, log on to each node and run: sudo systemctl start firewalld
Log on to your Indexer appliance or server as logrhythm.
Change to the /home/logrhythm/Soft directory where you copied the script.
You should have a file named hosts in the /home/logrhythm/Soft directory that was used during the original installation.
The contents of the file might look like the following:
10.1.23.65 LRLinux1 hot
10.1.23.67 LRLinux2 warm
If you need to create a hosts file, use vi to create a file in /home/logrhythm/Soft called hosts. The following command sequence illustrates how to create and modify a file with vi:
To create the hosts file and open it for editing, type vi hosts.
To enter INSERT mode, type i.
Enter the IPv4 address, hostname to use for the Indexer, and box type, separated by a space.
Press Esc.
To exit and save your hosts file, type :wq.
To install DX and make the machine accessible without a password, download the DataIndexerLinux.zip file from the Documentation & Downloads section of the LogRhythm Community, extract the PreInstall.sh file to /home/logrhythm, and execute the script.
sh ./PreInstall.sh
Generate a plan file that includes the IP of the Linux DX system, and copy plan.yml from the newly created LRDeploymentPackage folder on the XM to the node from which the DX installation will be run.
Run the installer with the hosts file argument:
sudo sh LRDataIndexer-<version>.x86_64.run --hosts <absolute path to .hosts file> --plan /home/logrhythm/Soft/plan.yml --es-cluster-name <cluster_name>
Press Tab after you start typing the installer name, and the filename autocompletes for you.
If prompted for the SSH password, enter the password for the logrhythm user.
The script installs or upgrades the Data Indexer on each of the DX machines. Common components are installed at /usr/local/logrhythm.
LogRhythm Common Components (API Gateway and Service Registry) logs can be exported with:
sudo journalctl -u LogRhythmAPIGateway > lrapigateway.log
sudo journalctl -u LogRhythmServiceRegistry > lrserviceregistry.log
When the installation or upgrade is complete, a confirmation message appears.
Check the status of services by typing sudo systemctl at the prompt, and look for failed services.
If the installation or upgrade fails with the following error — failed to connect to the firewalld daemon — ensure that firewalld is running on all cluster nodes and start the installation again. To do this, log on to each node and run: sudo systemctl start firewalld
Hyper-V and LogRhythm Gen6 Appliances
Starting with the Gen6 appliances, LogRhythm Windows systems ship with the Hyper-V feature enabled by default, although it may not be necessary in all deployments. LogRhythm supports running up to two virtual machines under the Windows Server 2022 Standard license: one for Open Collector and, optionally, one for the Data Indexer. LogRhythm does not recommend or support running other virtual machine services, as they could negatively impact performance of the appliance.
LogRhythm Appliance | Open Collector VM | Data Indexer VM |
|---|---|---|
XM2600 | Optional - Customer Install | Not Used |
XM4600 | Optional - Customer Install | Not Used |
XM6600 | Optional - Customer Install | Not Used |
XM8600 | Optional - Customer Install | Required - Pre-Installed from factory |
PM7600 | Optional - Customer Install | Not Used |
DP7600 | Optional - Customer Install | Not Used |
AIE7600 | Optional - Customer Install | Not Used |
Customers can disable the Hyper-V feature on appliances where it is not required. Open Server Manager, select Manage > Remove Roles and Features, and on the Server Roles screen, clear the Hyper-V check box.
Removing Hyper-V functionality requires a reboot.
Verify Appliance Functionality
Verify Log Collection via Tail. For more information, see the Create New Tails topic in the SIEM Help.
Ensure log data is being received by viewing the log data in the Tail display.
Configure the Tail to query all available log sources for the last 24 hours. Do not configure any filters.
Ensure logs are being processed by double-clicking a row in the Log/Event List pane, and checking for metadata parsing and classification. It is sufficient to verify some data loaded into the fields on the Processed Metadata Fields tab.
Verify Event Forwarding by opening the Personal Dashboard and viewing events as they arrive.
Visually check system health and status by opening the Deployment Monitor. The Deployment Monitor provides statistics about log collection and system resource usage.
Log collection happens from the older date to the newer date. If no data is present, repeat the Tail using a timeframe further in the past. It may take your LogRhythm appliance several hours to catch up to the present after collection begins.
Additional Tasks
Activate and register the Microsoft Windows operating system on the appliance.
Ensure that you have the latest LogRhythm software, especially if there was a time lapse between the receipt and the setup of the appliance.
Configure log collection from additional sources.
Run Microsoft Windows Update to confirm that you have the latest Microsoft updates installed on the appliance.