Install the LogRhythm Data Indexer
Deploy OS Image to Each Linux Data Indexer Node
The Linux Data Indexer can be installed on CentOS 7.x Minimal, Red Hat Enterprise Linux (RHEL) 7 or 9, or Rocky 9.
For new installations we recommend Rocky 9 or RHEL 9.
To simplify the installation, LogRhythm provides an ISO image that contains the Rocky 9 operating system and the Data Indexer installer package.
To use RHEL 7 or RHEL 9, you need to download and install it from the Red Hat website, and then follow the configuration instructions in this guide.
Prepare for the Installation
Before you begin, make sure you have done the following:
Download the installation ISO. The installation ISO requires two physical disks in the Data Indexer system.
The ISO download link should have been provided to you along with your LogRhythm license, or the ISO can be downloaded from the LogRhythm Community under "Documents & Downloads" then "SIEM". Contact LogRhythm Support if you cannot locate this link.
- For a virtual installation, create a new virtual machine that meets the following requirements:
- OS Type is Linux
- OS Version is Red Hat 64-bit
- Hard drive, RAM, and processor meet the requirements stated in the Reference Architecture matching a LogRhythm standard appliance size (DX5x00 or DX7x00)
- Two disks, one used for the OS and the second larger disk for Elasticsearch data
- In the boot order of the system, Hard Disk should be listed before the CD/optical drive
- Note the IP address to be applied to each node, the netmask, the IP address of your default gateway, and the IP addresses of two NTP servers to use. Assign only a single IP address per box; multiple IPs on a box are not supported and will result in instability or outages.
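To confirm that a node has only a single IPv4 address assigned, you can run a quick check with the standard iproute2 tools (a sketch; interface names vary by environment):
CODE
ip -4 -brief addr show
Only one non-loopback IPv4 address should appear in the output.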
If you are installing a cluster of Data Indexers, note the following Clustering Rules apply:
- Each Hot Data Indexer server must be identical in rating and should have near-identical CPU and memory specifications. For example, if you have a cluster of three DX5500s, you should not add a DX7500 to that same cluster.
- Within a single cluster, all HOT nodes must use the same disk type. Under no circumstances should you mix HDD and SSD hot nodes within the same cluster.
- When crossing generations of LogRhythm Appliances, ensure the disk types match:
  - DX5500: HDD, 10k/s
  - DX7500 (purchased before Q2 2021): HDD, 20k/s
  - DX7500 (purchased after Q2 2021): SSD, 20k/s
  - DX5600: SSD, 10k/s
  - DX7600: SSD, 20k/s
- DX5500s cannot be mixed in the same cluster with DX5600s, even though they have the same rating, because their disk types are different.
- DX7500s purchased after Feb 2021 with SSDs can be mixed with DX7600s, as they have near-identical CPU/memory and matching disk types.
- Mixing physical and virtual DX hot nodes in the same cluster is not supported when customer-managed virtual infrastructure is used.
- The XM8600 ships with a single DX7600 virtual machine; this can be mixed with hardware DX7600s in the same cluster because it is LogRhythm-managed virtual infrastructure. 10 Gb/s or higher networking is required in this design.
- Warm nodes within a single cluster can be mixed across generations; however, it is recommended that all warm nodes in the same cluster match in storage quantity to make the most use of the warm tier storage in the cluster.
- Clusters with a mix of CentOS/RHEL 7.x and Rocky/RHEL 9.x are supported; you only need to run the package installer on one of the cluster nodes.
- Your cluster can contain 1 or 3-10 physical/virtual hot boxes, and 0-10 warm boxes.
Install CentOS 7 or Rocky 9 Minimal using the LogRhythm ISO
The ISO installation creates the required "logrhythm" user, creates and sizes all of the required partitions, installs prerequisite packages, and prompts you for network, DNS, and NTP settings upon first logon.
- If you are installing on a physical computer, mount the ISO through a bootable medium such as DVD or OOBM (iDRAC/iLo). For a virtual install, mount the ISO for the installation.
- Boot the computer from the mounted ISO.
- When the boot screen appears, use the arrow keys and the Enter key to select Install CentOS 7/Rocky 9.
The operating system will be installed, which can take up to 10 minutes.
- When prompted to log on, enter the following credentials:
- Login: logrhythm
- Password: enter the default LogRhythm password (contact support if you are unsure or need this password)
- You are prompted to run the initial configuration script. The script is optional, but if you do not run it, your Indexer will be configured to use DHCP on the primary Ethernet adapter, which is not a supported configuration for a production environment.
- To run the script, type y.
You are prompted for network, DNS, and NTP details. At each prompt, detected or default values are displayed in parentheses.
- To accept these values, press Enter.
Enter the network and NTP information, as follows:
Prompt | Description
---|---
IP Address | The IP address that you want to assign to this Data Indexer node.
Netmask | The netmask to use.
Default Gateway | The IP address of the network gateway.
Domain name servers | The IP address of one or more domain name servers (DNS). If any servers were found via DHCP, they will be displayed as the defaults. If no servers were found, the Google DNS servers will be displayed as the defaults.
NTP servers | The IP address of one or more NTP servers. Enter the IP address of each server one at a time, followed by Enter. When you are finished, press Ctrl + D to end.

After completing the items in the configuration script, the system tests connectivity to the default gateway and the NTP servers. If any of the tests fail, press n when prompted to enter the addresses again.
After confirming the NTP values, you will be logged on as the logrhythm user.
Restart the network interfaces to apply the new settings:
CODEsudo systemctl restart network
Restart chrony to apply NTP changes:
CODEsudo systemctl restart chronyd
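To confirm the node is actually synchronizing with your NTP servers, the standard chrony client tools can be used (an optional check):
CODE
chronyc sources
chronyc tracking
In the chronyc sources output, a source marked with ^* is the server chrony is currently synchronized to.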
To stop the sudo password prompt, add the following line to the sudoers file using the sudo visudo command:
CODE
logrhythm ALL=(ALL) NOPASSWD: ALL
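To confirm passwordless sudo is working for the logrhythm user, a minimal check (the -n flag makes sudo fail rather than prompt for a password):
CODE
sudo -n true && echo "passwordless sudo OK"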
Install CentOS 7, Rocky 9, or RHEL 7/9 using a Base Minimal ISO
- If you are installing on a physical computer, mount the ISO through a bootable medium such as DVD or OOBM (iDRAC/iLo). For a virtual install, mount the ISO for the installation.
- Boot the computer from the mounted ISO.
- When the boot screen appears, use the arrow keys and the Enter key to select Install CentOS 7/Rocky 9/RHEL 7/RHEL 9.
The operating system will be installed, which can take up to 10 minutes. Depending on the ISO used, you may be taken through a GUI installation experience where you can set IP addresses, time zone, and other settings.
Install Prerequisite Packages for the LogRhythm Data Indexer
CODE
sudo yum install firewalld
sudo yum install sshpass
sudo yum install chrony
sudo yum install tar
- Configure Time Sync to NTP source.
The default configuration file will have four servers listed, which may not be valid for your environment. Replace those with your internal NTP servers.
CODE
sudo vi /etc/chrony.conf
server 10.10.10.10
server 192.168.0.1
sudo systemctl restart chronyd
sudo systemctl enable chronyd
chronyc sources
- Configure Data Indexer Storage
For Single Data Disk DX Instances:
Confirm all disks are visible within the instance; you should see your additional storage as /dev/sdb or /dev/sdc.
This can vary depending on the hardware/virtual environment, but should follow a pattern where the first disk ends with a, the second disk with b, the third disk with c, and so on.
CODE
sudo lsblk | grep disk
Enter the following commands to configure a partition on the disk, substituting your disk name from above (for example, sudo parted /dev/xvdb):
CODE
sudo parted /dev/xvdb
mklabel gpt
mkpart
  Partition name?  (leave blank)
  File system type?  ext2
  Start?  1
  End?  (the size of this partition; for a 16 TB drive, enter 16000GB)
print
  (confirm the output looks correct)
quit
Build the file system using this command, specifying the partition created in the previous step (the disk name from above plus the partition number):
CODE# sudo mkfs.ext4 -m 0 /dev/xvdb1
Create the directory to which you wish to mount the disk. This should always be /usr/local/logrhythm.
CODE# sudo mkdir -p /usr/local/logrhythm/
Record the block UUID for the disk that you wish to mount:
CODE# sudo blkid
This is a necessary step. Always mount using the UUID, never the device name; otherwise the drive mapping will fail following an instance change.
Edit fstab and add this drive to be mounted to the directory you created previously "/usr/local/logrhythm":
CODE
sudo vi /etc/fstab
UUID=#########-####-####-####-############ /usr/local/logrhythm ext4 nodev,nosuid,nofail 1 2
Mount the new drive:
CODE# sudo mount -a
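To confirm the data disk mounted where expected, an optional check:
CODE
df -h /usr/local/logrhythm
The output should show the new partition (for example, /dev/xvdb1) mounted at /usr/local/logrhythm.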
For Multiple Data Disk DX Instances (LVM)
Confirm all disks are visible within the instance.
You should see your additional storage as /dev/xvdb, /dev/xvdc, and so on. This can vary depending on the instance type, but should follow a pattern where the first disk ends with a, the second disk with b, the third disk with c, and so on. Record these values, as you will use them in the next step.
CODE
sudo lsblk | grep disk
Create a volume group containing all data disks.
In this command, edit the disks based on your individual system, as recorded in the previous step:
CODE
sudo vgcreate vg_lrdata /dev/xvdb /dev/xvdc /dev/xvdd
Create a logical volume with data striping for optimal performance.
In this command, the stripes quantity (-i) should match the number of disks in the volume group. For example, if you created a volume group with two disks, use 2 here.
CODE
sudo lvcreate -i <stripes> -I 32 -l 100%FREE -n lv_lrdata vg_lrdata
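For example, for the three-disk volume group created in the previous step, the invocation would be (illustrative values):
CODE
sudo lvcreate -i 3 -I 32 -l 100%FREE -n lv_lrdata vg_lrdata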
Format the filesystem:
CODE# sudo mkfs.ext4 /dev/vg_lrdata/lv_lrdata
Record the block UUID for the disk that you wish to mount:
CODE# sudo blkid
Reload the systemd daemon to permit mounting of the new volume:
CODE# sudo systemctl daemon-reload
Edit fstab and add this volume to be mounted at /usr/local/logrhythm (the directory is created in the next step):
CODE
sudo vi /etc/fstab
UUID=#########-####-####-####-############ /usr/local/logrhythm ext4 nodev,nosuid,nofail 1 2
Create the directory to which you wish to mount the disk. This should always be /usr/local/logrhythm.
CODE# sudo mkdir -p /usr/local/logrhythm/
Mount the new drive:
CODE# sudo mount -a
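To confirm the logical volume is active and mounted, an optional check:
CODE
sudo lvs
df -h /usr/local/logrhythm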
Create a LogRhythm user.
Log in to the instance and elevate to the root user:
CODE# sudo su
Add a new user called logrhythm:
CODE# adduser logrhythm
Set the password for the LogRhythm user:
CODE# passwd logrhythm
Provide and confirm the password for the LogRhythm user.
Add the LogRhythm user to the wheel group:
CODE# usermod -aG wheel logrhythm
Ensure permissions on the /usr/local/logrhythm path are correct for your LogRhythm user:
CODE# sudo chown -R logrhythm:logrhythm /usr/local/logrhythm/
Switch to the logrhythm user:
CODE# su - logrhythm
To stop the sudo password prompt, add the following line to the sudoers file using the sudo visudo command:
CODE
logrhythm ALL=(ALL) NOPASSWD: ALL
(Optional.) Use the following command to update Red Hat's yum tool:
CODE# sudo yum update
Performing this update requires internet access. Customers without internet access should perform patching according to their usual procedures.
Install the Data Indexer on Linux
Install a Single-node Cluster
If you have more than one node in your cluster, follow the instructions in the Install a Multi-node Cluster section.
Before you begin, start firewalld if it is not already running:
CODEsudo systemctl start firewalld
- Log on to your Indexer appliance or server as logrhythm.
Go to the /home/logrhythm/Soft directory where you copied the updated installation or upgrade script.
If this is an upgrade, you should have a file named hosts in the /home/logrhythm/Soft directory that was used during the original installation.
The contents of the file might look like the following:
CODE
10.1.23.65 LRLinux1 hot
If you need to create a hosts file, use vi to create a file in /home/logrhythm/Soft called hosts. The following command sequence illustrates how to create and modify a file with vi:
- To create the hosts file and open it for editing, type vi hosts.
- To enter INSERT mode, type i.
- Enter the IPv4 address, the hostname to use for the Indexer, and the box type, separated by spaces.
- Press Esc.
- To save and exit, type :wq.
The box type parameter is optional. If not designated, the installer will assign a box type of hot. Do not use fully qualified domain names for Indexer hosts. For example, use only LRLinux1 instead of LRLinux1.myorg.com.
To install the DX and make the machine accessible without a password, download the DataIndexerLinux.zip file from the Documentation & Downloads section of the LogRhythm Community, extract the PreInstall.sh file to /home/logrhythm, and execute the script.
If you are installing with the LogRhythm ISO, these files will already be in place; however, we recommend checking that you have the matching version of the files for your LogRhythm deployment version.
This cannot be run as sudo or the DX Installer will fail.
CODE
sh ./PreInstall.sh
Generate a plan file from the LogRhythm XM/PM using the Infrastructure Installer, including the IP of the Linux DX system, and copy the plan.yml from the newly created LRDeploymentPackage folder on the XM to the node from which the DX installation will be run.
Run the installer with the hosts file argument:
Applicable to LogRhythm versions 7.13-7.15: if installing a DX on Rocky 9 in an offline setting, dark site, or environment where the DXs do not otherwise have internet access, you will need to temporarily disable the default Rocky repos or the LRII_Linux component of the DX Installer will fail.
Edit the BaseOS and AppStream Rocky repos in /etc/yum.repos.d using "sudo vi /etc/yum.repos.d/rocky.repo", change the lines reading "enabled = 1" to "enabled = 0", and write-quit vi using ":wq" before attempting to run the DX installer.
CODEsudo sh LRDataIndexer-<version>.x86_64.run --hosts <absolute path to .hosts file> --plan /home/logrhythm/Soft/plan.yml --es-cluster-name <cluster_name>
Press Tab after starting to type out the installer name, and the filename autocompletes for you.
--es-cluster-name is required only for a fresh setup, not for an upgrade.
If prompted for the SSH password, enter the password for the logrhythm user.
The script installs or upgrades the Data Indexer. Common components are installed at /usr/local/logrhythm/.
To collect LogRhythm Common Components (API Gateway and Service Registry) logs:
CODE
sudo journalctl -u LogRhythmAPIGateway > lrapigateway.log
sudo journalctl -u LogRhythmServiceRegistry > lrserviceregistry.log
This process may take up to 10 minutes. When the installation or upgrade is complete, a confirmation message appears.
Check the status of services by typing sudo systemctl at the prompt, and look for failed services.
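For example, to list only failed units (a standard systemd query):
CODE
sudo systemctl --failed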
If firewalld is not running, start it:
CODEsudo systemctl start firewalld
Install a Multi-node Cluster
Run the installer once per cluster; the package installer installs a Data Indexer on each node. Run it on the same machine where you ran the original installer.
Before you begin, start firewalld if it is not already running:
CODEsudo systemctl start firewalld
- Log on to your Indexer appliance or server as logrhythm.
Change to the /home/logrhythm/Soft directory where you copied the script.
You should have a file named hosts in the /home/logrhythm/Soft directory that was used during the original installation.
The contents of the file might look like the following:
10.1.23.65 LRLinux1 hot
10.1.23.67 LRLinux2 warm
If you need to create a hosts file, use vi to create a file in /home/logrhythm/Soft called hosts. The following command sequence illustrates how to create and modify a file with vi:
- To create the hosts file and open it for editing, type vi hosts.
- To enter INSERT mode, type i.
- Enter the IPv4 address, the hostname to use for the Indexer, and the box type, separated by spaces.
- Press Esc.
- To save and exit, type :wq.
The box type parameter is optional. If not designated, the installer will assign a box type of hot. Do not use fully qualified domain names for Indexer hosts. For example, use only LRLinux1 instead of LRLinux1.myorg.com.
To install the DX and make the machine accessible without a password, download the DataIndexerLinux.zip file from the Documentation & Downloads section of the LogRhythm Community, extract the PreInstall.sh file to /home/logrhythm, and execute the script.
This cannot be run as sudo or the DX Installer will fail.
CODE
sh ./PreInstall.sh
Generate a plan file that includes the IP of the Linux DX system, and copy the plan.yml from the newly created LRDeploymentPackage folder on the XM to the node from which the DX installation will be run.
Run the installer with the hosts file argument:
Applicable to LogRhythm versions 7.13-7.15: if installing a DX on Rocky 9 in an offline setting, dark site, or environment where the DXs do not otherwise have internet access, you will need to temporarily disable the default Rocky repos or the LRII_Linux component of the DX Installer will fail.
Edit the BaseOS and AppStream Rocky repos in /etc/yum.repos.d using "sudo vi /etc/yum.repos.d/rocky.repo", change the lines reading "enabled = 1" to "enabled = 0", and write-quit vi using ":wq" before attempting to run the DX installer. In a multi-node cluster, you will need to do this on all nodes.
CODEsudo sh LRDataIndexer-<version>.x86_64.run --hosts <absolute path to .hosts file> --plan /home/logrhythm/Soft/plan.yml --es-cluster-name <cluster_name>
Press Tab after starting to type out the installer name, and the filename autocompletes for you.
--es-cluster-name is required only for a fresh setup, not for an upgrade.
If prompted for the SSH password, enter the password for the logrhythm user.
The script installs or upgrades the Data Indexer on each of the DX machines. Common components are installed at /usr/local/logrhythm.
To collect LogRhythm Common Components (API Gateway and Service Registry) logs:
CODE
sudo journalctl -u LogRhythmAPIGateway > lrapigateway.log
sudo journalctl -u LogRhythmServiceRegistry > lrserviceregistry.log
This process may take up to 10 minutes. When the installation or upgrade is complete, a confirmation message appears.
- Check the status of services by typing sudo systemctl at the prompt, and look for "failed" services.
If firewalld is not running, start it:
CODEsudo systemctl start firewalld
(Optional) Use the Data Indexer Node Installer
The LogRhythm Data Indexer (LRDX) Node Installer is available to users who have purchased a DX7500 or DX7600. The installer leverages the resources on the large DX machines to improve indexing and TTL performance by adding a second Elasticsearch instance to each machine. This increases the available heap space for the cluster, allowing additional index data to be stored in the hot tier.
The LRDX Node Installer installs and adds the second instance of Elasticsearch to the DX cluster on each DX host; all HOT nodes in the cluster must match.
The LRDX Node Installer is required to reach the specified performance numbers for the DX7500 or DX7600. Failing to run the Data Indexer Node Installer will result in an approximately 50% penalty to performance and data index storage TTL.
Prerequisites
A CPU core count of at least 50 and at least 200 GB of RAM are required for the LRDX Node Installer to run.
Install a New DX7500
Before running the LRDX Node Installer, follow the standard installation documentation for the version of software you are deploying. For more information, see Install a New LogRhythm Deployment.
- Connect to the Data Indexer system as a LogRhythm user.
Download the LRDXNodeInstaller-<version>.x86_64.run package installer to the logrhythm user's home directory on one of your Data Indexer appliances (for example, /home/logrhythm/Soft). The installer is available in the Support/Partner downloads section of the LogRhythm Community.
Change to the Soft directory:
CODEcd Soft
Run the LRDX Node Installer with the host file created in the initial install:
CODEsudo sh <installer> --hosts /home/logrhythm/Soft/hosts --add
The hosts file must follow a defined pattern of {IPv4 address}, {hostname}, {boxtype} (mandatory) on each line. This file should already exist from the Data Indexer initial installation. The file might look like the following:
10.1.23.91 LRLinux1 hot
The box type parameter is mandatory in the hosts file; if not designated, the installer will fail with a "missing parameter" error.
- When prompted for the SSH password, enter the password for the LogRhythm user.
Uninstall a Node
To uninstall the software or a Linux node:
Move the data from the secondary Elasticsearch node back to the primary by running:
CODEsudo sh <installer> --hosts /home/logrhythm/Soft/hosts --move
The time required to complete this task depends on the amount of data stored.
Remove the secondary node by running:
CODEsudo sh <installer> --hosts /home/logrhythm/Soft/hosts --remove
The hosts file must follow a defined pattern of {IPv4 address}, {hostname}, {boxtype}(mandatory) on each line. The file might look like the following:
10.1.23.91 LRLinux1 hot
The box type parameter is mandatory in the hosts file; if not designated, the installer will fail with a "missing parameter" error.
If the data move to the primary Elasticsearch node has not completed, this operation fails so that data loss is avoided.
Add a Node to an Existing Cluster
Adding a node to an existing cluster requires running the DX installer and will cause downtime for the Data Indexer. The steps for adding a node are generally the same but may differ slightly depending on the type of node being added and the current cluster size.
Prerequisites
These instructions assume:
- the Data Indexer ISO has already been installed on the new server.
- the node is in place and online.
- the first run has been executed and configured.
- the new node has the static IP address set.
- the new node has the hostname set.
- the new node has NTP configured.
- the Soft directory exists.
- the “logrhythm” user/password is set to match the existing “logrhythm” user.
Downtime
The amount of downtime experienced by the cluster will depend on the hardware, number of open indices, and their relative sizes. The larger the indices are, the longer full recovery may take. All data processed by the Data Processors will be spooled to the DXReliablePersist state folder until the cluster is recovered, and the data can be inserted into the cluster.
Hardware Configuration
All hot nodes in the cluster require matching resources. Do not add a node to the cluster if the new node does not have matching CPU, disk/partition, and memory configurations for the existing nodes in the cluster. Hot and Warm node hardware configurations may be different, although all hot nodes in the cluster should have the same configuration, and all warm nodes in the cluster should have the same configuration. A mismatch in CPU, Memory, or disk/partition sizes may cause performance issues and can affect the number of hot and warm indices available across the entire cluster. Warm nodes will still be used for data ingestion.
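To compare a new node against the existing nodes before adding it, you can gather CPU, memory, and disk details on each node with standard tools (an illustrative check, not a LogRhythm utility):
CODE
lscpu | grep -E 'Model name|^CPU\(s\)'
free -h
lsblk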
Data Indexer Installer
Run the installer from the same node the installer was originally run from. Do not run the installer on the new node. Adding a new node to the cluster requires configuration changes on all nodes. Running the installer from the install node will ensure these configurations are pushed to all nodes in the cluster.
Verify that you are installing the correct version of the LogRhythm Data Indexer. If an incorrect version is installed, the Data Indexer cannot be downgraded without fully uninstalling the software.
Verify the current installed version by viewing the version file on an existing node:
cat /usr/local/logrhythm/version
Cluster Health
Elasticsearch is not required to be in a green state while adding a node, but it is best practice to verify the cluster is green before adding the node to ensure the process is successful.
Run the following command on any existing node to see the cluster health:
curl localhost:9200/_cluster/health?pretty
Verify that the status is green. If the cluster status is yellow or red, we recommend correcting any issues with the cluster before proceeding.
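For a quick check of just the status field, you can grep the pretty-printed output (an optional shortcut):
CODE
curl -s localhost:9200/_cluster/health?pretty | grep '"status"'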
Cluster Size
Consider the size of the cluster before adding a node, as there are some restrictions to the sizes of a single cluster.
- Total maximum cluster size is 30 Elasticsearch nodes
- A cluster may have a maximum of 20 physical nodes
- A cluster may have up to 10 software nodes
Cluster Configurations:
A possible configuration: 1 or 3-10 Hot physical Nodes + 0-10 2XDX software nodes + 0-10 Warm Nodes.
A single cluster may not contain only two physical hot nodes (including when 2XDX and warm nodes are part of the cluster); this avoids a "split-brain" scenario. A cluster can have 1 or 3 to a maximum of 10 physical hot nodes.
A single cluster must contain at least one hot node and can contain from 0 up to 10 warm nodes.
2XDX only applies to DX7500 nodes, as the 2XDX software can only be installed on servers with 256 GB of memory and 56 vCPUs. Each physical hot node can host one additional 2XDX instance (hot software node) if it meets the resource requirements. Installing 2XDX on virtual servers is not recommended, as performance can be impacted. If 2XDX nodes are used in a cluster, they should be installed on all physical hot nodes in the cluster.
Installation Procedure
Follow this sequence for installation:
- Verify that the new node is online and ready to be added to the cluster, noting the current IP address and hostname of the new node.
The node should be started and ready for the LogRhythm software to be installed. You should not need to copy or edit any files for the new node.
Note the hostname and IP address of the server, as these will need to be added to the plan and to the hosts file on the install node in later steps.
Use the following commands to get the hostname and IP address of the server. The hostname must be set to the expected hostname before adding the node to the cluster.
CODE
hostname
ip a
- Identify the install node and verify the currently installed version of the Data Indexer.
If you start the install with a LogRhythm Data Indexer version higher than the version currently installed on the cluster, you may need to reimage the new server to install the lower version.
Verify the currently installed version by running the following command on any existing node in the cluster:
CODE
cat /usr/local/logrhythm/version
- When the DX installer is executed in later steps, you will need to run the installer from the same node the DX installer was originally run on. Usually, this is the first node in the cluster and will be the node that has the existing hosts file created for the original install.
- If you are unsure and need to identify the node, you can use one or both of the following methods:
- Check the /home/logrhythm/ and /home/logrhythm/Soft directory on the node for the hosts file. This is the file that was created during the original install. The hosts file will contain all existing nodes, their respective IPs, and the box type. This file does not need to exist on all nodes in the cluster, only the previous install node.
You can also verify if a node is the primary host by viewing the primary_host file on each node.
CODEcat /usr/local/logrhythm/env/primary_host
If is_primary_host=True, then this is the node on which the installer was last run.
If is_primary_host=False or (blank), then this is not the node on which the installer was last run.
- Create an updated LRII package using the LogRhythm Infrastructure Installer on the Platform Manager that includes the new node's IP address.
- On the Platform Manager server, open the LogRhythm Infrastructure Installer from the LogRhythm programs group.
- Click Add/Remove Host.
- Click Add Host.
- Add the IP Address of the new DX host, and optionally, the host nickname.
- Click Save.
- Click Create Deployment Package.
- Verify the IP Addresses in the list and click Create Deployment Package.
- Select the folder location in which to create the new LRDeploymentPackage, and click Select Folder.
Once the package is created, the tool provides the path to the LRDeploymentPackage folder. Copy this path to the clipboard if necessary to help locate the newly created package.
- Click Next Step.
- Click Run Host Installer on This Host.
This will start the install of the newly generated LRII package on the Platform Manager.
Once the LRII install completes on the Platform Manager, expand "Step 2". At this point, leave the LogRhythm Deployment Tool screen open on the Platform Manager; you will return to this screen after the node is installed.
Do not close the LogRhythm Deployment Tool window until the cluster is successfully verified. Closing the tool at this step may require starting the process over from the beginning (including the DX install itself) to be able to validate the deployment.
- Copy the necessary files to the Data Indexer install node. The currently installed version may already be present in the Soft folder. You will not need to copy any files to the new node, as the Data Indexer installer will copy the necessary files to all nodes in the cluster during install.
Using WinSCP or similar, copy the plan.yml file (from the newly created LRDeploymentPackage folder you selected in the previous steps) to the /home/logrhythm/Soft directory on the Data Indexer install node (not the new node you are adding to the cluster). This file contains the updated plan information for the common components.
Make sure you are using the newly generated plan.yml file. Using a previously generated plan file may render the Data Indexer unable to communicate with other LogRhythm services and servers.
- Verify that the Data Indexer installer and the PreInstall.sh file are both present in the Soft folder.
If these files are missing, re-verify that this is the node the installer was originally run from. If the files were deleted since the last install, download the standalone Linux Data Indexer installer zip for your version from the Community and copy the two files included in the zip to the Soft folder:
PreInstall.sh
LRDataIndexer-{version}.x86_64.run
- Update the existing hosts file on the installer node with the new node information. The hosts file is usually created in the /home/logrhythm/Soft directory but may be in /home/logrhythm/. This file should already contain the IP, hostname, and box type of the existing nodes in the cluster.
Edit the LogRhythm-specific hosts file used by the Data Indexer installer using vi or a similar editor:
CODE
sudo vi /home/logrhythm/Soft/hosts
Type i to enter insert mode.
Edit the necessary lines.
Press Esc to exit insert mode.
To write and quit, type :wq.
Add a new line with the IP address, hostname, and box type (either hot or warm) in the following format:
CODE
<IP> <HOSTNAME> <box type>
Example: 192.168.0.1 mydxhostname hot
The box type is optional if there are only hot nodes in the cluster. If the other host lines include the box type, the new line must include it as well. If warm nodes exist or you are adding a warm node, the box type must be set for all hosts for a successful configuration during install.
- Run the PreInstall.sh script (on the installer node) to set up PubKey (password-less) authentication.
(Optional) If you had to copy PreInstall.sh, you will need to set execute permission on the PreInstall.sh script.
CODEsudo chmod +x /home/logrhythm/Soft/PreInstall.sh
Execute the PreInstall.sh script:
CODEsh /home/logrhythm/Soft/PreInstall.sh
- Enter the current ssh password for the logrhythm user (password used to connect to the server).
- Enter the path to the hosts file updated in the last step.
The script will run through multiple steps.
Some steps of PreInstall.sh may show a warning or error depending on the current configuration. These can be ignored if the "Testing ssh as logrhythm user using Public Key Authentication" section shows SSH OK for all hosts in the hosts file. If SSH: Failed shows for any host, review the output and fix any SSH issues prior to running the DX installer. The Data Indexer installer WILL fail if PubKey Authentication is not successfully set up prior to running the installer.
- Run the Data Indexer installer to add the node to the cluster, using the commands below. You will need to supply the full path to the hosts file, the full path to the plan.yml file, the existing cluster name, and the --force switch. The force switch is needed because you are running the installer against the same installed version.
This step assumes the cluster health is green. The existing cluster name can be found in the LogRhythm Console on the Clusters tab, under Deployment Monitor.
Change to the Soft directory:
CODEcd /home/logrhythm/Soft
Run the base command:
CODE
sudo sh LRDataIndexer-<version>.x86_64.run --hosts <full path to hosts> --plan <full path to plan.yml> --es-cluster-name=<existingclustername> --force
Example:
CODE
sudo sh LRDataIndexer-10.0.0.121-1.x86_64.run --hosts /home/logrhythm/Soft/hosts --plan /home/logrhythm/Soft/plan.yml --es-cluster-name=mycluster --force
(Optional) If the newly added node is a DX7500 node, run the secondary LRDX Node Installer to add the 2XDX software to the newly installed node.
The LRDXNodeInstaller is a separate installer from the Data Indexer installer, available from the downloads page.
On the install node, execute the LRDXNodeInstaller using the following base command:
CODEsudo sh /usr/local/logrhythm/DXNodeInstaller-<version>.x86_64.run --add --hosts <fullpathtohosts> --ma
Example:
CODEsudo sh /usr/local/logrhythm/DXNodeInstaller-11.0.0.4.x86_64.run --add --hosts /home/logrhythm/Soft/hosts --ma
Run the following command to verify that the node was successfully added to the cluster with the correct box type:
CODEcurl localhost:9200/_cat/nodeattrs?v
All nodes for the cluster should be present along with the current box type. Any 2XDX nodes can be identified as they will show as <hostname>-data for the node name.
You can also run the cluster health command to verify the total number of nodes present in the cluster:
CODEcurl localhost:9200/_cluster/health?pretty
Troubleshooting
After the install completes, all Data Indexer services will automatically start on all nodes. It may take a minute or two for Elasticsearch to start on all nodes.
If the Elasticsearch API endpoint does not respond after 5 minutes, check the Elasticsearch /var/log/elasticsearch/<clustername>.log file to identify any errors Elasticsearch may be experiencing on startup. The Elasticsearch Service log will exist on each node in the cluster. You may need to check the log on each individual node to determine the full extent of any issues with the service or cluster starting. The log will be named the same as the cluster name provided in the install command.
Get the service status on a specific node:
sudo systemctl status elasticsearch
Tail the Elasticsearch log:
tail -f /var/log/elasticsearch/<clustername>.log
When the Elasticsearch node services start and the master node is elected, the cluster health will go from red -> yellow -> green. It may take an extended period (hours) for all existing indices to be recovered after the install. The cluster health command will show you the percentage of index shards recovered. Indexing and search will be available once the primary shards have been recovered.
The cluster health change from red to yellow is usually relatively fast, but the time between the health change from yellow to green will depend on the number of indices, and their shard sizes.
You can verify the status of index recovery using the following command on any node:
watch -n2 'curl -s localhost:9200/_cat/recovery?v | grep -v done'
The number of shards that are recovered at any time is throttled by Elasticsearch settings.
If shards stop showing in the recovery list, and the cluster health has not yet reported green, please contact LogRhythm Support to investigate why shards are not initializing or assigning as expected.
Validate the Linux Indexer Installation
To validate a successful upgrade of the Linux Indexer, check the following logs in /var/log/persistent:
- ansible.log echoes console output from the upgrade, and should end with details about the number of components that upgraded successfully, as well as any issues (unreachable or failed)
- logrhythm-node-install.sh.log lists all components that were installed or updated, along with current versions
- logrhythm-cluster-install.sh.log should end with a message stating that the Indexer was successfully installed
Additionally, you can issue the following command to verify the installed version of various LogRhythm services, tools, and libraries, as well as third party tools:
sudo yum list installed | grep -i logrhythm
- Verify that the following LogRhythm services are at the same version as the main installer version:
- Bulldozer
- Carpenter
- Columbo
- GoMaintain
- Transporter
- Watchtower
- Verify that the following tools/libraries have been installed:
- Cluster Health
- Conductor
- Persistent
- Silence
- Unique ID
- Upgrade Checker
- Verify the following version of this service:
- elasticsearch 7.10.2
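You can also confirm the running Elasticsearch version directly from the API; the root endpoint reports version.number (an optional check):
CODE
curl -s localhost:9200 | grep '"number"'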
Verify a Warm Node
To identify whether a warm node is working correctly after installation, perform the following:
Verify Warm Node configuration:
CODEcurl localhost:9200/_cat/nodeattrs?v
Verify Node Settings in /usr/local/logrhythm/env/es_datapath:
CODE
[root@DX01 env]# cat /usr/local/logrhythm/env/es_datapath
DX_ES_PATH_DATA=/usr/local/logrhythm/db/elasticsearch/data
DX_ES_CLUSTER_NAME=<cluster name>
DX_ES_DISCOVERY_ZEN_PING_UNICAST_HOSTS=<IPs of eligible master nodes>
DX_ES_DISCOVERY_ZEN_MINIMUM_MASTER_NODES=<# of eligible master nodes/2 (rounded down) +1>
DX_ES_BOX_TYPE=warm
DX_ES_IS_MASTER=false
On each node in /usr/local/logrhythm/transporter/logs.json, verify the number of shards and replicas based on number of hot nodes:
CODE"number_of_shards": "<physical hot nodes * 2>" "number_of_replicas": (this will be "0" for single hot node or "1" for a multi hot node cluster)
For 2XDX, physical nodes are only used for the shard calculation. A three-node 2XDX will have six shards.- Verify warm node functionality:
- Wait until Elasticsearch heap pressure causes an open index to be moved to the warm node as a closed index.
- Verify that GoMaintain does not throw errors when moving the index to the warm node as Closed.
- (Optional) Perform an investigation against a closed index on the warm node (though this step alone will not confirm that the warm node is working).
Information about Automatic Maintenance
Automatic maintenance is governed by several settings in GoMaintain Config:
Disk Utilization Limit
Disk Util Limit. Indicates the percentage of disk utilization that triggers maintenance. The default is 80, which means that maintenance starts when the Elasticsearch data disk is 80% full.
Maintenance is applied to the active repository, as well as archive repositories created by SecondLook. When the Disk Usage Limit is reached, active logs are trimmed when “max indices” is reached. At this point, GoMaintain deletes completed restored repositories starting with the oldest date.
The default settings prioritize restored repositories above the active log repository. Restored archived logs are maintained at the sacrifice of active logs. If you want to keep your active logs and delete archives for space, set your min indices equal to your max indices. This forces the maintenance process to delete restored repositories first.
Force Merge Configuration
Force Merge Config. Combines index segments to improve search performance. In larger deployments, search performance could degrade over time due to a large number of segments. Force merge can alleviate this issue by optimizing older indices and reducing heap usage.
Parameter | Description | Default
---|---|---
Merging Enabled | If set to true, merging is enabled. If set to false, merging is disabled. | false
Logging of configuration and results for force merge can be found in C:\Program Files\LogRhythm\DataIndexer\logs\GoMaintain.log.
Index Configs
The DX monitors Elasticsearch memory and DX storage capacity.
GoMaintain tracks heap pressure on the nodes. If the pressure constantly crosses the threshold, GoMaintain decreases the number of days of indices by closing the index. Closing the index removes the resource needs of managing that data and relieves the heap pressure on Elasticsearch. GoMaintain continues to close days until the memory is under the warning threshold and continues to delete days based on the disk utilization setting of 80% by default.
The default config is -1. This value monitors the system's resources and auto-manages the time-to-live (TTL). You can configure a lower TTL by changing this number. If this number is no longer achievable, the DX sends a diagnostic warning and starts closing indices.
Indices that have been closed by GoMaintain are not actively searchable in 7.6 but are maintained for reference purposes. To see which indices are closed, run a curl command such as the following:
curl -s -XGET 'http://localhost:9200/_cat/indices?h=status,index' | awk '$1 == "close" {print $2}'
Open a browser to http://localhost:9200/_cat/indices?v to show both open and closed indices.
Indices can be reopened with the following query as long as you have enough heap memory and disk space to support this index. If you do not, it immediately closes again.
curl -XPOST 'localhost:9200/<index>/_open?pretty'
After you open the index in this way, you can investigate the data in either the Web Console or Client Console.