Add a Data Indexer Node

 

This procedure involves installation processes. For more information, see the LogRhythm Software Installation Guide on the LogRhythm Community.

Adding a data indexer node requires running the DX installer and will cause downtime for the Data Indexer. The steps for adding a node are generally the same but may differ slightly depending on the type of node being added and the current cluster size.

Prerequisites

These instructions assume:

  • the Data Indexer ISO has already been installed on the new server.
  • the node is in place and online.
  • the first run has been executed and configured.
  • the new node has the static IP address set.
  • the new node has the hostname set.
  • the new node has NTP configured.
  • the Soft directory exists.
  • the “logrhythm” user's password on the new node is set to match the existing “logrhythm” user's password.
See the Prepare for the Installation section of the Install the LogRhythm Data Indexer topic for more information.

Downtime

The amount of downtime experienced by the cluster depends on the hardware, the number of open indices, and their relative sizes. The larger the indices, the longer full recovery may take. All data processed by the Data Processors is spooled to the DXReliablePersist state folder until the cluster has recovered and the data can be inserted into the cluster.
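
While the data is spooled and the cluster recovers, you can watch the health status change from any node. This is a convenience sketch using the same cluster health endpoint shown later in this topic:

CODE
# Refresh the cluster health output every 5 seconds; press Ctrl+C to exit.
watch -n5 'curl -s localhost:9200/_cluster/health?pretty'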

Hardware Configuration

All hot nodes in the cluster require matching resources. Do not add a node to the cluster if the new node's CPU, disk/partition, and memory configurations do not match the existing nodes in the cluster. Hot and warm node hardware configurations may differ from each other, but all hot nodes in the cluster should have the same configuration, and all warm nodes in the cluster should have the same configuration. A mismatch in CPU, memory, or disk/partition sizes may cause performance issues and can affect the number of hot and warm indices available across the entire cluster. Warm nodes will still be used for data ingestion.
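
To compare resources between the new node and an existing hot node, you can run standard Linux utilities on each and confirm the outputs match. A minimal sketch:

CODE
# CPU count, memory, and disk/partition layout, respectively.
nproc
free -h
lsblk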

Data Indexer Installer

Run the installer from the same node the installer was originally run from. Do not run the installer on the new node. Adding a new node to the cluster requires configuration changes on all nodes. Running the installer from the install node will ensure these configurations are pushed to all nodes in the cluster.

Verify that you are installing the correct version of the LogRhythm Data Indexer. If an incorrect version is installed, the Data Indexer cannot be downgraded without fully uninstalling the software.

Verify the current installed version by viewing the version file on an existing node:

CODE
cat /usr/local/logrhythm/version

Cluster Health

Elasticsearch is not required to be in a green state while adding a node, but it is best practice to verify the cluster is green before adding the node to ensure the process is successful.

Run the following command on any existing node to see the cluster health:

CODE
curl localhost:9200/_cluster/health?pretty

Verify that the status is green. If the cluster status is yellow or red, we recommend correcting any issues with the cluster before proceeding.
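
If the status is yellow or red, Elasticsearch can report why the first unassigned shard has not been allocated. A hedged example using the standard allocation explain API:

CODE
# Explain why the first unassigned shard cannot be allocated.
curl localhost:9200/_cluster/allocation/explain?pretty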

Cluster Size

Consider the size of the cluster before adding a node, as there are some restrictions to the sizes of a single cluster.

  • Total maximum cluster size is 30 Elasticsearch nodes
  • A cluster may have a maximum of 20 physical nodes
  • A cluster may have up to 10 software nodes

Cluster Configurations

A possible configuration: 1 or 3-10 physical hot nodes + 0-10 2XDX software nodes + 0-10 warm nodes.

A single cluster may not contain exactly two physical hot nodes, even when 2XDX and warm nodes are part of the cluster; this restriction avoids a “split-brain” scenario. A cluster can contain one physical hot node, or from three up to a maximum of 10.

A single cluster must contain at least one hot node and can contain from 0 up to 10 warm nodes.

2XDX only applies to DX7500 nodes, as the 2XDX software can only be installed on servers with 256 GB of memory and 56 vCPUs. Each physical hot node can host one additional instance of the 2XDX (hot software node) if it meets the resource requirements. Installing 2XDX on virtual servers is not recommended, as performance can be impacted. If 2XDX nodes are used in a cluster, they should be installed on all physical hot nodes in the cluster.
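
Before adding a node, you can confirm the current Elasticsearch node count against the limits above. A quick check from any existing node:

CODE
# List Elasticsearch node names, one per line, and count them.
curl -s localhost:9200/_cat/nodes?h=name | wc -l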

Installation Procedure

Follow this sequence for installation:

  1. Verify that the new node is online and ready to be added to the cluster, noting the current IP and hostname of the new node.

    The node should be started and ready for the LogRhythm software to be installed.

    You should not need to copy or edit any files for the new node.

    Note the hostname and IP address of the server, as these will need to be added to the plan and to the hosts file on the install node in later steps.
    Use the following commands to get the hostname and IP of the server. The hostname must be set to the expected hostname before adding the node to the cluster.

      CODE
      hostname
      ip a

  2. Identify the install node and verify the currently installed version of Data Indexer.

    If you start the install with a LogRhythm Data Indexer version higher than the version currently installed on the cluster, you may need to reimage the new server to install the lower version.
    1. Verify the currently installed version by running the following command on any existing node in the cluster: 

      CODE
      cat /usr/local/logrhythm/version
    2. When the DX installer is executed in later steps, you will need to run it from the same node the DX installer was originally run on. Usually, this is the first node in the cluster and the node that has the existing hosts file created for the original install.
    3. If you are unsure and need to identify the node, you can use one or both of the following methods:
      1. Check the /home/logrhythm/ and /home/logrhythm/Soft directories on the node for the hosts file. This is the file that was created during the original install. The hosts file will contain all existing nodes, their respective IPs, and the box type. This file does not need to exist on all nodes in the cluster, only on the previous install node.
      2. You can also verify if a node is the primary host by viewing the primary_host file on each node.

        CODE
        cat /usr/local/logrhythm/env/primary_host

        If is_primary_host=True, this is the node on which the installer was last run.
        If is_primary_host=False, or the file is blank, this is not the node on which the installer was last run.
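
        If you prefer to check all nodes at once, the loop below is a sketch that reads node IPs from an existing hosts file (the path is an assumption; your file may be in /home/logrhythm/ instead) and prints each node's primary_host file. You will be prompted for the logrhythm password on each host unless key-based authentication is already in place.

        CODE
        # Print the primary_host flag from every node listed in the hosts file.
        for ip in $(awk '{print $1}' /home/logrhythm/Soft/hosts); do
          echo "== $ip =="
          ssh logrhythm@$ip cat /usr/local/logrhythm/env/primary_host
        done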

  3. Create an updated LRII package using the LogRhythm Infrastructure Installer on the Platform Manager that includes the new node's IP address.
    1. On the Platform Manager server, open the LogRhythm Infrastructure Installer from the LogRhythm programs group.
    2. Click Add/Remove Host.
    3. Click Add Host.
    4. Add the IP Address of the new DX host, and optionally, the host nickname.
    5. Click Save.
    6. Click Create Deployment Package.
    7. Verify the IP Addresses in the list and click Create Deployment Package.
    8. Select the folder location in which to create the new LRDeploymentPackage, and click Select Folder.
      Once the package is created it will provide the path to the LRDeploymentPackage folder. Copy this path to the clipboard if necessary to help locate the newly created package.
    9. Click Next Step.
    10. Click Run Host Installer on This Host.
      This starts the install of the newly generated LRII package on the Platform Manager.
      Once the LRII install completes on the Platform Manager, expand “Step 2”. Leave the LogRhythm Deployment Tool screen open on the Platform Manager; you will return to this screen after the node is installed.
    Do not close the LogRhythm Deployment Tool window until the cluster is successfully verified. Closing the tool at this step may require starting the process over at the beginning (including the DX install itself) to be able to validate the deployment.
  4. Copy the necessary files to the Data Indexer install node. The currently installed version may already be present in the Soft folder. You will not need to copy any files to the new node as the Data Indexer installer will copy necessary files to all nodes in the cluster during install.
    1. Using WinSCP or similar, copy the plan.yml file (from the newly created LRDeploymentPackage folder you selected in the previous steps) to the /home/logrhythm/Soft directory on the Data Indexer install node (not the new node you are adding to the cluster). This file contains the updated plan information for the common components.

      Make sure you are using the newly generated plan.yml file. Using a previously generated plan file may render the Data Indexer unable to communicate with other LogRhythm services and servers.
    2. Verify that the Data Indexer installer and the PreInstall.sh file are both present in the Soft folder. 
    If these files are missing, re-verify that this is the node the installer was originally run from. If the files were deleted since the last install, download the standalone Linux Data Indexer installer zip for the installed version from the community and copy the two files included in the zip to the Soft folder.
    PreInstall.sh
    LRDataIndexer-{version}.x86_64.run
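
    You can confirm that both files are present with a simple listing:

      CODE
      # PreInstall.sh and the LRDataIndexer run file should both be listed.
      ls -l /home/logrhythm/Soft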
  5. Update the existing hosts file on the installer node with the new node information. The hosts file is usually created in the /home/logrhythm/Soft directory but may be in /home/logrhythm/. This file should already contain the IP, hostname, and box type of the existing nodes in the cluster.
    1. Edit the LR specific hosts file used by the Data Indexer Installer using vi or similar editor.

      CODE
      sudo vi /home/logrhythm/Soft/hosts

      Type i to enter insert mode.
      Edit the necessary lines.
      Press Esc to exit insert mode.
      Type :wq and press Enter to write the file and quit.

    2. Add a new line with the IP Address, Hostname, and box type (either hot or warm) in the following format:

      CODE
      <IP> <HOSTNAME> <box type>

      Example: 192.168.0.1 mydxhostname hot

      The box type is optional if there are only hot nodes in the cluster, but if the existing host lines include the box type, it must be included on the new line as well. If warm nodes exist, or you are adding a warm node, the box type must be set for all hosts for a successful configuration during install. A sample hosts file is shown below.
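
      For reference, a complete hosts file for a four-node cluster (three hot, one warm) might look like the following; the IPs and hostnames are illustrative only:

      CODE
      192.168.0.1 dxnode1 hot
      192.168.0.2 dxnode2 hot
      192.168.0.3 dxnode3 hot
      192.168.0.4 dxwarm1 warm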
  6. Run the PreInstall.sh script (on the installer node) to set up PubKey (password-less) authentication.
    1. (Optional) If you had to copy PreInstall.sh, you will need to set execute permission on the PreInstall.sh script.

      CODE
      sudo chmod +x /home/logrhythm/Soft/PreInstall.sh
    2. Execute the PreInstall.sh script:

      CODE
      sh /home/logrhythm/Soft/PreInstall.sh
    3. Enter the current ssh password for the logrhythm user (password used to connect to the server).
    4. Enter the path to the hosts file updated in the last step.
      The script will run through multiple steps.
    Some steps of PreInstall.sh may show a warning or error depending on the current configuration. These can be ignored if the Testing ssh as logrhythm user using Public Key Authentication section shows SSH OK for all hosts in the hosts file. If SSH: Failed is shown for any host, review the output and fix any SSH issues prior to running the DX installer.
    The Data Indexer installer WILL fail if PubKey authentication is not successfully set up prior to running the installer.
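
    To manually spot-check password-less authentication before running the installer, you can SSH from the install node to the new node. The command below should print the remote hostname without prompting for a password:

    CODE
    # Replace <new node IP> with the address of the node being added.
    ssh logrhythm@<new node IP> hostname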
  7. Run the Data Indexer installer to add the node to the cluster, using the base command shown below. You will need to supply the full path to the hosts file, the full path to the plan.yml file, the existing cluster name, and the --force switch. The force switch is needed because you are running the installer against the same installed version.

    This step assumes the cluster health is green. The existing cluster name can be found in the LogRhythm Console on the Clusters Tab, under Deployment Monitor.
    1. Change to the Soft directory:

      CODE
      cd /home/logrhythm/Soft
    2. Run the Base Command:

      CODE
      sudo sh LRDataIndexer-<version>.x86_64.run --hosts <full path to hosts> --plan <full path to plan.yml> --es-cluster-name=<existingclustername> --force


      Example:

      CODE
      sudo sh LRDataIndexer-10.0.0.121-1.x86_64.run  --hosts /home/logrhythm/Soft/hosts --plan /home/logrhythm/Soft/plan.yml --es-cluster-name=mycluster --force
    The Data Indexer installer will execute and run through the full install, adding the new node to the cluster. Once the successful message is displayed, the node has been added to the cluster. If you receive a message that the install failed, review the /var/log/persistent/ansible.log for the reasons for the failure, correct any underlying issues, and run the install command again.
  8. (Optional) If the newly added node is a DX7500 node, run the secondary LR DX Node Installer to add the 2XDX software to the newly installed node.

    The LRDXNodeInstaller is a separate installer from the Data Indexer installer, available from the downloads page.

    On the install node, execute the LRDXNodeInstaller using the following Base Command:

    CODE
    sudo sh /usr/local/logrhythm/DXNodeInstaller-<version>.x86_64.run --add --hosts <fullpathtohosts> --ma

    Example:

    CODE
    sudo sh /usr/local/logrhythm/DXNodeInstaller-11.0.0.4.x86_64.run --add --hosts /home/logrhythm/Soft/hosts --ma
  9. Run the following command to verify that the node was successfully added to the cluster with the correct box type:

    CODE
    curl localhost:9200/_cat/nodeattrs?v

    All nodes for the cluster should be present along with the current box type. Any 2XDX nodes can be identified as they will show as <hostname>-data for the node name. 

    You can also run the cluster health command to verify the total number of nodes present in the cluster:

    CODE
    curl localhost:9200/_cluster/health?pretty

Troubleshooting

After the install completes, all Data Indexer services will automatically start on all nodes. It may take a minute or two for Elasticsearch to start on all nodes.

If the Elasticsearch API endpoint does not respond after 5 minutes, check the /var/log/elasticsearch/<clustername>.log file to identify any errors Elasticsearch may be experiencing on startup. The Elasticsearch service log exists on each node in the cluster, so you may need to check the log on each individual node to determine the full extent of any issues with the service or cluster starting. The log is named after the cluster name provided in the install command.

Get the service status on a specific node:

CODE
sudo systemctl status elasticsearch


Tail the Elasticsearch log:

CODE
tail -f /var/log/elasticsearch/<clustername>.log

When the Elasticsearch node services start and the master node is elected, the cluster health will go from red -> yellow -> green. It may take an extended period (hours) for all existing indices to be recovered after the install. The cluster health command will show you the percentage of index shards recovered. Indexing and search will be available once the primary shards have been recovered.

The cluster health change from red to yellow is usually relatively fast, but the time between the health change from yellow to green will depend on the number of indices, and their shard sizes.

Performance may be impacted while the cluster recovers.

You can verify the status of index recovery using the following command on any node:

CODE
watch -n2 'curl -s localhost:9200/_cat/recovery?v | grep -v done'

The number of shards that are recovered at any time is throttled by Elasticsearch settings.
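
Recovery throttling is controlled by standard Elasticsearch settings such as indices.recovery.max_bytes_per_sec. You can view the effective values from any node; this is a read-only check, and any changes should be made only with guidance from LogRhythm Support:

CODE
# Show recovery-related defaults and any configured overrides.
curl "localhost:9200/_cluster/settings?include_defaults=true&filter_path=*.indices.recovery&pretty"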

If shards stop showing in the recovery list, and the cluster health has not yet reported green, please contact LogRhythm Support to investigate why shards are not initializing or assigning as expected.

Validate the Linux Indexer Installation

To validate a successful installation of the Linux Indexer, check the following logs in /var/log/persistent:

  • ansible.log echoes console output from the install and should end with details about the number of components that completed successfully, as well as any issues (unreachable or failed)
  • logrhythm-node-install.sh.log lists all components that were installed or updated, along with current versions
  • logrhythm-cluster-install.sh.log should end with a message stating that the Indexer was successfully installed

Additionally, you can issue the following command to verify the installed version of various LogRhythm services, tools, and libraries, as well as third party tools:

CODE
sudo yum list installed | grep -i logrhythm
  1. Verify that the following LogRhythm services are at the same version as the main installer version:
    • Bulldozer
    • Carpenter
    • Columbo
    • GoMaintain
    • Transporter
    • Watchtower
  2. Verify that the following tools/libraries have been installed:
    • Cluster Health
    • Conductor
    • Persistent
    • Silence
    • Unique ID
    • Upgrade Checker
  3. Verify the following version of this service:
    • elasticsearch 6.8.3
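
You can also confirm the Elasticsearch version by querying the root endpoint on any node; the version.number field in the response should match the expected version:

CODE
# The response includes a version.number field (for example, 6.8.3).
curl localhost:9200/?pretty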

Verify a Warm Node

To identify whether a warm node is working correctly after installation, perform the following:

  1. Verify Warm Node configuration:

    CODE
    curl localhost:9200/_cat/nodeattrs?v
  2. Verify Node Settings in /usr/local/logrhythm/env/es_datapath:

    CODE
    [root@DX01 env]# cat /usr/local/logrhythm/env/es_datapath
    DX_ES_PATH_DATA=/usr/local/logrhythm/db/elasticsearch/data
    DX_ES_CLUSTER_NAME=<cluster name>
    DX_ES_DISCOVERY_ZEN_PING_UNICAST_HOSTS=<IPs of eligible master nodes>
    DX_ES_DISCOVERY_ZEN_MINIMUM_MASTER_NODES=<# of eligible master nodes/2 (rounded down) +1>
    DX_ES_BOX_TYPE=warm
    DX_ES_IS_MASTER=false
  3. On each node in /usr/local/logrhythm/transporter/logs.json, verify the number of shards and replicas based on number of hot nodes:

    CODE
    "number_of_shards": "<physical hot nodes * 2>"
    "number_of_replicas": (this will be "0" for single hot node or "1" for a multi hot node cluster)
    For 2XDX, only physical nodes are used in the shard calculation; for example, a three-node cluster with 2XDX will have six shards. You can confirm per-index shard and replica counts with the command shown after this procedure.
  4. Verify warm node functionality:
    1. Wait until Elasticsearch heap utilization causes an open index to be moved to the warm node as a closed index.
    2. Verify that GoMaintain does not throw errors when moving the index to the warm node as Closed.
    3. (Optional) Perform an investigation against a closed index on the warm node (though this step alone will not confirm that the warm node is working).
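
To confirm per-index shard and replica counts (referenced in step 3 above), you can use the _cat indices API from any node:

CODE
# pri = primary shard count, rep = replica count, per index.
curl "localhost:9200/_cat/indices?v&h=index,pri,rep"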