Issues while working on the ELK stack

A quick post on a couple of errors you may hit while working on the ELK stack, and their solutions.

ELK stack issues and solutions

ELK stack stands for Elasticsearch, Logstash, and Kibana. In this post we will walk you through a couple of errors you may see while working on the ELK stack, along with their solutions.

Error: missing authentication token for REST request

First things first: how to run the cluster curl commands that are scattered everywhere on the Elastic documentation portal. Each example there has a copy-as-curl option, but if you run the copied command as-is on your terminal, it will end up in the below error –

root@kerneltalks # curl -X GET "localhost:9200/_cat/health?v&pretty"
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "missing authentication token for REST request [/_cat/health?                                                                                        v&pretty]",
        "header" : {
          "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
        }
      }
    ],
    "type" : "security_exception",
    "reason" : "missing authentication token for REST request [/_cat/health?v&pr                                                                                        etty]",
    "header" : {
      "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
    }
  },
  "status" : 401
}
Solution:

You need to use authentication within the curl command and you are good to go. It’s good practice to supply only the username with the -u switch, so that you won’t reveal your password in the command history! curl then prompts you for the password. Make sure you use the Kibana UI user here.

root@kerneltalks # curl -u kibanaadm -X GET "localhost:9200/_cat/health?v&pretty"
Enter host password for user 'kibanaadm':
epoch      timestamp cluster  status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1578644464 08:21:04  test-elk green           1         1    522 522    0    0        0             0                  -                100.0%
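
If you need to run these commands non-interactively (in scripts or cron), curl can also read credentials from a netrc file, which keeps the password out of both the command line and the shell history. A small sketch (the credentials below are placeholders; keep the file readable by you only, e.g. chmod 600):

root@kerneltalks # cat ~/.netrc
machine localhost login kibanaadm password <your_password>
root@kerneltalks # curl --netrc -X GET "localhost:9200/_cat/health?v&pretty"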

Issue: How to remove x-pack when upgrading from 6.2

If you are running ELK stack 6.2 and performing an upgrade, then you need to take care of the x-pack module first. Since x-pack is bundled within the 6.3 and later distributions, you don’t need it as a separate module anymore. But for some reason, the upgrade won’t remove the existing x-pack module. This leaves you with two x-pack modules on the system, and Kibana keeps restarting continuously because of it, with the below error –

Error: Multiple plugins found with the id \"xpack_main\":\n  - xpack_main at /usr/share/kibana/node_modules/x-pack\n  - xpack_main at /usr/share/kibana/plugins/x-pack
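
You can confirm the leftover module by listing the installed plugins on both sides, using the standard plugin tools shipped with the stack. If x-pack still shows up in these lists on 6.3 or later, the old module is the culprit:

root@kerneltalks # /usr/share/elasticsearch/bin/elasticsearch-plugin list
root@kerneltalks # /usr/share/kibana/bin/kibana-plugin list
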
Solution:

So, before the upgrade, you need to remove the x-pack plugin from both Elasticsearch and Kibana, using the below commands –

root@kerneltalks # /usr/share/elasticsearch/bin/elasticsearch-plugin remove x-pack
-> removing [x-pack]...
-> preserving plugin config files [/etc/elasticsearch/x-pack] in case of upgrade; use --purge if not needed

root@kerneltalks # /usr/share/kibana/bin/kibana-plugin remove x-pack
Removing x-pack...

This will make your upgrade go smoothly. If you have already upgraded (with RPM) and hit this issue, you may try downgrading the packages with rpm -Uvh --oldpackage <package_name> and then removing the x-pack modules.
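
For example, a downgrade using older RPMs you still have locally might look like the below (the package file names and versions here are only illustrative; use the ones matching your environment):

root@kerneltalks # rpm -Uvh --oldpackage elasticsearch-6.2.4.rpm
root@kerneltalks # rpm -Uvh --oldpackage kibana-6.2.4-x86_64.rpm

Once downgraded, remove x-pack with the plugin commands above and run the upgrade again.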


Issue: How to set index replicas to 0 on a single-node Elasticsearch cluster

On a single-node Elasticsearch cluster running the default configuration, you will run into an unassigned-replicas issue. By default each index is created with one replica, and Elasticsearch never places a replica on the same node as its primary shard, so on a single node those replicas can never be assigned. In the Kibana UI you will see the health of those indices as Yellow. Your cluster health will be yellow too, with the message – Elasticsearch cluster status is yellow. Allocate missing replica shards.
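
You can see the same from the command line as well. A quick sketch using the _cat APIs (with the same Kibana UI user as earlier) – the health column of the indices output will show yellow, and the shards output will list the replicas as UNASSIGNED:

root@kerneltalks # curl -u kibanaadm -X GET "localhost:9200/_cat/indices?v&pretty"
root@kerneltalks # curl -u kibanaadm -X GET "localhost:9200/_cat/shards?v&pretty"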

Solution:

You need to set the replica count on all indices to zero. You can do this in one go using the below command –

root@kerneltalks # curl -u kibanaadm -X PUT "localhost:9200/_all/_settings?pretty" -H 'Content-Type: application/json' -d'
{
    "index" : {
        "number_of_replicas" : 0
    }
}
'
Enter host password for user 'kibanaadm':
{
  "acknowledged" : true
}

Here _all can be replaced with a specific index name if you want to change the setting for that index only. Use the Kibana UI user in the command and you will be asked for the password. Once entered, it alters the settings of all indices and shows you the output as above.
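
For example, to set zero replicas on a single index only (the index name logstash-2020.01.10 below is just an illustration; substitute your own):

root@kerneltalks # curl -u kibanaadm -X PUT "localhost:9200/logstash-2020.01.10/_settings?pretty" -H 'Content-Type: application/json' -d'
{
    "index" : {
        "number_of_replicas" : 0
    }
}
'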

You can now check in the Kibana UI, and your cluster health along with the index health will be Green.
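
Keep in mind this changes existing indices only; new indices will still be created with the default one replica and will turn the cluster yellow again. On a single-node setup you can make zero replicas the default through an index template. A minimal sketch for the 6.x template API (the template name zero-replicas and the catch-all pattern are illustrative):

root@kerneltalks # curl -u kibanaadm -X PUT "localhost:9200/_template/zero-replicas?pretty" -H 'Content-Type: application/json' -d'
{
    "index_patterns" : ["*"],
    "settings" : {
        "number_of_replicas" : 0
    }
}
'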
