
Run Elasticsearch in Kibana

by Online Tutorials Library


In this section, we will learn how to run Elasticsearch on different platforms such as Windows, Linux, macOS, and the cloud. Along with it, we will also look at how curl commands are used to talk to Elasticsearch.

Run Elasticsearch

We can create a hosted deployment on the Elasticsearch Service, or set up a multi-node Elasticsearch cluster on our own Linux, macOS, or Windows computer, to take Elasticsearch for a test drive.

Let’s Run Elasticsearch on the Elastic Cloud

When we create a deployment on the Elasticsearch Service, it provides a three-node cluster along with Kibana and APM.

To create a deployment:

  1. Register for a free trial and verify the email address.
  2. Set a password.
  3. Click Create deployment.

After creating a deployment, we are ready to index some documents.

When we create a deployment on the Elasticsearch Service, a master node and two data nodes are provisioned automatically. By downloading Elasticsearch as a tar or zip archive, we can also start several instances locally to see how a multi-node cluster behaves.

To run a three-node Elasticsearch cluster locally, follow the steps given below:

1. Download the Elasticsearch archive for the operating system that you are using:

Linux: elasticsearch-7.8.0-linux-x86_64.tar.gz


macOS: elasticsearch-7.8.0-darwin-x86_64.tar.gz

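These archives can be downloaded from Elastic's artifacts site, for example with curl. This is a sketch; the URLs follow Elastic's standard naming for the 7.8.0 release:

    # Linux
    curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.8.0-linux-x86_64.tar.gz
    # macOS
    curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.8.0-darwin-x86_64.tar.gz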

2. Extract the archive file using the command for the platform we are working on (Linux, macOS, or Windows PowerShell), as sketched below.
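A sketch of the extraction commands per platform, assuming the archive names from step 1 plus the corresponding elasticsearch-7.8.0-windows-x86_64.zip on Windows:

    # Linux
    tar -xvf elasticsearch-7.8.0-linux-x86_64.tar.gz
    # macOS
    tar -xvf elasticsearch-7.8.0-darwin-x86_64.tar.gz
    # Windows PowerShell
    Expand-Archive elasticsearch-7.8.0-windows-x86_64.zip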

3. Start Elasticsearch from the bin directory. The command differs slightly between Linux/macOS and Windows, as sketched below.
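For example, run the following from the extracted directory (a sketch):

    # Linux and macOS
    cd elasticsearch-7.8.0
    ./bin/elasticsearch

    # Windows
    cd elasticsearch-7.8.0
    .\bin\elasticsearch.bat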

Now the single-node Elasticsearch cluster should be up and running!

4. Start two more Elasticsearch instances so that we can see how a typical multi-node cluster behaves. For each node, on Linux, macOS, and Windows alike, we need to specify unique data and log paths, as sketched below.
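A sketch, using the -E command-line option to give each additional node its own path.data and path.logs settings (the directory names data2/log2 and data3/log3 are arbitrary examples):

    # Linux and macOS: second and third node
    ./bin/elasticsearch -Epath.data=data2 -Epath.logs=log2
    ./bin/elasticsearch -Epath.data=data3 -Epath.logs=log3

    # Windows
    .\bin\elasticsearch.bat -E path.data=data2 -E path.logs=log2
    .\bin\elasticsearch.bat -E path.data=data3 -E path.logs=log3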

Unique IDs are assigned to the additional nodes. Since all three nodes are running locally, they automatically join the cluster with the first node.

5. Use the cat health API to check that the three-node cluster is up and running. The cat APIs return information about the cluster and indices in a format that is easier to read than raw JSON.

By sending HTTP requests to the Elasticsearch REST API, we can communicate directly with our cluster. Once Kibana is installed and running, we can send these requests through its Dev Console.

When we are ready to start using Elasticsearch in our own applications, we’ll want to check out the Elasticsearch language clients.

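For example, the health check can be sent from Kibana's Dev Console or with curl from the command line (a sketch, assuming the default port 9200):

    GET /_cat/health?v

    # equivalent curl request
    curl -X GET "localhost:9200/_cat/health?v"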

The response should show that the Elasticsearch cluster's status is green and that it has three nodes.


When we run only a single instance of Elasticsearch, the cluster status will remain yellow. A single-node cluster is fully functional, but data cannot be replicated to another node for redundancy. Replica shards must be available for the cluster status to be green. If the cluster status is red, some data is unavailable.

Using curl commands to talk to Elasticsearch

Many of the examples in this guide let us copy the equivalent cURL command and submit the request to our local Elasticsearch instance from the command line.

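The general shape of such a request is shown below (a sketch of the standard curl template; the angle-bracket placeholders are explained next):

    curl -X<VERB> '<PROTOCOL>://<HOST>:<PORT>/<PATH>?<QUERY_STRING>' -d '<BODY>'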

In the above example, we have used the following variables:

<VERB>

The appropriate HTTP verb: GET, POST, PUT, HEAD, or DELETE.

<PROTOCOL>

Either http or https. Use https if the Elasticsearch security features are enabled to encrypt HTTP traffic.

<HOST>

The hostname of any node in our Elasticsearch cluster. Alternatively, we can use localhost for a node on the local machine.

<PATH>

The <PATH> is the API endpoint. It can contain several components, such as _cluster/stats or _nodes/stats/jvm.

<PORT>

The <PORT> is the port on which the Elasticsearch HTTP service is running. The default is 9200.

<QUERY_STRING>

Any optional query-string parameters. For example, ?pretty will pretty-print the JSON response to make it easier to read.

<BODY>

A request body encoded in JSON (where necessary).

If the Elasticsearch security features are enabled, we also need to provide a valid username and password with the authority to run the API. For example, we can use the -u or --user cURL command parameter.

Elasticsearch responds to every API request with an HTTP status code, such as 200 OK. Except for HEAD requests, it also returns a JSON-encoded response body.
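Putting this together, a request against a local cluster might look like the following sketch (the elastic/changeme credentials are placeholders, only needed when security is enabled):

    curl -u elastic:changeme -X GET "http://localhost:9200/_cluster/health?pretty"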

Some other configuration options

We can also install Elasticsearch from an archive file, which lets us install and run several instances locally very quickly and makes it easy to experiment.

We can also run Elasticsearch in a Docker container, or install it as a single instance using the DEB or RPM packages on Linux, Homebrew on macOS, or the MSI installer on Windows.
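For example, running a single-node instance in Docker might look like this sketch (the image comes from Elastic's Docker registry; discovery.type=single-node tells the node not to wait for other nodes):

    docker pull docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.8.0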


