An Introduction to Kibana

Yuvraj Gupta

October 2015

In this article by Yuvraj Gupta, author of the book Kibana Essentials, we look at Kibana, a tool that is part of the ELK stack, which consists of Elasticsearch, Logstash, and Kibana. It is built and developed by Elastic. Kibana is a visualization platform that is built on top of Elasticsearch and leverages its functionalities.


To understand Kibana better, let's check out the following diagram:

This diagram shows that Logstash is used to push data directly into Elasticsearch. This data is not limited to log data, but can include any type of data. Elasticsearch stores data that comes as input from Logstash, and Kibana uses the data stored in Elasticsearch to provide visualizations. So, Logstash provides an input stream of data to Elasticsearch, from which Kibana accesses the data and uses it to create visualizations.

Kibana acts as a layer on top of Elasticsearch, providing beautiful visualizations for the data (structured or unstructured) stored in it. Kibana is an open source analytics product used to search, view, and analyze data. It provides various types of visualizations in the form of tables, charts, maps, histograms, and so on, along with a web-based interface that can easily handle large amounts of data. It helps create dashboards that are easy to build and that query data in real time. Dashboards are simply an interface to the underlying JSON documents and can be saved, templated, and exported. They are simple to set up and use, which lets us explore data stored in Elasticsearch within minutes without writing any code.

Kibana is an Apache-licensed product that aims to provide a flexible interface combined with the powerful searching capabilities of Elasticsearch. It requires a web server (included in the Kibana 4 package) and any modern, standards-compliant web browser to work. It connects to Elasticsearch using the REST API and, through dashboards, visualizes data in real time to provide real-time insights.

As Kibana uses the functionalities of Elasticsearch, it is easier to learn Kibana by understanding the core functionalities of Elasticsearch. In this article, we are going to take a look at the following topics:

  • The basic concepts of Elasticsearch
  • Installation of Java
  • Installation of Elasticsearch
  • Installation of Kibana
  • Importing a JSON file into Elasticsearch

Understanding Elasticsearch

Elasticsearch is a search server built on top of Lucene (licensed under Apache) and written entirely in Java. It supports distributed search in a multitenant environment and scales easily, with high flexibility for adding machines. It combines a full-text search engine with a RESTful web interface and JSON documents. Elasticsearch harnesses the Lucene Java libraries, adding proper APIs, scalability, and flexibility on top of the Lucene full-text search library. All querying done through Elasticsearch, that is, searching text, matching text, creating indexes, and so on, is implemented by Apache Lucene.

Without Shield (Elasticsearch's security plugin) or any other proxy mechanism in place, any user with access to the Elasticsearch API can view all the data stored in the cluster.

The basic concepts of Elasticsearch

Let's explore some of the basic concepts of Elasticsearch:

  • Field: This is the smallest single unit of data stored in Elasticsearch. It is similar to a column in a traditional relational database. Every document contains key-value pairs, which are referred to as fields. A field can hold a single value, such as an integer (27) or a string ("Kibana"), or multiple values, such as an array ([1, 2, 3, 4, 5]). The field type specifies which kind of data can be stored in a particular field, for example, integer, string, or date.
  • Document: This is the simplest unit of information stored in Elasticsearch. It is a collection of fields. It is considered similar to a row of a table in a traditional relational database. A document can contain any type of entry, such as a document for a single restaurant, another document for a single cuisine, and yet another for a single order. Documents are in JavaScript Object Notation (JSON), which is a language-independent data interchange format. JSON contains key-value pairs. Every document that is stored in Elasticsearch is indexed. Every document contains a type and an ID. An example of a document that has JSON values is as follows:

    {
      "name": "Yuvraj",
      "age": 22,
      "birthdate": "2015-07-27",
      "bank_balance": 10500.50,
      "interests": ["playing games","movies","travelling"],
      "movie": {"name":"Titanic","genre":"Romance","year" : 1997}
    }

    In the preceding example, we can see that the document is JSON, containing key-value pairs, which are explained as follows:

    • The name field is of the string type
    • The age field is of the numeric type
    • The birthdate field is of the date type
    • The bank_balance field is of the float type
    • The interests field contains an array
    • The movie field contains an object (dictionary)
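A quick way to see these types in practice is to parse the sample document with Python's json module, which maps JSON strings, numbers, arrays, and objects onto the corresponding Python types. A small sketch (note that JSON itself has no date type, so birthdate arrives as a string and is only treated as a date by the Elasticsearch mapping):

```python
import json

# The sample document from above, as a JSON string.
doc = '''
{
  "name": "Yuvraj",
  "age": 22,
  "birthdate": "2015-07-27",
  "bank_balance": 10500.50,
  "interests": ["playing games", "movies", "travelling"],
  "movie": {"name": "Titanic", "genre": "Romance", "year": 1997}
}
'''

parsed = json.loads(doc)
# json.loads maps JSON string/number/array/object values onto
# Python str, int or float, list, and dict respectively.
for field, value in parsed.items():
    print(field, "->", type(value).__name__)
```

Running this prints str for name and birthdate, int for age, float for bank_balance, list for interests, and dict for movie.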
  • Type: This is similar to a table in a traditional relational database. It contains a list of fields, which is defined for every document. A type is a logical segregation of indexes, whose interpretation/semantics entirely depends on you. For example, suppose you have data about the world and you put all of it into one index. In this index, you can define a type for continent-wise data, another type for country-wise data, and a third type for region-wise data. Types are defined with the mapping API, which specifies the data type of each field. An example of a type mapping is as follows:
    {
      "user": {
        "properties": {
          "name": {
            "type": "string"
          },
          "age": {
            "type": "integer"
          },
          "birthdate": {
            "type": "date"
          },
          "bank_balance": {
            "type": "float"
          },
          "interests": {
            "type": "string"
          },
          "movie": {
            "properties": {
              "name": {
                "type": "string"
              },
              "genre": {
                "type": "string"
              },
              "year": {
                "type": "integer"
              }
            }
          }
        }
      }
    }

    Now, let's take a look at the core data types specified in Elasticsearch, as follows:

    • string: This contains text, for example, "Kibana"
    • integer: This contains a 32-bit integer, for example, 7
    • long: This contains a 64-bit integer
    • float: This contains a single-precision IEEE 754 float, for example, 2.7
    • double: This contains a double-precision IEEE 754 float
    • boolean: This can be true or false
    • date: This is a UTC date/time, for example, "2015-06-30T13:10:10"
    • geo_point: This is a latitude/longitude pair
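The mapping shown earlier can be thought of as a per-field type contract. As an illustration only (Elasticsearch itself does far more when applying a mapping, including coercion, date parsing, and text analysis), here is a hypothetical Python sketch that checks a document's fields against a flattened version of that mapping:

```python
# Field -> core data type, flattened from the earlier mapping example
# (the nested "movie" properties are omitted for brevity).
mapping = {
    "name": "string",
    "age": "integer",
    "bank_balance": "float",
    "interests": "string",   # arrays take the type of their elements
}

# Rough correspondence between core types and Python types --
# purely for this sketch, not how Elasticsearch validates input.
python_types = {"string": str, "integer": int, "float": (int, float)}

def check(doc):
    """Return the fields whose values do not match the mapping."""
    bad = []
    for field, es_type in mapping.items():
        value = doc.get(field)
        values = value if isinstance(value, list) else [value]
        if not all(isinstance(v, python_types[es_type]) for v in values):
            bad.append(field)
    return bad

doc = {"name": "Yuvraj", "age": 22, "bank_balance": 10500.50,
       "interests": ["playing games", "movies"]}
print(check(doc))   # [] -- every field matches its mapped type
```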

  • Index: This is a collection of documents (one or more than one). It is similar to a database in the analogy with traditional relational databases. For example, you can have an index for user information, another for transaction information, and another for product data. An index has a mapping, which is used to define multiple types; in other words, an index can contain one or multiple types. An index is defined by a name, which is always used when referring to the index to perform search, update, and delete operations on its documents. You can define as many indexes as you require. Indexes also act as logical namespaces that map documents to primary shards, each of which can have zero or more replica shards for replicating the data. With respect to traditional databases, the basic analogy is as follows:
    MySQL => Databases => Tables => Columns/Rows
    Elasticsearch => Indexes => Types => Documents with Fields

    You can store a single document or multiple documents within a type or index. As a document lives within an index, it must also be assigned to a type within that index. Moreover, the maximum number of documents that you can store in a single index is 2,147,483,519, which is just under Integer.MAX_VALUE.

  • ID: This is an identifier for a document. It is used to identify each document. If it is not defined, it is autogenerated for every document.

    The combination of index, type, and ID must be unique for each document.

  • Mapping: Mappings are similar to schemas in a traditional relational database. Every document in an index has a type. A mapping defines the fields, the data type for each field, and how the field should be handled by Elasticsearch. By default, a mapping is automatically generated whenever a document is indexed. If the default settings are overridden, then the mapping's definition has to be provided explicitly.
  • Node: This is a running instance of Elasticsearch. Each node is part of a cluster. On a standalone machine, each Elasticsearch server instance corresponds to a node. Multiple nodes can be started on a single standalone machine and can join a single cluster. The node stores data and contributes to the indexing and searching capabilities of the cluster. By default, whenever a node is started, it is assigned a random Marvel Comics character name; you can change the configuration file to name nodes as per your requirements. A node is configured to join a particular cluster, which is identified by the cluster name. By default, all nodes join the cluster named elasticsearch; that is, any number of nodes started up on a network/machine will automatically join the elasticsearch cluster.
  • Cluster: This is a collection of one or more nodes that share a single cluster name. Each cluster automatically elects a master node, which is replaced if it fails; that is, if the master node goes down, another node is elected as the new master, thus providing high availability. The cluster holds all of the stored data and provides a unified view for search across all nodes. By default, the cluster name is elasticsearch, and it is the parameter by which all nodes in a cluster identify one another. When using a cluster in production, it is advisable to change the cluster name for ease of identification, but the default name can be used for other purposes, such as development or testing.

    The Elasticsearch cluster contains single or multiple indexes, which contain single or multiple types. All types contain single or multiple documents, and every document contains single or multiple fields.

  • Sharding: This is a key concept for understanding how Elasticsearch scales across nodes when dealing with large amounts of data. An index can store any amount of data, but if it exceeds the disk limit of a node, searching becomes slow and is affected. For example, if the disk limit is 1 TB and an index contains a large number of documents, they may not fit completely within 1 TB on a single node. To counter such problems, Elasticsearch provides shards, which break an index into multiple pieces. Each shard acts as an independent index hosted on a node within the cluster, and Elasticsearch is responsible for distributing shards among the nodes. Sharding serves two purposes: it allows horizontal scaling of the content volume, and it improves performance through parallel operations across the shards distributed over the nodes (single or multiple, depending on the number of nodes running).

    Elasticsearch helps move shards among multiple nodes in the event of an addition of new nodes or a node failure.

    There are two types of shards, as follows:

    • Primary shard: Every document is stored in a primary shard. By default, every index has five primary shards. This parameter is configurable and can be changed to define more or fewer shards as required, and it has to be set before the creation of an index. If no parameters are defined, five primary shards are created automatically.

      Whenever a document is indexed, it is usually done on a primary shard initially, followed by replicas. The number of primary shards defined in an index cannot be altered once the index is created.

    • Replica shard: Replica shards are an important feature of Elasticsearch. They help provide high availability across the nodes of a cluster. By default, every primary shard has one replica shard, though a primary shard can have zero or more replicas as required. In environments where a failure directly affects the enterprise, a system with a failover mechanism is highly recommended for high availability. To provide one, Elasticsearch creates one or more copies of an index's shards, termed replica shards or replicas. A replica shard is a full copy of its primary shard, and the number of replicas can be altered dynamically. Replicas serve two purposes: they provide high availability in the event of the failure of a node or a primary shard (if a primary shard fails, a replica shard is automatically promoted to primary), and they increase search performance, since search requests can be handled in parallel by replica shards.

      A replica shard is never kept on the same node as that of the primary shard from which it was copied.
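The routing of a document to a primary shard follows a simple rule: Elasticsearch computes hash(routing) modulo the number of primary shards, where the routing value defaults to the document ID. The following sketch imitates that scheme with Python's hashlib (the actual hash function Elasticsearch uses is different); it also shows why the number of primary shards cannot change after index creation: a different modulus would send existing IDs to different shards.

```python
import hashlib

NUM_PRIMARY_SHARDS = 5   # the default; fixed once the index is created

def route(doc_id, num_shards=NUM_PRIMARY_SHARDS):
    """Pick a primary shard for a document ID.

    Mirrors shard = hash(routing) % number_of_primary_shards.
    MD5 stands in for Elasticsearch's real routing hash here.
    """
    digest = hashlib.md5(doc_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

for doc_id in ["1", "2", "3", "42"]:
    print("document", doc_id, "-> shard", route(doc_id))
```

Because the same ID always hashes to the same shard, a later GET for that ID can go straight to the right shard instead of broadcasting to all of them.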

  • Inverted index: This is also a very important concept in Elasticsearch. It is used to provide fast full-text search. Instead of scanning the text of every document at query time, Elasticsearch looks terms up in an index: it builds a list of the unique words occurring across the documents along with, for each word, the list of documents it occurs in. For example, suppose we have three documents, each with a text field containing the following:
    • I am learning Kibana
    • Kibana is an amazing product
    • Kibana is easy to use

    To create an inverted index, the text field is broken into words (also known as terms), a list of unique words is created, and also a listing is done of the document in which the term occurs, as shown in this table:

    Term        Doc 1    Doc 2    Doc 3
    I             X
    Am            X
    Learning      X
    Kibana        X        X        X
    Is                     X        X
    An                     X
    Amazing                X
    Product                X
    Easy                            X
    To                              X
    Use                             X

    Now, if we search for is Kibana, Elasticsearch will use an inverted index to display the results:

    Term        Doc 1    Doc 2    Doc 3
    Is                     X        X
    Kibana        X        X        X

    With inverted indexes, Elasticsearch uses the functionality of Lucene to provide fast full-text search results.

    An inverted index is keyed by terms (keywords) rather than by documents.
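The steps above can be sketched in a few lines of Python. This is only a toy model of what Lucene does (real analysis also handles stemming, stop words, and term positions), but it shows the core idea of mapping terms to document lists:

```python
# The three example documents.
docs = {
    1: "I am learning Kibana",
    2: "Kibana is an amazing product",
    3: "Kibana is easy to use",
}

# Build the inverted index: term -> set of document IDs.
inverted = {}
for doc_id, text in docs.items():
    for term in text.lower().split():
        inverted.setdefault(term, set()).add(doc_id)

def search(query):
    """Return document IDs matching any query term,
    ranked by how many query terms each contains."""
    hits = {}
    for term in query.lower().split():
        for doc_id in inverted.get(term, set()):
            hits[doc_id] = hits.get(doc_id, 0) + 1
    return sorted(hits, key=lambda d: -hits[d])

print(inverted["kibana"])    # {1, 2, 3}
print(search("is Kibana"))   # documents 2 and 3 rank first (both terms)
```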

  • REST API: This stands for Representational State Transfer. It is a stateless client-server protocol that uses HTTP requests to store, view, and delete data. It supports CRUD operations (short for Create, Read, Update, and Delete) over HTTP. It is used to communicate with Elasticsearch and can be used from any programming language. Communication happens over port 9200 (by default), which is accessible from any web browser, and Elasticsearch can also be queried directly from the command line using the curl command. cURL is a command-line tool for sending, viewing, or deleting data using URL syntax over HTTP. A cURL request is similar to an HTTP request, as follows:
    curl -X <VERB> '<PROTOCOL>://<HOSTNAME>:<PORT>/<PATH>?<QUERY_STRING>' -d '<BODY>'

    The terms marked within the <> tags are variables, which are defined as follows:

    • VERB: This is used to provide the appropriate HTTP method, such as GET (to retrieve data), POST or PUT (to store data), or DELETE (to delete data).
    • PROTOCOL: This is used to define whether the HTTP or HTTPS protocol is used to send requests.
    • HOSTNAME: This is used to define the hostname of a node present in the Elasticsearch cluster. By default, the hostname of Elasticsearch is localhost.
    • PORT: This is used to define the port on which Elasticsearch is running. By default, Elasticsearch runs on port 9200.
    • PATH: This is used to define the index, type, and ID where the documents will be stored, searched, or deleted. It is specified as index/type/ID.
    • QUERY_STRING: This is used to define any additional query parameter for searching data.
    • BODY: This is used to define a JSON-encoded request within the body.

    In order to put data into Elasticsearch, the following curl command is used:

    curl -XPUT 'http://localhost:9200/testing/test/1' -d '{"name": "Kibana" }'

    Here, testing is the name of the index, test is the name of the type within the index, and 1 indicates the ID number.

    To search for the preceding stored data, the following curl command is used:

    curl -XGET 'http://localhost:9200/testing/_search'

    The preceding commands are provided just to give you an overview of the format of the curl command.
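To make the placeholder variables concrete, the request target can be assembled from the same parts the curl template uses. A small sketch (it only builds the URL string; sending it is still left to curl or an HTTP client, and the function name below is simply chosen to mirror the placeholders):

```python
def request_target(protocol, hostname, port, path, query_string=""):
    """Compose <PROTOCOL>://<HOSTNAME>:<PORT>/<PATH>?<QUERY_STRING>."""
    url = "{0}://{1}:{2}/{3}".format(protocol, hostname, port, path)
    if query_string:
        url += "?" + query_string
    return url

# The PUT example from the text: index "testing", type "test", ID 1.
print(request_target("http", "localhost", 9200, "testing/test/1"))
# -> http://localhost:9200/testing/test/1

# A search with a query string (q is Elasticsearch's URI search parameter).
print(request_target("http", "localhost", 9200, "testing/_search",
                     "q=name:Kibana"))
# -> http://localhost:9200/testing/_search?q=name:Kibana
```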

Prerequisites for installing Kibana 4.1.1

The following pieces of software need to be installed before installing Kibana 4.1.1:

  • Java 1.8u20+
  • Elasticsearch v1.4.4+
  • A modern web browser—IE 10+, Firefox, Chrome, Safari, and so on

The installation process will be covered separately for Windows and Ubuntu so that both types of users are able to understand the process of installation easily.

Installation of Java

In this section, the JDK will be installed, as it is required to run Elasticsearch. Oracle Java 8 (update 20 onwards) will be installed, as it is the recommended version for Elasticsearch from version 1.4.4 onwards.

Installation of Java on Ubuntu 14.04

Install Java 8 using the terminal and the apt package in the following manner:

  1. Add the Oracle Java Personal Package Archive (PPA) to the apt repository list:
    sudo add-apt-repository -y ppa:webupd8team/java

    In this case, we use a third-party repository; however, the WebUpd8 team is a trusted source for installing Java. The PPA does not include any Java binaries; instead, its installer downloads Java directly from Oracle and installs it.

    As shown in the preceding screenshot, you will initially be prompted for the password for running the sudo command (only when you have not logged in as root), and on successful addition to the repository, you will receive an OK message, which means that the repository has been imported.

  2. Update the apt package database to include all the latest files under the packages:

    sudo apt-get update

  3. Install the latest version of Oracle Java 8:

     sudo apt-get -y install oracle-java8-installer

    Also, during the installation, you will be prompted to accept the license agreement, which pops up as follows:

  4. To check whether Java has been successfully installed, type the following command in the terminal:

    java -version

    This signifies that Java has been installed successfully.

Installation of Java on Windows

We can install Java on Windows by going through the following steps:

  1. Download the latest version of the Java JDK from the Oracle site at http://www.oracle.com/technetwork/java/javase/downloads/index.html:

  2. As shown in the preceding screenshot, click on the DOWNLOAD button of JDK to download. You will be redirected to the download page. There, you have to first click on the Accept License Agreement radio button, followed by the Windows version to download the .exe file, as shown here:

  3. Double-click on the file to be installed and it will open as an installer.

  4. Click on Next, accept the license by reading it, and keep clicking on Next until it shows that JDK has been installed successfully.

  5. Now, to run Java on Windows, you need to set the path of JAVA in the environment variable settings of Windows. Firstly, open the properties of My Computer. Select the Advanced system settings and then click on the Advanced tab, wherein you have to click on the environment variables option, as shown in this screenshot:


    After opening Environment Variables, click on New (under the System variables) and give the variable name as JAVA_HOME and the variable value as C:\Program Files\Java\jdk1.8.0_45 (check where the JDK has been installed on your system and provide the path corresponding to the installed version), as shown in the following screenshot:

    Then, double-click on the Path variable (under the System variables) and move to the end of the textbox. Insert a semicolon if one is not already there, and add the location of the bin folder of the JDK, like this: %JAVA_HOME%\bin. Next, click on OK in all the open windows.

    Do not delete anything within the path variable textbox.

  6. To check whether Java is installed or not, type the following command in Command Prompt:
    java -version


    This signifies that Java has been installed successfully.

Installation of Elasticsearch

In this section, Elasticsearch, which is required to access Kibana, will be installed. Elasticsearch v1.5.2 will be installed, and this section covers the installation on Ubuntu and Windows separately.

Installation of Elasticsearch on Ubuntu 14.04

To install Elasticsearch on Ubuntu, perform the following steps:

  1. Download Elasticsearch v 1.5.2 as a .tar file using the following command on the terminal:

     curl -L -O https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1....

    curl may not be installed on Ubuntu by default. To use curl, you need to install the curl package, which can be done using the following command:

    sudo apt-get -y install curl

  2. Extract the downloaded .tar file using this command:

     tar -xvzf elasticsearch-1.5.2.tar.gz

    This will extract the files and folder into the current working directory.

  3. Navigate to the bin directory within the elasticsearch-1.5.2 directory:

    cd elasticsearch-1.5.2/bin

  4. Now run Elasticsearch to start the node and cluster, using the following command:

    ./elasticsearch

    The preceding screenshot shows that the Elasticsearch node has been started, and it has been given a random Marvel Comics character name.

    If this terminal is closed, Elasticsearch will stop running as this node will shut down. However, if you have multiple Elasticsearch nodes running, then shutting down a node will not result in shutting down Elasticsearch.

  5. To verify the Elasticsearch installation, open http://localhost:9200 in your browser.

Installation of Elasticsearch on Windows

The installation on Windows can be done by following similar steps as in the case of Ubuntu. To use curl commands on Windows, we will be installing GIT. GIT will also be used to import a sample JSON file into Elasticsearch using elasticdump, as described in the Importing a JSON file into Elasticsearch section.

Installation of GIT

To run curl commands on Windows, first download and install GIT, then perform the following steps:

  1. Download the GIT ZIP package from https://git-scm.com/download/win.

  2. Double-click on the downloaded file, which will walk you through the installation process.

  3. Keep clicking on Next, leaving the default options unchanged, until you click on the Finish button.

  4. To validate the GIT installation, right-click on any folder in which you should be able to see the options of GIT, such as GIT Bash, as shown in the following screenshot:

The following are the steps required to install Elasticsearch on Windows:

  1. Open GIT Bash and enter the following command in the terminal:

     curl -L -O https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1....

  2. Extract the downloaded ZIP package by either unzipping it using WinRar, 7Zip, and so on (if you don't have any of these, download one of them) or using the following command in GIT Bash:

    unzip elasticsearch-1.5.2.zip
    This will extract the files and folder into the directory.

  3. Then click on the extracted folder and navigate through it to reach the bin folder.

  4. Click on the elasticsearch.bat file to run Elasticsearch.

    The preceding screenshot shows that the Elasticsearch node has been started, and it is given a random Marvel Comics character's name.

    Again, if this window is closed, Elasticsearch will stop running as this node will shut down. However, if you have multiple Elasticsearch nodes running, then shutting down a node will not result in shutting down Elasticsearch.

  5. To verify the Elasticsearch installation, open http://localhost:9200 in your browser.

Installation of Kibana

In this section, Kibana will be installed. We will install Kibana v4.1.1, and this section covers installations on Ubuntu and Windows separately.

Installation of Kibana on Ubuntu 14.04

To install Kibana on Ubuntu, follow these steps:

  1. Download Kibana version 4.1.1 as a .tar file using the following command in the terminal:

     curl -L -O https://download.elasticsearch.org/kibana/kibana/kibana-4.1.1-linux-x64....

  2. Extract the downloaded .tar file using this command:

    tar -xvzf kibana-4.1.1-linux-x64.tar.gz
    The preceding command will extract the files and folder into the current working directory.

  3. Navigate to the bin directory within the kibana-4.1.1-linux-x64 directory:

    cd kibana-4.1.1-linux-x64/bin
  4. Now run Kibana using the following command:

    ./kibana

    Make sure that Elasticsearch is running. If it is not running and you try to start Kibana, the following error will be displayed after you run the preceding command:

  5. To verify the Kibana installation, open http://localhost:5601 in your browser.

Installation of Kibana on Windows

To install Kibana on Windows, perform the following steps:

  1. Open GIT Bash and enter the following command in the terminal:

     curl -L -O https://download.elasticsearch.org/kibana/kibana/kibana-4.1.1-windows.zip

  2. Extract the downloaded ZIP package by either unzipping it using WinRar or 7Zip (download it if you don't have it), or using the following command in GIT Bash:

    unzip kibana-4.1.1-windows.zip
    This will extract the files and folder into the directory.

  3. Then click on the extracted folder and navigate through it to get to the bin folder.

  4. Click on the kibana.bat file to run Kibana.

    Make sure that Elasticsearch is running. If it is not running and you try to start Kibana, the following error will be displayed after you click on the kibana.bat file:

  5. Again, to verify the Kibana installation, open http://localhost:5601 in your browser.

Additional information

You can change the Elasticsearch configuration for your production environment, wherein you have to change parameters such as the cluster name, node name, network address, and so on. This can be done using the information mentioned in the upcoming sections.

Changing the Elasticsearch configuration

To change the Elasticsearch configuration, perform the following steps:

  1. Run the following command in the terminal to open the configuration file:

    sudo vi ~/elasticsearch-1.5.2/config/elasticsearch.yml
    Windows users can open the elasticsearch.yml file from the config folder. This will open the configuration file as follows:

  2. The cluster name can be changed, as follows:

    Change #cluster.name: elasticsearch to cluster.name: "your_cluster_name".

    In the preceding figure, the cluster name has been changed to test. Then, we save the file.

  3. To verify that the cluster name has been changed, run Elasticsearch as mentioned in the earlier section.

    Then open http://localhost:9200 in the browser to verify, as shown here:

    In the preceding screenshot, you can notice that cluster_name has been changed to test, as specified earlier.

Changing the Kibana configuration

To change the Kibana configuration, follow these steps:

  1. Run the following command in the terminal to open the configuration file:

    sudo vi ~/kibana-4.1.1-linux-x64/config/kibana.yml

    Windows users can open the kibana.yml file from the config folder.

    In this file, you can change various parameters, such as the port on which Kibana works, the host address on which Kibana works, the URL of the Elasticsearch instance that you wish to connect to, and so on.

  2. For example, the port on which Kibana works can be changed by changing the port address. As shown in the following screenshot, port: 5601 can be changed to any other port, such as port: 5604. Then we save the file.

  3. To check whether Kibana is running on port 5604, run Kibana as mentioned earlier. Then open http://localhost:5604 in the browser to verify, as follows:

Importing a JSON file into Elasticsearch

To import a JSON file into Elasticsearch, we will use the elasticdump package. It is a set of import and export tools used for Elasticsearch. It makes it easier to copy, move, and save indexes. To install elasticdump, we will require npm and Node.js as prerequisites.

Installation of npm

In this section, npm along with Node.js will be installed. This section covers the installation of npm and Node.js on Ubuntu and Windows separately.

Installation of npm on Ubuntu 14.04

To install npm on Ubuntu, perform the following steps:

  1. Add the official Node.js PPA:

    sudo curl --silent --location https://deb.nodesource.com/setup_0.12 | sudo bash -

    As shown in the preceding screenshot, the command will add the official Node.js repository to the system and update the apt package database to include all the latest files under the packages. At the end of the execution of this command, we will be prompted to install Node.js and npm, as shown in the following screenshot:

  2. Install Node.js by entering this command in the terminal:

    sudo apt-get install --yes nodejs

    This will automatically install Node.js and npm as npm is bundled within Node.js.

  3. To check whether Node.js has been installed successfully, type the following command in the terminal:

    node -v
    Upon successful installation, it will display the version of Node.js.

  4. Now, to check whether npm has been installed successfully, type the following command in the terminal:

    npm -v
    Upon successful installation, it will show the version of npm.

Installation of npm on Windows

To install npm on Windows, follow these steps:

  1. Download the Windows Installer (.msi) file by going to https://nodejs.org/en/download/.

  2. Double-click on the downloaded file and keep clicking on Next to install the software.

  3. To validate the successful installation of Node.js, right-click and select GIT Bash.
    In GIT Bash, enter this:
    node -v
    Upon successful installation, you will be shown the version of Node.js.
    To validate the successful installation of npm, right-click and select GIT Bash.

  4. In GIT Bash, enter the following line:

    npm -v
    Upon successful installation, it will show the version of npm.

Installing elasticdump

In this section, elasticdump will be installed. It will be used to import a JSON file into Elasticsearch. It requires npm and Node.js installed. This section covers the installation on Ubuntu and Windows separately.

Installing elasticdump on Ubuntu 14.04

Perform these steps to install elasticdump on Ubuntu:

  1. Install elasticdump by typing the following command in the terminal:

    sudo npm install elasticdump -g

  2. Then run elasticdump by typing this command in the terminal:

    elasticdump

  3. Import a sample data (JSON) file into Elasticsearch. The file, named tweet.json, can be downloaded from https://github.com/guptayuvraj/Kibana_Essentials. It will be imported into Elasticsearch using the following command in the terminal:

    elasticdump \
      --bulk=true \
      --input="/home/yuvraj/Desktop/tweet.json" \
      --output=http://localhost:9200/

    Here, input provides the location of the file, as shown in the following screenshot:

    As you can see, data is being imported to Elasticsearch from the tweet.json file, and the dump complete message is displayed when all the records are imported to Elasticsearch successfully.

    Elasticsearch should be running while importing the sample file.
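For a sense of what a file like tweet.json contains, elasticdump works with a line-per-document layout: each line is a standalone JSON object carrying the index metadata alongside the document body under _source. The sketch below illustrates that general shape with invented sample values (the exact field set in a real dump may differ):

```python
import json

# Hypothetical records in the style of an elasticdump file:
# one JSON object per line, document body under "_source".
records = [
    {"_index": "tweet", "_type": "tweet", "_id": "1",
     "_source": {"user": "yuvraj", "text": "Learning Kibana"}},
    {"_index": "tweet", "_type": "tweet", "_id": "2",
     "_source": {"user": "yuvraj", "text": "Kibana is easy to use"}},
]

# Writing: serialize each record onto its own line.
lines = [json.dumps(r) for r in records]
print("\n".join(lines))

# Reading such a file back is the reverse: parse one object per line.
parsed = [json.loads(line) for line in lines]
print(parsed[0]["_source"]["user"])   # yuvraj
```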

Installing elasticdump on Windows

To install elasticdump on Windows, perform the following steps:

  1. Install elasticdump by typing the following command in GIT Bash:

    npm install elasticdump -g


  2. Then run elasticdump by typing this command in GIT Bash:

    elasticdump

  3. Import the sample data (JSON) file into Elasticsearch. The file, named tweet.json, can be downloaded from https://github.com/guptayuvraj/Kibana_Essentials. It will be imported into Elasticsearch using the following command in GIT Bash:

    elasticdump \
    --bulk=true \
    --input="C:\Users\ygupta\Desktop\tweet.json" \
    --output=http://localhost:9200/

    Here, input provides the location of the file.

    The preceding screenshot shows data being imported to Elasticsearch from the tweet.json file, and the dump complete message is displayed when all the records are imported to Elasticsearch successfully.

    Elasticsearch should be running while importing the sample file.

    To verify that the data has been imported to Elasticsearch, open http://localhost:5601 in your browser, and this is what you should see:

    When Kibana is opened, you have to configure an index pattern. If the data has been imported, you can enter the index name, which is mentioned in the tweet.json file as index: tweet. After the page loads, you can see the name of the imported index (tweet) to the left, under Index Patterns.

    Now mention the index name as tweet. Kibana will then automatically detect the timestamp field and provide an option to select it. If there are multiple time fields, you can choose among them by clicking on Time-field name, which provides a drop-down list of all the available fields, as shown here:

    Finally, click on Create to create the index in Kibana. After you have clicked on Create, it will display the various fields present in this index.

    If you do not get the options of Time-field name and Create after entering the index name as tweet, it means that the data has not been imported into Elasticsearch.

Summary

In this article, you learned about Kibana, along with the basic concepts of Elasticsearch. These help in the easy understanding of Kibana. We also looked at the prerequisites for installing Kibana, followed by a detailed explanation of how to install each component individually in Ubuntu and Windows.
