About this book

The Elastic Stack is growing rapidly and, day by day, additional tools are being added to make it more effective. This book endeavors to explain all the important aspects of Kibana, which is essential for utilizing its full potential.

This book covers the core concepts of Kibana, with chapters set out in a coherent manner so that readers can advance their learning step by step. The focus is on a practical approach, enabling readers to apply the examples to real data for a better understanding of the concepts and to equip them with the right skills for the tool. With its succinct explanations, it is quite easy to use this book as a reference guide for learning basic to advanced implementations of Kibana. The practical examples, such as the creation of Kibana dashboards from CSV data, application RDBMS data, system metrics data, log file data, APM agents, and search results, provide readers with a number of different starting points from which they can fetch any type of data into Kibana for analysis or dashboarding.

Publication date:
January 2019


Introducing Kibana

Kibana is a dashboard tool that's easy to use and works closely with Elasticsearch. We can use Kibana for different use cases, such as system monitoring and application monitoring. Kibana isn't just a visualization tool; it also creates a complete monitoring ecosystem when we leverage the power of Elastic Stack. Here's a small example: you're working on a project where you can't tolerate any outage, be it due to the database, application, system-related issues, or anything related to the application's performance. In a traditional monitoring system, you can monitor system performance, application logs, and so on. But with Kibana and Elastic Stack, we can do the following:

  • Configure Beats to monitor system metrics, database metrics, and log metrics
  • Configure APM to monitor your application metrics and issues if your application platform is supported
  • Configure the JDBC plugin of Logstash to pull RDBMS data into Elasticsearch to make it available to Kibana for creating visualizations on KPIs
  • Use third-party plugins to pull data from other sources; for example, you can use the Twitter plugin to get Twitter feeds
  • Create alerts for certain thresholds, so that you're notified whenever they're crossed and don't have to monitor the application continuously
  • Apply machine learning to your data to detect anomalies or forecast future trends by analyzing the current dataset

Elastic Stack

Kibana with Elastic Stack can be used to fetch data from different sources and filter, process, and analyze it to create meaningful dashboards. Elastic Stack has the following components:

  • Elasticsearch: We can store data in Elasticsearch.
  • Logstash: A data pipeline that we can use to read data from various sources and write it to various destinations. It also provides a filter feature to transform the input data before sending it to the output.
  • Kibana: A graphical user interface that we can use to do a lot of things, which I will cover in this chapter.
  • Beats: Lightweight data shippers that sit on different servers and send data to Elasticsearch directly or via Logstash:
    • Filebeat
    • Metricbeat
    • Packetbeat
    • Auditbeat
    • Winlogbeat
    • Heartbeat

The following diagram shows how Elastic Stack works:

In the preceding diagram, we have three different servers on which we have installed and configured Beats. These Beats are shipping data to Elasticsearch directly or via Logstash. Once this data is pushed into Elasticsearch, we can analyze, visualize, and monitor the data in Kibana. Let's discuss these components in detail; we're going to start with Elasticsearch.


Elasticsearch

Elasticsearch is a full-text search engine that's primarily used for searching. It can also be used as a NoSQL database and analytics engine. Elasticsearch is basically schema-less and works in near-real time. It has a RESTful interface, which helps us interact with it easily from multiple interfaces. Elasticsearch supports different ways of importing various types of structured or unstructured data; it handles all types of data because of its schema-less behavior, and it's quite easy to scale. Elasticsearch clients are available for the following languages:

  • Java
  • PHP
  • Perl
  • Python
  • .NET
  • Ruby
  • JavaScript
  • Groovy

Its query API is quite robust and we can execute different types of queries, such as boosting some fields over other fields, writing fuzzy queries, or searching on single or multiple fields, along with applying Boolean or wildcard searches. Aggregation is another important feature of Elasticsearch, which helps us to aggregate different types of data; it has multiple types of aggregations, such as metric aggregations, bucket aggregations, and term aggregations.

Fuzzy queries match words even when there's no exact match for the spelling. For example, if we search for a misspelled word, a fuzzy search can still return the correct result.
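
To make this concrete, here is a hedged sketch of a fuzzy match query in Elasticsearch's query DSL, written in the Dev Tools console syntax; the index name (products) and field name (name) are made up for illustration:

```
GET products/_search
{
  "query": {
    "match": {
      "name": {
        "query": "kibana",
        "fuzziness": "AUTO"
      }
    }
  }
}
```

With "fuzziness": "AUTO", a misspelled term such as kibbana can still match documents containing kibana.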

The architecture of Elasticsearch has the following components:

  • Cluster: A collection of one or more nodes that work together is known as a cluster. By default, the cluster name is elasticsearch, which we can change to any unique name.
  • Node: A node represents a single Elasticsearch server, which is identified by a universally unique identifier (UUID).
  • Index: A collection of documents where each document in the collection has a common attribute.
  • Type: A logical partition of the index to store more than one type of document. Type was supported in previous versions and is deprecated from 6.0.0 onward.
  • Document: A single record in Elasticsearch is known as a document.
  • Shard: We can subdivide an Elasticsearch index into multiple pieces, which are called shards. When creating an index, we can specify the number of shards required.
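
These pieces come together when we create an index. As a sketch (the index name my-index is hypothetical), the following Dev Tools console request creates an index with three shards and one replica per shard:

```
PUT my-index
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}
```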

Elasticsearch is primarily used to store and search data in the Elastic Stack; Kibana reads this data from Elasticsearch and analyzes or visualizes it in the form of charts and more, which can be combined to create dashboards.


Logstash

Logstash is a data pipeline that can take data input from various sources, filter it, and output it to various sources; these sources can be files, Kafka, or databases. Logstash is a very important tool in Elastic Stack as it's primarily used to pull data from various sources and push it to Elasticsearch; from there, Kibana can use that data for analysis or visualization. We can ingest any type of data using Logstash, structured or unstructured, coming from various sources. The data can be transformed using Logstash's filter option, which has different plugins to play with different sets of data. For example, if we get an IP address in our data, the GeoIP plugin can add geolocation using that IP address, so in the output we get additional geolocation information, which can then be used in Kibana to plot a map.

The following expression shows us an example of a Logstash configuration file:

input { file { path => "/var/log/apache2/access.log" } }
filter { grok { match => { "message" => "%{COMBINEDAPACHELOG}" } } }
output { elasticsearch { hosts => "localhost" } }

In the preceding expression, we have three sections: input, filter, and output. In the input section, we're reading the Apache access log file data. The filter section extracts the Apache access log data into different fields, using the grok filter option. The output section is quite straightforward, as it pushes the data to the local Elasticsearch cluster. We can configure the input and output sections to read from or write to different sources, and we can apply different plugins to transform the input data; for example, we can mutate a field, transform a field value, or add geolocation from an IP address using the filter option.

Grok is a tool that we can use to generate structured and queryable data by parsing unstructured data.
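
For instance, given a raw Apache access-log line like the following (a made-up sample), the %{COMBINEDAPACHELOG} pattern breaks it into named fields such as clientip, verb, request, response, and bytes:

```
127.0.0.1 - - [10/Jan/2019:13:55:36 +0530] "GET /index.html HTTP/1.1" 200 2326 "-" "Mozilla/5.0"
```

Each of these fields then becomes individually searchable and aggregatable once the event reaches Elasticsearch.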


Kibana

In Elastic Stack, Kibana is mainly used to provide the graphical user interface, which we use to do multiple things. When Kibana was first released, we just used it to create charts and histograms, but with each update Kibana has evolved, and now we have lots of killer features that make Kibana stand out from the crowd. There are many features in Kibana, but the key ones are as follows:

  • Discover your data by exploring it
  • Analyze your data by applying different metrics
  • Visualize your data by creating different types of charts
  • Apply machine learning to your data to detect anomalies and future trends
  • Monitor your application using APM
  • Manage users and roles
  • A console to run Elasticsearch expressions
  • Play with time-series data using Timelion
  • Monitor your Elastic Stack using Monitoring

Application Performance Monitoring (APM) is built on top of the Elastic Stack and is used to monitor applications and software services in real time. We'll look at APM in more detail in Chapter 6, Monitoring Applications with APM.

In this way, there are different use cases that can be handled well using Kibana. I'm going to explain each of them in later chapters.


Beats

Beats are single-purpose, lightweight data shippers that we use to get data from different servers. Beats can be installed on servers as lightweight agents to send system metrics, process data, or file data to Logstash or Elasticsearch. They gather data from the machine on which they're installed and then send it either to Logstash, which we use to parse or transform the data before sending it to Elasticsearch, or directly into Elasticsearch.

Beats are quite handy, as it takes almost no time to install and configure them and start sending data from the servers on which they're installed. Each Beat is written to target a specific requirement and works really well to solve its use case. For example, Filebeat is there to work with different files, such as Apache log files; it keeps a watch on the files, and as soon as an update happens, the updated data is shipped to Logstash or Elasticsearch. This file operation can also be configured using Logstash, but that may require some tuning; Filebeat is very easy to configure in comparison to Logstash.

Another advantage is that Beats have a small footprint and sit on the servers from which we want monitoring data to be sent. This keeps the system quite simple, because the collection of data happens on the remote machine, and this data is then sent to a centralized Elasticsearch cluster, either directly or via Logstash. One more feature that makes Beats an important component of the Elastic Stack is their built-in dashboards, which can be set up in no time. A simple configuration setting in a Beat creates a monitoring dashboard in Kibana, which we can use for monitoring directly, or after making some minor changes. There are different types of Beats, which we'll discuss here.


Filebeat

Filebeat is a lightweight data shipper that forwards log data from different servers to a central place, where we can analyze that log data. Filebeat monitors the log files that we specify, collects the data from there in an incremental way, and then forwards it to Logstash, or directly into Elasticsearch for indexing.

After configuring Filebeat, it starts the input as per the given instructions. For each separate file, Filebeat starts a harvester that reads a single log to collect the incremental data. Each harvester sends the log data to libbeat, which aggregates all the events and sends the data to the configured output, such as Elasticsearch, Kafka, or Logstash.
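
As a rough sketch of this flow, a minimal filebeat.yml might look like the following; the log path and Elasticsearch host are assumptions for illustration (older 6.x releases use filebeat.prospectors instead of filebeat.inputs):

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/apache2/*.log   # files to watch incrementally

output.elasticsearch:
  hosts: ["localhost:9200"]      # or output.logstash / output.kafka
```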


Metricbeat

Metricbeat is another lightweight data shipper that can be installed on any server to fetch system metrics. Metricbeat helps us collect metrics from the systems and services running on the servers on which it's installed, and thus to monitor those servers. Metricbeat ships the collected system metrics data to Elasticsearch or Logstash for analysis. Metricbeat can monitor many different services, as follows:

  • MySQL
  • PostgreSQL
  • Apache
  • Nginx
  • Redis
  • HAProxy

I've listed only some of the services; Metricbeat supports a lot more than that.
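
As an illustrative sketch, enabling a couple of modules in metricbeat.yml might look like this (the periods, hosts, and metricsets are assumptions, not a recommended configuration):

```yaml
metricbeat.modules:
  - module: system               # CPU, memory, and similar host metrics
    metricsets: ["cpu", "memory"]
    period: 10s
  - module: nginx                # service metrics from a local Nginx
    metricsets: ["stubstatus"]
    hosts: ["http://127.0.0.1"]

output.elasticsearch:
  hosts: ["localhost:9200"]
```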


Packetbeat

Packetbeat is used to analyze network packets in real time. Packetbeat data can be pushed to Elasticsearch, which we can use to configure Kibana for real-time application monitoring. Packetbeat is very effective in diagnosing network-related issues, since it captures the network traffic between our application servers, decodes application-layer protocols, such as HTTP, Redis, and MySQL, correlates requests with responses, and captures important fields.

Packetbeat supports the following protocols:

  • HTTP
  • MySQL
  • PostgreSQL
  • Redis
  • MongoDB
  • Memcache
  • TLS
  • DNS

Using Packetbeat, we can send our network packet data directly into Elasticsearch or through Logstash. Packetbeat is a handy tool, since it's otherwise difficult to monitor network packets: just install and configure it on the server where you want to monitor the network packets and start getting the packet data into Elasticsearch, which we can use to create a packet data monitoring dashboard. Packetbeat also provides a custom dashboard that we can easily enable using the Packetbeat configuration file.
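
A hedged sketch of packetbeat.yml: we pick a network device and list the protocols and ports to decode (the ports shown are common defaults, not requirements):

```yaml
packetbeat.interfaces.device: any  # sniff all interfaces

packetbeat.protocols:
  - type: http
    ports: [80, 8080]
  - type: mysql
    ports: [3306]

output.elasticsearch:
  hosts: ["localhost:9200"]
```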


Auditbeat

Auditbeat can be installed and configured on any server to audit the activities of users and processes. It's a lightweight data shipper that sends the data directly to Elasticsearch or via Logstash. Sometimes it's difficult to track changes in binaries or configuration files; Auditbeat is helpful here because it detects changes to critical files, such as configuration files and binaries.

We can configure Auditbeat to fetch audit events from the Linux audit framework, an auditing system that collects information about different events on the system. Auditbeat can help us take that data and push it to Elasticsearch, from where Kibana can be used to create dashboards.
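
For illustration, here's an auditbeat.yml sketch that collects events from the Linux audit framework with the auditd module and watches critical paths with the file_integrity module (the paths and rule are examples, not recommendations):

```yaml
auditbeat.modules:
  - module: auditd
    audit_rules: |
      -w /etc/passwd -p wa -k identity   # watch writes to /etc/passwd
  - module: file_integrity
    paths:
      - /bin
      - /etc

output.elasticsearch:
  hosts: ["localhost:9200"]
```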


Winlogbeat

Winlogbeat is a data shipper that ships Windows event logs to Logstash or the Elasticsearch cluster. It keeps a watch on different Windows event logs, reads them, and sends them to Logstash or Elasticsearch in a timely manner. Winlogbeat can send different types of events:

  • Hardware Events
  • Security Events
  • System Events
  • Application Events

Winlogbeat reads the raw event data and sends structured data to Logstash or Elasticsearch, making it easy to filter and aggregate the data.
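
A minimal winlogbeat.yml sketch: we list the Windows event logs to read and where to ship them (the host is an assumption):

```yaml
winlogbeat.event_logs:
  - name: Application
  - name: Security
  - name: System

output.elasticsearch:
  hosts: ["localhost:9200"]
```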


Heartbeat

Heartbeat is a lightweight shipper that monitors server uptime. It can be installed on a remote server; after that, it periodically checks the status of different services and tells us whether they're available. The major difference between Metricbeat and Heartbeat is that Metricbeat only tells us whether a server is up or down, while Heartbeat tells us whether services are reachable; it's quite similar to the ping command, which tells us whether a server is responding.
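
As a sketch, heartbeat.yml defines a list of monitors, each with a check type and schedule; the URLs and hosts below are placeholders:

```yaml
heartbeat.monitors:
  - type: http                       # HTTP check: is the service responding?
    urls: ["http://localhost:9200"]
    schedule: '@every 10s'
  - type: icmp                       # ping-style reachability check
    hosts: ["myhost.example.com"]
    schedule: '@every 30s'

output.elasticsearch:
  hosts: ["localhost:9200"]
```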


Use cases of Elastic Stack

There are many areas where we can use the Elastic Stack, such as logging, where Elastic Stack is mainly used; search, using Elasticsearch; and dashboarding for monitoring. These are just a few of the primary use cases; there are many other areas where we can use the power of Elastic Stack. We can use Elastic Stack for the following use cases:

  • System Performance Monitoring
  • Log Management
  • Application Performance Monitoring
  • Application Data Analysis
  • Security Monitoring and Alerting
  • Data Visualization

Let's discuss each of these in detail.

System Performance Monitoring

When we run any application in production, we need to keep it stable by avoiding anything that can impact the application's performance; this can be anything, such as the system, the database, or any third-party dependency, since if any of them fails, it impacts the application. In this section, we'll see how system monitoring can help us avoid situations where the system causes an application outage.

Let's discuss the factors that can hamper an application's performance. There can be a number of reasons, such as the system memory or CPU creating a bottleneck because of an increase in user hits. In this situation, we can do multiple things, such as optimizing the application, if possible, or increasing the memory or CPU. We can do this to mitigate an application outage, but it's only possible if we're monitoring the system metrics of the servers on which the application has been deployed. With monitoring in place, we can configure alerts that fire whenever the threshold value of any component is exceeded. In this way, you can protect yourself from any application outage caused by system performance.

Log Management

Log Management is one of the key use cases of Elastic Stack, and it has been primarily used for this purpose for many years. There are many benefits of log management using Elastic Stack, and I'll explain how Elastic Stack simplifies things when it comes to monitoring logs. Let's say you have a log file and you need to explore it to get the root cause of an application outage. How are you going to proceed? Will you open the file and manually search and filter it? With Elastic Stack, we just need to push the log data into Elasticsearch and configure Kibana to read this data. We can use Filebeat to read log files, such as Apache access and error logs. Apart from system logs, we can also configure Filebeat to capture application logs. Instead of Filebeat, we can use Logstash to take file data as input and output it to Elasticsearch.

Application Performance Monitoring

Elastic Stack APM helps developers and system administrators monitor software applications for performance and availability. It also helps them identify any current issues, or ones that may occur in the near future. Using APM, we can find and fix bugs in the code, as it makes the problems in the code searchable. By integrating APM with our code, we can monitor our code and make it better and more efficient. Elastic APM provides us with preconfigured custom dashboards in Kibana. We can integrate application data using APM, and server stats, network details, and log details using Beats. This makes it easy to monitor everything in a single place.

We can apply machine learning to APM data by using the APM UI to find any abnormal behavior in the data. Alerts can also be applied to get an email notification if anything goes wrong in the code. Currently, Elastic APM supports Node.js, Python, Java, Ruby, Go, and JavaScript. It's easy to configure APM with your application and it requires only a few lines of code.

Security, Monitoring, and Alerting with Elastic Stack

With X-Pack, we can enable security, alerting, and monitoring with our Elastic setup. These features are very important and we need them to protect our Elastic Stack from external access and any possible issues. Now let's discuss each of them in detail.


Security

Security is a very important feature of X-Pack; without it, anyone can open the URL and access everything in Kibana, including index patterns, data, visualizations, and dashboards. During X-Pack installation and setup, we create the default user credentials. For security, we have role management and user management, with which we can restrict user access and secure the Elastic Stack.


Monitoring

Monitoring provides us with insight into Elasticsearch, Logstash, and Kibana. Monitoring comes with X-Pack, which we can install after installing the basic Elastic Stack setup. Monitoring-related data is stored in Elasticsearch, which we can see from Kibana. We have built-in status warnings in Kibana, and custom alerts can be configured on the data in the monitoring indices.


Alerting

Elastic Stack uses alerting to keep an eye on any activity, such as whether CPU usage increases, memory consumption goes beyond some threshold, the response time of an application goes up, or 503 errors are increasing. By creating alerts, we can proactively monitor system or application behavior and apply a check before anything actually goes wrong.

Using alerts, we can notify every stakeholder without missing anything. We can apply alerts to detect specific issues, such as a user logging in from a different location, credit card numbers showing up in application logs, or the indexing rate of Elasticsearch increasing. These are just some examples; we can apply alerts in many more cases.

There are different ways to notify users, as there are lots of built-in integrations available for email, Slack, and so on. Apart from these built-in options, we can integrate alerts with any existing system via the webhook output provided by Elastic Stack. Alerts also have simple template support, which we can use to customize the notification. I'll cover how to configure alerts in the coming chapters.
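
To sketch what such an alert looks like in the 6.x Watcher API (the index pattern, threshold, and email address are made up), a watch combines a trigger, an input, a condition, and one or more actions:

```
PUT _xpack/watcher/watch/error_spike
{
  "trigger": { "schedule": { "interval": "5m" } },
  "input": {
    "search": {
      "request": {
        "indices": ["logs-*"],
        "body": { "query": { "match": { "response": "503" } } }
      }
    }
  },
  "condition": {
    "compare": { "ctx.payload.hits.total": { "gt": 10 } }
  },
  "actions": {
    "notify_ops": {
      "email": { "to": "ops@example.com", "subject": "503 errors are spiking" }
    }
  }
}
```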

Data Visualization

Data visualization is the main feature of Kibana, and it's the best way to get information from raw data. As we know, a picture is worth a thousand words, so we can easily learn about a data trend just by seeing a simple chart. Kibana is popular because it has the ability to create dashboards for KPIs using data from different sources; we can even use Beats to get ready-made dashboards. We have different types of visualizations in Kibana, such as basic charts, data tables, time series, and maps, which we'll cover in the coming chapters. If we have data in Elasticsearch, we can create visualizations by creating index patterns in Kibana for those Elasticsearch indices.


Installing Elastic Stack

Elastic Stack consists of different components, such as Elasticsearch, Logstash, Kibana, and different Beats. We need to install each component individually, so let's start with Elasticsearch.

The installation steps might change, depending on the release of version 7. The updated steps will be available at the following link once the version is released.


Installing Elasticsearch

To install Elasticsearch 6, we need at least Java 8. Please first ensure that Java 8 or later is installed on your system. Once Java is installed, we can go ahead and install Elasticsearch. You can find the binaries at www.elastic.co/downloads.

Installation using the tar file

Follow the steps to install using the tar file:

  1. First, we need to download the latest Elasticsearch tar, as shown in the following code block:
curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.x.tar.gz
  2. Then, extract it using the following command:
tar -xvf elasticsearch-6.x.tar.gz
  3. After extracting it, we have a bunch of files and folders. Move to the bin directory by executing the following command:
cd elasticsearch-6.x/bin
  4. After moving to the bin directory, run Elasticsearch using the following command:
./elasticsearch

Installation using Homebrew

Using Homebrew, we can install Elasticsearch on macOS, as follows:

brew install elasticsearch

Installation using MSI Windows installer

For Windows, we have the MSI Installer package, which includes a graphical user interface (GUI) that we can use to complete the installation process. We can download the Elasticsearch 6.x MSI from the Elasticsearch download section at https://www.elastic.co/downloads/elasticsearch.

We can launch the GUI-based installer by double-clicking on the downloaded executable file. On the first screen, select the deployment directories and install the software by following the installation screens.

Installation using the Debian package

Follow these steps to install using the Debian package:

  1. First, install the apt-transport-https package using the following command:
sudo apt-get install apt-transport-https
  2. Save the repository definition to /etc/apt/sources.list.d/elastic-6.x.list:
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
  3. To install the Elasticsearch Debian package, run the following command:
sudo apt-get update && sudo apt-get install elasticsearch

Installation with the RPM package

  1. Download and then install the public signing key:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
  2. Create a file called elasticsearch.repo for RedHat-based distributions under the /etc/yum.repos.d/ directory. For the OpenSuSE-based distributions, we have to create the file under the /etc/zypp/repos.d/ directory. We need to add the following entry in the file:
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

After adding the preceding code, we can install Elasticsearch on the following environments.

  • We can run the yum command on CentOS and older versions of RedHat-based distributions:
sudo yum install elasticsearch
  • On Fedora and other newer versions of RedHat distributions, use the dnf command:
sudo dnf install elasticsearch
  • The zypper command can be used on OpenSUSE-based distributions:
sudo zypper install elasticsearch
  • The Elasticsearch service can be started or stopped using the following commands:
sudo -i service elasticsearch start
sudo -i service elasticsearch stop


Installing Logstash

We have different ways to install Logstash, depending on the operating system. Let's see how we can install Logstash on different environments.

Using APT Package Repositories

Follow these steps to install using the APT package repositories:

  1. Download and install the public signing key using the following command:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
  2. On Debian, we have to install the apt-transport-https package:
sudo apt-get install apt-transport-https
  3. Save the following repository definition in /etc/apt/sources.list.d/elastic-6.x.list:
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
  4. Run the sudo apt-get update command to update the repository. After the update, the repository is ready to use, and we can install Logstash by executing the following command:
sudo apt-get update && sudo apt-get install logstash

Using YUM Package Repositories

Follow these steps to install using the YUM package repositories:

  1. Download the public signing key and then install it using the following expression:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
  2. Under the /etc/yum.repos.d/ directory, add the following expression in a file with a .repo suffix, for example, logstash.repo:
[logstash-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
  3. The repository is ready after we add the preceding code. Using the following command, we can install Logstash:
sudo yum install logstash


Installing Kibana

From version 6.0.0 onward, Kibana only supports 64-bit operating systems, so we need to ensure we have a 64-bit operating system before installing Kibana.

Installing Kibana with .tar.gz

Follow these steps to install Kibana with .tar.gz:

  1. Using the following expressions, we can download and extract the Linux archive for Kibana v6.x:
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.x-linux-x86_64.tar.gz
tar -xzf kibana-6.x-linux-x86_64.tar.gz
  2. Change the directory and move to $KIBANA_HOME by running the following command:
cd kibana-6.x-linux-x86_64/
  3. We can start Kibana using the following command:
./bin/kibana

Installing Kibana using the Debian package

Follow these steps to install Kibana using the Debian package:

  1. For the Debian package, download and install the public signing key using the following command:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
  2. Install the apt-transport-https package using the following expression:
sudo apt-get install apt-transport-https
  3. We need to add the following repository definition in /etc/apt/sources.list.d/elastic-6.x.list:
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
  4. Install the Kibana Debian package by running the following command:
sudo apt-get update && sudo apt-get install kibana

Installing Kibana using RPM

Follow these steps to install Kibana using RPM:

  1. Install the public signing key after downloading it for the RPM package:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
  2. Create a file called kibana.repo under the /etc/yum.repos.d/ directory for RedHat-based distributions. For OpenSuSE-based distributions, we need to create the file under the /etc/zypp/repos.d/ directory and then add the following expression:
[kibana-6.x]
name=Kibana repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

After adding the preceding expression in our file, we can install Kibana using the following commands:

  • With yum, on CentOS and older versions of RedHat-based distributions, we need to run the following command:
sudo yum install kibana
  • We can use the dnf command on Fedora and newer RedHat distributions:
sudo dnf install kibana

Using zypper on OpenSUSE-based distributions

We can use zypper to install Kibana on OpenSUSE-based distributions using the following command:

sudo zypper install kibana

Installing Kibana on Windows

Follow these steps to install Kibana on Windows:

  1. From the Elastic download section (https://www.elastic.co/downloads/kibana), download the .zip Windows archive for Kibana v6.x.
  2. Create a folder called kibana-6.x-windows-x86_64 by unzipping the zipped archive; we refer to this folder path as $KIBANA_HOME. Now move to the $KIBANA_HOME directory by using the following expression:
cd c:\kibana-6.x-windows-x86_64
  3. To start Kibana, we need to run the following command:
.\bin\kibana.bat


Installing Beats

Beats are separately installable products; they are lightweight data shippers. There are many Beats available, as follows:

  • Packetbeat
  • Metricbeat
  • Filebeat
  • Winlogbeat
  • Heartbeat


Installing Packetbeat

There are many ways to download and install Packetbeat, depending on your operating system. Let's see the different commands for different types of OSes:

  • To install Packetbeat on deb, use the following commands:
sudo apt-get install libpcap0.8
curl -L -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-6.2.1-amd64.deb
sudo dpkg -i packetbeat-6.2.1-amd64.deb
  • To install Packetbeat on rpm, use the following commands:
sudo yum install libpcap
curl -L -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-6.2.1-x86_64.rpm
sudo rpm -vi packetbeat-6.2.1-x86_64.rpm
  • To install Packetbeat on macOS, use the following commands:
curl -L -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-6.2.1-darwin-x86_64.tar.gz
tar xzvf packetbeat-6.2.1-darwin-x86_64.tar.gz
  • To install Packetbeat on the Windows environment, perform the following steps:
  1. Get the Packetbeat Windows zip file from the Elastic downloads section.
  2. Extract the zip file to C:\Program Files.
  3. Rename the extracted folder to Packetbeat.
  4. Run the PowerShell prompt as an Administrator.
  5. To install Packetbeat as a Windows service, run the following command:
PS > cd 'C:\Program Files\Packetbeat'
PS C:\Program Files\Packetbeat> .\install-service-packetbeat.ps1


Installing Metricbeat

There are different ways to install Metricbeat on your operating system. Using the following expressions, we can install Metricbeat on different OSes:

  • To install Metricbeat on deb, use the following commands:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-6.x-amd64.deb
sudo dpkg -i metricbeat-6.x-amd64.deb
  • To install Metricbeat on rpm, use the following commands:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-6.x-x86_64.rpm
sudo rpm -vi metricbeat-6.x-x86_64.rpm
  • To install Metricbeat on macOS, use the following commands:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-6.x-darwin-x86_64.tar.gz
tar xzvf metricbeat-6.x-darwin-x86_64.tar.gz
  • To install Metricbeat on Windows, perform the following steps:
  1. Download the Metricbeat Windows zip from the Elastic download section.
  2. Extract the file to the C:\Program Files directory.
  3. Rename the long metricbeat directory name to Metricbeat.
  4. Run the PowerShell prompt as an Administrator.

If you're running Windows XP, you may need to download and install PowerShell.
  5. From the PowerShell prompt, run the following commands to install Metricbeat as a Windows service:
PS > cd 'C:\Program Files\Metricbeat'
PS C:\Program Files\Metricbeat> .\install-service-metricbeat.ps1
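With Metricbeat installed, a small metricbeat.yml is enough to start shipping system metrics. This is a sketch, assuming Elasticsearch on localhost:9200; the chosen metricsets and collection period are illustrative:

```yaml
# metricbeat.yml -- minimal sketch (metricsets, period, and hosts are assumptions)
metricbeat.modules:
  - module: system
    metricsets: ["cpu", "memory", "filesystem"]
    period: 10s                     # how often to collect the metrics

output.elasticsearch:
  hosts: ["localhost:9200"]
```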


We can download and install Filebeat on different operating systems in the following ways:

  • To install Filebeat using the deb package, use the following commands:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.x-amd64.deb
sudo dpkg -i filebeat-6.x-amd64.deb
  • To install Filebeat using the rpm package, use the following commands:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.x-x86_64.rpm
sudo rpm -vi filebeat-6.x-x86_64.rpm
  • To install Filebeat on macOS, use the following commands:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.1-darwin-x86_64.tar.gz
tar xzvf filebeat-6.2.1-darwin-x86_64.tar.gz
  • To install Filebeat on Windows perform the following steps:
  1. From the Elastic downloads section, download the Filebeat Windows zip file.
  2. Extract the zip file into C:\Program Files.
  3. Rename the long filebeat directory to Filebeat.
  4. Open a PowerShell prompt as an administrator.
  5. From the PowerShell prompt, execute the following commands:
PS > cd 'C:\Program Files\Filebeat'
PS C:\Program Files\Filebeat> .\install-service-filebeat.ps1
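After installing Filebeat, you point it at the log files it should harvest. The fragment below is a sketch; the paths and the Logstash host are assumptions. Note that in Filebeat 6.3 and later, the filebeat.prospectors section was renamed filebeat.inputs:

```yaml
# filebeat.yml -- minimal sketch (paths and hosts are assumptions)
filebeat.prospectors:               # renamed to filebeat.inputs in 6.3+
  - type: log
    enabled: true
    paths:
      - /var/log/*.log              # harvest all .log files in /var/log

output.logstash:                    # or use output.elasticsearch to skip Logstash
  hosts: ["localhost:5044"]
```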


In this chapter, we introduced Elastic Stack and discussed its components: Elasticsearch, Logstash, Kibana, and the various Beats. We then looked at different use cases of Elastic Stack: System Performance Monitoring, where we track the system's performance; Log Management, where we collect different logs and monitor them from a central place; and Application Performance Monitoring, where we monitor an application by connecting it to a central APM server. We also covered Application Data Analysis, where we analyze the application's data; Security Monitoring and Alerting, where we secure our stack using X-Pack, monitor it regularly, and configure alerts to keep an eye on changes that can impact the system's performance; and Data Visualization, where we use Kibana to create different types of visualizations from the available data.

In the next chapter, we'll cover different methods of pushing data into Kibana, such as from RDBMS, files, system metrics, CSV, and applications. We'll start with different Beats to demonstrate the complete process of configuring these Beats and sending data directly to Elasticsearch or via Logstash to Elasticsearch. Then, we'll look at how to import data from CSV by configuring Logstash to take input and insert data into Elasticsearch. After CSV, we'll fetch data from RDBMS using SQL queries through the JDBC plugin and insert it into Elasticsearch. We'll use the preceding methods to insert data into Elasticsearch, and then we'll configure Kibana to fetch the data by creating an index pattern. In this way, we can fetch any type of data into Kibana and can then perform different operations on that data.
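The CSV import flow mentioned above can be sketched as a Logstash pipeline. Everything here is illustrative: the file path, column names, index name, and hosts are assumptions, not values from the book:

```conf
# csv-pipeline.conf -- illustrative sketch (path, columns, and index are assumptions)
input {
  file {
    path => "/path/to/data.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"     # re-read the file on every run (handy while testing)
  }
}

filter {
  csv {
    separator => ","
    columns => ["name", "age"]      # must match the CSV column order
  }
  mutate {
    convert => { "age" => "integer" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "csv-data"
  }
}
```

Run it with bin/logstash -f csv-pipeline.conf, then create an index pattern for csv-data in Kibana to explore the data.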

About the Author
  • Anurag Srivastava

    Anurag Srivastava is a senior technical lead in a multinational software company. He has more than 12 years' experience in web-based application development. He is proficient in designing architecture for scalable and highly available applications. He has handled development teams and multiple clients from all over the globe over the past 10 years of his professional career. He has significant experience with the Elastic Stack (Elasticsearch, Logstash, and Kibana) for creating dashboards using system metrics data, log data, application data, and relational databases. He has authored three other books: Mastering Kibana 6.x, Kibana 7 Quick Start Guide, and Learning Kibana 7 - Second Edition, all published by Packt.
