Although this book is about Kibana, it doesn't make sense to study it without being aware of the complete Elastic Stack (ELK Stack), which includes Elasticsearch, Kibana, Logstash, and Beats. In this chapter, you are going to learn the basic concepts of each of these tools, their installation, and their use cases. We cannot use Kibana to its full strength unless we know how to get proper data, filter it, and store it in a format that we can easily use in Kibana.
Elasticsearch is a search engine built on top of Apache Lucene, mainly used for storing schemaless data and searching it quickly. Logstash is a data pipeline that can take data from practically any source and send it to practically any destination; we can also filter that data as per our requirements. Beats are single-purpose data shippers that run on individual servers and send data to a Logstash server or directly to an Elasticsearch server. Finally, Kibana uses the data stored in Elasticsearch to create beautiful dashboards using different types of visualization options, such as graphs, charts, histograms, word tags, and data tables.
In this chapter, we will be covering the following topics:
- What is ELK Stack?
- The installation of Elasticsearch, Logstash, Kibana, and Beats
- ELK use cases
ELK Stack is a stack of three different open source tools: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search engine developed on top of Apache Lucene. Logstash is basically used for data pipelining, where we can get data from any data source as input, transform it if required, and send it to any destination as output; in general, we use Logstash to push data into Elasticsearch. Kibana is a dashboard and visualization tool that can be configured with Elasticsearch to generate charts, graphs, and dashboards using our data.
We can use ELK Stack for different use cases, the most common being log analysis. Other than that, we can use it for business intelligence, application security and compliance, web analytics, fraud management, and so on.
In the following subsections, we are going to be looking at ELK Stack's components.
Elasticsearch is a full text search engine that can be used as a NoSQL database and as an analytics engine. It is easy to scale, schemaless, and near real time, provides a RESTful interface for different operations, and uses inverted indices for data storage. There are different language clients available for Elasticsearch, as follows:
- Java
- PHP
- Perl
- Python
- .NET
- Ruby
- JavaScript
- Groovy
The basic components of Elasticsearch are as follows:
- Cluster
- Node
- Index
- Type
- Document
- Shard
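As a quick, hedged illustration of these components in action, the following request indexes a single document; it assumes an Elasticsearch node running locally on the default port 9200, and the index name `books` and the field names are hypothetical:

```shell
# Index one JSON document into a hypothetical "books" index with ID 1
curl -X PUT "http://127.0.0.1:9200/books/_doc/1" -H 'Content-Type: application/json' -d '
{
  "title": "Mastering Kibana",
  "pages": 320
}'
```

Elasticsearch creates the `books` index automatically if it does not already exist, and the document is distributed across the index's shards, which in turn live on the nodes of the cluster.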
Logstash is basically used for data pipelining, through which we can take input from different sources and send output to different destinations. Using Logstash, we can clean the data through filter options and mutate the input data before sending it to the output. Logstash has different plugins to handle different applications, such as MySQL or any other relational database connection: the JDBC input plugin lets us connect to a MySQL server, run queries, and take the table data as the input in Logstash. For Elasticsearch, there is an output plugin in Logstash that gives us the option to seamlessly transfer data from Logstash to Elasticsearch.
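As an illustration, a minimal `input` section using the JDBC input plugin might look like the following sketch; the connection string, credentials, driver path, and query are all hypothetical placeholders to adapt to your environment:

```
input {
  jdbc {
    # All values below are placeholders for your own environment
    jdbc_connection_string => "jdbc:mysql://localhost:3306/shop"
    jdbc_user => "dbuser"
    jdbc_password => "secret"
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM orders"
  }
}
```

Each row returned by the query becomes one Logstash event, which can then be filtered and sent to any output, such as Elasticsearch.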
To run Logstash, we need to install it and edit the configuration file `logstash.conf`, which consists of `input`, `filter`, and `output` sections. We need to tell Logstash where it should get the input from through the `input` block, what it should do with the input through the `filter` block, and where it should send the output through the `output` block. In the following example, I am reading an Apache access log and sending the output to Elasticsearch:
input {
  file {
    path => "/var/log/apache2/access.log"
  }
}
filter {
  grok {
    match => { message => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => "http://127.0.0.1:9200"
    index => "logs_apache"
    document_type => "logs"
  }
}
The `input` block uses the `file` plugin, whose `path` key is set to `/var/log/apache2/access.log`, which is Apache's log file. The `filter` block uses the `grok` filter, which converts unstructured data into structured data by parsing it.
There are different patterns that we can apply for the Logstash filter. Here, we are parsing the Apache logs, but we can filter different things, such as email, IP addresses, and dates.
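For example, a hedged sketch of a `filter` block that pulls an IP address and a timestamp out of a custom log line might look like this; the field names `client_ip` and `event_time` are illustrative, not required names:

```
filter {
  grok {
    # Parse lines such as: 192.168.1.10 10/Oct/2018:13:55:36 +0530 some message
    match => { "message" => "%{IP:client_ip} %{HTTPDATE:event_time} %{GREEDYDATA:rest}" }
  }
}
```

`IP`, `HTTPDATE`, and `GREEDYDATA` are standard grok patterns shipped with Logstash; matched fragments become named fields on the event.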
Kibana is the open source dashboarding tool of ELK Stack, and it is a very good tool for creating different visualizations, charts, maps, and histograms; by integrating different visualizations together, we can create dashboards. Because it is part of ELK Stack, it is quite easy for Kibana to read the Elasticsearch data, and it does not require any programming skills: Kibana has a beautiful UI for creating different types of visualizations and dashboards.
When we use Beats, Kibana provides us with different inbuilt dashboards with multiple visualizations, such as for CPU usage and memory usage, which we can customize to create a useful dashboard.
Beats are basically data shippers that are grouped to do single-purpose jobs. For example, Metricbeat is used to collect metrics for memory usage, CPU usage, and disk space, whereas Filebeat is used to send file data such as logs. They can be installed as agents on different servers to send data from different sources to a central Logstash or Elasticsearch cluster. They are written in Go; they work on a cross-platform environment; and they are lightweight in design. Before Beats, it was very difficult to get data from different machines as there was no single-purpose data shipper, and we had to do some tweaking to get the desired data from servers.
For example, if I am running a web application on the Apache web server and want to run it smoothly, then there are two things that need to be monitored—first, all of the errors from the application, and second, the server's performance, such as memory usage, CPU usage, and disk space. So, in order to collect this information, we need to install the following two Beats on our machine:
- Filebeat: This is used to collect log data from the Apache web server in an incremental way. Filebeat runs on the server and periodically checks for any change in the Apache log file. When there is a change, it sends the new log entries to Logstash. Logstash receives the data, executes the filter to find the errors, and, after filtering, saves the data into Elasticsearch.
- Metricbeat: This is used to collect server metrics, such as memory usage, CPU usage, and disk space, and to save that data into Elasticsearch. Metricbeat sends a predefined set of metrics, and there is no need to modify anything; that is why it sends data directly to Elasticsearch instead of sending it to Logstash first.
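To make the Filebeat side of this setup concrete, a minimal `filebeat.yml` sketch for the Apache scenario above could look like the following; the log path and the Logstash host are assumptions for a typical setup:

```yaml
filebeat.prospectors:
  - type: log
    paths:
      - /var/log/apache2/access.log

output.logstash:
  hosts: ["localhost:5044"]
```

With this configuration, Filebeat tails the Apache access log and ships new lines to a Logstash instance listening on port 5044.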
To visualize this data, we can use Kibana to create meaningful dashboards through which we can get complete control of our data.
For a complete installation of ELK Stack, we first need to install individual components that are explained one by one in the following sections.
Elasticsearch 6.0 requires at least Java 8. Before you proceed with the installation of Elasticsearch, please check which version of Java is present on your system by executing the following commands:
java -version
echo $JAVA_HOME
After the setup is complete, we can go ahead and run Elasticsearch. You can find the binaries at www.elastic.co/downloads.
First, we will download the Elasticsearch 6.1.3 `tar.gz` archive, as shown in the following code block:
curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.1.3.tar.gz
Then, extract it as follows:
tar -xvf elasticsearch-6.1.3.tar.gz
You will then see that a bunch of files and folders have been created. We can now proceed to the `bin` directory, as follows:
cd elasticsearch-6.1.3/bin
We are now ready to start our node and a single cluster:
./elasticsearch
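Once Elasticsearch has started, we can check that the node is up (assuming the default port) by sending a request to it:

```shell
# Query the root endpoint of the local Elasticsearch node
curl http://127.0.0.1:9200/
```

If everything started correctly, Elasticsearch responds with a small JSON document containing the node name, cluster name, and version.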
Windows users are recommended to use the MSI Installer package. This package includes a graphical user interface (GUI) that guides the users through the installation process.
First, download the Elasticsearch 6.1.3 MSI from https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.1.3.msi.
Launch the GUI by double-clicking on the downloaded file. On the first screen, select the deployment directories:
On Debian, before you can proceed with the installation process, you may need to install the `apt-transport-https` package first:
sudo apt-get install apt-transport-https
Save the repository definition to `/etc/apt/sources.list.d/elastic-6.x.list`:
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
You can install the `elasticsearch` Debian package with the following code:
sudo apt-get update && sudo apt-get install elasticsearch
Download and install the public signing key:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Create a file named `elasticsearch.repo` in the `/etc/yum.repos.d/` directory for Red Hat-based distributions or in the `/etc/zypp/repos.d/` directory for openSUSE-based distributions, containing the following code:
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Your repository is now ready for use. You can now install Elasticsearch with one of the following commands:
You can use `yum` on CentOS and older Red Hat-based distributions:
sudo yum install elasticsearch
You can use `dnf` on Fedora and other newer Red Hat distributions:
sudo dnf install elasticsearch
You can use `zypper` on openSUSE-based distributions:
sudo zypper install elasticsearch
Elasticsearch can be started and stopped using the `service` command:
sudo -i service elasticsearch start
sudo -i service elasticsearch stop
Logstash also requires at least Java 8. Before you go ahead with the installation of Logstash, please check the version of Java on your system by running the following commands:
java -version
echo $JAVA_HOME
Download and install the public signing key:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
You may need to install the `apt-transport-https` package on Debian before proceeding, as follows:
sudo apt-get install apt-transport-https
Save the repository definition to `/etc/apt/sources.list.d/elastic-6.x.list`, as follows:
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
Run `sudo apt-get update` and the repository will be ready for use. You can install Logstash using the following code:
sudo apt-get update && sudo apt-get install logstash
Download and install the public signing key:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Add the following in your `/etc/yum.repos.d/` directory in a file with a `.repo` suffix (for example, `logstash.repo`):
[logstash-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Your repository is now ready for use. You can install it using the following code:
sudo yum install logstash
Starting with version 6.0.0, Kibana only supports 64-bit operating systems.
The Linux archive for Kibana v6.1.3 can be downloaded and installed as follows:
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.1.3-linux-x86_64.tar.gz
Compare the SHA produced by `sha1sum` or `shasum` with the published SHA, and then extract the archive:

sha1sum kibana-6.1.3-linux-x86_64.tar.gz
tar -xzf kibana-6.1.3-linux-x86_64.tar.gz
This directory is known as `$KIBANA_HOME`:
cd kibana-6.1.3-linux-x86_64/
Download and install the public signing key:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
You may need to install the `apt-transport-https` package on Debian before proceeding:
sudo apt-get install apt-transport-https
Save the repository definition to `/etc/apt/sources.list.d/elastic-6.x.list`:
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
You can install the Kibana Debian package with the following:
sudo apt-get update && sudo apt-get install kibana
Download and install the public signing key, as follows:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Create a file named `kibana.repo` in the `/etc/yum.repos.d/` directory for Red Hat-based distributions, or in the `/etc/zypp/repos.d/` directory for openSUSE-based distributions, containing the following code:
[kibana-6.x]
name=Kibana repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Your repository is now ready for use. You can now install Kibana with one of the following commands:
- You can use `yum` on CentOS and older Red Hat-based distributions:
sudo yum install kibana
- You can use `dnf` on Fedora and other newer Red Hat distributions:
sudo dnf install kibana
- You can use `zypper` on openSUSE-based distributions:
sudo zypper install kibana
Download the `.zip` Windows archive for Kibana v6.1.3 from https://artifacts.elastic.co/downloads/kibana/kibana-6.1.3-windows-x86_64.zip.
Unzipping it will create a folder named `kibana-6.1.3-windows-x86_64`, which we will refer to as `$KIBANA_HOME`. In your Terminal, `CD` to the `$KIBANA_HOME` directory; for instance:
CD c:\kibana-6.1.3-windows-x86_64
Kibana can be started from the command line as follows:
.\bin\kibana
After installing and configuring the ELK Stack, you need to install and configure your Beats.
Each Beat is a separately installable product. To get up and running quickly with a Beat, see the getting started information for your Beat:
- Packetbeat
- Metricbeat
- Filebeat
- Winlogbeat
- Heartbeat
The value of a network packet analytics system such as Packetbeat can be best understood by trying it on your traffic.
To download and install Packetbeat, use the commands that work with your system (`deb` for Debian/Ubuntu, `rpm` for Red Hat/CentOS/Fedora, macOS for OS X, Docker for any Docker platform, and `win` for Windows):
- Ubuntu:
sudo apt-get install libpcap0.8
curl -L -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-6.2.1-amd64.deb
sudo dpkg -i packetbeat-6.2.1-amd64.deb
- Red Hat:
sudo yum install libpcap
curl -L -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-6.2.1-x86_64.rpm
sudo rpm -vi packetbeat-6.2.1-x86_64.rpm
- macOS:
curl -L -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-6.2.1-darwin-x86_64.tar.gz
tar xzvf packetbeat-6.2.1-darwin-x86_64.tar.gz
- Windows:
- Download and install WinPcap from this page. WinPcap is a library that uses a driver to enable packet capturing.
- Download the Packetbeat Windows ZIP file from the downloads page.
- Extract the contents of the ZIP file into `C:\Program Files`.
- Rename the `packetbeat-<version>-windows` directory to `Packetbeat`.
- Open a PowerShell prompt as an administrator (right-click the PowerShell icon and select Run as administrator). If you are running Windows XP, you may need to download and install PowerShell.
- From the PowerShell prompt, run the following commands to install Packetbeat as a Windows service:
PS > cd 'C:\Program Files\Packetbeat'
PS C:\Program Files\Packetbeat> .\install-service-packetbeat.ps1
Before starting Packetbeat, you should look at the configuration options in the configuration file; for example, `C:\Program Files\Packetbeat\packetbeat.yml` or `/etc/packetbeat/packetbeat.yml`.
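As an illustration, a minimal `packetbeat.yml` sketch for capturing HTTP traffic might look like the following; the interface, ports, and Elasticsearch host are assumptions to adapt to your environment:

```yaml
# Capture on all interfaces (placeholder; pick a specific device in production)
packetbeat.interfaces.device: any

packetbeat.protocols.http:
  ports: [80, 8080]

output.elasticsearch:
  hosts: ["127.0.0.1:9200"]
```

This tells Packetbeat to sniff HTTP conversations on ports 80 and 8080 and index the resulting transaction events into Elasticsearch.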
Metricbeat should be installed as close as possible to the service that needs to be monitored. For example, if there are four servers running MySQL, it is strongly recommended that you run Metricbeat on each of those servers. This gives Metricbeat access to your service from localhost, which does not cause any additional network traffic and allows Metricbeat to keep collecting metrics even when there are network problems. Metrics from multiple Metricbeat instances will be combined on the Elasticsearch server.
To download and install Metricbeat, use the commands that work with your system (`deb` for Debian/Ubuntu, `rpm` for Red Hat/CentOS/Fedora, macOS for OS X, Docker for any Docker platform, and `win` for Windows), as follows:
- Ubuntu:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-6.2.1-amd64.deb
sudo dpkg -i metricbeat-6.2.1-amd64.deb
- Red Hat:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-6.2.1-x86_64.rpm
sudo rpm -vi metricbeat-6.2.1-x86_64.rpm
- macOS:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-6.2.1-darwin-x86_64.tar.gz
tar xzvf metricbeat-6.2.1-darwin-x86_64.tar.gz
- Windows:
- Download the Metricbeat Windows ZIP file from the downloads page.
- Extract the contents of the ZIP file into `C:\Program Files`.
- Rename the `metricbeat-<version>-windows` directory to `Metricbeat`.
- Open a PowerShell prompt as an administrator (right-click the PowerShell icon and select Run as administrator). If you are running Windows XP, you may need to download and install PowerShell.
- From the PowerShell prompt, run the following commands to install Metricbeat as a Windows service:
PS > cd 'C:\Program Files\Metricbeat'
PS C:\Program Files\Metricbeat> .\install-service-metricbeat.ps1
Before starting Metricbeat, you should look at the configuration options in the configuration file; for example, `C:\Program Files\Metricbeat\metricbeat.yml`.
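A minimal `metricbeat.yml` sketch collecting the system metrics mentioned earlier might look like this; the metricsets, period, and Elasticsearch host are typical defaults rather than requirements:

```yaml
metricbeat.modules:
  - module: system
    metricsets: ["cpu", "memory", "filesystem"]
    period: 10s

output.elasticsearch:
  hosts: ["127.0.0.1:9200"]
```

Every 10 seconds, Metricbeat samples CPU, memory, and filesystem usage and indexes the results directly into Elasticsearch.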
To download and install Filebeat, use the commands that work with your system (`deb` for Debian/Ubuntu, `rpm` for Red Hat/CentOS/Fedora, macOS for OS X, Docker for any Docker platform, and `win` for Windows), as follows:
- Ubuntu:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.1-amd64.deb
sudo dpkg -i filebeat-6.2.1-amd64.deb
- Red Hat:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.1-x86_64.rpm
sudo rpm -vi filebeat-6.2.1-x86_64.rpm
- macOS:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.1-darwin-x86_64.tar.gz
tar xzvf filebeat-6.2.1-darwin-x86_64.tar.gz
- Windows:
- Download the Filebeat Windows ZIP file from the downloads page.
- Extract the contents of the ZIP file into `C:\Program Files`.
- Rename the `filebeat-<version>-windows` directory to `Filebeat`.
- Open a PowerShell prompt as an administrator (right-click the PowerShell icon and select Run as administrator). If you are running Windows XP, you may need to download and install PowerShell.
- From the PowerShell prompt, run the following commands to install Filebeat as a Windows service:
PS > cd 'C:\Program Files\Filebeat'
PS C:\Program Files\Filebeat> .\install-service-filebeat.ps1
In order to install Winlogbeat, we need to follow these steps:
- Download the Winlogbeat ZIP file from the downloads page.
- Extract the contents into `C:\Program Files`.
- Rename the `winlogbeat-<version>` directory to `Winlogbeat`.
- Open a PowerShell prompt as an administrator (right-click on the PowerShell icon and select Run as administrator). If you are running Windows XP, you may need to download and install PowerShell.
- From the PowerShell prompt, run the following commands to install the service:
PS C:\Users\Administrator> cd 'C:\Program Files\Winlogbeat'
PS C:\Program Files\Winlogbeat> .\install-service-winlogbeat.ps1
Note: Security warning: Only run scripts that you trust. Although scripts from the internet can be useful, they can potentially harm your computer. If you trust the script, use `Unblock-File` to allow the script to run without this warning message:
Do you want to run C:\Program Files\Winlogbeat\install-service-winlogbeat.ps1?
[D] Do not run  [R] Run once  [S] Suspend  [?] Help (default is "D"): R

Status   Name        DisplayName
------   ----        -----------
Stopped  winlogbeat  winlogbeat
Before starting Winlogbeat, you should look at the configuration options in the configuration file; for example, `C:\Program Files\Winlogbeat\winlogbeat.yml`. There's also a full example configuration file named `winlogbeat.reference.yml`.
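As an illustration, a minimal `winlogbeat.yml` sketch collecting the standard Windows event logs might look like the following; the Elasticsearch host is an assumption for a typical local setup:

```yaml
winlogbeat.event_logs:
  - name: Application
  - name: Security
  - name: System

output.elasticsearch:
  hosts: ["127.0.0.1:9200"]
```

Winlogbeat reads new entries from the listed Windows event logs and ships them to Elasticsearch as structured events.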
Unlike most Beats, which we install on edge nodes, we typically install Heartbeat as part of a monitoring service that runs on a separate machine and possibly even outside of the network where the services that you want to monitor are running.
To download and install Heartbeat, use the commands that work with your system (`deb` for Debian/Ubuntu, `rpm` for Red Hat/CentOS/Fedora, macOS for OS X, Docker for any Docker platform, and `win` for Windows):
- Ubuntu:
curl -L -O https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-6.2.1-amd64.deb
sudo dpkg -i heartbeat-6.2.1-amd64.deb
- Red Hat:
curl -L -O https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-6.2.1-x86_64.rpm
sudo rpm -vi heartbeat-6.2.1-x86_64.rpm
- macOS:
curl -L -O https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-6.2.1-darwin-x86_64.tar.gz
tar xzvf heartbeat-6.2.1-darwin-x86_64.tar.gz
- Windows:
- Download the Heartbeat Windows ZIP file from the downloads page.
- Extract the contents of the ZIP file into `C:\Program Files`.
- Rename the `heartbeat-<version>-windows` directory to `Heartbeat`.
- Open a PowerShell prompt as an administrator (right-click the PowerShell icon and select Run as administrator). If you are running Windows XP, you may need to download and install PowerShell.
- From the PowerShell prompt, run the following commands to install Heartbeat as a Windows service:
PS > cd 'C:\Program Files\Heartbeat'
PS C:\Program Files\Heartbeat> .\install-service-heartbeat.ps1
Before starting Heartbeat, you should look at the configuration options in the configuration file; for example, `C:\Program Files\Heartbeat\heartbeat.yml` or `/etc/heartbeat/heartbeat.yml`.
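A minimal `heartbeat.yml` sketch that pings a service over HTTP might look like this; the monitored URL, the schedule, and the Elasticsearch host are illustrative placeholders:

```yaml
heartbeat.monitors:
  - type: http
    urls: ["http://127.0.0.1:9200"]
    schedule: '@every 10s'

output.elasticsearch:
  hosts: ["127.0.0.1:9200"]
```

Every 10 seconds, Heartbeat checks whether the URL responds and records the up/down status and response time in Elasticsearch.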
ELK Stack has many different use cases, but here we are only going to discuss some of them.
In any large organization, there will be different servers running different sets of applications, and different teams whose task is to explore the log files to debug any issue. This is not an easy task, as log formats are rarely user friendly. And that is just for a single application: what happens if we ask a team to monitor many applications, built using different technologies, each with a log format very different from the others? The answer is simple: the team has to dig through all the logs from the different servers, and they will spend days and nights finding the issue.

ELK Stack is very useful in these situations, and we can solve this problem easily. First, we set up a central Elasticsearch cluster for collecting all of the different logs. Then, we configure Logstash per application log so that we can transform the different log formats that we are getting from the different application servers. Logstash outputs this data into Elasticsearch for storage, where we can explore, search, and update it. Finally, Kibana displays graphical dashboards on top of Elasticsearch. Using this setup, anyone can get complete control of all logs coming from different sources on a single screen, and we can configure Kibana to alert us to any issues in the log files so that users learn about problems without doing any data drill downs. Many organizations use ELK for their log management because it is open source software that can easily be set up to monitor different types of logs in one place.
Security monitoring and alerting is another very important use case of ELK Stack. Application security is vital, and any security breach is costly; breaches are becoming more common and, most importantly, more targeted. Although enterprises regularly try to improve their security measures, hackers still succeed in penetrating the security layers. It is therefore essential for any enterprise to detect security attacks on their servers, and not only to detect them but also to be alerted about them, so that they can take immediate action to mitigate their losses. Using ELK Stack, we can monitor various things, such as unusual server requests and suspicious traffic, and we can gather security-related log information that security teams can monitor, checking any alerts raised by the system.
This way, security teams can protect the enterprise from attackers who would otherwise go unnoticed for a long time. ELK Stack provides a way to gain insight and make the attacker's life more difficult. These logs can also be very useful for after-attack analysis; for example, for finding out the time of the attack and the method used. We can understand the activities the attacker performed, and this information gives us a way to strengthen the exploited loophole easily. In this way, ELK Stack is useful both for pre-attack prevention and for post-attack analysis and healing.
In ELK Stack, we have different tools to grab data from remote servers, including scraped web data. In a traditional Relational Database Management System (RDBMS), it is quite difficult to save this kind of data because it is not structured, so we either have to clean the data manually or discard part of it in order to fit it into the table schema. In the case of Elasticsearch, the schemaless behavior gives us the leverage to push any data from any source, and Elasticsearch not only holds that data but also provides the features to search it and play with it. An example of web scraping using ELK Stack is a Twitter to Elasticsearch connector, which allows us to set up hashtags and grab all of the tweets that use those hashtags. After grabbing those tweets, we can search, visualize, and analyze them in Kibana.
Many of the top e-commerce websites, such as eBay, use Elasticsearch for their product search pages. The main reasons are Elasticsearch's full text search abilities, its support for building filters, facets, and aggregations, its fast response times, and the ease with which it collects analytics information. Users can easily drill down to a product set from which they can select the product they want. That is just one side of the picture, through which we improve the user's experience. On the other side, we can take the same data and, using Kibana, monitor trends, analyze the data, and much more. There is fierce competition among e-commerce companies to attract more and more customers, and being able to understand the shopping behavior of their customers is very important, as it allows them to target users with products that they have liked or are likely to like. This is business intelligence, and using ELK Stack, they can achieve it.
ELK Stack's core competency is its full text search feature. It is powerful and flexible, and it provides various features such as fuzzy search, conditional searching, and natural language searching. So, as per our requirements, we can decide which type of searching is required. We can use ELK Stack's full text search capabilities for product searching, autocomplete features, searching text in emails, and so on.
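As an illustration of fuzzy searching, here is a hedged sketch of a query against a hypothetical `products` index (the `title` field is an assumption); note that the deliberately misspelled term `kibanna` can still match documents containing `kibana` thanks to fuzziness:

```
GET /products/_search
{
  "query": {
    "match": {
      "title": {
        "query": "kibanna",
        "fuzziness": "AUTO"
      }
    }
  }
}
```

With `"fuzziness": "AUTO"`, Elasticsearch allows a small edit distance between the query term and the indexed terms, which is how typo-tolerant searching and autocomplete-style experiences are commonly built.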
Kibana is an easy-to-use visualization tool that provides us with a rich feature set to create beautiful charts (such as pie charts, bar charts, and stack charts), histograms, geo maps, word tags, data tables, and so on. Visualizing data is always beneficial for any organization as it helps top management to make decisions with ease. We can also easily track any unusual trends and find any outliers in data without digging into the data. We can create dashboards for any existing web-based application as well by simply pushing the application data into Elasticsearch and then use Kibana to create beautiful dashboards. This way, we can plug in an additional dimension into the application and start monitoring it without putting any additional load on the application.
In this chapter, we covered the basics of ELK Stack and their characteristics. We explained how we can use Beats to send logs data, file data, and system metrics to Logstash or Elasticsearch and that Logstash can be configured as a pipeline to modify the data format and then send the output to Elasticsearch. Elasticsearch is a search engine built on top of Lucene. It can store data and provide functionality to do full text searching on data. Kibana can be configured to read Elasticsearch data and create visualizations and dashboards. We can embed these dashboards on existing web pages, which can then be used for decision-making.
Then, we discussed different use cases of ELK Stack. The first one we mentioned was log management, which is the primary use case of ELK Stack and which made it famous. In log management, we can capture logs from different servers/sources and dump them in a central Elasticsearch cluster after modifying it through Logstash. Kibana is used to create meaningful graphical visualization and dashboards by reading the Elasticsearch data. Finally, we discussed security monitoring and alerting, where ELK Stack can be quite helpful. Security is a very important aspect of any software, and often it is the most neglected part of development and monitoring. Using ELK Stack, we can observe any security threat.