Learning ELK Stack: Build mesmerizing visualizations, analytics, and logs from your data using Elasticsearch, Logstash, and Kibana

By Saurabh Chhajed
$48.99
Book, Nov 2015, 206 pages, 1st Edition
eBook
$39.99 $27.98
Print
$48.99
Subscription
$15.99 Monthly

What do you get with Print?

  • Instant access to your digital eBook copy whilst your print order is shipped
  • Black & white paperback book shipped to your address
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - read whenever, wherever, and however you want

Product Details


Publication date : Nov 26, 2015
Length : 206 pages
Edition : 1st Edition
Language : English
ISBN-13 : 9781785887154
Vendor : Elastic
Category :

Learning ELK Stack

Chapter 1. Introduction to ELK Stack

This chapter explains the importance of log analysis in today's data-driven world and the challenges associated with it. It introduces the ELK Stack as a complete log analysis solution and explains the role of each of the open source components of the stack, namely Elasticsearch, Logstash, and Kibana. It also briefly covers the key features of each component and describes their installation and configuration steps.

The need for log analysis


Logs provide us with necessary information on how our system is behaving. However, the content and format of logs vary across services, and even across different components of the same system. For example, a scanner may log error messages related to communication with other devices, whereas a web server logs information on all incoming requests, outgoing responses, the time taken for a response, and so on. Similarly, the application logs of an e-commerce website will capture business-specific events.

As logs vary in their content, so do their uses. For example, the logs from a scanner may be used for troubleshooting or for a simple status check or reporting, while web server logs are used to analyze traffic patterns across multiple products. Analysis of logs from an e-commerce site can help figure out whether packages from a specific location are being returned repeatedly, and the probable reasons why.

The following are some common use cases where log analysis is helpful:

  • Issue debugging

  • Performance analysis

  • Security analysis

  • Predictive analysis

  • Internet of things (IoT) and logging

Issue debugging

Debugging is one of the most common reasons to enable logging within your application. The simplest and most frequent use of a debug log is to grep for a specific error message or event occurrence. If a system administrator believes that a program crashed because of a network failure, then he or she will try to find a connection dropped message or a similar message in the server logs to analyze what caused the issue. Once the bug or issue is identified, log analysis solutions help capture application information and snapshots of that particular time, which can easily be passed across development teams for further analysis.
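
For example, a quick manual check might look like the following shell commands. This is only a rough sketch: the log path /var/log/myapp/app.log and the exact message text are hypothetical and will differ for your application.

# search for the suspected error message in the application log
grep -i "connection dropped" /var/log/myapp/app.log

# count how many errors were logged
grep -c "ERROR" /var/log/myapp/app.log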

Performance analysis

Log analysis helps optimize or debug system performance and gives essential input on bottlenecks in the system. Understanding a system's performance is often about understanding resource usage in the system. Logs can help analyze individual resource usage in the system, the behavior of multiple threads in the application, potential deadlock conditions, and so on. Logs also carry timestamp information, which is essential for analyzing how the system behaves over time. For instance, a web server log can show how individual services are performing based on response times, HTTP response codes, and so on.

Security analysis

Logs play a vital role in managing the application security for any organization. They are particularly helpful to detect security breaches, application misuse, malicious attacks, and so on. When users interact with the system, it generates log events, which can help track user behavior, identify suspicious activities, and raise alarms or security incidents for breaches.

The intrusion detection process involves session reconstruction from the logs themselves. For example, ssh login events in the system can be used to identify breaches on the machines.
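
For instance, on a typical Debian or Ubuntu server where sshd logs to /var/log/auth.log, simple searches such as the following can pull out failed and successful logins (a rough sketch; the log location and message wording vary by distribution and syslog configuration):

# list failed SSH password attempts
grep "Failed password" /var/log/auth.log

# list successful key-based logins
grep "Accepted publickey" /var/log/auth.log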

Predictive analysis

Predictive analysis is one of the hot trends of recent times. Logs and event data can be used for very accurate predictive analysis. Predictive analysis models help in identifying potential customers, resource planning, inventory management and optimization, workload efficiency, and efficient resource scheduling. They also help guide marketing strategy, user-segment targeting, ad-placement strategy, and so on.

Internet of things and logging

When it comes to IoT devices (devices or machines that interact with each other without any human intervention), it is vital that the system is monitored and managed to keep downtime to a minimum and resolve any important bugs or issues swiftly. Since these devices should be able to work with little human intervention and may exist on a large geographical scale, log data is expected to play a crucial role in understanding system behavior and reducing downtime.

Challenges in log analysis


The current log analysis process mostly involves checking logs on multiple servers, written by different components and systems across your application. This has various problems that make it a time-consuming and tedious job. Let's look at some of the common problem scenarios:

  • Non-consistent log format

  • Decentralized logs

  • Expert knowledge requirement

Non-consistent log format

Every application and device logs in its own special way, so each format needs its own expert. It is also difficult to search across logs because of these differing formats.

Let's take a look at some of the common log formats. An interesting thing to observe is how different logs represent timestamps, how they represent levels such as INFO and ERROR, and the order in which these components appear in the logs. It is difficult to figure out, just by looking at the logs, what information is present at which location. This is where tools such as Logstash help.

Tomcat logs

A typical Tomcat server startup log entry will look like this:

May 24, 2015 3:56:26 PM org.apache.catalina.startup.HostConfig deployWAR
INFO: Deployment of web application archive \soft\apache-tomcat-7.0.62\webapps\sample.war has finished in 253 ms

Apache access logs – combined log format

A typical Apache access log entry will look like this:

127.0.0.1 - - [24/May/2015:15:54:59 +0530] "GET /favicon.ico HTTP/1.1" 200 21630

IIS logs

A typical IIS log entry will look like this:

2012-05-02 17:42:15 172.24.255.255 - 172.20.255.255 80 GET /images/favicon.ico - 200 Mozilla/4.0+(compatible;MSIE+5.5;+Windows+2000+Server)

Variety of time formats

Not only log formats but also timestamp formats differ across different types of applications and different types of events generated across multiple devices. Differing time formats across the components of your system also make it difficult to correlate events occurring across multiple systems at the same time:

  • 142920788

  • Oct 12 23:21:45

  • [5/May/2015:08:09:10 +0000]

  • Tue 01-01-2009 6:00

  • 2015-05-30 T 05:45 UTC

  • Sat Jul 23 02:16:57 2014

  • 07:38, 11 December 2012 (UTC)

Decentralized logs

Logs are mostly spread across the applications, which may be across different servers and different components. The complexity of log analysis increases with multiple components logging at multiple locations. For a setup of one or two servers, finding information in the logs involves running cat or tail commands, or piping the results to grep. But what if you have 10, 20, or say, 100 servers? These kinds of searches do not scale to a huge cluster of machines and call for a centralized log management and analysis solution.
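
To make the scaling problem concrete, the manual approach for even a handful of machines quickly turns into something like the following loop (an illustration only; the hostnames and log path are made up):

for host in web01 web02 web03; do
  echo "=== $host ==="
  ssh "$host" "grep -i 'connection dropped' /var/log/myapp/app.log | tail -n 20"
done

Repeating this across tens or hundreds of servers, formats, and time zones is exactly what a centralized log management solution is meant to replace.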

Expert knowledge requirement

People interested in getting the required business-centric information out of logs generally don't have access to the logs, or may not have the technical expertise to find the appropriate information in the quickest possible way, which can make analysis slower and sometimes impossible.

The ELK Stack


The ELK platform is a complete log analytics solution, built on a combination of three open source tools: Elasticsearch, Logstash, and Kibana. It addresses all the problems and challenges that we saw in the previous section. ELK utilizes Elasticsearch for deep search and data analytics; Logstash for centralized logging management, which includes shipping and forwarding the logs from multiple servers, log enrichment, and parsing; and finally, Kibana for powerful and beautiful data visualizations. The ELK Stack is currently maintained and actively supported by Elastic (the company formerly known as Elasticsearch).

Let's look at a brief overview of each of these systems:

  • Elasticsearch

  • Logstash

  • Kibana

Elasticsearch

Elasticsearch is a distributed open source search engine based on Apache Lucene, released under the Apache 2.0 license (which means that it can be downloaded, used, and modified free of charge). It provides horizontal scalability, reliability, and multitenant capability for real-time search. Elasticsearch features are available through a JSON-based RESTful API. Its searching capabilities are backed by the schema-free Apache Lucene engine, which allows it to dynamically index data without knowing the structure beforehand. Elasticsearch is able to achieve fast search responses because it searches over an index rather than scanning the text directly.
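
As a brief illustration of this JSON-over-REST interaction, the following commands index a document and then search for it. This is a sketch only; it assumes an Elasticsearch 1.x instance running locally on port 9200 (as set up later in this chapter), and the logs index and event type names are made up for the example:

# index a document; the index and mapping are created on the fly
curl -XPUT 'http://localhost:9200/logs/event/1' -d '{
  "message": "connection dropped",
  "level": "ERROR",
  "@timestamp": "2015-05-24T15:56:26Z"
}'

# search for it with a simple query-string query
curl 'http://localhost:9200/logs/_search?q=level:ERROR&pretty'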

Elasticsearch is used by many big companies, such as GitHub, SoundCloud, FourSquare, Netflix, and many others. Some of the use cases are as follows:

  • Wikipedia: This uses Elasticsearch to provide full-text search, with functionality such as search-as-you-type and did-you-mean suggestions.

  • The Guardian: This uses Elasticsearch to process 40 million documents per day, provide real-time analytics of site-traffic across the organization, and help understand audience engagement better.

  • StumbleUpon: This uses Elasticsearch to power intelligent searches across its platform and provide great recommendations to millions of customers.

  • SoundCloud: This uses Elasticsearch to provide real-time search capabilities for millions of users across geographies.

  • GitHub: This uses Elasticsearch to index over 8 million code repositories, and index multiple events across the platform, hence providing real-time search capabilities across it.

Some of the key features of Elasticsearch are:

  • It is an open source distributed, scalable, and highly available real-time document store

  • It provides real-time search and analysis capabilities

  • It provides a sophisticated RESTful API for lookups and various features, such as multilingual search, geolocation, autocomplete, contextual did-you-mean suggestions, and result snippets

  • It can be scaled horizontally easily and provides easy integrations with cloud-based infrastructures, such as AWS and others

Logstash

Logstash is a data pipeline that helps collect, parse, and analyze a large variety of structured and unstructured data and events generated across various systems. It provides plugins to connect to various types of input sources and platforms, and is designed to efficiently process logs, events, and unstructured data sources and distribute them to a variety of outputs using its output plugins, such as file, stdout (console output on the machine running Logstash), or Elasticsearch.

It has the following key features:

  • Centralized data processing: Logstash helps build a data pipeline that can centralize data processing. With the use of a variety of plugins for input and output, it can convert a lot of different input sources to a single common format.

  • Support for custom log formats: Logs written by different applications often have formats specific to the application. Logstash helps parse and process custom formats on a large scale. It supports writing your own filters for tokenization and also provides ready-to-use filters (see the grok sketch after this list).

  • Plugin development: Custom plugins can be developed and published, and there is a large variety of custom developed plugins already available.
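
As an example of parsing a custom format, a grok filter can turn an Apache combined-format access log line into named fields. The following is a minimal sketch that relies on the COMBINEDAPACHELOG pattern shipped with Logstash; filters are covered in detail in a later chapter:

filter {
  grok {
    # parse the raw "message" field into named fields such as clientip, verb, and response
    match => [ "message", "%{COMBINEDAPACHELOG}" ]
  }
}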

Kibana

Kibana is an open source, Apache 2.0 licensed data visualization platform that helps visualize any kind of structured and unstructured data stored in Elasticsearch indexes. Kibana is written entirely in HTML and JavaScript. It uses the powerful search and indexing capabilities of Elasticsearch, exposed through its RESTful API, to display powerful graphics for end users. From basic business intelligence to real-time debugging, Kibana plays its role by presenting data through beautiful histograms, geomaps, pie charts, graphs, tables, and so on.

Kibana makes it easy to understand large volumes of data. Its simple browser-based interface enables you to quickly create and share dynamic dashboards that display changes to Elasticsearch queries in real time.

Some of the key features of Kibana are as follows:

  • It provides flexible analytics and a visualization platform for business intelligence.

  • It provides real-time analysis, summarization, charting, and debugging capabilities.

  • It provides an intuitive and user-friendly interface, which is highly customizable through drag-and-drop features and alignment as needed.

  • It allows saving the dashboard, and managing more than one dashboard. Dashboards can be easily shared and embedded within different systems.

  • It allows sharing snapshots of logs that you have already searched through, and isolates multiple problem transactions.

ELK data pipeline


In a typical ELK Stack data pipeline, logs from multiple application servers are shipped through the Logstash shipper to a centralized Logstash indexer. The Logstash indexer outputs data to an Elasticsearch cluster, which is then queried by Kibana to display great visualizations and build dashboards over the log data.

ELK Stack installation


A Java runtime is required to run ELK Stack. The latest version of Java is recommended for the installation. At the time of writing this book, the minimum requirement is Java 7. You can use the official Oracle distribution, or an open source distribution, such as OpenJDK.

You can verify the Java installation by running the following command in your shell:

> java -version
java version "1.8.0_40"
Java(TM) SE Runtime Environment (build 1.8.0_40-b26)
Java HotSpot(TM) 64-Bit Server VM (build 25.40-b25, mixed mode)

Once you have verified the Java installation on your system, you can proceed with the ELK installation.

Installing Elasticsearch

When installing Elasticsearch for production use, you can use the method described below, or the Debian or RPM packages provided on the download page.

Tip

You can download the latest version of Elasticsearch from https://www.elastic.co/downloads/elasticsearch.

curl -O https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.5.2.tar.gz

Note

If you don't have cURL, you can use the following command to install it:

sudo apt-get install curl

Then, unpack the archive on your local filesystem:

tar -zxvf elasticsearch-1.5.2.tar.gz

And then, go to the installation directory:

cd elasticsearch-1.5.2

Note

Elastic, the company behind Elasticsearch, recently launched Elasticsearch 2.0 with some new aggregations, better compression options, simplified query DSL by merging query and filter concepts, and improved performance.

More details can be found in the official documentation:

https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html.

Running Elasticsearch

In order to run Elasticsearch, execute the following command:

$ bin/elasticsearch

Add the -d flag to run it in the background as a daemon process.

We can test it by running the following command in another terminal window:

curl 'http://localhost:9200/?pretty'

This shows you an output similar to this:

{
  "status" : 200,
  "name" : "Master",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.5.2",
    "build_hash" : "c88f77ffc81301dfa9dfd81ca2232f09588bd512",
    "build_timestamp" : "2015-05-13T13:05:36Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.3"
  },
  "tagline" : "You Know, for Search"
}

We can shut down Elasticsearch through the API as follows:

curl -XPOST 'http://localhost:9200/_shutdown'

Elasticsearch configuration

Elasticsearch configuration files are under the config folder in the Elasticsearch installation directory. The config folder has two files, namely elasticsearch.yml and logging.yml. The former will be used to specify configuration properties of different Elasticsearch modules, such as network address, paths, and so on, while the latter will specify logging-related configurations.

The configuration file is in the YAML format, and the following sections show some of the parameters that can be configured.

Network Address

To specify the address where all network-based modules will bind and publish to:

network :
    host : 127.0.0.1

Paths

To specify paths for data and log files:

path:
  logs: /var/log/elasticsearch
  data: /var/data/elasticsearch

The cluster name

To give a name to a production cluster, which is used by nodes to discover and automatically join the cluster:

cluster:
  name: <NAME OF YOUR CLUSTER>

The node name

To change the default name of each node:

node:
  name: <NAME OF YOUR NODE>
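
Putting the preceding settings together, a minimal elasticsearch.yml might look like the following sketch (the cluster name, node name, and paths are placeholders that you should adapt to your environment):

cluster:
  name: logging-cluster
node:
  name: node-1
network:
  host: 127.0.0.1
path:
  logs: /var/log/elasticsearch
  data: /var/data/elasticsearch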

Elasticsearch plugins

Elasticsearch has a variety of plugins that ease the task of managing indexes, the cluster, and so on. Some of the most commonly used ones are the Kopf plugin, Marvel, Sense, Shield, and so on, which will be covered in the subsequent chapters. Let's take a look at the Kopf plugin here.

Kopf is a simple web administration tool for Elasticsearch that is written in JavaScript, AngularJS, jQuery, and Twitter Bootstrap. It offers an easy way of performing common tasks on an Elasticsearch cluster. Not every single API is covered by this plugin, but it does offer a REST client, which allows you to explore the full potential of the Elasticsearch API.

In order to install the elasticsearch-kopf plugin, execute the following command from the Elasticsearch installation directory:

bin/plugin -install lmenezes/elasticsearch-kopf

Now, go to this address to see the interface: http://localhost:9200/_plugin/kopf/.

You will see a page similar to the following, which shows the Elasticsearch nodes, shards, the number of documents, and their size, and also lets you query the indexed documents.

Elasticsearch Kopf UI

Installing Logstash

First, download the latest Logstash TAR file from the download page.

Tip

Check for the latest Logstash release version at https://www.elastic.co/downloads/logstash.

curl -O http://download.elastic.co/logstash/logstash/logstash-1.5.0.tar.gz

Then, unpack the archive on your local filesystem:

tar -zxvf logstash-1.5.0.tar.gz

Now, you can run Logstash with a basic configuration.

Running Logstash

Run Logstash using the -e flag, followed by a configuration of standard input and output:

cd logstash-1.5.0
bin/logstash -e 'input { stdin { } } output { stdout {} }'

Now, when we type something in the command prompt, we will see its output in Logstash as follows:

hello logstash
2015-05-15T03:34:30.111Z 0.0.0.0 hello logstash

Here, we are running Logstash with the stdin input and the stdout output as this configuration prints whatever you type in a structured format as the output. The -e flag allows you to quickly test the configuration from the command line.

Now, let's try the codec setting on the output for pretty-formatted output. Exit the running Logstash by pressing Ctrl + C, and then restart Logstash with the following command:

bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'

Now, enter some more test input:

Hello PacktPub

{
  "message" => " Hello PacktPub",
  "@timestamp" => "2015-05-20T23:48:05.335Z",
  "@version" => "1",
  "host" => "packtpub"
}

The output that you see is the most common output that we generally see from Logstash:

  • "message" includes the complete input message or the event line

  • "@timestamp" will include the timestamp of when the event was indexed; if the date filter is used, this value can instead be derived from one of the fields in the message so that it reflects the time of the event itself (see the sketch after this list)

  • "host" will generally represent the machine where this event was generated

Logstash with file input

Logstash can be easily configured to read from a log file as input.

For example, to read Apache logs from a file and output to a standard output console, the following configuration will be helpful:

input {
  file {
    type => "apache"
    path => "/user/packtpub/intro-to-elk/elk.log"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
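
To try this out, you can save the preceding configuration to a file (say apache-file.conf, a name chosen here just for illustration) and run Logstash with it using the -f flag, which is described in the Configuring Logstash section later in this chapter:

bin/logstash -f apache-file.conf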

Logstash with Elasticsearch output

Logstash can be configured to output all inputs to an Elasticsearch instance. This is the most common scenario in an ELK platform:

bin/logstash -e 'input { stdin { } } output { elasticsearch { host => "localhost" } }'

Then, type something as a test input, for example: you know, for logs

You will be able to see the indexed events in Elasticsearch at http://localhost:9200/_search.
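
For example, since the elasticsearch output writes to daily logstash-YYYY.MM.DD indices by default, a quick check from the command line could look like this (a sketch; it assumes Elasticsearch is still running locally on port 9200):

curl 'http://localhost:9200/logstash-*/_search?q=logs&pretty'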

Configuring Logstash

Logstash configuration files use a simple, JSON-like format of their own. A Logstash config file has a separate section for each type of plugin that you want to add to the event processing pipeline. For example:

# This is a comment. You should use comments to describe
# parts of your configuration.
input {
  ...
}

filter {
  ...
}

output {
  ...
}

Each section contains the configuration options for one or more plugins. If you specify multiple filters, they are applied in the order of their appearance in the configuration file.
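
To make the ordering concrete, here is a small sketch with two stages: a grok filter that parses the raw line first, followed by a conditional drop that relies on a field the grok stage created (the field name comes from the COMBINEDAPACHELOG pattern, and the /ping health-check path is just an example):

filter {
  # 1. parse the raw line into structured fields first
  grok {
    match => [ "message", "%{COMBINEDAPACHELOG}" ]
  }
  # 2. then discard noisy health-check requests, using the "request" field grok just created
  if [request] == "/ping" {
    drop { }
  }
}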

When you run Logstash, you use the -f flag to read configurations from a configuration file, or even from a folder containing multiple configuration files for each type of plugin: input, filter, and output:

bin/logstash -f ../conf/logstash.conf

Note

If you want to test your configurations for syntax errors before running them, you can simply check with the following command:

bin/logstash --configtest ../conf/logstash.conf

This command just checks the configuration without running Logstash.

Logstash runs on the JVM and can consume a hefty amount of resources, including significant memory at times. This can be a great challenge when you want to send logs from a small machine without harming application performance.

In order to save resources, you can use the Logstash forwarder (previously known as Lumberjack). The forwarder uses the Lumberjack protocol, enabling you to securely ship compressed logs, thus reducing resource consumption and bandwidth. Its sole input is files, while its output can be directed to multiple destinations.

Other options exist as well for sending logs. You can use rsyslog on Linux machines, and there are other agents for Windows machines, such as nxlog and syslog-ng. There is another lightweight tool to ship logs, called Log-Courier (https://github.com/driskell/log-courier), which is an enhanced fork of the Logstash forwarder with some improvements.

Installing Logstash forwarder

Download the latest Logstash forwarder release from the download page.

Tip

Check for the latest Logstash forwarder release version at https://www.elastic.co/downloads/logstash.

Prepare a configuration file that contains input plugin details and ssl certificate details to establish a secure communication between your forwarder and indexer servers, and run it using the following command:

logstash-forwarder -config logstash-forwarder.conf
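
The forwarder's configuration file is itself a small JSON document. The following is a rough sketch; the server address, certificate path, and log paths are placeholders, and the exact keys should be checked against the forwarder documentation for the version you download:

{
  "network": {
    "servers": [ "indexer.example.com:12345" ],
    "ssl ca": "path/to/ssl.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/myapp/*.log" ],
      "fields": { "type": "log type" }
    }
  ]
}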

And in Logstash, we can use the Lumberjack plugin to get data from the forwarder:

input {
  lumberjack {
    # The port to listen on
    port => 12345

    # The paths to your ssl cert and key
    ssl_certificate => "path/to/ssl.crt"
    ssl_key => "path/to/ssl.key"

    # Set the type of log.
    type => "log type"
  }
}

Logstash plugins

Some of the most popular Logstash plugins are:

  • Input plugin

  • Filters plugin

  • Output plugin

Input plugin

Some of the most popular Logstash input plugins are:

  • file: This streams log events from a file

  • redis: This streams events from a redis instance

  • stdin: This streams events from standard input

  • syslog: This streams syslog messages over the network

  • ganglia: This streams ganglia packets over the network via udp

  • lumberjack: This receives events using the lumberjack protocol

  • eventlog: This receives events from Windows event log

  • s3: This streams events from a file from an s3 bucket

  • elasticsearch: This reads from the Elasticsearch cluster based on results of a search query

Filters plugin

Some of the most popular Logstash filter plugins are as follows:

  • date: This is used to parse date fields from incoming events and use them as the Logstash timestamp field, which can later be used for analytics

  • drop: This drops everything from incoming events that matches the filter condition

  • grok: This is the most powerful filter to parse unstructured data from logs or events to a structured format

  • multiline: This helps parse multiple lines from a single source as one Logstash event

  • dns: This filter will resolve an IP address from any fields specified

  • mutate: This helps rename, remove, modify, and replace fields in events

  • geoip: This adds geographic information based on IP addresses that are retrieved from Maxmind database

Output plugin

Some of the most popular Logstash output plugins are as follows:

  • file: This writes events to a file on disk

  • e-mail: This sends an e-mail based on some conditions whenever it receives an output

  • elasticsearch: This stores output to the Elasticsearch cluster, the most common and recommended output for Logstash

  • stdout: This writes events to standard output

  • redis: This writes events to redis queue and is used as a broker for many ELK implementations

  • mongodb: This writes output to mongodb

  • kafka: This writes events to Kafka topic

Installing Kibana

Before we can install and run Kibana, it has certain prerequisites:

  • Elasticsearch should be installed, and its HTTP service should be running on port 9200 (default).

  • Kibana must be configured to use the host and port on which Elasticsearch is running (check out the following Configuring Kibana section).

Download the latest Kibana release from the download page.

Tip

Check for the latest Kibana release version at https://www.elastic.co/downloads/kibana.

curl -O https://download.elastic.co/kibana/kibana/kibana-4.0.2-linux-x64.tar.gz

Then, unpack kibana-4.0.2-linux-x64.tar.gz on your local file system and create a soft link to use a short name.

tar -zxvf kibana-4.0.2-linux-x64.tar.gz

ln -s kibana-4.0.2-linux-x64 kibana

Then, you can explore the kibana folder:

cd kibana

Configuring Kibana

The Kibana configuration file is present in the config folder inside the kibana installation:

config/kibana.yml

The following are some of the important configurations for Kibana.

The port on which the Kibana server runs:

port: 5601

The host to which the Kibana server binds:

host: "localhost"

The elasticsearch_url setting points to the Elasticsearch instance to use, which is localhost by default:

elasticsearch_url: http://localhost:9200

Running Kibana

Start Kibana manually by issuing the following command:

bin/kibana

You can verify the running Kibana instance on port 5601 by placing the following URL in the browser:

http://localhost:5601

This should fire up the Kibana UI for you.

Kibana UI

Note

We need to specify the index name or pattern that will be used to show the data indexed in Elasticsearch. By default, Kibana assumes the index pattern logstash-*, since it assumes that data is being fed to Elasticsearch through Logstash. If you have changed the index name in the Logstash output plugin configuration, then you need to change this setting accordingly.

Kibana 3 versus Kibana 4

Kibana 4 is a major upgrade over Kibana 3. Kibana 4 offers some advanced tools that provide more flexibility in visualization and help us use some of the advanced features of Elasticsearch. Kibana 3 had to be installed on a separate web server, whereas Kibana 4 is released as a standalone application. Some of the new features in Kibana 4 as compared to Kibana 3 are as follows:

  • Search results highlighting

  • Shipping with its own web server and using Node.js on the backend

  • Advanced aggregation-based analytics features, for example, unique counts, non-date histograms, ranges, and percentiles

Kibana interface

As you saw in the preceding screenshot of the Kibana UI, the Kibana interface consists of four main components—Discover, Visualize, Dashboard, and Settings.

Discover

The Discover page helps you interactively explore the data matching the selected index pattern. This page allows you to submit search queries, filter the search results, and view document data. It also gives the count of matching results and statistics related to a field. If the timestamp field is configured in the indexed data, it will also display, by default, a histogram showing the distribution of documents over time.

Kibana Discover Page

Visualize

The Visualize page is used to create new visualizations based on different data sources—a new interactive search, a saved search, or an existing saved visualization. Kibana 4 allows you to create the following visualizations in a new visualization wizard:

  • Area chart

  • Data table

  • Line chart

  • Markdown widget

  • Metric

  • Pie chart

  • Tile map

  • Vertical bar chart

These visualizations can be saved, used individually, or can be used in dashboards.

Kibana Visualize Page

Dashboard

A dashboard is a collection of saved visualizations organized in different groups. These visualizations can be arranged freely using drag and drop, and can be ordered as per the importance of the data. Dashboards can be easily saved, shared, and loaded at a later point in time.

Settings

The Settings page helps configure the Elasticsearch indexes that we want to explore and configures various index patterns. This page also shows the various indexed fields in an index pattern and their data types. It also helps us create scripted fields, which are computed on the fly from the data.

Summary


In this chapter, we gained a basic understanding of the ELK Stack, looked at why we need log analysis, and saw why the ELK Stack specifically addresses that need. We also set up Elasticsearch, Logstash, and Kibana.

In the next chapter, we will look at how we can use our ELK stack installation to quickly build a data pipeline for analysis.

What you will learn

  • Install, configure, and run Elasticsearch, Logstash, and Kibana
  • Understand the need for log analytics and the current challenges in log analysis
  • Build your own data pipeline using the ELK stack
  • Familiarize yourself with the key features of Logstash and the variety of input, filter, and output plugins it provides
  • Build your own custom Logstash plugin
  • Create actionable insights using charts, histograms, and quick search features in Kibana 4
  • Understand the role of Elasticsearch in the ELK stack

Table of Contents

17 Chapters
Learning ELK Stack
Credits
About the Author
About the Reviewers
www.PacktPub.com
Preface
Introduction to ELK Stack
Building Your First Data Pipeline with ELK
Collect, Parse and Transform Data with Logstash
Creating Custom Logstash Plugins
Why Do We Need Elasticsearch in ELK?
Finding Insights with Kibana
Kibana – Visualization and Dashboard
Putting It All Together
ELK Stack in Production
Expanding Horizons with ELK
Index

FAQs

What is the delivery time and cost of print book?

Shipping Details

USA:

Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K time would start printing from the next business day. So the estimated delivery times start from the next day as well. Orders received after 5 PM U.K time (in our internal systems) on a business day or anytime on the weekend will begin printing the second to next business day. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is custom duty/charge?

Customs duties are charges levied on goods when they cross international borders. It is a tax imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to the countries listed under EU27 will not bear customs charges. These are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea:

For shipments to countries outside the EU27, a customs duty or localized taxes may be applicable and will be charged by the recipient country. These duties must be paid by the customer and are not included in the shipping charges on the order.

How do I know my custom duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin and several other factors like the total invoice amount or dimensions like weight, and other such criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19% ($9.50) to the courier service in order to receive your package.
  • Whereas if you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18% (€3.96) to the courier service in order to receive your package.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact customercare@packt.com with your order details or payment transaction id. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on the way to you then when you receive it, you can contact us at customercare@packt.com using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except for the cases described in our Return Policy (that is, where Packt Publishing agrees to replace your printed book because it arrives damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work or is unacceptably late, please contact Customer Relations Team on customercare@packt.com with the order number and issue details as explained below:

  1. If you ordered (eBook, Video or Print Book) incorrectly or accidentally, please contact Customer Relations Team on customercare@packt.com within one hour of placing the order and we will replace/refund you the item cost.
  2. Sadly, if your eBook or Video file is faulty or a fault occurs during the eBook or Video being made available to you, i.e. during download then you should contact Customer Relations Team within 14 days of purchase on customercare@packt.com who will be able to resolve this issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund of one book from a multiple order, then we will refund you the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged, with book material defect, contact our Customer Relation Team on customercare@packt.com within 14 days of receipt of the book with appropriate evidence of damage and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner which is on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Video and subscriptions that they buy. GST is charged to Indian customers for eBooks and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal