
Apache Flume: Distributed Log Collection for Hadoop

Steve Hoffman

If your role includes moving datasets into Hadoop, this book will help you do it more efficiently using Apache Flume. From installation to customization, it’s a complete step-by-step guide on making the service work for you.
RRP $21.99 (eBook)
RRP $36.99 (Print + eBook)


Book Details

ISBN 13: 9781782167914
Paperback: 108 pages

About This Book

  • Integrate Flume with your data sources
  • Transcode your data en route in Flume
  • Route and separate your data using regular expression matching
  • Configure failover paths and load-balancing to remove single points of failure
  • Utilize gzip compression for files written to HDFS
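As a taste of what configuring these features looks like, here is a minimal sketch of a single-agent Flume properties file wiring a source to an HDFS sink with gzip compression. The agent name, directories, and HDFS path are illustrative assumptions, not values from the book.

```properties
# Hypothetical agent "a1": one source, one channel, one HDFS sink.
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Read events from files dropped into a local spooling directory
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /var/log/flume-spool
a1.sources.r1.channels = c1

# Buffer events in memory between the source and the sink
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000

# Write gzip-compressed files to HDFS, bucketed by date
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://namenode/flume/events/%Y/%m/%d
a1.sinks.k1.hdfs.codeC = gzip
a1.sinks.k1.hdfs.fileType = CompressedStream
```

The book walks through these components (sources, channels, sinks, and compression codecs) chapter by chapter.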

Who This Book Is For

Apache Flume: Distributed Log Collection for Hadoop is intended for people who are responsible for moving datasets into Hadoop in a timely and reliable manner, such as software engineers, database administrators, and data warehouse administrators.

Table of Contents

Chapter 1: Overview and Architecture
Flume 0.9
Flume 1.X (Flume-NG)
The problem with HDFS and streaming data/logs
Sources, channels, and sinks
Flume events
Chapter 2: Flume Quick Start
Downloading Flume
Flume configuration file overview
Starting up with "Hello World"
Chapter 3: Channels
Memory channel
File channel
Chapter 4: Sinks and Sink Processors
HDFS sink
Compression codecs
Event serializers
Sink groups
Chapter 5: Sources and Channel Selectors
The problem with using tail
The exec source
The spooling directory source
Syslog sources
Channel selectors
Chapter 6: Interceptors, ETL, and Routing
Tiering data flows
Chapter 7: Monitoring Flume
Monitoring the agent process
Monitoring performance metrics
Chapter 8: There Is No Spoon – The Realities of Real-time Distributed Data Collection
Transport time versus log time
Time zones are evil
Capacity planning
Considerations for multiple data centers
Compliance and data expiry

What You Will Learn

  • Understand the Flume architecture
  • Download and install open source Flume from Apache
  • Discover when to use a memory or file-backed channel
  • Understand and configure the Hadoop Distributed File System (HDFS) sink
  • Learn how to use sink groups to create redundant data flows
  • Configure and use various sources for ingesting data
  • Inspect data records and route to different or multiple destinations based on payload content
  • Transform data en route to Hadoop
  • Monitor your data flows

In Detail

Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. Its main goal is to deliver data from applications to Apache Hadoop's HDFS. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with many failover and recovery mechanisms.

Apache Flume: Distributed Log Collection for Hadoop covers the problems with HDFS and streaming data/logs, and how Flume resolves them. It explains Flume's generalized architecture, which includes moving data to and from databases and NoSQL data stores, as well as optimizing performance. The book includes real-world scenarios of Flume implementations.

Apache Flume: Distributed Log Collection for Hadoop starts with an architectural overview of Flume and then discusses each component in detail. It guides you through the complete installation process and compilation of Flume.

It gives you a heads-up on how to use channels and channel selectors. For each architectural component (sources, channels, sinks, channel processors, sink groups, and so on), the various implementations are covered in detail, along with their configuration options, so you can customize Flume to your specific needs. There are also pointers on writing custom implementations to help you learn and build your own.
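To illustrate what a channel selector looks like in practice, here is a sketch of Flume's multiplexing selector, which routes events to different channels based on an event header. The agent name, header name, and mapping values are hypothetical examples, not configuration from the book.

```properties
# Hypothetical agent "a1" fanning out to two channels by header value
a1.sources = r1
a1.channels = c1 c2
a1.sources.r1.channels = c1 c2

# Route on the "datacenter" event header
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = datacenter
a1.sources.r1.selector.mapping.NYC = c1
a1.sources.r1.selector.mapping.LA = c2
a1.sources.r1.selector.default = c1
```

Events without a matching header value fall through to the default channel, so no data is silently dropped.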

By the end, you should be able to construct a series of Flume agents to transport your streaming data and logs from your systems into Hadoop in near real time.

