Apache Flume: Distributed Log Collection for Hadoop


Product type: Book
Published: July 2013
Publisher: Packt
ISBN-13: 9781782167914
Pages: 108
Edition: 1st

Summary


In this chapter, we covered downloading the Flume binary distribution and creating a simple configuration file that wired one source to one channel feeding one sink. The source listened on a socket for network clients to connect and send event data. Those events were written to an in-memory channel and then fed to a log4j sink for output. We then connected to our listening agent using the Linux netcat utility and sent some string events into the Flume agent's source. Finally, we verified that the log4j-based sink wrote the events out.
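The single-source, single-channel, single-sink pipeline described above can be sketched as a Flume properties file along these lines (the agent name `agent` and port `12345` are illustrative choices, not values from this chapter):

```properties
# One netcat source -> one memory channel -> one logger (log4j) sink
agent.sources = s1
agent.channels = c1
agent.sinks = k1

# Source: listen on a TCP socket for newline-delimited events
agent.sources.s1.type = netcat
agent.sources.s1.bind = 0.0.0.0
agent.sources.s1.port = 12345
agent.sources.s1.channels = c1

# Channel: hold events in memory between source and sink
agent.channels.c1.type = memory

# Sink: write events to the agent's log4j output
agent.sinks.k1.type = logger
agent.sinks.k1.channel = c1
```

With an agent running against a file like this, sending a test event is as simple as `nc localhost 12345`, typing a line of text, and watching it appear in the agent's log output.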

In the next chapter we'll take a detailed look at the two major channel types you'll likely use in your data processing workflows:

  • Memory channel

  • File channel

For each type, we'll discuss all the configuration knobs available to you, when and why you might want to deviate from the defaults, and, most importantly, when to use one over the other.
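As a preview of the trade-off, the two channel types are selected purely through configuration. A minimal sketch of each (property values here are illustrative defaults, and the directory paths are assumptions, not values from this chapter):

```properties
# Option 1: memory channel -- fast, but events are lost if the agent dies
agent.channels.c1.type = memory
agent.channels.c1.capacity = 10000

# Option 2: file channel -- durable, events survive an agent restart
agent.channels.c1.type = file
agent.channels.c1.checkpointDir = /var/flume/checkpoint
agent.channels.c1.dataDirs = /var/flume/data
```

The next chapter explores these and the rest of each channel's settings in depth.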
