A typical configuration might look something like the following:
To use the Avro Source, you specify the type property with a value of avro. You need to provide a bind address and port number to listen on:
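A sketch of what such a collector configuration might contain is shown below. The agent name, collector, and the port, 42424, come from the example discussed next; the bind address and the HDFS path are placeholders I've assumed for illustration:

```properties
# Hypothetical collector agent: Avro Source -> memory channel -> HDFS Sink
collector.sources = av1
collector.channels = m1
collector.sinks = k1

# Avro Source listening on all interfaces at port 42424
collector.sources.av1.type = avro
collector.sources.av1.bind = 0.0.0.0
collector.sources.av1.port = 42424
collector.sources.av1.channels = m1

# Memory channel (used here for brevity, not durability)
collector.channels.m1.type = memory

# HDFS Sink; the path is a placeholder
collector.sinks.k1.type = hdfs
collector.sinks.k1.channel = m1
collector.sinks.k1.hdfs.path = /flume/events
```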
Here we have configured the agent on the right that listens on port 42424, uses a memory channel, and writes to HDFS. Here I've used the memory channel for brevity of this example configuration. Also, note that I've given this agent a different name, collector, just to avoid confusion.
The agents on the left—feeding the collector tier—might have a configuration similar to this. I have left the sources off this configuration for brevity:
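A sketch of what one of those client configurations might look like, assuming a memory channel on the client side as well (the hostname and port match the collector example; the source definitions are omitted, as noted):

```properties
# Hypothetical client agent: memory channel -> Avro Sink to the collector tier
client.channels = m1
client.channels.m1.type = memory

client.sinks = k1
client.sinks.k1.type = avro
client.sinks.k1.channel = m1
client.sinks.k1.hostname = collector.example.com
client.sinks.k1.port = 42424
```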
The hostname value, collector.example.com, has nothing to do with the agent name on that machine; it is the hostname (or IP address) of the target machine running the receiving Avro Source. This configuration, named client, would be applied to both agents on the left, assuming both had similar source configurations.
Since I don't like single points of failure, I would configure two collector agents with the preceding configuration and instead set each client agent to round-robin between the two using a sink group. Again, I've left off the sources for brevity:
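A sketch of such a load-balancing client configuration is shown below. The two collector hostnames (collector1.example.com and collector2.example.com) are hypothetical; the sink group uses Flume's load_balance processor with a round_robin selector:

```properties
# Hypothetical client agent with two Avro Sinks balanced by a sink group
client.channels = m1
client.channels.m1.type = memory

client.sinks = k1 k2
client.sinks.k1.type = avro
client.sinks.k1.channel = m1
client.sinks.k1.hostname = collector1.example.com
client.sinks.k1.port = 42424
client.sinks.k2.type = avro
client.sinks.k2.channel = m1
client.sinks.k2.hostname = collector2.example.com
client.sinks.k2.port = 42424

# Sink group that round-robins events across the two collectors
client.sinkgroups = g1
client.sinkgroups.g1.sinks = k1 k2
client.sinkgroups.g1.processor.type = load_balance
client.sinkgroups.g1.processor.selector = round_robin
```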
In this article, we covered tiering data flows using the Avro Source and Sink. More information on this topic can be found in the book Apache Flume: Distributed Log Collection for Hadoop.