Write a Storm topology to persist data into HDFS
In this section, we are going to cover how to write an HDFS bolt that persists data into HDFS. We will focus on the following points:
- Consuming data from Kafka
- Writing the logic to store the data in HDFS
- Rotating files in HDFS after a predefined time or size (a brief sketch of the rotation options follows this list)
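For the last point, the storm-hdfs module provides both size-based and time-based rotation policies. The following is a minimal sketch of the two options; the 5 MB and 10-minute thresholds are placeholder values, not necessarily the ones used later in this chapter:

```java
import org.apache.storm.hdfs.bolt.rotation.FileRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy.Units;
import org.apache.storm.hdfs.bolt.rotation.TimedRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.TimedRotationPolicy.TimeUnit;

public class RotationPolicies {
    // Rotate the current HDFS file once it reaches 5 MB of data.
    static final FileRotationPolicy SIZE_BASED = new FileSizeRotationPolicy(5.0f, Units.MB);

    // Alternatively, rotate the file every 10 minutes, regardless of its size.
    static final FileRotationPolicy TIME_BASED = new TimedRotationPolicy(10.0f, TimeUnit.MINUTES);
}
```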
Perform the following steps to create the topology that stores the data in HDFS:
- Create a new Maven project with groupId `com.stormadvance` and artifactId `storm-hadoop`.
- Add the following dependencies to the `pom.xml` file. We are adding the Kafka Maven dependency in `pom.xml` to support the Kafka consumer. Please refer to the previous chapter for producing data into Kafka, as here we are going to consume data from Kafka and store it in HDFS:
```xml
<dependency>
  <groupId>org.codehaus.jackson</groupId>
  <artifactId>jackson-mapper-asl</artifactId>
  <version>1.9.13</version>
</dependency>
...
```
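The remaining dependencies are elided above. Assuming `storm-core`, `storm-kafka`, and `storm-hdfs` are on the classpath, the topology wiring typically looks roughly like the following sketch. The topic name, ZooKeeper address, and HDFS URL are placeholders, and the `HdfsBolt` configuration shown here illustrates the storm-hdfs API rather than the exact code developed in this chapter:

```java
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.hdfs.bolt.HdfsBolt;
import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
import org.apache.storm.hdfs.bolt.format.FileNameFormat;
import org.apache.storm.hdfs.bolt.format.RecordFormat;
import org.apache.storm.hdfs.bolt.rotation.FileRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy.Units;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;
import org.apache.storm.hdfs.bolt.sync.SyncPolicy;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;

public class StormHDFSTopology {
    public static void main(String[] args) throws Exception {
        // Kafka spout: consume the messages produced in the previous chapter.
        ZkHosts hosts = new ZkHosts("localhost:2181");            // placeholder ZooKeeper address
        SpoutConfig spoutConfig = new SpoutConfig(hosts, "dataTopic", "/dataTopic", "hdfsTopology");
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
        KafkaSpout kafkaSpout = new KafkaSpout(spoutConfig);

        // HDFS bolt: append each tuple as a comma-delimited line.
        RecordFormat format = new DelimitedRecordFormat().withFieldDelimiter(",");

        // Flush writes to HDFS after every 1,000 tuples.
        SyncPolicy syncPolicy = new CountSyncPolicy(1000);

        // Rotate the output file once it reaches 5 MB (a TimedRotationPolicy plugs in the same way).
        FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(5.0f, Units.MB);

        // Write files under the /storm/ directory in HDFS.
        FileNameFormat fileNameFormat = new DefaultFileNameFormat().withPath("/storm/");

        HdfsBolt hdfsBolt = new HdfsBolt()
                .withFsUrl("hdfs://localhost:8020")               // placeholder NameNode URL
                .withFileNameFormat(fileNameFormat)
                .withRecordFormat(format)
                .withRotationPolicy(rotationPolicy)
                .withSyncPolicy(syncPolicy);

        // Wire the spout and bolt together and run the topology locally.
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafkaSpout", kafkaSpout, 1);
        builder.setBolt("hdfsBolt", hdfsBolt, 1).shuffleGrouping("kafkaSpout");

        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("stormHDFSTopology", new Config(), builder.createTopology());
    }
}
```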