You're reading from Building Big Data Pipelines with Apache Beam

Product type: Book
Published in: Jan 2022
Reading level: Beginner
Publisher: Packt
ISBN-13: 9781800564930
Edition: 1st
Author: Jan Lukavský

Jan Lukavský is a freelance big data architect and engineer who is also a committer of Apache Beam. He is a certified Apache Hadoop professional. He is working on open source big data systems combining batch and streaming data pipelines in a unified model, enabling the rise of real-time, data-driven applications.

Task 19 – Implementing RPCParDo in the Python SDK

This task is a reimplementation of Task 8 from Chapter 3, Implementing Pipelines using Stateful Processing. We will use a stateful DoFn to batch elements for at most a defined maximum amount of time. As always, we will restate the problem for clarity.

Problem definition

Use a given RPC service to augment data in the input stream using batched RPCs, with each batch containing about K elements. Also, resolve a batch after at most time T, to avoid a possibly unbounded wait for elements sitting in small batches.
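Before turning to Beam, the flush policy itself (flush when a batch reaches K elements or after T seconds, whichever comes first) can be illustrated with a plain Python buffer. This is a hypothetical helper, not code from the book, and note one deliberate weakness: it only checks the deadline when a new element arrives, which is precisely the gap that Beam's timers close.

```python
import time


class BatchBuffer:
    """Accumulates elements; flushes when max_size elements or
    max_wait_secs seconds (measured from the first element) is reached."""

    def __init__(self, max_size, max_wait_secs, flush_fn):
        self.max_size = max_size
        self.max_wait = max_wait_secs
        self.flush_fn = flush_fn  # stands in for the batched RPC call
        self.buffer = []
        self.first_arrival = None

    def add(self, element, now=None):
        now = time.time() if now is None else now
        if not self.buffer:
            # First element of a new batch starts the "timer".
            self.first_arrival = now
        self.buffer.append(element)
        if (len(self.buffer) >= self.max_size
                or now - self.first_arrival >= self.max_wait):
            self.flush()

    def flush(self):
        if self.buffer:
            self.flush_fn(self.buffer)
            self.buffer = []
            self.first_arrival = None
```

Because the deadline is only evaluated inside `add`, a lone element in a quiet stream would wait forever; a real streaming implementation needs the runner to call back when T elapses, which is what a processing-time timer provides.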

As in the previous task, we will skip the problem's decomposition, since we covered it when implementing Task 8. Instead, we will jump directly into the implementation using the Python SDK.

Solution implementation

The implementation can be found in chapter6/src/main/python/rpc_par_do.py. It can be broken down into the RPCParDoStateful transform, which is declared...
