
You're reading from  Building Big Data Pipelines with Apache Beam

Product type: Book
Published in: Jan 2022
Reading level: Beginner
Publisher: Packt
ISBN-13: 9781800564930
Edition: 1st Edition

Author: Jan Lukavský

Jan Lukavský is a freelance big data architect and engineer who is also a committer of Apache Beam. He is a certified Apache Hadoop professional. He is working on open source big data systems combining batch and streaming data pipelines in a unified model, enabling the rise of real-time, data-driven applications.

Task 5 – Calculating performance statistics for a sport activity tracking application

Let's explore one of the most useful applications of stream processing – delivering high-accuracy, real-time insights from (possibly) high-volume data streams. As an example, we will borrow a use case familiar to almost everyone – calculating performance statistics (for example, speed and total distance) from a stream of GPS coordinates coming from a sport activity tracker!

Defining the problem

Given an input data stream of quadruples (workoutId, gpsLatitude, gpsLongitude, timestamp), calculate the current speed and the total tracked distance of the tracker. The data comes from a GPS tracker that sends data only when its user starts a sport activity. We can assume that workoutId is unique and contains a userId value in it.
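Before thinking about the streaming machinery, it helps to pin down the per-workout arithmetic itself. The following is a minimal sketch of that computation – the great-circle (haversine) distance between two consecutive GPS fixes and the speed derived from their timestamps. The class and method names here are illustrative, not from the book, and the sketch assumes fixes for a given workoutId are processed in timestamp order:

```java
public class WorkoutStats {

    // Mean Earth radius used by the haversine formula.
    private static final double EARTH_RADIUS_METERS = 6_371_000.0;

    // Great-circle distance between two (latitude, longitude) points in degrees.
    public static double haversineMeters(double lat1, double lon1,
                                         double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_METERS * Math.asin(Math.sqrt(a));
    }

    // Speed in meters per second between two consecutive fixes; timestamps in ms.
    public static double speedMetersPerSecond(double distanceMeters,
                                              long t1Millis, long t2Millis) {
        long dtMillis = t2Millis - t1Millis;
        if (dtMillis <= 0) {
            return 0.0; // skip duplicate or out-of-order timestamps
        }
        return distanceMeters / (dtMillis / 1000.0);
    }

    public static void main(String[] args) {
        // Two fixes one minute apart; the second coordinate is made up for illustration.
        double d = haversineMeters(65.5384, -19.9108, 65.5394, -19.9108);
        double v = speedMetersPerSecond(d, 1616427100000L, 1616427160000L);
        System.out.printf("distance=%.1f m, speed=%.2f m/s%n", d, v);
    }
}
```

The total tracked distance is then just the running sum of these pairwise distances per workoutId; the current speed is the same ratio computed over the most recent pair (or a short sliding window, to smooth out GPS jitter).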

Let's describe the problem more informally. Suppose we have a stream that looks as follows:

(user1:track1, 65.5384, -19.9108, 1616427100000...
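Each element of such a stream can be parsed into a simple value type. The sketch below assumes the comma-separated textual form shown above; the class and field names are hypothetical, chosen only to illustrate the quadruple and the fact that workoutId embeds the userId:

```java
// One GPS fix parsed from a line like "(user1:track1, 65.5384, -19.9108, 1616427100000)".
public class GpsFix {
    final String workoutId;    // e.g. "user1:track1" – contains the userId
    final double latitude;
    final double longitude;
    final long timestampMillis;

    GpsFix(String workoutId, double latitude, double longitude, long timestampMillis) {
        this.workoutId = workoutId;
        this.latitude = latitude;
        this.longitude = longitude;
        this.timestampMillis = timestampMillis;
    }

    // Parse "(workoutId, lat, lon, timestamp)"; parentheses are optional.
    static GpsFix parse(String line) {
        String[] parts = line.replace("(", "").replace(")", "").split("\\s*,\\s*");
        return new GpsFix(parts[0].trim(),
                          Double.parseDouble(parts[1]),
                          Double.parseDouble(parts[2]),
                          Long.parseLong(parts[3]));
    }

    // The userId is the part of workoutId before the colon (an assumption
    // consistent with the "user1:track1" form above).
    String userId() {
        return workoutId.split(":")[0];
    }
}
```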