Hands-On Big Data Analytics with PySpark
Apache Spark is a well-established open-source parallel-processing framework, widely used for data analytics applications across clustered computers. In this book, you will not only learn how to use Spark and the Python API to create high-performance analytics with big data, but also discover techniques for testing, immunizing, and parallelizing Spark jobs.
You will learn how to source data from popular platforms and formats, including HDFS, Hive, JSON, and Amazon S3, and handle large datasets with PySpark to gain practical big data experience. The book will help you build prototypes on a local machine and then move on to handling messy data in production and at scale. It covers installing and setting up PySpark, RDD operations, cleaning and wrangling big data, and aggregating and summarizing data into useful reports. You will also learn practical, proven techniques for improving programming and administration in Apache Spark.
By the end of the book, you will be able to build big data analytics solutions using the various PySpark offerings and optimize them effectively.
Course Length: 5 hours 27 minutes
Date of Publication: 29 Mar 2019
- Using Spark transformations to defer computations to a later time
- Using the reduce and reduceByKey methods to calculate results
- Performing actions that trigger computations
- Reusing the same RDD for different actions
- Detecting a shuffle in a process
- Testing operations that cause a shuffle in Apache Spark
- Changing the design of jobs with wide dependencies
- Using keyBy() operations to reduce shuffle
- Using a custom partitioner to reduce shuffle

Minimal PySpark sketches of these techniques follow.
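First, deferred computation and the actions that trigger it: transformations such as map() and filter() only record a lineage; nothing executes until an action such as count() runs. A minimal sketch, assuming a local Spark installation (the data is invented for illustration):

```python
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

numbers = sc.parallelize(range(1, 1_000_001))

# Transformations return new RDDs immediately without computing anything;
# they only record the computation plan (the lineage) for later.
squares = numbers.map(lambda n: n * n)
evens = squares.filter(lambda n: n % 2 == 0)

# An action triggers the entire deferred pipeline in one pass.
print(evens.count())  # 500000
```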
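For reduce and reduceByKey: reduce() is an action that folds a whole RDD into a single value on the driver, while reduceByKey() is a transformation that merges values per key and returns a new pair RDD. A sketch with made-up sales data:

```python
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# reduce() is an action: it combines all elements into one value.
total = sc.parallelize([1, 2, 3, 4]).reduce(lambda a, b: a + b)
print(total)  # 10

# reduceByKey() is a transformation on (key, value) pairs: values sharing
# a key are merged, partly on the map side before any shuffle.
sales = sc.parallelize([("US", 3), ("UK", 5), ("US", 7)])
totals = sales.reduceByKey(lambda a, b: a + b)
print(sorted(totals.collect()))  # [('UK', 5), ('US', 10)]
```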
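Reusing the same RDD for different actions is where cache() pays off: without it, each action recomputes the full lineage from scratch. A sketch:

```python
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

parsed = sc.parallelize(["1", "2", "3"]).map(int).cache()

# The first action materializes and caches the partitions; the later
# actions reuse the cached data instead of re-running map(int).
print(parsed.count())  # 3
print(parsed.sum())    # 6
print(parsed.max())    # 3
```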
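To detect a shuffle, and to assert on it in a test, one pragmatic approach (our assumption, not an official API contract) is to look for a ShuffledRDD stage in the lineage reported by toDebugString():

```python
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3)])

narrow = pairs.mapValues(lambda v: v + 1)      # narrow dependency
wide = pairs.reduceByKey(lambda a, b: a + b)   # wide dependency

# toDebugString() returns bytes in PySpark; a ShuffledRDD in the lineage
# means a shuffle will happen when an action runs.
assert b"ShuffledRDD" not in narrow.toDebugString()
assert b"ShuffledRDD" in wide.toDebugString()
```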
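Changing the design of a job with wide dependencies usually means shrinking the data before it crosses the network. One classic redesign (an illustrative example, not necessarily the course's exact one) replaces groupByKey() plus a per-key sum with reduceByKey(), which pre-aggregates within each partition so far less data is shuffled:

```python
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

clicks = sc.parallelize([("page1", 1), ("page2", 1), ("page1", 1)] * 1000)

# Ships every single (key, value) record across the shuffle, then sums.
slow = clicks.groupByKey().mapValues(sum)

# Same result, but each partition emits one partial sum per key,
# so the shuffle moves only a handful of records.
fast = clicks.reduceByKey(lambda a, b: a + b)

assert sorted(slow.collect()) == sorted(fast.collect())
```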
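keyBy() builds (key, record) pairs from a plain RDD; once both sides of a join are partitioned with the same partitioner and cached, the join can reuse the existing layout rather than shuffling both inputs again. A sketch with invented user/event data:

```python
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

users = sc.parallelize([(1, "alice"), (2, "bob"), (3, "carol")])

# keyBy() derives a key from each record, yielding (key, record) pairs.
events = sc.parallelize(["1:login", "2:click", "1:logout"]) \
           .keyBy(lambda e: int(e.split(":")[0]))

# Co-partition both sides once and cache the result; a join between RDDs
# sharing a partitioner can then avoid re-shuffling its inputs.
users_p = users.partitionBy(4).cache()
events_p = events.partitionBy(4).cache()

print(users_p.join(events_p).collect())
```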
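Finally, a custom partitioner: in PySpark this is a function passed as partitionBy()'s partitionFunc argument, mapping a key to an integer (taken modulo the partition count). The region scheme below is hypothetical; the point is that reusing the same function in later key-based operations lets them skip a fresh shuffle:

```python
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# Hypothetical scheme: keep each region's keys together in one partition.
REGIONS = {"us": 0, "eu": 1, "apac": 2}

def region_partitioner(key):
    # partitionFunc must return an int; Spark applies it modulo numPartitions.
    return REGIONS.get(key.split("-")[0], 3)

orders = sc.parallelize([("us-123", 10), ("eu-456", 20), ("us-789", 30)])
partitioned = orders.partitionBy(4, partitionFunc=region_partitioner).cache()

# Passing the same numPartitions and partitionFunc lets reduceByKey see a
# matching partitioner and avoid shuffling the already-partitioned data.
totals = partitioned.reduceByKey(lambda a, b: a + b, 4, region_partitioner)
print(totals.collect())
```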