Learning PySpark [Video]
Free Chapter: A Brief Primer on PySpark
Resilient Distributed Datasets
- Brief Introduction to RDDs
- Creating RDDs
- Schema of an RDD
- Understanding Lazy Execution
- Introducing Transformations – .map(…)
- Introducing Transformations – .filter(…)
- Introducing Transformations – .flatMap(…)
- Introducing Transformations – .distinct(…)
- Introducing Transformations – .sample(…)
- Introducing Transformations – .join(…)
- Introducing Transformations – .repartition(…)
Resilient Distributed Datasets and Actions
- Introducing Actions – .take(…)
- Introducing Actions – .collect(…)
- Introducing Actions – .reduce(…) and .reduceByKey(…)
- Introducing Actions – .count()
- Introducing Actions – .foreach(…)
- Introducing Actions – .aggregate(…) and .aggregateByKey(…)
- Introducing Actions – .coalesce(…)
- Introducing Actions – .combineByKey(…)
- Introducing Actions – .histogram(…)
- Introducing Actions – .sortBy(…)
- Introducing Actions – Saving Data
- Introducing Actions – Descriptive Statistics
DataFrames and Transformations
Data Processing with Spark DataFrames
Apache Spark is an open-source distributed engine for querying and processing data. In this tutorial, we provide a brief overview of Spark and its stack. This tutorial presents effective, time-saving techniques on how to leverage the power of Python and put it to use in the Spark ecosystem. You will start by getting a firm understanding of the Apache Spark architecture and how to set up a Python environment for Spark.
You'll learn about different techniques for collecting data, and distinguish between (and understand) techniques for processing data. Next, we provide an in-depth review of RDDs and contrast them with DataFrames. We provide examples of how to read data from files and from HDFS and how to specify schemas using reflection or programmatically (in the case of DataFrames). The concept of lazy execution is described and we outline various transformations and actions specific to RDDs and DataFrames.
Finally, we show you how to use SQL to interact with DataFrames. By the end of this tutorial, you will know how to process data with Spark DataFrames and will have mastered techniques for collecting and processing data in a distributed environment.
Style and Approach
Filled with hands-on examples, this course will help you understand RDDs and how to work with them: you will learn about RDD actions, Spark DataFrame transformations, and how to perform big data processing with Spark DataFrames.
- Publication date:
- February 2018
- Publisher
- Packt
- Duration
- 2 hours 29 minutes
- ISBN
- 9781788396592