Before we delve into Datasets and data wrangling, let's take a broader view of the APIs, focusing on the functions we will need. This will give us a firm foundation when we wrangle data later in this chapter. Refer to the following diagram:
The preceding diagram shows the broader hierarchy of the org.apache.spark.sql classes. Interestingly, pyspark.sql mirrors this hierarchy, except for DataFrame, which is essentially the Scala Dataset. What I like about the PySpark interface is that it is succinct and crisp, while offering the same power, performance, and functionality as Scala or Java. Scala, however, has more elaborate hierarchies and more abstractions. A useful trick for learning more about these functions is to refer to the Scala documentation, which I have found to be considerably more detailed.
Each of these classes offers a rich set of functions. The diagram shows only the most common ones we need in this chapter. You should refer to either https://spark.apache...