
Option type

Scala’s option type represents optional values. A value of the Option[T] type takes one of two forms: Some(x), where x is the actual value, or None, which represents a missing value. Many methods in the Scala collection library return a value of the Option[T] type. The following are a few examples:

scala> List(1, 2, 3, 4).headOption
res45: Option[Int] = Some(1)
scala> List(1, 2, 3, 4).lastOption
res46: Option[Int] = Some(4)
scala> List("hello,", "world").find(_ == "world")
res47: Option[String] = Some(world)
scala> Map(1 -> "a", 2 -> "b").get(3)
res48: Option[String] = None

Example 1.52
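
Once you have an Option, the idiomatic ways to consume it are pattern matching and getOrElse. The following is a small sketch of our own (not one of the book's listings), reusing the Map lookup from the previous example:

val maybeValue: Option[String] = Map(1 -> "a", 2 -> "b").get(3)

// Pattern matching handles both forms explicitly
maybeValue match {
  case Some(v) => println(s"Found: $v")
  case None    => println("No value found")
}

// getOrElse supplies a default for the None case
println(maybeValue.getOrElse("No value found"))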

Option also has a rich API and provides many of the functions from the collection library API through an implicit conversion function, option2Iterable, in the companion object. The following are a few examples of methods supported by the Option type:

scala> Some("hello, world!").headOption
res49: Option[String] = Some(hello, world!)
scala> None.getOrElse("Empty")
res50: String = Empty
scala> Some("hello, world!").map(_.replace("!", ".."))
res51: Option[String] = Some(hello, world..)
scala> Some(List.tabulate(5)(_ + 1)).flatMap(_.headOption)
res52: Option[Int] = Some(1)

Example 1.53
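
Because Option supports map and flatMap, optional values also compose in a for-comprehension, which desugars to exactly those calls. A brief sketch (our own example):

val first: Option[Int] = List(1, 2, 3).headOption  // Some(1)
val last: Option[Int]  = List(1, 2, 3).lastOption  // Some(3)

// Yields Some(4); evaluates to None if either value is missing
val sum: Option[Int] = for {
  f <- first
  l <- last
} yield f + l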

Collections

Scala comes with a powerful collection library. Collections are classified into mutable and immutable collections. A mutable collection can be updated in place, whereas an immutable collection never changes. When we add, remove, or update elements of an immutable collection, a new collection is created and returned, keeping the old collection unchanged.
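
As an illustration (a minimal sketch of our own, not taken from the book's listings), compare appending to an immutable List with appending to a mutable ListBuffer:

import scala.collection.mutable.ListBuffer

// Immutable: :+ returns a new List; the original is unchanged
val original = List(1, 2, 3)
val extended = original :+ 4   // List(1, 2, 3, 4)
// original is still List(1, 2, 3)

// Mutable: += updates the collection in place
val buffer = ListBuffer(1, 2, 3)
buffer += 4                    // buffer is now ListBuffer(1, 2, 3, 4)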

All collection classes are found in the scala.collection package or one of its subpackages: mutable, immutable, and generic. However, for most of our programming needs, we refer to collections in either the mutable or immutable package.

A collection in the scala.collection.immutable package is guaranteed to be immutable and will never change after it is created. So, we will not have to make any defensive copies of an immutable collection, since accessing a collection multiple times will always yield the same set of elements.

On the other hand, collections in the scala.collection.mutable package provide methods that can update a collection in place. Since these collections are mutable, we need to defend against any inadvertent updates by other parts of the code base.
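
One common defensive technique (a sketch of our own, assuming a mutable buffer shared across the code base) is to hand out an immutable snapshot rather than the mutable collection itself:

import scala.collection.mutable.ArrayBuffer

val buffer = ArrayBuffer("a", "b", "c")
val snapshot: List[String] = buffer.toList  // immutable defensive copy

buffer += "d"  // later mutation does not affect snapshot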

By default, Scala picks immutable collections. Easy access to them is provided through the Predef object, which is implicitly imported into every Scala source file. Refer to the following example:

object Predef {
  type Set[A] = immutable.Set[A]
  type Map[A, +B] = immutable.Map[A, B]
  val Map = immutable.Map
  val Set = immutable.Set
  // ...
}

Example 1.54
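
As a consequence of these aliases, unqualified Set and Map refer to the immutable variants, and mutable collections must be requested explicitly. A quick sketch of our own:

val m = Map("x" -> 1)            // scala.collection.immutable.Map

import scala.collection.mutable
val mm = mutable.Map("x" -> 1)
mm("y") = 2                      // in-place update, only possible on the mutable Map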

The Traversable trait is the base trait for all of the collection types. It is followed by Iterable, which is divided into three subtypes: Seq, Set, and Map. Both Set and Map provide sorted and unsorted variants. Seq, in turn, is divided into IndexedSeq and LinearSeq. There is quite a bit of uniformity across all these classes. For instance, an instance of any collection can be created with the same uniform syntax: the collection class name followed by its elements:

Traversable(1, 2, 3)
Map("x" -> 24, "y" -> 25, "z" -> 26)
Set("red", "green", "blue")
SortedSet("hello", "world")
IndexedSeq(1.0, 2.0)
LinearSeq("a", "b", "c")

Example 1.55
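
Each of these factory calls returns the default implementation of the corresponding trait. For instance (a quick sketch of our own; the runtime types shown reflect the standard Scala 2 library):

val t = Traversable(1, 2, 3)   // runtime type is List(1, 2, 3)
val i = IndexedSeq(1.0, 2.0)   // runtime type is Vector(1.0, 2.0)
val s = LinearSeq("a", "b")    // runtime type is List(a, b)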

The following is the hierarchy for scala.collection.immutable collections, taken from the docs.scala-lang.org website.

Figure 1.1 – Scala collection hierarchy

The Scala collection library is very rich and has various collection types suited to specific programming needs. If you want to delve deep into the Scala collection library, please refer to the Further reading section (the fifth point).

In this section, we looked at the Scala collection hierarchy. In the next section, we will gain a high-level understanding of pattern matching.
