Understanding polymorphic functions

A function that can operate on arguments of multiple types, or return values of different types, is called a polymorphic function. When writing a polymorphic function, we provide a comma-separated list of type parameters in square brackets after the function name. For example, we can write a function that returns the index of the first element of a List that satisfies a given predicate:

scala> def findFirstIn[A](as: List[A], p: A => Boolean): Option[Int] =
     |   as.zipWithIndex.collect { case (e, i) if p(e) => i }.headOption
findFirstIn: [A](as: List[A], p: A => Boolean)Option[Int]
Example 1.43

Here, zipWithIndex pairs each element with its position, collect keeps the indices of the elements for which the predicate p holds, and headOption returns the first such index, if any. This function works for any type of list: List[Int], List[String], and so on. For example, we can search for the index of the element 5 in a shuffled list of the integers from 1 to 20:

scala> import scala.util.Random
import scala.util.Random
scala> val ints = Random.shuffle((1 to 20).toList)
ints: List[Int] = List(7, 9, 3, 8, 6, 13, 12, 18, 14, 15, 1, 11, 10, 16, 2, 5, 20, 17, 4, 19)
scala> findFirstIn[Int](ints, _ == 5)
res38: Option[Int] = Some(15)

Example 1.44
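
Because the type parameter A is resolved at each call site, the same definition works just as well for non-numeric lists. The following is a minimal sketch (the values and REPL result names are illustrative, not from the book) showing findFirstIn applied to a List[String]:

scala> val fruits = List("apple", "banana", "cherry")
fruits: List[String] = List(apple, banana, cherry)
scala> findFirstIn[String](fruits, _ == "banana")
res39: Option[Int] = Some(1)
scala> findFirstIn[String](fruits, _ == "mango")
res40: Option[Int] = None

When no element satisfies the predicate, headOption returns None rather than throwing an exception.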

In the next section, we will look at another property of type parameters, called variance, which defines subtyping relationships between parameterized types.
