You're reading from Hands-On Mathematics for Deep Learning

Product type: Book
Published in: Jun 2020
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781838647292
Edition: 1st Edition

Author: Jay Dawani

Jay Dawani is a former professional swimmer turned mathematician and computer scientist. He is also a Forbes 30 Under 30 Fellow. At present, he is the Director of Artificial Intelligence at Geometric Energy Corporation (NATO CAGE) and the CEO of Lemurian Labs - a startup he founded that is developing the next generation of autonomy, intelligent process automation, and driver intelligence. Previously he has also been the technology and R&D advisor to Spacebit Capital. He has spent the last three years researching at the frontiers of AI with a focus on reinforcement learning, open-ended learning, deep learning, quantum machine learning, human-machine interaction, multi-agent and complex systems, and artificial general intelligence.
Linear maps

A linear map is a function T: V → W, where V and W are both vector spaces. It must satisfy the following criteria:

  • T(u + v) = T(u) + T(v), for all u, v ∈ V
  • T(cv) = cT(v), for all v ∈ V and all scalars c

Linear maps preserve the vector space operations of addition and scalar multiplication. A linear map is called a homomorphism of vector spaces; if the homomorphism is invertible (and the inverse is also a homomorphism), then we call the mapping an isomorphism.
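As an illustrative sketch (the specific map is our own choice, not one from the text), we can check the two linearity criteria numerically for a concrete map on ℝ², say T(x) = (2x₁ + x₂, x₁ − 3x₂):

```python
# Check the two linearity criteria for a sample map T: R^2 -> R^2,
# T(x) = (2*x1 + x2, x1 - 3*x2). The map and test vectors are illustrative.

def T(x):
    return (2 * x[0] + x[1], x[0] - 3 * x[1])

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(c, v):
    return (c * v[0], c * v[1])

u, v, c = (1.0, 2.0), (-3.0, 0.5), 4.0

# Additivity: T(u + v) == T(u) + T(v)
assert T(add(u, v)) == add(T(u), T(v))

# Homogeneity: T(c*v) == c*T(v)
assert T(scale(c, v)) == scale(c, T(v))
print("T satisfies both linearity criteria for these samples")
```

Checking a finite sample of vectors does not prove linearity in general, but it is a useful sanity check when implementing a map you believe to be linear.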

When V and W are isomorphic, we denote this as V ≅ W, and they both have the same algebraic structure.

If V and W are n-dimensional vector spaces over ℝ, and v₁, …, vₙ and w₁, …, wₙ are bases of V and W, then the map defined by sending one basis to the other is called a natural isomorphism. We write this as follows:

T(a₁v₁ + … + aₙvₙ) = a₁w₁ + … + aₙwₙ

Here, v₁, …, vₙ and w₁, …, wₙ are the bases of V and W. Using the preceding equation, we can see that T(vᵢ) = wᵢ, which tells us that T is an isomorphism.

Let's take the same vector spaces V and W as before, with bases v₁, …, vₙ and w₁, …, wₘ respectively. We know that T: V → W is a linear map, and the matrix A that has entries Aij, where i = 1, …, m and j = 1, …, n, can be defined as follows:

Tvⱼ = A₁ⱼw₁ + … + Aₘⱼwₘ.

From our knowledge of matrices, we should know that the jth column of A contains Tvⱼ expressed in the basis of W.

Thus, every matrix A ∈ ℝ^(m×n) produces a linear map T: ℝⁿ → ℝᵐ, which we write as T(x) = Ax.
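A small sketch (the matrix values are illustrative) makes the column fact above concrete: applying the map to the jth standard basis vector returns exactly the jth column of A.

```python
# Multiply a matrix (stored as a list of rows) by a vector, and verify
# that T applied to the j-th standard basis vector is the j-th column of A.
# The example matrix is an illustrative choice.

def matvec(A, x):
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[2, 0, 1],
     [1, 3, -1]]          # a 2x3 matrix: a map from R^3 to R^2

for j in range(3):
    e_j = [1 if k == j else 0 for k in range(3)]   # j-th standard basis vector
    column_j = [row[j] for row in A]
    assert matvec(A, e_j) == column_j
print("each column of A is T applied to a standard basis vector")
```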

Image and kernel

When dealing with linear mappings, we will often encounter two important terms: the image and the kernel, both of which are vector subspaces with rather important properties.

The kernel (sometimes called the null space) of T is the set of all vectors in V that are mapped to 0 (the zero vector), as follows:

ker(T) = {v ∈ V | T(v) = 0}

And the image (sometimes called the range) of T is defined as follows:

im(T) = {w ∈ W | there exists v ∈ V such that T(v) = w}.

V and W are also sometimes known as the domain and codomain of T.

It is best to think of the kernel as the set of all vectors in V that T maps to the zero vector. The image, however, is the set of all vectors in W that can be reached by applying T to some vector in V.

The rank-nullity theorem (sometimes referred to as the fundamental theorem of linear mappings) states that given two vector spaces V and W and a linear mapping T: V → W, the following will remain true:

dim(V) = dim(ker(T)) + dim(im(T)).
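We can verify the theorem numerically for a small example. The sketch below (our own illustration; the matrix is chosen so its rank is easy to see) computes the rank by Gaussian elimination, and the nullity then follows from the theorem:

```python
# Verify the rank-nullity theorem for a sample 2x3 matrix, i.e. a map
# T: R^3 -> R^2, so dim(V) = 3. rank() computes the rank by Gaussian
# elimination; the example matrix is an illustrative choice.

def rank(A, eps=1e-9):
    A = [row[:] for row in A]          # work on a copy
    m, n = len(A), len(A[0])
    r, pivot_row = 0, 0
    for col in range(n):
        # find a row at or below pivot_row with a non-zero entry in this column
        pivot = next((i for i in range(pivot_row, m) if abs(A[i][col]) > eps), None)
        if pivot is None:
            continue
        A[pivot_row], A[pivot] = A[pivot], A[pivot_row]
        # eliminate the entries below the pivot
        for i in range(pivot_row + 1, m):
            factor = A[i][col] / A[pivot_row][col]
            A[i] = [a - factor * b for a, b in zip(A[i], A[pivot_row])]
        pivot_row += 1
        r += 1
    return r

A = [[1, 2, 3],
     [2, 4, 6]]                        # second row is a multiple of the first

dim_V = 3
dim_image = rank(A)                    # dim(im(T)) = 1 here
dim_kernel = dim_V - dim_image         # nullity, by the theorem
assert dim_image == 1 and dim_kernel == 2
print("dim(V) =", dim_image + dim_kernel)
```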

Metric space and normed space

Metrics help define the concept of distance in Euclidean space (denoted by ℝⁿ). Metric spaces, however, needn't always be vector spaces. We use them because they allow us to define limits for objects besides real numbers.

So far, we have been dealing with vectors, but we don't yet know how to calculate the length of a vector, the distance between two vectors, or the angle between two vectors, and thus the concept of orthogonality (perpendicularity). This is where Euclidean spaces come in handy. In fact, they are the fundamental space of geometry. This may seem rather trivial at the moment, but their importance will become more apparent to you as we get further on in the book.

In Euclidean space, we tend to refer to vectors as points.

A metric on a set S is defined as a function d: S × S → ℝ that satisfies the following criteria:

  • d(x, y) ≥ 0, and d(x, y) = 0 if and only if x = y
  • d(x, y) = d(y, x) (known as symmetry)
  • d(x, z) ≤ d(x, y) + d(y, z) (known as the triangle inequality)

For all x, y, z ∈ S.

That's all well and good, but how exactly do we calculate distance?

Let's suppose we have two points, x = (x₁, x₂) and y = (y₁, y₂) in ℝ²; then, the distance between them can be calculated as follows:

d(x, y) = √((x₁ − y₁)² + (x₂ − y₂)²)

And we can extend this to find the distance between points in ℝⁿ, as follows:

d(x, y) = √((x₁ − y₁)² + (x₂ − y₂)² + … + (xₙ − yₙ)²)
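The ℝⁿ formula translates directly into code; a quick sketch (the sample points are illustrative):

```python
# Euclidean distance between two points in R^n, following the formula above.
from math import sqrt

def euclidean_distance(x, y):
    return sqrt(sum((x_i - y_i) ** 2 for x_i, y_i in zip(x, y)))

# The classic 3-4-5 right triangle: distance from (0, 0) to (3, 4) is 5.
assert euclidean_distance((0, 0), (3, 4)) == 5.0
print(euclidean_distance((0, 0, 0), (1, 2, 2)))  # sqrt(1 + 4 + 4) = 3.0
```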

While metrics help with the notion of distance, norms define the concept of length in Euclidean space.

A norm on a vector space V is a function ‖·‖: V → ℝ that satisfies the following conditions:

  • ‖v‖ ≥ 0, and ‖v‖ = 0 if and only if v = 0
  • ‖cv‖ = |c|‖v‖ (known as homogeneity)
  • ‖u + v‖ ≤ ‖u‖ + ‖v‖ (also known as the triangle inequality)

For all u, v ∈ V and c ∈ ℝ.

It is important to note that any norm on the vector space creates a distance metric on the said vector space, as follows:

d(u, v) = ‖u − v‖
This satisfies the rules for metrics, telling us that a normed space is also a metric space.

In general, for our purposes, we will only be concerned with four norms on ℝⁿ, as follows:

  • ‖x‖₁ = |x₁| + |x₂| + … + |xₙ| (the 1-norm)
  • ‖x‖₂ = √(x₁² + x₂² + … + xₙ²) (the 2-norm, or Euclidean norm)
  • ‖x‖ₚ = (|x₁|ᵖ + |x₂|ᵖ + … + |xₙ|ᵖ)^(1/p) (the p-norm; this applies only if p ≥ 1)
  • ‖x‖∞ = max(|x₁|, |x₂|, …, |xₙ|) (the ∞-norm)

If you look carefully at the four norms, you will notice that the 1- and 2-norms are versions of the p-norm. The ∞-norm, however, is the limit of the p-norm as p tends to infinity.
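A sketch of all four norms (the test vector is our own illustrative choice), including a numerical check that the p-norm approaches the ∞-norm for large p:

```python
# The four norms above on R^n, plus a check that the p-norm
# approaches the infinity-norm as p grows.
from math import sqrt

def norm_1(x):
    return sum(abs(x_i) for x_i in x)

def norm_2(x):
    return sqrt(sum(x_i ** 2 for x_i in x))

def norm_p(x, p):
    return sum(abs(x_i) ** p for x_i in x) ** (1 / p)

def norm_inf(x):
    return max(abs(x_i) for x_i in x)

x = (3, -4, 1)
assert norm_1(x) == 8
assert abs(norm_2(x) - norm_p(x, 2)) < 1e-12   # the 2-norm is the p = 2 case
assert norm_inf(x) == 4
# For large p, the p-norm is already very close to the infinity-norm:
assert abs(norm_p(x, 100) - norm_inf(x)) < 0.05
```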

Using these definitions, we can define two vectors u and v to be orthogonal if the following applies:

‖u + v‖² = ‖u‖² + ‖v‖²

Inner product space

An inner product on a vector space V is a function ⟨·,·⟩: V × V → ℝ that satisfies the following rules:

  • ⟨v, v⟩ ≥ 0, and ⟨v, v⟩ = 0 if and only if v = 0
  • ⟨u, v⟩ = ⟨v, u⟩ (symmetry)
  • ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩ (linearity in the first argument)

For all u, v, w ∈ V and a, b ∈ ℝ.

It is important to note that any inner product on the vector space creates a norm on the said vector space, which we see as follows:

‖v‖ = √⟨v, v⟩
From these rules and definitions, we can see that all inner product spaces are also normed spaces, and therefore also metric spaces.

Another very important concept is orthogonality, which, in a nutshell, means that two vectors are perpendicular to each other (that is, they meet at a right angle), as in Euclidean geometry.

Two vectors are orthogonal if their inner product is zero, that is, ⟨u, v⟩ = 0. As a shorthand for perpendicularity, we write u ⊥ v.

Additionally, if the two orthogonal vectors are of unit length, that is, ‖u‖ = ‖v‖ = 1, then they are called orthonormal.

In general, the inner product on ℝⁿ is as follows:

⟨x, y⟩ = xᵀy = x₁y₁ + x₂y₂ + … + xₙyₙ

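A short sketch (illustrative vectors of our own choosing) tying the pieces together: the standard inner product on ℝⁿ, the norm it induces, and an orthogonality check:

```python
# Standard inner product on R^n, the norm it induces,
# and an orthogonality test based on <u, v> = 0.
from math import sqrt

def inner(x, y):
    return sum(x_i * y_i for x_i, y_i in zip(x, y))

def norm(x):
    return sqrt(inner(x, x))

def orthogonal(x, y):
    return inner(x, y) == 0

u, v = (1, 2), (-2, 1)
assert orthogonal(u, v)                  # <u, v> = -2 + 2 = 0
assert norm((3, 4)) == 5.0               # induced norm matches Euclidean length
# An orthonormal pair: orthogonal and both of unit length
e1, e2 = (1, 0), (0, 1)
assert orthogonal(e1, e2) and norm(e1) == norm(e2) == 1.0
```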