Graph Machine Learning
By Claudio Stamile, Aldo Marzullo, and Enrico Deusebio
Published in Jun 2021 by Packt | ISBN-13 9781800204492 | 338 pages | 1st Edition

Table of Contents (15 chapters)

Preface
Section 1 – Introduction to Graph Machine Learning
Chapter 1: Getting Started with Graphs
Chapter 2: Graph Machine Learning
Section 2 – Machine Learning on Graphs
Chapter 3: Unsupervised Graph Learning
Chapter 4: Supervised Graph Learning
Chapter 5: Problems with Machine Learning on Graphs
Section 3 – Advanced Applications of Graph Machine Learning
Chapter 6: Social Network Graphs
Chapter 7: Text Analytics and Natural Language Processing Using Graphs
Chapter 8: Graph Analysis for Credit Card Transactions
Chapter 9: Building a Data-Driven Graph-Powered Application
Chapter 10: Novel Trends on Graphs
Other Books You May Enjoy

Chapter 4: Supervised Graph Learning

Supervised learning (SL) likely accounts for the majority of practical machine learning (ML) tasks. Thanks to increasingly active and effective data collection efforts, it is very common nowadays to deal with labeled datasets.

This is also true for graph data, where labels can be assigned to nodes, communities, or even to an entire graph. The task, then, is to learn a mapping function between the input and the label (also known as a target or an annotation).

For example, given a graph representing a social network, we might be asked to guess which users (nodes) will close their accounts. We can learn this predictive function by training a graph ML model on retrospective data, where each user is labeled as "faithful" or "quitter" based on whether they closed their account after a few months.

In this chapter, we will explore the concept of SL and how it can be applied to graphs. We will also be providing an...

Technical requirements

We will be using Jupyter notebooks with Python 3.8 for all of our exercises. The following code block lists the Python libraries that need to be installed for this chapter using pip (for example, run pip install networkx==2.5 on the command line):

Jupyter==1.0.0
networkx==2.5
matplotlib==3.2.2
node2vec==0.3.3
karateclub==1.0.19
scikit-learn==0.24.0
pandas==1.1.3
numpy==1.19.2
tensorflow==2.4.1
neural-structured-learning==1.3.1
stellargraph==1.2.1

In the rest of this book, unless clearly stated otherwise, nx refers to the result of the import networkx as nx Python command.

All code files relevant to this chapter are available at https://github.com/PacktPublishing/Graph-Machine-Learning/tree/main/Chapter04.

The supervised graph embedding roadmap 

In SL, a training set consists of a sequence of ordered pairs (x, y), where x is a set of input features (often signals defined on graphs) and y is the output label assigned to it. The goal of an ML model, then, is to learn the function that maps each x value to its y value. Common supervised tasks include predicting user properties in a large social network or predicting the attributes of molecules, where each molecule is a graph.
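Formally, using a generic textbook formulation (not notation specific to this book), learning this mapping amounts to choosing, from a family of candidate functions, the one that minimizes an empirical loss over the training pairs:

\hat{f} = \arg\min_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} L\left(y_i, f(x_i)\right)

Here, L is a task-appropriate loss function, such as cross-entropy for classification or squared error for regression.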

Sometimes, however, not all instances can be provided with a label. In this scenario, a typical dataset consists of a small set of labeled instances and a larger set of unlabeled instances. For such situations, semi-supervised learning (SSL) has been proposed, whereby algorithms aim to exploit the label dependency information reflected by the available labels in order to learn a predictive function for the unlabeled samples.

With regard to supervised graph ML techniques, many algorithms have been developed. However, as previously...

Feature-based methods 

One very simple (yet powerful) method for applying ML to graphs is to consider the encoding function as a simple embedding lookup. When dealing with supervised tasks, one simple way of doing this is to exploit graph properties. In Chapter 1, Getting Started with Graphs, we learned how graphs (or nodes in a graph) can be described by means of structural properties, each one "encoding" important information from the graph itself.

Let's forget graph ML for a moment: in classical supervised ML, the task is to find a function that maps a set of (descriptive) features of an instance to a particular output. Such features should be carefully engineered so that they are sufficiently representative for the concept to be learned. Just as the number of petals and the sepal length might be good descriptors of a flower, when describing a graph we might rely on its average degree, its global efficiency, and its characteristic path length.

This shallow...
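To make the feature-based approach concrete, here is a minimal sketch (our illustration, not the book's own code) that describes each graph by exactly the three properties just mentioned and trains a standard scikit-learn classifier on them; the two random-graph families used as classes are purely illustrative assumptions:

import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def graph_features(G):
    # Three simple structural descriptors of the whole graph
    avg_degree = np.mean([d for _, d in G.degree()])
    efficiency = nx.global_efficiency(G)
    # The characteristic path length is only defined on connected graphs,
    # so we compute it on the largest connected component
    cc = G.subgraph(max(nx.connected_components(G), key=len))
    path_length = nx.average_shortest_path_length(cc)
    return [avg_degree, efficiency, path_length]

# Toy dataset: two structurally different graph families as the two classes
graphs = [nx.erdos_renyi_graph(30, 0.2, seed=i) for i in range(50)] + \
         [nx.barabasi_albert_graph(30, 2, seed=i) for i in range(50)]
y = [0] * 50 + [1] * 50

X = np.array([graph_features(G) for G in graphs])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))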

Shallow embedding methods 

As we already described in Chapter 3, Unsupervised Graph Learning, shallow embedding methods are a subset of graph embedding methods that learn node, edge, or graph representations for only a finite set of input data. They cannot be applied to instances other than the ones used to train the model. Before starting our discussion, it is important to define how supervised and unsupervised shallow embedding algorithms differ.

The main difference between unsupervised and supervised embedding methods essentially lies in the task they attempt to solve: while unsupervised shallow embedding algorithms try to learn a good graph, node, or edge representation in order to build well-defined clusters, supervised algorithms try to find the best solution for a prediction task such as node or graph classification.

In this section, we will explain in detail some of those supervised shallow embedding algorithms. Moreover, we will enrich...
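As a first taste of this family of algorithms, the following is a minimal, self-contained sketch of the classic label propagation idea, which iteratively diffuses the known labels over the adjacency structure while clamping the labeled nodes; this is a generic textbook formulation, not the book's own implementation:

import networkx as nx
import numpy as np

def label_propagation(G, labels, n_iter=50):
    # labels: dict mapping a few node ids to class ids (0..K-1);
    # returns a predicted class for every node in G
    nodes = list(G.nodes())
    n, k = len(nodes), max(labels.values()) + 1
    A = nx.to_numpy_array(G)
    # Row-normalized adjacency: each node averages its neighbors' label scores
    P = A / A.sum(axis=1, keepdims=True)
    Y = np.zeros((n, k))
    for i, node in enumerate(nodes):
        if node in labels:
            Y[i, labels[node]] = 1.0
    Y0 = Y.copy()
    clamped = np.array([node in labels for node in nodes])
    for _ in range(n_iter):
        Y = P @ Y
        Y[clamped] = Y0[clamped]  # clamp: labeled nodes keep their true labels
    return {node: int(np.argmax(Y[i])) for i, node in enumerate(nodes)}

# Example: Zachary's karate club with one seed label per faction
G = nx.karate_club_graph()
predictions = label_propagation(G, {0: 0, 33: 1})
print(predictions)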

Graph regularization methods

The shallow embedding methods described in the previous section show how topological information and relations between data points can be encoded and leveraged in order to build more robust classifiers and address semi-supervised tasks. In general terms, network information can be extremely useful for constraining models and enforcing the output to be smooth across neighboring nodes. As we have already seen in previous sections, this idea can be used efficiently in semi-supervised tasks by propagating information to neighboring unlabeled nodes.

On the other hand, this idea can also be used to regularize the learning phase in order to create more robust models that tend to generalize better to unseen examples. Both the label propagation and label spreading algorithms we have seen previously can be implemented as a cost function to be minimized once we add an additional regularization term. Generally, in supervised tasks, we can write the cost function...
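Although the derivation is cut off in this excerpt, the general shape of such a cost function is standard in the literature; assuming a supervised loss L on the labeled nodes, a model f, edge weights w_ij, and a regularization strength λ, it can be written as:

\mathcal{L} = \sum_{i \in \text{labeled}} L\left(y_i, f(x_i)\right) + \lambda \sum_{(i,j) \in E} w_{ij} \left\lVert f(x_i) - f(x_j) \right\rVert^2

The second term penalizes predictions that differ across connected nodes, which is exactly the smoothness constraint described above.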

Graph CNNs

In Chapter 3, Unsupervised Graph Learning, we learned the main concepts behind graph neural networks (GNNs) and graph convolutional networks (GCNs). We also learned the difference between spectral and spatial graph convolution. More precisely, we saw that GCN layers can be used to encode graphs or nodes in unsupervised settings by learning how to preserve graph properties such as node similarity.

In this chapter, we will explore such methods in supervised settings. This time, our goal is to learn graph or node representations that can accurately predict node or graph labels. It is worth noting that the encoding function remains the same; what changes is the objective!

Graph classification using GCNs

Let's consider the PROTEINS dataset again, and load it as follows:

import pandas as pd
from stellargraph import datasets
dataset = datasets.PROTEINS()
graphs, graph_labels = dataset.load()
# necessary for converting...
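The code is cut off mid-comment in this excerpt. As one plausible continuation, the following sketch follows stellargraph's documented pattern for GCN-based graph classification (PaddedGraphGenerator plus GCNSupervisedGraphClassification); the layer sizes, learning rate, number of epochs, and the use of pd.get_dummies to turn the string class labels into a binary target are illustrative choices, not necessarily the book's:

from stellargraph.mapper import PaddedGraphGenerator
from stellargraph.layer import GCNSupervisedGraphClassification
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

# Convert the string class labels into a single binary column
labels = pd.get_dummies(graph_labels, drop_first=True)

# The generator pads the graphs in each batch to a common number of nodes
generator = PaddedGraphGenerator(graphs=graphs)

# Two GCN layers followed by a small dense classification head
gc_model = GCNSupervisedGraphClassification(
    layer_sizes=[64, 64],
    activations=["relu", "relu"],
    generator=generator,
    dropout=0.5,
)
x_inp, x_out = gc_model.in_out_tensors()
predictions = Dense(32, activation="relu")(x_out)
predictions = Dense(1, activation="sigmoid")(predictions)

model = Model(inputs=x_inp, outputs=predictions)
model.compile(optimizer=Adam(0.005), loss="binary_crossentropy", metrics=["acc"])

# Train on all graphs for brevity; a real experiment would hold out a test set
train_gen = generator.flow(list(range(len(graphs))), targets=labels.values, batch_size=32)
model.fit(train_gen, epochs=10, verbose=1)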

Summary 

In this chapter, we learned how supervised ML can be effectively applied to graphs to solve real problems such as node and graph classification.

In particular, we first analyzed how graph and node properties can be directly used as features to train classic ML algorithms. We then covered shallow embedding methods, simple approaches for learning node, edge, or graph representations for only a finite set of input data.

Next, we learned how regularization techniques can be used during the learning phase in order to create more robust models that tend to generalize better.

Finally, we saw how GNNs can be applied to solve supervised ML problems on graphs.

But what can those algorithms be useful for? In the next chapter, we will explore common problems on graphs that need to be solved through ML techniques.
