Edge Computing Systems with Kubernetes: A use case guide for building edge systems using K3s, k3OS, and open source cloud native technologies

By Sergio Mendez
Book Oct 2022 458 pages 1st Edition


Edge Computing with Kubernetes

Edge computing is an emerging paradigm of distributed systems in which the units that compute information sit close to the origin of that information. The benefit of this paradigm is that it reduces your system’s exposure to network outages and cuts the delays introduced by processing data in the cloud, which means a more responsive experience for your machine learning or Internet of Things (IoT) applications. This chapter covers the basics and the importance of edge computing and how Kubernetes can be used for it. It also covers different scenarios and basic architectures using low-power devices, which can use private and public clouds to exchange data.

In this chapter, we’re going to cover the following main topics:

  • Edge data centers using K3s and basic edge computing concepts
  • Basic edge computing architectures with K3s
  • Adapting your software to run at the edge

Technical requirements

In this chapter, we are going to run our edge computing on an edge device (such as a Raspberry Pi), so we need to set up a cross-compiling toolchain for Advanced RISC Machines (ARM).

For this, you need one of the following:

  • A Mac with terminal access
  • A PC with Ubuntu installed with terminal access
  • A virtual machine with Ubuntu installed with terminal access

For more detail and code snippets, check out this resource on GitHub: https://github.com/PacktPublishing/Edge-Computing-Systems-with-Kubernetes/tree/main/ch1.

Edge data centers using K3s and basic edge computing concepts

With the evolution of the cloud, companies and organizations are starting to migrate their processing tasks to edge computing devices, with the goal of reducing costs and getting more value from the infrastructure they pay for. As part of the introductory content of this book, we must learn the basic concepts related to edge computing and understand why we use K3s for it. So, let’s get started with the basic concepts.

The edge and edge computing

According to Qualcomm and Cisco, the edge can be defined as “anywhere where data is processed before it crosses the Wide Area Network (WAN)”; this is the edge, but what is edge computing? A post by Eric Hamilton from Cloudwards.net defines edge computing as “the processing and analyzing of data along a network edge, closest to the point of its collection, so that data becomes actionable.” In other words, edge computing refers to processing your data near its source and distributing the computation across different places, using devices that are close to the source of the data.

To add more context, let’s explore the next figure:

Figure 1.1 – Components of edge layers

This figure shows how the data is processed in different contexts; these contexts are the following:

  • Cloud layer: In this layer, you can find the cloud providers, such as AWS, Azure, GCP, and more.
  • Near edge: In this layer, you can find telecommunications infrastructure and devices, such as 5G networks, radio virtual devices, and similar devices.
  • Far edge: In this layer, you will find edge clusters, such as K3s clusters, or devices that exchange data between the cloud and edge layers. This layer can be further subdivided, which gives us the tiny edge layer.
  • Tiny edge: In this layer, you will find sensors, end-user devices that exchange data with a processing device, and edge clusters on the far edge.

Important Note

Remember that edge computing refers to data that is processed on edge devices before the result goes to its destination, which could be on a public or private cloud.

Other important concepts to consider for building edge clusters are the following:

  • Fog computing: An architecture of cloud services that distributes the system across near edge and far edge devices; these devices can be geographically dispersed.
  • Multi-Access Edge Computing (MEC): This distributes computing at the edge of larger networks, with low latency and high bandwidth, and is the successor of mobile edge computing; in other words, the processing uses telecom networks and mobile devices.
  • Cloudlets: A small-scale cloud data center that can be used for resource-intensive use cases, such as data analytics, Machine Learning (ML), and so on.

Benefits of edge computing

With this short explanation, let’s move on to understand the main benefits of edge computing; some of these include the following:

  • Reducing latency: Edge computing can run heavy compute processes on edge devices, reducing the time needed to deliver the results.
  • Reducing bandwidth: Edge computing can reduce the bandwidth used by keeping part of the data on the edge devices, reducing the traffic on the network.
  • Reducing costs: Reducing latency and bandwidth translates to the reduction of operational costs, which is one of the most important benefits of edge computing.
  • Improving security: Edge computing uses data aggregation and data encryption algorithms to improve the security of data access.
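To make the bandwidth benefit concrete, here is a toy Python sketch (not from the book; the function name and numbers are illustrative) that compares sending every raw sensor reading to the cloud against sending one aggregate per time window from the edge:

```python
def bytes_saved(readings, window=60, bytes_per_value=8):
    """Bytes saved by sending one aggregate per window instead of every raw reading."""
    raw = len(readings) * bytes_per_value              # every reading goes to the cloud
    windows = (len(readings) + window - 1) // window   # one summary value per window
    aggregated = windows * bytes_per_value             # only the summaries go to the cloud
    return raw - aggregated

# One hour of one-reading-per-second data, aggregated per minute at the edge:
print(bytes_saved(range(3600)))  # 28320 bytes less traffic to the cloud
```

In this toy setup, one hour of per-second readings shrinks from 28,800 bytes to 480 bytes of summaries; real deployments trade detail for bandwidth in the same way.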

Let’s now discuss containers, Docker, and containerd.

Containers, Docker, and containerd for edge computing

In the last few years, container adoption has been increasing because of the success of Docker, which has been the most popular container engine over that period. Container technology gives businesses a way to design applications using a microservices architecture, speeding up their development and their strategies for scaling applications. So, to begin with a basic concept: a container is a small runtime environment that packages your application with all the dependencies it needs to run. This concept is not new, but Docker, a container engine, popularized it. In simple words, Docker uses small operating system images with the necessary dependencies to run your software; this can be called operating-system-level virtualization. Under the hood, it uses the cgroups kernel feature of Linux to limit CPU, memory, network, I/O, and so on for your processes. Other operating systems, such as Windows or FreeBSD, use similar features to isolate processes and provide this type of virtualization. Let’s see the next figure, which represents these concepts:

Figure 1.2 – Containerized applications inside the OS

This figure shows that a container doesn’t depend on special features such as a hypervisor, which is commonly seen in the hardware virtualization used by VMware, Hyper-V, and Xen; instead, the application runs as a binary inside the container and reuses the host kernel. Let’s say that running a container is almost like running a binary program inside a directory, with some resource limits added, using cgroups in the case of Linux containers.
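As a quick illustration of those cgroup limits, the following Python sketch (an illustrative helper, not from the book) reads the memory limit that a Linux process or container sees, trying the cgroup v2 path first and falling back to the v1 path:

```python
from pathlib import Path

def cgroup_memory_limit():
    """Return the cgroup memory limit in bytes, or None if unlimited or not found."""
    candidates = (
        "/sys/fs/cgroup/memory.max",                    # cgroup v2
        "/sys/fs/cgroup/memory/memory.limit_in_bytes",  # cgroup v1
    )
    for path in candidates:
        f = Path(path)
        if f.exists():
            value = f.read_text().strip()
            return None if value == "max" else int(value)
    return None

print(cgroup_memory_limit())
```

Run inside a container started with a memory limit, this reports the limit the kernel enforces; on an unconstrained host it reports None or a very large number.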

Docker implements all these abstractions. It is a popular container toolchain that adds Git-like versioning features, which is the main reason it became so popular: easy portability and versioning at the operating system level. At the moment, containerd is the container runtime used by both Docker and Kubernetes to create containers. In general, with containerd you can create containers without extra features; it is very optimized. With the explosion of edge computing, containerd has become an important piece of software for running containers in low-resource environments.

In general, with all these technologies you can do the following:

  • Standardize how to package your software.
  • Bring portability to your software.
  • Maintain your software in an easier way.
  • Run applications in low-resource environments.

So, Docker and containerd must be taken into consideration as important pieces of software for building edge computing and low-resource environments.

Distributed systems, edge computing, and Kubernetes

In the last decade, distributed systems evolved from multi-node clusters with applications using monolithic architectures to multi-node clusters with microservices architectures. One of the first options to start building microservices is to use containers, but once the system needs to scale, it is necessary to use an orchestrator. This is where Kubernetes comes into the game.

As an example, let’s imagine an orchestra with lots of musicians playing the piano, trumpets, and so on. If the orchestra were disorganized, what would you need to organize all the musicians? The answer is an orchestra director, or orchestrator. This is where Kubernetes appears: each musician is a container that needs to communicate with or listen to the other musicians and, of course, follow the instructions of the director. In this way, all the musicians can play their instruments at the right time and sound beautiful.

This is what Kubernetes does; it is an orchestrator of containers, but at the same time it is a platform with all the necessary prebuilt pieces to build your own distributed system, ready to scale and designed with best practices that can help you to implement agile development and a DevOps culture. Depending on your use case, sometimes it’s better to use something small such as Docker or containerd, but for complex or demanding scenarios, it’s better to use Kubernetes.
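To make the orchestration idea concrete, here is a minimal Kubernetes Deployment manifest (illustrative; the name and image are placeholders) of the kind a K3s cluster would accept. You declare the desired state, and the orchestrator keeps three replicas of the container running, restarting them if they fail:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-edge            # placeholder name for the example
spec:
  replicas: 3                 # desired state: three running copies
  selector:
    matchLabels:
      app: hello-edge
  template:
    metadata:
      labels:
        app: hello-edge
    spec:
      containers:
        - name: hello-edge
          image: nginx:alpine  # any small container image works here
          resources:
            limits:
              memory: "64Mi"   # enforced through cgroups, as described earlier
              cpu: "250m"
```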

Edge clusters using K3s – a lightweight Kubernetes

Now, the big question is how to start building edge computing systems. Let’s get started with K3s. K3s is a Kubernetes-certified distribution created by Rancher Labs. By default, K3s omits extra features that are not vital to running Kubernetes, although they can be added later. K3s uses containerd as its container engine, which gives it the ability to run in low-resource environments on ARM devices. You can also run K3s on x86_64 devices in production environments; however, for the purposes of this book, we will use K3s as our main piece of software to build edge computing systems using ARM devices.

Talking about clusters at the edge, K3s offers the same power as Kubernetes but in a small package and in an optimized way, plus some features designed especially for edge computing systems. K3s is very easy to use, compared with other Kubernetes distributions. It’s a lightweight Kubernetes that can be used for edge computing, sandbox environments, or whatever you want, depending on the use case.

Edge devices using ARM processors and micro data centers

Now, it’s time to talk about edge devices and ARM processors, so let’s begin with edge devices. Edge devices are designed to process and analyze information near the location of the data source; this is where the edge computing mindset comes from. Talking about low-energy-consumption devices, x86 (Intel) processors consume more energy and run hotter than ARM processors, which means more power and more cooling; in other words, you will pay more money to run x86_64 processors. ARM processors, on the other hand, have less computational power but consume less energy. That is the reason for the success of ARM processors in smartphones: they offer a better trade-off between processing power and energy consumption than Intel processors.

Because of that, companies are interested in designing micro data centers that use ARM processors in their servers, and for the same reason, companies are starting to migrate their workloads to devices using ARM processors. One example is AWS Graviton2, an ARM processor that powers a family of AWS cloud instances.

Edge computing diagrams to build your system

Right now, with all the basic concepts of containers, orchestrators, and edge computing and its layers covered, we can focus on the basic diagrams of edge computing configurations that you can use to design this kind of system. Let’s use K3s as our main edge computing platform for the following diagrams.

Edge cluster and public cloud

This configuration shares and processes data between the public or private cloud with edge layers, but let’s explain its different layers:

  • Cloud layer: This layer consists of a public cloud provider, such as AWS, Azure, or GCP. The provider can offer instances using Intel or ARM processors; for example, AWS offers Graviton2 instances if you need an ARM processor. As a complement, the public cloud can offer managed services to store data, such as databases, storage, and so on. The private cloud can live in this layer too: software such as VMware ESXi or OpenStack can provide this kind of service or instance locally, and you can even choose a hybrid approach using both the public and the private cloud. In general, this layer supports your far and tiny edge layers for storage or data processing.
  • Near edge: In this layer, you can find network devices to move all the data between the cloud layer and the far layer. Typically, these include telco devices, 5G networks, and so on.
  • Far edge: In this layer, you can find K3s clusters, similar lightweight clusters such as KubeEdge, and software such as Docker or containerd. In general, this is your local processing layer.
  • Tiny edge: This is a layer inside the far edge, where you can find edge devices such as smartwatches, IoT devices, and so on, which send data to the far edge.
Figure 1.3 – Edge cluster and public cloud

Use cases include the following:

  • Scenarios where you must share data between different systems across the internet or a private cloud
  • Scenarios where you distribute data processing between your cloud and the edge, such as machine learning model generation or predictions
  • Scenarios where you must scale IoT applications, and the response time of the application is critical
  • Scenarios where you want to secure your data using the aggregation strategy of distributing data and encryption across the system

Regional edge clusters and public cloud

This configuration is focused on distributing the processing strategy across different regions and sharing data across a public cloud. Let’s explain the different layers:

  • Cloud layer: This layer contains managed services such as databases to distribute the data across different regions.
  • Near edge: In this layer, you can find network devices to move all the data between the cloud layer and the far layer. Typically, this includes telco devices, 5G networks, and so on.
  • Far edge: In this layer, you can find K3s clusters across different regions. These clusters or nodes can share or update the data stored in a public cloud.
  • Tiny edge: Here, you can find edge devices close to each region; thanks to this distributed configuration, the far edge clusters process the information near those devices.
Figure 1.4 – Regional edge cluster and public cloud

Use cases include the following:

  • Different cluster configurations across different regions
  • Reducing application response time, choosing the closest data, or processing node location, which is critical in IoT applications
  • Sharing data across different regions
  • Distributing processing across different regions

Single node cluster and public/private cloud

This is a basic configuration where a single computer processes all the information captured on tiny edge devices. Let’s explain the different layers:

  • Cloud layer: In this layer, you can find the data storage for the system. It could be placed on the public or private cloud.
  • Near edge: In this layer, you can find network devices to move all the data between the cloud layer and the far layer. Typically, this includes telco devices, 5G networks, and so on.
  • Far edge: In this layer, you can find a single-node K3s cluster that collects data from tiny edge devices.
  • Tiny edge: Devices that capture data, such as smartwatches, tablets, cameras, sensors, and so on. This kind of configuration is more for processing locally or on a small scale.
Figure 1.5 – Single node cluster and public/private cloud

Use cases include the following:

  • Low-cost and low-energy consumption environments
  • Green edge applications that can be powered by solar panels or wind turbines
  • Small processes or use cases, such as analyzing health records or autonomous house systems that need something local or not too complicated

Let’s now adapt the software to run at the edge.

Adapting your software to run at the edge

Something important when designing an edge computing system is choosing the processor architecture you build your software for. One popular architecture, because of its lower power consumption, is ARM, but if ARM is the selected architecture, in most cases it is necessary to port your current code from x86_64 (Intel) to ARM (ARMv7, such as a Raspberry Pi, or ARMv8, such as AWS Graviton2 instances). The following subsections include short guides for converting from one platform to another; this process is called cross-compiling. With this, you will be able to run your software on ARM devices using Go, Python, Rust, and Java. So, let’s get started.

Adapting Go to run on ARM

First, it’s necessary to install Go on your system. Here are a couple of ways to install Go.

Installing Go on Linux

To install Go on Linux, execute the following steps:

  1. Download and untar the Go official binaries:
    $ wget https://golang.org/dl/go1.15.linux-amd64.tar.gz
    $ tar -C /usr/local -xzf go1.15.linux-amd64.tar.gz
  2. Create your Go workspace directory:
    $ mkdir $HOME/go
  3. Set your GOPATH in the configuration file of your terminal with the following lines. ~/.profile is a common file to set these environment variables; let’s modify the .profile file:
    $ export PATH=$PATH:/usr/local/go/bin
    $ export GOPATH=$HOME/go
  4. Load the new configuration using the following command:
    $ . ~/.profile
    $ mkdir $GOPATH/src
  5. (Optional). If you want to, you can set these environment variables temporarily in your terminal using the following commands:
    $ export PATH=$PATH:/usr/local/go/bin
    $ export GOPATH=$HOME/go
  6. To check whether GOPATH is configured, run the following command:
    $ go env GOPATH

Now, you are ready to use Go on Linux. Let’s move to this installation using a Mac.

Installing Go on a Mac

To install Go on a Mac, execute the following steps:

  1. Install Homebrew (called brew) with the following command:
    $ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  2. Once it is installed, install Go with brew:
    $ brew install go

Important Note

To find out how to install brew, you can check the official page at https://brew.sh.

Cross-compiling from x86_64 to ARM with Go

To cross-compile from x86_64 to ARM, execute the following steps:

  1. Create a folder to store your code:
    $ cd ~/
    $ mkdir goproject
    $ cd goproject
  2. Create an initial Go module configuration so that external Go libraries can be installed outside GOPATH; for this, execute the next command:
    $ go mod init main
  3. Create the example.go file with Hello World as its contents:
    $ cat << EOF > example.go
    package main
    import "fmt"
    func main() {
       fmt.Println("Hello World") 
    }
    EOF
  4. Assuming that your environment is under x86_64 and you want to cross-compile for ARMv7 support, execute the following commands:
    $ env GOOS=linux GOARM=7 GOARCH=arm go build example.go

Use the next line for ARMv8 64-bit support:

$ env GOOS=linux GOARCH=arm64 go build example.go

Important Note

If you want to see other options for cross-compiling, see https://github.com/golang/go/wiki/GoArm.

  5. Set the execution permissions for the generated binary:
    $ chmod +x example
  6. Copy the generated binary to your ARM device and run it there to test whether it works:
    $ ./example
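Before copying the binary over, you can sanity-check that it was really built for ARM by inspecting the ELF header (on Linux, the file command does the same job). The following Python sketch is an illustrative helper, not part of the book’s toolchain, and assumes a little-endian (LSB) ELF binary, which is what these Go builds produce:

```python
import struct

# Common values of the ELF e_machine field
ELF_MACHINES = {0x03: "x86", 0x28: "ARM (32-bit)", 0x3E: "x86_64", 0xB7: "ARM64"}

def elf_machine(header: bytes) -> str:
    """Return the target architecture encoded in the first 20 bytes of an ELF file."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF binary")
    machine = struct.unpack_from("<H", header, 18)[0]  # e_machine, little-endian
    return ELF_MACHINES.get(machine, hex(machine))

# Usage: elf_machine(open("example", "rb").read(20)) should report "ARM (32-bit)"
# for a GOARM=7 GOARCH=arm build and "ARM64" for a GOARCH=arm64 build.
```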

In the next section, we will learn how to adapt Rust to run on ARM.

Adapting Rust to run on ARM

First, it’s necessary to install Rust on your system. Here are a couple of ways to install Rust.

Installing Rust on Linux

To install Rust on Linux, execute the following steps:

  1. Install Rust by executing the following command in the terminal:
    $ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh 
  2. Set the path for Rust in the configuration file of your terminal. For example, if you are using Bash, add the following line to your .bashrc:
    $ export PATH=$PATH:$HOME/.cargo/bin

Installing Rust on a Mac

To install Rust on a Mac, execute the following steps:

  1. Install Homebrew with the following command:
    $ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  2. Once it is installed, install rustup with brew:
    $ brew install rustup-init
  3. Run the rustup command to install Rust and all the necessary tools for Rust with the following command:
    $ rustup-init
  4. Set your terminal environment variables by adding the following line to your terminal configuration file:
    $ export PATH=$PATH:$HOME/.cargo/bin

Important Note

Mac users often use the Zsh shell, so they have to use .zshrc. If you are using another shell, look for the proper configuration file or use the generic /etc/profile.

Cross-compiling from x86_64 to ARMv7 with Rust on a Mac

To cross-compile from x86_64 to ARM, execute the following steps:

  1. Install the toolchain that provides the compiler and environment for the ARMv7 architecture on your Mac by adding the following Homebrew tap:
    $ brew tap messense/macos-cross-toolchains
  2. Download the support for ARMv7 for cross-compiling by executing the following command:
    $ brew install armv7-unknown-linux-gnueabihf
  3. Now set the environment variables:
    $ export CC_armv7_unknown_linux_gnueabihf=armv7-unknown-linux-gnueabihf-gcc
    $ export CXX_armv7_unknown_linux_gnueabihf=armv7-unknown-linux-gnueabihf-g++
    $ export AR_armv7_unknown_linux_gnueabihf=armv7-unknown-linux-gnueabihf-ar
    $ export CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER=armv7-unknown-linux-gnueabihf-gcc
  4. Create a folder to store your code:
    $ cd ~/
    $ mkdir rustproject
    $ cd rustproject
  5. Create an initial Hello World project with Rust:
    $ cargo new hello-rust
    $ cd hello-rust

The generated Rust code will look like this:

fn main() {
  println!("Hello, world!");
}

The source code will be located at src/main.rs.

  6. Add support for the ARMv7 hard-float target, which matches the toolchain installed previously:
    $ rustup target add armv7-unknown-linux-gnueabihf
  7. Build your software:
    $ cargo build --target=armv7-unknown-linux-gnueabihf
  8. The generated binary will be inside the target/armv7-unknown-linux-gnueabihf/debug folder.
  9. Now copy the hello-rust binary into your device and test whether it works.

Important Note

For more options for cross-compiling with Rust, check out https://doc.rust-lang.org/nightly/rustc/platform-support.html and https://rust-lang.github.io/rustup/cross-compilation.html. For the toolchain for Mac and AArch64 (64-bit ARMv8), check out aarch64-unknown-linux-gnu inside the repository at https://github.com/messense/homebrew-macos-cross-toolchains.

Adapting Python to run on ARM

First, it is necessary to install Python on your system. There are a couple of ways of doing this.

Installing Python on Linux

To install Python, execute the following steps:

  1. Update your repositories:
    $ sudo apt-get update
  2. Install Python 3:
    $ sudo apt-get install -y python3

Installing Python on a Mac

To install Python on a Mac using Homebrew, execute the following steps:

  1. Check for your desired Python version on brew’s available version list:
    $ brew search python
  2. Let’s say that you choose Python 3.8; you have to install it by executing the following command:
    $ brew install python@3.8
  3. Test your installation:
    $ python3 --version

Cross-compiling from x86_64 to ARM with Python

Python is one of the most popular languages today and is commonly used for AI and ML applications. Python is an interpreted language: like Java, it needs a runtime environment to run the code, and in this case, you must install Python itself as that runtime. It faces similar challenges to Java when running code, plus some of its own: sometimes, you need to compile libraries from source to use them. The standard Python libraries currently support ARM; the issues appear when you want something outside those standard libraries.

As a basic example, let’s run Python code across different platforms by executing the following steps:

  1. Create a basic file called example.py:
    def main():
       print("hello world")
    if __name__ == "__main__":
       main()
  2. Copy example.py to your ARM device.
  3. Install Python 3 on your ARM device by running the following command:
    $ sudo apt-get install -y python3
  4. Run your code:
    $ python3 example.py
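Because the same Python source runs unmodified on both architectures, your code can also detect at runtime which platform it is on, for example, to load an ARM-specific library. Here is a small sketch (the helper name is illustrative):

```python
import platform

def is_arm() -> bool:
    """True when running on an ARM machine (e.g., a Raspberry Pi or Graviton2 instance)."""
    # platform.machine() reports e.g. "armv7l"/"aarch64" on ARM and "x86_64" on Intel
    return platform.machine().lower() in {"armv6l", "armv7l", "aarch64", "arm64"}

print(is_arm())
```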

Adapting Java to run on ARM

When talking about running Java on ARM devices, things are a little different. Java uses a hybrid compiler – in other words, a two-phase compiler. This means that it generates an intermediate code called bytecode, which is interpreted by a Java Virtual Machine (JVM). This bytecode is cross-platform: following the Java philosophy of compile once, run everywhere, you can compile on whatever platform you want, and the result will run on any other platform without modification. So, let’s see how to perform cross-compiling for a basic Java program that can run on an ARMv7 or an ARMv8 64-bit device.

Installing Java JDK on Linux

To install Java on Linux, execute the following commands:

  1. Update the current repositories of Ubuntu:
    $ sudo apt-get update
  2. Install the official OpenJDK 8 (the JDK package includes the javac compiler):
    $ sudo apt-get install -y openjdk-8-jdk
  3. Test whether javac runs:
    $ javac -version

Installing Java JDK on a Mac

If you don’t have Java installed on your Mac, follow the next steps:

  1. Download the Java JDK from the following link, choosing the package for your platform (Linux, Mac, or Windows): https://www.oracle.com/java/technologies/javase-downloads.html.
  2. Run the installer.
  3. To test whether Java exists or whether it was installed correctly, run the following command:
    $ java -version
  4. Test whether the compiler is installed by executing the following command:
    $ javac -version

Cross-compiling from x86_64 to ARM with Java

Java is a language that generates an intermediate code called bytecode, which runs on the JVM. Let’s say that you have a basic code in a file called Example.java:

class Example {
   public static void main(String[] args) {
      System.out.println("Hello world!");
   }
}

To execute your code, follow these steps:

  1. To compile it, use the following command:
    $ javac Example.java

This will generate the intermediate code in a file called Example.class, which can be executed by the JVM. Let’s do this in the next step.

  2. To run the bytecode, execute the following command:
    $ java Example
  3. Now, copy Example.class to another device and run it with the proper JVM using the java command.

Summary

This chapter explained the basic concepts of edge computing and how it relates to other concepts, such as fog computing, MEC, and cloudlets. It also explained how containers and orchestrators such as Docker, containerd, and Kubernetes can help you build your own edge computing system, using different configurations depending on your use case. At the end of the chapter, we covered how to compile and run your software on edge devices with ARM processors, applying the cross-compiling technique to the Go, Python, Rust, and Java languages.

Questions

Here are a few questions to test your new knowledge:

  1. What is the difference between the edge and edge computing?
  2. What infrastructure configurations can you use to build an edge computing system?
  3. How can containers and orchestrators help you to build edge computing systems?
  4. What is cross-compiling and how can you use it to run your software on ARM devices?

Further reading

Here are some additional resources that you can check out to learn more about edge computing:


Key benefits

  • A guide to implementing an edge computing environment
  • Reduce latency and costs for real-time applications running at the edge
  • Find stable and relevant cloud native open source software to complement your edge environments

Description

Edge computing is a way of processing information near the source of data instead of processing it in data centers in the cloud. In this way, edge computing can reduce latency when data is processed, improving the user experience of real-time data visualization in your applications. Using K3s, a lightweight Kubernetes, and k3OS, a K3s-based Linux distribution, along with other open source cloud native technologies, you can build reliable edge computing systems without spending a lot of money. In this book, you will learn how to design edge computing systems with containers and edge devices using sensors, GPS modules, WiFi, LoRa communication, and so on. You will also get to grips with the different use cases and examples covered in this book, learning how to solve common edge computing problems such as updating your applications using GitOps, and reading data from sensors and storing it in SQL and NoSQL databases. Later chapters will show you how to connect hardware to your edge clusters, predict using machine learning, and analyze images with computer vision. All the examples and use cases in this book are designed to run on devices using 64-bit ARM processors, using Raspberry Pi devices as an example. By the end of this book, you will be able to use the content of these chapters as small pieces to create your own edge computing system.

What you will learn

  • Configure k3OS and K3s for development and production scenarios
  • Package applications into K3s for shipped-node scenarios
  • Deploy in occasionally connected scenarios, from one node to one million nodes
  • Manage GitOps for applications across different locations
  • Use open source cloud native software to complement your edge computing systems
  • Implement observability, event-driven, and serverless edge applications
  • Collect and process data from sensors at the edge and visualize it in the cloud

Product Details

Publication date : Oct 14, 2022
Length : 458 pages
Edition : 1st Edition
Language : English
ISBN-13 : 9781800568594


Table of Contents

Preface
Part 1: Edge Computing Basics
Chapter 1: Edge Computing with Kubernetes
Chapter 2: K3s Installation and Configuration
Chapter 3: K3s Advanced Configurations and Management
Chapter 4: k3OS Installation and Configurations
Chapter 5: K3s Homelab for Edge Computing Experiments
Part 2: Cloud Native Applications at the Edge
Chapter 6: Exposing Your Applications Using Ingress Controllers and Certificates
Chapter 7: GitOps with Flux for Edge Applications
Chapter 8: Observability and Traffic Splitting Using Linkerd
Chapter 9: Edge Serverless and Event-Driven Architectures with Knative and Cloud Events
Chapter 10: SQL and NoSQL Databases at the Edge
Part 3: Edge Computing Use Cases in Practice
Chapter 11: Monitoring the Edge with Prometheus and Grafana
Chapter 12: Communicating with Edge Devices across Long Distances Using LoRa
Chapter 13: Geolocalization Applications Using GPS, NoSQL, and K3s Clusters
Chapter 14: Computer Vision with Python and K3s Clusters
Chapter 15: Designing Your Own Edge Computing System
Index
Other Books You May Enjoy


FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe Reader installed, clicking on the link will download and open the PDF file directly. If you don't, save the PDF file to your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing: When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it, we have tried to balance the need for the eBook to be usable for you, the reader, with our need to protect our rights as publishers and the rights of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook, or bundle (print + eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the print book, you get a free eBook copy of the same title.
  5. Proceed with the checkout process (payment can be made using a credit card, debit card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats does Packt support?

Our eBooks are currently available in a variety of formats such as PDF and ePub. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not in Adobe eBook Reader format, which has greater restrictions on security.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower in price than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.