Runtimes, Wakers, and the Reactor-Executor Pattern

In the previous chapter, we created our own pausable tasks (coroutines) by writing them as state machines. We created a common API for these tasks by requiring them to implement the Future trait. We also showed how we can create these coroutines using some keywords and programmatically rewrite them so that we don’t have to implement these state machines by hand, and instead write our programs pretty much the same way we normally would.

If we stop for a moment and take a bird's-eye view of what we have so far, it's conceptually pretty simple: we have an interface for pausable tasks (the Future trait), and we have two keywords (coroutine/wait) that mark the code we want rewritten as a state machine, dividing it into segments we can pause between.
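As a quick refresher, the simplified Future trait we ended up with in the previous chapter looks roughly like this (the exact definition lives in future.rs in the repository; treat the names here as illustrative):

pub trait Future {
    type Output;
    fn poll(&mut self) -> PollState<Self::Output>;
}

// What a single call to poll can report back to the caller.
pub enum PollState<T> {
    Ready(T),  // the task finished and produced its output
    NotReady,  // the task had to pause; poll it again later
}

Each call to poll drives the state machine one segment forward.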

However, we have no event loop, and we have no scheduler yet. In this chapter, we’ll expand on our example and add a runtime that allows us...

Technical requirements

The examples in this chapter will build on the code from our last chapter, so the requirements are the same. The examples will all be cross-platform and work on all platforms that Rust (https://doc.rust-lang.org/beta/rustc/platform-support.html#tier-1-with-host-tools) and mio (https://github.com/tokio-rs/mio#platforms) support. The only things you need are Rust installed and a local copy of the repository that belongs to the book. All the code in this chapter can be found in the ch08 folder.

To follow the examples step by step, you’ll also need corofy installed on your machine. If you didn’t install it in Chapter 7, install it now by going into the ch08/corofy folder in the repository and running this command:

cargo install --force --path .

Alternatively, you can just copy the relevant files in the repository when we come to the points where we use corofy to rewrite our coroutine/wait syntax. Both versions will be available to you there...

Introduction to runtimes and why we need them

As you know by now, you need to bring your own runtime for driving and scheduling asynchronous tasks in Rust.

Runtimes come in many flavors, from the popular Embassy embedded runtime (https://github.com/embassy-rs/embassy), which centers more on general multitasking and can replace the need for a real-time operating system (RTOS) on many platforms, to Tokio (https://github.com/tokio-rs/tokio), which centers on non-blocking I/O on popular server and desktop operating systems.

All runtimes in Rust need to do at least two things: schedule and drive objects implementing Rust’s Future trait to completion. Going forward in this chapter, we’ll mostly focus on runtimes for doing non-blocking I/O on popular desktop and server operating systems such as Windows, Linux, and macOS. This is also by far the most common type of runtime most programmers will encounter in Rust.
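To make "schedule and drive to completion" concrete, here is a deliberately naive sketch (using the simplified trait sketched above, not the standard library's Future): it drives a single future by polling it in a loop. It does complete the future, but it burns CPU while waiting and can only handle one task at a time, which is exactly what a proper scheduler and a way to sleep until something can progress are there to fix.

// The simplest possible "executor": poll one future until it's done.
fn run_to_completion<F: Future>(mut future: F) -> F::Output {
    loop {
        match future.poll() {
            PollState::Ready(value) => break value,
            // A real runtime would park the thread here and let a
            // reactor wake it up, instead of spinning.
            PollState::NotReady => continue,
        }
    }
}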

Taking control over how tasks are scheduled is very invasive...

Improving our base example

We'll create a version of the first example in Chapter 7 since it's the simplest one to start with. Our only focus is showing how the runtime can schedule and drive our futures more efficiently.

We start with the following steps:

  1. Create a new project and name it a-runtime (alternatively, navigate to ch08/a-runtime in the book’s repository).
  2. Copy future.rs and http.rs from the src folder of the first project we created in Chapter 7, a-coroutine (alternatively, copy them from ch07/a-coroutine in the book's repository), into the src folder of our new project.
  3. Make sure to add mio as a dependency by adding the following to Cargo.toml:
    [dependencies]
    mio = { version = "0.8", features = ["net", "os-poll"] }
  4. Create a new file in the src folder called runtime.rs.
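If you're wiring the project up by hand rather than copying it from the repository, main.rs also needs to declare the modules so the compiler picks up the files above. A minimal skeleton (the module names simply mirror the file names) could look like this:

mod future;
mod http;
mod runtime;

fn main() {
    // We'll fill this in once corofy has generated our state machine.
}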

We’ll use corofy to change the following coroutine/wait program into its state machine representation that...

Creating a proper runtime

So, if we visualize the degree of dependency between the different parts of our runtime, our current design could be described this way:

Figure 8.5 – Tight coupling between reactor and executor

If we want loose coupling between the reactor and the executor, we need an interface the reactor can use to signal the executor that it should wake up when an event has occurred that allows a future to progress. It's no coincidence that this type is called Waker (https://doc.rust-lang.org/stable/std/task/struct.Waker.html) in Rust's standard library. If we change our visualization to reflect this, it will look something like this:

Figure 8.6 – A loosely coupled reactor and executor

It’s no coincidence that we land on the same design as what we have in Rust today. It’s a minimal design from Rust’s point of view, but it allows for a wide variety of runtime designs without laying...

Step 1 – Improving our runtime design by adding a Reactor and a Waker

In this step, we’ll make the following changes:

  1. Change the project structure so that it reflects our new design.
  2. Find a way for the executor to sleep and wake up that doesn't rely directly on Poll, and create a Waker based on it that lets us wake the executor and identify which task is ready to progress.
  3. Change the trait definition for Future so that poll takes a &Waker as an argument.
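To make steps 2 and 3 a bit more tangible before we write the real code, here is a rough sketch of the direction we're heading in: sleeping and waking are built on std::thread's park/unpark, and the Waker carries everything needed to mark one task as runnable. All names are placeholders for this sketch; the actual definitions appear later in the chapter and in the ch08/b-reactor-executor source.

use std::{
    sync::{Arc, Mutex},
    thread::Thread,
};

// A handle the reactor can use to wake one specific task on one
// specific executor thread.
#[derive(Clone)]
pub struct Waker {
    thread: Thread,                      // handle to the executor's OS thread
    id: usize,                           // the task this Waker belongs to
    ready_queue: Arc<Mutex<Vec<usize>>>, // shared list of runnable task IDs
}

impl Waker {
    pub fn new(thread: Thread, id: usize, ready_queue: Arc<Mutex<Vec<usize>>>) -> Self {
        Waker { thread, id, ready_queue }
    }

    pub fn wake(&self) {
        // Mark the task as runnable, then wake the executor if it's parked.
        self.ready_queue.lock().unwrap().push(self.id);
        self.thread.unpark();
    }
}

// poll now receives a &Waker so leaf futures can hand it to the
// reactor before returning PollState::NotReady.
pub trait Future {
    type Output;
    fn poll(&mut self, waker: &Waker) -> PollState<Self::Output>;
}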

Tip

You’ll find this example in the ch08/b-reactor-executor folder. If you follow along by writing the examples from the book, I suggest that you create a new project called b-reactor-executor for this example by following these steps:

1. Create a new folder called b-reactor-executor.

2. Enter the newly created folder and run cargo init.

3. Copy everything in the src folder in the previous example, a-runtime, into the src folder of a new project...

Step 2 – Implementing a proper Executor

In this step, we’ll create an executor that will:

  • Hold many top-level futures and switch between them
  • Enable us to spawn new top-level futures from anywhere in our asynchronous program
  • Hand out Waker types so that the executor can sleep when there is nothing to do and wake up when one of the top-level futures can progress
  • Enable us to run several executors by having each run on its dedicated OS thread
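To tie the list above together before we write the real implementation, here is a rough, self-contained sketch of how such a single-threaded executor could be organized. It reuses the Waker and Future definitions sketched earlier; all names, as well as the Output = String bound (our example futures resolve to an HTTP response body), are placeholders, and the sketch leaves out making spawn callable from inside a running future, which the chapter's real code handles.

use std::{
    collections::HashMap,
    sync::{Arc, Mutex},
    thread,
};

type Task = Box<dyn Future<Output = String>>;

pub struct Executor {
    tasks: HashMap<usize, Task>,         // every top-level future we own
    ready_queue: Arc<Mutex<Vec<usize>>>, // IDs the Wakers have marked as runnable
    next_id: usize,
}

impl Executor {
    pub fn new() -> Self {
        Executor {
            tasks: HashMap::new(),
            ready_queue: Arc::new(Mutex::new(Vec::new())),
            next_id: 1,
        }
    }

    pub fn spawn<F: Future<Output = String> + 'static>(&mut self, future: F) {
        let id = self.next_id;
        self.next_id += 1;
        self.tasks.insert(id, Box::new(future));
        self.ready_queue.lock().unwrap().push(id); // poll it at least once
    }

    pub fn block_on(&mut self) {
        while !self.tasks.is_empty() {
            // Take the IDs that are ready to make progress right now.
            let ready: Vec<usize> =
                self.ready_queue.lock().unwrap().drain(..).collect();

            if ready.is_empty() {
                // Nothing can progress: sleep until a Waker unparks this thread.
                thread::park();
                continue;
            }

            for id in ready {
                // A Waker may fire after its task has already completed.
                let Some(mut task) = self.tasks.remove(&id) else { continue };

                let waker =
                    Waker::new(thread::current(), id, Arc::clone(&self.ready_queue));

                match task.poll(&waker) {
                    // Not finished: keep it around until it's woken again.
                    PollState::NotReady => { self.tasks.insert(id, task); }
                    // Finished: simply drop it.
                    PollState::Ready(_) => {}
                }
            }
        }
    }
}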

Note

It’s worth mentioning that our executor won’t be fully multithreaded in the sense that tasks/futures can’t be sent from one thread to another, and the different Executor instances will not know of each other. Therefore, executors can’t steal work from each other (no work-stealing), and we can’t rely on executors picking tasks from a global task queue.

The reason is that the Executor design will be much more complex if we go down that route, not only because of the added...

Step 3 – Implementing a proper Reactor

The final part of our example is the Reactor. Our Reactor will:

  • Efficiently wait and handle events that our runtime is interested in
  • Store a collection of Waker types and make sure to wake the correct Waker when it gets a notification on a source it’s tracking
  • Provide the necessary mechanisms for leaf futures such as HttpGetFuture to register and deregister interest in events
  • Provide a way for leaf futures to store the last received Waker
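Before we look at the real code, here is a compressed sketch of the core mechanic described in the list above, built on mio and reusing the Waker sketched earlier: an event loop that blocks until one of the sources we've registered has an event, and then wakes the task that stored a Waker for it. The Wakers alias and the function name are placeholders for this sketch; the chapter's actual Reactor also handles registering and deregistering interest on behalf of leaf futures.

use mio::{Events, Poll, Token};
use std::{
    collections::HashMap,
    sync::{Arc, Mutex},
};

// Maps a mio Token (we use the task ID) to the last Waker stored for it.
type Wakers = Arc<Mutex<HashMap<usize, Waker>>>;

// Runs on its own OS thread: block in mio until a registered source has
// an event, then wake whichever task was waiting on it.
fn event_loop(mut poll: Poll, wakers: Wakers) {
    let mut events = Events::with_capacity(100);
    loop {
        poll.poll(&mut events, None).expect("polling failed");
        for event in events.iter() {
            let Token(id) = event.token();
            // The task may already have finished and removed its Waker.
            if let Some(waker) = wakers.lock().unwrap().get(&id) {
                waker.wake();
            }
        }
    }
}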

When we’re done with this step, we should have everything we need for our runtime, so let’s get to it.

Start by opening the reactor.rs file.

The first thing we do is add the dependencies we need:

ch08/b-reactor-executor/src/runtime/reactor.rs

use crate::runtime::Waker;
use mio::{net::TcpStream, Events, Interest, Poll, Registry, Token};
use std::{
    collections::HashMap,
    sync::{
   ...

Experimenting with our new runtime

If you remember from Chapter 7, we implemented a join_all method to get our futures running concurrently. In libraries such as Tokio, you’ll find a join_all function too, and the slightly more versatile FuturesUnordered API that allows you to join a set of predefined futures and run them concurrently.

These are convenient methods to have, but they do force you to know in advance which futures you want to run concurrently. If the futures you run using join_all want to spawn new futures that run concurrently with their “parent” future, there is no way to do that using only these methods.

However, our newly created spawn functionality does exactly this. Let’s put it to the test!

An example using concurrency

Note

The exact same version of this program can be found in the ch08/c-runtime-executor folder.

Let’s try a new program that looks like this:

fn main() {
    let mut executor...

Summary

So, what a ride! As I said in the introduction to this chapter, this is one of the biggest chapters in the book, but even though you might not realize it, you've already got a better grasp of how asynchronous Rust works than most people do. Great work!

In this chapter, you learned a lot about runtimes and why Rust designed the Future trait and the Waker the way it did. You also learned about reactors and executors, Waker types, the Future trait, and different ways of achieving concurrency: joining a set of futures with join_all and spawning new top-level futures on the executor.

By now, you also have an idea of how we can achieve both concurrency and parallelism by combining our own runtime with OS threads.

Now, we've created our own async universe consisting of coroutine/wait, our own Future trait, our own Waker definition, and our own runtime. I've made sure that we don't stray away from the core ideas behind asynchronous programming in Rust so that everything...
