You're reading from Asynchronous Programming in Rust

Product type: Book
Published in: Feb 2024
Publisher: Packt
ISBN-13: 9781805128137
Edition: 1st Edition
Author: Carl Fredrik Samson

Carl Fredrik Samson is a popular technology writer and has been active in the Rust community since 2018. He has an MSc in Business Administration where he specialized in strategy and finance. When not writing, he's a father of two children and a CEO of a company with 300 employees. He's been interested in different kinds of technologies his whole life and his programming experience ranges from programming against old IBM mainframes to modern cloud computing, using everything from assembly to Visual Basic for Applications. He has contributed to several open source projects including the official documentation for asynchronous Rust.

Futures in Rust

In Chapter 5, we covered one of the most popular ways of modeling concurrency in a programming language: fibers/green threads. Fibers/green threads are an example of stackful coroutines. The other popular way of modeling asynchronous program flow is by using what we call stackless coroutines, and combining Rust’s futures with async/await is an example of that. We will cover this in detail in the next chapters.

This chapter will introduce Rust’s futures to you. The main goals of the chapter are to do the following:

  • Give you a high-level introduction to concurrency in Rust
  • Explain what Rust does and does not provide in the language and standard library when working with async code
  • Get to know why we need a runtime library in Rust
  • Understand the difference between a leaf future and a non-leaf future
  • Get insight into how to handle CPU-intensive tasks

To accomplish this, we’ll divide this chapter into the following...

What is a future?

A future is a representation of some operation that will be completed in the future.

Async in Rust uses a poll-based approach in which an asynchronous task will have three phases:

  1. The poll phase: A future is polled, which results in the task progressing until a point where it can no longer make progress. We often refer to the part of the runtime that polls a future as an executor.
  2. The wait phase: An event source, most often referred to as a reactor, registers that a future is waiting for an event to happen and makes sure that it will wake the future when that event is ready.
  3. The wake phase: The event happens and the future is woken up. It’s now up to the executor that polled the future in step 1 to schedule the future to be polled again and make further progress until it completes or reaches a new point where it can’t make further progress and the cycle repeats.
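The three phases above can be sketched in plain std Rust with a hand-written future and a minimal polling loop. Note that `Countdown`, `NoopWaker`, and `poll_to_completion` are illustrative names invented for this sketch, not std or Tokio APIs; a real executor would park instead of busy-looping:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// A hand-written future: completes after being polled `remaining` more times.
struct Countdown {
    remaining: u32,
}

impl Future for Countdown {
    type Output = &'static str;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.remaining == 0 {
            Poll::Ready("done") // the task has completed
        } else {
            self.remaining -= 1;
            cx.waker().wake_by_ref(); // wake phase: request to be polled again
            Poll::Pending // poll phase ended; no more progress for now
        }
    }
}

// A waker that does nothing; this toy loop simply re-polls instead of sleeping.
struct NoopWaker;

impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

fn poll_to_completion<F: Future>(fut: F) -> F::Output {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    loop {
        // The poll phase: drive the future one step.
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    assert_eq!(poll_to_completion(Countdown { remaining: 3 }), "done");
}
```

Running this polls `Countdown` four times: three `Pending` results (each requesting a wake-up) and a final `Ready`.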

Now, when we talk about futures, I find it useful to make a distinction...

Leaf futures

Runtimes create leaf futures, which represent a resource such as a socket.

This is an example of a leaf future:

// `connect` returns a future immediately; nothing happens until it's polled/awaited
let mut stream = tokio::net::TcpStream::connect("127.0.0.1:3000");

Operations on these resources, such as reading from a socket, will be non-blocking and return a future, which we call a leaf future since it’s the future that we’re actually waiting on.

It’s unlikely that you’ll implement a leaf future yourself unless you’re writing a runtime, but we’ll go through how they’re constructed in this book as well.

It’s also unlikely that you’ll pass a leaf future to a runtime and run it to completion alone, as you’ll understand by reading the next section.

Non-leaf futures

Non-leaf futures are the kind of futures we, as users of a runtime, write ourselves using the async keyword to create a task that can be run on the executor.

The bulk of an async program will consist of non-leaf futures, which are a kind of pause-able computation. This is an important distinction since these futures represent a set of operations. Often, such a task will await a leaf future as one of many operations to complete the task.

This is an example of a non-leaf future:

// assumes `use tokio::net::TcpStream;` and `use tokio::io::AsyncWriteExt;`
let non_leaf = async {
    let mut stream = TcpStream::connect("127.0.0.1:3000").await.unwrap();
    println!("connected!");
    let result = stream.write(b"hello world\n").await;
    println!("message sent!");
    ...
};

The two lines containing .await indicate the points where we pause the execution, yield control to a runtime, and eventually resume. In...

A mental model of an async runtime

I find it easier to reason about how futures work by creating a high-level mental model we can use. To do that, I have to introduce the concept of a runtime that will drive our futures to completion.

Note

The mental model I create here is not the only way to drive futures to completion, and Rust’s futures do not impose any restrictions on how you actually accomplish this task.

A fully working async system in Rust can be divided into three parts:

  • Reactor (responsible for notifying about I/O events)
  • Executor (scheduler)
  • Future (a task that can stop and resume at specific points)

So, how do these three parts work together?

Let’s take a look at a diagram that shows a simplified overview of an async runtime:

Figure 6.1 – Reactor, executor, and waker


In step 1 of the figure, an executor holds a list of futures. It will try to run the future by polling it (the poll phase...
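The interplay between the three parts can be sketched with std alone: a `block_on` executor that parks its thread while waiting, and a timer future whose background thread plays the reactor’s role. All names here (`ThreadWaker`, `Shared`, `Timer`, `timer`) are invented for this sketch, not runtime APIs:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};
use std::time::Duration;

// Executor half: a waker that unparks the thread running `block_on`.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark(); // wake phase: tell the executor to poll again
    }
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(), // wait phase: sleep until woken
        }
    }
}

// State shared between the future and the "reactor" thread.
struct Shared {
    done: bool,
    waker: Option<Waker>,
}

struct Timer {
    shared: Arc<Mutex<Shared>>,
}

impl Future for Timer {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let mut s = self.shared.lock().unwrap();
        if s.done {
            Poll::Ready(())
        } else {
            // Register interest so the reactor knows whom to wake.
            s.waker = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

fn timer(duration: Duration) -> Timer {
    let shared = Arc::new(Mutex::new(Shared { done: false, waker: None }));
    let reactor_side = Arc::clone(&shared);
    // Reactor half: a thread that waits for the event, then wakes the future.
    thread::spawn(move || {
        thread::sleep(duration);
        let mut s = reactor_side.lock().unwrap();
        s.done = true;
        if let Some(w) = s.waker.take() {
            w.wake();
        }
    });
    Timer { shared }
}

fn main() {
    block_on(timer(Duration::from_millis(10)));
    println!("timer fired");
}
```

A real reactor multiplexes many I/O sources with epoll/kqueue/IOCP rather than spawning a thread per event, but the handshake is the same: the future registers a Waker, and the reactor calls wake when the event is ready.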

What the Rust language and standard library take care of

Rust only provides what’s necessary to model asynchronous operations in the language. Basically, it provides the following:

  • A common interface representing an operation that will be completed in the future, through the Future trait
  • An ergonomic way of creating tasks (stackless coroutines to be precise) that can be suspended and resumed through the async and await keywords
  • A defined interface to wake up a suspended task through the Waker type

That’s really all Rust’s standard library does. As you can see, there is no definition of non-blocking I/O, of how these tasks are created, or of how they’re run. There is no non-blocking version of the standard library, so to actually run an asynchronous program, you have to either create or decide on a runtime to use.
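One consequence of this division of labor is that a future created with async does nothing on its own: it’s a lazy value that only makes progress when some executor polls it. A small std-only demonstration (the `RAN` flag and `lazy_future` are illustrative names for this sketch):

```rust
use std::future::Future;
use std::sync::atomic::{AtomicBool, Ordering};

// Records whether the async body ever executed.
static RAN: AtomicBool = AtomicBool::new(false);

// The async block compiles to a state machine implementing Future.
fn lazy_future() -> impl Future<Output = ()> {
    async {
        RAN.store(true, Ordering::SeqCst);
    }
}

fn main() {
    let fut = lazy_future();
    // Creating the future runs nothing: Rust futures are lazy.
    assert!(!RAN.load(Ordering::SeqCst));
    // Dropped without ever being polled, so the body never runs at all.
    drop(fut);
    assert!(!RAN.load(Ordering::SeqCst));
}
```

The standard library hands us the Future trait, async/await, and the Waker type, but polling `fut` to completion is the job of whatever runtime we pick.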

I/O vs CPU-intensive tasks

As you know now, what you normally write are called non-leaf futures. Let’s take a look at this async block using pseudo-Rust as an example:

let non_leaf = async {
    let mut stream = TcpStream::connect("127.0.0.1:3000").await.unwrap();
    // request a large dataset
    let result = stream.write(get_dataset_request).await.unwrap();
    // wait for the dataset
    let mut response = vec![];
    stream.read(&mut response).await.unwrap();
    // do some CPU-intensive analysis on the dataset
    let report = analyzer::analyze_data(response).unwrap();
    // send the results back
    stream.write(report).await.unwrap();
};

The .await points are where we yield control to the runtime’s executor. It’s important to be aware...
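A common way to keep CPU-intensive work like the analysis step from blocking the executor is to move it onto a separate thread and receive the result through a channel; runtimes such as Tokio package this pattern as spawn_blocking. A std-only sketch of the idea, where `analyze` stands in for the hypothetical `analyzer::analyze_data` and `offload` is an invented helper:

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for a CPU-heavy analysis function.
fn analyze(data: Vec<u8>) -> usize {
    data.iter().map(|b| *b as usize).sum()
}

// Run the heavy work on a dedicated thread so the thread driving the
// executor stays free to poll other futures; the caller picks up the
// result from the returned channel when it's ready.
fn offload(data: Vec<u8>) -> mpsc::Receiver<usize> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(analyze(data));
    });
    rx
}

fn main() {
    let rx = offload(vec![1, 2, 3]);
    assert_eq!(rx.recv().unwrap(), 6);
}
```

In a real async program you would await an async-aware channel (or the JoinHandle that spawn_blocking returns) instead of the blocking `recv` shown here, but the principle is the same: long-running CPU work gets its own thread.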

Summary

So, in this short chapter, we introduced Rust’s futures to you. You should now have a basic idea of what Rust’s async design looks like, what the language provides for you, and what you need to get elsewhere. You should also have an idea of what a leaf future and a non-leaf future are.

These aspects are important as they’re design decisions built into the language. You know by now that Rust uses stackless coroutines to model asynchronous operations, but since a coroutine doesn’t do anything in and of itself, it’s important to know that the choice of how to schedule and run these coroutines is left up to you.

We’ll get a much better understanding as we start to explain how this all works in detail as we move forward.

Now that we’ve seen a high-level overview of Rust’s futures, we’ll start explaining how they work from the ground up. The next chapter will cover the concept of futures and how they’re...
