Chapter 1: Tools of the Trade – Rust Toolchains and Project Structures
Rust, as a modern systems programming language, has many inherent characteristics that make it easier to write safe, reliable, and performant code. Rust also has a compiler that enables a relatively fearless code refactoring experience as a project grows in size and complexity. But any programming language in itself is incomplete without the toolchains that support the software development life cycle. After all, where would software engineers be without their tools?
This chapter specifically discusses the Rust toolchain and its ecosystem, and techniques to structure code within Rust projects to write safe, testable, performant, documented, and maintainable code that is also optimized to run in the intended target environment.
The following are the key learning outcomes for this chapter:
- Choosing the right configuration of Rust for your project
- Cargo introduction and project structure
- Cargo build management
- Cargo dependencies
- Writing test scripts and doing automated unit and integration testing
- Automating the generation of technical documentation
By the end of this chapter, you will have learned how to select the right project type and toolchain; organize project code efficiently; add external and internal libraries as dependencies; build the project for development, test, and production environments; automate testing; and generate documentation for your Rust code.
Technical requirements
Rustup must be installed in the local development environment. Use this link for installation: https://github.com/rust-lang/rustup.
Refer to the following link for official installation instructions: https://www.rust-lang.org/tools/install.
After installation, check that rustc and cargo have been installed correctly with the following commands:
rustc --version
cargo --version
You must have access to any code editor of your choice.
Some of the code and commands in this chapter, especially those related to shared libraries and setting paths, require a Linux system environment. It is recommended to install a local virtual machine such as VirtualBox or equivalent with a Linux installation for working with the code in this chapter. Instructions to install VirtualBox can be found at https://www.virtualbox.org.
The Git repo for the examples in this chapter can be found at https://github.com/PacktPublishing/Practical-System-Programming-for-Rust-Developers/tree/master/Chapter01.
Choosing the right Rust configuration for your project
When you start with Rust programming, you have to first select a Rust release channel and a Rust project type.
This section discusses details of the Rust release channels and gives guidance on how to choose among them for your project.
Rust also allows you to build different types of binaries – standalone executables, static libraries, and dynamic libraries. If you know upfront what you will be building, you can create the right project type with the scaffolding code generated for you.
We will cover these in this section.
Choosing a Rust release channel
The Rust programming language is developed continually and there are three releases being developed simultaneously at any point in time, each called a release channel. Each channel has a purpose and has varying features and stability characteristics. The three release channels are stable, beta, and nightly. Unstable language features and libraries are developed in the nightly and beta channels, while stability guarantees are provided on the stable channel.
Rustup is the tool that installs the Rust compiler, the Rust Standard Library, the Cargo package manager, and other core tools for activities such as code formatting, testing, benchmarking, and documentation. All these tools are available in multiple flavors called toolchains. A toolchain is a combination of a release channel and a host, and optionally also has an associated archive date.
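To illustrate, a toolchain name follows the pattern <channel>[-<date>][-<host>]. The following are hypothetical examples of the kind of toolchain names you might see listed by rustup show:
stable-x86_64-unknown-linux-gnu
nightly-2020-07-27-x86_64-pc-windows-msvc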
Rustup can install a toolchain from a release channel, or from other sources such as official archives and local builds. Rustup also determines the toolchain depending on the host platform. Rust is officially available on Linux, Windows, and macOS. Rustup thus is called a tool multiplexer as it installs and manages multiple toolchains, and in this sense is similar to rbenv, pyenv, or nvm in Ruby, Python, and Node.js respectively.
Rustup manages the complexity associated with toolchains while keeping the installation process fairly straightforward, as it provides sensible defaults. These can later be modified by the developer.
Note
Rust's stable version is released every 6 weeks; for example, Rust 1.42.0 was released on March 12, 2020, and 6 weeks later to the day, Rust 1.43 was released on April 23, 2020.
A new nightly version of Rust is released every day. Once every 6 weeks, the latest master branch of nightly becomes the beta version.
Most Rust developers primarily use the stable channel. Beta channel releases are not used actively, but only to test for any regressions in the Rust language releases.
The nightly channel is for active language development and is published every night. The nightly channel lets Rust develop new and experimental features and allows early adopters to test them before they are stabilized. The price to be paid for early access is that there may be breaking changes to these features before they get into stable releases. Rust uses feature flags to determine what features are enabled in a given nightly release. A user who wants to use a cutting-edge feature in a nightly version has to annotate the code with the appropriate feature flag.
An example of a feature flag is shown here:
#![feature(try_trait)]
Note that beta and stable releases cannot use feature flags.
Rustup is configured to use the stable channel by default. To work with other channels, here are a few commands. For a complete list, refer to the official link: https://github.com/rust-lang/rustup.
To install nightly Rust, use this command:
rustup toolchain install nightly
To activate nightly Rust globally, use this command:
rustup default nightly
To activate nightly at a directory level, use this command:
rustup override set nightly
To get the version of the compiler in nightly Rust, use this command:
rustup run nightly rustc --version
To reset rustup to use the stable channel, use this command:
rustup default stable
To show the installed toolchains and which is currently active, use this command:
rustup show
To update the installed toolchains to the latest versions, use this command:
rustup update
Note that once rustup default <channel-name> is set, other related tools, such as Cargo and Rustc, use the default channel set.
Which Rust channel should you use for your project? For any production-bound projects, it is advisable to use only the stable release channel. For any experimental projects, the nightly or beta channels may be used, with caution as there may be breaking changes needed for the code in future releases.
Selecting a Rust project type
There are two basic types of projects in Rust: libraries and binaries (or executables).
A library is a self-contained piece of code that is intended for use by other programs. The purpose of a library is to enable code reuse and speed up the development cycle by leveraging the hard work of other open source developers. A library, also called a library crate (or lib crate) in Rust, can be published to a public package registry (such as crates.io) where it can be discovered and downloaded by other developers for use in their own programs. Compilation of a library crate begins in the src/lib.rs file, which is its crate root.
A binary is a standalone executable that may link other libraries into a single binary. A binary project type is also called a binary crate (or bin crate). Program execution for a bin crate starts in the main() function that is present in the src/main.rs file.
It is important to determine whether you want to build a binary or a library program in Rust while initializing the project. We will see examples of these two types of projects later in this chapter. It's time to introduce the star tool and Swiss-Army knife in the Rust ecosystem, Cargo.
Introducing Cargo and project structures
Cargo is the official build and dependency management tool for Rust. It has many of the features of the other popular tools in this segment, such as Ant, Maven, Gradle, npm, CocoaPods, pip, and yarn, but provides a far more seamless and integrated developer experience for compiling code, downloading and compiling dependent libraries (called crates in Rust), linking libraries, and building development and release binaries. It also performs the incremental build of the code to reduce the compilation time as the programs evolve. In addition, it creates an idiomatic project structure while creating new Rust projects.
In short, Cargo as an integrated toolchain gives a seamless experience in the day-to-day tasks of creating a new project, building it, managing external dependencies, debugging, testing, generating documentation, and release management.
Cargo is the tool that can be used to set up the basic project scaffolding structure for a new Rust project. Before we create a new Rust project with Cargo, let's first understand the options for organizing code within Rust projects:

Figure 1.1 – Cargo project structure and hierarchy
Figure 1.1 shows how code can be organized within a Cargo-generated Rust project.
The smallest standalone unit of organization of code in a Rust project is a function. (Technically, the smallest unit of code organization is a block of code, but it is part of a function.) A function can accept zero or more input parameters, performs processing, and optionally, returns a value. A set of functions is organized into a source file with a specific name; for example, main.rs is a source file.
The next highest level of code organization is a module. Code within modules has its own unique namespace. A module can contain user-defined data types (such as structs, traits, and enums), constants, type aliases, other module imports, and function declarations. Modules can be nested within one another. Multiple module definitions can be defined within a single source file for smaller projects, or a module can contain code spread across multiple source files for larger projects. This type of organization is also referred to as a module system.
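As an illustration, here is a minimal sketch (with hypothetical module and function names) of how nested modules and their namespaces might look within a single source file such as main.rs:
// A module containing a nested module, a constant, and a function.
mod math {
    pub mod arithmetic {
        // A public function inside a nested module.
        pub fn add(a: i32, b: i32) -> i32 {
            a + b
        }
    }

    pub const VERSION: &str = "0.1.0";
}

fn main() {
    // Items are referenced through their module path.
    let sum = math::arithmetic::add(2, 3);
    println!("Sum is {} (module version {})", sum, math::VERSION);
}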
Multiple modules can be organized into crates. Crates also serve as the unit of code sharing across Rust projects. A crate is either a library or a binary. A crate developed by one developer and published to a public repository can be reused by another developer or team. The crate root is the source file that the Rust compiler starts from. For binary crates, the crate root is main.rs, and for library crates it is lib.rs.
One or more crates can be combined into a package. A package contains a Cargo.toml file, which contains information on how to build the package, including downloading and linking the dependent crates. When Cargo is used to create a new Rust project, it creates a package. A package must contain at least one crate – either a library or a binary crate. A package may contain any number of binary crates, but it can contain either zero or only one library crate.
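For instance, a hypothetical package layout containing one library crate and two binary crates might look as follows (by convention, Cargo treats src/main.rs and each file under src/bin as a separate binary crate):
├── Cargo.toml
├── src
│   ├── lib.rs
│   ├── main.rs
│   └── bin
│       └── another-binary.rs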
As Rust projects grow in size, there may be a need to split up a package into multiple units and manage them independently. A set of related packages can be organized as a workspace. A workspace is a set of packages that share the same Cargo.lock file (containing details of specific versions of dependencies that are shared across all packages in the workspace) and output directory.
Let's see a few examples to understand various types of project structures in Rust.
Automating build management with Cargo
When Rust code is compiled and built, the generated binary can either be a standalone executable binary or a library that can be used by other projects. In this section, we will look at how Cargo can be used to create Rust binaries and libraries, and how to configure metadata in Cargo.toml
to provide build instructions.
Building a basic binary crate
In this section, we will build a basic binary crate. A binary crate, when built, produces an executable binary file. This is the default crate type for the cargo tool. Let's now look at the command to create a binary crate.
- The first step is to generate a Rust source package using the cargo new command.
- Run the following command in a terminal session inside your working directory to create a new package:
cargo new --bin first-program && cd first-program
The --bin flag is to tell Cargo to generate a package that, when compiled, would produce a binary crate (executable). first-program is the name of the package given. You can specify a name of your choice.
- Once the command executes, you will see the following directory structure:
Figure 1.2 – Directory structure
The Cargo.toml file contains the metadata for the package:
[package]
name = "first-program"
version = "0.1.0"
authors = [<your email>]
edition = "2018"
And the src directory contains one file called main.rs:
fn main() {
    println!("Hello, world!");
}
- To generate a binary crate (or executable) from this package, run the following command:
cargo build
This command creates a folder called target in the project root and creates a binary crate (executable) with the same name as the package name (first-program, in our case) in the location target/debug.
- Execute the following from the command line:
cargo run
You will see the following printed to your console:
Hello, world!
Note on path setting to execute binaries
Note that LD_LIBRARY_PATH should be set to include the toolchain library in the path. If your executable fails with the error Image not found, execute the following command for Unix-like platforms; for Windows, alter the syntax suitably:
export LD_LIBRARY_PATH=$(rustc --print sysroot)/lib:$LD_LIBRARY_PATH
Alternatively, you can build and run code with one command, cargo run, which is convenient for development purposes.
By default, the name of the binary crate (executable) generated is the same as the name of the source package. If you wish to change the name of the binary crate, add the following lines to Cargo.toml:
[[bin]]
name = "new-first-program"
path = "src/main.rs"
- Run the following in the command line:
cargo run --bin new-first-program
You will see a new executable with the name new-first-program in the target/debug folder. You will see Hello, world! printed to your console.
- A cargo package can contain the source for multiple binaries. Let's learn how to add another binary to our project. In Cargo.toml, add a new [[bin]] target below the first one:
[[bin]]
name = "new-first-program"
path = "src/main.rs"

[[bin]]
name = "new-second-program"
path = "src/second.rs"
- Next, create a new file, src/second.rs, and add the following code:
fn main() {
    println!("Hello, for the second time!");
}
- Run the following:
cargo run --bin new-second-program
You will see the statement Hello, for the second time! printed to your console. You'll also find a new executable created in the target/debug directory with the name new-second-program.
Congratulations! You have learned how to do the following:
- Create your first Rust source package and compile it into an executable binary crate
- Give a new name to the binary, different from the package name
- Add a second binary to the same cargo package
Note that a cargo package can contain one or more binary crates.
Configuring Cargo
A cargo package has an associated Cargo.toml file, which is also called the manifest.
The manifest, at a minimum, contains the [package] section but can contain many other sections. A subset of the sections is listed here:
Specifying output targets for the package: Cargo packages can have five types of targets:
- [[bin]]: A binary target is an executable program that can be run after it is built.
- [lib]: A library target produces a library that can be used by other libraries and executables.
- [[example]]: This target is useful for libraries to demonstrate the use of external APIs to users through example code. The example source code located in the examples directory can be built into executable binaries using this target.
- [[test]]: Files located in the tests directory represent integration tests and each of these can be compiled into a separate executable binary.
- [[bench]]: Benchmark functions defined in libraries and binaries are compiled into separate executables.
For each of these targets, the configuration can be specified, including parameters such as the name of the target, the source file of the target, and whether you want cargo to automatically run test scripts and generate documentation for the target. You may recall that in the previous section, we changed the name and set the source file for the generated binary executable.
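For example, a minimal sketch of a target configuration in Cargo.toml might look like the following (the name and path values are hypothetical; the test and doc flags control whether the target is included in test runs and in generated documentation by default):
[[bin]]
name = "my-tool"
path = "src/main.rs"
test = false
doc = false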
Specifying dependencies for the package: The source files in a package may depend on other internal or external libraries, which are also called dependencies. Each of these in turn may depend on other libraries and so on. Cargo downloads the list of dependencies specified under this section and links them to the final output targets. The various types of dependencies include the following:
- [dependencies]: Package library or binary dependencies
- [dev-dependencies]: Dependencies for examples, tests, and benchmarks
- [build-dependencies]: Dependencies for build scripts (if any are specified)
- [target]: This is for the cross-compilation of code for various target architectures. Note that this is not to be confused with the output targets of the package, which can be lib, bin, and so on.
Specifying build profiles: There are four types of profiles that can be specified while building a cargo package:
- dev: The cargo build command uses the dev profile by default. Packages built with this option are optimized for compile-time speed.
- release: The cargo build --release command enables the release profile, which is suitable for production release, and is optimized for runtime speed.
- test: The cargo test command uses this profile. This is used to build test executables.
- bench: The cargo bench command creates the benchmark executable, which automatically runs all functions annotated with the #[bench] attribute.
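The default settings of these profiles can also be overridden in Cargo.toml. As a minimal sketch (the values shown are only illustrative), the following entries raise the optimization level for the dev profile and include debug symbols in release builds:
[profile.dev]
opt-level = 1

[profile.release]
debug = true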
Specifying the package as a workspace: A workspace is a unit of organization where multiple packages can be grouped together into a project and is useful to save disk space and compilation time when there are shared dependencies across a set of related packages. The [workspace] section can be used to define the list of packages that are part of the workspace.
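As an illustration, the root Cargo.toml of a workspace can consist of just a [workspace] section listing its member packages; the package names in this sketch are hypothetical:
[workspace]
members = [
    "core-lib",
    "cli-app",
]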
Building a static library crate
We have seen how to create binary crates. Let's now learn how to create a library crate:
cargo new --lib my-first-lib
The default directory structure of a new cargo project is as follows:
├── Cargo.toml
├── src
│   └── lib.rs
Add the following code in src/lib.rs:
pub fn hello_from_lib(message: &str) {
    println!("Printing Hello {} from library", message);
}
Run the following:
cargo build
You will see the library built under target/debug and it will have the name libmy_first_lib.rlib.
To invoke the function in this library, let's build a small binary crate. Create a bin directory under src, and a new file, src/bin/mymain.rs.
Add the following code:
use my_first_lib::hello_from_lib;

fn main() {
    println!("Going to call library function");
    hello_from_lib("Rust system programmer");
}
The use my_first_lib::hello_from_lib statement tells the compiler to bring the library function into the scope of this program.
Run the following:
cargo run --bin mymain
You will see the print statement in your console. Also, the binary mymain will be placed in the target/debug folder along with the library we wrote earlier. The binary crate looks for the library in the same folder, which it finds in this case. Hence it is able to invoke the function within the library.
If you want to place the mymain.rs file in another location (instead of within src/bin), then add a target in Cargo.toml and mention the name and path of the binary as shown in the following example, and move the mymain.rs file to the specified location:
[[bin]]
name = "mymain"
path = "src/mymain.rs"
Run cargo run --bin mymain and you will see the println output in your console.
Automating dependency management
You learned in the previous section how Cargo can be used to set up the base project directory structure and scaffolding for a new project, and how to build various types of binary and library crates. We will look at the dependency management features of Cargo in this section.
Rust comes with a built-in standard library consisting of language primitives and commonly used functions, but it is small by design (compared to other languages). Most real-world programs in Rust depend on additional external libraries to improve functionality and developer productivity. Any such external code that is used is a dependency for the program. Cargo makes it easy to specify and manage dependencies.
In the Rust ecosystem, crates.io is the central public package registry for discovering and downloading libraries (called packages or crates in Rust). It is similar to npm in the JavaScript world. Cargo uses crates.io as the default package registry.
Dependencies are specified in the [dependencies] section of Cargo.toml. Let's see an example.
Start a new project with this command:
cargo new deps-example && cd deps-example
In Cargo.toml, make the following entry to include an external library:
[dependencies]
chrono = "0.4.0"
Chrono is a datetime library. This is called a dependency because our deps-example crate depends on this external library for its functionality.
When you run cargo build, cargo looks for a crate on crates.io with this name and version. If found, it downloads this crate along with all of its dependencies, compiles them all, and updates a file called Cargo.lock with the exact versions of packages downloaded. The Cargo.lock file is a generated file and not meant for editing.
Each dependency in Cargo.toml is specified in a new line and takes the format <crate-name> = "<semantic-version-number>". Semantic versioning or Semver has the form X.Y.Z, where X is the major version number, Y is the minor version, and Z is the patch version.
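By default, Cargo treats a bare version string such as "0.4.0" as a caret requirement, which allows any later semver-compatible version to be selected. A few illustrative requirement forms are sketched here (the crate names are only examples):
[dependencies]
chrono = "0.4.0"    # same as ^0.4.0: >=0.4.0, <0.5.0
serde = "~1.0.100"  # tilde requirement: >=1.0.100, <1.1.0
rand = "=0.7.3"     # exact version only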
Specifying the location of a dependency
There are many ways to specify the location and version of dependencies in Cargo.toml, some of which are summarized here:
- Crates.io registry: This is the default option and all that is needed is to specify the package name and version string as we did earlier in this section.
- Alternative registry: While crates.io is the default registry, Cargo provides the option to use an alternate registry. The registry name has to be configured in the .cargo/config file, and in Cargo.toml, an entry is to be made with the registry name, as shown in the example here:
[dependencies]
cratename = { version = "2.1", registry = "alternate-registry-name" }
- Git repository: A Git repo can be specified as the dependency. Here is how to do it:
[dependencies]
chrono = { git = "https://github.com/chronotope/chrono", branch = "master" }
Cargo will get the repo at the branch and location specified, and look for its Cargo.toml file in order to fetch its dependencies.
- Specify a local path: Cargo supports path dependencies, which means the library can be a sub-crate within the main cargo package. While building the main cargo package, the sub-crates that have also been specified as dependencies will be built. But dependencies specified with only a path cannot be uploaded to the crates.io public registry. A minimal sketch of a path dependency is shown after this list.
- Multiple locations: Cargo supports the option to specify both a registry version and either a Git or path location. For local builds, the Git or path version is used, and the registry version will be used when the package is published to crates.io.
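Here is a minimal sketch of a path dependency (the crate name and relative path are hypothetical):
[dependencies]
my-utils = { path = "../my-utils" }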
Using dependent packages in source code
Once the dependencies are specified in the Cargo.toml file in any of the preceding formats, we can use the external library in the package code as shown in the following example. Add the following code to src/main.rs:
use chrono::Utc;

fn main() {
    println!("Hello, time now is {:?}", Utc::now());
}
The use statement tells the compiler to bring the Utc module of the chrono package into the scope of this program. We can then access the function now() from the Utc module to print out the current date and time. The use statement is not mandatory. An alternative way to print the datetime would be as follows:
fn main() {
    println!("Hello, time now is {:?}", chrono::Utc::now());
}
This would give the same result. But if you have to use functions from the chrono package multiple times in code, it is more convenient to bring chrono and required modules into scope once using the use statement, and it becomes easier to type.
It is also possible to rename the imported package with the as keyword:
use chrono as time;

fn main() {
    println!("Hello, time now is {:?}", time::Utc::now());
}
For more details on managing dependencies, refer to the Cargo docs: https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html.
In this section, we have seen how to add dependencies to a package. Any number of dependencies can be added to Cargo.toml and used within the program. Cargo makes the dependency management process quite a pleasant experience.
Let's now look at another useful feature of Cargo – running automated tests.
Writing and running automated tests
The Rust programming language has built-in support for writing automated tests.
Rust tests are basically Rust functions that verify whether the other non-test functions written in the package work as intended. They invoke those functions with the specified data and assert that the return values are as expected.
Rust has two types of tests – unit tests and integration tests.
Writing unit tests in Rust
Create a new Rust package with the following command:
cargo new test-example && cd test-example
Write a new function that returns the process ID of the currently running process. We will look at the details of process handling in a later chapter, so you may just type in the following code, as the focus here is on writing unit tests:
use std::process;

fn main() {
    println!("{}", get_process_id());
}

fn get_process_id() -> u32 {
    process::id()
}
We have written a simple (silly) function to use the standard library process module and retrieve the process ID of the currently running process.
Run the code using cargo check to confirm there are no syntax errors.
Let's now write a unit test. Note that we cannot know upfront what the process ID is going to be, so all we can test is whether a number is being returned:
#[test]
fn test_if_process_id_is_returned() {
    assert!(get_process_id() > 0);
}
Run cargo test. You will see that the test has passed successfully, as the function returns a non-zero positive integer.
Note that we have written the unit tests in the same source file as the rest of the code. In order to tell the compiler that this is a test function, we use the #[test] annotation. The assert! macro (available in the standard Rust library) is used to check whether a condition evaluates to true. There are two other macros available, assert_eq! and assert_ne!, which are used to test whether the two arguments passed to these macros are equal or not.
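For instance, a small sketch of how these macros might be used follows (the values compared here are arbitrary; on failure, both macros print the two values that were compared):
#[test]
fn test_sum_is_equal() {
    assert_eq!(2 + 2, 4);
}

#[test]
fn test_sum_is_not_equal() {
    assert_ne!(2 + 2, 5);
}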
A custom error message can also be specified:
#[test]
fn test_if_process_id_is_returned() {
    assert_ne!(get_process_id(), 0, "There is error in code");
}
To compile but not run the tests, use the --no-run option with the cargo test command.
The preceding example has only one simple test function, but as the number of tests increases, the following problems arise:
- How do we write any helper functions needed for test code and differentiate it from the rest of the package code?
- How can we prevent the compiler from compiling tests as part of each build (to save time) and not include test code as part of the normal build (saving disk/memory space)?
In order to provide more modularity and to address the preceding questions, it is idiomatic in Rust to group test functions in a tests module:
#[cfg(test)]
mod tests {
    use super::get_process_id;

    #[test]
    fn test_if_process_id_is_returned() {
        assert_ne!(get_process_id(), 0, "There is error in code");
    }
}
Here are the changes made to the code:
- We have moved the test function under the tests module.
- We have added the cfg attribute, which tells the compiler to compile test code only if we are trying to run tests (that is, only for cargo test, not for cargo build).
- There is a use statement, which brings the get_process_id function into the scope of the tests module. Note that tests is an inner module, and so we use the super:: prefix to bring the function that is being tested into the scope of the tests module.
cargo test will now give the same results. But what we have achieved is greater modularity, and we've also allowed for the conditional compilation of test code.
Writing integration tests in Rust
In the Writing unit tests in Rust section, we saw how to define a tests module to hold the unit tests. This is used to test fine-grained pieces of code such as an individual function call. Unit tests are small and have a narrow focus.
For testing broader test scenarios involving a larger scope of code such as a workflow, integration tests are needed. It is important to write both types of tests to fully ensure that the library works as expected.
To write integration tests, the convention in Rust is to create a tests directory in the package root and create one or more files under this folder, each containing one integration test. Each file under the tests directory is treated as an individual crate.
But there is a catch. Integration tests in Rust are not available for binary crates, only library crates. So, let's create a new library crate:
cargo new --lib integ-test-example && cd integ-test-example
In src/lib.rs, replace the existing code with the following. This is the same code we wrote earlier, but this time it is in lib.rs:
use std::process;

pub fn get_process_id() -> u32 {
    process::id()
}
Let's create a tests folder and create a file, tests/integration_test1.rs. Add the following code in this file:
use integ_test_example;

#[test]
fn test1() {
    assert_ne!(integ_test_example::get_process_id(), 0, "Error in code");
}
Note the following changes to the test code compared to unit tests:
- Integration tests are external to the library, so we have to bring the library into the scope of the integration test. This is simulating how an external user of our library would call a function from the public interface of our library. This is in place of the super:: prefix used in unit tests to bring the tested function into scope.
- We did not have to specify the #[cfg(test)] annotation with integration tests, because these are stored in a separate folder and cargo compiles files in this directory only when we run cargo test.
- We still have to specify the #[test] attribute for each test function to tell the compiler these are the test functions (and not helper/utility code) to be executed.
Run cargo test. You will see that this integration test has been run successfully.
Controlling test execution
The cargo test command compiles the source code in test mode and runs the resultant binary. cargo test can be run in various modes by specifying command-line options. The following is a summary of the key options.
Running a subset of tests by name
If there are a large number of tests in a package, cargo test runs all tests by default each time. To run particular test cases by name, the following option can be used:
cargo test -- testfunction1 testfunction2
To verify this, let's replace the code in the integration_test1.rs file with the following:
use integ_test_example;

#[test]
fn files_test1() {
    assert_ne!(integ_test_example::get_process_id(), 0, "Error in code");
}

#[test]
fn files_test2() {
    assert_eq!(1 + 1, 2);
}

#[test]
fn process_test1() {
    assert!(true);
}
This last dummy test function is there to demonstrate running selective test cases.
Run cargo test and you will see all three tests executed.
Run cargo test files_test1 and you can see files_test1 executed.
Run cargo test files_test2 and you can see files_test2 executed.
Run cargo test files and you will see both files_test1 and files_test2 tests executed, but process_test1 is not executed. This is because cargo looks for all test cases containing the term 'files' and executes them.
Ignoring some tests
In some cases, you want to execute most of the tests every time but exclude a few. This can be achieved by annotating the test function with the #[ignore] attribute.
In the previous example, let's say we want to exclude process_test1 from regular execution because it is computationally intensive and takes a lot of time to execute. The following snippet shows how it's done:
#[test]
#[ignore]
fn process_test1() {
    assert!(true);
}
Run cargo test, and you will see that process_test1 is marked as ignored, and hence not executed.
To run only the ignored tests in a separate iteration, use the following option:
cargo test -- --ignored
The first -- is a separator between the command-line options for the cargo command and those for the test binary. In this case, we are passing the --ignored flag for the test binary, hence the need for this seemingly confusing syntax.
Running tests sequentially or in parallel
By default, cargo test runs the various tests in parallel in separate threads. To support this mode of execution, the test functions must be written in a way that there is no common data sharing across test cases. However, if there is indeed such a need (for example, one test case writes some data to a location and another test case reads it), then we can run the tests in sequence as follows:
cargo test -- --test-threads=1
This command tells cargo to use only one thread for executing tests, which indirectly means that tests have to be executed in sequence.
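As a minimal sketch of when this matters (the file name and test functions here are hypothetical), the following two tests write to and read from the same scratch file; run in parallel they can interfere with each other, but with --test-threads=1 they run one after the other and both pass:
use std::fs;

// Both tests share the same scratch file, so parallel execution could
// interleave the write of one test with the read of the other.
#[test]
fn writes_then_reads_a() {
    fs::write("scratch.txt", "data from test a").unwrap();
    let contents = fs::read_to_string("scratch.txt").unwrap();
    assert_eq!(contents, "data from test a");
}

#[test]
fn writes_then_reads_b() {
    fs::write("scratch.txt", "data from test b").unwrap();
    let contents = fs::read_to_string("scratch.txt").unwrap();
    assert_eq!(contents, "data from test b");
}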
In summary, Rust's strong built-in type system and strict ownership rules enforced by the compiler, coupled with the ability to script and execute unit and integration test cases as an integral part of the language and tooling, makes it very appealing to write robust, reliable systems.
Documenting your project
Rust ships with a tool called Rustdoc, which can generate documentation for Rust projects. Cargo has integration with Rustdoc, so you can use either tool to generate documentation.
To get an idea of what it means to have documentation generated for Rust projects, go to http://docs.rs. This is a documentation repository for all the crates in crates.io. To see a sample of the generated documentation, select a crate and view the docs. For example, you can go to docs.rs/serde to see docs for the popular serialization/deserialization library in Rust.
To generate similar documentation for your Rust projects, it is important to think through what to document, and how to document it.
But what can you document? The following are some of the aspects of a crate that it would be useful to document:
- An overall short description of what your Rust library does
- A list of modules and public functions in the library
- A list of other items, such as traits, macros, structs, enums, and typedefs, that a public user of the library needs to be familiar with to use various features
- For binary crates, installation instructions and command-line parameters
- Examples that demonstrate to users how to use the crate
- Optionally, design details for the crate
Now that we know what to document, we have to learn how to document it. There are two ways to document your crate:
- Inline documentation comments within the crate
- Separate markdown files
You can use either approach, and the rustdoc tool will convert them into HTML, CSS, and JavaScript code that can be viewed from a browser.
Writing inline documentation comments within crate
Rust has two types of comments: code comments (aimed at developers) and documentation comments (aimed at users of the library/crate).
Code comments are written using:
- // for single-line comments
- /* */ for multi-line comments
Documentation comments are written using two styles:
The first style is to use three slashes /// for commenting on individual items that follow the comments. Markdown notation can be used to style the comments (for example, bold or italic). This is typically used for item-level documentation.
The second style is to use //!. This is used to add documentation for the item that contains these comments (as opposed to the first style, which is used to comment items that follow the comments). This is typically used for crate-level documentation.
In both cases, rustdoc extracts documentation from the crate's documentation comments.
Add the following comments to the integ-test-example project, in src/lib.rs:
//! This is a library that contains functions related to
//! dealing with processes,
//! and makes these tasks more convenient.

use std::process;

/// This function gets the process ID of the current
/// executable. It returns a non-zero number
pub fn get_process_id() -> u32 {
    process::id()
}
Run cargo doc --open to see the generated HTML documentation corresponding to the documentation comments.
Writing documentation in markdown files
Create a new folder, doc, under the crate root, and add a new file, itest.md, with the following markdown content:
# Docs for integ-test-example crate
This is a project to test `rustdoc`.
[Here is a link!](https://www.rust-lang.org)

// Function signature
pub fn get_process_id() -> u32 {}
This function returns the process ID of the currently running executable:
// Example
```rust
use integ_test_example;

fn get_id() -> i32 {
    let my_pid = get_process_id();
    println!("Process id for current process is: {}", my_pid);
}
```
Note that the preceding code example is only representational.
Unfortunately, cargo does not directly support generating HTML from standalone markdown files (at the time of this writing), so we have to use rustdoc as follows:
rustdoc doc/itest.md
You will find the generated HTML document itest.html in the same folder. View it in your browser.
Running documentation tests
If there are any code examples written as part of the documentation, rustdoc can execute the code examples as tests.
Let's write a code example for our library. Open src/lib.rs and add the following code example to the existing code:
//! Integration-test-example crate
//!
//! This is a library that contains functions related to
//! dealing with processes,
//! and makes these tasks more convenient.

use std::process;

/// This function gets the process id of the current
/// executable. It returns a non-zero number
/// ```
/// fn get_id() {
///     let x = integ_test_example::get_process_id();
///     println!("{}", x);
/// }
/// ```
pub fn get_process_id() -> u32 {
    process::id()
}
If you run cargo test --doc, it will run this example code and provide the status of the execution.
Alternatively, running cargo test will run all the test cases from the tests directory (except those that are marked as ignored), and then run the documentation tests (that is, code samples provided as part of the documentation).
Summary
Understanding the Cargo ecosystem of toolchains is very important to be effective as a Rust programmer, and this chapter has provided the foundational knowledge that will be used in future chapters.
We learned that there are three release channels in Rust – stable, beta, and nightly. Stable is recommended for production use, nightly is for experimental features, and beta is an interim stage to verify that there isn't any regression in Rust language releases before they are marked stable. We also learned how to use rustup to configure the toolchain to use for the project.
We saw different ways to organize code in Rust projects. We learned how to build executable binaries and libraries, and looked at how to use Cargo to specify and manage dependencies.
We covered how to write unit tests and integration tests for a Rust package using Rust's built-in test framework, how to invoke automated tests using cargo, and how to control test execution. We learned how to document packages both through inline documentation comments and using standalone markdown files.
In the next chapter, we will take a quick tour of the Rust programming language, through a hands-on project.
Further reading
- The Cargo Book (https://doc.rust-lang.org/cargo)
- The Rust Book (https://doc.rust-lang.org/book/)
- Rust Forge (https://forge.rust-lang.org/)
- The Rustup book (https://rust-lang.github.io/rustup/index.html)
- The Rust style guide – the Rust style guide contains conventions, guidelines, and best practices to write idiomatic Rust code, and can be found at the following link: https://github.com/rust-dev-tools/fmt-rfcs/blob/master/guide/guide.md