In this chapter, we will cover the following recipes:
- Setting up the host system
- Installing Poky
- Creating a build directory
- Building your first image
- Explaining the NXP Yocto ecosystem
- Installing support for NXP hardware
- Building Wandboard images
- Using the Toaster web interface
- Running a Toaster Docker container
- Configuring network booting for a development setup
- Using Docker as a Yocto build system container
- Sharing downloads
- Sharing the shared state cache
- Setting up a package feed
- Using build history
- Working with build statistics
- Debugging the build system
The Yocto Project (http://www.yoctoproject.org/) is an embedded Linux distribution builder that makes use of several other open source projects. In this book, the generic term Yocto refers to the Yocto Project.
A Linux distribution is a collection of software packages and policies, and there are hundreds of Linux distributions available. Most of these are not designed for embedded systems: they lack the flexibility needed to achieve target footprint sizes and functionality tweaks, and they do not cater well to resource-constrained systems.
The Yocto Project, in contrast, is not a distribution per se; it allows you to create a Linux distribution designed for your particular embedded system. The Yocto Project provides a reference distribution for embedded Linux, called Poky.
The Yocto Project has the BitBake and OpenEmbedded-Core (OE-Core) projects at its base. Together they form the Yocto build system, which builds the components needed for an embedded Linux product, namely:
- A bootloader
- The Linux kernel
- A root filesystem of user space applications and libraries
- A cross-compilation toolchain and SDKs for application development
With these, the Yocto Project covers the needs of both system and application developers. When the Yocto Project is used as an integration environment for bootloaders, the Linux kernel, and user space applications, we refer to it as system development.
For application development, the Yocto Project builds SDKs that enable the development of applications independently of the Yocto build system.
The Yocto Project makes a new release every 6 months. The latest release at the time of this writing is Yocto 2.4 Rocko, and all the examples in this book refer to the 2.4 release.
A Yocto release comprises several components, including the following for each of the different supported platforms:
- Prebuilt toolchains
- Prebuilt images
The Yocto 2.4 release is available to download from http://downloads.yoctoproject.org/releases/yocto/yocto-2.4/.
This recipe will explain how to set up a host Linux system to use the Yocto Project.
The recommended way to develop an embedded Linux system is using a native Linux workstation. Development work using virtual machines, such as the Build Appliance, is discouraged, although they may be used for demo and test purposes.
Docker containers are increasingly used as they provide a maintainable way to build the same version of Yocto over the course of several years, which is a common need for embedded systems with long product lifetimes. We will cover using Docker as a Yocto build system in the Using Docker as a Yocto build system container recipe in this same chapter.
Yocto builds all the components mentioned before from scratch, including the cross-compilation toolchain and the native tools it needs, so the Yocto build process is demanding in terms of processing power and both hard drive space and I/O.
Although Yocto will work fine on machines with lower specifications, for professional developers' workstations it is recommended to use symmetric multiprocessing (SMP) systems with 8 GB or more of system memory and a high-capacity, fast hard drive, or solid-state drives (SSDs) if possible. Due to different bottlenecks in the build process, there does not seem to be much improvement above eight CPU cores or around 16 GB of RAM.
The first build will also download all the sources from the internet, so a fast internet connection is also recommended.
Yocto supports several Linux host distributions, and each Yocto release will document a list of the supported ones. Although the use of a supported Linux distribution is strongly advised, Yocto is able to run on any Linux system if it has the following dependencies:
- Git 1.8.3.1 or greater
- Tar 1.27 or greater
- Python 3.4.0 or greater
Yocto also provides a way to install the correct version of these tools by either downloading a buildtools-tarball or building one on a supported machine. This allows virtually any Linux distribution to be able to run Yocto, and also makes sure that it will be possible to replicate your Yocto build system in the future. The Yocto Project build system also isolates itself from the host distribution's C library, which makes it possible to share build caches between different distributions and also helps in future-proofing the build system. This is important for embedded products with long-term availability requirements.
This book will use the Ubuntu 16.04 Long-Term Stable (LTS) Linux distribution for all examples. Instructions to install on other Linux distributions can be found in the Supported Linux Distributions section of the Yocto Project Reference Manual, but the examples will only be tested with Ubuntu 16.04 LTS.
To make sure you have the required package dependencies installed for Yocto and to follow the examples in the book, run the following command from your shell:
$ sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath socat libsdl1.2-dev xterm bmap-tools make xsltproc docbook-utils fop dblatex xmlto cpio python python3 python3-pip python3-pexpect xz-utils debianutils iputils-ping python-git python3-git curl parted dosfstools mtools gnupg autoconf automake libtool libglib2.0-dev python-gtk2 bsdmainutils screen libstdc++-5-dev libx11-dev
Note
Downloading the example code
You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files emailed directly to you.
The example code in the book can be accessed through several GitHub repositories at https://github.com/yoctocookbook2ndedition. Follow the instructions on GitHub to obtain a copy of the source on your computer.
You will also need to configure the Git revision control software as follows:
$ git config --global user.email "your.email.address@somewhere.com"
$ git config --global user.name "Your Name"
The preceding command uses apt-get, the Advanced Packaging Tool (APT) command-line tool. It is a frontend for the dpkg package manager included in the Ubuntu distribution. It will install all the required packages and their dependencies to support all the features of the Yocto Project, as well as the examples in this book.
Git is a distributed source control versioning system released under the GNU General Public License v2 (GPLv2), originally developed by Linus Torvalds for the development of the Linux kernel. Since then, it has become the standard for many open source projects. Git will be the tool of choice for source version control in this book.
If build times are an important factor for you, there are certain steps you can take when preparing your disks to optimize them even further:
- Place the build directories on their own disk partition or on a fast external solid-state drive.
- Use the ext4 filesystem, but configure it without journaling on your Yocto-dedicated partitions. Be aware that power losses may then corrupt your build data.
- Mount the filesystem so that access times are not recorded on file reads, disable write barriers, and delay committing filesystem changes with the following mount options:
noatime,barrier=0,commit=6000
These changes reduce the data integrity safeguards, but with the separation of the build directories to their own disk, failures would only affect temporary build data, which can be erased and regenerated.
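As an illustration, assuming a dedicated ext4 build partition (the /dev/sdb1 device and /opt/yocto mount point below are hypothetical), the corresponding /etc/fstab entry could look like this:
# Hypothetical /etc/fstab entry for a Yocto-dedicated build partition
/dev/sdb1  /opt/yocto  ext4  defaults,noatime,barrier=0,commit=6000  0  0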
- The complete Yocto Project installation instructions for Ubuntu and other supported distributions can be found in the Yocto Project Reference Manual at http://www.yoctoproject.org/docs/2.4/ref-manual/ref-manual.html
- Git documentation and other reference material can be found at https://git-scm.com/documentation
This recipe will explain how to set up your host Linux system with Poky, the Yocto Project reference system.
Poky uses the OpenEmbedded build system and, as such, uses the BitBake tool, a task scheduler written in Python which is forked from Gentoo's Portage tool. You can think of BitBake as the make utility in Yocto. It will parse the configuration and recipe metadata, schedule a task list, and run through it.
BitBake is also the command-line interface to Yocto.
Poky and BitBake are two of the open source projects used by Yocto:
- The Poky project is maintained by the Yocto community. You can download Poky from its Git repository at http://git.yoctoproject.org/cgit/cgit.cgi/poky/.
- Development discussions can be followed and contributed to by visiting the development mailing list at https://lists.yoctoproject.org/listinfo/poky.
- Poky development takes place in the master branch. Before merging submitted patches into the master, maintainers test them in the master-next branch.
- Stable Yocto releases have their own branch. Yocto 2.4 is maintained in the rocko branch, and Yocto releases are tagged in that branch.
- BitBake, on the other hand, is maintained by both the Yocto and OpenEmbedded communities, as the tool is used by both. BitBake can be downloaded from its Git repository at http://git.openembedded.org/bitbake/.
- Development discussions can be followed and contributed to by visiting the development mailing list at http://lists.openembedded.org/mailman/listinfo/bitbake-devel.
- BitBake also uses master and master-next in the same way, but then creates a new branch per release, for example 1.32, with tags going into the corresponding release branch.
The Poky distribution only supports virtualized QEMU machines for the following architectures:
- ARM (qemuarm, qemuarm64)
- x86 (qemux86)
- x86-64 (qemux86-64)
- PowerPC (qemuppc)
- MIPS (qemumips, qemumips64)
Apart from these, it also supports some reference hardware BSPs, representative of the architectures just listed. These are:
- Texas Instruments BeagleBone (beaglebone)
- Freescale MPC8315E-RDB (mpc8315e-rdb)
- Intel x86-based PCs and devices (genericx86 and genericx86-64)
- Ubiquiti Networks EdgeRouter Lite (edgerouter)
To develop on different hardware, you will need to complement Poky with hardware-specific Yocto layers. This will be covered later on.
The Poky project incorporates a stable BitBake release, so to get started with Yocto, we only need to install Poky in our Linux host system.
Note
Note that you can also install BitBake independently through your distribution's package management system. This is not recommended and can be a source of problems, as BitBake needs to be compatible with the metadata used in Yocto. If you have installed BitBake from your distribution, please remove it.
The current Yocto release is 2.4, or Rocko, so we will install that into our host system. We will use the /opt/yocto folder as the installation path:
$ sudo install -o $(id -u) -g $(id -g) -d /opt/yocto
$ cd /opt/yocto
$ git clone --branch rocko git://git.yoctoproject.org/poky
The previous instructions use Git (the source code management system command-line tool) to clone the Poky repository, which includes BitBake, into a new poky directory under /opt/yocto, and point it to the rocko stable branch.
Poky contains three metadata directories, meta, meta-poky, and meta-yocto-bsp, as well as a template metadata layer, meta-skeleton, which can be used as a base for new layers. Poky's three metadata directories are explained here:
- meta: This directory contains the OpenEmbedded-Core metadata, which supports the ARM, ARM64, x86, x86-64, PowerPC, MIPS, and MIPS64 architectures and the QEMU emulated hardware. You can download it from its Git repository at http://git.openembedded.org/openembedded-core/. Development discussions can be followed and contributed to by visiting the development mailing list at http://lists.openembedded.org/mailman/listinfo/openembedded-core.
- meta-poky: This directory contains the distribution policy configuration for the Poky reference distribution.
- meta-yocto-bsp: This directory contains the BSP metadata for the reference hardware boards supported by Poky.
- More information about OpenEmbedded, the build framework for embedded Linux used by the Yocto Project, can be found at http://www.openembedded.org
- The official Yocto Project documentation can be accessed at http://www.yoctoproject.org/docs/2.4/mega-manual/mega-manual.html
Before building your first Yocto image, we need to create a build directory for it.
The build process, on a host system as outlined before, can take up to one hour and needs around 20 GB of hard drive space for a console-only image. A graphical image, such as core-image-sato, can take up to 4 hours for the build process and occupy around 50 GB of space.
The first thing we need to do is create a build directory for our project, where the build output will be generated. Sometimes, the build directory may be referred to as the project directory, but build directory is the appropriate Yocto term.
There is no right way to structure the build directories when you have multiple projects, but a good practice is to have one build directory per architecture or machine type. They can all share a common downloads folder, and even a shared state cache (this will be covered later on), so keeping them separate won't affect the build performance, but it will allow you to develop on multiple projects simultaneously.
To create a build directory, we use the oe-init-build-env script provided by Poky. The script needs to be sourced into your current shell, and it will set up your environment to use the OpenEmbedded/Yocto build system, including adding the BitBake utility to your path. You can specify a build directory to use, or it will use build by default. We will use qemuarm for this example:
$ cd /opt/yocto/poky
$ source oe-init-build-env qemuarm
The script will change to the specified directory.
Note
As oe-init-build-env only configures the current shell, you will need to source it on every new shell. But, if you point the script to an existing build directory, it will set up your environment but won't change any of your existing configurations.
BitBake is designed with a client/server abstraction, so we can also start a persistent server and connect a client to it. To instruct a BitBake server to stay resident, configure a timeout in seconds in your build directory's conf/local.conf configuration file as follows:
BB_SERVER_TIMEOUT = "n"
Here, n is the time in seconds for BitBake to stay resident.
With this setup, loading cache and configuration information each time is avoided, which saves some overhead.
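For example, to keep the BitBake server resident for one minute between invocations (the 60-second value is just an illustrative choice), you would add:
BB_SERVER_TIMEOUT = "60"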
The oe-init-build-env script calls the scripts/oe-setup-builddir script inside the Poky directory to create the build directory. On creation, the qemuarm build directory contains a conf directory with the following three files:
- bblayers.conf: This file lists the metadata layers to be considered for this project.
- local.conf: This file contains the project-specific configuration variables. You can set configuration variables common to different projects with a site.conf file, but this is not created by default. Similarly, there is also an auto.conf file, which is used by autobuilders. BitBake will first read site.conf, then auto.conf, and finally local.conf.
- templateconf.cfg: This file contains the directory that includes the template configuration files used to create the project. By default, it uses the one pointed to by the templateconf file in your Poky installation directory, which is meta-poky/conf by default.
You can specify different template configuration files to use when you create your build directory using the TEMPLATECONF variable, for example:
$ TEMPLATECONF=meta-custom/config source oe-init-build-env <build-dir>
The TEMPLATECONF variable needs to refer to a directory containing templates for both local.conf and bblayers.conf, named local.conf.sample and bblayers.conf.sample respectively.
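As a sketch, a hypothetical meta-custom layer providing its own templates would contain both sample files in the directory passed through TEMPLATECONF:
$ ls meta-custom/config
bblayers.conf.sample  local.conf.sample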
For our purposes, we can use the unmodified default project configuration files.
Before building our first image, we need to decide what type of image we want to build. This recipe will introduce some of the available Yocto images and provide instructions to build a simple image.
Poky contains a set of default target images. You can list them by executing the following commands:
$ cd /opt/yocto/poky
$ ls meta*/recipes*/images/*.bb
A full description of the different images can be found in the Yocto Project Reference Manual, in Chapter 13, Images. Typically, these default images are used as a base and customized for your own project needs. The most frequently used base default images are:
- core-image-minimal: This is the smallest BusyBox-, sysvinit-, and udev-based console-only image
- core-image-full-cmdline: This is the BusyBox-based console-only image with full hardware support and a more complete Linux system, including Bash
- core-image-lsb: This is a console-only image that is based on Linux Standard Base (LSB) compliance
- core-image-x11: This is the basic X11 Window System-based image with a graphical terminal
- core-image-sato: This is the X11 Window System-based image with a SATO theme and a GNOME Mobile desktop environment
- core-image-weston: This is a Wayland protocol and Weston reference compositor-based image
You will also find images with the following suffixes:
- dev: This image is suitable for development work, as it contains headers and libraries
- sdk: This image includes a complete SDK that can be used for development on the target
- initramfs: This is an image that can be used for a RAM-based root filesystem, which can optionally be embedded with the Linux kernel
- To build an image, we need to configure the machine we are building it for and pass its name to BitBake. For example, for the qemuarm machine, we would run the following:
$ cd /opt/yocto/poky/
$ source /opt/yocto/poky/oe-init-build-env qemuarm
$ MACHINE=qemuarm bitbake core-image-minimal
- Or we could export the MACHINE variable to the current shell environment before sourcing the oe-init-build-env script with the following:
$ export MACHINE=qemuarm
- On an already configured project, we could also edit the conf/local.conf configuration file to change the default machine to qemuarm:
-#MACHINE ?= "qemuarm"
+MACHINE ?= "qemuarm"
- Then, after setting up the environment, we execute the following:
$ bitbake core-image-minimal
With the preceding steps, BitBake will launch the build process for the specified target image.
When you pass a target recipe to BitBake, it first parses the following configuration files in order:
- conf/bblayers.conf: This file is parsed to find all the configured layers
- conf/layer.conf: This file is parsed on each configured layer
- meta/conf/bitbake.conf: This file is parsed for its own configuration
- conf/local.conf: This file is used for any other configuration the user may have for the current build
- conf/machine/<machine>.conf: This file is the machine configuration; in our case, this is qemuarm.conf
- conf/distro/<distro>.conf: This file is the distribution policy; by default, this is the poky.conf file
There are also some other distribution variants included with Poky:
- poky-bleeding: Extension to the Poky default distribution that includes the most up-to-date versions of packages
- poky-lsb: LSB compliance extension to Poky
- poky-tiny: Oriented to create headless systems with the smallest Linux kernel and BusyBox read-only or RAM-based root filesystems, using the musl C library
And then, BitBake parses the target recipe that has been provided and its dependencies. The outcome is a set of interdependent tasks that BitBake will then execute in order.
A depiction of the BitBake build process is shown in the following diagram:

BitBake build process
Most developers won't be interested in keeping the whole build output for every package, so it is recommended to configure your project to remove it with the following configuration in your conf/local.conf file:
INHERIT += "rm_work"
But at the same time, configuring it for all packages means that you won't be able to develop or debug them.
You can add a list of packages to exclude from cleaning by adding them to the RM_WORK_EXCLUDE variable. For example, if you are going to do BSP work, a good setting might be:
RM_WORK_EXCLUDE += "linux-wandboard u-boot-fslc"
Remember that you can use a custom template local.conf.sample configuration file in your own layer to keep these configurations and apply them to all projects so that they can be shared across all developers.
On a normal build, the -dbg packages that include debug symbols are not needed. To avoid creating -dbg packages, do this:
INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
Once the build finishes, you can find the output images in the tmp/deploy/images/qemuarm directory inside your build directory.
You can test run your images on the QEMU emulator by executing this:
$ runqemu qemuarm core-image-minimal
The runqemu script included in Poky's scripts directory is a launch wrapper around the QEMU machine emulator to simplify its usage.
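runqemu accepts extra options; for example, to keep the console on the current terminal instead of opening a graphical window, you can append the nographic option:
$ runqemu qemuarm core-image-minimal nographic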
The Yocto Project also has a set of precompiled images for supported hardware platforms that can be downloaded from http://downloads.yoctoproject.org/releases/yocto/yocto-2.4/machines/.
As we saw, Poky metadata starts with the meta, meta-poky, and meta-yocto-bsp layers, and it can be expanded by using more layers.
An index of the available OpenEmbedded layers that are compatible with the Yocto Project is maintained at http://layers.openembedded.org/.
An embedded product's development usually starts with hardware evaluation using a manufacturer's reference board design. Unless you are working with one of the reference boards already supported by Poky, you will need to extend Poky to support your hardware by adding extra BSP layers.
The first thing to do is to select which base hardware your design is going to be based on. We will use a board based on an NXP i.MX6 System on Chip (SoC) as a starting point for our embedded product design.
This recipe gives an overview of the support for NXP hardware in the Yocto Project.
The SoC manufacturer (in this case, NXP) has a range of reference design boards for purchase, as well as official Yocto-based software releases. Similarly, other manufacturers that use NXP's SoCs offer reference design boards and their own Yocto-based BSP layers and even distributions.
Selecting the appropriate hardware to base your design on is one of the most important design decisions for an embedded product. Depending on your product needs, you will decide to either:
- Use a production-ready board
- Design a carrier board around an off-the-shelf system-on-module (SoM)
- Design a complete board around the SoC itself
Most of the time, a production-ready board will not match the specific requirements of a professional embedded system, and the process of designing a complete carrier board using NXP's SoC would be too time consuming. So, using an appropriate module that already solves the most technically challenging design aspects is a common choice.
Some of the characteristics that are important to consider are:
- Industrial temperature ranges
- Power management
- Long-term availability
- Pre-certified wireless and Bluetooth (if applicable)
The Yocto community that supports NXP-based boards is called the FSL community BSP, and its main layers are called meta-freescale and meta-freescale-3rdparty. The layers keep the Freescale name from before NXP's acquisition of Freescale. The selection of boards supported in meta-freescale is limited to NXP reference designs, which would be the starting point if you are considering designing your own carrier board around an NXP SoC. Boards from other vendors are maintained in the meta-freescale-3rdparty layer.
There are other embedded manufacturers that use meta-freescale, but they have not integrated their boards in the meta-freescale-3rdparty community layer. These manufacturers keep their own BSP layers, which depend on meta-freescale, with specific support for their hardware. An example of this is Digi International and its ConnectCore product range, with the Yocto layers available at https://github.com/digi-embedded/meta-digi. There is also a Yocto-based distribution available called Digi Embedded Yocto.
To understand NXP's Yocto ecosystem, we need to start with the FSL community BSP, which comprises the meta-freescale layer, with support for NXP's reference boards, and its companion, meta-freescale-3rdparty, with support for boards from other vendors, and look at how it differs from the official NXP Yocto BSP releases that NXP offers for its reference designs.
There are some key differences between the community and NXP Yocto releases:
- NXP releases are developed internally by NXP without community involvement and are used for BSP validation on NXP reference boards.
- NXP releases go through an internal QA and validation test process, and they are maintained by NXP support.
- NXP releases for a specific platform reach a maturity point, after which they are no longer worked on. At this point, all the development work has been integrated into the community layer and the platforms are further maintained by the FSL BSP community.
- NXP Yocto releases are not Yocto compatible, while the community release is.
NXP's engineering works very closely with the FSL BSP community to make sure that all development in their official releases is integrated in the community layer in a reliable and quick manner.
The FSL BSP community is also very responsive and active, so problems can usually be worked on with them to the benefit of all parties.
The FSL community BSP extends Poky with the following layers:
- meta-freescale: This is the community layer that supports NXP reference designs. It has a dependency on OpenEmbedded-Core. Machines in this layer will be maintained even after NXP stops active development on them. You can download meta-freescale from its Git repository at http://git.yoctoproject.org/cgit/cgit.cgi/meta-freescale/.
Development discussions can be followed and contributed to by visiting the development mailing list at https://lists.yoctoproject.org/listinfo/meta-freescale.
The meta-freescale layer provides both the i.MX6 Linux kernel and the U-Boot source, either from NXP's repositories or from FSL community BSP-maintained ones, using the following links:
- NXP's Linux kernel Git repository: http://git.freescale.com/git/cgit.cgi/imx/linux-imx.git/
- FSL community Linux kernel Git repository: https://github.com/Freescale/linux-fslc.git
- NXP's U-Boot Git repository: http://git.freescale.com/git/cgit.cgi/imx/uboot-imx.git/
- FSL community U-Boot Git repository: https://github.com/Freescale/u-boot-fslc.git
Other Linux kernel and U-Boot versions are available, but keeping the manufacturer's supported version is recommended.
The meta-freescale layer includes NXP's proprietary binaries to enable some hardware features, most notably its hardware graphics, multimedia, and encryption capabilities. To make use of these capabilities, the end user needs to accept the NXP End-User License Agreement (EULA), which is included in the meta-freescale layer.
- meta-freescale-3rdparty: This layer adds support for other community-maintained boards, for example, the Wandboard. To download the layer's content, you may visit https://github.com/Freescale/meta-freescale-3rdparty/.
- meta-freescale-distro: This layer adds a metadata layer for demonstration target images. To download the layer's content, you may visit https://github.com/Freescale/meta-freescale-distro.
This layer adds two different sets of distributions, one maintained by the FSL BSP community (the fslc- distributions) and one maintained by NXP (the fsl- distributions). They are a superset of Poky that allows you to easily choose the graphical backend to use between:
- framebuffer
- x11
- Wayland
- XWayland
We will learn more about the different graphical backends in Chapter 4, Application Development.
NXP uses another layer on top of the layers previously mentioned for their official software releases:
- meta-fsl-bsp-release: This is an NXP-maintained layer that is used in the official NXP software releases. It contains modifications to both meta-freescale and meta-freescale-distro. It is not part of the community release, but can be accessed at http://git.freescale.com/git/cgit.cgi/imx/meta-fsl-bsp-release.git/.

NXP-based platforms extended layers hierarchy
For more information, refer to the FSL community BSP web page available at http://freescale.github.io/
NXP's official support community can be accessed at https://community.nxp.com/
In this recipe, we will install the FSL community BSP Yocto release that adds support for NXP hardware to our Yocto installation.
With so many layers, manually cloning each of them and adding them to your project's conf/bblayers.conf file is cumbersome. The community uses the repo tool, developed by Google for the Android community, to simplify the installation of Yocto.
To install repo in your host system, type in the following commands:
$ mkdir -p ${HOME}/bin/
$ curl https://storage.googleapis.com/git-repo-downloads/repo > ${HOME}/bin/repo
$ chmod a+x ${HOME}/bin/repo
The repo tool is a Python utility that parses an XML file, called a manifest, with a list of Git repositories. The repo tool is then used to manage those repositories as a whole.
As an example, we will use repo to download all the repositories listed in the previous recipe to our host system. For that, we will point it to the FSL community BSP manifest for the Rocko release:
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
  <default sync-j="4" revision="master"/>
  <remote fetch="https://git.yoctoproject.org/git" name="yocto"/>
  <remote fetch="https://github.com/Freescale" name="freescale"/>
  <remote fetch="https://github.com/openembedded" name="oe"/>
  <project remote="yocto" revision="rocko" name="poky" path="sources/poky"/>
  <project remote="yocto" revision="rocko" name="meta-freescale" path="sources/meta-freescale"/>
  <project remote="oe" revision="rocko" name="meta-openembedded" path="sources/meta-openembedded"/>
  <project remote="freescale" revision="rocko" name="fsl-community-bsp-base" path="sources/base">
    <linkfile dest="README" src="README"/>
    <linkfile dest="setup-environment" src="setup-environment"/>
  </project>
  <project remote="freescale" revision="rocko" name="meta-freescale-3rdparty" path="sources/meta-freescale-3rdparty"/>
  <project remote="freescale" revision="rocko" name="meta-freescale-distro" path="sources/meta-freescale-distro"/>
  <project remote="freescale" revision="rocko" name="Documentation" path="sources/Documentation"/>
</manifest>
The manifest file shows all the installation paths and repository sources for the different components that are going to be installed. It is a list of the different layers that are needed for the FSL community BSP release. We can now use repo to install it. Run the following:
$ mkdir /opt/yocto/fsl-community-bsp
$ cd /opt/yocto/fsl-community-bsp
$ repo init -u https://github.com/Freescale/fsl-community-bsp-platform -b rocko
$ repo sync
To list the hardware boards supported by the different layers, we may run:
$ ls sources/meta-freescale*/conf/machine/*.conf
And to list the newly introduced target images, use the following:
$ ls sources/meta-freescale*/recipes*/images/*.bb
The FSL community BSP release introduces the following new target images:
- fsl-image-mfgtool-initramfs: This is a small, RAM-based initramfs image used with the NXP manufacturing tool
- fsl-image-multimedia: This is a console-only image that includes the gstreamer multimedia framework over the framebuffer
- fsl-image-multimedia-full: This is an extension of fsl-image-multimedia that extends the gstreamer multimedia framework to include all available plugins
- fsl-image-machine-test: This is an extension of fsl-image-multimedia-full for testing and benchmarking
The release includes a sources/Documentation repository with buildable documentation. To build it, we first need to install some host tools as follows:
$ sudo apt-get install libfreetype6-dev libjpeg8-dev python3-dev python3-pip python3-sphinx texlive-fonts-recommended texlive-latex-extra zlib1g-dev fonts-liberation
$ sudo pip3 install reportlab sphinxcontrib-blockdiag
And then we can build the different documents by entering their subdirectory and building an HTML document with:
$ make singlehtml
Or a PDF version with:
$ make latexpdf
For example, to build the release notes in both HTML and PDF versions, we run:
$ cd /opt/yocto/fsl-community-bsp/sources/Documentation/release-notes
$ make latexpdf singlehtml
The documents can be found inside the build/latex and build/singlehtml directories.
- Instructions to use the repo tool, including using repo with proxy servers, can be found in the Android documentation at https://source.android.com/setup/downloading
- The FSL community BSP manifest can be accessed at https://github.com/Freescale/fsl-community-bsp-platform/blob/rocko/default.xml
The Wandboard is an inexpensive NXP i.MX6-based board with broad community support. It is perfect for exploration and educational purposes, more feature rich than a Raspberry Pi, and much closer to professional high-end embedded systems.
Designed and sold by Technexion, a Taiwanese company, it comes in four flavors based around a SoM with different i.MX6 SoC variants (solo, dual, quad, and quad plus), featuring one, two, or four cores. Technexion has made the schematics for both the board and the SoM available as PDFs, which gives the board a degree of openness.
The Wandboard is still widely used, easy to purchase, and with a wide community, so we will use it as an example in the following chapters. However, any i.MX6-based board could be used to follow the book. The know-how will then be applicable to any embedded platform that uses the Yocto Project.
The Wandboard has been released in different revisions throughout its history: a0, b1, c1, and d1. The revision is printed on the PCB and it will become important as the software that runs in each revision differs.
The Wandboard features the following specifications:
- 2 GB RAM
- Broadcom BCM4330 802.11n Wi-Fi
- Broadcom BCM4330 Bluetooth 4.0
- HDMI
- USB
- RS-232
- uSD
Revision D introduced an MMPF0100 PMIC, replaced the Atheros AR8031 Ethernet PHY with the Atheros AR8035, and replaced the BCM4330 with a BCM4339 802.11ac Wi-Fi chip, among other minor changes.
It is a perfect multimedia enabled system with a Vivante 2D and 3D graphical processing unit, hardware graphics and video acceleration, and an SGTL5000 audio codec. The different i.MX6-based systems are widely used in industrial control and automation, home automation, automotive, avionics, and other industrial applications.
For production, professional OEMs and products are recommended, as they can offer the industrial quality and temperature ranges, component availability, support, and manufacturing guarantees that final products require.
Support for the Wandboard is included in the meta-freescale-3rdparty FSL community BSP layer. All of the Wandboard variants are bundled in a single Yocto machine called wandboard.
To build an image for the wandboard machine for the Poky distribution, use the following commands:
$ cd /opt/yocto/fsl-community-bsp
$ MACHINE=wandboard DISTRO=poky source setup-environment wandboard
$ bitbake core-image-minimal
The setup-environment script is a wrapper around the oe-init-build-env script we used before. It will create a build directory, set the MACHINE and DISTRO variables to the provided values, and prompt you to accept the NXP EULA as described earlier. Your conf/local.conf configuration file will be updated both with the specified machine and with the EULA acceptance variable. To accept the license, the following line is automatically added to the project's conf/local.conf configuration file:
ACCEPT_FSL_EULA = "1"
Note
Remember that if you close your Terminal session, you will need to set up the environment again before being able to use BitBake. You can safely rerun the setup-environment script shown next, as it will not touch an existing conf/local.conf file:
$ cd /opt/yocto/fsl-community-bsp/
$ source setup-environment wandboard
The preceding BitBake command creates a core-image-minimal-wandboard.wic.gz file, that is, a compressed WIC file, inside the tmp/deploy/images/wandboard folder.
A WIC file is a partitioned image created by Yocto, using the WIC tool, from Yocto build artifacts; it can then be directly programmed.
This image can be programmed into a microSD card, inserted into the primary slot on the Wandboard CPU board (the one on the side of the i.MX6 SoM, under the heatsink), and booted using the following commands:
$ cd /opt/yocto/fsl-community-bsp/wandboard/tmp/deploy/images/wandboard/
$ sudo bmaptool copy --nobmap core-image-minimal-wandboard.wic.gz /dev/sdN
Here, /dev/sdN corresponds to the device node assigned to the microSD card in your host system.
Note
If the bmaptool utility is missing from your system, you can install it with:
$ sudo apt-get install bmap-tools
bmaptool will refuse to program mounted devices and will complain with:
bmaptool: ERROR: cannot open block device '/dev/sdN' in exclusive mode: [Errno 16] Device or resource busy: '/dev/sdN'
You will need to unmount the SD card if Ubuntu auto-mounted it, with:
$ sudo umount /dev/sdN
Here, N is a letter assigned by the Linux kernel. Check the dmesg output to find out the device name.
The --nobmap option passed to bmaptool requires some explanation. bmaptool is a utility specialized in copying data to block devices, similar to the traditional dd command. However, it has some extra functionality that makes it a very convenient tool to use in embedded device development work:
- It is able to copy from compressed files, as we can see with the wic.gz file
- It is able to use a BMAP file to speed up the copying of sparse files
When data is stored in a filesystem, blocks of data are mapped to disk sectors using an on-disk index. When a block of data is not mapped to any disk sector, it's called a hole, and files with holes are called sparse files. A BMAP file provides a list of mapped areas as well as checksums for both the BMAP file itself and the mapped areas.
Using this BMAP file, bmaptool can significantly speed up the process of copying sparse files.
However, as we are not using a BMAP file here, we pass the --nobmap option and use bmaptool for the convenience of using a compressed file. It also has other optimizations over dd that make it a better tool for the job.
- You can find more information regarding the repo tool in Android's documentation at https://source.android.com/setup/using-repo
- The bmaptool documentation can be accessed at https://source.tizen.org/documentation/reference/bmaptool
More information about the different hardware mentioned in this section can be found at:
- Digi International's ConnectCore 6 SBC at https://www.digi.com/products/embedded-systems/single-board-computers/connectcore-6-sbc
- The Wandboard at https://www.wandboard.org/
Toaster is a web application interface to the Yocto Project's build system built on the Django framework with a database backend to store and represent build data. It replaces the Hob user interface, which could be found on releases prior to Yocto 1.8. The welcome page is shown next:

Welcome to Toaster
It allows you to perform the following actions:
- Configure local or remote builds
- Manage layers
- Set configuration variables
- Set build targets
- Start builds either from the command line (analysis mode) or the web UI (managed mode)
- Collect and represent build data
- Browse final images
- List installed packages
- See build variable values
- Explore recipes, packages, and task dependencies
- Examine build warnings, errors, and trace messages
- Provide build performance statistics
- Examine build tasks and use of shared state cache
In order to run the Toaster Django web application, your host machine needs to be set up as follows:
$ sudo apt-get install python3-pip
$ pip3 install --user -r /opt/yocto/poky/bitbake/toaster-requirements.txt
Toaster can be started with the following commands:
$ cd /opt/yocto/poky
$ source oe-init-build-env
$ source toaster start
/opt/yocto/poky/bitbake/bin/toaster is a shell script that will set up Toaster's environment, load the default configuration and database migrations, connect to the OpenEmbedded Layer Index and download information about the metadata layers available for the current release, and start the web server and the runbuilds poller process.
To access the web user interface, go to http://127.0.0.1:8000.
By default, Toaster binds to localhost on port 8000, but this can be specified as follows:
$ source toaster start webport=<IP>:<PORT>
The administrator interface can be accessed at http://127.0.0.1:8000/admin.
This administration interface can be used to configure Toaster itself, but it needs a superuser account to be created from the directory that contains the Toaster database:
$ cd /opt/yocto/poky/build
$ ../bitbake/lib/toaster/manage.py createsuperuser
Toaster can run two different types of builds:
- You can manually start a build on the terminal and Toaster will monitor it. You can then use the Toaster web UI to explore the build results. The following image shows the command line builds page:

Toaster command line builds
- You can also use the Toaster web interface to create a new project. This will be named build-toaster-<project_id> and will be created inside the Poky directory:

Toaster's create a new project wizard
You can use the TOASTER_DIR configuration variable to specify a different build directory for Toaster.
When creating a Toaster project, you can choose between two different types:
- Local builds: This uses the local Poky clone on your computer. Using this build type limits the build to the layers available with the Yocto Project: openembedded-core, meta-poky, and meta-yocto-bsp. Other layers would need to be manually imported using the Import Layer page.
- Yocto Project builds: When a Yocto Project release is chosen, Toaster fetches the source from the Yocto Project upstream Git repositories, and updates it every time you run a build. In this mode, compatible layers can be selected, including BSP layers that allow you to build for different machines. The Toaster project configuration page looks like the following:

Toaster's project configuration page
After an image is built, Toaster offers the possibility to create a custom image based on that image's recipe where packages can easily be added/removed.
You can instruct Toaster to build both the standard and the extensible SDK by specifying the populate_sdk and populate_sdk_ext tasks on the target image. For example, to create SDKs for the core-image-base target image, you would use the following.
For the standard SDK:
core-image-base:populate_sdk
Or for the extensible SDK:
core-image-base:populate_sdk_ext
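The equivalent on the command line is to invoke the task directly with BitBake, for example, for the standard SDK:
$ bitbake core-image-base -c populate_sdk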
We will learn more about using SDKs in Chapter 4, Application Development.
The version of Django that Toaster uses is specified in the /opt/yocto/poky/bitbake/toaster-requirements.txt file, for example:
Django>1.8,<1.9.9
Django, and hence Toaster, stores data in a relational database. The backend configuration is done in the /opt/yocto/poky/bitbake/lib/toaster/toastermain/settings.py file as follows:
TOASTER_SQLITE_DEFAULT_DIR = os.environ.get('TOASTER_DIR')
DATABASES = {
    'default': {
        # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
        'ENGINE': 'django.db.backends.sqlite3',
        # DB name or full path to database file if using sqlite3.
        'NAME': "%s/toaster.sqlite" % TOASTER_SQLITE_DEFAULT_DIR,
        'USER': '',
        'PASSWORD': '',
        #'HOST': '127.0.0.1', # e.g. mysql server
        #'PORT': '3306', # e.g. mysql port
    }
}
By default, Toaster will create a toaster.sqlite database on the configured TOASTER_DIR path. For production servers, MySQL is the recommended backend.
Django has a built-in object-relational mapper, the Django ORM, which automates the transfer of data from the relational database to Python objects and allows database access from Python code. The initial state of the database is created from a set of fixtures (data dumps) under /opt/yocto/poky/bitbake/lib/toaster/orm/fixtures. Toaster fixtures are in XML format:
- settings.xml: This contains Toaster and BitBake variable settings. Some of these can be changed through the Toaster administrative interface.
- poky.xml and oe-core.xml: These are defaults for both the Poky and OE-Core builds.
- custom.xml: This allows you to override data in any of the preceding fixtures with a custom configuration. XML, JSON, and YAML formats are all supported.
When Toaster is launched, these Django fixtures are used to populate its database with initial data.
Toaster has extended the Django manage.py command with some custom Toaster-specific options. The manage.py management script needs to be invoked from the build directory, which contains the Toaster database:
$ cd /opt/yocto/poky/build
$ /opt/yocto/poky/bitbake/lib/toaster/manage.py <command> [<command option>]
The commands can be the following:
From /opt/yocto/poky/bitbake/lib/toaster/toastermain/management/commands/:
- buildslist: This returns the current builds list, including their build IDs
- builddelete <build_id>: This deletes all build data for the build specified by its build ID
- checksocket: This verifies that Toaster can bind to the provided IP address and port
- perf: This is a sanity check that measures performance by returning page loading times
From /opt/yocto/poky/bitbake/lib/toaster/orm/management/commands/:
- lsupdates: This updates the local layer index cache
From /opt/yocto/poky/bitbake/lib/toaster/bldcontrol/management/commands/:
- checksettings: This verifies that the existing Toaster database settings are enough to start a build
- runbuilds: This launches scheduled builds
Toaster enables you to set up a build server on a shared hosted/cloud environment that allows you to:
- Use it with multiple users
- Distribute it across several build hosts
- Handle heavy loads
Typically, when setting up Toaster on a shared hosted environment, the Apache web server and MySQL as a database backend are used.
Installation instructions for this type of production server can be found in the Yocto Project's Toaster User Manual. The installation can be spread across different hosts for load sharing.
Docker is a software technology that provides operating-system-level virtualization. Functionality-wise, it can be compared to a virtual machine, except that it incurs less of a performance penalty. On Linux, it uses the resource isolation features of the Linux kernel to provide abstraction and process isolation. It allows you to create containers that run on Docker and are independent of the underlying operating system.
There are Docker instances of the Toaster user interface available, which will be introduced in this recipe.
- To install Docker on your Ubuntu 16.04 machine, add the GPG key for the official Docker repository to the system:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
- Then add the Docker repository to APT sources:
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
- Next, update the package database with the Docker packages from the newly added repository:
$ sudo apt-get update
$ sudo apt-get install docker-ce
- Add your user to the docker group:
$ sudo usermod -aG docker ${USER}
$ su - ${USER}
- Finally, test run Docker by running the hello-world container:
$ docker run hello-world
- To run a docker-toaster instance, we will first create a directory in our host machine for the docker container to store the builds:
$ mkdir /opt/yocto/docker-toaster
- We can then instruct Docker to run the crops/toaster container and point its /workdir directory to the local directory we just created:
$ docker run -it --rm -p 127.0.0.1:18000:8000 -v /opt/yocto/docker-toaster:/workdir crops/toaster
Note
If you see the following error:
Refusing to use a gid of 0
Traceback (most recent call last):
  File "/usr/bin/usersetup.py", line 62, in <module>
    subprocess.check_call(cmd.split(), stdout=sys.stdout, stderr=sys.stderr)
  File "/usr/lib/python2.7/subprocess.py", line 541, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', 'restrict_groupadd.sh', '0', 'toasteruser']' returned non-zero exit status 1
Make sure the /opt/yocto/docker-toaster directory was created before running Docker and is not owned by root. If you don't create it beforehand, Docker will create it as the root user and the setup will fail as above.
See https://github.com/crops/poky-container/issues/20.
Note
Note that you can replace the 127.0.0.1 above with an IP address that is externally accessible if you are running Docker on a different machine.
- You can check that the container is running with the following command:
$ docker ps
- You can now access the Toaster web interface at http://127.0.0.1:18000.
- The docker container can be stopped with the following command:
$ docker stop <container-id>
- Django documentation can be accessed at https://docs.djangoproject.com/en/1.9/
- The Django management command available at toaster/manage.py is documented as part of the Django documentation at https://docs.djangoproject.com/en/1.9/ref/django-admin/
- The Toaster docker container home page is https://github.com/crops/toaster-container
Most professional i.MX6 boards will have an internal flash memory, and that would be the recommended way to boot firmware. The Wandboard is not really a product meant for professional use, so it does not have one, booting from a microSD card instead. But neither the internal flash nor the microSD card are ideal for development work, as any system change would involve a reprogramming of the firmware image.
The ideal setup for development work is to use both Trivial File Transfer Protocol (TFTP) and Network File System (NFS) servers in your host system and to only store the U-Boot bootloader in either the internal flash or a microSD card. With this setup, the bootloader will fetch the Linux kernel from the TFTP server and the kernel will mount the root filesystem from the NFS server. Changes to either the kernel or the root filesystem are available without the need to reprogram. Only bootloader development work would need you to reprogram the physical media.
If you are not already running a TFTP server, follow the next steps to install and configure a TFTP server on your Ubuntu 16.04 host:
$ sudo apt-get install tftpd-hpa
The tftpd-hpa configuration file is installed in /etc/default/tftpd-hpa. By default, it uses /var/lib/tftpboot as the root TFTP folder. Change the folder permissions to make it accessible to all users using the following command:
$ sudo chmod 1777 /var/lib/tftpboot
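For reference, the default /etc/default/tftpd-hpa on Ubuntu 16.04 looks similar to the following (your defaults may differ slightly):
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure"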
Now copy the Linux kernel and device tree for the Wandboard Quad Plus from your build directory as follows:
$ cd /opt/yocto/fsl-community-bsp/wandboard/tmp/deploy/images/wandboard/
$ cp zImage-wandboard.bin zImage-imx6qp-wandboard-revd1.dtb /var/lib/tftpboot
Note
If you have a different hardware variant or revision of the Wandboard, you will need to use a different device tree, as shown next. The corresponding device trees for the Wandboard Quad are:
- revision b1: zImage-imx6q-wandboard-revb1.dtb
- revision c1: zImage-imx6q-wandboard.dtb
- revision d1: zImage-imx6q-wandboard-revd1.dtb
The corresponding device trees for the Wandboard solo/dual lite are:
- revision b1: zImage-imx6dl-wandboard-revb1.dtb
- revision c1: zImage-imx6dl-wandboard.dtb
- revision d1: zImage-imx6dl-wandboard-revd1.dtb
And the device tree for the Wandboard Quad Plus is:
- revision d1: zImage-imx6qp-wandboard-revd1.dtb
If you are not already running an NFS server, follow the next steps to install and configure one on your Ubuntu 16.04 host:
$ sudo apt-get install nfs-kernel-server
We will use the /nfsroot directory as the root for the NFS server, so we will untar the target's root filesystem from our Yocto build directory in there.
By default, the Wandboard only builds WIC images. We will need to modify our build project to build a compressed copy of the target's root filesystem. For that, follow the next steps:
$ cd /opt/yocto/fsl-community-bsp/wandboard
Edit conf/local.conf and add the following:
IMAGE_FSTYPES = "wic.gz tar.bz2"
This will build a core-image-minimal-wandboard.tar.bz2 file that we can then uncompress under /nfsroot, as follows:
$ sudo mkdir /nfsroot
$ cd /nfsroot
$ sudo tar --numeric-owner -x -v -f /opt/yocto/fsl-community-bsp/wandboard/tmp/deploy/images/wandboard/core-image-minimal-wandboard.tar.bz2
The extraction of the root filesystem can also be done without superuser permissions by using the runqemu-extract-sdk script, which uses pseudo to correctly extract and set the permissions of the root filesystem, as follows:
$ cd /opt/yocto/fsl-community-bsp/wandboard
$ bitbake meta-ide-support
$ runqemu-extract-sdk tmp/deploy/images/wandboard/core-image-minimal-wandboard.tar.bz2 /nfsroot/rootfs/
Next, we will configure the NFS server to export the /nfsroot folder.
Add the following line to /etc/exports:
/nfsroot/ *(rw,no_root_squash,async,no_subtree_check)
We will then restart the NFS server for the configuration changes to take effect:
$ sudo service nfs-kernel-server restart
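You can verify that the export is active with the showmount utility, part of Ubuntu's nfs-common package; it should list the /nfsroot export:
$ showmount -e localhost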
We now have the boot binaries and root filesystem ready for network booting, and we need to configure U-Boot to perform the network boot.
Boot the Wandboard and stop at the U-Boot prompt by pressing any key on the serial console. Make sure it has an Ethernet cable plugged in and connected to your local network. You should see the U-Boot banner and prompt as follows:

U-Boot banner
The Yocto 2.4 version of U-Boot for the Wandboard has introduced changes to the default environment so that less platform-specific customization is made in the source. Previous versions used to have a default environment ready to perform a network boot just by setting a few environment variables and running the netboot script.
The current U-Boot has instead replaced it with a network boot mechanism that looks for a boot script called extlinux.conf on the configured TFTP server and executes it. That way, platform-specific booting options are isolated in the boot script, which is compiled with the U-Boot source.
The Yocto Project prepares an extlinux.conf boot script and copies it to the deploy directory along with the images. We can add kernel command-line arguments to pass to the Linux kernel in this boot script by using the UBOOT_EXTLINUX_KERNEL_ARGS configuration variable. More details about customizing the extlinux.conf script are provided in Chapter 2, The BSP Layer.
However, for development purposes, it is more flexible to restore the previous network boot environment variables:
> env set netload 'tftpboot ${loadaddr} ${image};tftpboot ${fdt_addr} ${fdt_file}'
> env set netargs 'setenv bootargs console=${console} ${optargs} root=/dev/nfs ip=${ipaddr} nfsroot=${serverip}:${nfsroot},v3,tcp'
> env set netboot 'echo Booting from net ...;run netargs;run netload;bootz ${loadaddr} - ${fdt_addr}'
The netload script loads the Linux kernel binary and the device tree blob into memory. The netargs script prepares the bootargs environment variable to pass the correct kernel command-line parameters for a network boot, and the netboot command executes the network boot by running netargs and netload and then booting with the bootz command.
Now we will prepare the rest of the environmental variables it needs:
- Configure a static IP address with:
> env set ipaddr <static_ip>
- Configure the IP address of your host system, where the TFTP and NFS servers have been set up:
> env set serverip <host_ip>
- Configure the root filesystem mount:
> env set nfsroot /nfsroot
- Configure the Linux kernel and device tree filenames:
> env set image zImage-wandboard.bin
> env set fdt_file zImage-imx6qp-wandboard-revd1.dtb
- Save the U-Boot environment to the microSD card:
> env save
- Perform a network boot:
> run netboot
The Linux kernel and device tree will be fetched from the TFTP server, and the root filesystem will be mounted by the kernel from the NFS share.
You should be able to log in with the root user without a password prompt.
Once booted, we can find out the kernel command line arguments used to boot by doing:
$ cat /proc/cmdline
console=ttymxc0,115200 root=/dev/nfs ip=192.168.1.15 nfsroot=192.168.1.115:/nfsroot,v3,tcp
Embedded systems often have a long product lifetime, so software needs to be built with the same Yocto version over several years in a predictable way. Older versions of Yocto often have problems running on newer, state-of-the-art distributions.
To work around this, there are several alternatives:
- Keep a build machine with a fixed operating system. This is problematic as the machine also ages and it may suffer from hardware problems and need re-installation.
- Use a cloud machine with a fixed operating system. Not everyone has this type of infrastructure available and it usually has a price tag attached.
- Build in a virtual machine such as VMware or VirtualBox. This affects the build performance significantly.
- Use a Docker Yocto builder container. This has the advantage of providing the same isolation as the virtual machine but with a much better build performance.
We saw how to run a docker container in the Running a Toaster Docker container recipe. Now we will see how to create our own Docker image to use as a Yocto builder.
Docker is able to build images automatically by reading instructions from a text file called a Dockerfile. Dockerfiles can be layered on top of each other, so to create a Docker Yocto builder image we would start by using an Ubuntu 16.04 Docker image, or one of the other supported distributions, and sequentially configure the image.
An example Dockerfile for a Yocto builder follows:
FROM ubuntu:16.04
MAINTAINER Alex Gonzalez <alex@lindusembedded.com>

# Upgrade system and Yocto Project basic dependencies
RUN apt-get update && apt-get -y upgrade && apt-get -y install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath socat cpio python python3 python3-pip python3-pexpect xz-utils debianutils iputils-ping libsdl1.2-dev xterm curl

# Set up locales
RUN apt-get -y install locales apt-utils sudo && dpkg-reconfigure locales && locale-gen en_US.UTF-8 && update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
ENV LANG en_US.utf8

# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Replace dash with bash
RUN rm /bin/sh && ln -s bash /bin/sh

# User management
RUN groupadd -g 1000 build && useradd -u 1000 -g 1000 -ms /bin/bash build && usermod -a -G sudo build && usermod -a -G users build

# Install repo
RUN curl -o /usr/local/bin/repo https://storage.googleapis.com/git-repo-downloads/repo && chmod a+x /usr/local/bin/repo

# Run as build user from the installation path
ENV YOCTO_INSTALL_PATH "/opt/yocto"
RUN install -o 1000 -g 1000 -d $YOCTO_INSTALL_PATH
USER build
WORKDIR ${YOCTO_INSTALL_PATH}

# Set the Yocto release
ENV YOCTO_RELEASE "rocko"

# Install Poky
RUN git clone --branch ${YOCTO_RELEASE} git://git.yoctoproject.org/poky

# Install FSL community BSP
RUN mkdir -p ${YOCTO_INSTALL_PATH}/fsl-community-bsp && cd ${YOCTO_INSTALL_PATH}/fsl-community-bsp && repo init -u https://github.com/Freescale/fsl-community-bsp-platform -b ${YOCTO_RELEASE} && repo sync

# Create a build directory for the FSL community BSP
RUN mkdir -p ${YOCTO_INSTALL_PATH}/fsl-community-bsp/build

# Make /home/build the working directory
WORKDIR /home/build

# Default to a Bash shell
CMD ["/bin/bash"]
- To build the container locally from the directory containing the Dockerfile, run the following command:
$ docker build .
- However, there is no need to build it locally as the container is automatically built on the Docker registry: https://hub.docker.com/r/yoctocookbook2ndedition/docker-yocto-builder
First, create an empty folder owned by a user with the same uid and gid as the build user inside the container:
$ sudo install -o 1000 -g 1000 -d /opt/yocto/docker-yocto-builder
And change into the new directory:
$ cd /opt/yocto/docker-yocto-builder
To run the container and map its /home/build folder to the current directory, type:
$ docker run -it --rm -v $PWD:/home/build yoctocookbook2ndedition/docker-yocto-builder
Where:
- -it instructs Docker to keep stdin open even when the container is not attached and assign a pseudo-tty to the interactive shell
- --rm instructs Docker to remove the container on exit
- -v maps the host current directory as the /home/build container volume
- We can now instruct the container to build a Poky project with the following commands (a non-interactive variant is sketched after these steps):
build@container$ source /opt/yocto/poky/oe-init-build-env qemuarm
build@container$ MACHINE=qemuarm bitbake core-image-minimal
- To build a FSL community BSP project, you need to map the /opt/yocto/fsl-community-bsp/build container directory with the current directory, as the setup-environment script only works when the build directory is under the installation folder:
$ docker run -it --rm -v $PWD:/opt/yocto/fsl-community-bsp/build yoctocookbook2ndedition/docker-yocto-builder
- Then we can run the following command inside the container to create a new project and start a build:
build@container$ cd /opt/yocto/fsl-community-bsp/
build@container$ mkdir -p wandboard
build@container$ MACHINE=wandboard DISTRO=poky source setup-environment build
build@container$ bitbake core-image-minimal
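The interactive sessions above can also be collapsed into a single one-shot invocation. The following is a minimal sketch for the Poky case, relying only on docker run's standard behavior of overriding the image's default command; the target and build directory name are just examples:
$ docker run --rm -v $PWD:/home/build yoctocookbook2ndedition/docker-yocto-builder bash -c "source /opt/yocto/poky/oe-init-build-env qemuarm && MACHINE=qemuarm bitbake core-image-minimal"
This is convenient for automated builds, as the container exits, and is removed, as soon as BitBake finishes, while the build output remains on the host.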
Instructing Docker to start the image creation process with an Ubuntu 16.04 image is as easy as starting the Dockerfile with the following:
FROM ubuntu:16.04
To inherit a Docker image, you use the Dockerfile FROM syntax.
Other commands used in the Dockerfile are:
- RUN, which will run the specified command in a new layer and commit the result
- ENV, to set an environmental variable
- USER, which sets the username to use for RUN and CMD instructions following it
- WORKDIR, which sets the working directory for RUN and CMD instructions that follow it
- CMD, which provides the default executable for the container, in this case the Bash shell
The rest of the Dockerfile does the following:
- Updates Ubuntu 16.04 to the latest packages
- Installs Yocto dependencies
- Sets up the locale for the container
- Adds a new build user
- Installs both Poky and the FSL community BSP release
The image has Poky installed at /opt/yocto/poky and the FSL community BSP installed at /opt/yocto/fsl-community-bsp. When it starts, the default directory is /home/build.
The usual way to work with a docker container is to instruct it to run commands but store the output in the host filesystem.
In our case, we instruct the container to run BitBake for us, but we map the build directories to the host by doing the external volume mapping when the container is initialized. In that way, all the build output is stored on the host machine.
- Docker documentation for the image builder can be found at https://docs.docker.com/engine/reference/builder/
You will usually work on several projects simultaneously, probably for different hardware platforms or different target images. In such cases, it is important to optimize the build times by sharing downloads.
The build system runs a search for downloaded sources in a number of places:

Source download hierarchy
- It tries the local downloads folder.
- It looks into the configured pre-mirrors, which are usually local to your organization.
- It then tries to fetch from the upstream source as configured in the package recipe.
- Finally, it checks the configured mirrors. Mirrors are public alternate locations for the source.
If a package source is not found in any of these four sources, the package build will fail with an error. Build warnings are also issued when upstream fetching fails and mirrors are tried, so that the upstream problem can be looked at.
The Yocto Project, including BSP layers such as meta-freescale, maintains a set of mirrors to isolate the build system from problems with the upstream servers. However, when adding external layers, you could be adding support for packages that are not in the Yocto Project's mirror servers, or other configured mirrors, so it is recommended that you keep a local pre-mirror to avoid problems with source availability.
The default Poky setting for a new project is to store the downloaded package sources on the current build directory. This is the first place the build system will run a search for source downloads. This setting can be configured in your project's conf/local.conf file with the DL_DIR configuration variable.
To optimize the build time, it is recommended to keep a shared downloads directory between all your projects. The setup-environment script of the meta-freescale layer changes the default DL_DIR to the fsl-community-bsp directory created by the repo tool. With this setup, the downloads folder will already be shared between all the projects in your host system. It is configured as:
DL_DIR ?= "${BSPDIR}/downloads/"
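For projects that don't use the FSL community BSP's setup-environment script, a shared location can be set directly in conf/local.conf; the path below is only an example and should point to a directory writable by all builds:
DL_DIR = "/opt/yocto/downloads"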
A more scalable setup (for instance, for teams that are remotely distributed) is to configure a pre-mirror. For example, add the following to your conf/local.conf file:
INHERIT += "own-mirrors" SOURCE_MIRROR_URL = "http://example.com/my-source-mirror"
A usual setup is to have a build server serve its downloads directory. The build server can be configured to prepare tarballs of the Git directories to avoid having to perform Git operations from upstream servers. This setting in your conf/local.conf file will affect the build performance, but this is usually acceptable in a build server. Add the following:
BB_GENERATE_MIRROR_TARBALLS = "1"
An advantage of this setup is that the build server's downloads folder can also be backed up to guarantee source availability for your products in the future. This is especially important in embedded products with long-term availability requirements.
In order to test this setup, you may check to see whether a build is possible just by using the pre-mirrors with the following:
BB_FETCH_PREMIRRORONLY = "1"
This setting in your conf/local.conf file can also be distributed across the team with the TEMPLATECONF variable during the project's creation.
The Yocto Project builds everything from source. When you create a new project, only the configuration files are created. The build process then compiles everything from scratch, including the cross-compilation toolchain and some native tools important for the build.
This process can take a long time, and the Yocto Project implements a shared state cache mechanism that is used for incremental builds with the aim to build only the strictly necessary components for a given change.
For this to work, the build system calculates a checksum of the given input data to a task. If the input data changes, the task needs to be rebuilt. In simplistic terms, the build process generates a run script for each task that can be checksummed and compared. It also keeps track of a task's output, so that it can be reused.
A package recipe can modify the shared state caching to a task; for example, to always force a rebuild by marking it as nostamp. A more in-depth explanation of the shared state cache mechanism can be found in the Yocto Project Reference Manual at http://www.yoctoproject.org/docs/2.4/ref-manual/ref-manual.html.
By default, the build system will use a shared state cache directory called sstate-cache on your build directory to store the cached data. This can be changed with the SSTATE_DIR configuration variable in your conf/local.conf file. The cached data is stored in directories named with the first two characters of the hash. Inside, the filenames contain the whole task checksum, so the cache validity can be ascertained just by looking at the filename. The build process's setscene tasks will evaluate the cached data and use it to accelerate the build if valid.
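For example, to share one cache between several projects on the same host, each project could point at a common location in its conf/local.conf; the path here is just an illustrative choice:
SSTATE_DIR = "/opt/yocto/sstate-cache"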
When you want to start a build from a clean state, you need to remove both the sstate-cache directory and the tmp directory.
You can also instruct BitBake to ignore the shared state cache by using the --no-setscene argument when running it.
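For example, using core-image-minimal as an illustrative target:
$ bitbake --no-setscene core-image-minimal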
It's a good practice to keep backups of clean shared state caches (for example, from a build server), which can be used in case of shared state cache corruption.
Sharing a shared state cache is possible; however, it needs to be approached with care. Not all changes are detected by the shared state cache implementation, and when this happens, some or all of the cache needs to be invalidated. This can cause problems when the state cache is being shared.
The recommendation in this case depends on the use case. Developers working on Yocto metadata should keep the shared state cache as default, separated per project.
However, validation and testing engineers, kernel and bootloader developers, and application developers would probably benefit from a well-maintained shared state cache.
To configure an NFS share drive to be shared among the development team to speed up the builds, you can add the following to your conf/local.conf configuration file:
SSTATE_MIRRORS ?= "\
file://.* file:///nfs/local/mount/sstate/PATH"
To configure shared state cache sharing via HTTP, add the following to your conf/local.conf configuration file:
SSTATE_MIRRORS ?= "file://.* http://example.com/some_path/sstate-cache/PATH"
The expression PATH in these examples will get substituted by the build system with a directory named with the hash's first two characters.
An embedded system project seldom has the need to introduce changes to the Yocto build system. Most of the time and effort is spent in application development, followed by a lesser amount in system development, maybe kernel and bootloader work.
As such, a whole system rebuild is probably done very few times. A new project is usually built from a prebuilt shared state cache, and application development work only needs to be done to perform full or incremental builds of a handful of packages.
Once the packages are built, they need to be installed on the target system for testing. Emulated machines are fine for application development, but most hardware-dependent work needs to be done on embedded hardware.
An option is to manually copy the build binaries to the target's root filesystem, either copying it to the NFS share on the host system the target is mounting its root filesystem from (as explained in the Configuring network booting for a development setup recipe earlier) or using any other method such as SCP, FTP, or even a microSD card.
This method is also used by IDEs such as Eclipse when debugging an application you are working on, and by the devtool Yocto command-line tool which will be introduced later on. However, this method does not scale well when you need to install several packages and dependencies.
The next option would be to copy the packaged binaries (that is, the RPM, DEB, or IPK packages) to the target's filesystem and then use the target's package management system to install them. For this to work, your target's filesystem needs to be built with package management tools. Doing this is as easy as adding the package-management feature to your root filesystem; for example, you may add the following line to your project's conf/local.conf file:
EXTRA_IMAGE_FEATURES += "package-management"
The default package type in Yocto is RPM, and for an RPM package, you will copy it to the target and use the rpm or dnf utilities to install it. In Yocto 2.4, the default RPM package manager is Dandified Yum (DNF). It is the next generation version of the Yellowdog Updater, Modified (YUM) and licensed under the General Public License v2.
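As a minimal sketch, assuming a package file has already been copied to the target, a local installation would look like the following (the package filename is a placeholder):
# rpm -i <package_name>.rpm
Or, using DNF:
# dnf install ./<package_name>.rpm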
However, the most convenient way to do this is to convert your host system package's output directory into a package feed. For example, if you are using the default RPM package format, you may convert tmp/deploy/rpm in your build directory into a package feed that your target can use to update.
For this to work, you need to configure an HTTP server on your computer that serves the packages.
You also need to make sure that the generated packages are correctly versioned, and that means updating the recipe revision, PR, with every change. It is possible to do this manually, but the recommended way, which is required if you want to use package feeds, is to use a PR server.
However, the PR server is not enabled by default. The packages generated without a PR server are consistent with each other but offer no update guarantees for a system that is already running.
The simplest PR server configuration is to run it locally on your host system. To do this, you add the following to your conf/local.conf file:
PRSERV_HOST = "localhost:0"
With this setup, update coherency is guaranteed for your feed.
If you want to share your feed with other developers, or you are configuring a build server or package server, you would run a single instance of the PR server by running the following command:
$ bitbake-prserv --host <server_ip> --port <port> --start
And you will update the project's build configuration to use the centralized PR server, editing conf/local.conf as follows:
PRSERV_HOST = "<server_ip>:<port>"
Also, if you are using a shared state cache as described before, all of the contributors to the shared state cache need to use the same PR server.
Once the feed's integrity is guaranteed, we need to configure an HTTP server to serve the feed.
We will use lighttpd for this example, as it is lightweight and easy to configure. Follow these steps:
- Install the web server:
$ sudo apt-get install lighttpd
- By default, the document root specified in the /etc/lighttpd/lighttpd.conf configuration file is /var/www/html, so we only need a symlink to our package feed:
$ sudo ln -s /opt/yocto/fsl-community-bsp/wandboard/tmp/deploy/rpm /var/www/html/rpm
- Next, reload the configuration as follows:
$ sudo service lighttpd reload
Note
For development, you can also launch a Python HTTP server from the feeds directory as follows:
$ cd /opt/yocto/fsl-community-bsp/wandboard/tmp/deploy/rpm
$ sudo python -m SimpleHTTPServer 80
- Refresh the package index. This needs to be done manually to update the package feed after every build:
$ bitbake package-index
- If you want to serve the packages from a different directory instead of directly from your build directory:
- You will need to copy the packages:
$ rsync -r -u /opt/yocto/fsl-community-bsp/wandboard/tmp/deploy/rpm/* <new_dir>/
- Then add the corresponding metadata to the repositories. For that, you will need to install the createrepo tool:
$ sudo apt-get install createrepo
- And direct it to the new feed directory:
$ createrepo <new_dir>
The createrepo tool will create XML-based metadata from the RPM packages:
Note
You can also build and use the createrepo-c utility from your Yocto build system, a C implementation of createrepo, as follows:
$ bitbake createrepo-c-native -c addto_recipe_sysroot
$ oe-run-native createrepo-c-native createrepo_c <new_dir>
Then we need to configure our target filesystem with the new package feeds:
- Log in to the target and create a new directory to contain the repository configuration:
$ mkdir -p /etc/yum.repos.d
The repository configuration files will have the following format:
[<repo name>]
name=<Repository description>
baseurl=<url://path/to/repo>
enabled=<0 (disable) or 1 (enabled)>
gpgcheck=<0 (disable signature check) or 1 (enabled)>
gpgkey=<url://path/to/gpg-file if gpgcheck is enabled>
The previously mentioned baseurl is the complete URL for the repositories, with a http://, https://, ftp://, or file:// prefix.
An example repository configuration file is as follows:
$ vi /etc/yum.repos.d/yocto.repo
[yocto-rpm]
name=Yocto 2.4: rpm
baseurl=http://<server-ip>/rpm/
- Once the setup is ready, we will be able to query and update packages from the target's root filesystem with the following:
# dnf --nogpgcheck makecache
# dnf --nogpgcheck search <package_name>
# dnf --nogpgcheck install <package_name>
By default, dnf is built to use signed package feeds, so we need to either configure the preceding repository with:
gpgcheck=0
Or use the --nogpgcheck command-line argument as shown previously.
- To make this change persistent in the target's root filesystem, we can configure the package feeds at compilation time by using the PACKAGE_FEED_* variables in conf/local.conf, as follows:
PACKAGE_FEED_URIS = "http://<server_ip>/"
PACKAGE_FEED_BASE_PATHS = "rpm"
The package feed's base URL is composed as shown next:
${PACKAGE_FEED_URIS}/${PACKAGE_FEED_BASE_PATHS}/${PACKAGE_FEED_ARCHS}.
By default, the package feed is prepared as a single repository, so there is no need to use the PACKAGE_FEED_ARCHS variable.
The Yocto build system can both generate signed packages and configure target images to use a signed package feed.
The build system will use the GNU privacy guard (GNUPG), an RFC 4880-compliant cryptographic software suite licensed under the GNU General Public License GPLv3.
To configure the project for RPM package signing, add the following to your conf/local.conf configuration file:
INHERIT += "sign_rpm"
For IPK package signing, do the following instead:
INHERIT += "sign_ipk"
You will then need to define the name of the GPG key to use for signing, and its passphrase:
RPM_GPG_NAME = "<key ID>"
RPM_GPG_PASSPHRASE = "<key passphrase>"
Or for the IPK package format:
IPK_GPG_NAME = "<key ID>"
IPK_GPG_PASSPHRASE_FILE = "<path/to/passphrase/file>"
See the next section, Creating a GNUPG key pair, in this same recipe to find the generated key ID.
The Yocto build system will locate the private GPG key in the host and use it to sign the generated packages.
To enable your target image to use a signed package feed, you will need to add the following configuration to your conf/local.conf configuration file:
INHERIT += "sign_package_feed" PACKAGE_FEED_GPG_NAME = "<key name>" PACKAGE_FEED_GPG_PASSPHRASE_FILE = "<path/to/passphrase/file>"
The <path/to/passphrase/file> shown previously is the absolute path to a text file containing the passphrase.
The dnf package manager will use the configured public key to verify the authenticity of the package feed.
In the Setting up the host system recipe in this same chapter, you installed the gnupg package on your host machine; if you didn't, you can do so now with:
$ sudo apt-get install gnupg
To generate a key, type the following command:
$ gpg --gen-key
Follow the instructions, keeping the default values. You may need to generate random data with mouse movements and disk activity.
You can check your key with:
$ gpg --list-keys
/home/alex/.gnupg/pubring.gpg
-----------------------------
pub   2048R/4EF0ECE0 2017-08-13
uid                  Alex Gonzalez <alex@lindusembedded.com>
sub   2048R/298446F3 2017-08-13
The GPG key ID in the previous example is 4EF0ECE0.
And export it with the following command:
$ gpg --output rpm-feed.gpg --export <id>
The ID may be the key ID or any part of the user ID, such as the email address. The exported public key may now be moved to its final destination, such as the package feed web server.
An example conf/local.conf configuration would be:
INHERIT += "sign_rpm" RPM_GPG_NAME = "4EF0ECE0" RPM_GPG_PASSPHRASE = "<very-secure-password>" INHERIT += "sign_package_feed" PACKAGE_FEED_GPG_NAME = "4EF0ECE0" PACKAGE_FEED_GPG_PASSPHRASE_FILE = "/opt/yocto/passphrase.txt"
You can move your key pair to a secure location with:
$ gpg --output rpm-feed.pub --armor --export <key id>
$ gpg --output rpm-feed.sec --armor --export-secret-key <key id>
Copy them securely to a new location and import them with:
$ gpg --import rpm-feed.pub
$ gpg --allow-secret-key-import --import rpm-feed.sec
- More information and a user manual for the dnf utility can be found at http://dnf.readthedocs.io/en/latest/index.html
- The GNUPG documentation can be accessed at https://www.gnupg.org/documentation/
When maintaining software for an embedded product, you need a way to know what has changed and how it is going to affect your product.
On a Yocto system, you may need to update a package revision (for instance, to fix a security vulnerability), and you need to make sure what the implications of this change are, for example, in terms of package dependencies and changes to the root filesystem.
Build history enables you to do just that, and we will explore it in this recipe.
To enable build history, add the following to your conf/local.conf file:
INHERIT += "buildhistory"
The preceding configuration enables information gathering, including dependency graphs.
To enable the storage of build history in a local Git repository, add the following line to the conf/local.conf configuration file as well:
BUILDHISTORY_COMMIT = "1"
The Git repository location can be set by the BUILDHISTORY_DIR variable, which by default is set to a buildhistory directory on your build directory.
By default, buildhistory tracks changes to packages, images, and SDKs. This is configurable using the BUILDHISTORY_FEATURES variable. For example, to track only image changes, add the following to your conf/local.conf:
BUILDHISTORY_FEATURES = "image"
It can also track specific files and copy them to the buildhistory directory. By default, this includes only /etc/passwd and /etc/group, but it can be used to track any important files, such as security certificates. The files need to be added with the BUILDHISTORY_IMAGE_FILES variable in your conf/local.conf file, as follows:
BUILDHISTORY_IMAGE_FILES += "/path/to/file"
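For example, to also track a certificate bundle, the path below being just an illustrative example of a file worth auditing:
BUILDHISTORY_IMAGE_FILES += "/etc/ssl/certs/ca-certificates.crt"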
Build history will slow down the build, increase the build size, and may also grow the Git directory to an unmanageable size. The recommendation is to enable it on a build server for software releases, or in specific cases, such as when updating production software.
When enabled, it will keep a record of the changes to each package and image in the form of a Git repository in a way that can be explored and analyzed.
Note
Note that build history will only record changes to the build. If your project is already built, you will have to modify something or remove the tmp folder in order for build history to be generated.
The build configuration and metadata revision, as printed by BitBake, are stored in the build-id.txt file.
For a package, build history records the following information:
- Package and recipe revision
- Dependencies
- Package size
- Files
For an image, it records the following information:
- Build configuration
- Dependency graphs
- A list of files that include ownership and permissions, as well as size and symlink information
- A list of installed packages
And for an SDK, it records the following information:
- SDK configuration
- A list of both host and target files, including ownership and permissions, as well as size and symlinks information
- Package-related information is only generated for the standard SDK, not for the extensible SDK. This includes:
- Dependency graphs
- A list of installed packages
For more details about using Yocto SDKs, please refer to the Preparing an SDK and Using the extensible SDK recipes in Chapter 4, Application Development.
Inspecting the Git directory with build history can be done in several ways:
- Using Git tools such as gitk or git log
- Using the buildhistory-diff command-line tool, which displays the differences in a human-readable format (see the example after this list)
- Using a Django-1.8-based web interface
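As an example of the second option, buildhistory-diff is shipped in Poky's scripts directory and, when run from the build directory, reports the most recent build history changes against the previous revision; the paths below assume the fsl-community-bsp layout used in this chapter:
$ cd /opt/yocto/fsl-community-bsp/wandboard
$ /opt/yocto/fsl-community-bsp/sources/poky/scripts/buildhistory-diff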
To install the Django web interface on a development machine, you first need to install some host dependencies:
$ sudo apt-get install python3-django
$ sudo apt-get install python-django-registration
Note
This will install Django 1.8 for both Python 2.7 and Python 3. The buildhistory-web interface will currently only work with Python 2.7, but the build history import script needs to run under Python 3, as that is what the Yocto 2.4 BitBake uses.
Now we can clone the web interface source and configure it:
$ cd /opt/yocto/fsl-community-bsp/sources
$ git clone git://git.yoctoproject.org/buildhistory-web
$ cd buildhistory-web/
Edit the settings.py file to change the path to the database engine:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': '/opt/yocto/fsl-community-bsp/sources/buildhistory-web/bhtest.db3',
'USER': '',
'PASSWORD': '',
'HOST': '',
'PORT': '',
}
}
You then need to set up the Django application with:
$ python manage.py migrate
Next, import buildhistory as follows:
$ python3 warningmgr/import.py /opt/yocto/fsl-community-bsp/sources/poky/ /opt/yocto/fsl-community-bsp/wandboard/buildhistory/
The preceding command will need to be executed each time there is a new build.
And finally, start the web server on the localhost with:
$ python manage.py runserver
Note
To bind it to a different IP address and port, you can do:
$ python manage.py runserver <host>:<port>
But you will need to configure your settings.py accordingly with:
ALLOWED_HOSTS = [u'<host>']
The following image shows the Buildhistory web interface home page:

Buildhistory web interface
To maintain build history, it's important to optimize it and prevent it from growing over time. Periodic backups of build history and clean-ups of older data are important to keep the build history repository at a manageable size.
Once the buildhistory directory has been backed up, the following process will trim it and keep only the most recent history:
- Copy your repository to a temporary RAM filesystem (tmpfs) to speed things up. Check the output of the df -h command to see which directories are tmpfs filesystems and how much space they have available, and use one. For example, in Ubuntu 16.04, the /run/ directory is available.
- Copy build history to the /run directory as follows:
$ sudo mkdir /run/workspace
$ sudo chown ${USER} /run/workspace/
$ cp -r /opt/yocto/fsl-community-bsp/wandboard/buildhistory/ /run/workspace/
$ cd /run/workspace/buildhistory/
- Add a graft point for a commit 1 month ago with no parents:
$ git rev-parse "HEAD@{1 month ago}" > .git/info/grafts
- Make the graft point permanent:
$ git filter-branch
- Clone a new repository to clean up the remaining Git objects:
$ git clone file://${tmpfs}/buildhistory buildhistory.new
- Replace the old buildhistory directory with the new cleaned one:
$ rm -rf buildhistory
$ mv buildhistory.new /opt/yocto/fsl-community-bsp/wandboard/buildhistory/
$ rm -rf /run/workspace/
The build system can collect build information per task and image. The data may be used to identify areas of optimization of build times and bottlenecks, especially when new recipes are added to the system. This recipe will explain how the build statistics work.
To enable the collection of statistics, your project needs to inherit the buildstats class by adding it to USER_CLASSES in your conf/local.conf file. By default, the fsl-community-bsp build project is configured to enable them:
USER_CLASSES ?= "buildstats"
You can configure the location of these statistics with the BUILDSTATS_BASE variable; by default, it is set to the buildstats folder in the tmp directory under the build directory (tmp/buildstats).
The buildstats folder contains a folder per image with the build stats under a timestamp folder. Under it will be a sub-directory per package in your built image, and a build_stats file that contains:
- Host system information
- Root filesystem location and size
- Build time
- Average CPU usage
The accuracy of the data depends on the download directory, DL_DIR, and the shared state cache directory, SSTATE_DIR, existing on the same partition or volume, so you may need to configure them accordingly if you are planning to use the build data.
An example build_stats file looks like the following:
Host Info: Linux langabe 4.10.0-30-generic #34~16.04.1-Ubuntu SMP Wed Aug 2 02:13:56 UTC 2017 x86_64 x86_64
Build Started: 1502529685.16
Uncompressed Rootfs size: 93M /opt/yocto/fsl-community-bsp/wandboard/tmp/work/wandboard-poky-linux-gnueabi/core-image-minimal/1.0-r0/rootfs
Elapsed time: 101.87 seconds
CPU usage: 47.8%
Inside each package, we have a list of tasks; for example, for ncurses-6.0+20161126-r0, we have the following tasks:
do_compile
do_fetch
do_package
do_package_write_rpm
do_populate_lic
do_rm_work
do_configure
do_install
do_packagedata
do_package_qa
do_patch
do_prepare_recipe_sysroot
do_populate_sysroot
do_unpack
Each one of them contains the following:
- Build time
- CPU usage
- Disk stats
The information is displayed as follows:
Event: TaskStarted
Started: 1502541082.15
ncurses-6.0+20161126-r0: do_compile
Elapsed time: 35.37 seconds
utime: 31
stime: 2
cutime: 7790
cstime: 1138
IO rchar: 778886123
IO read_bytes: 3354624
IO wchar: 79063307
IO cancelled_write_bytes: 1507328
IO syscr: 150688
IO write_bytes: 26726400
IO syscw: 31565
rusage ru_utime: 0.312
rusage ru_stime: 0.027999999999999997
rusage ru_maxrss: 78268
rusage ru_minflt: 5050
rusage ru_majflt: 0
rusage ru_inblock: 0
rusage ru_oublock: 1184
rusage ru_nvcsw: 705
rusage ru_nivcsw: 126
Child rusage ru_utime: 77.908
Child rusage ru_stime: 11.388
Child rusage ru_maxrss: 76284
Child rusage ru_minflt: 2995484
Child rusage ru_majflt: 0
Child rusage ru_inblock: 6552
Child rusage ru_oublock: 51016
Child rusage ru_nvcsw: 18280
Child rusage ru_nivcsw: 29984
Status: PASSED
Ended: 1502541117.52
The CPU usage is given with data extracted from /proc/<pid>/stat and given in units of clock ticks:
- utime is the amount of time the process has been scheduled in user mode
- stime is the amount of time it has been scheduled in kernel mode
- cutime is the time the process's children were scheduled in user mode
- cstime is the time they were scheduled in kernel mode
And the following is also available from the resource usage information provided from getrusage(), representing the resource usage of the calling process, including all threads, as well as the children and their descendants:
- ru_utime is the user CPU time used in seconds
- ru_stime is the system CPU time used in seconds
- ru_maxrss is the maximum resident set size in KB
- ru_minflt is the number of page faults without I/O activity
- ru_majflt is the number of page faults with required I/O activity
- ru_inblock is the count of filesystem inputs
- ru_oublock is the count of filesystem outputs
- ru_nvcsw is the count of times a process yielded voluntarily
- ru_nivcsw is the count of times a process was forced to yield
Finally, the disk access statistics are provided from /proc/<pid>/io as follows:
- rchar is the number of bytes read from storage
- wchar is the number of bytes written to disk
- syscr is the estimated number of read I/O operations
- syscw is the estimated number of write I/O operations
- read_bytes is the number of bytes read from storage (estimate-accurate for block-backed filesystems)
- write_bytes is the estimated number of bytes written to the storage layer
- cancelled_write_bytes is the number of bytes written that did not happen, by truncating page cache
You can also obtain a graphical representation of the data using the pybootchartgui.py tool included in the Poky source. From your project's build folder, you can execute the following command to obtain a bootchart.png graphic in /tmp:
$ cd /opt/yocto/fsl-community-bsp/wandboard/
$ /opt/yocto/fsl-community-bsp/sources/poky/scripts/pybootchartgui/pybootchartgui.py tmp/buildstats/ -o /tmp
An example graphic is shown next:

Graphical build statistics
- Refer to the Linux kernel documentation for more details regarding the data obtained through the proc filesystem: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/proc.txt
In the last recipe of this chapter, we will explore the different methods available to debug problems with the build system and its metadata.
Let's first introduce some of the usual use cases for a debugging session.
A good way to check whether a specific package is supported in your current layers is to search for it as follows:
$ cd /opt/yocto/fsl-community-bsp/sources
$ find -name "*busybox*"
This will recursively search all layers for the BusyBox pattern. You can limit the search to recipes and append files by executing:
$ find -name "*busybox*.bb*"
Yocto includes a bitbake-layers command-line utility that can also be used to search for specific recipes on the configured layers, with the preferred version appearing first:
$ bitbake-layers show-recipes "<package_name>"
Here, <package_name> also supports wildcards.
For example:
$ bitbake-layers show-recipes gdb*
=== Matching recipes: ===
gdb: meta 7.12.1
gdb-cross-arm: meta 7.12.1
gdb-cross-canadian-arm: meta 7.12.1
gdbm: meta 1.12
Finally, the devtool command-line utility can also be used to search the dependency cache with a regular expression. It will search recipe and package names, as well as descriptions and installed files, so it is better suited in the context of developing recipe metadata:
$ devtool search <regular expression>
To use devtool, the environment needs to be previously set up, and the shared state cache populated:
$ cd /opt/yocto/fsl-community-bsp
$ source setup-environment wandboard
$ bitbake <target-image>
$ devtool search gdb
Loaded 2323 entries from dependency cache.
perl  Perl scripting language
shared-mime-info  Shared MIME type database and specification
bash-completion  Programmable Completion for Bash 4
glib-2.0  A general-purpose utility library
python  The Python Programming Language
gdbm  Key/value database library with extensible hashing
gcc-runtime  Runtime libraries from GCC
When developing or debugging package or image recipes, it is very common to ask BitBake to list its environment both globally and for a specific target, be it a package or image.
To dump the global environment and grep for a variable of interest (for example, DISTRO_FEATURES), use the following command:
$ bitbake -e | grep -w DISTRO_FEATURES
Optionally, to locate the source directory for a specific package recipe such as BusyBox, use the following command:
$ bitbake -e busybox | grep ^S=
You could also execute the following command to locate the working directory for a package or image recipe:
$ bitbake -e <target> | grep ^WORKDIR=
BitBake offers the devshell and devpyshell tasks to help developers. They are executed with the following commands:
$ bitbake -c devshell <target>
And:
$ bitbake -c devpyshell <target>
They will unpack and patch the source, and open a new Terminal (they will autodetect your Terminal type or it can be set with OE_TERMINAL) in the target source directory, which has the environment correctly set up. They run with the nostamp flag so up-to-date tasks will be rerun.
The devpyshell command will additionally set up the Python environment, including Python objects and code such as the datastore d object.
Note
While in a graphical environment, devshell and devpyshell will open a new Terminal or console window, but if we are working on a non-graphical environment, such as Telnet or SSH, you may need to specify screen as your Terminal in your conf/local.conf configuration file as follows:
OE_TERMINAL = "screen"
Inside the devshell, you can run development commands such as configure and make or invoke the cross-compiler directly (use the $CC environment variable, which has been set up already). You can also run BitBake tasks inside devshell by calling the ${WORKDIR}/temp/run* script directly. This has the same result as invoking BitBake externally to devshell for that task.
Inside the devpyshell Python interpreter, you can call functions, such as d.setVar() and d.getVar(), or any Python code, such as bb.build.exec_func().
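A short hypothetical session could look like the following (MY_TEST_VAR is just an illustrative variable name):
>>> d.getVar('WORKDIR')
>>> d.setVar('MY_TEST_VAR', '1')
>>> d.getVar('MY_TEST_VAR')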
The starting point for debugging a package build error is the BitBake error message printed on the build process. This will usually point us to the task that failed to build.
- To list all the tasks available for a given recipe, with descriptions, we execute the following:
$ bitbake -c listtasks <target>
- If you need to recreate the error, you can force a build with the following:
$ bitbake -f <target>
- Or you can ask BitBake to force-run only a specific task using the following command:
$ bitbake -c compile -f <target>
Note
Forcing a task to run will taint the task, and BitBake will show a warning. This is meant to inform that the build has been modified. You can remove the warnings by cleaning the work directory with the -c clean argument.
To debug the build errors, BitBake creates two types of useful files per shell task and stores them in a temp folder in the working directory. Taking BusyBox as an example, we would look into:
/opt/yocto/fsl-community-bsp/wandboard/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/busybox/1.24.1-r0/temp
And find a list of log* and run* files. The filename format is:
log.do_<task>.<pid> and run.do_<task>.<pid>.
But luckily, we also have symbolic links without the <pid> part that link to the latest version.
The log files will contain the output of the task, and that is usually the only information we need to debug the problem. The run file contains the actual code executed by BitBake to generate the log mentioned before. This is only needed when debugging complex build issues.
Python tasks, on the other hand, do not currently write files as described previously, although it is planned to do so in the future. Python tasks execute internally and log information to the Terminal.
BitBake recipes accept either Bash or Python code. Python logging is done through the bb class and uses the standard logging Python library module. It has the following components:
- bb.plain: This uses logger.plain. It can be used for debugging, but should not be committed to the source.
- bb.note: This uses logger.info.
- bb.warn: This uses logger.warn.
- bb.error: This uses logger.error.
- bb.fatal: This uses logger.critical and exits BitBake.
- bb.debug: This should be passed a log level as the first argument and uses logger.debug.
To print debug output from Bash in our recipes, we need to use the logging class by executing:
inherit logging
The logging class is inherited by default by all recipes containing base.bbclass, so we don't usually have to inherit it explicitly. We will then have access to the following Bash functions:
- bbplain: This function outputs literally what's passed in. It can be used in debugging but should not be committed to a recipe source.
- bbnote: This function prints with the NOTE prefix.
- bbwarn: This prints a non-fatal warning with the WARNING prefix.
- bberror: This prints a non-fatal error with the ERROR prefix.
- bbfatal: This function halts the build and prints an error message as with bberror.
- bbdebug: This function prints debug messages with the log level passed as the first argument. It is used with the following format:
bbdebug [123] "message"
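For example, a hypothetical append to a recipe's install task could emit messages at different levels as follows:
do_install_append() {
    bbnote "Tweaking the default configuration"
    bbwarn "This image contains development settings"
    bbdebug 2 "do_install customization reached"
}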
You can ask BitBake to print the current and provided versions of packages with the following command:
$ bitbake --show-versions
Another common debugging task is the removal of unwanted dependencies.
To see an overview of pulled-in dependencies, you can use BitBake's verbose output by running this:
$ bitbake -v <target>
To analyze what dependencies are pulled in by a package, we can ask BitBake to create DOT files that describe these dependencies by running the following command:
$ bitbake -g <target>
The DOT format is a text description language for graphics that is understood by the GraphViz open source package and all the utilities that use it. DOT files can be visualized or further processed.
You can omit dependencies from the graph to produce more readable output. For example, to omit dependencies from glibc, you would run the following command:
$ bitbake -g <target> -I glibc
Once the preceding commands have been run, we get the following files in the current directory:
- pn-buildlist: This file shows the list of packages that would be built by the given target
- recipes-depends.dot: This file shows the dependencies between recipes
- task-depends.dot: This file shows the dependencies between tasks
To convert the .dot files to postscript files (.ps), you may execute:
$ dot -Tps filename.dot -o outfile.ps
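Graphviz can render other output formats as well; for example, to generate a PNG from the task dependency file:
$ dot -Tpng task-depends.dot -o task-depends.png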
However, the most useful way to display dependency data is to ask BitBake to display it graphically with the dependency explorer, as follows:
$ bitbake -g -u taskexp <target>
The result may be seen in the following screenshot:

Task dependency explorer
On rare occasions, you may find yourself debugging a task dependency problem, for example, if BitBake misses a task dependency.
In the tmp/stamps sub-directory inside the build directory, you can find two file types that are helpful when debugging dependency problems:
- sigdata, a Python database of all the metadata that is used to calculate the task's input checksum
- siginfo, which is the same but for shared state cache accelerated recipes
You can use bitbake-dumpsig on both of these file types to dump the variable dependencies for the task and the variable values, as well as a list of variables never included in any checksum.
When trying to compare two versions of a given task, bitbake-diffsig can be used to dump the differences between two sigdata or siginfo revisions.
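As a sketch, with the actual stamp filenames replaced by placeholders, the invocations look as follows:
$ bitbake-dumpsig tmp/stamps/<machine-arch>/<recipe>/<task>.sigdata.<hash>
$ bitbake-diffsig <old>.sigdata.<hash1> <new>.sigdata.<hash2>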
It is not common to have to debug BitBake itself, but you may find a bug in BitBake and want to explore it by yourself before reporting it to the BitBake community. For such cases, you can ask BitBake to output the debug information at three different levels with the -D flag. To display all the debug information, run the following command:
$ bitbake -DDD <target>
Sometimes, you will find a build error on a Yocto recipe that you have not modified. The first place to check for errors is the community itself, but before launching your mail client, head to http://errors.yoctoproject.org. The welcome page is displayed as follows:

Error reporting web interface
This is a central database of mostly autobuilder, but also user-reported, errors. Here, you may check whether someone else is experiencing the same problem.
You can submit your own build failure to the database to help the community debug the problem. To do so, you may use the report-error class. Add the following to your conf/local.conf file:
INHERIT += "report-error"
By default, the error information is stored under tmp/log/error-report under the build directory, but you can set a specific location with the ERR_REPORT_DIR variable.
When the error reporting tool is activated, a build error will be captured in a file in the error-report folder. The build output will also print a command to send the error log to the server:
$ send-error-report ${LOG_DIR}/error-report/error-report_${TSTAMP}
When this command is executed, it will report back with a link to the upstream error.
You can set up a local error server, and use that instead by passing a server argument. The error server code is a Django web application and setting up details can be found at http://git.yoctoproject.org/cgit/cgit.cgi/error-report-web/tree/README.