Linux Kernel Programming: A comprehensive and practical guide to kernel internals, writing modules, and kernel synchronization, Second Edition

By Kaiwan N. Billimoria
Product Details

Publication date : Feb 29, 2024
Length 826 pages
Edition : 2nd Edition
Language : English
ISBN-13 : 9781803232225
Vendor : Linux Foundation

Building the 6.x Linux Kernel from Source – Part 1

Building the Linux kernel from source code is an interesting way to begin your kernel development journey! Be assured, the journey’s a long and arduous one, but that’s the fun of it, right? The topic of kernel building itself is large enough to merit being divided into two chapters, this one and the next.

Recall what we have learned so far in the extended first chapter, published online: primarily, how to set up the workspace for Linux kernel programming. You have also been introduced to user and kernel documentation sources and several useful projects that go hand in hand with kernel/driver development. By now, I assume you’ve completed that online chapter, and thus the setup of the workspace environment; if not, please do so before proceeding.

The primary purpose of this chapter and the next is to describe in detail how exactly you can build a modern Linux kernel from scratch using source code. In this chapter, you will first learn about the required basics: the kernel version nomenclature, the development workflow, and the different types of source trees. Then, we’ll get hands-on: you’ll learn how to download a stable vanilla Linux kernel source tree onto a guest Linux Virtual Machine (VM). By “vanilla kernel,” we mean the plain and regular default kernel source code released by the Linux kernel community on its repository. After that, you will learn a little bit about the layout of the kernel source code – getting, in effect, a 10,000-foot view of the kernel code base. The actual kernel build recipe then follows.

Before proceeding, a key piece of information: any modern Linux system, be it a supercomputer or a tiny, embedded device, has three required components:

  • A bootloader
  • An Operating System (OS) kernel
  • A root filesystem

It additionally has two optional components:

  • If the processor family is ARM or PPC (32- or 64-bit), a Device Tree Blob (DTB) image file
  • An initramfs (or initrd) image file

In these two chapters, we concern ourselves only with the building of the OS (Linux) kernel from source code. We do not delve into the root filesystem details. In the next chapter, we will learn how to minimally configure the x86-specific GNU GRUB bootloader.

The complete kernel build process – for x86[_64] at least – requires a total of six or seven steps. Besides the required preliminaries, we cover the first three here and the remaining in the next chapter.

In this chapter, we will cover the following topics:

  • Preliminaries for the kernel build
  • Steps to build the kernel from source
  • Step 1 – Obtaining a Linux kernel source tree
  • Step 2 – Extracting the kernel source tree
  • Step 3 – Configuring the Linux kernel
  • Customizing the kernel menu, Kconfig, and adding our own menu item

You may wonder: what about building the Linux kernel for another CPU architecture (like ARM 32 or 64 bit)? We do precisely this as well in the following chapter!

Technical requirements

I assume that you have gone through Online Chapter, Kernel Workspace Setup, and have appropriately prepared an x86_64 guest VM running Ubuntu 22.04 LTS (or equivalent) and installed all the required packages. If not, I highly recommend you do this first.

To get the most out of this book, I also strongly recommend you clone this book’s GitHub repository and work on the code in a hands-on fashion.

Preliminaries for the kernel build

It’s important to understand a few things right from the outset that will help you as we proceed on our journey of building and working with a Linux kernel. Firstly, the Linux kernel and its sister projects are completely decentralized – it’s a virtual, online open-source community! The closest we come to an “office” for Linux is this: stewardship of the Linux kernel (as well as several dozen related projects) is in the capable hands of the Linux Foundation; further, it manages the Linux Kernel Organization, a private foundation that distributes the Linux kernel to the public free of charge.

Did you know? The terms of the GNU GPLv2 license – under which the Linux kernel’s released and will continue to be held for the foreseeable future – do not in any manner prevent original developers from charging for their work! It’s just that Linus Torvalds, and now the Linux Foundation, makes the Linux kernel software available to everybody for free. This doesn’t prevent commercial organizations from adding value to the kernel (and their products bundled with it) and charging for it, whether via an initial upfront charge or today’s typical SaaS/IaaS subscription model.

Thus, open source is definitely viable for business, as has been, and continues to be, proved daily. Customers whose core business lies elsewhere simply want value for money; businesses built around Linux can provide that by providing the customer with expert-level support, detailed SLAs, and upgrades.

Some of the key points we’ll discuss in this section include the following:

  • The kernel release, or version number nomenclature
  • The typical kernel development workflow
  • The existence of different types of kernel source trees within the repository

With this information in place, you will be better armed to move through the kernel build procedure. All right, let’s go over each of the preceding points.

Understanding the Linux kernel release nomenclature

To see the kernel version number, simply run uname -r on your shell. How do you precisely interpret the output of uname -r? On our x86_64 Ubuntu distribution version 22.04 LTS guest VM, we run uname, passing the -r option switch to display just the current kernel release or version:

$ uname -r
5.19.0-40-generic

Of course, by the time you read this, the Ubuntu 22.04 LTS kernel will very likely have been upgraded to a later release; that’s perfectly normal. The 5.19.0-40-generic kernel was the one I encountered with the Ubuntu 22.04.2 LTS at the time of writing this chapter.

The modern Linux kernel release number nomenclature is as follows:

major.minor[.patchlevel][-EXTRAVERSION]

This is also often written or described as w.x[.y][-z].

The square brackets around the patchlevel and EXTRAVERSION (or the y and -z) components indicate that they are optional. The following table summarizes the meaning of the components of the release number:

Release # component | Meaning | Example numbers
Major # (or w) | Main or major number; currently, we are on the 6.x kernel series, thus the major number is 6. | 2, 3, 4, 5, 6
Minor # (or x) | The minor number, hierarchically under the major number. | 0 onward
[patchlevel] (or y) | Hierarchically under the minor number – also called the ABI or revision – applied on occasion to the stable kernel when significant bug/security fixes are required. | 0 onward
[-EXTRAVERSION] (or -z) | Also called localversion; typically used by distribution kernels and vendors to track their internal changes. | Varies; Ubuntu uses w.x.y-<z>-generic

Table 2.1: Linux kernel release nomenclature

So, we can now interpret our Ubuntu 22.04 LTS distribution’s kernel release number, 5.19.0-40-generic:

  • Major # (or w): 5
  • Minor # (or x): 19
  • [patchlevel] (or y): 0
  • [-EXTRAVERSION] (or -z): -40-generic
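This decomposition is easy to script; here’s a minimal sketch (the release string is hard-coded to the sample above for illustration – you’d normally use the output of uname -r):

```shell
# Split a kernel release string into its w.x.y and -EXTRAVERSION parts.
rel="5.19.0-40-generic"            # sample; normally: rel=$(uname -r)
w=$(echo "$rel" | cut -d. -f1)     # major
x=$(echo "$rel" | cut -d. -f2)     # minor
y=$(echo "$rel" | cut -d. -f3 | cut -d- -f1)   # patchlevel
z=${rel#"$w.$x.$y"}                # whatever remains is the -EXTRAVERSION
echo "major=$w minor=$x patchlevel=$y extraversion=$z"
```

Running it prints major=5 minor=19 patchlevel=0 extraversion=-40-generic, matching the interpretation listed above.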

Note that distribution kernels may not precisely follow these conventions; it’s up to them. The regular or vanilla kernels released on do follow these conventions (at least until Linus decides to change them).

Historically, in kernels before 2.6 (IOW, ancient stuff now), the minor number held a special meaning; if it was an even number, it indicated a stable kernel release, and if odd, an unstable or beta release. This is no longer the case.

As part of an interesting exercise configuring the kernel, we will later change the localversion (aka the -EXTRAVERSION) component of the kernel we build.

Fingers-and-toes releases

Next, it’s important to understand a simple fact: with modern Linux kernels, when the kernel major and/or minor number changes, it does not imply that some tremendous or key new design, architecture, or feature has come about; no, it is simply, in the words of Linus, organic evolution.

The currently used kernel version nomenclature is a loosely time-based one, not feature-based. Thus, a new major number will pop up every so often. How often exactly? Linus likes to call it the “fingers and toes” model; when he runs out of fingers and toes to count the minor number (the x component of the w.x.y release), he updates the major number from w to w+1. Hence, after iterating over 20 minor numbers – from 0 to 19 – we end up with a new major number.

This has practically been the case since the 3.0 kernel; thus, we have the following:

  • 3.0 to 3.19 (20 minor releases)
  • 4.0 to 4.19 (20 minor releases)
  • 5.0 to 5.19 (20 minor releases)
  • 6.0 to … (it’s still moving along; you get the idea!)

Take a peek at Figure 2.1 to see this. Each minor-to-next-minor release takes approximately 6 to 10 weeks.
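The fingers-and-toes rollover can be encoded as a tiny shell helper; this is a sketch of my own (the function name next_release is made up), assuming the 20-minor convention described above continues to hold:

```shell
# Given a w.x version, print the next release per the fingers-and-toes
# model: the minor number runs 0..19, then the major number is bumped.
next_release() {
    local w=${1%%.*} x=${1#*.}
    if [ "$x" -ge 19 ]; then
        echo "$(( w + 1 )).0"    # ran out of fingers and toes
    else
        echo "$w.$(( x + 1 ))"
    fi
}
next_release 5.18    # -> 5.19
next_release 5.19    # -> 6.0
```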

Kernel development workflow – understanding the basics

Here, we provide a brief overview of the typical kernel development workflow. Anyone like you who is interested in kernel development should at least minimally understand the process.

A detailed description can be found in the official kernel documentation (see How the development process works).

A common misconception, especially in the kernel’s early years, was that it is developed in an ad hoc fashion. This is not true at all! The kernel development process has evolved into a mostly well-oiled system with a thoroughly documented process and clear expectations of what a kernel contributor should know in order to use it well. I refer you to the aforementioned documentation for the complete details.

In order for us to take a peek into a typical development cycle, let’s assume we’ve cloned the latest mainline Linux Git kernel tree on to our system.

The details regarding the use of the powerful Git Source Code Management (SCM) tool lie beyond the scope of this book. Please see the Further reading section for useful links on learning how to use Git. Obviously, I highly recommend gaining at least basic familiarity with using Git.

As mentioned earlier, at the time of writing, the 6.1 kernel is a Long-Term Stable (LTS) version with the furthest projected EOL date from now (December 2026), so we shall use it in the materials that follow.

So, how did it come to be? Obviously, it has evolved from earlier release candidate (rc) kernels and the previous stable kernel release that preceded it, which in this case would be the v6.1-rc’n’ kernels and the stable v6.0 one before it. Let’s view this evolution in two ways: via the command line and graphically via the kernel’s GitHub page.

Viewing the kernel’s Git log via the command line

We use the git log command as follows to get a human-readable log of the tags in the kernel Git tree ordered by date. Here, as we’re primarily interested in the release of the 6.1 LTS kernel, we’ve deliberately truncated the following output to highlight that portion:

The git log command that we use in the following code block (and, in fact, any other git sub-command) will only work within a Git tree. We use it here purely to demonstrate the evolution of the kernel. A bit later, we will show how you can clone a kernel Git tree.

$ git log --date-order --tags --simplify-by-decoration \
--pretty=format:'%ai %h %d'
2023-04-23 12:02:52 -0700 457391b03803  (tag: v6.3)
2023-04-16 15:23:53 -0700 6a8f57ae2eb0  (tag: v6.3-rc7)
2023-04-09 11:15:57 -0700 09a9639e56c0  (tag: v6.3-rc6)
2023-04-02 14:29:29 -0700 7e364e56293b  (tag: v6.3-rc5)
[ … ]
2023-03-05 14:52:03 -0800 fe15c26ee26e  (tag: v6.3-rc1)
2023-02-19 14:24:22 -0800 c9c3395d5e3d  (tag: v6.2)
2023-02-12 14:10:17 -0800 ceaa837f96ad  (tag: v6.2-rc8)
[ … ]
2022-12-25 13:41:39 -0800 1b929c02afd3  (tag: v6.2-rc1)
2022-12-11 14:15:18 -0800 830b3c68c1fb  (tag: v6.1)
2022-12-04 14:48:12 -0800 76dcd734eca2  (tag: v6.1-rc8)
2022-11-27 13:31:48 -0800 b7b275e60bcd  (tag: v6.1-rc7)
2022-11-20 16:02:16 -0800 eb7081409f94  (tag: v6.1-rc6)
2022-11-13 13:12:55 -0800 094226ad94f4  (tag: v6.1-rc5)
2022-11-06 15:07:11 -0800 f0c4d9fc9cc9  (tag: v6.1-rc4)
2022-10-30 15:19:28 -0700 30a0b95b1335  (tag: v6.1-rc3)
2022-10-23 15:27:33 -0700 247f34f7b803  (tag: v6.1-rc2)
2022-10-16 15:36:24 -0700 9abf2313adc1  (tag: v6.1-rc1)
2022-10-02 14:09:07 -0700 4fe89d07dcc2  (tag: v6.0)
2022-09-25 14:01:02 -0700 f76349cf4145  (tag: v6.0-rc7)
[ … ]
2022-08-14 15:50:18 -0700 568035b01cfb  (tag: v6.0-rc1)
2022-07-31 14:03:01 -0700 3d7cb6b04c3f  (tag: v5.19)
2022-07-24 13:26:27 -0700 e0dccc3b76fb  (tag: v5.19-rc8)
[ … ]

In the preceding output block, you can first see that, at the time I ran this git log command (late April 2023), the 6.3 kernel was just released! You can also see that seven rc kernels led up to this release, numbered as 6.3-rc1, 6.3-rc2, …, 6.3-rc7.

Delving further, we find what we’re after – you can clearly see that the stable 6.1 (LTS) kernel’s initial release date was 11 December 2022, and that its predecessor, the 6.0 tree, was released on 2 October 2022. You can also verify these dates by looking up other useful kernel resources, such as’s Releases page.

For the development series that ultimately led to the 6.1 kernel, this latter date (2 October 2022) marks the start of what is called the merge window for the next stable kernel for a period of approximately two weeks. In this period, developers are allowed to submit new code to the kernel tree. In reality, the actual work would have been going on from a lot earlier; the fruit of this work is now merged into mainline at this time, typically by subsystem maintainers.

We attempt to diagram a timeline of this work in Figure 2.1; you can see how the earlier kernels (from 3.0 onward) had 20 minor releases. More detail is shown for our target kernel: 6.1 LTS.

Figure 2.1: A rough timeline of modern Linux kernels, highlighting the one we’ll primarily work with (6.1 LTS)

Two weeks from the start of the merge window (2 October 2022) for the 6.1 kernel, on 16 October 2022, the merge window was closed and the rc kernel work started, with 6.1-rc1 being the first of the rc versions, of course. The -rc (also known as prepatch) trees work primarily on merging patches and fixing (regression and other) bugs, ultimately leading to what is determined by the chief maintainers (Linus Torvalds and Andrew Morton) to be a “stable” kernel tree.

The number of rc kernels or prepatches varies; typically, though, this “bugfix” window takes anywhere from 6 to 10 weeks, after which the new stable kernel is released. In the preceding output block, we can see that eight release candidate kernels (6.1-rc1 to 6.1-rc8) finally resulted in the stable release of the v6.1 tree on 11 December 2022, taking a total of 70 days, or 10 weeks. You can confirm this from the 6.x release history.
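That 70-day figure is easy to verify with a little date arithmetic (this sketch assumes GNU date, as found on Ubuntu):

```shell
# Sanity-check the 6.1 cycle length: the merge window opened with the
# v6.0 release (2 Oct 2022); v6.1 was tagged on 11 Dec 2022.
start=$(date -d 2022-10-02 +%s)   # seconds since the epoch
end=$(date -d 2022-12-11 +%s)
days=$(( (end - start) / 86400 ))
echo "$days days = $(( days / 7 )) weeks"
```

This prints 70 days = 10 weeks, matching the tag dates in the git log output above.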

Why does Figure 2.1 begin at the 2.6.12 kernel? The answer is simple: this is the first version from which Git kernel history is maintained.

Another question may be, why show the 5.4 (LTS) kernel in the figure? Because it too is an LTS kernel (projected EOL is December 2025), and it was the kernel we used in the first edition of this book!

Viewing the kernel’s Git log via its GitHub page

The kernel log can be seen more visually via the releases/tags page of Linus’s GitHub tree.

Figure 2.2: See the v6.1 tag with the -rc’n’ release candidates that ultimately result in the v6.1 kernel seen above it (read it bottom-up)

Figure 2.2 shows us the v6.1 kernel tag in this truncated screenshot. How did we get there? Clicking on the Next button (not shown here) several times leads to the remaining pages, where the v6.1-rc’n’ release candidate kernels can be spotted. Alternatively, simply navigate directly to the repository’s tags page.

The preceding screenshot is a partial one showing how two of the various v6.1-rc’n’ release candidate kernels, 6.1-rc7 and 6.1-rc8, ultimately resulted in the release of the LTS 6.1 tree on 11 December 2022.

The work never really stops: as can be seen, by early January 2023, the v6.2-rc3 release candidate went out, ultimately resulting in the v6.2 kernel on 19 February 2023. Then, the 6.3-rc1 kernel came out on 6 March 2023, and six more followed, ultimately resulting in the release of the stable 6.3 kernel on 23 April 2023. And so it continues…

Again, by the time you’re reading this, the kernel will be well ahead. But that’s okay – the 6.1 LTS kernel will be maintained for a relatively long while (recall, the projected EOL is December 2026) and it’s thus a very significant release to products and projects!

The kernel dev workflow in a nutshell

Generically, taking the 6.x kernel series as an example, the kernel development workflow is as follows. You can simultaneously refer to Figure 2.3, where we diagram how the 6.0 kernel evolves into 6.1 LTS.

  1. A 6.x stable release is made (for our purposes, consider x to be 0). Thus, the 2-week merge window for the 6.x+1 mainline kernel is opened.
  2. The merge window remains open for about two weeks and new patches are merged into the mainline kernel by the various subsystem maintainers, who have been carefully accepting patches from contributors and updating their trees for a long while.
  3. When around 2 weeks have elapsed, the merge window is closed.
  4. Now, the “bugfix” period ensues; the rc (or mainline, prepatch) kernels start. They evolve as follows: 6.x+1-rc1, 6.x+1-rc2, ..., 6.x+1-rcn are released. This process can take anywhere from 6 to 8 weeks.
  5. A “finalization” period ensues, typically about a week long. The stable release arrives and the new 6.x+1 stable kernel is released.
  6. The release is handed off to the “stable team”:
    • Significant bug or security fixes result in the release of 6.x+1.y : 6.x+1.1, 6.x+1.2, ... , 6.x+1.n. We’re going to primarily work with the 6.1.25 kernel, making the y value 25.
    • The release is maintained until the next stable release or End Of Life (EOL) date is reached, which for 6.1 LTS is projected as December 2026. Bug and security fixes will be applied right until then.

...and the whole process repeats.

Figure 2.3 : How a 6.x becomes a 6.x+1 kernel (example here particularly for how 6.0 evolves into 6.1)

So, let’s say you’re trying to submit a patch but miss the merge window. Well, there’s no help for it: you (or, more likely, the subsystem maintainer who’s pushing your patch series as part of several others) will just have to wait until the next merge window arrives in about 2.5 to 3 months. Hey, that’s the way it is; we aren’t in that great a rush.


Exercise: go through what we did – following the kernel’s evolution – for the current latest stable kernel.

So, when you now see Linux kernel releases, the names and the process involved will make sense. Let’s now move on to looking at the different types of kernel source trees out there.

Exploring the types of kernel source trees

There are several types of Linux kernel source trees. A key one is the Long Term Support (LTS) kernel. It’s simply a “special” release in the sense that the kernel maintainers will continue to backport important bug and security fixes to it until a given EOL date. By convention, the next kernel to be “marked” as an LTS release is the last one released each year, typically in December.

The “life” of an LTS kernel will usually be a minimum of 2 years, and it can be extended to go for several more. The 6.1.y LTS kernel that we will use throughout this book is the 23rd LTS kernel and has a projected lifespan of 4 years – from December 2022 to December 2026.

This image snippet from the Wikipedia page on Linux kernel version history says it all:

Figure 2.4: A look at the dev, supported (stable), and stable LTS kernels and their EOL dates; 6.1 LTS is what we work with (image credit: Wikipedia)

Interestingly, the 5.4 LTS kernel will be maintained until December 2025, and 5.10 LTS will be maintained for the same period as 6.1 LTS, up to December 2026.

Note, though, that there are several types of release kernels in the repository. Here, we mention an incomplete list, ordered from least to most stable (thus, their lifespan is from the shortest to longest time):

  • -next trees: This is indeed the bleeding edge (the tip of the arrow!), subsystem trees with new patches collected here for testing and review. This is what an upstream kernel contributor will work on. If you intend to upstream your patches to the kernel (contribute), you must work with the latest -next tree.
  • Prepatches, also known as -rc or mainline: These are release candidate kernels that get generated prior to a release.
  • Stable kernels: As the name implies, this is the business end. These kernels are typically picked up by distributions and other projects (at least to begin with). They are also known as vanilla kernels.
  • Distribution and LTS kernels: Distribution kernels are (obviously) the kernels provided by the distributions. They typically begin with a base vanilla/stable kernel. LTS kernels are the specially-maintained-for-a-longer-while kernels, making them especially useful for industry/production projects and products. Especially with regard to enterprise class distros, you’ll often find that many of them seem to be using “old” kernels. This can even be the case with some Android vendors. Now, even if uname -r shows that the kernel version is, say, 4.x based, it does not necessarily imply it’s old. No, the distro/vendor/OEM typically has a kernel engineering team (or outsources to one) that periodically updates the old kernel with new and relevant patches, especially critical security and bug fixes. So, though the version might appear outdated, it isn’t necessarily the case!

    Note, though, that this entails a huge workload on the kernel engineering team’s part, and it still is very difficult to keep up with the latest stable kernel; thus, it’s really best to work with vanilla kernels.
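A quick way to peek at this on any distro: compare the kernel’s version string with its build stamp. The release number may look dated while the build itself is recent, fully patched by the vendor’s kernel team:

```shell
# The kernel release string vs. the actual build of this kernel image:
# an "old" version number can hide a recently rebuilt, patched kernel.
uname -r    # release string, e.g. 5.19.0-40-generic
uname -v    # build number and timestamp of this particular kernel build
```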

In this book, we will work throughout with one of the latest LTS kernels with the longest lifetime.

As of the time of writing, it’s the 6.1.x LTS kernel, with a projected EOL date of December 2026, thus keeping this book’s content current and valid for years to come!

Two more types of kernel source trees require a mention here: Super LTS (SLTS) and chip (SoC) vendor kernels. SLTS kernels are maintained for even longer durations than the LTS kernels, by the Civil Infrastructure Platform (CIP), a Linux Foundation project. Quoting from their site:

”... The CIP project is focused on establishing an open source “base layer” of industrial grade software to enable the use and implementation of software building blocks in civil infrastructure projects. Currently, civil infrastructure systems are built from the ground up, with little re-use of existing software building blocks. The CIP project intends to create reusable building blocks that meet the safety, reliability, and other requirements of industrial and civil infrastructure.”

In fact, as of this writing, the latest CIP kernels – SLTS v6.1 and SLTS v6.1-rt – are based on the 6.1 LTS kernel, and their projected EOL date is August 2033 (10 years)! As well, the SLTS 5.10 and SLTS 5.10-rt kernels have a projected EOL of January 2031. See the CIP wiki site for the latest information.

LTS kernels – the new mandate

A quick and important update! In September 2023 at the Open Source Summit in Bilbao, Spain, Jonathan Corbet, in his famous “kernel report” talk, made an important announcement: LTS kernels will from now on be maintained for a period of only two years.

Plenty of resources cover this announcement, in case you’d like to look into it for yourself.

This might come as something of a surprise to many. Why only two years? Briefly, two reasons were given:

  • First, why maintain a kernel series for many years when people aren’t really using them? Many enterprise kernel vendors, as well as SoC ones, maintain their own kernels.
  • Next, an unfortunate and serious issue: maintainer fatigue. It’s hard for the kernel community to keep all these LTS kernels – there are 7 major LTS versions as of now (4.14, 4.19, 5.4, 5.10, 5.15, 6.1, and 6.6) – continuously maintained! Furthermore, the burden only increases with time. For example, the 4.14 LTS series has had about 300 updates and close to 28,000 commits. Besides, old bugs eventually surface, and a lot of work ensues to fix them in modern LTS kernels. Not just that, but new bugs that surface in later kernels must be fixed and then back-ported to the older still-maintained LTS kernels. It all adds up.

As of this time, it appears that the kernel we work with here – 6.1 LTS – will be maintained for the usual time, until December 2026.

Moving along, let’s now briefly discuss the second type of kernel source tree mentioned earlier: chip (SoC) vendor kernels. Silicon vendors tend to maintain their own kernels for the various boards/silicon that they support. They typically base their kernel on an existing vanilla LTS kernel (not necessarily the latest one!) and then build upon it, adding their vendor-specific patches, Board Support Package (BSP) code, drivers, and so on. Of course, as time goes by, the difference between their kernel and the latest stable one can become quite significant, leading to difficult maintenance issues, like the need to constantly backport critical security/bugfix patches.

When working on a project using such silicon, perhaps the best approach is to base your work on an existing industry-strength solution like the Yocto Project, which does a great job of keeping recent LTS kernels, with vendor layers applied, in sync with key security/bugfix patches. For example, as of this writing, the latest stable Yocto release – Nanbield 4.3 – supports both the 6.1 LTS as well as the more recent 6.5 non-LTS kernel; of course, the particular version can vary with the architecture (processor family).

So, the types of kernel source trees out there are aplenty. Nevertheless, I refer you to’s Releases page to obtain details on the types of release kernels. Again, for even more detail, see the kernel documentation on How the development process works.

Querying the repository,, in a non-interactive, scriptable fashion can be done using curl. The following output is the state of Linux as of 06 December 2023:

$ curl -sL
The latest stable version of the Linux kernel is:             6.6.4
The latest mainline version of the Linux kernel is:           6.7-rc4
The latest stable 6.6 version of the Linux kernel is:         6.6.4
The latest stable 6.5 version of the Linux kernel is:         6.5.13 (EOL)
The latest longterm 6.1 version of the Linux kernel is:       6.1.65
The latest longterm 5.15 version of the Linux kernel is:      5.15.141
The latest longterm 5.10 version of the Linux kernel is:      5.10.202
The latest longterm 5.4 version of the Linux kernel is:       5.4.262
The latest longterm 4.19 version of the Linux kernel is:      4.19.300
The latest longterm 4.14 version of the Linux kernel is:      4.14.331
The latest linux-next version of the Linux kernel is:         next-20231206
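If you only want the longterm (LTS) lines from such output, a small awk filter suffices. To keep the example self-contained (no network access needed), we run it over an inline snapshot of a few lines of the banner shown above:

```shell
# Filter the longterm (LTS) series and their latest versions out of a
# saved snapshot of the banner (normally you'd pipe curl's output in).
banner='The latest stable version of the Linux kernel is:             6.6.4
The latest longterm 6.1 version of the Linux kernel is:       6.1.65
The latest longterm 5.15 version of the Linux kernel is:      5.15.141'
lts=$(printf '%s\n' "$banner" | awk '/longterm/ { print $4, $NF }')
printf '%s\n' "$lts"
```

This prints each longterm series (field 4) alongside its latest version (the last field), here 6.1 6.1.65 and 5.15 5.15.141.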

By the time you read this, it’s extremely likely – certain, in fact – that the kernel has evolved further, and later versions show up. For a book such as this, the best I can do is pick close to the latest stable LTS kernel with the longest projected EOL date at the time of writing: 6.1.x LTS.

Of course, it’s happened already! The 6.6 kernel was released on 29 October 2023 and, as of the time of writing (just before going to print), it has in fact been marked as an LTS kernel, with a projected EOL date of December 2026 (the same as that of 6.1 LTS).

To help demonstrate this point, note that the first edition of this book used the 5.4.0 kernel, as 5.4 was the LTS kernel series with the longest lifetime at the time of its writing. Today, as I pen this, 5.4 LTS is still maintained with a projected EOL in December 2025 and the latest stable version is 5.4.268.

Which kernel should I run?

So, with all the types of kernels we’ve seen, it really does raise the question: which kernel should I run? The answer is nuanced, since it depends on the environment. Is it for embedded, desktop, or server usage? Is it a new project or a legacy one? What’s the intended maintenance period? Is it an SoC kernel maintained by a vendor? What are the security requirements? Still, the “right answer,” straight from the mouths of senior kernel maintainers, is this:

Run the latest stable update. That is the most stable, the most secure, the best kernel we know how to create at this time. That’s the very best we can do. You should run that.

– Jon Corbet, September 2023

Tip: Point your browser to; do you see the big yellow button with the kernel release number inside it? That’s the latest stable kernel as of today.

You have to take all of the stable/LTS releases in order to have a secure and stable system. If you attempt to cherry-pick random patches you will not fix all of the known, and unknown, problems, but rather you will end up with a potentially more insecure system, and one that contains known bugs.

– Greg Kroah-Hartman

Greg Kroah-Hartman’s practical blog article, What Stable Kernel Should I Use (August 2018), echoes these thoughts.

Right, now that we’re armed with the knowledge of kernel version nomenclature and types of kernel source trees, it’s definitely time to begin our journey of building our kernel.

Steps to build the kernel from source

As a convenient and quick reference, the following are the main, key steps required to build a Linux kernel from source. As the explanation for each of them is pretty detailed, you can refer back to this summary to see the big picture. The steps are as follows:

  1. Obtain a Linux kernel source tree through either of the following options:
    • Downloading a specific kernel source tree as a compressed file
    • Cloning a (kernel) Git tree
  2. Extract the kernel source tree into some location in your home directory (skip this step if you obtained a kernel by cloning a Git tree).
  3. Configure: Get a starting point for your kernel config (the approach varies). Then edit it, selecting the kernel support options as required for the new kernel. The recommended way of doing this is with make menuconfig.
  4. Build the kernel image, the loadable modules, and any required Device Tree Blobs (DTBs) with make [-j'n'] all. This builds the compressed kernel image (arch/<arch>/boot/[b|z|u]Image), the uncompressed kernel image (vmlinux), the file, the kernel module objects, and any configured DTB files.
  5. Install the just-built kernel modules (on x86) with sudo make [INSTALL_MOD_PATH=<prefix-dir>] modules_install. This step installs kernel modules by default under /lib/modules/$(uname -r)/ (the INSTALL_MOD_PATH environment variable can be leveraged to change this).
  6. Bootloader (x86): Set up the GRUB bootloader and the initramfs (earlier called initrd) image: sudo make [INSTALL_PATH=</new/boot/dir>] install
    • This creates and installs the initramfs or initrd image under /boot (the INSTALL_PATH environment variable can be leveraged to change this).
    • It updates the bootloader configuration file to boot the new kernel (first entry).
  7. Customize the GRUB bootloader menu (optional).
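As a quick illustrative sketch (not a turnkey script), the steps above map to a shell session roughly like the following. It assumes the 6.1.25 LTS tarball is already in ~/Downloads and, by default, only prints the commands (DRY_RUN=1) rather than running them, since a real build takes considerable time and disk space:

```shell
#!/bin/bash
# Illustrative outline of build steps 1-7 (assumes the 6.1.25 LTS tarball
# is already in ~/Downloads). With DRY_RUN=1 (the default here), each
# command is only echoed, not executed.
KVER=6.1.25
KSRC=~/kernels/linux-${KVER}
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run mkdir -p ~/kernels                                     # step 2: extract...
run tar xf ~/Downloads/linux-${KVER}.tar.xz -C ~/kernels/  # ...the source tree
run cd "${KSRC}"
run make menuconfig              # step 3: configure the kernel
run make -j"$(nproc)" all        # step 4: build kernel image, modules, DTBs
run sudo make modules_install    # step 5: install the kernel modules
run sudo make install            # step 6: initramfs + bootloader update
# step 7 (optional): customize the GRUB bootloader menu
```

Set DRY_RUN=0 only once you've read through the rest of this chapter and the next; the individual steps are explained in detail there.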

This chapter, being the first of two on this kernel build topic, will cover steps 1 to 3, with a lot of required background material thrown in as well. The next chapter will cover the remaining steps, 4 to 7. So, let’s begin with step 1.

Step 1 – Obtaining a Linux kernel source tree

In this section, we will see two broad ways in which you can obtain a Linux kernel source tree:

  • By downloading and extracting a specific kernel source tree from the Linux kernel public repository, kernel.org
  • By cloning Linus Torvalds’ source tree (or others’) – for example, the linux-next Git tree.

How do you decide which approach to use? For most developers working on a project or product, the decision has already been made – the project uses a very specific Linux kernel version. You will thus download that particular kernel source tree, quite possibly apply project-specific patches to it as required, and use it.

For folks whose intention is to contribute or upstream code to the mainline kernel, the second approach – cloning the Git tree – is the way to go. Of course, there’s more to it; we described some details in the Exploring the types of kernel source trees section.

In the following section, we demonstrate both approaches to obtaining a kernel source tree. First, we describe the approach where a particular kernel source tree (not a Git tree) is downloaded from the kernel repository. We choose the 6.1.25 LTS Linux kernel for this purpose. So, for all practical purposes for this book, this is the approach to use. In the second approach, we clone a Git tree.

Downloading a specific kernel tree

Firstly, where is the kernel source code? The short answer is that it’s on the public kernel repository server, https://www.kernel.org. The home page of this site displays the latest stable Linux kernel version, as well as the latest longterm and linux-next releases. The following screenshot shows the site as of 25 April 2023 (dates are shown in the yyyy-mm-dd format):

Figure 2.5: The kernel.org site (as of 25 April 2023) with the 6.1 LTS kernel highlighted

A quick reminder: we also provide a PDF file that has the full-color images of the screenshots/diagrams used in this book. You can download it here:

There are many ways to download a compressed kernel source file from this server and/or its mirrors. Let’s look at two of them:

  • An interactive, and perhaps the simplest, way is to visit the preceding website and simply click on the appropriate tarball link within your web browser. The browser will download the file (in .tar.xz format) to your system.
  • You can also download any kernel source tree in compressed form by navigating to https://mirrors.edge.kernel.org/pub/linux/kernel/ and selecting the major version; practically speaking, for the major version 6 kernels, the URL is https://mirrors.edge.kernel.org/pub/linux/kernel/v6.x/. Browse or search within this page for the kernel you want. For example, check out the following screenshot:

Figure 2.6: Partial screenshot from the kernel.org v6.x directory listing, highlighting the (6.1.25 LTS) kernel we’ll download and work with

The tar.gz and tar.xz files have identical content; it’s just the compression type that differs. You can see that it’s typically quicker to download the .tar.xz files as they’re smaller.

  • Alternatively, you can download the kernel source tree from the command line using the wget utility (the powerful curl utility works too). For example, to download the stable 6.1.25 LTS kernel source compressed file, we type the following in one line:
    wget --https-only -O ~/Downloads/linux-6.1.25.tar.xz https://mirrors.edge.kernel.org/pub/linux/kernel/v6.x/linux-6.1.25.tar.xz

This will securely download the 6.1.25 compressed kernel source tree to your computer’s ~/Downloads folder. So, go ahead and do this, get the 6.1.25 (LTS) kernel source code onto your system!
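As an aside, here's a hedged sketch of the curl equivalent (the actual transfer line is commented out so you can adjust the version first; the URL follows kernel.org's standard v6.x directory layout):

```shell
# Download the same tarball with curl instead of wget. -L follows
# kernel.org's redirects to a mirror; -o sets the local filename.
KVER=6.1.25
URL="https://mirrors.edge.kernel.org/pub/linux/kernel/v6.x/linux-${KVER}.tar.xz"
echo "Fetching: ${URL}"
# curl -L -o ~/Downloads/linux-${KVER}.tar.xz "${URL}"
```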

Cloning a Git tree

If you’re a developer looking to contribute code upstream, you must work on the very latest version of the Linux kernel code base. Well, there are fine gradations of what exactly constitutes the latest version within the kernel community. As mentioned earlier, the linux-next tree, and some specific branch or tag within it, is the one to work on for this purpose.

In this book, though, we do not intend to delve into the gory details of setting up a linux-next tree. This process is already very well documented; see the Further reading section of this chapter for detailed links. The kernel documentation’s Working with linux-next page details how exactly you should clone a linux-next tree and, as mentioned there, the linux-next tree is the holding area for patches aimed at the next kernel merge window. If you’re doing bleeding-edge kernel development, you likely want to work from that tree rather than Linus Torvalds’ mainline tree or a source tree from the general kernel repository at kernel.org.

For our purposes, cloning the mainline Linux Git repository (in effect, Linus Torvalds’ Git tree) is more than sufficient. Do so like this:

git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux

Note that cloning a complete Linux kernel tree is a time-, network-, and disk-consuming operation! Ensure you have sufficient disk space free (at least a few gigabytes worth).

Performing git clone --depth n <...>, where n is an integer value, can be useful to limit the depth of history (commits) and thus keep the download/disk usage lower. As the man page on git-clone mentions for the --depth option: “Create a shallow clone with a history truncated to a specified number of commits.” Also, FYI, to undo the “shallow fetch” and fetch everything, just do a git pull --unshallow.

The git clone command can take a while to finish. Further, you can specify that you want the latest stable version of the kernel Git tree by running git clone as shown below; for now, and only if you intend to work on this mainline Git tree, we’ll just bite the bullet and clone the stable kernel Git tree with all its storied history (again, type this on one line):

git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git

Now switch to the directory the clone created:

cd linux-stable

Again, if you intend to work on this Git tree, please skip the Step 2 – extracting the kernel source tree section as the git clone operation will, in any case, extract the source tree. Instead, continue with the Step 3 – configuring the Linux kernel section that follows it. This does imply, though, that the kernel source tree version you’re using will be much different from the 6.1.25 one that we use in this book. Thus, I’d suggest you treat this portion as a demo of how to obtain the latest stable Git tree via git, and leave it at that.

Finally, yet another way to download a given kernel is provided by the kernel maintainers, who offer a script that safely downloads a given Linux kernel source tree, verifying its PGP signature.

Step 2 – Extracting the kernel source tree

In the previous section, in step 1, you learned how exactly you can obtain a Linux kernel source tree. One way – and the one we follow in this book – is to simply download a compressed source file from the website (or one of its mirror sites). Another way is to use Git to clone a recent kernel source tree.

So, I’ll assume that by now you have obtained the 6.1.25 (LTS) kernel source tree in compressed form onto your Linux box. With it in place, let’s proceed with step 2, a simple step, where we learn how to extract it.

As mentioned earlier, this section is meant for those of you who have downloaded a particular compressed Linux kernel source tree from the kernel.org repository and aim to build it. In this book, we work primarily on the 6.1 longterm kernel series, particularly on the 6.1.25 LTS kernel.

On the other hand, if you have performed git clone on the mainline Linux Git tree, as shown in the immediately preceding section, you can safely skip this section and move on to the next one – Step 3 – Configuring the Linux kernel.

Right; now that the download is done, let’s proceed further. The next step is to extract the kernel source tree – remember, it’s a tar-ed and compressed (typically .tar.xz) file. At the risk of repetition, we assume that by now you have downloaded the Linux kernel version 6.1.25 code base as a compressed file into the ~/Downloads directory:

$ cd ~/Downloads ; ls -lh linux-6.1.25.tar.xz
-rw-rw-r-- 1 c2kp c2kp  129M  Apr 20 16:13  linux-6.1.25.tar.xz

The simple way to uncompress and extract this file is by using the ubiquitous tar utility:

tar xf ~/Downloads/linux-6.1.25.tar.xz

This will extract the kernel source tree into a directory named linux-6.1.25 within the ~/Downloads directory. But what if we would like to extract it into another folder, say, ~/kernels? Then, do it like so:

mkdir -p ~/kernels
tar xf ~/Downloads/linux-6.1.25.tar.xz -C ~/kernels/

This will extract the kernel source into the ~/kernels/linux-6.1.25/ folder. As a convenience and good practice, let’s set up an environment variable to point to the location of the root of our shiny new kernel source tree:

export LKP_KSRC=~/kernels/linux-6.1.25

Note that, going forward, we will assume that this variable LKP_KSRC holds the location of our 6.1.25 LTS kernel source tree.

While you could always use a GUI file manager application, such as Nautilus, to extract the compressed file, I strongly urge you to get familiar with using the Linux CLI to perform these operations.

Don’t forget tldr when you need to quickly look up the most frequently used options to common commands! Take tar, for example: simply run tldr tar to see common tar invocations.

Did you notice? We can extract the kernel source tree into any directory under our home directory, or elsewhere. This is unlike in the old days, when the tree was always extracted into a root-writeable location, often /usr/src/.

If all you wish to do now is proceed with the kernel build recipe, skip the following section and move along. If you’re interested (I certainly hope so!), the next section is a brief but important digression into looking at the structure and layout of the kernel source tree.

A brief tour of the kernel source tree

Imagine! The entire Linux kernel source code is now available on your system! Awesome – let’s take a quick look at it:

Figure 2.7: The root of the pristine 6.1.25 Linux kernel source tree

Great! How big is it? A quick du -sh . issued within the root of the uncompressed kernel source tree reveals that this kernel source tree (recall, its version is 6.1.25) is approximately 1.5 gigabytes in size!

FYI, the Linux kernel has grown to be big and is getting bigger in terms of Source Lines of Code (SLOCs). Current estimates are close to 30 million SLOCs. Of course, do realize that not all this code will get compiled when building a kernel.
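If you'd like to roughly verify such numbers yourself, a crude count over the .c/.h files gets you in the ballpark; note that this simplistic approach also counts comments and blank lines, so it overestimates true SLOCs:

```shell
# count_lines: crude "SLOC" estimate - total lines across all .c/.h
# files under the given directory (includes comments and blank lines,
# so it's an upper bound, not a true SLOC count).
count_lines() {
    find "$1" -type f \( -name '*.c' -o -name '*.h' \) -print0 |
        xargs -0 cat 2>/dev/null | wc -l
}
# Typical usage, pointing it at wherever you extracted the source:
# count_lines ~/kernels/linux-6.1.25
```

A dedicated tool like cloc or sloccount gives far more accurate per-language breakdowns, if you're curious.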

How do we know which version exactly of the Linux kernel this code is by just looking at the source? That’s easy: one quick way is to just check out the first few lines of the project’s Makefile. Incidentally, the kernel uses Makefiles all over the place; most directories have one. We will refer to this Makefile, the one at the root of the kernel source tree, as the top-level Makefile:

$ head Makefile
# SPDX-License-Identifier: GPL-2.0
VERSION = 6
PATCHLEVEL = 1
SUBLEVEL = 25
EXTRAVERSION =
NAME = Hurr durr I'ma ninja sloth

# *DOCUMENTATION*
# To see a list of typical targets execute "make help"
# More info can be located in ./README

Clearly, it’s the source of the 6.1.25 kernel. We covered the meaning of the VERSION, PATCHLEVEL, SUBLEVEL, and EXTRAVERSION tags – corresponding directly to the w.x.y.z nomenclature – in the Understanding the Linux kernel release nomenclature section. The NAME tag is simply a nickname given to the release (looking at it here – well, what can I say: that’s kernel humor for you. I personally preferred the NAME for the 5.x kernels – it’s “Dare mighty things”!).
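The w.x.y[.z] string is literally just these tags concatenated. To see the mechanism without needing the full tree handy, here's a self-contained simulation (a throwaway mini Makefile in a temp directory) mimicking the kernel's version tags; the real top-level Makefile provides a kernelversion target that prints the very same string:

```shell
# Mimic the kernel's top-level Makefile version tags; on a real tree,
# 'make -s kernelversion' (run from the source root) prints exactly
# this concatenation.
d=$(mktemp -d)
printf '%s\n' \
    'VERSION = 6' \
    'PATCHLEVEL = 1' \
    'SUBLEVEL = 25' \
    'EXTRAVERSION =' \
    'kernelversion:' > "$d/Makefile"
printf '\t@echo $(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)\n' >> "$d/Makefile"
make -s -C "$d" kernelversion    # prints: 6.1.25
```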

Right, let’s now get for ourselves a zoomed-out 10,000-foot view of this kernel source tree. The following table summarizes the broad categorization and purpose of the more important files and directories within the root of the Linux kernel source tree. Cross-reference it with Figure 2.7:

File or directory name

Purpose in brief

Top-level files


README

The project’s README file. It informs us as to where the official kernel documentation is kept – spoiler alert! it’s in the directory called Documentation – and how to begin using it. The modern kernel documentation is online as well and very nicely done: https://docs.kernel.org.

The documentation is really important; it’s the authentic thing, written by the kernel developers themselves. Do read this short README file first! See more below, point [1].


COPYING

This file details the license terms under which the kernel source is released. The vast majority of kernel source files are released under the well-known GNU GPL v2 (written as GPL-2.0) license. The modern trend is to use easily grep-pable, industry-aligned SPDX license identifiers; the full list is at https://spdx.org/licenses/.

See more below, point [2].


MAINTAINERS

FAQ: something’s wrong in kernel component (or file) XYZ – who do I contact to get some support?

That is precisely what this file provides – the list of all kernel subsystems along with their maintainer(s). This goes all the way down to the level of individual components, such as a particular driver or file, as well as its status, who is currently maintaining it, the mailing list, website, and so on. Very helpful! There’s even a helper script to find the person or team to talk to: scripts/get_maintainer.pl. See more, point [3].


Makefile

This is the kernel’s top-level Makefile; the kernel’s Kbuild build system as well as kernel modules use this Makefile for the build.

Major subsystem directories


kernel/

Core kernel subsystem: the code here deals with a large number of core kernel features including stuff like process/thread life cycle management, CPU task scheduling, locking, cgroups, timers, interrupts, signaling, modules, tracing, RCU primitives, [e]BPF, and more.


mm/

The bulk of the memory management (mm) code lives here. We will cover a little of this in Chapter 6, Kernel Internals Essentials – Processes and Threads, and some related coverage in Chapter 7, Memory Management Internals – Essentials, and Chapter 8, Kernel Memory Allocation for Module Authors – Part 1, as well.


fs/

The code here implements two key filesystem features: the abstraction layer – the kernel Virtual Filesystem Switch (VFS) – and the individual filesystem drivers (for example, ext[2|4], btrfs, nfs, ntfs, overlayfs, squashfs, jffs2, fat, f2fs, isofs, and so on).


block/

The underlying block I/O code path to the VFS/FS. It includes the code implementing the page cache, a generic block IO layer, IO schedulers, the new-ish blk-mq features, and so on.


net/

Complete implementation of the network protocol stack, to the letter of the Request For Comments (RFCs). It includes high-quality implementations of TCP, UDP, IP, and many more networking protocols. Want to see the code-level implementation of TCP/IP for IPv4? It’s here: net/ipv4/; see the tcp*.c and ip*.c sources, besides others.


ipc/

The Inter-Process Communication (IPC) subsystem code; the implementation of IPC mechanisms such as SysV and POSIX message queues, shared memory, semaphores, and so on.


sound/

The audio subsystem code, aka the Advanced Linux Sound Architecture (ALSA) layer.


virt/

The virtualization (hypervisor) code; the popular and powerful Kernel Virtual Machine (KVM) is implemented here.



Documentation/

The official kernel documentation resides right here; it’s important to get familiar with it. The README file refers to its online version.


LICENSES/

The text of all licenses, categorized under different heads. See point [2].


arch/

The arch-specific code lives here (by the word arch, we mean CPU). Linux started as a small hobby project for the i386. It is now very probably the most ported OS ever. See the arch ports in point [4] of the list that follows this table.


certs/

Support code for generating signed modules; this is a powerful security feature, which when correctly employed ensures that even malicious rootkits cannot simply load any kernel module they desire.


crypto/

This directory contains the kernel-level implementation of ciphers (as in encryption/decryption algorithms, or transformations) and kernel APIs to serve consumers that require cryptographic services.


drivers/

The kernel-level device drivers code lives here. This is considered a non-core region; it’s classified into many types of drivers. This tends to be the region that’s most often being contributed to; as well, this code accounts for the most disk space within the source tree.


include/

This directory contains the arch-independent kernel headers. There are also some arch-specific ones under arch/<cpu>/include/....


init/

The arch-independent kernel initialization code; perhaps the closest we get to the kernel’s main function is init/main.c:start_kernel(), with the start_kernel() function being considered the early C entry point during kernel initialization. (You can browse it for 6.1.25 with Bootlin’s superb Elixir web code browser: https://elixir.bootlin.com/linux/v6.1.25/source/init/main.c.)


io_uring/

Kernel infrastructure for implementing the new-ish io_uring fast I/O framework; see point [5].


lib/

The closest equivalent to a library for the kernel. It’s important to understand that the kernel does not support shared libraries as user space apps do. Some of the code here is auto-linked into the kernel image file and hence is available to the kernel at runtime. Various useful components exist within lib/: [un]compression, checksum, bitmap, math, string routines, tree algos, and so on.


rust/

Kernel infrastructure for supporting the Rust programming language; see point [6].


samples/

Sample code for various kernel features and mechanisms; useful to learn from!


scripts/

Various scripts are housed here, some of which are used during kernel build, many for other purposes like static/dynamic analysis, debugging, and so on. They’re mostly Bash and Perl scripts. (FYI, and especially for debugging purposes, I have covered many of these scripts in Linux Kernel Debugging, 2022.)


security/

Houses the kernel’s Linux Security Module (LSM), a Mandatory Access Control (MAC) framework that aims at imposing stricter access control of user apps to kernel space than the default kernel does. The default model is called Discretionary Access Control (DAC). Currently, Linux supports several LSMs; well-known ones are SELinux, AppArmor, Smack, Tomoyo, Integrity, and Yama. Note that LSMs are “off” by default.


tools/

The source code of various user mode tools is housed here, mostly applications or scripts that have a “tight coupling” with the kernel and are thus kept within the kernel codebase. Perf, a modern CPU profiling tool, eBPF tooling, and some tracing tools serve as excellent examples.


usr/

Support code to generate and load the initramfs image; this allows the kernel to execute user space code during kernel init. This is often required; we cover initramfs in Chapter 3, Building the 6.x Linux Kernel from Source – Part 2, section Understanding the initramfs framework.

Table 2.2: Layout of the Linux kernel source tree

The following are some important explanations from the table:

  1. README: This file also mentions the document to refer to for info on the minimal acceptable versions of software to build and run the kernel: Documentation/process/changes.rst. Interestingly, the kernel provides an Awk script (scripts/ver_linux) that prints the versions of current software on the system it’s run upon, helping you to check whether the versions you have installed are acceptable.
  2. Kernel licensing: Without getting stuck in the legal details (needless to say, I am not a lawyer), here’s the pragmatic essence of the thing. As the kernel is released under the GNU GPL-2.0 license (GNU GPL is the GNU General Public License), any project that directly uses the kernel code base automatically falls under this license. This is the “derivative work” property of the GPL-2.0. Legally, these projects or products must now release their kernel software under the same license terms. Practically speaking, the situation on the ground is a good deal hazier; many commercial products that run on the Linux kernel do have proprietary user- and/or kernel-space code within them. They typically do so by refactoring kernel (most often, device driver) work in Loadable Kernel Module (LKM) format. It is possible to release the kernel module (LKM) under a dual-license model. The LKM is the subject matter of Chapter 4, Writing Your First Kernel Module – Part 1, and Chapter 5, Writing Your First Kernel Module – Part 2, and we cover some information on the licensing of kernel modules there.

    Some folks, preferring proprietary licenses, manage to release their kernel code within a kernel module that is not licensed under GPL-2.0 terms; technically, this is perhaps possible, but is at the very least considered as being terribly anti-social and can even cross the line to being illegal. The interested among you can find more links on licensing in the Further reading document for this chapter.

  3. MAINTAINERS: Just peek at this file in the root of your kernel source tree! Interesting stuff... To illustrate how it’s useful, let’s run a helper Perl script: scripts/get_maintainer.pl. Do note that, pedantically, it’s meant to be run on a Git tree only. Here, we ask the script to show the maintainers of the kernel CPU task scheduling code base by specifying a file or directory via the -f switch:
    $ scripts/get_maintainer.pl --nogit -f kernel/sched
    Ingo Molnar <> (maintainer:SCHEDULER)
    Peter Zijlstra <> (maintainer:SCHEDULER)
    Juri Lelli <> (maintainer:SCHEDULER)
    Vincent Guittot <> (maintainer:SCHEDULER)
    Dietmar Eggemann <> (reviewer:SCHEDULER)
    Steven Rostedt <> (reviewer:SCHEDULER)
    Ben Segall <> (reviewer:SCHEDULER)
    Mel Gorman <> (reviewer:SCHEDULER)
    Daniel Bristot de Oliveira <> (reviewer:SCHEDULER)
    Valentin Schneider <> (reviewer:SCHEDULER) (open list:SCHEDULER)
  4. Linux arch (CPU) ports: As of 6.1, the Linux OS has been ported to all these processors. Most have MMUs. You can see the arch-specific code under the arch/ folder, each directory representing a particular CPU architecture:
    $ cd ${LKP_KSRC} ; ls arch/
    alpha/      arc/     arm/          arm64/   csky/    hexagon/   ia64/     Kconfig
    loongarch/  m68k/    microblaze/   mips/    nios2/   openrisc/  parisc/   powerpc/
    riscv/      s390/    sh/           sparc/   um/      x86/       xtensa/

    In fact, when cross-compiling, the ARCH environment variable is set to the name of one of these folders, in order to compile the kernel for that architecture. For example, when building target “foo” for the AArch64, we’d typically do something like make ARCH=arm64 CROSS_COMPILE=<...> foo.

    As a kernel or driver developer, browsing the kernel source tree is something you will have to get quite used to (and even grow to enjoy!). Searching for a particular function or variable can be a daunting task when the code is in the ballpark of 30 million SLOCs, though! Do learn to use efficient code browsing tools. I suggest the ctags and cscope Free and Open Source Software (FOSS) tools. In fact, the kernel’s top-level Makefile has targets for precisely these: make [ARCH=<cpu>] tags and make [ARCH=<cpu>] cscope. A must-do! (With cscope, with ARCH set to null – the default – it builds the index for the x86[_64]. To generate the tags relevant to the AArch64, for example, run make ARCH=arm64 cscope.) FYI, several other code-browsing tools exist, of course; another good one is OpenGrok.

  5. io_uring: It’s not an exaggeration to say that io_uring and eBPF are considered to be two of the new(-ish) “magic features” that a modern Linux system provides (the io_uring folder here is the kernel support for this feature)! The reason database/network-centric folks are going ga-ga over io_uring is simple: performance. This framework dramatically improves performance numbers in real-world high I/O situations, for both disk and network workloads. Its shared (between user and kernel space) ring buffer architecture, zero-copy schema, and ability to use far fewer system calls compared to typical older AIO frameworks – including a polled-mode operation – make it an enviable feature. So, for your user space apps to get on that really fast I/O path, check out io_uring. The Further reading section for this chapter carries useful links.
  6. Rust in the kernel: Yes, indeed, there was a lot of hoopla about the fact that basic support for the Rust programming language has made it into the Linux kernel (first in 6.0). Why? Rust does have a well-advertised advantage over even our venerable C language: memory safety. The fact is, even today, many of the biggest programming-related security headaches for code written in C/C++ – for both OS/drivers as well as user space apps – have memory-safety issues at their root (like the well-known Buffer Overflow (BoF) defect). These occur when developers introduce memory corruption defects (bugs!) into their C/C++ code, leading to vulnerabilities that clever attackers are always on the lookout for, and exploit! Having said all that, at least as of now, Rust has made a very minimal entry into the kernel – no core code uses it. The current Rust support within the kernel is there to enable writing kernel modules in Rust in the future. (There is a bit of sample Rust code, of course, here: samples/rust/.) Rust usage in the kernel will certainly increase with time... The Further reading section has some links on this topic – do check it out, if interested.

We have now completed step 2, the extraction of the kernel source tree! As a bonus, you also learned the basics regarding the layout of the kernel source. Let’s now move on to step 3 of the process and learn how to configure the Linux kernel prior to building it.

Step 3 – Configuring the Linux kernel

Configuring the kernel is perhaps the most critical step in the kernel build process. One of the many reasons Linux is a critically acclaimed OS is its versatility. It’s a common misconception to think that there is a separate Linux kernel code base for an enterprise-class server, a data center, a workstation, and a tiny, embedded Linux device – no, they all use the very same unified Linux kernel source! Thus, carefully configuring the kernel for a particular use case (server, desktop, embedded, or hybrid/custom) is a powerful feature and a requirement. This is precisely what we are delving into here.

You’ll probably find that this topic – arriving at a working kernel config – tends to be a long and winding discussion, but it’s ultimately worth it; do take the time and trouble to read through it.

Also, to build a kernel, you must carry out the kernel configuration step regardless. Even if you feel you do not require any changes to the existing or default config, it’s very important to run this step at least once as part of the build process. Otherwise, certain headers that are auto-generated here will be missing and cause issues later. At the very least, the make old[def]config step should be carried out. This will set up the kernel config to that of the existing system with answers to config options being requested from the user only for any new options.

Next, we cover a bit of required background on the kernel build system.

Note: you might find that if you’re completely new to configuring the kernel, the following details might feel a bit overwhelming at first. In this case, I suggest you skip it on first reading, move ahead with the practicalities of configuring the kernel, and then come back to this section.


Minimally understanding the Kconfig/Kbuild build system

The infrastructure that the Linux kernel uses to configure the kernel is known as the Kconfig system, and to build it, there is the Kbuild infrastructure. Without delving into the gory details, the Kconfig + Kbuild system ties together the complex kernel configuration and build process by separating out the work into logical streams:

  • Kconfig – the infrastructure to configure the kernel; it consists of two logical parts:
    • The Kconfig language: it’s used to specify the syntax within the various Kconfig[.*] files, which in effect specify the “menus” where kernel config options are selected.
    • The Kconfig parsers: tools that intelligently parse the Kconfig[.*] files, figure out dependencies and auto-selections, and generate the menu system. Among them is the commonly used make menuconfig, which internally invokes the mconf tool (the code’s under scripts/kconfig).
  • Kbuild – the support infrastructure to build the source code into kernel binary components. It mainly uses a recursive make style build, originating at the kernel top-level Makefile, which, in turn recursively parses the content of hundreds of Makefiles embedded into sub-directories within the source (as required).

A diagram that attempts to convey information regarding this Kconfig/Kbuild system (in a simplified way) can be seen in Figure 2.8. Several details aren’t covered yet; still, you can keep it in mind while reading the following materials.

To help you gain a better understanding, let’s look at a few of the key components that go into the Kconfig/Kbuild system:

  • The CONFIG_FOO symbols
  • The menu specification file(s), named Kconfig[.*]
  • The Makefile(s)
  • The overall kernel config file – .config – itself.

The purposes of these components are summarized as follows:

Kconfig/Kbuild component

Purpose in brief

Kconfig: Config symbol: CONFIG_FOO

Every kernel configurable FOO is represented by a CONFIG_FOO macro. Depending on the user’s choice, the macro will resolve to one of y, m, or n:

  • y=yes: this implies building the config or feature FOO into the kernel image itself
  • m=module: this implies building it as a separate object, a kernel module (a .ko file)
  • n=no: this implies not building the feature

Note that CONFIG_FOO is an alphanumeric string. We will soon see a means to look up the precise config option name via the make menuconfig UI.


Kconfig.* files

This is where the CONFIG_FOO symbol is defined. The Kconfig syntax specifies its type (Boolean, tristate, [alpha]numeric, and so on) and dependency tree. Furthermore, for the menu-based config UI (invoked via one of make [menu|g|x]config), it specifies the menu entries themselves. We will, of course, make use of this feature later.

Kbuild: Makefile(s)

The Kbuild system uses a recursive make Makefile approach. The Makefile in the root of the kernel source tree is called the top-level Makefile, typically with a Makefile within each sub-folder to build the source there. The 6.1 kernel source has over 2,700 Makefiles in all!

The .config file

Ultimately, the kernel configuration distills down to this file; .config is the final kernel config file. It’s generated and stored within the kernel source tree root folder as a simple ASCII text file. Keep it safe, as it’s a key part of your product. Note that the config filename can be overridden via the environment variable KCONFIG_CONFIG.

Table 2.3: Major components of the Kconfig+Kbuild build system
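As the table notes, .config is a simple ASCII text file, so ordinary text tools work on it. The fragment below is simulated (the option values are made up purely for illustration); on a real source tree, the bundled scripts/config helper script offers a richer interface:

```shell
# A .config is just CONFIG_FOO=y|m lines, plus "# CONFIG_BAR is not set"
# comment lines for disabled options. Simulated fragment (made-up values):
d=$(mktemp -d)
printf '%s\n' \
    'CONFIG_IKCONFIG=y' \
    'CONFIG_VIRTIO_BLK=m' \
    '# CONFIG_DEBUG_INFO is not set' > "$d/.config"
grep '^CONFIG_VIRTIO_BLK' "$d/.config"    # prints: CONFIG_VIRTIO_BLK=m
# On a real source tree, the kernel's helper script does this and more:
#   scripts/config --file .config -s VIRTIO_BLK    # query its state
```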

How the Kconfig+Kbuild system works – a minimal take

Now that we know a few details, here’s a simplified take on how it’s tied together and works:

  • First, the user (you) configures the kernel using some kind of menu system provided by Kconfig.
  • The kernel config directives selected via this menu system UI are written into a few auto-generated headers and a final .config file, using a CONFIG_FOO={y|m} syntax, or, CONFIG_FOO is simply commented out (implying “don’t build FOO at all”).
  • Next, the Kbuild per-component Makefiles (invoked via the kernel top-level Makefile) typically specify a directive FOO like this:
    obj-$(CONFIG_FOO) += FOO.o
  • A FOO component could be anything – a core kernel feature, a device driver, a filesystem, a debug directive, and so on. Recall, the value of CONFIG_FOO may be y, or m, or not exist; this accordingly has the build either build the component FOO into the kernel (when its value is y), or as a module (when its value is m)! If commented out, it isn’t built at all, simple. In effect, the above-mentioned Makefile directive, at build time, expands into one of these three for a given kernel component FOO:
    obj-y += FOO.o      # build the feature FOO into the kernel image
    obj-m += FOO.o      # build the feature FOO as a discrete kernel module (a foo.ko file)
    <if CONFIG_FOO is null>        # do NOT build feature FOO

To see an instance of this in action, check out the Kbuild file (more details on Kconfig files are in the Understanding the Kconfig* files section) in the root of the kernel source tree:

$ cat Kbuild
# Kbuild for top-level directory of the kernel
# Ordinary directory descending
# ---------------------------------------------------------------------------
obj-y                   += init/
obj-y                   += usr/
obj-y                   += arch/$(SRCARCH)/
obj-y                   += $(ARCH_CORE)
obj-y                   += kernel/
[ … ]
obj-$(CONFIG_BLOCK)     += block/
obj-$(CONFIG_IO_URING)  += io_uring/
obj-$(CONFIG_RUST)      += rust/
obj-y                   += $(ARCH_LIB)
[ … ]
obj-y                   += virt/
obj-y                   += $(ARCH_DRIVERS)

Interesting! We can literally see how the top-level Makefile will descend into other directories, with the majority being set to obj-y; in effect, build it in (in a few cases it’s parametrized, becoming obj-y or obj-m depending on how the user selected the option).

Great. Let’s move along now; the key thing to do is to get ourselves a working .config file. How can we do so? We do this iteratively. We begin with a “default” configuration – the topic of the following section – and carefully work our way up to a custom config.
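Before we do, it helps to know what the .config we're after actually contains. It's plain ASCII text, one symbol per line; a few illustrative lines (the symbols chosen here are arbitrary examples) showing the three possible states:

```text
CONFIG_BLOCK=y
CONFIG_IKCONFIG=m
# CONFIG_RUST is not set
```

The first symbol is built into the kernel image, the second is built as a kernel module, and the commented-out one isn't built at all.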

Arriving at a default configuration

So, how do you decide on the initial kernel configuration to begin with? Several techniques exist; a few common ones are as follows:

  • Don’t specify anything; Kconfig will pull in a default kernel configuration (as all kernel configs have a default value)
  • Use the existing distribution’s kernel configuration
  • Build a custom configuration based on the kernel modules currently loaded in memory

The first approach has the benefit of simplicity. The kernel will handle the details, giving you a default configuration. The downside is that the default config can be very large (this is the case when building Linux for an x86_64-based desktop or server-type system); a huge number of options are turned on by default, just in case you need them, which can make the build time very long and the kernel image very large. Typically, of course, you are then expected to manually configure the kernel to the desired settings.

This brings up the question, where is the default kernel config stored? The Kconfig system uses a priority list fallback scheme to retrieve a default configuration if none is specified. The priority list and its order (the first being the highest priority) is as follows:

  • .config
  • /lib/modules/$(uname -r)/.config
  • /etc/kernel-config
  • /boot/config-$(uname -r)
  • ARCH_DEFCONFIG (if defined)
  • arch/${ARCH}/defconfig

From the list, you can see that the Kconfig system first checks for the presence of a .config file in the root of the kernel source tree; if found, it picks up all the config values from there. If it doesn’t exist, it next looks at the path /lib/modules/$(uname -r)/.config. If found, the values found in that file will be used as the defaults. If not found, it checks the next one in the preceding priority list, and so on… You can see this shown in Figure 2.8.
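The fallback logic just described can be sketched in shell, purely as an illustration (this is not the actual Kconfig implementation; the function name is made up, and the paths mirror the priority list above):

```shell
# Hedged sketch: walk the priority list, using the first config file found.
find_default_config() {
    for f in \
        .config \
        "/lib/modules/$(uname -r)/.config" \
        /etc/kernel-config \
        "/boot/config-$(uname -r)"
    do
        if [ -f "$f" ]; then
            printf '%s\n' "$f"
            return 0
        fi
    done
    return 1    # none found: fall through to the arch defconfig
}
find_default_config || echo "no existing config found; the arch defconfig would be used"
```

On a typical distro system this prints /boot/config-$(uname -r) at the latest, since running in the kernel source root with no .config present skips the first entry.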

For a more detailed look at the kernel’s Kconfig and Kbuild infrastructure, we suggest you refer to the excellent official docs under Documentation/kbuild/ within the kernel source tree.

A diagram (inspired by Cao Jin’s articles) that attempts to communicate the kernel’s Kconfig/Kbuild system is shown here. The diagram conveys more information than has been covered by now; worry not, we’ll get to it.

Figure 2.8: The kernel’s Kconfig/Kbuild system in a simplified form

Right, let’s now get to figuring out how exactly to get a working kernel config!

Obtaining a good starting point for kernel configuration

This brings us to a really important point: while playing around with the kernel configuration is okay to do as a learning exercise, for a production system it’s critical that you base your custom config on a proven – known, tested, and working – kernel configuration.

Here, to help you understand the nuances of selecting a valid starting point for kernel configuration, we will see three approaches to obtaining a starting point for a typical kernel configuration:

  • First, an easy (but sub-optimal) approach where you simply emulate the existing distribution’s kernel configuration.
  • Next, a more optimized approach where you base the kernel configuration on the existing system’s in-memory kernel modules. This is the localmodconfig approach.
  • Finally, a word on the approach to follow for a typical embedded Linux project.

Let’s examine each of these approaches in a bit more detail. In terms of configuring the kernel you’ve downloaded and extracted in the previous two steps, don’t do anything right now; read the sections that follow, and then, in the Getting going with the localmodconfig approach section, we’ll have you actually get started.

Kernel config using distribution config as a starting point

The typical target system for using this approach is a x86_64 desktop or server Linux system. Let’s configure the kernel to all defaults:

$ make mrproper
  CLEAN   scripts/basic
  CLEAN   scripts/kconfig
  CLEAN   include/config include/generated .config

To ensure we begin with a clean slate, we run make mrproper first; be careful, it cleans pretty much everything, including the .config if it exists.

Next, we perform the make defconfig step, which, as the make help command output shows (try it out! See Figure 2.10), gives us a new config:

$ make defconfig
  HOSTCC  scripts/basic/fixdep
  HOSTCC  scripts/kconfig/conf.o
 [ … ]
 HOSTLD  scripts/kconfig/conf
*** Default configuration is based on 'x86_64_defconfig'
# configuration written to .config

The building of the conf utility itself is performed first (under scripts/kconfig), and then the config is generated. Here it is:

$ ls -l .config 
-rw-rw-r-- 1 c2kp c2kp 136416 Apr 29 08:12 .config

There, done: we now have an “all-defaults” kernel config saved in .config!

What if there’s no defconfig file for your arch under arch/${ARCH}/configs? Then, at least on x86_64, you can simply copy in the existing distro default kernel config:

cp /boot/config-$(uname -r) ${LKP_KSRC}/.config

Here, we simply copy the existing Linux distribution’s (here, it’s our Ubuntu 22.04 LTS guest VM) config file into the .config file in the root of the kernel source tree, thereby making the distribution config the starting point, which can then be further edited. As already mentioned, the downside of this quick approach is that the config tends to be large, thus resulting in a large-footprint kernel image.

Also, FYI, once the kernel config is generated in any manner (like via make defconfig), every kernel config FOO is shown as an empty file within include/config.

Tuned kernel config via the localmodconfig approach

The typical target system for using this approach is a (typically x86_64) desktop or server Linux system.

This second approach is more optimized than the previous one – a good one to use when the goal is to begin with a kernel config that is based on your existing running system and is thus (usually) relatively compact compared to the typical default config on a desktop or server Linux system.

Here, we provide the Kconfig system with a snapshot of the kernel modules currently running on the system by simply redirecting the output of lsmod into a temporary file and then providing that file to the build. This can be achieved as follows:

lsmod > /tmp/
cd ${LKP_KSRC}
make LSMOD=/tmp/ localmodconfig

The lsmod utility simply lists all the kernel modules currently residing in system kernel memory. We will see more on this in Chapter 4, Writing Your First Kernel Module – Part 1.

We save its output in a temporary file, which we then pass via the LSMOD environment variable to the Makefile's localmodconfig target. The job of this target is to configure the kernel in such a manner as to include only the base functionality plus the functionality provided by these kernel modules, leaving out the rest, in effect giving us a reasonable facsimile of the current kernel (or of whichever kernel the lsmod output represents). We use precisely this technique to configure our 6.1 kernel in the upcoming Getting going with the localmodconfig approach section. We also show this approach as step 1 (1 in the circle) in Figure 2.8.

Kernel config for typical embedded Linux systems

The typical target system for using this approach is usually a small embedded Linux system. The goal here is to begin with a proven – a known, tested, and working – kernel configuration for our embedded Linux project. Well, how exactly can we achieve this?

Before going further, let me mention this: the initial discussion here will be shown to be the older approach to configuring (the AArch32 or ARM-32 arch) embedded Linux; we shall then see the “correct” and modern approach for modern platforms.

Interestingly, for the AArch32 at least, the kernel code base itself contains known, tested, and working kernel configuration files for various well-known hardware platforms.

Assuming our target is ARM-32 based, we merely need to select the one that matches (or is the nearest match to) our embedded target board. These kernel config files are present within the kernel source tree in the arch/<arch>/configs/ directory. The config files are in the format <platform-name>_defconfig.

A quick peek is in order; see the following screenshot showing, for the ARM-32, the existing board-specific kernel code under arch/arm/mach-<foo> and platform config files under arch/arm/configs on the v6.1.25 Linux kernel code base:

Figure 2.9: The contents of arch/arm and arch/arm/configs on the 6.1.25 Linux kernel

Whoah, quite a bit! The directories arch/arm/mach-<foo> represent hardware platforms (boards or machines) that Linux has been ported to (typically by the silicon vendor); the board-specific code is within these directories.

Similarly, working default kernel config files for these platforms are also contributed by them and are under the arch/arm/configs folder of the form <foo>_defconfig; as can clearly be seen in the lower portion of Figure 2.9.

Thus, for example, if you find yourself configuring the Linux kernel for a hardware platform having, say, an i.MX 7 SoC from NXP on it, please don’t start with an x86_64 kernel config file as the default. It won’t work. Even if you manage it, the kernel will not build/work cleanly. Pick the appropriate kernel config file: for our example here, perhaps the imx_v6_v7_defconfig file would be a good starting point. You can copy this file into .config in the root of your kernel source tree and then proceed to fine-tune it to your project-specific needs.

As another example, the Raspberry Pi is a very popular hobbyist and production platform. The kernel config file – within its kernel source tree – used as a base for it is this one: arch/arm/configs/bcm2835_defconfig. The filename reflects the fact that Raspberry Pi boards use a Broadcom 2835-based SoC. You can find details regarding kernel compilation for the Raspberry Pi in its official documentation. Hang on, though, we will be covering at least some of this in Chapter 3, Building the 6.x Linux Kernel from Source – Part 2, in the Kernel build for the Raspberry Pi section.

The modern approach – using the Device Tree

Okay, a word of caution! For AArch32, as we saw, you’ll find the platform-specific config files under arch/arm/configs as well as the board-specific kernel code under arch/arm/mach-<foo>, where foo is the platform name. Well, the reality is that this approach – keeping board-specific config and kernel source files within the Linux OS code base – is considered to be exactly the wrong one for an OS! Linus has made it amply clear – the “mistakes” made in older versions of Linux, which happened when the ARM-32 was the popular arch, must never be repeated for other arches. Then how does one approach this? The answer, for the modern (32 and 64-bit) ARM and PPC architectures, is to use the modern Device Tree (DT) approach.

Very basically, the DT holds all platform hardware topology details. It is effectively the board or platform layout; it’s not code, it’s a description of the hardware platform analogous to VHDL. BSP-specific code and drivers still need to be written, but now have a neat way to be “discovered” or enumerated by the kernel at boot when it parses the DTB (Device Tree Blob) that’s passed along by the bootloader. The DTB is generated as part of the build process, by invoking the DTC (Device Tree Compiler) on the DT source files for the platform.
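As a tiny, hypothetical illustration of what DT source looks like (the board name, device, and addresses here are made up), a DTS fragment describing a memory-mapped UART might be:

```dts
/dts-v1/;
/ {
	compatible = "acme,myboard";		/* hypothetical board identifier */
	serial@10000000 {			/* a UART device node */
		compatible = "ns16550a";	/* tells the kernel which driver binds */
		reg = <0x10000000 0x100>;	/* MMIO base address and region size */
	};
};
```

The kernel build invokes the DTC on such sources automatically; standalone, the equivalent is roughly dtc -I dts -O dtb -o myboard.dtb myboard.dts.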

So, nowadays, you’ll find that the majority of embedded projects (for ARM and PPC at least) will use the DT to good effect. It also helps OEMs and ODMs/vendors by allowing usage of essentially the same kernel with platform/model-specific tweaks built into the DT. Think of the dozens of Android phone models from popular OEMs with mostly the same stuff but just a few hardware differences; one kernel will typically suffice! This dramatically eases the maintenance burden. For the curious – the DT sources – the .dts files can be found here: arch/<arch>/boot/dts.

The lesson on keeping board-specific stuff outside the kernel code base as far as is possible seems to have been learned well for the AArch64 (ARM-64). Compare its clean, well-organized, and uncluttered config and DTS folders (arch/arm64/configs/: it has only one file, a defconfig) to AArch32. Even the DTS files (look under arch/arm64/boot/dts/) are well organized compared to AArch32:

6.1.25 $ ls arch/arm64/configs/
defconfig
6.1.25 $ ls arch/arm64/boot/dts/
actions/    allwinner/  altera/     amazon/    amd/        amlogic/
apm/        apple/      arm/        bitmain/   broadcom/   cavium/
exynos/     freescale/  hisilicon/  intel/     lg/         marvell/
mediatek/   microchip/  nuvoton/    nvidia/    qcom/       realtek/
renesas/    rockchip/   socionext/  sprd/      synaptics/  tesla/
ti/         toshiba/    xilinx/     Makefile

So, with modern embedded projects and the DT, how is one to go about kernel/BSP tasks? Well, there are several approaches; the BSP work is:

  • Carried out by an in-house BSP or platform team.
  • Provided as a BSP “package” by a vendor (often the silicon vendor that you’ve partnered with) along with reference hardware.
  • Outsourced to an external company or consultant that you’ve partnered with. Several companies exist in this space – among them are Siemens (ex Mentor Graphics), Timesys, and WindRiver.
  • Often nowadays, with the project being built and integrated via sophisticated builder software like Yocto or Buildroot, the vendors contribute BSP layers, which are then integrated into the product by the build team.

    Design: A bit off-topic, but I think it’s important: when working on projects (especially embedded ones), teams have repeatedly shown the undesirable tendency to directly employ vendor SDK APIs to perform device-specific work in their apps and drivers. At first glance, this might seem fine, but it can become a huge burden when the realization dawns that, hey, requirements change, devices themselves change, and thus your tightly coupled software simply breaks! You had the apps (and drivers) tie into the device hardware with hardly any separation.

    The solution, of course, is to use a loosely coupled architecture, with what’s essentially a HAL (Hardware Abstraction Layer) to allow apps to interface with devices seamlessly. This also allows for the device-specific code to be changed without affecting the higher layers (apps). Designing this way might seem obvious in the abstract, essentially leveraging the information-hiding idea, but can be difficult to ensure in practice; do always keep this in mind. The fact is, Linux’s device model encourages this loose coupling. The Further reading section has some good links on these design approaches, within the Generic online and book resources... section.

Right, that concludes the three approaches to setting up a starting point for kernel configuration.

Seeing all available config options

As a matter of fact, with regard to kernel config, we have just scratched the surface. Many more techniques to explicitly generate the kernel configuration in a given manner are encoded into the Kconfig system itself! How? Via configuration targets to make. See them by running make help in the root of your kernel source; they’re under the Configuration targets heading:

Figure 2.10: Output from make help on an x86_64 (6.1.25 kernel) with interesting lines highlighted

Let’s experiment with a couple of other approaches as well – the oldconfig one to begin with:

$ make mrproper
[ … ]
$ make oldconfig
  HOSTCC  scripts/basic/fixdep
  HOSTCC  scripts/kconfig/conf.o
  HOSTCC  scripts/kconfig/confdata.o
  [ … ]
 HOSTLD  scripts/kconfig/conf
 # using defaults found in /boot/config-5.19.0-41-generic
 [ … ]
 * Restart config...
 [ … ]
 Control Group support (CGROUPS) [Y/?] y
  Favor dynamic modification latency reduction by default   (CGROUP_FAVOR_DYNMODS) [N/y/?] (NEW)                  << waits for input here >>
 [ … ]

It works of course, but you’ll have to press the Enter key (a number of times, perhaps) to accept defaults for any and every newly detected kernel config… (or you can specify a value explicitly; we shall see more on these new kernel configs in the section that follows). This is quite normal, but a bit of an annoyance. There’s an easier way: using the olddefconfig target; as its help line says – “Same as oldconfig but sets new symbols to their default value without prompting”:

$ make olddefconfig
# using defaults found in /boot/config-5.19.0-41-generic
.config:10301:warning: symbol value 'm' invalid for ANDROID_BINDER_IPC
.config:10302:warning: symbol value 'm' invalid for ANDROID_BINDERFS
# configuration written to .config

Viewing and setting new kernel configs

Another quick experiment: we clean up and then copy in the distro kernel config. Keep a backup of your existing .config first if you want it.

$ make mrproper
$ cp /boot/config-5.19.0-41-generic .config
cp: overwrite '.config'? y

We have the distro defaults in .config. Now, think on it: we’re currently running the 5.19.0-41-generic distro kernel but are intending to build a new kernel, 6.1.25. So, the new kernel’s bound to have at least a few new kernel configs. In cases like this, when you attempt to configure the kernel, the Kconfig system will question you: it will display every single new config option and the available values you can set it to, with the default one in square brackets, in the console window. You’re expected to select the values for the new config options it encounters. You will see this as a series of questions and a prompt to answer them on the command line.

The kernel provides two interesting mechanisms to see all new kernel configs:

  • listnewconfig – list new options
  • helpnewconfig – list new options and help text

Running the first merely lists every new kernel config variable:

$ make listnewconfig
[ … ]
[ … ]

There are lots of them – I got 108 new configs – so I’ve truncated the output here.

We can see all the new configs, though the output’s not very helpful in understanding what exactly they mean. Running the helpnewconfig target solves this – you can now see the “Help” (from the config option’s Kconfig file) for every new kernel config:

$ make helpnewconfig
[ … ] 
This option enables the favordynmods mount option by default, which reduces the latencies of dynamic cgroup modifications such as task migrations and controller on/offs at the cost of making hot path operations such as forks and exits more expensive.
Say N if unsure.
Type  : bool
Defined at init/Kconfig:959
  Prompt: Favor dynamic modification latency reduction by default
[ … ]
This module registers a tracer callback to count enabled pr_debugs in a do_debugging function, then alters their enablements, calls the function, and compares counts.
If unsure, say N.
[ … ]

Don’t worry about understanding the Kconfig syntax for now; we shall cover it in the Customizing the kernel menu, Kconfig, and adding our own menu item section.

The LMC_KEEP environment variable

Also, did you notice in Figure 2.10 that the localmodconfig and localyesconfig targets can optionally include an environment variable named LMC_KEEP (LMC is LocalModConfig)?

Its meaning is straightforward: setting LMC_KEEP to some colon-delimited values has the Kconfig system preserve the original configs for the specified paths. An example might look like this: "drivers/usb:drivers/gpu:fs". In effect, it says, “Keep these modules enabled.”

This is a feature introduced in the 5.8 kernel. To make use of it, you could run the config command like this:

make LSMOD=/tmp/mylsmod \
  LMC_KEEP="drivers/usb:drivers/gpu:fs" \
  localmodconfig
Tuned config via the script

Interestingly, the kernel provides many helper scripts that can perform useful housekeeping, debugging, and other tasks within the scripts/ directory. A good example with respect to what we’re discussing here is the scripts/kconfig/ Perl script. It’s ideally suited to situations where your distro kernel has too many modules or built-in kernel features enabled, and you want only the ones you’re actually using right now – the ones that the currently loaded modules provide, just like localmodconfig. Run the script while all the modules you want are loaded, saving its output, ultimately, to .config. Then run make oldconfig and the config’s ready!

As a sidebar, here’s how the original author of this script – Steven Rostedt – describes what it does:

# Here's what I did with my Debian distribution.
#    cd /usr/src/linux-2.6.10
#    cp /boot/config-2.6.10-1-686-smp .config
#    ~/bin/streamline_config > config_strip
#    mv .config config_sav
#    mv config_strip .config
#    make oldconfig

You can try it if you wish.

Getting going with the localmodconfig approach

Now (finally!) let’s get hands-on and create a reasonably sized base kernel configuration for our 6.1.25 LTS kernel by using the localmodconfig technique. As mentioned, this existing-kernel-modules-only approach is a good one when the goal is to obtain a starting point for kernel config on an x86-based system by keeping it tuned to the current host.

Don’t forget: the kernel configuration being performed right now is appropriate for your typical x86_64 desktop/server systems, as a learning approach. This approach merely provides a starting point, and even that might not be okay. For actual projects, you’ll have to carefully check and tune every aspect of the kernel config; having an audit of your precise hardware and software to support is key. Again, for embedded targets, the approach is different (as we discussed in the Kernel config for typical embedded Linux systems section).

Before going any further, it’s a good idea to clean up the source tree, especially if you ran the experiments we worked on previously. Be careful: this command will wipe everything, including the .config:

make mrproper

As described previously, let’s first obtain a snapshot of the currently loaded kernel modules, and then have the build system operate upon it by specifying the localmodconfig target, like so:

lsmod > /tmp/
cd ${LKP_KSRC}
make LSMOD=/tmp/ localmodconfig

Now, when you run the make [...] localmodconfig command just shown, it’s entirely possible, indeed probable, that there will be a difference in the configuration options between the kernel you are currently configuring (version 6.1.25) and the kernel you are currently running on the build machine (for myself, the host kernel is $(uname -r) = 5.19.0-41-generic). In such cases, as explained in the Viewing and setting new kernel configs section, the Kconfig system will question you about each new config option; its prompt is suffixed with (NEW), in effect telling you that this is a new kernel config option and that it wants your answer as to how to configure it. Pressing Enter accepts the default.

Now let’s make use of the localmodconfig approach. Enter the following commands (if not already done):

$ uname -r
$ lsmod > /tmp/
$ make LSMOD=/tmp/ localmodconfig
  HOSTCC  scripts/basic/fixdep
  HOSTCC  scripts/kconfig/conf.o
  [ ... ]
using config: '/boot/config-5.19.0-41-generic'
System keyring enabled but keys "debian/canonical-certs.pem" not found. Resetting keys to default value.
* Restart config...
* Control Group support
Control Group support (CGROUPS) [Y/?] y
  Favor dynamic modification latency reduction by default (CGROUP_FAVOR_DYNMODS) [N/y/?] (NEW)   
  Memory controller (MEMCG) [Y/n/?] y
[ … ]
Userspace opportunistic sleep (PM_USERSPACE_AUTOSLEEP) [N/y/?] (NEW)   
[ … ]
Multi-Gen LRU (LRU_GEN) [N/y/?] (NEW)   
[ … ]
 Rados block device (RBD) (BLK_DEV_RBD) [N/m/y/?] n
 Userspace block driver (Experimental) (BLK_DEV_UBLK) [N/m/y/?] (NEW)   
[ … ]
  Pine64 PinePhone Keyboard (KEYBOARD_PINEPHONE) [N/m/y/?] (NEW)   
[ … ]
  Intel Meteor Lake pinctrl and GPIO driver (PINCTRL_METEORLAKE) [N/m/y/?] (NEW)   
[ … ]
[ … ]
# configuration written to .config
$ ls -l .config
-rw-rw-r-- 1 c2kp c2kp 170136 Apr 29 09:56 .config

After pressing the Enter key many times – the output block just above shows just a few of the many new options encountered – the interrogation mercifully finishes and the Kconfig system writes the newly generated configuration to a file named .config in the current working directory. Note that we truncated the previous output, as it’s simply too voluminous, and unnecessary, to reproduce fully.

The preceding steps take care of generating the .config file via the localmodconfig approach. Before we conclude this section, here are a few additional points to note:

  • To ensure a completely clean slate, run make mrproper or make distclean in the root of the kernel source tree, useful when you want to restart the kernel build procedure from scratch; rest assured, it will happen one day! Note that doing this deletes the kernel configuration file(s) too. Keep a backup before you begin, if required.
  • Here, in this chapter, all the kernel configuration steps and the screenshots pertaining to it have been performed on an x86_64 Ubuntu 22.04 LTS guest VM, which we use as the host to ultimately build a brand-spanking-new 6.1 LTS Linux kernel. The precise names, presence, and content of the menu items seen, as well as the look and feel of the menu system (the UI), can and do vary based on (a) the architecture (CPU) and (b) the kernel version.
  • As mentioned earlier, on a production system or project, the platform or BSP team, or indeed the embedded Linux BSP vendor company if you have partnered with one, will provide a good known, working, and tested kernel config file. Use this as a starting point by copying it into the .config file in the root of the kernel source tree. Alternatively, builder software like Yocto or Buildroot might be employed.

As you gain experience with building the kernel, you will realize that the bulk of the effort lies in setting up the kernel configuration correctly the first time; and, of course, the very first build takes a long while. Once done correctly, though, the process typically becomes much simpler – a recipe to run repeatedly.

Now, let’s learn how to use a useful and intuitive UI to fine-tune our kernel configuration.

Tuning our kernel configuration via the make menuconfig UI

Okay, great, we now have an initial kernel config file (.config) generated for us via the localmodconfig Makefile target, as shown in detail in the previous section, which is a good starting point. Typically, we now further fine-tune our kernel configuration. One way to do this – in fact, the recommended way – is via the menuconfig Makefile target. This target has the Kbuild system generate a pretty sophisticated C-based program executable (scripts/kconfig/mconf), which presents to the end user a neat menu-based UI. This is step 2 in Figure 2.8. In the following output block, when, within the root of our kernel source tree, we invoke the command for the first time, the Kbuild system builds the mconf executable and invokes it:

$ make menuconfig
 UPD scripts/kconfig/.mconf-cfg
 HOSTCC scripts/kconfig/mconf.o
 HOSTCC scripts/kconfig/lxdialog/checklist.o
 HOSTLD  scripts/kconfig/mconf

Of course, a picture is no doubt worth a thousand words, so here’s what the menuconfig UI looks like on my VM.

Figure 2.11: The main menu for kernel configuration via make menuconfig (on x86-64)

By the way, you don’t need to be running your VM in GUI mode to use this approach; it works in a terminal window over an SSH login shell from the host as well – another advantage of this UI approach to editing our kernel config!

As experienced developers, or indeed anyone who has sufficiently used a computer, well know, things can and do go wrong. Take, for example, the following scenario – running make menuconfig for the first time on a freshly installed Ubuntu system:

$ make menuconfig
 UPD     scripts/kconfig/.mconf-cfg
 HOSTCC  scripts/kconfig/mconf.o
 YACC    scripts/kconfig/
/bin/sh: 1: bison: not found
scripts/Makefile.lib:196: recipe for target 'scripts/kconfig/' failed
make[1]: *** [scripts/kconfig/] Error 127
Makefile:539: recipe for target 'menuconfig' failed
make: *** [menuconfig] Error 2

Hang on, don’t panic. Read the failure messages carefully. The line after YACC [...] provides the clue: /bin/sh: 1: bison: not found. Ah! So, install bison with the following command:

sudo apt install bison

Now, all should be well. Well, almost; again, on a freshly baked Ubuntu guest, make menuconfig then complains that flex wasn’t installed. So, we installed it (you guessed it: via sudo apt install flex). Also, specifically on Ubuntu, you need the libncurses5-dev package installed. On Fedora, do sudo dnf install ncurses-devel.

If you read and followed Online Chapter, Kernel Workspace Setup, you would have all these prerequisite packages already installed. If not, please refer to it now and install all required packages. Remember, as ye sow…

Quick tip: running the <book_src>/ch1/ Bash script will (on an Ubuntu system) install all required packages.

Moving along, the Kconfig+Kbuild open-source framework provides clues to the user via its UI. Look at Figure 2.11; you’ll often see symbols prefixed to the menus (like [*], <>, -*-, (), and so on); these symbols and their meaning are as follows:

  • [.]: In-kernel feature, Boolean option. It’s either On or Off; the ‘.’ shown will be replaced by * or a space:
    • [*]: On, feature compiled and built in to the kernel image (y)
    • [ ]: Off, not built at all (n)
  • <.>: A feature that could be in one of three states. This is known as tristate; the . shown will be replaced by *, M, or a space:
    • <*>: On, feature compiled and built in the kernel image (y)
    • <M>: Module, feature compiled and built as a kernel module (an LKM) (m)
    • < >: Off, not built at all (n)
  • {.}: A dependency exists for this config option; hence, it’s required to be built or compiled as either a module (m) or to the kernel image (y).
  • -*-: A dependency requires this item to be compiled in (y).
  • (...): Prompt: an alphanumeric input is required. Press the Enter key while on this option and a prompt box appears.
  • <Menu name> --->: A sub-menu follows. Press Enter on this item to navigate to the sub-menu.

Again, the empirical approach is key. Let’s perform a few experiments with the make menuconfig UI to see how it works. This is precisely what we’ll learn in the next section.

Sample usage of the make menuconfig UI

To get a feel for using the Kbuild menu system via the convenient menuconfig target, let’s turn on a quite interesting kernel config. It’s named Kernel .config support and allows one to see the content of the kernel config while running that kernel! Useful, especially during development and testing. For security reasons, it’s typically turned off in production, though.

A couple of nagging questions remain:

  • Q. Where is it?

    A. It’s located as an item under the General Setup main menu (we’ll see it soon enough).

  • Q. What is it set to by default?

    A. To the value <M>, meaning it will be built as a kernel module by default.

As a learning experiment, we’ll set it to the value <*> (or y), building it into the very fabric of the kernel. In effect, it will be always on. Okay, let’s get it done!

  1. Fire up the kernel config UI:
    make menuconfig

    You should see a terminal UI as in Figure 2.11. The very first item is usually a submenu labeled General Setup ---> ; press the Enter key while on it; this will lead you into the General Setup submenu, within which many items are displayed; navigate (by pressing the down arrow) to the item named Kernel .config support:

    Figure 2.12: A screenshot of the General Setup menu items, with the relevant one highlighted (on x86-64)

  2. We can see in the preceding screenshot that we’re configuring the 6.1.25 kernel on an x86, the highlighted menu item is Kernel .config support, and, from its <M> prefix, that it’s a tristate menu item that’s set to the choice <M> for “module,” to begin with (by default).
  3. Keeping this item (Kernel .config support) highlighted, use the right arrow key to navigate to the < Help > button on the bottom toolbar and press the Enter key while on the < Help > button. Or, simply press ? while on an option! The screen should now look something like this:

    Figure 2.13: Kernel configuration via make menuconfig; an example Help screen (with the name of the kernel config macro highlighted)

    The help screen is quite informative. Indeed, several of the kernel config help screens are very well populated and helpful. Unfortunately, some just aren’t.

  4. Okay, next, press Enter while on the < Exit > button so that we go back to the previous screen.
  5. Change the value by pressing the spacebar; doing this toggles the current menu item’s value between <*> (always on), < > (off), and <M> (module). Keep it on <*>, meaning “always on.”
  6. Next, though it’s now turned on, the ability to actually view the kernel config is provided via a pseudofile under procfs; the very next item below this one is the relevant one:
    [ ]   Enable access to .config through /proc/config.gz
  7. You can see it’s turned off by default ([ ]); turn it on by navigating to it and pressing the spacebar. It now shows as [*]:

Figure 2.14: A truncated screenshot showing how we’ve turned on the ability to view the kernel config

  8. Right, we’re done for now; press the right arrow or Tab key, navigate to the < Exit > button, and press Enter while on it; you’re back at the main menu screen. Repeat this, pressing < Exit > again; the UI asks you if you’d like to save this configuration. Select < Yes > (by pressing Enter while on the Yes button):

Figure 2.15: Save the modified kernel config prompt

  9. The new kernel config is now saved within the .config file. Let’s quickly verify this. I hope you noticed that the exact names of the kernel configs we modified – the macros as seen by the kernel source – are:
    1. CONFIG_IKCONFIG for the Kernel .config support option.
    2. CONFIG_IKCONFIG_PROC for the Enable access to .config through /proc/config.gz option.

    How do we know? It’s in the top-left corner of the Help screen! Look again at Figure 2.13.

Done. Of course, the actual effect won’t be seen until we build and boot from this kernel. Now, what exactly does turning this feature on achieve? When turned on, the currently running kernel’s configuration settings can be looked up at any time in two ways:

  • By running the scripts/extract-ikconfig script.
  • By directly reading the content of the /proc/config.gz pseudofile. Of course, it’s gzip compressed; first uncompress it, and then read it. zcat /proc/config.gz does the trick!
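Since /proc/config.gz exists only when running a kernel built with these options turned on, here’s a small, self-contained sketch of the zcat mechanics using a mock compressed config file (the mock_config filename is ours, purely for illustration):

```shell
# Simulate reading a compressed kernel config; on a real system with
# CONFIG_IKCONFIG_PROC=y you'd simply run: zcat /proc/config.gz | grep IKCONFIG
printf 'CONFIG_IKCONFIG=y\nCONFIG_IKCONFIG_PROC=y\n' > mock_config
gzip -c mock_config > mock_config.gz        # stand-in for /proc/config.gz
zcat mock_config.gz | grep IKCONFIG_PROC    # prints: CONFIG_IKCONFIG_PROC=y
```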

As a further learning exercise, why not further modify the default kernel config (of our 6.1.25 Linux kernel for the x86-64 architecture) of a couple more items? For now, don’t stress out regarding the precise meaning of each of these kernel config options; it’s just to get some practice with the Kconfig system. So, run make menuconfig, and within it make changes by following the format seen just below.


  • Kernel config we’re working with:
    • What it means
    • Where to navigate
    • Name of menu item (and kernel config macro CONFIG_FOO within parentheses)
    • Its default value
    • Value to change it to

Right, here’s what to try out; let’s begin with:

Local version:

  • Meaning: the string to append to the kernel version. Take uname -r as an example; in effect, it’s the “z” or EXTRAVERSION component in the w.x.y.z kernel version nomenclature.
  • Navigate to: General Setup.
  • Menu item: Local version – append to kernel release (CONFIG_LOCALVERSION); press Enter once here and you’ll get a prompt box.
  • Default value: NULL.
  • Change to: anything you like; prefixing a hyphen to the localversion is considered good practice; for example, -lkp-kernel.


  • Timer frequency. You’ll learn the details regarding this tunable in Chapter 10, The CPU Scheduler – Part 1:
    • Meaning: the frequency at which the timer (hardware) interrupt is triggered.
    • Navigate to: Processor type and features | Timer frequency (250 HZ) ---> . Keep scrolling until you find the second menu item.
    • Menu item: Timer frequency (CONFIG_HZ).
    • Default value: 250 HZ.
    • Change to: 300 HZ.

Look up the Help screens for each of the kernel configs as you work with them. Great; once done, save and exit the UI.

Verifying the kernel config within the config file

But where’s the new kernel configuration saved? This is repeated as it’s important: the kernel configuration is written into a simple ASCII text file in the root of the kernel source tree, named .config. That is, it’s saved in ${LKP_KSRC}/.config.

As mentioned earlier, every single kernel config option is associated with a config variable of the form CONFIG_<FOO>, where <FOO>, of course, is replaced with an appropriate name. Internally, these become macros that the build system, and indeed the kernel source code, use.

Thus, to verify whether the kernel configs we just modified will take effect, let’s appropriately grep the kernel config file (the output should look something like this):

$ grep -E "IKCONFIG|HZ_300|LOCALVERSION=" .config
CONFIG_LOCALVERSION="-lkp-kernel"
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_HZ_300=y

Aha! The configuration file now reflects the fact that we have indeed modified the relevant kernel configs; the values too show up.

Caution: it’s best to NOT attempt to edit the .config file manually. There are several inter-dependencies you may not be aware of; always use the Kbuild menu system (we suggest using make menuconfig) to edit it.

Having said that, there is also a non-interactive way to do so, via a script. We’ll learn about this later. Still, using the make menuconfig UI is really the best way.

So, by now, I expect you’ve modified the kernel config to suit the values just seen.

During our quick adventure with the Kconfig/Kbuild system so far, quite a lot has occurred under the hood. The next section examines some remaining points: a little bit more regarding Kconfig/Kbuild, searching within the menu system, cleanly visualizing the differences between the original and modified kernel configuration files, using a script to edit the config, security concerns and tips on addressing them; plenty still to learn!

Kernel config – exploring a bit more

The creation of, or edits to, the .config file within the root of the kernel source tree via the make menuconfig UI or other methods is not the final step in how the Kconfig system works with the configuration. No, it now proceeds to internally invoke a hidden target called syncconfig, which was earlier misnamed silentoldconfig. This target has Kconfig generate a few header files that are further used in the setup to build the kernel.

These files include some meta-headers under include/config, as well as the include/generated/autoconf.h header file, which stores the kernel config as C macros, thus enabling both the kernel Makefiles and kernel code to make decisions based on whether a kernel feature is available.

Now that we’ve covered sufficient ground, take another look at Figure 2.8, the high-level diagram (inspired by Cao Jin’s articles) that attempts to communicate the kernel’s Kconfig/Kbuild system. This diagram, in the Kconfig portion, only shows the common make menuconfig UI; note that several other UI approaches exist, which are make config and make {x|g|n}config. Those are not shown here.

Searching within the menuconfig UI

Moving along, what if – when running make menuconfig – you are looking for a particular kernel configuration option but are having difficulty spotting it? No problem: the menuconfig UI system has a Search Configuration Parameter feature. Just as with the famous vi editor (yes, [g]vi[m] is still our favorite text editor!), press the / (forward slash) key to have a search dialog pop up, then enter your search term with or without CONFIG_ preceding it, and select the < Ok > button to have it go on its way.

The following couple of screenshots show the search dialog and the result dialog. As an example, we searched for the term vbox:


Figure 2.16: Kernel configuration via make menuconfig: searching for a config parameter

The result dialog in Figure 2.17 for the preceding search is interesting. It reveals several pieces of information regarding the configuration options:

  • The config directive. Just prefix CONFIG_ onto whatever it shows in Symbol:.
  • The Type of config (Boolean, tristate, alphanumeric, and so on).
  • The Prompt string.
  • Importantly, so you can find it, its Location in the menu system.
  • Its internal dependencies (Depends on:) if any.
  • The Kconfig file and line number n within it (Defined at <path/to/foo.Kconfig*:n>) where this particular kernel config is defined. We’ll cover more on this in coming sections.
  • Any config option it auto-selects (Selects:) if it itself is selected.

The following is a partial screenshot of the result dialog:

Figure 2.17: Kernel configuration via make menuconfig: truncated screenshot of the result dialog from the preceding search

All the information driving the menu display and selections is present in an ASCII text file used by the Kbuild system – this file is typically named Kconfig. There are actually several of them. Their precise names and locations are shown in the Defined at ... line.

Looking up the differences in configuration

The moment the .config kernel configuration file is to be written to, the Kconfig system checks whether it already exists, and if so, backs it up as .config.old. Knowing this, we can always diff the two to see the changes we have just wrought. However, using your typical diff utility to do so makes the differences quite hard to interpret. The kernel helpfully provides a better way: a console-based script, scripts/diffconfig, within the kernel source tree, which specializes in doing precisely this. Pass it the --help parameter to see a usage screen.

Let’s try it out:

$ scripts/diffconfig .config.old .config
HZ 250 -> 300
HZ_250 y -> n
HZ_300 n -> y
LOCALVERSION "" -> "-lkp-kernel"

If you modified the kernel configuration changes as shown in the preceding section, you should see an output like that shown in the preceding code block via the kernel’s diffconfig script. It clearly shows us exactly which kernel config options we changed and how. In fact, you don’t even need to pass the .config* parameters; it uses these by default.
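If you’re curious about what diffconfig is doing, here’s a rough, self-contained approximation using plain diff on two tiny mock config files (the old.config/new.config filenames are ours, for illustration; the real script parses the CONFIG_ lines far more intelligently):

```shell
# Roughly approximate what scripts/diffconfig reports, via plain diff.
printf 'CONFIG_HZ=250\nCONFIG_LOCALVERSION=""\n' > old.config
printf 'CONFIG_HZ=300\nCONFIG_LOCALVERSION="-lkp-kernel"\n' > new.config
# the < lines are the old values; the > lines are the new ones
diff old.config new.config | grep '^[<>]'
```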

Using the kernel’s config script to view/edit the kernel config

On occasion, there’s a need to edit or query the kernel configuration directly, checking for or modifying a given kernel config. We’ve learned to do so via the super make menuconfig UI. Here we learn that there’s perhaps an easier, and more importantly, non-interactive and thus scriptable, way to achieve the same – via a Bash script within the kernel source: scripts/config.

Running it without any parameters will result in a useful help screen being displayed; do check it out. An example will help regarding its usage.

The ability to look up the current kernel config is very useful, so let’s ensure the relevant kernel configs are turned on. Just for this example, let’s first explicitly disable them and then re-enable them:

$ scripts/config --disable IKCONFIG --disable IKCONFIG_PROC
$ grep IKCONFIG .config
# CONFIG_IKCONFIG is not set
$ scripts/config --enable IKCONFIG --enable IKCONFIG_PROC
$ grep IKCONFIG .config
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y

Voila, done.
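Under the hood, scripts/config performs simple textual edits like these on the config file. Here’s a hedged, self-contained sketch of the idea, using sed on a mock file (mockconfig is our name; don’t edit a real .config this way – use the script or the menu UI!):

```shell
# Emulate 'scripts/config --disable IKCONFIG' followed by '--enable IKCONFIG'
printf 'CONFIG_IKCONFIG=y\nCONFIG_IKCONFIG_PROC=y\n' > mockconfig
# disable: CONFIG_FOO=y  ->  '# CONFIG_FOO is not set'
sed -i 's/^CONFIG_IKCONFIG=y$/# CONFIG_IKCONFIG is not set/' mockconfig
grep 'CONFIG_IKCONFIG ' mockconfig      # prints: # CONFIG_IKCONFIG is not set
# enable: back to =y
sed -i 's/^# CONFIG_IKCONFIG is not set$/CONFIG_IKCONFIG=y/' mockconfig
grep '^CONFIG_IKCONFIG=' mockconfig     # prints: CONFIG_IKCONFIG=y
```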

Careful though: this script can modify the .config but there’s no guarantee that what you ask it to do is actually correct. The validity of the kernel config will only be checked when you next build it. When in doubt, first check all dependencies via the Kconfig* files or by running make menuconfig, then use scripts/config accordingly, and then test the build to see if all’s well.

Configuring the kernel for security

Before we finish, a quick note on something critical: kernel security. While user-space-security-hardening technologies have vastly grown, kernel-space-security-hardening technologies are playing catch-up. Careful configuration of the kernel’s config options does indeed play a key role in determining the security posture of a given Linux kernel; the trouble is, there are so many options and opinions that it’s often hard to check what’s a good idea security-wise and what isn’t.

Alexander Popov has written a very useful Python script named kconfig-hardened-check. It can be run to check and compare a given kernel configuration, via the usual config file, to a set of predetermined hardening preferences sourced from various Linux kernel security projects.

You can clone the kconfig-hardened-check project from its GitHub repository and try it out! FYI, my Linux Kernel Debugging book covers using this script in more detail. The following screenshot of its help screen will help you get started with it:

Figure 2.18: A screenshot showing the super kconfig-hardened-check script’s help screen

A quick, useful tip: using the kconfig-hardened-check script, one can easily generate a security-conscious kernel config file like this (here, as an example, for the AArch64):

kconfig-hardened-check -g ARM64 > my_kconfig_hardened

(The output file is called the config fragment.) Now, practically speaking, what if you have an already existing kernel config file for your product? Can we merge both? Indeed we can! The kernel provides a script to do just this: scripts/kconfig/ Run it, passing as parameters the pathname to the original (perhaps non-secure) kernel config file and then the path to the just-generated secure kernel config fragment; the result is the merger of both (additional parameters allow you to control it further; do check it out).

An example can be found here:

Also, you’re sure to come across the fact that newish GCC plugins exist (CONFIG_GCC_PLUGINS), providing some cool arch-specific security features; for example, auto-initialization of local/heap variables, entropy generation at boot, and so on. However, they often don’t even show up in the menu (typically under General architecture-dependent options | GCC plugins), as the plugin support isn’t installed by default. On x86 at least, try installing the gcc-<ver#>-plugin-dev package, where ver# is the GCC version number, and then retry configuring.

Miscellaneous tips – kernel config

A few remaining tips follow with regard to kernel configuration:

  • When building the x86 kernel for a VM using VirtualBox (as we are here), you might find it useful to set CONFIG_ISO9660_FS=y when configuring the kernel; it subsequently allows VirtualBox to have the guest mount the Guest Additions virtual CD and install the (pretty useful!) guest additions: typically, stuff that improves performance in the VM and allows better graphics, USB capabilities, clipboard and file sharing, and so on.
  • When building a custom kernel, we at times want to write/build eBPF programs (an advanced topic not covered here) or stuff similar to it. In order to do so, some in-kernel headers are required. You can explicitly ensure this by setting the kernel config CONFIG_IKHEADERS=y (or to m; from 5.2 onward). This results in a /sys/kernel/kheaders.tar.xz file being made available, which can be extracted elsewhere to provide the headers.
    • Further, while talking about eBPF, modern kernels have the ability to generate some debug information, called BPF Type Format (BTF) metadata. This can be enabled by selecting the kernel config CONFIG_DEBUG_INFO_BTF=y. This also requires the pahole tool to be installed. More on the BTF metadata can be found within the official kernel documentation here:
    • Now, when this option is turned on, another kernel config – CONFIG_MODULE_ALLOW_BTF_MISMATCH – becomes relevant when building kernel modules. This is a topic we cover in depth in the following two chapters. If CONFIG_DEBUG_INFO_BTF is enabled, it’s a good idea to set this latter config to Yes, as otherwise, your modules may not be allowed to load up if the BTF metadata doesn’t match at load time.
  • Next, the kernel build should, in theory at least, generate no errors or even warnings. To ensure this, to treat warnings as errors, set CONFIG_WERROR=y. Within the now familiar make menuconfig UI, it’s under General Setup | Compile the kernel with warnings as errors, and is typically off by default.
  • There’s an interesting script here: scripts/; its help screen shows how you can leverage it to list the kernel feature support matrix for the machine or for a given architecture. For example, to see the kernel feature support matrix for the AArch64, do this:
    scripts/ --arch arm64 ls
  • Next, an unofficial “database” of sorts, of all available kernel configs – and the kernel versions they’re supported upon – is available at the Linux Kernel Driver database (LKDDb) project site.
  • Kernel boot config: At boot, you can always override some kernel features via the powerful kernel command-line parameters; they’re thoroughly documented in the official kernel documentation. While this is very helpful, sometimes we need to pass along more parameters as key-value pairs, essentially of the form key=value, extending the kernel command line. This can be done by populating a small kernel config file called the boot config. This boot config feature depends on the kernel config BOOT_CONFIG being y. It’s under the General Setup menu and is typically on by default.
    • It can be used in two ways: by attaching a boot config to the initrd or initramfs image (we cover initrd in the following chapter) or by embedding a boot config into the kernel itself. For the latter, you’ll need to create the boot config file, pass the directive CONFIG_BOOT_CONFIG_EMBED_FILE="x/y/z" in the kernel config, and rebuild the kernel. Note that kernel command-line parameters take precedence over the boot config parameters. On boot, if enabled and used, the boot config parameters are visible via /proc/bootconfig. Details regarding the boot config are in the official kernel documentation.
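To make this concrete, here’s what a small boot config file might look like; a hypothetical sketch based on the format in the kernel’s bootconfig admin guide (key-value pairs under the kernel and init keys get appended to the kernel command line and to the init arguments, respectively):

```
# A hypothetical boot config fragment
kernel {
    audit = 1
}
init {
    splash
}
```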

You’re sure to come across many other useful kernel config settings and scripts, including those for hardening the kernel; keep a keen eye out.

Alright! You have now completed the first three steps of the Linux kernel build – quite a thing. Of course, we will complete the remaining four steps of the kernel build process in the following chapter. We will end this chapter with a final section on learning another useful skill – how to customize the kernel UI menu.

Customizing the kernel menu, Kconfig, and adding our own menu item

So, let’s say you have developed a device driver, an experimental new modular scheduling class, a custom debugfs (debug filesystem) callback, or some other cool kernel feature. You will one day. How will you let others on the team – or, for that matter, your customer or the community – know that this fantastic new kernel feature exists? You will typically set up a new kernel config (macro) and allow folks to select it as either a built-in or as a kernel module, and thus build and make use of it. As part of this, you’ll need to define the new kernel config and insert a new menu item at an appropriate place in the kernel configuration menu.

To do so, it’s useful to first understand a little more about the various Kconfig* files and where they reside. Let’s find out.

Understanding the Kconfig* files

The Kconfig* files contain metadata interpreted by the kernel’s config and build system – Kconfig/Kbuild – allowing it to build and (conditionally) display the menus you see when you run the menuconfig UI, accept selections, and so on.

For example, the Kconfig file at the root of the kernel source tree is used to fill in the initial screen of the menuconfig UI. Take a peek at it; it works by sourcing various other Kconfig files in different folders of the kernel source tree. There are many Kconfig* files within the kernel source tree (over 1,700 for 6.1.25)! Each of them typically defines a single menu, helping us realize how intricate the build system really is.

As a real example, let’s look up the Kconfig entry that defines the following items in the menu: Google Devices | Google Virtual NIC (gVNIC) support. Google Cloud employs a virtual Network Interface Card (NIC); it’s called the Google Virtual NIC. It’s likely that their Linux-based cloud servers will make use of it. Its location within the menuconfig UI is here:

-> Device Drivers

-> Network device support

-> Ethernet driver support

-> Google Devices

Here’s a screenshot showing these menu items:

Figure 2.19: Partial screenshot showing the Google Devices item in the make menuconfig UI (for x86/6.1.25)

How do we know which Kconfig file defines these menu items? The Help screen for a given config reveals it! So, while on the relevant menu item, select the < Help > button and press Enter; here, the Help screen says (among other things):

Defined at drivers/net/ethernet/google/Kconfig:5

That’s the Kconfig file describing this menu! Let’s look it up:

$ cat drivers/net/ethernet/google/Kconfig
# Google network device configuration

config NET_VENDOR_GOOGLE
    bool "Google Devices"
    default y
    help
      If you have a network (Ethernet) device belonging to this class, say Y.
[ … ]

This is a nice and simple Kconfig entry; notice the Kconfig language keywords: config, bool, default, and help. We’ve highlighted them in reverse colors. You can see that this device is enabled by default. We’ll cover the syntax shortly.

The following table summarizes the more important Kconfig* files and which submenu they serve in the Kbuild UI:


(Sub)menu : Kconfig file location for it

The main menu, the initial screen of the menuconfig UI : Kconfig (in the root of the kernel source tree)

General setup + Enable loadable module support : init/Kconfig

Processor types and features + Bus options + Binary Emulations (this menu title tends to be arch-specific; here, it’s with respect to the x86[_64]; in general, the Kconfig file is arch/<arch>/Kconfig) : arch/x86/Kconfig

Power management : kernel/power/Kconfig

Firmware drivers : drivers/firmware/Kconfig

General architecture-dependent options : arch/Kconfig

Enable the block layer + IO Schedulers : block/Kconfig, block/Kconfig.iosched

Executable file formats : fs/Kconfig.binfmt

Memory management options : mm/Kconfig

Networking support : net/Kconfig, net/*/Kconfig*

Device drivers : drivers/Kconfig, drivers/*/Kconfig*

File systems : fs/Kconfig, fs/*/Kconfig*

Security options : security/Kconfig, security/*/Kconfig*

Cryptographic API : crypto/Kconfig, crypto/*/Kconfig*

Library routines : lib/Kconfig, lib/*/Kconfig*

Kernel hacking (implies Kernel debugging) : lib/Kconfig.debug, lib/Kconfig.*


Table 2.4: Kernel config (sub) menus and the corresponding Kconfig* file(s) defining them

Typically, a single Kconfig file drives a single menu, though there could be multiple. Now, let’s move on to actually adding a menu item.

Creating a new menu item within the General Setup menu

As a trivial example, let’s add our own Boolean dummy config option within the General Setup menu. We want the config name to be, shall we say, CONFIG_LKP_OPTION1. As can be seen from the preceding table, the relevant Kconfig file to edit is the init/Kconfig one as it’s the meta-file that defines the General Setup menu.

Let’s get to it (we assume you’re in the root of the kernel source tree):

  1. Optional: to be safe, always make a backup copy of the Kconfig file you’re editing:
    cp init/Kconfig init/Kconfig.orig
  2. Now, edit the init/Kconfig file:
    vi init/Kconfig

    Scroll down to an appropriate location within the file; here, we choose to insert our custom menu entry between the LOCALVERSION_AUTO and the BUILD_SALT ones. The following screenshot shows our new entry (the init/Kconfig file being edited with vim):

    Figure 2.20: Editing the 6.1.25:init/Kconfig and inserting our own menu entry (highlighted)

    FYI, I’ve provided the preceding experiment as a patch to the original 6.1.25 init/Kconfig file in our book’s GitHub source tree. Find the patch file here: ch2/Kconfig.patch.

    The new item starts with the config keyword followed by the FOO part of your new CONFIG_LKP_OPTION1 config variable. For now, just read the statements we have made in the Kconfig file regarding this entry. More details on the Kconfig language/syntax are in the A few details on the Kconfig language section that follows.

  3. Save the file and exit the editor.
  4. (Re)configure the kernel: run make menuconfig. Then navigate to our cool new menu item under General Setup | Test case for LKP 2e book/Ch 2: creating …. Turn the feature on. Notice how, in Figure 2.21, it’s highlighted and off by default, just as we specified via the default n line.
        make menuconfig

    Here’s the relevant output:

    Figure 2.21: Kernel configuration via make menuconfig showing our new menu entry (before turning it on)

  5. Now turn it on by toggling it with the space bar, then save and exit the menu system.

    While there, try pressing the < Help > button. You should see the “help” text we provided within the init/Kconfig file.

  6. Check whether our feature has been selected:
    $ grep "LKP_OPTION1" .config
    CONFIG_LKP_OPTION1=y
    $ grep "LKP_OPTION1" include/generated/autoconf.h
    $

    We find that indeed it has been set to on (y) within our .config file, but it’s not yet within the kernel’s internal auto-generated header file. This will happen when we build the kernel.

    Now let’s check it via the useful non-interactive config script method. We covered this in the Using the kernel’s config script to view/edit the kernel config section.

    $ scripts/config -s LKP_OPTION1
    y

    Ah, it’s on, as expected (the -s option is the same as --state). Below, we disable it via the -d option, query it (-s), and then re-enable it via the -e option, and again query it (just for learning’s sake!):

    $ scripts/config -d LKP_OPTION1 ; scripts/config -s LKP_OPTION1
    n
    $ scripts/config -e LKP_OPTION1 ; scripts/config -s LKP_OPTION1
    y
  7. Build the kernel. Worry not; the full details on building the kernel are found in the next chapter. You can skip this for now, or you could always cover Chapter 3, Building the 6.x Linux Kernel from Source – Part 2, and then come back to this point.
    make -j4

    Further, in recent kernels, after the build step, every kernel config option that’s enabled (either y or m) appears as an empty file within include/config; this happens with our new config as well, of course:

    $ ls -l include/config/LKP_*
    -rw-r--r-- 1 c2kp c2kp 0 Apr 29 11:56 include/config/LKP_OPTION1
  8. Once done, recheck the autoconf.h header for the presence of our new config option:
    $ grep LKP_OPTION1 include/generated/* 2>/dev/null 
    include/generated/autoconf.h:#define CONFIG_LKP_OPTION1 1
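This #define exists because of the Kconfig entry we inserted back in step 2 (Figure 2.20). For reference, that entry looks roughly like this – a sketch only; the exact text, including the full prompt string elided here, is in the book repo’s ch2/Kconfig.patch:

```
config LKP_OPTION1
    bool "Test case for LKP 2e book/Ch 2: creating …"
    default n
    help
      A dummy kernel config, demonstrating how to add a custom
      entry to the kernel's Kconfig menu system.
```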

It worked (from 6.0 even Rust knows about it!). Yes; however, when working on an actual project or product, in order to leverage this new kernel config of ours, we would typically require a further step, setting up our config entry within the Makefile relevant to the code that uses this config option.

Here’s a quick example of how this might look. Let’s imagine we wrote some kernel code in a C source file named lkp_option1.c. Now, we need to ensure it gets compiled and built into the kernel image! How? Here’s one way: in the kernel’s top-level (or within its own) Makefile, the following line will ensure that it gets compiled into the kernel at build time; add it to the end of the relevant Makefile:

obj-${CONFIG_LKP_OPTION1}  +=  lkp_option1.o

Don’t stress about the fairly weird kernel Makefile syntax for now. The next few chapters will certainly shed some light on this. Also, we did cover this particular syntax in the How the Kconfig+Kbuild system works – a minimal take section.
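To see what the obj-$(CONFIG_FOO) idiom achieves, here’s a tiny shell analogy (this is not kernel Makefile code; it merely mimics how the config variable’s value selects which list the object file joins):

```shell
# Mimic kbuild's  obj-$(CONFIG_LKP_OPTION1) += lkp_option1.o  idiom:
# y -> built-in (obj-y), m -> module (obj-m), unset -> not built at all.
CONFIG_LKP_OPTION1=y
obj_y=""
obj_m=""
case "$CONFIG_LKP_OPTION1" in
    y) obj_y="$obj_y lkp_option1.o" ;;
    m) obj_m="$obj_m lkp_option1.o" ;;
esac
echo "obj-y =$obj_y"    # prints: obj-y = lkp_option1.o
echo "obj-m =$obj_m"    # prints: obj-m =
```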

Further, you should realize that the very same config can be used as a normal C macro within a piece of kernel code; for example, we could do things like this within our in-tree kernel (or module) C code:

#ifdef CONFIG_LKP_OPTION1
    /* ... feature-specific code ... */
#endif
Then again, it’s very much worth noting that the Linux kernel community has devised and strictly adheres to certain rigorous coding style guidelines. In this context, the guidelines state that conditional compilation should be avoided whenever possible; if it’s required to use a Kconfig symbol as a conditional, then please do it this way:

if (IS_ENABLED(CONFIG_LKP_OPTION1)) {
    /* ... feature-specific code ... */
}
The Linux kernel coding style guidelines can be found within the official kernel documentation; I urge you to refer to them often, and, of course, to follow them!

A few details on the Kconfig language

Our usage of the Kconfig language so far (Figure 2.20) is just the tip of the proverbial iceberg. The fact is, the Kconfig system uses the Kconfig language (or syntax) to express and create menus using simple ASCII text directives. The language includes menu entries, attributes, dependencies, visibility constraints, help text, and so on.

The kernel documents the Kconfig language constructs and syntax within its official documentation; do refer to this document for complete details.

A brief mention of the more common Kconfig constructs is given in the following table:



config <FOO> : Specifies the menu entry name; the config system generates the corresponding CONFIG_FOO macro. Use just the FOO part after the config keyword.

Menu attributes:

  • bool ["<description>"] : Specifies the config option as a Boolean; its value in .config will be either y (built into the kernel image) or will not exist (it will show up as a commented-out entry).
  • tristate ["<description>"] : Specifies the config option as tristate; its value in .config will be either y, or m (built as a kernel module), or will not exist (it will show up as a commented-out entry).
  • int ["<description>"] : Specifies the config option as taking an integer value.
  • range x y : For an integer whose valid range is from x to y.
  • default <value> : Specifies the default value; use y, m, n, or other, as required.
  • prompt "<description>" [if <expr>] : An input prompt with a describing sentence (can be made conditional); a menu entry can have at most one prompt.
  • depends on <expr> : Defines a dependency for the menu item; can have several, with the depends on FOO1 && FOO2 && (FOO3 || FOO4) type of syntax.
  • select <config> [if <expr>] : Defines a reverse dependency.
  • help : The indented text that follows is displayed when the < Help > button is selected.

Table 2.5: Kconfig, a few constructs

To help understand the syntax, a few examples from lib/Kconfig.debug (the file that describes the menu items for the Kernel Hacking submenu – it means kernel debugging, really – of the UI) follow. (Don’t forget, you can browse it online as well.)

  1. We will start with a simple and self-explanatory one (the CONFIG_DEBUG_INFO option):
    config DEBUG_INFO
        bool
        help
          A kernel debug info option other than "None" has been selected
          in the "Debug information" choice below, indicating that debug
          information will be generated for build targets.
  2. Next, let’s look at the CONFIG_FRAME_WARN option. Notice the range and the conditional default value syntax, as follows:
    config FRAME_WARN
        int "Warn for stack frames larger than"
        range 0 8192
        default 0 if KMSAN
        default 2048 if GCC_PLUGIN_LATENT_ENTROPY
        default 2048 if PARISC
        default 1536 if (!64BIT && XTENSA)
        default 1280 if KASAN && !64BIT
        default 1024 if !64BIT
        default 2048 if 64BIT
        help
          Tell the compiler to warn at build time for stack frames larger than this.
          Setting this too low will cause a lot of warnings.
          Setting it to 0 disables the warning.
  3. Next, the CONFIG_HAVE_DEBUG_STACKOVERFLOW option is a simple Boolean; it’s either on or off (the kernel either has the capability to detect kernel-space stack overflows or doesn’t). The CONFIG_DEBUG_STACKOVERFLOW option is also a Boolean. Notice how it depends on two other options, separated with a Boolean AND (&&) operator:
    config DEBUG_STACKOVERFLOW
        bool "Check for stack overflows"
        depends on DEBUG_KERNEL && HAVE_DEBUG_STACKOVERFLOW
        help
           Say Y here if you want to check for overflows of kernel, IRQ
           and exception stacks (if your architecture uses them). This
           option will show detailed messages if free stack space drops
           below a certain limit. [...]

Another useful thing: while configuring the kernel (via the usual make menuconfig UI), clicking on < Help > not only shows some (usually useful) help text, but also displays the current runtime values of various config options. The same can be seen by simply searching for a config option (via the slash key, /, as mentioned earlier). So, for example, type / and search for the kernel config named KASAN; this is what I see when doing so:

Figure 2.22: Partial screenshot showing the KASAN config option; you can see it’s off by default

If you’re unaware, KASAN is the Kernel Address SANitizer – it’s a brilliant compiler-based technology to help catch memory corruption defects; I cover it in depth in the book Linux Kernel Debugging.

Look carefully at the Depends on: line; it shows the dependencies as well as their current value. The important thing to note is that the menu item won’t even show in the UI unless the dependencies are fulfilled.
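Incidentally, you can check these on/off states outside the UI as well, by simply grepping the generated .config file (it lives in the root of the kernel source tree). Here's a simulated look, using a tiny, made-up .config fragment so the demo is self-contained:

```shell
# Fake up a minimal .config fragment purely for demonstration
cat > /tmp/demo.config <<'EOF'
CONFIG_DEBUG_INFO=y
CONFIG_FRAME_WARN=2048
# CONFIG_KASAN is not set
EOF
# Enabled options show up as =y / =m / =<value>; disabled ones as
# commented-out "... is not set" lines
grep -E 'KASAN|FRAME_WARN' /tmp/demo.config
```

In a real kernel source tree, you'd run the grep against .config itself after configuring.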

Alright! This completes our coverage of the Kconfig files, creating or editing a custom menu entry in the kernel config, a little Kconfig language syntax, and indeed this chapter.


Summary

In this chapter, you first learned about the Linux kernel’s release (or version) nomenclature (remember, Linux kernel releases are time- and not feature-based!), the various types of Linux kernels (-next trees, -rc/mainline trees, stable, LTS, SLTS, distributions, custom embedded), and the basic kernel development workflow. You then learned how to obtain for yourself a Linux kernel source tree and how to extract the compressed kernel source tree to disk. Along the way, you even got a quick 10,000-foot view of the kernel source tree so that its layout is clearer.

After that, critically, you learned how to approach the kernel configuration step and perform it – a key step in the kernel build process! Furthermore, you learned how to customize the kernel menu, adding your own entries to it, and a bit about the Kconfig/Kbuild system and the associated Kconfig files it uses, among others.

Knowing how to fetch and configure the Linux kernel is a useful skill to possess. We have just begun this long and exciting journey. You will realize that with more experience and knowledge of kernel internals, drivers, and the target system hardware, your ability to fine-tune the kernel to your project’s purpose will only get better.

We’re halfway to building a custom kernel; I suggest you digest this material, try out the steps in this chapter in a hands-on fashion, work on the questions/exercises, and browse through the Further reading section. Then, in the next chapter, let’s actually build the 6.1.25 kernel and verify it!


Exercise

Following pretty much exactly the steps you’ve learned in this chapter, I’d like you to now do the same for some other kernel, say, the 6.0.y Linux kernel, where y is the highest number (as of this writing, it’s 19)! Of course, if you wish, feel free to work on any other kernel:

  1. Navigate to and look up the 6.0.y releases.
  2. Download the latest v6.0.y Linux kernel source tree.
  3. Extract it to disk.
  4. Configure the kernel (begin by using the localmodconfig approach, then tweak the kernel config as required. As an additional exercise, you could run the script as well).
  5. Show the “delta” – the differences between the original and the new kernel config file (tip: use the kernel’s diffconfig script to do so).
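Regarding step 5: the kernel's scripts/diffconfig utility prints a neat one-line-per-option summary of what changed between two config files. Outside a kernel tree, plain diff on two (entirely made-up) config fragments conveys the same idea:

```shell
# Two tiny, hypothetical config snapshots: "before" and "after"
cat > /tmp/config.old <<'EOF'
CONFIG_DEBUG_INFO=y
CONFIG_FRAME_WARN=1024
# CONFIG_KASAN is not set
EOF
cat > /tmp/config.new <<'EOF'
CONFIG_DEBUG_INFO=y
CONFIG_FRAME_WARN=2048
CONFIG_KASAN=y
EOF
# From within a kernel source tree, you would instead run something like:
#   scripts/diffconfig /tmp/config.old /tmp/config.new
diff /tmp/config.old /tmp/config.new || true   # diff exits non-zero on differences
```

Here, the delta shows FRAME_WARN changing from 1024 to 2048 and KASAN being switched on.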


Questions

As we conclude, here is a list of questions for you to test your knowledge of this chapter’s material. You will find some of the questions answered in the book’s GitHub repo:

Further reading

To help you delve deeper into the subject, we provide a rather detailed list of online references and links (and, at times, even books) in a Further reading document in this book’s GitHub repository. It’s available here:

Learn more on Discord

To join the Discord community for this book – where you can share feedback, ask questions to the author, and learn about new releases – follow the QR code below:
