Embedded Programming with Modern C++ Cookbook

By Igor Viarheichyk

About this book

Developing applications for embedded systems may seem like a daunting task as developers face challenges related to limited memory, high power consumption, and maintaining real-time responses. This book is a collection of practical examples to explain how to develop applications for embedded boards and overcome the challenges that you may encounter while developing.

The book will start with an introduction to embedded systems and how to set up the development environment. By teaching you to build your first embedded application, the book will help you progress from the basics to more complex concepts, such as debugging, logging, and profiling. Moving ahead, you will learn how to use specialized memory and custom allocators. From here, you will delve into recipes that will teach you how to work with the C++ memory model, atomic variables, and synchronization. The book will then take you through recipes on inter-process communication, data serialization, and timers. Finally, you will cover topics such as error handling and guidelines for real-time systems and safety-critical systems.

By the end of this book, you will have become proficient in building robust and secure embedded applications with C++.

Publication date: April 2020
Publisher: Packt
Pages: 412
ISBN: 9781838821043

 

Fundamentals of Embedded Systems

Embedded systems are computer systems that combine hardware and software components to solve a specific task within a larger system or device. Unlike general-purpose computers, they are heavily specialized and optimized to perform only one task but do it really well. 

They are everywhere around us, but we rarely notice them. You can find them in virtually every home appliance or gadget, such as a microwave oven, TV set, network-attached storage, or smart thermostat. Your car contains several interconnected embedded systems that handle brakes, fuel injection, and infotainment.

In this chapter, we are going to deal with the following topics on embedded systems:

  • Exploring embedded systems
  • Working with limited resources
  • Looking at performance implications
  • Working with different architectures
  • Working with hardware errors
  • Using C++ for embedded development
  • Deploying software remotely
  • Running software remotely
  • Logging and diagnostics

 

 

Exploring embedded systems

Every computer system created to solve a particular problem as part of a larger system or device is an embedded system. Even your general-purpose PC or laptop contains many embedded systems. A keyboard, a hard drive, a network card, or a Wi-Fi module: each of these is an embedded system with its own processor, often called a microcontroller, and its own software, often called firmware.

Let's now dive into the different features of an embedded system.

How are they different from desktop or web applications?

The most distinctive feature of embedded systems compared to desktops or servers is their tight coupling of hardware and software specialized to accomplish a particular task.

Embedded devices work in a wide range of physical and environmental conditions. Most of them are not designed to work only in dedicated conditioned data centers or offices. They have to be functional in uncontrollable environments, often without any supervision and maintenance.

Since they are specialized, their hardware requirements are precisely calculated so that the system accomplishes its task while being as cost-efficient as possible. As a result, the software aims to utilize 100% of the available resources, with minimal or no reserves.

The hardware of embedded systems is much more diverse than that of regular desktops and servers. The design of each system is individual. It may require very specific CPUs and schematics connecting them to memory and peripheral hardware.

Embedded systems are designed to communicate with peripheral hardware. A major part of an embedded program is checking the status, reading input, sending data, or controlling the external device. It is common for an embedded system to not have a user interface. This makes development, debugging, and diagnostics much more difficult compared to doing the same on traditional desktop or web applications.

Types of embedded systems

Embedded systems span a wide range of use cases and technologies—from powerful systems used for autonomous driving or large-scale storage systems to tiny microcontrollers used to control light bulbs or LED displays. 

Based on the level of integration and specialization of hardware, embedded systems can roughly be divided into the following categories:

  • Microcontrollers (MCUs)
  • Systems on Chip (SoCs)
  • Application-Specific Integrated Circuits (ASICs)
  • Field Programmable Gate Arrays (FPGAs)

Microcontrollers

MCUs are general-purpose integrated circuits designed for embedded applications. A single MCU chip typically contains one or more CPUs, memory, and programmable input/output peripherals. Their design allows them to interface directly with sensors or actuators without adding any additional components.

MCUs are widely used in automobile engine control systems, medical devices, remote controls, office machines, appliances, power tools, and toys.

Their CPUs vary from simple 8-bit processors to the more complex 32-bit and even 64-bit processors. 

Lots of MCUs exist; the most common ones nowadays are the following:

  • The Intel MCS-51, also known as the 8051 MCU
  • AVR by Atmel
  • The Programmable Interface Controller (PIC) from Microchip Technology
  • Various ARM-based MCUs

System on Chip

An SoC is an integrated circuit that combines all the electronic circuits and parts needed to solve a particular class of problem on a single chip.

It may contain digital, analog, or mixed-signal functions, depending on the application. The integration of most electronic parts on a single chip gives two major benefits: miniaturization and low power consumption. Compared to a less integrated hardware design, an SoC requires significantly less power. Optimizing power consumption at both the hardware and software levels makes it possible to create systems that can work for days, months, or even years on a battery, without an external power source. An SoC often also integrates radio frequency signal processing, which, along with its compact physical size, makes it an ideal solution for mobile applications. Besides that, SoCs are commonly used in the automotive industry, in wearable electronics, and in the Internet of Things (IoT):

Figure 1.1: A Raspberry Pi Model B+

The Raspberry Pi family of single-board computers is an example of a system based on the SoC design. The Model B+ is built on top of a Broadcom BCM2837B0 SoC with an integrated quad-core 1.4 GHz ARM-based CPU, 1 GB of memory, and a network interface controller.

The board has four USB interfaces, a MicroSD card port to boot an operating system and store data, Ethernet and Wi-Fi network interfaces, HDMI video output, and a 40-pin GPIO header to connect custom peripheral hardware.

It is shipped with the Linux operating system and is an excellent choice for educational and DIY projects.

Application-specific integrated circuits

Application-specific integrated circuits, or ASICs, are integrated circuits customized by their manufacturers for a particular use. The customization is an expensive process, but it allows designers to meet requirements that are often infeasible for solutions based on general-purpose hardware. For example, modern high-efficiency Bitcoin miners are usually built on top of specialized ASIC chips.

To define the functionality of ASICs, hardware designers use one of the hardware description languages, such as Verilog or VHDL.

Field programmable gate arrays

Unlike SoCs, ASICs, and MCUs, field programmable gate arrays, or FPGAs, are semiconductor devices that can be reprogrammed at the hardware level after manufacturing. They are based around a matrix of configurable logic blocks (CLBs) connected via programmable interconnects. The interconnects can be programmed by developers to perform specific functions according to their requirements. An FPGA is programmed using a hardware description language (HDL), which allows the implementation of any combination of digital functions to process massive amounts of data very quickly and efficiently.

 

Working with limited resources

It is a common misconception that embedded systems are based on hardware that is much slower compared to regular desktop or server hardware. Although this is commonly the case, it is not always true.

Some applications may require lots of computation power or large amounts of memory. For example, autonomous driving requires both memory and CPU resources to process, in real time, the large amounts of data coming from various sensors using AI algorithms. Another example is high-end storage systems, which utilize large amounts of memory and CPU resources for data caching, replication, and encryption.

In either case, the embedded system hardware is designed to minimize the cost of the overall system. The result for software engineers working on embedded systems is that resources are scarce. They are expected to utilize all of the available resources and to take performance and memory optimizations very seriously.

 

Looking at performance implications

Most embedded applications are optimized for performance. As discussed earlier, the target CPU is chosen to be cost-efficient, and developers extract all the computation power it is capable of. An additional factor is communication with peripheral hardware, which often requires precise and fast reaction times. As a result, there is only limited room for scripting, interpreted, or bytecode languages such as Python or Java. Most embedded programs are written in languages that compile to native code, primarily C and C++.

To achieve maximum performance, embedded programs utilize all the performance optimization capabilities of compilers. Modern compilers are so good at code optimization that they can outperform assembly code written by skilled developers.

However, engineers cannot rely solely on the performance optimizations provided by compilers. To achieve maximum efficiency, they have to take into account the specifics of the target platform. Coding practices that are commonly used for desktop or server applications running on an x86 platform may be inefficient for different architectures such as ARM or MIPS. The utilization of specific features of the target architecture often gives a significant performance boost to the program.

 

Working with different architectures

Developers of desktop applications usually pay little attention to the hardware architecture. First, they often use high-level programming languages that hide these complexities at the cost of some performance drop. Second, in most cases, their code runs on x86 architecture and they often take its features for granted. For example, they may assume that the size of int is 32 bits, which is not true in many cases.

Embedded developers deal with a much wider variety of architectures. Even if they do not write code in assembly language native to the target platform, they should be aware that all C and C++ fundamental types are architecture-dependent; the standard only guarantees that int is at least 16 bits. They should also know the traits of particular architectures, such as endianness and alignment, and take into account that operations with floating point or 64-bit numbers, which are relatively cheap on x86 architecture, may be much more expensive on other architectures.

Endianness

Endianness defines the order in which bytes that represent large numerical values are stored in memory.

There are two types of endianness:

  • Big-endian: The most significant byte is stored first. The 0x01020304 32-bit value is stored at the ptr address as follows:

    Offset in memory Value
    ptr 0x01
    ptr + 1 0x02
    ptr + 2 0x03
    ptr + 3 0x04

Examples of big-endian architectures are AVR32 and Motorola 68000.

  • Little-endian: The least significant byte is stored first. The 0x01020304 32-bit value is stored at the ptr address as follows:

    Offset in memory Value
    ptr 0x04
    ptr + 1 0x03
    ptr + 2 0x02
    ptr + 3 0x01

The x86 architecture is little-endian.

  • Bi-endian: The hardware supports switchable endianness. Some examples are PowerPC and ARM (ARMv3 and later).

Endianness is particularly important when exchanging data with other systems. If a developer sends the 0x01020304 32-bit integer as is, it may be read as 0x04030201 if the receiver's endianness does not match the sender's. That is why data should be serialized before being exchanged.

This C++ snippet can be used to determine the endianness of a system:

#include <cstdint>
#include <iostream>

int main() {
    union {
        uint32_t i;
        uint8_t c[4];
    } data;
    data.i = 0x01020304;
    if (data.c[0] == 0x01) {
        std::cout << "Big-endian" << std::endl;
    } else {
        std::cout << "Little-endian" << std::endl;
    }
}
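The serialization mentioned above can be done portably with bit shifts, so the byte order in the transmitted buffer no longer depends on the host. A minimal sketch (the write_be32/read_be32 names are illustrative, not from a standard API):

```cpp
#include <cstdint>

// Store a 32-bit value in big-endian (network) byte order.
// The shifts make the layout independent of the host's endianness.
void write_be32(uint8_t* buf, uint32_t value) {
    buf[0] = static_cast<uint8_t>(value >> 24);
    buf[1] = static_cast<uint8_t>(value >> 16);
    buf[2] = static_cast<uint8_t>(value >> 8);
    buf[3] = static_cast<uint8_t>(value);
}

// Reassemble the value; correct on any architecture.
uint32_t read_be32(const uint8_t* buf) {
    return (static_cast<uint32_t>(buf[0]) << 24) |
           (static_cast<uint32_t>(buf[1]) << 16) |
           (static_cast<uint32_t>(buf[2]) << 8) |
           static_cast<uint32_t>(buf[3]);
}
```

Because both sides agree on the wire format rather than on their native layouts, a big-endian sender and a little-endian receiver exchange data correctly.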

Alignment

Processors don't read and write data in bytes but in memory words: chunks that match their native word size. 32-bit processors work with 32-bit words, 64-bit processors work with 64-bit words, and so on.

Reads and writes are most efficient when words are aligned—the data address is a multiple of the word size. For example, for 32-bit architectures, the 0x00000004 address is aligned, while 0x00000005 is unaligned.

Compilers align data automatically to achieve the most efficient data access. When it comes to structures, the result may be surprising for developers who are not aware of alignment:

struct {
    uint8_t c;
    uint32_t i;
} a = {1, 1};

std::cout << sizeof(a) << std::endl;

What is the output of the preceding code snippet? The size of uint8_t is 1 and the size of uint32_t is 4. A developer may expect that the size of the structure is the sum of the individual sizes. However, the result highly depends on the target architecture.

For x86, the result is 8. Let's add one more uint8_t field before i:

struct {
    uint8_t c;
    uint8_t cc;
    uint32_t i;
} a = {1, 1, 1};

std::cout << sizeof(a) << std::endl;

The result is still 8! The compiler places the data fields within a structure according to the alignment rules, adding padding bytes where needed. The rules are architecture-dependent, and the result may differ on other architectures. As a result, structures cannot be exchanged directly between two different systems without serialization, which will be explained in more depth in Chapter 8, Communication and Serialization.
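Because the compiler adds padding but does not reorder fields, the declaration order determines how much padding is inserted; grouping fields from largest to smallest is a common embedded habit. A small sketch, with sizes typical for 32- and 64-bit targets where uint32_t has 4-byte alignment:

```cpp
#include <cstdint>

// Same three fields, different order.
struct Wasteful {
    uint8_t  a; // 1 byte, then 3 bytes of padding to align 'b'
    uint32_t b; // must start on a 4-byte boundary
    uint8_t  c; // 1 byte, then 3 bytes of tail padding
};              // typically 12 bytes in total

struct Compact {
    uint32_t b; // largest field first
    uint8_t  a;
    uint8_t  c; // only 2 bytes of tail padding remain
};              // typically 8 bytes in total
```

Printing sizeof(Wasteful) and sizeof(Compact) on a given target shows exactly how much memory the field order costs.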

Besides CPU access, data alignment is also crucial for efficient memory mapping through hardware address translation mechanisms. Modern operating systems operate on 4 KB memory blocks, or pages, to map a process's virtual address space to physical memory. Aligning data structures on 4 KB boundaries can lead to performance gains.

Fixed-width integer types

C and C++ developers often forget that the size of fundamental data types, such as char, short, or int, is architecture-dependent. To make the code portable, embedded developers often use fixed-size integer types that explicitly specify the size of a data field.

The most commonly used data types are as follows:

Width    Signed     Unsigned
8-bit    int8_t     uint8_t
16-bit   int16_t    uint16_t
32-bit   int32_t    uint32_t

 

The pointer size also depends on the architecture. Developers often need to address elements of arrays, and since arrays are internally represented as pointers, the offset representation depends on the pointer size. size_t is a special data type that represents offsets and data sizes in an architecture-independent way.
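Fixed-width types pair naturally with static_assert to make size assumptions explicit at compile time. A short sketch (the SensorReading record is a made-up example, not from the book):

```cpp
#include <cstdint>

// A record with an explicit, architecture-independent field layout.
struct SensorReading {
    uint32_t timestamp;   // exactly 4 bytes on every platform
    int16_t  temperature; // exactly 2 bytes, signed
    uint8_t  sensor_id;   // exactly 1 byte
};

// Fail the build on any platform where the assumptions do not hold.
static_assert(sizeof(uint32_t) == 4, "uint32_t must be 4 bytes");
static_assert(sizeof(int16_t) == 2, "int16_t must be 2 bytes");
```

Had the fields been declared as int and short, their widths, and therefore the record layout, would silently change between architectures.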

 

Working with hardware errors

A significant part of an embedded developer's work is dealing with hardware. Unlike most application developers, embedded developers cannot rely on the hardware always working correctly. Hardware fails for different reasons, and embedded developers have to distinguish pure software failures from failures caused by hardware faults or glitches.

Early versions of hardware

Embedded systems are based on specialized hardware designed and manufactured for a particular use case. This implies that at the time the software for the embedded system is being developed, its hardware is not yet stable and well tested. When software developers encounter unexpected behavior in their code, it does not necessarily mean there is a software bug; it might be the result of incorrectly working hardware.

It is hard to triage these kinds of problems. They require knowledge, intuition, and sometimes the use of an oscilloscope to narrow the root cause of an issue down to hardware.

Hardware is unreliable

Hardware is inherently unreliable. Each hardware component has a probability of failure and developers should be aware that hardware can fail at any time. Data stored in memory can be corrupted because of memory failure. Messages being transmitted over a communication channel can be altered because of external noise.

Embedded developers are prepared for these situations. They use checksums or cyclic redundancy check (CRC) codes to detect and, if possible, correct corrupted data.
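As an illustration, here is a minimal additive checksum. It is far weaker than a real CRC, but it shows the compute-on-send, verify-on-receive pattern:

```cpp
#include <cstddef>
#include <cstdint>

// Sum all payload bytes modulo 256. A production system would use a
// CRC (e.g. CRC-16 or CRC-32), which catches far more error patterns.
uint8_t checksum(const uint8_t* data, size_t len) {
    uint8_t sum = 0;
    for (size_t i = 0; i < len; ++i) {
        sum = static_cast<uint8_t>(sum + data[i]);
    }
    return sum;
}

// Receiver side: recompute and compare with the transmitted checksum.
bool verify(const uint8_t* data, size_t len, uint8_t expected) {
    return checksum(data, len) == expected;
}
```

The sender appends checksum(payload) to the message; if any byte is altered in transit, verify fails on the receiver and the message can be discarded or retransmitted.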

The influence of environmental conditions

High temperature, low temperature, high humidity, vibration, dust, and other environmental factors can significantly affect the performance and reliability of hardware. While developers design their software to handle all potential hardware errors, it is common practice to test the system in different environments. Besides that, knowledge of environmental conditions can give an important clue when working on the root-cause analysis of an issue. 

 

Using C++ for embedded development

For many years, the vast majority of embedded projects were developed using the C programming language. This language perfectly fits the needs of embedded software developers. It provides feature-rich and convenient syntax, but at the same time it is relatively low-level and does not hide platform specifics from developers.

Due to its versatility, compactness, and the high performance of the compiled code, it became the de facto standard development language in the embedded world. Compilers for the C language exist for most, if not all, architectures, and they are optimized to generate machine code that is often more efficient than code written by hand.

Over time, the complexity of embedded systems increased and developers faced the limitations of C, the most notable being error-prone resource management and a lack of high-level abstractions. The development of complex applications in C requires a lot of effort and time. 

At the same time, C++ was evolving, gaining new features and adopting programming techniques that make it the best choice for developers of modern embedded systems. These new features and techniques are as follows:

  • You don't pay for what you don't use
  • Object-oriented programming to tame code complexity
  • Resource acquisition is initialization (RAII)
  • Exceptions
  • A powerful standard library
  • Threads and a memory model as part of the language specification

You don't pay for what you don't use

One of the mottos of C++ is You don't pay for what you don't use. This language is packed with many more features than C, yet it promises zero overhead for those that are not used. 

Take, for example, virtual functions:

#include <iostream>

class A {
public:
    void print() {
        std::cout << "A" << std::endl;
    }
};

class B: public A {
public:
    void print() {
        std::cout << "B" << std::endl;
    }
};

int main() {
    A* obj = new B;
    obj->print();
}

The preceding code outputs A, despite obj pointing to an object of the B class. To make it work as expected, the developer adds the virtual keyword:

#include <iostream>

class A {
public:
    virtual void print() {
        std::cout << "A" << std::endl;
    }
};

class B: public A {
public:
    void print() {
        std::cout << "B" << std::endl;
    }
};

int main() {
    A* obj = new B;
    obj->print();
}

After this change, the code outputs B, which is what most developers expect to get as a result. You may ask why C++ does not enforce every method to be virtual by default. This approach is adopted by Java and doesn't seem to have any downsides.

The reason is that virtual functions are not free. Function resolution is performed at runtime via the virtual table, an array of function pointers, which adds a slight overhead to each function invocation. If you do not need dynamic polymorphism, you do not pay for it. That is why C++ developers add the virtual keyword explicitly, agreeing to functionality that adds performance overhead.

Object-oriented programming to tame code complexity

As the complexity of embedded programs grows over time, it becomes more and more difficult to manage them using the traditional procedural approach provided by the C language. If you take a look at a large C project, such as the Linux kernel, you will see that it adopts many aspects of object-oriented programming.

The Linux kernel extensively uses encapsulation, hiding implementation details and providing object interfaces using C structures.

Though it is possible to write object-oriented code in C, it is much easier and convenient to do it in C++, where a compiler does all the heavy lifting for the developers.

Resource acquisition is initialization

Embedded developers work a lot with the resources provided by the operating system: memory, files, and network sockets. C developers use pairs of API functions to acquire and free resources; for example, malloc to claim a block of memory and free to return it to the system. If for some reason the developer forgets to invoke free, this block of memory leaks. Memory leaking, and resource leaking in general, is a common problem in applications written in C:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <string.h>

int AppendString(const char* str) {
    int fd = open("test.txt", O_CREAT|O_RDWR|O_APPEND);
    if (fd < 0) {
        printf("Can't open file\n");
        return -1;
    }
    size_t len = strlen(str);
    if (write(fd, str, len) < len) {
        printf("Can't append a string to a file\n");
        return -1;
    }
    close(fd);
    return 0;
}

The preceding code looks correct, but it contains a serious issue. If the write function returns an error or writes less data than requested (which is valid behavior), the AppendString function logs an error and returns. However, it does not close the file descriptor, which therefore leaks. Over time, more and more file descriptors leak, and at some point the program reaches the limit of open file descriptors, making all subsequent calls to the open function fail.

C++ provides a powerful programming idiom that prevents resource leakage: RAII. A resource is allocated in an object constructor and deallocated in the object destructor. This means that the resource is only held while the object is alive. It is automatically freed when the object is destroyed:

#include <fstream>
#include <stdexcept>
#include <string>

void AppendString(const std::string& str) {
    std::ofstream output("test.txt", std::ofstream::app);
    if (!output.is_open()) {
        throw std::runtime_error("Can't open file");
    }
    output << str;
}

Note that this function does not call close explicitly. The file is closed in the destructor of the output object, which is automatically invoked when the AppendString function returns.
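The same idiom works for raw POSIX file descriptors such as the one in the earlier C example. A minimal sketch (the FdGuard class is hypothetical, not a standard type):

```cpp
#include <fcntl.h>
#include <unistd.h>

#include <stdexcept>

// RAII wrapper for a POSIX file descriptor: acquired in the
// constructor, released in the destructor, so it cannot leak.
class FdGuard {
public:
    FdGuard(const char* path, int flags)
        : fd_(open(path, flags)) {
        if (fd_ < 0) {
            throw std::runtime_error("Can't open file");
        }
    }
    ~FdGuard() {
        close(fd_); // runs on every exit path, including exceptions
    }
    // Non-copyable: two guards must not close the same descriptor.
    FdGuard(const FdGuard&) = delete;
    FdGuard& operator=(const FdGuard&) = delete;

    int get() const { return fd_; }

private:
    int fd_;
};
```

With such a guard in place, early returns and thrown exceptions can no longer leak the descriptor, because the destructor closes it automatically.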

Exceptions

Traditionally, C developers handle errors using error codes. This approach requires a lot of attention from coders and is a constant source of hard-to-find bugs in C programs. It is too easy to omit or overlook a check for a return code, masking an error:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

char read_last_byte(const char* filename) {
    char result = 0;
    int fd = open(filename, O_RDONLY);
    if (fd < 0) {
        printf("Can't open file\n");
        return -1;
    }
    lseek(fd, -1, SEEK_END);
    size_t s = read(fd, &result, sizeof(result));
    if (s != sizeof(result)) {
        printf("Can't read from file: %zu\n", s);
        close(fd);
        return -1;
    }
    close(fd);
    return result;
}

The preceding code has at least two issues related to error handling. First, the result of the lseek function call is not checked. If lseek returns an error, the function will work incorrectly. The second issue is more subtle, yet more important and harder to fix. The read_last_byte function returns -1 to indicate an error, but it is also a valid value of a byte. It is not possible to distinguish whether the last byte of a file is 0xFF or whether the function encountered an error. To correctly handle this case, the function interface should be redefined as follows:

int read_last_byte(const char* filename, char* result);

The function returns -1 in the case of an error and 0 otherwise. The result is stored in a char variable passed via a pointer. Although this interface is correct, it is less convenient for developers than the original one.

A program that eventually crashes randomly may be considered the best outcome for these kinds of errors. It would be worse if it kept working, silently corrupting data or generating incorrect results.

Besides that, the code that implements the logic and the code responsible for error checks are intertwined. The code becomes hard to read and hard to understand and, as a result, even more error-prone.

Although developers can still keep using return codes, the recommended way of error handling in modern C++ is exceptions. Correctly designed and correctly used exceptions significantly reduce the complexity of error handling, making code readable and robust. 

The same function written in C++ using exceptions looks much cleaner: 

#include <fstream>

char read_last_byte2(const char* filename) {
    char result = 0;
    std::fstream file;
    file.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    file.open(filename);
    file.seekg(-1, file.end);
    file.read(&result, sizeof(result));
    return result;
}

 

The powerful standard library

C++ comes with a feature-rich and powerful standard library. Many functions that previously required C developers to use third-party libraries are now part of the standard C++ library. This means fewer external dependencies, more stable and predictable behavior, and improved portability between hardware architectures.

The C++ standard library comes with containers built on top of the most commonly used data structures, such as arrays, binary trees, and hash tables. These containers are generic and efficiently cover most of the developer's everyday needs. Developers do not need to spend time and effort creating their own, often error-prone, implementations of the essential data structures.

The containers are carefully designed to minimize the need for explicit resource allocation and deallocation, leading to a significantly lower chance of leaking memory or other system resources.

The standard library also provides many standard algorithms, such as finding, sorting, replacing, binary search, operations on sets, and permutations. The algorithms can be applied to any container that exposes iterator interfaces. Combined with the standard containers, they help developers focus on high-level abstractions and build them on top of well-tested functionality with a minimal amount of additional code.
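For instance, sorting a buffer of sensor readings and checking it for a value takes a couple of lines on top of std::vector (the sort_and_find helper is illustrative):

```cpp
#include <algorithm>
#include <vector>

// Sort the readings in place, then look the value up with a binary
// search. Both algorithms work through the container's iterators.
bool sort_and_find(std::vector<int>& readings, int value) {
    std::sort(readings.begin(), readings.end());
    return std::binary_search(readings.begin(), readings.end(), value);
}
```

A hand-rolled C equivalent would need a custom dynamic array, a sort routine, and a search routine, each a potential source of bugs.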

Threads and a memory model as part of the language specification

The C++11 standard introduced a memory model that clearly defines the behavior of a C++ program in a multithreaded environment. 

For the C language specification, the memory model was out of scope. The language itself was not aware of threads or parallel execution semantics. It was up to third-party libraries, such as pthreads, to provide all the necessary support for multithreaded applications.

Earlier versions of C++ followed the same principle. Multithreading was out of the scope of the language specification. However, modern CPUs with multiple pipelines supporting instruction reordering demanded more deterministic behavior of compilers.

As a result, modern specifications of C++ explicitly define classes for threads, various types of locks and mutexes, condition variables, and atomic variables. This gives embedded developers a powerful tool kit to design and implement applications capable of utilizing all the power of modern multicore CPUs. Since the tool kit is part of the language specification, these applications have deterministic behavior and are portable to all supported architectures.
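As a small illustration of that toolkit, several threads can safely increment a shared counter through std::atomic, with the outcome guaranteed by the memory model (the parallel_count helper is illustrative):

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Run 'threads' workers, each incrementing a shared atomic counter
// 'increments' times. Without std::atomic this would be a data race
// with undefined behavior under the C++ memory model.
int parallel_count(int threads, int increments) {
    std::atomic<int> counter{0};
    std::vector<std::thread> workers;
    for (int t = 0; t < threads; ++t) {
        workers.emplace_back([&counter, increments] {
            for (int i = 0; i < increments; ++i) {
                counter.fetch_add(1, std::memory_order_relaxed);
            }
        });
    }
    for (auto& w : workers) {
        w.join(); // wait for all workers to finish
    }
    return counter.load();
}
```

Because std::thread and std::atomic are part of the language standard, this code behaves identically on every conforming platform, from x86 servers to ARM boards.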

 

Deploying software remotely

The deployment of software for embedded systems is often a complex procedure that should be carefully designed, implemented, and tested. There are two major challenges:

  • Embedded systems are often deployed in places that are difficult or impractical for a human operator to access.
  • If software deployment fails, the system can become inoperable. It will require the intervention of a skilled technician and additional tools for recovery. This is expensive and often impossible.

A solution to the first challenge, for embedded systems that are connected to the internet, was found in the form of Over-the-Air (OTA) updates. The system periodically connects to a dedicated server and checks for available updates. If an updated version of the software is found, it is downloaded to the device and installed into persistent memory.

This approach is widely adopted by manufacturers of smartphones, Set-Top-Box (STB) appliances, smart TVs, and game consoles connected to the internet.

When designing OTA updates, system architects should take into account many factors that affect the scalability and reliability of the overall solution. For example, if all devices check for updates at approximately the same time, this creates high peak loads on the update servers while leaving them idle the rest of the time. Randomizing the check time keeps the load evenly distributed.

The target system should be designed to reserve enough persistent memory to download the complete update image before applying it. The code implementing the download should handle network connection drops and resume the download once the connection is recovered, rather than starting over.

Another important factor in OTA updates is security. The update process should only accept genuine update images. Updates are cryptographically signed by the manufacturer, and an image is not accepted by the installer running on the device unless the signature matches.

Developers of embedded systems are aware that the update may fail for different reasons; for example, a power outage during the update. Even if the update completes successfully, the new version of the software may be unstable and crash on startup. It is expected that even in such situations the system will be able to recover.

This is achieved by separating the main software components and the bootloader. The bootloader validates the consistency of the main components, such as the operating system kernel and root filesystem that contains all the executables, data, and scripts. Then, it tries to run the operating system. In the case of failure, it switches to the previous version, which should be kept in the persistent memory along with the new one. Hardware watchdog timers are used to detect and prevent situations where a software update causes the system to hang.
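The fallback logic can be sketched as a simple A/B slot selection. Everything below (the Slot descriptor, the attempt limit) is hypothetical; real bootloaders validate checksums or signatures over raw flash partitions:

```cpp
#include <string>

// Hypothetical image descriptor. 'boot_attempts' is incremented before
// each try and reset on a successful boot; the watchdog path bumps it
// when the system hangs.
struct Slot {
    std::string name;
    bool image_valid;   // consistency check passed
    int  boot_attempts; // failed or pending boot attempts so far
};

constexpr int kMaxAttempts = 3; // give up on a slot after this many tries

// Pick the slot to boot: prefer the new image, fall back to the
// previous version kept in persistent memory.
const Slot* select_boot_slot(const Slot& current, const Slot& previous) {
    if (current.image_valid && current.boot_attempts < kMaxAttempts) {
        return &current;
    }
    return &previous;
}
```

A corrupted download fails the validity check and the device silently boots the previous version; a new image that hangs on startup exhausts its attempts via the watchdog and is abandoned the same way.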

It is impractical to use OTA or complete image re-flashing during software development and testing. It significantly slows down the development process. Engineers use other ways to deploy their software builds to the development systems, such as a remote shell or network filesystems that allow file sharing between developers' workstations and target boards.

 

Running software remotely

Embedded systems are designed to solve a particular problem using a specific combination of hardware and software components. That is why all software components in a system are tailored to fulfill this goal. Everything non-essential is disabled and all custom software is integrated into the boot sequence.

Users do not launch embedded programs; they start on system boot. However, during the development process, engineers need to run their applications without rebooting the system.

This is done differently depending on the type of the target platform. For powerful-enough systems based on SoC and running a preemptive multitasking operating system such as Linux, it can be done using a remote shell.

Modern systems usually use Secure Shell (SSH) as a remote shell. The target system runs an SSH daemon that waits for incoming connections. Developers connect using an SSH client, such as ssh on Linux or PuTTY on Windows, to get access to the target system. Once connected, they can work with the Linux shell on the embedded board in the same way as on a local computer.

The common workflow for running the program remotely is as follows:

  1. Build a program executable in your local system using a cross-compilation toolkit.
  2. Copy it to the remote system using the scp tool.
  3. Connect to the remote system using SSH and run the executable from the command line.
  4. Using the same SSH connection, analyze the program output.
  5. When the program terminates or gets interrupted by the developer, fetch its logs back to the developer's workstation for in-depth analysis.

MCUs do not have enough resources for a remote shell. Developers usually upload the compiled code directly into the platform memory and initiate the code execution from the particular memory address.

 

Logging and diagnostics

Logging and diagnostics are an important aspect of any embedded project.

In many cases, using an interactive debugger is not possible or practical. Hardware state can change within milliseconds; by the time a program stops on a breakpoint, the state the developer wanted to inspect may already be gone. Collecting detailed log data and using tools for its analysis and visualization is a better approach for high-performance, multithreaded, time-sensitive embedded systems.

Since in most cases resources are limited, developers often have to make tradeoffs. On the one hand, they need to collect as much data as possible to identify the root cause of a failure: whether it is the software or the hardware, the status of the hardware components at the time of the failure, and the accurate timing of the hardware and software events handled by the system. On the other hand, the space available for the log is limited, and each log write affects overall performance.

The solution is to buffer log data locally on the device and send it to a remote system for detailed analysis.
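Local buffering with bounded memory is commonly done with a ring buffer: new entries overwrite the oldest ones, so the log never outgrows its reserved space. The following is a minimal sketch under that assumption; the class name, capacity, and the use of std::string (rather than a fixed-size record) are illustrative choices, and a device would periodically drain the buffer to a remote collector.

```cpp
#include <array>
#include <cstddef>
#include <string>
#include <utility>

// Fixed-capacity ring buffer for log lines. Once full, each new line
// overwrites the oldest one, keeping memory usage bounded.
class RingLog {
public:
    void add(std::string line) {
        buf_[head_] = std::move(line);
        head_ = (head_ + 1) % buf_.size();
        if (count_ < buf_.size()) ++count_;
    }
    std::size_t size() const { return count_; }

private:
    std::array<std::string, 64> buf_{};  // capacity chosen for illustration
    std::size_t head_ = 0;               // index of the next slot to write
    std::size_t count_ = 0;              // number of valid entries
};
```

A real embedded logger would also avoid heap allocation on the hot path and add synchronization if several threads log concurrently; both are omitted here for brevity.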

This approach works fine for the development of embedded software. However, the diagnostics of the deployed systems require more sophisticated techniques.

Many embedded systems work offline and do not provide convenient access to internal logs. Developers need to carefully design and implement other means of diagnostics and reporting. If a system does not have a display, LED indicators or beeps are often used to encode various error conditions. They are sufficient for indicating the failure category, but in most cases they cannot provide the detail necessary to pin down the root cause.
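Blink codes of this kind are typically just a count of short pulses per error category. The sketch below shows one hypothetical encoding; the categories, their numeric values, and the pattern representation are all assumptions for illustration, not a standard.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical error categories, each reported as that many short blinks.
enum class ErrorCategory : std::uint8_t {
    None = 0, Sensor = 2, Network = 3, Storage = 4
};

// Build the on/off sequence for a category: N (on, off) pulses followed
// by a long pause, so a field engineer can count the blinks.
std::vector<bool> blink_pattern(ErrorCategory e) {
    std::vector<bool> pattern;
    auto count = static_cast<std::uint8_t>(e);
    for (std::uint8_t i = 0; i < count; ++i) {
        pattern.push_back(true);   // LED on (short)
        pattern.push_back(false);  // LED off (short)
    }
    pattern.push_back(false);      // trailing pause marks the end of the code
    return pattern;
}
```

On a real device, the firmware would drive a GPIO pin through this sequence with fixed on/off durations in a timer loop.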

Many embedded devices have dedicated diagnostics modes used to test the hardware components. After powering up, virtually any device or appliance performs a Power-On Self-Test (POST), which runs quick tests of the hardware. These tests are meant to be fast and do not cover all scenarios. That is why many devices also have hidden service modes that developers or field engineers can activate to perform more thorough tests.
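Structurally, a POST is often just a fixed table of quick check routines run in order at boot. The following sketch is illustrative only: the check functions are placeholders for real hardware probes, and the names are assumptions.

```cpp
#include <cstddef>

using CheckFn = bool (*)();

// Placeholder probes; real ones would e.g. write/read a RAM test pattern,
// verify a flash image checksum, or ping devices on an I2C bus.
bool check_ram()     { return true; }
bool check_flash()   { return true; }
bool check_sensors() { return true; }

struct Check { const char* name; CheckFn fn; };

// Run the checks in order; return the name of the first failing one,
// or nullptr if all pass.
const char* run_post(const Check* checks, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        if (!checks[i].fn())
            return checks[i].name;
    return nullptr;
}
```

A hidden service mode would typically reuse the same table but add slower, more exhaustive checks that are skipped during a normal boot.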

 

Summary

In this chapter, we took a high-level look at embedded software and what makes it different, and learned why and how C++ can be used efficiently in this area.

About the Author

  • Igor Viarheichyk

    Igor Viarheichyk works as an engineering manager at Samsung, developing a safety-critical middleware platform for advanced driver assistance systems aimed at specialized automotive embedded platforms. Over the previous 20 years of his career, prior to joining Samsung, he played roles ranging from software engineer to software architect to engineering manager on a variety of projects, gaining vast experience in the areas of system programming, embedded programming, network protocols, distributed and fault-tolerant systems, and software internationalization. Though he knows and actively uses programming languages such as C, Java, and Python, C++ is his language of choice for implementing large-scale, high-performance applications.
