1. Real-Time Systems – Setting the Scene
40 years ago, software development was widely seen as consisting only of programming. And it was regarded more as an art than a science (and certainly not as an engineering discipline). Perhaps that's why this period is associated with so many gloomy tales of project failure. Well, the industry has matured. Along the way, we gained new languages, real design methods, and, in 1968, the distinction between computer science and software engineering.
The microprocessor arrived circa 1970 and set a revolution in motion. However, experienced software developers played little part in this. For, until the late 1970s, most developers of microcomputer software were electronic, electrical, or control engineers. And they proceeded to make exactly the same mistakes as their predecessors. Now, why didn't they learn from the experience of earlier workers? There were three main reasons for this. In the first place, there was little contact between electronic engineers (and the like) and computer scientists. In the second place, many proposed software design methods weren't suitable for real-time applications. Thirdly, traditional computer scientists were quite dismissive of the difficulties met by microprocessor systems designers. Because programs were small, the tasks were trivial (or so it was concluded).
Over the years, the industry has changed considerably. The driving force for this has been the need to:
- Reduce costs
- Improve quality, reliability, and safety
- Reduce design, development, and commissioning timescales
- Design complex systems
- Build complex systems
Without this pressure for change, the tools, techniques, and concepts discussed in this book would probably still be academic playthings.
Early design methods can be likened to handcrafting, while the latest ones are more like automated manufacture. But, as in any industry, it's no good automating with the wrong tools; we have to use the right tools in the right place at the right time. This chapter lays the groundwork for later work by giving a general picture of real-time systems. It does the following:
- Highlights the differences between general-purpose computer applications (for example, information technology, management information systems, and more) and real-time systems
- Looks at the types of real-time systems met in practice
- Describes the environmental and performance requirements of embedded real-time systems
- Describes the typical structures of modern microprocessors and microcomputers
- Shows, in general, how software design and development techniques are influenced by these factors
The detailed features of modern software methods are covered in later chapters.
1.1 Categorizing Computer Systems
So, how are computer systems categorized? There are many answers to this, sometimes conflicting, sometimes overlapping. But if we use the speed of response as the main criterion, then three general groups emerge:
- Batch: I don't mind when the computer results arrive, within reason (the time taken may be hours or even days in such systems).
- Interactive online: I would like the results within a fairly short period of time, typically, a few seconds.
- Real-time: I need the results within definite timescales; otherwise, the system just won't work properly.
Let's consider these in turn.
An example of a modern batch system is shown in Figure 1.1. Methods like this are used where computing resources are scarce and/or expensive, as batch working is a very efficient technique.
Here, the user usually preprocesses all programs and information, perhaps storing data on a local computer. At a convenient time, say, at the start of an evening shift, this job is passed over the data link to a remote site (often, a number of jobs are transmitted as a single job lot). When all the jobs are finished, the results are transmitted back to the originating site.
Interactive online computer systems are widely used in banking, holiday booking, and mail-order systems. Here, for private systems, access to the system is made using (typically) PC-based remote terminals (Figure 1.2):
The local processing of data isn't normally done in this instance. Instead, all transactions are handled by the central computer in a time-slice fashion. Routing and access control are the responsibility of the frontend processors and local multiplexers. Many readers will, of course, have experience of such systems through their use of the internet and the web (perhaps the importance of timeliness in interactive systems is summed up by the definition of www as standing for world wide wait). A further point to take note of is that response times depend on the amount of activity. All systems slow down as load builds up, sometimes seizing up at peak times. For time-critical applications, this type of response is unacceptable, as, for example, in auto cruise control systems (Figure 1.3):
Here, the driver dials in the desired cruising speed. The cruise control computer notes this and compares it with the actual vehicle speed. If there is a difference, correcting signals are sent to the power unit. The vehicle will either speed up or slow down, depending on the correction required. Provided the control is executed quickly, the vehicle will be powered in a smooth and responsive manner. However, if there is a significant delay in the computer, a kangaroo-like performance occurs. Clearly, in this case, the computer is worse than useless; it degrades the car's performance.
In this book, "real-time" is taken to imply time-bound response constraints. Should computer responses exceed specific time bounds, then this results in performance degradation and/or malfunction. So, within this definition, batch and interactive online systems are not considered to operate in real-time.
1.2 Real-Time Computer Systems
1.2.1 Time and Criticality Issues
From what's been said so far, one factor distinguishes real-time systems from batch and online applications: timeliness. Unfortunately, this is a rather limited definition; a more precise one is needed. Many ways of categorizing real-time systems have been proposed and are in use. One particular pragmatic scheme, based on time and criticality, is shown in Figure 1.4. An arbitrary boundary between slow and fast is 1 second (chosen because problems shift from individual computing issues to overall system aspects at around this point). The related attributes are given in Figure 1.5.
Hard, fast embedded systems tend, in computing terms, to be small (or maybe a small, localized part of a larger system). Computation times are short (typically, in the tens of milliseconds or faster), and deadlines are critical. Software complexity is usually low, especially in safety-critical work. A good example is the airbag deployment system in motor vehicles. Late deployment defeats the whole purpose of airbag protection:
Hard, slow systems do not fall into any particular size category (though, many, as with process controllers, are small). An illustrative example of such an application is an anti-aircraft missile-based point-defense system for fast patrol boats. Here, the total reaction time is in the order of 10 seconds. However, the consequences of failing to respond in this time frame are self-evident.
Larger systems usually include comprehensive, and sometimes complex, human-machine interfaces (HMIs). Such interfaces may form an integral part of the total system operation as, for instance, in integrated weapon fire-control systems. Fast operator responses may be required, but deadlines are not as critical as in the previous cases. Significant tolerance can be permitted (in fact, this is generally true when humans form part of the system operation). HMI software tends to be large and complex.
The final category, soft/slow, is typified by condition monitoring, trend analysis, and statistical analysis in, for example, factory automation. Frequently, such software is large and complex. Applications like these may be classified as information processing (IP) systems.
1.2.2 Real-Time System Structures
It is clear that the fundamental difference between real-time and other systems (such as batch and interactive ones) is timeliness. However, this in itself tells us little about the structure of such computer systems. So, before looking at modern real-time systems, it's worth digressing to consider the setup of IT-type mainframe installations. While most modern mainframe systems are large and complex (and may be used for a whole variety of jobs), they have many features in common. First, the essential architectures are broadly similar; the real differences lie in the applications themselves and the application software. Second, the physical environments are usually benign ones, often including air conditioning. Peripheral devices include terminals, PCs, printers, plotters, disks, tapes, communication links, and little else. Common to many mainframe installations is the use of terabytes of disk and tape storage. The installation itself is staffed and maintained by professional data processing (DP) personnel. It requires maintenance in the broadest sense, including that for upgrading and modifying programs. In such a setting, it's not surprising that the computer is the focus of attention and concern.
By contrast, real-time systems come in many types and sizes. The largest, in geographical terms, are telemetry control systems (Figure 1.6):
Such systems are widely used in the gas, oil, water, and electricity industries. They provide centralized control and monitoring of remote sites from a single control room.
Smaller in size, but probably more complex in nature, are missile control systems (Figure 1.7):
Many larger embedded applications involve a considerable degree of complex man-machine interaction. Typical of these are the command and control systems of modern naval vessels (Figure 1.8):
And, of course, one of the major application areas of real-time systems is that of avionics (Figure 1.9):
These, in particular, involve numerous hard, fast, and safety-critical systems.
On the industrial scene, there are many installations that use computer-based standalone controllers (often for quite dedicated functions). Applications include vending machines (Figure 1.10), printer controllers, anti-lock braking, and burglar alarms; the list is endless.
These examples differ in many detailed ways from DP installations, and such factors are discussed next. There are, though, two fundamental points. First, as stated previously, the computer is seen to be merely one component of a larger system. Second, the user does not normally have the requirements – or facilities – to modify programs on a day-to-day basis. In practice, most users won't have the knowledge or skills to reprogram the machine.
Embedded systems use a variety of hardware architectures (or platforms), as shown in Figure 1.11:
Many are based on special-to-purpose (that is, bespoke) designs, especially where there are significant constraints such as:
- Environmental aspects (temperature, shock, vibration, humidity, and so on)
- Size and weight (aerospace, auto, telecomms, and so on)
- Cost (auto, consumer goods, and so on)
The advantage of bespoke systems is that products are optimized for the applications. Unfortunately, design and development is a costly and time-consuming process. A much cheaper and faster approach is to use ready-made items, a commercial off-the-shelf (COTS) buying policy. Broadly speaking, there are two alternative approaches:
- Base the hardware design on the use of sets of circuit boards
- Implement the design using some form of PC
In reality, these aren't mutually exclusive.
(a) COTS Board-Based Designs
Many vendors offer single-board computer systems, based on particular processors and having a wide range of peripheral boards. In some cases, these may be compatible with standard PC buses such as PCI (peripheral component interconnect). For embedded applications, it is uncertain whether boards from different suppliers can be mixed and matched with confidence. However, where boards are designed to comply with well-defined standards, this can be done (generally) without worry. One great advantage of this is that it doesn't tie a company to one specific supplier. Two standards are particularly important in the embedded world: VME and PC/104.
VME was originally introduced by a number of vendors in 1981 and was later standardized as IEEE standard 1014-1987. It is especially important to developers of military and similar systems, as robust, wide-temperature range boards are available. A second significant standard for embedded applications is PC/104, which is a cheaper alternative to VME. It is essentially a PC but with a different physical construction, being based on stackable circuit boards (it gets its name from its PC roots and the number of pins used to connect the boards together, that is, 104). At present, it is estimated that more than 150 vendors manufacture PC/104 compatible products.
(b) COTS PC-Based Designs
Clearly, PC/104 designs are PC-based. However, an alternative to the board solution is to use ready-made personal computers. These may be tailored to particular applications by using specialized plug-in boards (for example, stepper motor drives, data acquisition units, and so on). If the machine is to be located in, say, an office environment, then a standard desktop computer may be satisfactory. However, these are not designed to cope with conditions met on the factory floor, such as dust, moisture, and so on. In such situations, ruggedized, industrial-standard PCs can be used. Where reliability, durability, and serviceability are concerned, these are immensely superior to the desktop machines.
1.2.3 Characteristics of Embedded Systems
Embedded computers are defined to be those where the computer is used as a component within a system, not as a computing engine in its own right. This definition is the one that, at heart, separates embedded from non-embedded designs (note that, from now on, "embedded" implicitly means "real-time embedded").
Embedded systems are characterized (Figure 1.12) by the following:
- The environments they work in
- The performance expected of them
- The interfaces to the outside world:
(a) Environmental Aspects
Environmental factors may, at first glance, seem to have little bearing on software. Primarily, they affect the following:
- Hardware design and construction
- Operator interaction with the system
But these, to a large extent, determine how the complete system works – and that defines the overall software requirements. Consider the following physical effects:
- Shock and vibration
- Size limits
- Weight limits
The temperature ranges commonly met in embedded applications are shown in Figure 1.13:
Many components used in commercial computers are designed to operate in the band 0-30°C. Electronic components aren't usually a problem. Items such as terminals, display units, and hard disks are the weaknesses. As a result, the embedded designer must either do without them or else provide them with a protected environment – which can be a costly solution. When the requirements to withstand shock, vibration, and water penetration are added, the options narrow. For instance, the ideal way to reprogram a system might be to update the system using a flashcard. But if we can't use this technology because of environmental factors, then what?
Size and weight are two factors at the forefront in the minds of many embedded systems designers. For vehicle systems, such as automobiles, aircraft, armored fighting vehicles, and submarines, they may be the crucial factors. Not much to do with software, you may think. However, suppose a design requirement can only be met by using a single-chip micro (see Section 1.3.4, Single-Chip Microcomputers). Additionally, suppose that this device has only 256 bytes of random-access memory (RAM). So, how does that affect our choice of programming language?
The electrical environments of industrial and military systems are not easy to work in. Yet most systems are expected to cope with extensive power supply variations in a predictable manner. To handle problems like this, we may have to resort to defensive programming techniques (Chapter 2, The Search for Dependable Software). Program malfunction can result from electrical interference; again, defensive programming is needed to handle this. A further complicating factor in some systems is that the available power may be limited. This won't cause difficulties in small systems. But if your software needs 10 gigabytes of dynamic RAM to run in, the power system designers are going to face problems.
Let's now turn to the operational environmental aspects of embedded systems. Normally, we expect that when the power is turned on, the system starts up safely and correctly. It should do this every time and without any operator intervention. Conversely, when the power is turned off, the system should also behave safely. What we design for are "fit and forget" functions.
In many instances, embedded systems have long operational lives, perhaps between 10 and 30 years. Often, it is necessary to upgrade the equipment a number of times in its lifetime. So, the software itself will also need upgrading. This aspect of software, its maintenance, may well affect how we design it in the first place.
(b) Performance Aspects
Two particular performance factors are important here:
- How fast does a system respond?
- When it fails, what happens?
(i) The Speed of Response
All required responses are time-critical (although these may vary from microseconds to days). Therefore, the designer should predict the delivered performance of the embedded system. Unfortunately, even with the best will in the world, it may not be possible to give 100% guarantees. The situation is complicated because there are two distinct sides to this issue – both relating to the way tasks are processed by the computer.
Case one concerns the demands to run jobs at regular, predefined intervals. A typical application is that of closed-loop digital controllers having fixed, preset sampling rates. This we'll define to be a "synchronous" or "periodic" task event (synchronous with a real-time clock – Figure 1.14):
Case two occurs when the computer must respond to (generally) external events that occur at random ("asynchronous" or "aperiodic"). And the event must be serviced within a specific maximum time period. Where the computer handles only periodic events, response times can be determined reasonably well. This is also true where only one aperiodic event drives the system (a rare event), such as in Figure 1.15:
When the system has to cope with a number of asynchronous events, estimates are difficult to arrive at. However, by setting task priorities, good estimates of worst-case performance can be deduced (Figure 1.16). As shown here, task 1 has higher priority than task 2:
Where we get into trouble is in situations that involve a mixture of periodic and aperiodic events – which is the usual situation in real-time designs. Much thought and skill are needed to deal with the response requirements of periodic and aperiodic tasks (especially when using just one processor).
(ii) Failures and Their Effects
All systems go wrong at some time in their lives. It may be a transient condition or a hard failure; the cause may be hardware or software or a combination of both. It really doesn't matter; accept that it will happen. What we have to concern ourselves with are:
- The consequences of such faults and failures
- Why the problem(s) arose in the first place
Just because a system can tolerate faults without sustaining damage doesn't mean that such performance is acceptable. Nuisance tripping out of a large piece of plant, for instance, is not going to win many friends. All real-time software must, therefore, be designed in a professional manner to handle all foreseen problems, that is, "exception" handling (an exception is defined here to be an error or fault that produces program malfunction; see Chapter 2, The Search for Dependable Software. It may originate within the program itself or be due to external factors). If, on the other hand, software packages are bought in, their quality must be assessed. Regularly, claims are made concerning the benefits of using Windows operating systems in real-time applications. Yet users of such systems often experience unpredictable behavior, including total system hang-up. Could this really be trusted for plant control and similar applications?
In other situations, we may not be able to cope with unrectified system faults. Three options are open to us. In the first, where no recovery action is possible, the system is put into a fail-safe condition. In the second, the system keeps on working, but with reduced service. This may be achieved, say, by reducing response times or by servicing only the "good" elements of the system. Such systems are said to offer "graceful" degradation in their response characteristics. Finally, for fault-tolerant operations, full and safe performance is maintained in the presence of faults.
(c) Interfacing Aspects
The range of devices that interface to embedded computers is extensive. It includes sensors, actuators, motors, switches, display panels, serial communication links, parallel communication methods, analog-to-digital converters, digital-to-analog converters, voltage-to-frequency converters, pulse-width modulated controllers, and more. Signals may be analog (DC or AC) or digital; voltage, current, or frequency encoding methods may be used. In anything but the smallest systems, hardware size is dominated by the interfacing electronics. This has a profound effect on system design strategies concerning processor replication and exception handling.
When the processor itself is the major item in a system, fitting a backup to cope with failures is feasible and sensible. Using this same approach in an input-output (I/O) dominated system makes less sense (and introduces much complexity).
Conventional exception handling schemes are usually concerned with detecting internal (program) problems. These include stack overflow, array bound violations, and arithmetic overflow. However, for most real-time systems, a new range of problems has to be considered. These relate to factors such as sensor failure, illegal operator actions, program malfunction induced by external interference, and more. Detecting such faults is one thing; deciding what to do subsequently can be an even more difficult problem. Exception-handling strategies need careful design to prevent faults causing system or environmental damage (or worse – injury or death).
1.3 The Computing Elements of Real-Time Systems
In real-time systems, computing elements are destined for use in either general-purpose or specialized applications (Figure 1.17):
To use these effectively, the software designer should have a good understanding of their features. After all, what might be an excellent design solution for one application might be ghastly (or even unusable) in others.
1.3.2 General-Purpose Microprocessors
General-purpose microprocessors were originally the core building blocks of microcomputer systems. Although they are far less common nowadays, they form a good starting point for this topic.
By itself, the processor is only one element within the microprocessor system. To turn it into a computing machine, certain essential elements need to be added (Figure 1.18):
The program code itself is stored in memory, which, for embedded systems, must be retained on power down. That is, the memory must be "non-volatile." Older designs typically used ultraviolet-erasable programmable ROM (EPROM). The drawback to this device is that (normally) it must be removed from the computer for erasure and reprogramming. However, where in-circuit reprogramming is required, code is located in electrically erasable/programmable non-volatile storage, the alternatives being:
- Electrically erasable programmable ROM (EEPROM)
- Flash memory (a particular type of EEPROM technology)
- Ferroelectric random-access memory (FRAM)
Flash memory has, to a large extent, replaced EPROM in new designs.
When large production quantities are concerned, two approaches may be used:
- Mask-programmable devices
- One-time programmable ROM (OTPROM)
In the first case, the program is set in memory by the chip manufacturer; as such, it is unalterable. The second method is essentially an EPROM device without a light window. Nowadays, this market sector usually uses single-chip microcomputers rather than general-purpose ones.
All data that is subject to regular change is located in read/write random-access memory (a confusing term, as memory locations, for most devices, can be accessed randomly). This includes program variables, stack data, process descriptors, and dynamic data items.
The final element is the address decoder unit. Its function is to identify the element being accessed by the processor.
Taken together, these items form the heart of the microcomputer. However, to make it usable in real-time applications, extra elements need to be added. The key items are:
- Interrupt controllers
- Real-time clocks
- Hardware timers
- Watchdog timers
- Serial communication controllers
Items that should also be considered at the design stage include:
- Direct memory access (DMA) controllers
- I/O peripheral controllers (only where a large volume of data transfer is required)
These may be essential in some systems but not in others:
- Interrupt controllers:
As pointed out earlier, real-time systems must support both periodic and aperiodic tasks. In most designs, "guaranteed" response times are obtained by using interrupts.
- Real-time clock:
The function of the real-time clock is to provide a highly accurate record of elapsed time. It is normally used in conjunction with an interrupt function. Real-time clocks shouldn't be confused with calendar clocks (although they may be used for calendar functions). When an operating system is incorporated within the software, the clock acts as the basic timing element (the "tick").
- Hardware timers:
Accurate timing, especially that involving long time periods, cannot normally be done in software. Without the timing support of the tick in an operating system, hardware timers have to be used. Even when an operating system is used, these timers provide great flexibility. Generally, these are software programmable (Figure 1.19), both in terms of timing and modes of operation (for example, square-wave generation, "one-shot" pulse outputs, and retriggerable operations):
- Watchdog timers:
The purpose of the watchdog timer is to act as the last line of defense against program malfunction. It normally consists of a retriggerable monostable or one-shot timer, activated by a program write command (Figure 1.20). Each time the timer is signaled, it is retriggered, with the output staying in the "normal" state:
If, for any reason, it isn't retriggered, then a time-out occurs, and the output goes into the alarm condition. The usual course of action is then to generate a non-maskable interrupt (NMI), so setting a recovery program into action. In some instances, external warnings are also produced. In others, especially digital control systems, warnings are produced and the controller is then isolated from the controlled process.
Address decoding of the watchdog timer is, for critical systems, performed over all bits of the address. In these circumstances, the address is a unique one; hence retriggering by accident is virtually eliminated.
- Serial communication controllers:
Serial communication facilities are integral parts of many modern embedded systems. However, even where this isn't needed, it is worthwhile designing in a USB and/or an RS232-compatible communication channel. These can be used as major aids in the development and debugging of the application software.
- DMA controllers:
The DMA controller (Figure 1.21) is used where data has to be moved about quickly and/or in large amounts (data rates can exceed 1 gigabyte/sec):
DMA techniques are widely used in conjunction with bulk memory storage devices such as hard disks and compact disks. For many real-time systems, they are frequently used where high-speed serial communication links have to be supported.
In normal circumstances (that is, the "normal" mode of operation; see Figure 1.21 (a)), the controller acts just like any other slave device, being controlled by the processor. However, when a DMA request is generated by a peripheral device, control is taken over by the DMA controller (Figure 1.21 (b)). In this case, the micro is electrically disconnected from the rest of the system. Precise details of data transfer operations are usually programmed into the controller by the micro.
- I/O peripherals:
I/O peripherals are used either as controllers or as interfacing devices. When used as a controller, their function is to offload routine I/O processing, control, and high-speed transfer work from the processor itself (Figure 1.22):
One of the most common uses of such devices is to handle high-speed, large-volume data transfers to and from hard disk. They are especially useful in dealing with replicated memory storage units, as with redundant array of independent disks (RAID) technology. Other applications include intelligent bus, network, and communications interfacing.
The I/O controller's basic operation is similar to that of a DMA controller, but with two major differences. First, it can work cooperatively with the processor, using system resources when the processor is busy. Second, I/O processors are much more powerful than DMA devices. For example, the Intel i960 IOP includes (among other items) a high-speed parallel bus bridge, a specialized serial bus interface, internal DMA controllers, and a performance monitoring unit.
In other applications, I/O devices are used to provide compact, simple, and low-cost interfaces between the processor and peripheral equipment (Figure 1.23). Input/output pins are user-programmable to set up the desired connections to such equipment. These interface chips function as slave devices to the processing unit.
1.3.3 Highly Integrated Microprocessors
Highly integrated processors are those that contain many of the standard elements of a microcomputer system on a single chip. A typical example is the NXP MPC8240 integrated processor (Figure 1.24):
A comparison of Figure 1.18 and Figure 1.24 shows just what can be achieved on one chip (the MPC8240, for example, reduces the chip count from eight to one). Naturally, such processors are more expensive than the basic general-purpose device. However, the integration of many devices onto one chip usually reduces the overall system cost. Moreover, it makes a major impact on board-packing densities, which also reduces manufacturing and test costs. In short, these are highly suited for use in embedded systems design.
1.3.4 Single-Chip Microcomputers
With modern technology, complete microcomputers can be implemented on a single chip, eliminating the need for external components. Using the single-chip solution reduces the following:
- Package count
- Overall costs
One widely used device of this type is the 8052 microcomputer; a Microchip variant is shown in Figure 1.25.
By now, all the on-chip devices will be familiar. Note that the interfacing to the outside world may be carried out through the I/O port subsystem. This is a highly flexible structure that, in smaller systems, minimizes the component count. However, with only 8 Kbytes of ROM and 256 bytes of RAM, it is clearly intended for use in small systems (the memory size can, of course, be extended by using external devices).
1.3.5 Single-Chip Microcontrollers
Microcontrollers are derivatives of microcomputers but aimed specifically at the embedded control market (though the boundary between the two is becoming somewhat blurred). Like single-chip microcomputers, they are designed to provide all the necessary computing functions in a single package. Broadly speaking, there are two categories: general-purpose (sector-independent) and sector-specific. These differ only in the actual internal devices included on the chip. For sector-specific units, the on-chip devices are chosen to provide support specifically for that sector. In particular, they try to provide all required functionality on the chip, so minimizing the need for (extra) external hardware.
An example of such a device, aimed at automotive body electronic applications, is shown in Figure 1.26, the STMicroelectronics SPC560 series chip:
Like many modern microcontrollers, it contains an impressive set of functions:
- Up to 512 Kbytes Code Flash, with error-correcting code (ECC)
- 64 Kbytes Data Flash, with error-correcting code
- Up to 48 Kbytes SRAM, with error-correcting code
- Memory protection unit (MPU)
- Up to 24 external interrupts
- Between 45 and 123 general-purpose I/O pins, depending on the IC package type
- 6-channel periodic interrupt timers
- 4-channel system timer module
- Software watchdog timer
- Real-time clock timer
- Up to 56 counter/timer-triggered I/O channels
Communications interfaces:
- Up to 6 CAN network interfaces
- 4 LIN network interfaces
- Others: Serial Peripheral (SPI) and I2C Interfaces
- Up to 36-channel, 10-bit ADC
You might have noticed that the diagram of Figure 1.26 doesn't show any connections to the outside world. There is a simple reason for this. Although the device has many, many functions, not all of these may be accessed simultaneously. In practice, what you can actually use at any one time is limited by the package pin count. The SPC560 series, for example, comes in a number of chip sizes, including 64, 100, and 144 pin types. In many cases, several functions are provided on an individual pin, accessed as a shared (multiplexed) resource. Clearly, such functions are available only in a mutually exclusive way.
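Pin multiplexing of this kind is normally controlled through per-pad configuration registers: each pad carries a small "alternate function" field selecting which of the shared on-chip functions drives the pin. The C sketch below shows the general mechanism only – the field position and width used here are hypothetical, and on the SPC560 the actual layout is defined by its pad configuration registers, so consult the device reference manual:

```c
#include <stdint.h>

/* Hypothetical 2-bit alternate-function field within a 16-bit
   pad configuration register (positions are illustrative only). */
#define PAD_FUNC_SHIFT 10u
#define PAD_FUNC_MASK  (0x3u << PAD_FUNC_SHIFT)

/* Return the register value with shared function 'func' (0..3)
   selected for this pad; all other configuration bits are kept. */
uint16_t pad_select_function(uint16_t pcr, uint8_t func)
{
    pcr &= (uint16_t)~PAD_FUNC_MASK;                         /* clear old selection */
    pcr |= (uint16_t)(((uint16_t)(func & 0x3u)) << PAD_FUNC_SHIFT);
    return pcr;
}
```

Because a pad has exactly one function field, writing a new selection automatically deselects the old one – which is precisely why multiplexed functions are mutually exclusive.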
1.3.6 Digital Signal Processors
There are numerous applications that need to process analog signals very quickly. These include instrumentation, speech processing, telecommunications, radar, sonar, and control systems. In the past, such processing was done using analog techniques. However, because of the disadvantages of analog processors (filters), designers have, where possible, moved to digital techniques. Central to this is the use of digital filtering calculations, typified by sets of multiply and add (accumulate) instructions (the so-called "sum of products" computation). The important characteristics of such systems are that they:
- Have extremely high throughputs
- Are optimized for numerical operations
- Employ a small number of repetitive numerical calculations
- Are usually low-cost
These needs have, for some time now, been met by a device specialized for such work: the digital signal processor (DSP).
To achieve high processing speeds, the basic computing engine is organized around a high-speed multiplier/accumulator combination (Figure 1.27). In these designs, the Von Neumann structure is replaced by the Harvard architecture, having separate paths for instruction and data. The system form shown in Figure 1.27 is fairly typical of DSPs. Obviously, specific details vary from processor to processor; you can refer to http://www.ti.com/processors/dsp/overview.html for further information.
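The "sum of products" computation that these architectures accelerate can be written in plain C as a multiply-accumulate loop. This is a minimal sketch – on a real DSP, the compiler (or hand-written intrinsics/assembly) maps the loop onto the hardware multiplier/accumulator of Figure 1.27, often executing one tap per cycle:

```c
#include <stdint.h>
#include <stddef.h>

/* FIR-style sum of products: y = sum over k of coeff[k] * sample[k].
   A 32-bit accumulator holds the 16x16-bit products without
   overflow for realistic filter lengths. */
int32_t fir_mac(const int16_t *coeff, const int16_t *sample, size_t n)
{
    int32_t acc = 0;
    for (size_t k = 0; k < n; k++)
        acc += (int32_t)coeff[k] * (int32_t)sample[k];  /* multiply-accumulate */
    return acc;
}
```

With the Harvard architecture's separate instruction and data paths, the coefficient fetch, sample fetch, and MAC operation can all proceed in parallel – the source of the DSP's throughput advantage.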
Programming DSPs is a demanding task, especially working at an assembly language level. The instruction sets are carefully chosen to perform fast, efficient, and effective arithmetic. Among those instructions are ones that invoke complex multipurpose operations. Added to this is the need to produce compact and efficient code if the whole program is to fit into the on-chip ROM. And finally, there is the need to handle extensive fixed-point computations without running into overflow problems.
It used to be said that in fixed-point DSP programming, "90% of the effort goes into worrying about where the decimal point is." Fortunately, this is much less of a problem nowadays as word lengths of 32 or 64 bits are commonplace.
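To illustrate the fixed-point issue, here is a minimal Q15 multiply in C (Q15 being a common 16-bit DSP format, though the helper itself is our own sketch): two fractional operands in the range [-1, 1) produce a 32-bit Q30 product, which must be shifted back to Q15 and saturated to avoid overflow – the classic problem case being -1 × -1 = +1, which is not representable in Q15:

```c
#include <stdint.h>

/* Q15 fixed-point multiply: operands are fractions scaled by 2^15.
   The 32-bit product is Q30; an arithmetic right shift by 15
   returns it to Q15, and the result is saturated to the 16-bit
   range (needed for the -1 * -1 case). */
int16_t q15_mul(int16_t a, int16_t b)
{
    int32_t p = ((int32_t)a * (int32_t)b) >> 15;  /* Q30 -> Q15 */
    if (p > INT16_MAX) p = INT16_MAX;             /* saturate high */
    if (p < INT16_MIN) p = INT16_MIN;             /* saturate low  */
    return (int16_t)p;
}
```

Every multiply in a fixed-point filter needs this kind of scaling and saturation discipline – exactly the bookkeeping that wide word lengths (and floating-point units) now largely remove.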
A final point: the classical DSP processor is being challenged by "conventional" processors that include DSP instructions. One such example is the ARM NEON SIMD (single-instruction, multiple-data) architecture extension for its Cortex series processors.
You can find out more at https://developer.arm.com/technologies/dsp.
1.3.7 Mixed-Signal Processors
Mixed-signal processors, as their name suggests, are designed to interface simultaneously to analog and digital components. The Texas MSP430, for example (Figure 1.28, the G2 variant), is aimed at battery-powered applications (such as multimeters and intelligent sensing) where low power consumption is paramount. Quoted figures (typical) for power requirements are:
- Active: 230 μA at 1 MHz, 2.2 V
- Standby: 0.5 μA
- Off mode (RAM retention): 0.1 μA
Work like this could be done by a standard microcontroller, but that is a relatively costly solution. Hence, mixed-signal processors are optimized for use in low-cost, high-volume products.
1.3.8 System-On-Chip Designs – Overview
A system-on-chip (SOC) device is an integrated circuit (IC) that integrates all components of a microcomputer or microcontroller (or other electronic system) into a single chip. Thus the Microchip AT89C52 (Figure 1.25) and the MSP430 (Figure 1.28) are, in fact, SOC devices. ICs like these have capabilities designed for general use within specific sectors (for example, the 89C52 for embedded controller applications and the MSP430 for metering systems). Their place in the SOC technology range is shown in Figure 1.29 (please note that this is a simplified view of the topic, being limited to the more important device types):
One of the key aspects here is that their hardware functionality is fixed by the manufacturer; it cannot be changed by the user. What these devices also have in common is that the processors themselves are company-specific. However, in the past few years, there has been a major trend by chip designers to buy in processor designs. Such components are usually called virtual components, virtual cores (VCs), or intellectual property (IP) cores. Probably the most important company in this area is ARM (especially in the 32 and 64-bit field); you'll find their "products" incorporated in many, many SOC devices.
One drawback to using general-purpose SOC ICs is that designs may have to be extensively tailored to meet specific application needs. Other factors may also be important, for example, power consumption, temperature range, radiation hardness, and so on. Such needs can be met by using bespoke SOC devices, these essentially being specialized single-chip application-specific designs.
Now, application-specific integrated circuit (ASIC) technology is not new. Electronic engineers have been using it for many years to produce specialized devices (for example, ICs for advanced signal processing, image stabilization, and digital filtering). Design is performed typically using computer-aided design (CAD) methods based on very high-level description language (VHDL) programming. SOC design methods are fundamentally the same but the implementations are much more complex. In particular, they incorporate microprocessor(s) and memory devices to form full on-chip microcomputers or microcontrollers. Applications include digital cameras, wireless communication, engine management, specialized peripherals, and complex signal processing. Typical of this technology is the Snapdragon SOC suite from Qualcomm Inc, intended for use in mobile devices (https://www.qualcomm.com/products/snapdragon).
Figure 1.30 is a representative structure of an SOC unit; though, by definition, there are many variations of such structures:
The subsystems shown here fall into two groups, custom and others. Anything designed by the chip designer is labeled custom; the others represent bought-in items (for example, a microprocessor, RAM, or ROM). Of course, because this is chip fabrication, we cannot simply plug such items into the chip. What happens typically is that VHDL descriptions of the components are used within the overall design process. The end result of the design process is a manufacturing file that is then sent to a chip manufacturer.
1.3.9 Programmable SOCs – FPGA-Embedded Processors
The customized SOC technology described previously has some major drawbacks. First, having a specialized chip manufactured can be quite costly. Second, it isn't exactly a fast process; typically, the manufacturing process takes 6 to 8 weeks, done in highly specialized semiconductor fabrication plants. Third, modifying the design and producing a new chip is both costly and time-consuming. Fourth, it isn't suitable for low-volume product production because of the costs involved. Fortunately, for many applications, these obstacles can be overcome by using programmable SOC (PSOC) technology. And one of the most important devices here is the Field Programmable Gate Array (FPGA).
An FPGA is a general-purpose digital device that consists of sets of electronic building blocks. You, the designer, set the functionality of the FPGA by configuring these blocks, typically using VHDL design methods. This, which requires specialized design knowledge, is normally done by hardware engineers. Software engineers haven't generally concerned themselves with the detailed aspects of FPGAs, treating them merely as peripheral devices. However, things have changed as a result of FPGA chip manufacturers embedding silicon cores into their devices, the FPGA-embedded processor. As a result, we have some very compelling reasons to go down the FPGA route, such as:
- Producing custom products at a reasonable cost
- Minimizing component count (especially important when size is constrained)
- Maximizing performance by being able to make trade-offs between software and hardware
An example of this technology is the Intel Nios processor (Figure 1.31). Its use in an application is shown in Figure 1.32, where it forms part of a Cyclone V FPGA:
Here, the overall functionality of the device is split between hardware and software. For devices like these, all programming may be done in C: standard C for software and SystemC (or its equivalent) for the hardware. Such an approach has two significant benefits:
- It's much easier to get hold of C programmers than VHDL designers.
- Algorithms, and so on, coded in software can be readily transferred to hardware (by making only minor changes to the C code and then recompiling using SystemC).
And remember, the device functionality can always be modified without us having to make actual physical hardware changes. Thus devices are not only programmable, they're also reprogrammable.
1.3.10 SOC Devices – Single and Multicore
A single-core device is defined to be a chip that contains just one CPU (thus all conventional microprocessors can be considered to be examples of single-core designs). However, SOC technology has given rise to devices that consist of two or more cores, called multicore chips. This structure is now a very important feature of high-performance microcontrollers; at present, the claimed record for the greatest number of cores on a single chip is 100 ARM CPUs, made by the company EZchip (www.tilera.com).
Figure 1.33 shows the makeup of a typical small multicore-based SOC integrated circuit. Here, the multicore processor consists of two CPUs together with key processor-related devices: interrupt management, memory, and timers. This processor is embedded within a single chip microcontroller that also includes various real-world devices:
From a hardware perspective, multicore processors come in two forms: symmetric and asymmetric. Essentially, with a symmetric multiprocessor design, all the processing units are identical; with asymmetric multiprocessors, the units differ.
An example of an embedded multicore symmetric multiprocessor is the Arm Cortex-A9 (Figure 1.34, which gives a simplified description of its key features). This has four identical processing units (cores), each one consisting of a CPU, hardware accelerator, debug interface, and cache memory. It can be seen that several on-chip resources are shared by all the processing units. From a software perspective, the device can be used in two ways. First, each core can be allocated specific tasks, and hence is considered to be a dedicated resource. Second, any core can run any task, thus being treated as an anonymous resource:
An example of an asymmetric multicore multiprocessor is the Texas TMS320DM6443 (Figure 1.35):
In this device, there are two distinct processing units, one for general-purpose computing and the other for DSP.
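The "dedicated resource" model described above can be sketched as a fixed, build-time binding of tasks to cores. The task names and core assignments below are invented purely for illustration; in the "anonymous" model no such table exists, and the scheduler is free to run any ready task on any free core:

```c
/* Dedicated-core allocation: each task is permanently bound to one
   core via a build-time table (names and assignments illustrative). */
enum task { TASK_CONTROL, TASK_COMMS, TASK_LOGGING, TASK_HMI, NUM_TASKS };

static const int core_of_task[NUM_TASKS] = {
    [TASK_CONTROL] = 0,   /* time-critical control loop: always core 0 */
    [TASK_COMMS]   = 1,   /* network/CAN traffic                       */
    [TASK_LOGGING] = 2,   /* background data recording                 */
    [TASK_HMI]     = 3,   /* operator display                          */
};

/* Look up the core on which a task is allowed to run. */
int dedicated_core(enum task t)
{
    return core_of_task[t];
}
```

The attraction of the dedicated model is predictability (a time-critical task never competes for its core); the attraction of the anonymous model is load balancing and fault tolerance.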
One final small point: cores are defined to be "hard" or "soft." Where the actual silicon of a processor is used as the core, it is said to be hard. But when the core is implemented using design file data, then it is a soft one.
1.4 Software for Real-Time Applications – Some General Comments
In later chapters, the total design process for real-time systems is described. We'll be looking for answers to questions such as:
- What truly needs to be done?
- How should we specify these needs?
- How can we ensure that we're doing the right job (satisfying the system requirements)?
- How can we make sure that we're doing the job right (performing the design correctly)?
- How can we test the resulting designs for correctness, performance, and errors?
- How do we get programs to work correctly in the target system itself?
Before doing this, let's consider some general problems met in real-time systems work and also dispel a few software myths. And a useful word of advice: don't believe everything you read – question any unsupported assertions (even in this book).
The following quotations have been made (in print) by experienced software practitioners.
"For real-time systems ... programs tend to be large, often in the order of tens of thousands or even of hundreds of thousands of lines of code." This generalization is wrong, especially for deeply embedded systems. Here, programs are frequently small, having object code sizes in the range of 2-64 kBytes (however, two factors, in particular, tend to bloat embedded code: support for highly-interactive graphical user interfaces and support for internet communication protocols). It is a big mistake (one that is frequently made) to apply the rules of large systems to small ones.
"At the specification stage ... all the functional requirements, performance requirements, and design constraints must be specified." In the world of real-time system design, this is an illusion. Ideas like these have come about mostly from the DP world. There, systems such as stock control, accounting, management reporting methods, and the like can be specified in their entirety before software design commences. In contrast, specifications for real-time systems tend to follow an evolutionary development. We may start with an apparently clear set of requirements. At a much later stage (usually, sometime after the software has been delivered), the final, clear, but quite different specifications are agreed.
"Software costs dominate ...." This is rarely true for embedded systems. It all depends on the size of the job, the role of software within the total system, and the number of items to be manufactured.
"Software is one of the most complex things known to man ... Hundreds of man-years to develop system XXX." Well, yes, software is complex. But let's not go overboard about it. Just consider that the development of a new nuclear propulsion for submarines took more than 5,000 man-years (at a very conservative estimate). And it involved large teams, skilled in many engineering disciplines, and based at various geographically separate sites. Is this an "easy" task compared with software development?
"Software, by its nature, is inherently unreliable." I think the assumption behind this is that software is a product of thought, and isn't bounded by natural physical laws. Therefore, there is a much greater chance of making mistakes. This is rather like saying that as circuit theory underpins electronic design, hardware designs are intrinsically less prone to errors. Not so. Delivered hardware is generally free of fault because design, development, and manufacture is (or should be) rigorous, formal, and systematic. By contrast, software has for far too long been developed in a sloppy manner in cottage industry style. The industry (especially the industrial embedded world) has a lack of design formality, has rarely used software design tools, and has almost completely ignored the use of documentation and configuration control mechanisms. Look out for the classic hacker comment: "my code is the design."
The final point for consideration concerns the knowledge and background needed by embedded systems software designers. In the early days of microprocessor systems, there was an intimate bond between hardware and software. It was (and, in many cases, still is) essential to have a very good understanding of the hardware, especially for I/O activities. Unfortunately, in recent years, a gulf has developed between hardware and software engineers. As a result, it is increasingly difficult to find engineers with the ability to bridge this gap effectively. Moreover, larger and larger jobs are being implemented using microprocessors. Allied to this has been an explosion in the use of software-based real-time systems. As a result, more and more software is being developed by people who have little knowledge of hardware or systems. We may not like the situation, but that's the way it is. To cope with this, there has been a change in software design methodologies. Now the design philosophy is to provide a "software base" for handling hardware and system-specific tasks. This is sometimes called "foundation" or "service" software. Programmers can then build their application programs on the foundation software, needing only a minimal understanding of the system hardware. The greatest impact of this has been in the area of real-time operating systems.
1.5 Review
You should now:
- Clearly understand the important features of real-time systems
- Know what sets them apart from batch and interactive applications
- See how real-time systems may be categorized in terms of speed and criticality
- Have a general understanding of the range of real-time (and especially embedded) applications
- Realize that environmental and performance factors are key drivers in real-time systems design
- Know the basic component parts of real-time computer units
- Appreciate the essential differences between microprocessors, microcomputers, and microcontrollers
- Realize why there is a large market for specialized processors
1.6 Useful Reading Material
- Advanced HW/SW Embedded System for Designers 2019, Lennart Lindh, Tommy Klevin, and Mia Lindh: https://www.amazon.in/Advanced-Embedded-System-Designers-2017-ebook/dp/B077T9V3V8
- DSP benchmarking suite: http://www.bdti.com/procsum/index.htm
- EIA standard 232: Interface between data terminal equipment and data communication equipment employing serial binary data interchange, Electronic Industries Association, 1969: https://en.wikipedia.org/wiki/RS-232#Related_standards
- PC/104 embedded solutions: www.pc104.org
- SystemC tutorial: https://www.doulos.com/knowhow/systemc/tutorial/
- The Designer's Guide to VHDL, Peter Ashenden, Morgan Kaufmann, ISBN 1558606742: https://www.amazon.co.uk/Designers-Guide-VHDL-Systems-Silicon/dp/0120887851
- The Mythical Man-Month, Frederick P. Brooks Jr, Addison-Wesley: https://www.amazon.co.uk/Mythical-Man-Month-Software-Engineering-Anniversary/dp/0201835959
- VME – Versa Module Europa, IEEE Std. 1014-1987: www.vmebus-systems.com