The outer limits of embedded systems
Embedded systems are becoming increasingly sophisticated, as their use on the Mars rovers and at the South Pole attests. And they’re becoming more integrated with larger information technology systems.
Think writing an artificial intelligence program is hard? Try writing one that works on a 20 MHz processor and runs a NASA rover scuttling across Mars.
The rovers were already moving about on the Red Planet on their own because sending move-by-move directions from Earth would take too long. But their autonomous navigation software kept getting the units trapped in cul-de-sacs and other locations that were tricky to escape from.
To boost the machine’s driving skills, NASA’s Jet Propulsion Laboratory, which runs the rover space mission, turned to software developed by Carnegie Mellon University research professor Tony Stentz and his student Dave Ferguson.
The program, called Field D*, was developed with funding from the Army Research Laboratory and the Defense Advanced Research Projects Agency. The software essentially builds a large-scale map of the terrain it encounters so that when a self-guiding vehicle needs to back out of a dead end, it can easily do so by retracing its steps.
However, the software’s primary selling point was its ability to run within the speed and memory constraints of a 1990-era PC. In addition to doing its job, the software had to be what Stentz called a computationally efficient algorithm, meaning the developers had to ensure the code used only minimal computer resources.
As it happened, writing efficiently also made the program more robust. Field D* is an incremental algorithm, Stentz said: Rather than computing each new plan from scratch, it saves partial results and recycles them to repair a previously generated plan.
“Given a CPU speed, how fast can we replan?” he asked. That’s an important question because “if we can replan more quickly, our robot would be more responsive to the environment.”
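Field D* interpolates path costs across grid-cell boundaries and is far more involved than anything that fits here, but the recycle-and-repair idea behind Stentz’s answer can be sketched in miniature. The C sketch below, with invented names and a uniform 4-connected grid, caches a cost-to-goal table; when a new obstacle turns up, it throws away only the entries whose cached paths ran through the blocked cell and recomputes just those, reusing everything else.

```c
/*
 * Minimal sketch of incremental replanning, in the spirit of (but far
 * simpler than) Field D*.  Costs-to-goal are cached; when a cell turns
 * out to be blocked, only cells whose cached paths ran through it are
 * invalidated and recomputed.  Names and grid setup are illustrative.
 */
#include <stdio.h>
#include <limits.h>

#define W 8
#define H 8
#define INF INT_MAX

static int blocked[H][W];   /* 1 = obstacle                     */
static int cost[H][W];      /* cached cost-to-goal, INF = stale */
static int par[H][W];       /* parent cell as y*W+x, -1 = none  */

static const int dx[4] = { 1, -1, 0, 0 };
static const int dy[4] = { 0, 0, 1, -1 };

/* Dijkstra seeded with every cell that still holds a valid (finite)
 * cached cost; only stale (INF) cells actually get recomputed. */
static void repair(void)
{
    int done[H][W] = { 0 };
    for (;;) {
        int bx = -1, by = -1, best = INF;
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                if (!done[y][x] && cost[y][x] < best) {
                    best = cost[y][x]; bx = x; by = y;
                }
        if (bx < 0)                     /* nothing reachable remains */
            break;
        done[by][bx] = 1;
        for (int k = 0; k < 4; k++) {
            int nx = bx + dx[k], ny = by + dy[k];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H || blocked[ny][nx])
                continue;
            if (cost[by][bx] + 1 < cost[ny][nx]) {
                cost[ny][nx] = cost[by][bx] + 1;
                par[ny][nx]  = by * W + bx;
            }
        }
    }
}

/* Plan from scratch: every cell stale except the goal itself. */
static void plan(int gx, int gy)
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) { cost[y][x] = INF; par[y][x] = -1; }
    cost[gy][gx] = 0;
    repair();
}

/* A newly discovered obstacle: invalidate the cell and every cell whose
 * cached path descends from it, then repair only that region. */
static void block_cell(int bx, int by)
{
    blocked[by][bx] = 1;
    cost[by][bx] = INF;
    par[by][bx] = -1;
    for (int changed = 1; changed; ) {  /* peel off the stale subtree */
        changed = 0;
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                int p = par[y][x];
                if (p >= 0 && cost[p / W][p % W] == INF) {
                    cost[y][x] = INF; par[y][x] = -1; changed = 1;
                }
            }
    }
    repair();
}

int main(void)
{
    plan(7, 7);                         /* goal in the far corner       */
    printf("cost from (0,0): %d\n", cost[0][0]);
    block_cell(3, 0);                   /* obstacle found while driving */
    printf("after repair:    %d\n", cost[0][0]);
    return 0;
}
```

After a small change to the map, most of the old plan survives, which is why an incremental planner can answer Stentz’s replanning question so much faster than a from-scratch search.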
Welcome to the world of embedded computing. Thanks to the plummeting cost of microprocessors, computing is no longer something that only takes place on a desktop PC or at a data center. It now happens in automobiles, Global Positioning System receivers, identification cards and even outer space.
But unlike today’s powerful servers, embedded systems must operate on limited resources — small processors, tiny memory and low power. “With embedded, it is about doing as much as you can with as little as you can,” said Rob Oshana, director of engineering at Freescale Semiconductor.
The ways in which embedded-system developers have learned to work within those limitations offer lessons for all system builders and managers.
Embedded everywhere
Embedded systems are nothing new. Most cars built in the past 10 years have tiny networks of low-power computing systems to control brakes and manage the engine. And cell phones and digital cameras now pack more processing power than the previous decade’s desktop PCs.
Although originally designed for interacting with the real world, such systems are increasingly feeding information into larger information technology systems, said Wayne Wolf, Rhesa Farmer Distinguished Chair of Embedded Computing Systems at the Georgia Institute of Technology and author of the recently published “Computers as Components: Principles of Embedded Computing System Design.”
“What we’re starting to see now is [the emergence] of what the National Science Foundation is calling cyber-physical systems,” Wolf said. “Embedded systems are not IT systems. They are signal processing and control systems. But as we build more complex systems, we need to figure out ways of marrying those two worlds.”
The two worlds remain different in many ways. Embedded systems must work under extreme conditions, such as harsh climates, and often operate a long way from data centers.
Perhaps no example is more extreme than IceCube, an NSF-funded project the University of Wisconsin is conducting at the South Pole.
The project’s scientists want to record cosmic neutrinos as they speed through the Earth, and one of the best places to detect them is in the clear expanses of Antarctic polar ice.
At the O’Reilly Open Source Convention in Portland, Ore., in July, University of Wisconsin researcher Dave Glowacki and John Jacobsen, head of NPX Designs, explained how they are building a network of 5,000 surface and under-ice neutrino detectors buried thousands of feet down.
In essence, the detectors are embedded systems. The team drills a hole one to two kilometers into the ice and inserts the detector. Each detector is built around a motherboard with an Altera Excalibur chip, which combines a field-programmable gate array (FPGA) and a CPU on a single die.
The code that runs the device, written in the C programming language, “is rock-solid stuff,” Glowacki said. “If it breaks, we won’t be able to go down and press the reset button.”
An application running on each digital optical module, as the detectors are called, monitors the module’s health and configuration and packages the data for transmission. Communication with the surface runs over twisted-pair copper cables, Jacobsen said. The power requirement is low: Each detector consumes 89 milliamps from a 96-volt feed, or roughly 8.5 watts.
Back up on the surface, Linux servers manage the acquisition and distribution of the data. Java modules handle the data, while the overall control functions are written in Python.
Hardware choices
When it comes to hardware, embedded-system designers have a wide array of options. In terms of processors, they can choose from microcontrollers, digital signal processors, FPGAs, ARM chips and low-power general-purpose x86 processors.
Each processor answers a specific set of needs. In a tutorial at the Embedded Systems Conference (ESC) in Boston in October, Oshana did a quick rundown of the best uses of each.
When all you need is basic control and monitoring, a simple low-cost processor called a microcontroller will do the trick. An FPGA, on the other hand, is great for high-speed processing of data. And digital signal processors (DSPs) are good for processing digital data, especially when little power is available.
Each type of processor has its own characteristics, and the trick of choosing the right one involves knowing the limitations you are dealing with. Do you want to go for performance or keep the price low? Do you need to hold power requirements to a minimum? Do you need to design and build the system as quickly as possible?
General-purpose processors are the easiest to program for, but they tend to be power-hungry. They can also be slower than an FPGA, which you can tune to the application itself. A DSP could be used for filtering signals, while an ARM processor is better suited to file management.
“A lot of systems are heterogeneous,” Wolf said, meaning that they use more than one type of processor.
Oshana said most signal-processing systems involve an FPGA, DSP and general-purpose CPU. The trick is engineering the right combination.
And processors aren’t the only choice embedded-system designers have to make. They must also decide what kind of memory to use. Most embedded systems are too small, and must be too rugged, to rely on hard drives, so flash memory works best for holding programs, data and even scratch space for running programs.
At ESC, Bill Stafford and Mike McClimans, marketing executives at Numonyx, discussed how different forms of flash memory suit different needs. In a nutshell, there are two types of nonvolatile flash memory to choose from: NOR and NAND. NOR memory is smaller, ranging from kilobits to 128 megabits, while NAND can be as large as 8 gigabits. NOR also has a longer life span, lasting five years or more, but it has no error-correction capabilities; if you need to ensure data integrity, NAND is the way to go.
On the software front
Getting the best hardware is only the beginning of the battle. As the Mars rovers demonstrated, embedded applications must be tailored to work with little memory and slow processors.
Simply examining the code and figuring out how to execute the necessary work efficiently can go a long way toward saving resources or ensuring that a piece of code can run on a small-scale system.
During his presentation, Oshana showed how to revise a piece of code so that it executed in 288 processor cycles instead of 521. The trick is to understand the architecture of the processor you are working with and to write the program to fit comfortably within its constraints.
For example, a developer should know the processor’s native word size, a word being the natural number of bits a processor can ingest at once. Additional cycles are needed whenever the data size exceeds the word size. The same goes for registers: Try not to declare more variables than there are registers. Otherwise, the processor will have to spill the extra variables to the stack, and loading and storing them will cost more cycles.
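As a rough illustration of the word-size point (a generic C sketch, not code from the conference talks), compare clearing a buffer one byte at a time with clearing it one native word at a time. On a 32-bit processor, and assuming the buffer is word-aligned, the second loop issues a quarter of the stores:

```c
#include <stddef.h>
#include <stdint.h>

/* One 8-bit store per iteration: four times as many memory
 * operations as necessary on a 32-bit processor. */
void clear_bytes(uint8_t *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        buf[i] = 0;
}

/* One 32-bit store per iteration: matches the native word size.
 * Assumes buf is word-aligned and n_bytes is a multiple of four. */
void clear_words(uint32_t *buf, size_t n_bytes)
{
    for (size_t i = 0; i < n_bytes / 4; i++)
        buf[i] = 0;
}
```

Optimizing compilers will often widen such loops on their own, but on the small toolchains common in embedded work it still pays to write the intent explicitly.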
The way loops are structured is also important. In another ESC presentation, Analog Devices software engineer Hazarathaiah Malepati showed how to cut the number of cycles needed to encrypt data with the Data Encryption Standard on the company’s Blackfin processor. Encrypting a block of data typically takes 4,288 cycles, but that number drops to 896 when the work is spread across the processor’s multiple arithmetic logic units in parallel rather than sent serially through a single ALU.
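The DES kernel itself is too long to reproduce, but the principle scales down. In this generic C sketch (not the Blackfin code from the talk), splitting one long dependence chain into two independent chains lets a processor with two ALUs retire two additions per cycle instead of one:

```c
#include <stddef.h>
#include <stdint.h>

/* Serial version: each addition must wait for the previous one,
 * so only one ALU is ever busy. */
uint32_t sum_serial(const uint32_t *a, size_t n)
{
    uint32_t s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Unrolled version: two independent accumulators create two
 * dependence chains that separate ALUs can work on in parallel.
 * Assumes n is even, for brevity. */
uint32_t sum_unrolled(const uint32_t *a, size_t n)
{
    uint32_t s0 = 0, s1 = 0;
    for (size_t i = 0; i < n; i += 2) {
        s0 += a[i];
        s1 += a[i + 1];
    }
    return s0 + s1;
}
```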
There are limits to that approach, of course. Oshana warned of premature optimization, or spending too much time rewriting code that won’t get called on all that often. He offered a general rule: 80 percent of performance improvement will come from improving 20 percent of the code.
Another major consideration is power consumption. The average desktop computer or server has a stable source of power, but many embedded devices are not so lucky. They must rely on battery power or some other low-power source. In those cases, keeping power usage to a minimum is critical.
Oshana offered a number of tips for reducing power use. He suggested using processors and other components that the operating system can easily put into a low-voltage sleep mode. If performance is not a central need, you can also take advantage of chips that let you lower the voltage or clock speed for further power savings.
Oshana also offered some tips on reducing power consumption with software design. Whenever possible, he said, try to shift data among components using direct memory access rather than using a power-hungry CPU. Reduce the amount of space the program takes up in memory, and try to move the data as close to the CPU as possible, he added.
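A minimal sketch of the DMA tip, assuming a hypothetical memory-mapped DMA controller. The register names and addresses below are invented for illustration; every real chip defines its own, so consult the part’s datasheet:

```c
#include <stdint.h>

/* Hypothetical DMA controller registers -- invented addresses. */
#define DMA_SRC   (*(volatile uint32_t *)0x40001000u)
#define DMA_DST   (*(volatile uint32_t *)0x40001004u)
#define DMA_LEN   (*(volatile uint32_t *)0x40001008u)
#define DMA_CTRL  (*(volatile uint32_t *)0x4000100Cu)
#define DMA_START 0x1u
#define DMA_BUSY  0x2u

/* Placeholder for the chip's low-power idle instruction,
 * e.g. WFI on an ARM core. */
static void cpu_idle(void) { }

/* Copy a buffer without touching it from the CPU: program the
 * controller, then idle until the hardware finishes the move. */
void copy_via_dma(uint32_t src, uint32_t dst, uint32_t len)
{
    DMA_SRC  = src;
    DMA_DST  = dst;
    DMA_LEN  = len;
    DMA_CTRL = DMA_START;
    while (DMA_CTRL & DMA_BUSY)
        cpu_idle();          /* CPU sleeps while data moves */
}
```

While the controller moves the data, the core can sit in a sleep state, which combines this tip with the earlier one about low-voltage idle modes.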
Although embedded electronics and standard IT systems have traditionally been two separate worlds, they are looking increasingly similar. As Moore’s law continues to drive processor power, the programs written to run on those processors will grow to millions of lines of code, the size of full-scale applications. And networking enhancements will continue to bring embedded and IT systems closer together.
Embedded-system developers and IT managers might have more in common than they imagine — and a lot they could learn from one another.