The growth of wearable electronics—whether for biometrics, communication, or augmented reality—extends the concept of an embedded system into new, unexplored territory. Putting sensors and output devices on or in the human operator conjures up the coined word cyborg: a merging of human faculties with embedded systems.
Just as wearables open new vistas for applications, they demand new perspectives in embedded architecture. A cluster of sensors in an adhesive patch or an ingestible lozenge is completely isolated from conventional power, ground, and I/O connections. Yet to achieve tiny size and near-zero energy consumption, that little cluster may be more than conventionally dependent on local signal processing, storage, and wireless connectivity. It is a paradox architects must untangle.
One way to think about the challenge of wearable systems is to imagine a conventional embedded-system design that includes sensors, actuators, and displays attached to the user’s body. Then by stages pull that system apart under the strains of mobility, convenience, and inconspicuousness. As the sensors, output devices, and computing resources become physically separated from one another, watch what happens to the system architecture.
As an example, consider a design for smart glasses. To avoid being trite, we will pass by the familiar consumer version and look at a pair of glasses designed by industrial-equipment vendor XOEye. These glasses are intended for parts inspection, inventory taking, in-field maintenance, and such activities. With stereo-mounted 720-line video cameras, voice input, and both LED and speech output, the system is designed to interactively assist its human user in specific pre-defined tasks.
XOEye chief technical officer Jon Sharp explains that the glasses, by capturing and analyzing stereo images of what the user is seeing, can improve identification of parts, measure dimensions and shapes without physical contact or measuring tools, step a technician interactively through a repair procedure—“Adjust the left screw first”—or warn of an impending safety hazard—flashing red LED; “No, don’t reach in there!”
The conventional approach to such a design would include cameras and microphones mounted on the glasses, plus video processing, object recognition, and a wireless communications link in a backpack along with a substantial battery. The conventional user response to this design would be to take one look at the backpack and politely decline to use the system.
Enter the concept of wearable tech. XOEye’s approach is to keep the glasses completely self-contained. This goal imposes obvious space and power constraints. Short of magic, these constraints force at least some of the computing to be done remotely—generally in a cloud. But partitioning the computing load imposes a new set of design challenges.
Moving heavy computing tasks to the cloud is not an unprecedented idea in the Internet of Things (IoT). Imagination Technologies senior director of business development Chakra Parvathaneni points out that partitioning varies by application. “The Nest home thermostat has a lot of local processing,” he observes, “but Apple’s Siri is almost entirely cloud-based.”
In XOEye’s case, moving tasks to the cloud implies either enough bandwidth to get the two video streams up in raw format, or near-real-time video compression in the glasses. The latter is feasible with existing media-processing chips, even with a reasonably sized battery. But there is another issue as well.
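A back-of-the-envelope calculation shows why raw transport is the harder path. Assuming 30 frames per second and 24-bit color (illustrative figures, not XOEye's published specifications), the stereo pair generates well over a gigabit per second:

```python
# Link budget for two raw 720p video streams.
# Frame rate and color depth are illustrative assumptions.
width, height = 1280, 720        # 720-line video
bits_per_pixel = 24              # 8-bit RGB
fps = 30
streams = 2                      # stereo camera pair

raw_bps = width * height * bits_per_pixel * fps * streams
print(f"Raw stereo stream: {raw_bps / 1e9:.2f} Gbit/s")

# A typical H.264 encode of similar material might run on the order
# of 8 Mbit/s, comfortably within a WiFi link's sustained throughput.
compressed_bps = 8e6
print(f"Approximate compression ratio: {raw_bps / compressed_bps:.0f}:1")
```

Even a generous WiFi link falls far short of the raw figure, which is why an on-glasses compression engine is the practical choice.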
“You have to maintain a human interface and some functionality even when you don’t have connectivity,” Sharp warns. “For example, you can’t lose real-time recognition of safety issues when you step into a WiFi shadow.” And some functions may require a level of consistent real-time response that the cloud—lying on the other side of the Internet—can’t guarantee.
These concerns argue for significant local processing, contradicting the glasses’ size, weight, and power constraints. XOEye originally sought compromise in the MCU-plus-accelerator OMAP architecture. The OMAP SoC could handle conventional media-processing tasks, but “for example, stereo ranging couldn’t approach real-time,” Sharp laments. So XOEye moved to a CPU-plus-FPGA approach, allowing them to generate energy-efficient local accelerators for whatever tasks the application requires.
Even if the operating conditions can ensure local connectivity to a wireless hub, the link from the hub through the Internet to the cloud and back may still introduce unacceptable uncertainties. This is one of the structural challenges of the IoT in general. Given this situation, if it is necessary to move computing tasks out of a wearable device, it may make sense to place them in the local wireless hub rather than in the cloud (Figure 1). Of course this precludes the use of just any commodity WiFi hub.
Integrating a computing node into a WiFi hub significantly increases flexibility for the system architect. Hubs are usually comparatively unconstrained in space and power, so you can put substantial computing and storage resources there. And short-range WiFi links can provide reliable high-bandwidth, predictable latency connections, allowing the hub to participate in critical control or human-interface loops where a latency outlier would be a problem. Also, a hub equipped with a multitasking CPU and appropriate accelerators can serve exported processing tasks from a number of remote wearable devices.
But what if the wearable devices are significantly smaller than glasses—say a wrist band, a shoe implant, or a largish pill? Without room for a big battery, the power drain and mostly-on duty cycle of WiFi become unsupportable. The choices for the wireless link come down to Bluetooth or one of the seriously low-power short-range links. The hub now becomes itself a wearable device on a belt or in a pocket, necessarily within a meter or so of the sensors—perhaps considerably closer, if only a near-field wireless link is supportable. And the task-partitioning problem changes in interesting ways.
Part of the challenge now is powering the wearable devices. They must include, at a minimum, the sensor, a controller to interrogate said sensor, and a wireless interface. That load may be within the capabilities of a tiny battery or—with very careful attention to duty cycle—an energy-harvesting device. But now where do we put the computing?
Bandwidth between the sensor and the first level of sensor processing is now a greater issue. Can the wireless link carry the raw data stream from the sensor in real time? If not, does it make more sense to spend energy improving the link bandwidth, or to spend energy on local processing at the sensor? Does the answer change if the system use-model changes?
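These questions can be framed as a simple energy comparison. The sketch below uses placeholder constants (every number here is an assumption for illustration, not a measured figure) to weigh transmitting raw samples against compressing locally first:

```python
# Sketch: transmit raw samples vs. compress locally and transmit less.
# All constants are placeholder assumptions, not measured values.
def radio_energy(bits, nj_per_bit=50.0):
    """Energy (nJ) to transmit a payload over a low-power link."""
    return bits * nj_per_bit

def compute_energy(ops, nj_per_op=0.1):
    """Energy (nJ) for local processing on a small MCU."""
    return ops * nj_per_op

raw_bits = 10_000                      # one second of raw sensor data
compressed_bits = raw_bits // 10       # assume 10:1 compression
compression_ops = 50_000               # assumed cost of compressing

send_raw = radio_energy(raw_bits)
compress_then_send = compute_energy(compression_ops) + radio_energy(compressed_bits)
print(f"send raw:           {send_raw / 1e3:.0f} uJ")
print(f"compress then send: {compress_then_send / 1e3:.0f} uJ")
```

With these particular assumptions local compression wins easily, but changing the per-bit and per-operation costs can reverse the answer, which is exactly why the partitioning question has no single fixed solution.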
One potential path through this maze of questions begins with rethinking the radio. Designers tend to treat the baseband processor in their wireless interface as a tamper-proof black box. But Imagination Technologies’ Parvathaneni suggests taking a look inside. Imagination, for example, has a range of radio processing unit (RPU) baseband processor subsystems that offer system designers two important additional degrees of freedom.
Internally, Parvathaneni says, the Ensigma RPUs (Figure 2) comprise a general-purpose MIPS CPU core supported by a cluster of specialized accelerators. So the function is software-defined, and hence the user can change radio air interfaces by changing code. That is one degree of freedom—you can adjust the energy you put into the baseband to match the bandwidth and range requirements of the particular wireless link. Also, “in many cases, the air interface leaves room on the MIPS core for host tasks,” Parvathaneni explains. So potentially the system designer can select an air-interface standard and then load a set of processing tasks into the RPU without altering the hardware design—even on the fly, in response to a change in the operating mode of the wearable system. In some cases, this flexibility can eliminate the need for an MCU or compression engine at the sensor site altogether.
As wearable sensors get smaller, lighter, and closer to disposable, the hardware and the energy consumption necessary to support any air interface at all grow increasingly burdensome. In response, IP start-up Epic Semiconductor has an interesting proposal to eliminate the radio—among other things—from the wireless link. The key, according to Epic’s CTO, Wolf Richter, is electric fields.
Epic has developed a technology that uses an external electrode—generally, a small conductive plate or foil—for three separate purposes. First, the circuitry can harvest energy from ambient electric fields. In three to five seconds the device can collect enough energy to power a 5 mW load for a while. Richter offers examples, including executing a task on an ARM® Cortex®-M0 running at 3 MHz, rewriting a small 15V e-ink non-volatile display, or briefly activating a 30V printed-electronics circuit.
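The quoted figures imply a strict duty cycle. The arithmetic below takes the 5 mW load and the three-to-five-second harvest window from the text; the average harvesting rate is an assumed value for illustration:

```python
# Duty-cycle arithmetic for a harvest-then-burst node.
# The 5 mW load and the 3-5 s harvest window come from the text;
# the average harvested power is an assumption for illustration.
harvest_power_w = 0.5e-3      # assumed 0.5 mW average harvest rate
harvest_time_s = 4.0          # mid-range of the quoted 3-5 s window
load_power_w = 5e-3           # the 5 mW task load

stored_j = harvest_power_w * harvest_time_s
burst_s = stored_j / load_power_w
print(f"Stored energy: {stored_j * 1e3:.1f} mJ "
      f"-> {burst_s * 1e3:.0f} ms burst at 5 mW")
```

Under these assumptions the node runs its task for a fraction of a second, then must go quiet and harvest again—a pattern that shapes everything else in the node's firmware.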
Second, the Epic intellectual property (IP) can sense any physical phenomenon that modulates an electric field—much as would a typical capacitive sensor. For example, the device can sense the presence of a human at about three meters, Richter says. Less obvious uses involve measuring the dielectric constant of a nearby surface—from which a system could infer a creature’s temperature, pulse, and muscle activity. Or, in an entirely different context, the sensor could infer from the drift in dielectric constant the degree of spoilage on the surface of a piece of packaged meat.
Finally, the technology can use the same electrode for two-way signaling, achieving near-field communications without a radio by monitoring and modulating the electric field at the electrode. Thus a single 0.25 mm² bit of silicon can provide power, sensing, and connectivity to a smart decal, an adhesive patch, or some similar medium.
We can think of the wearable system as a traditional embedded system in which the extreme mobility puts unusual size, weight, and power constraints on individual components. The effect of these constraints is to tease out pieces of the system and separate them from the main body, leaving only a wireless link connecting the two pieces. In the example of smart glasses, the system remains largely intact, with only mass storage and the heaviest computing tasks pulled across the IoT and into the cloud.
In a comprehensive biometric system, individual adhesive-patch, strap-on, and ingestible or implanted sensors are pulled away from the central unit, each taking with it a short wireless link and enough computing power to manage bandwidth and execute local control loops. The system may even be fully distributed, with the central unit only serving as a hub for the short-range wireless connections, linking through WiFi or the cellular network to cloud resources when opportunity permits.
Laying out such systems sounds straightforward enough. Identify the physical topology—which sensor goes where. Select wireless links with the range and data rate implied by the sensors and the topology. Select local computing and power designs to meet the needs of local control loops at the sensor sites, and to provide whatever data compression, error correction, filtering, and broadband processing the wireless links demand. Repeat until happy.
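The loop above can be sketched in a few lines. In this toy model (every task, power figure, and budget is a hypothetical placeholder), each processing stage run locally costs node power but reduces the bandwidth the link must carry:

```python
# Toy sketch of the partitioning loop described above. All names and
# numbers are hypothetical placeholders, not a real design flow.
tasks = [("sample", 0.1, 8_000), ("filter", 0.5, 1_000), ("classify", 2.0, 16)]
#         stage     mW at node   bits/s left on the link after this stage

node_budget_mw = 1.0      # what the harvester or coin cell can sustain
link_budget_bps = 2_000   # what the low-power radio can carry

for split in range(len(tasks) + 1):
    node_mw = sum(mw for _, mw, _ in tasks[:split])
    # The link carries whatever the last local stage emits (raw if none).
    link_bps = tasks[split - 1][2] if split else 20_000
    if node_mw <= node_budget_mw and link_bps <= link_budget_bps:
        print(f"run {split} stage(s) locally: {node_mw} mW, {link_bps} bit/s")
        break
else:
    print("no feasible split: revisit the links or the power design")
```

The point of the toy model is the trade itself: each stage moved onto the node buys link bandwidth at the cost of node power, and the loop stops at the first split that satisfies both budgets—or reports that the design must be revised.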
There are just two issues with this simple approach. The first is the unpredictability of wireless links. A constraint on many systems will require the wearable network—for the system has in fact become a network—to maintain some level of function, and at least to do no harm, under loss of some of the wireless links. This demand may imply redundant sensors or radio channels, and perhaps significantly more local processing than would otherwise have been necessary.
The second issue is security. For both privacy and safety, every level of these wearable networks must protect itself against attack. Unlike in wired IoT systems, neither end of any wireless link can assume that the other end is who it claims to be, or that it is itself free from compromise. This is especially true in medical systems. There are protocols that can ensure this level of security, but they demand both authentication and cryptography in each remote node. Further, prudence requires a level of functional-safety monitoring that prevents an actuator from doing harm under any circumstances. Provision for such pessimistic requirements is unfortunately rare in today’s designs. That will change. And the result will be requirements for significantly more local computing, especially at the actuators and displays, but still within the space and power constraints of the local nodes. This apparent contradiction will increase interest in concepts such as CPU-plus-programmable-accelerator architectures in the nodes and hubs, loaded with control, sensor-processing, and security tasks.
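A minimal sketch of the authentication half of that requirement, using Python's standard hmac module over a pre-shared key (key provisioning, replay protection, and encryption are all omitted, so this is an illustration rather than a complete protocol), might look like this:

```python
# Challenge-response sketch: the hub verifies that a sensor node holds
# the shared key before trusting its data. Key provisioning, replay
# windows, and payload encryption are deliberately omitted.
import hashlib
import hmac
import secrets

shared_key = secrets.token_bytes(16)          # provisioned at manufacture

def node_respond(key, challenge):
    """Runs on the wearable node: prove possession of the key."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def hub_verify(key, challenge, response):
    """Runs on the hub: constant-time compare resists timing attacks."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)           # fresh nonce per session
response = node_respond(shared_key, challenge)
print("node authenticated:", hub_verify(shared_key, challenge, response))
```

Even a sketch this small shows where the node's compute budget goes: one keyed hash per exchange, plus enough state to hold a key and a nonce—modest, but not free, at the power levels discussed above.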
We can envision the wearable network of tomorrow by stretching out the entrails of today’s embedded systems and redistributing the computing tasks. The technology to implement these distributed nodes is emerging now. But using them wisely will profoundly change our current views of embedded-systems architecture.