Rethinking the Internet of Things

As the Internet of Things (IoT) cements itself into place as the mandatory next big thing for 2015, more systems architects are taking a hard look at its underlying concepts. As they look, these experts are asking some hard questions about simplistic views of the IoT structure: the clouds of sensors and actuators attached to simple, low-power wireless hubs, linked through the Internet to massive cloud data centers.

Almost every stage of this description is coming into question. Some experts challenge the notion that a swarm of simple sensors is the right way to measure the state of a system in the first place. Others question the idea of a dumb, inexpensive hub. Network architects are asking about the role of traditional Internet switches, and even the Internet Protocol (IP) itself, in the IoT picture. And data-center architects are taking a hard look at the implications of the IoT for concepts they are exploring: both virtualization and application-specific data centers.

We are a long way from consensus. But the answers that are emerging from these questions could profoundly alter sensing technology, the structure of data centers down to the hardware level, and even the Internet.

Sensible Sensing

Let’s start with the sensor question. The obvious way to measure the state of a system is to identify the state variables, find points at which they are exposed so sensors can measure them, and put sensors there. Then pull the sensor data together at a hub. But the obvious way is not necessarily the best way. All those sensors and links make such an approach expensive to install and inherently unreliable.

Another way is to pick a few critical variables that can be sensed remotely and then used to estimate the state of the entire system. This process may be intuitively obvious, or it may involve some serious mathematics and use of a state estimator such as a Kalman filter. One example on the more intuitive side involves security cameras, traffic, parking, and the idea of the smart city.

A typical smart-city scenario might involve lighting management, parking management, traffic control, and security. A traditional IoT approach would put a light sensor on each street lamp, buried proximity sensors in traffic lanes near each intersection and each parking space, and security cameras at strategic locations well above ground level. Each of these sensors would have a wired connection to a local hub, which in turn would have a wireless link to an Internet access point—except for the light sensors, which would use wireless links from the tops of the lamp posts to their hubs.

Just the installation costs of this system are enough to keep most municipalities from even trying it. But the continuing maintenance costs for the buried sensors, long runs of buried wire, and all those radios could be even worse.

There is another way. An intelligent observer, watching the video from a few of the security cameras, would easily see which street lamps were on, which parking spaces were occupied, and when traffic signals should change. Accordingly, system vendors like Sensity Systems, using video analytics algorithms from Eutecus, have been substituting a few high-definition cameras for the whole collection of other sensors, hubs, and wireless links. The result is not only huge savings in total cost of ownership, but improved reliability and additional safety and security features that would not have been available from a swarm of simple sensors (Figure 1).

Figure 1. A single camera may be able to collect more data, more reliably, than a swarm of simple sensors in a smart-city application.

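To make the substitution concrete, here is a deliberately simplified sketch of one such analytic: deciding whether a parking space is occupied by differencing a camera frame against a reference image of the empty lot. It assumes OpenCV, hand-marked space coordinates, and illustrative thresholds and file names; it stands in for, rather than reproduces, the far more sophisticated production video-analytics algorithms.

```python
# Simplified parking-occupancy sketch: compare each hand-marked parking-space
# region against a reference frame of the empty lot. Thresholds, coordinates,
# and file names are illustrative assumptions, not values from any vendor.
import cv2

# (x, y, width, height) of each parking space in the camera image, marked by hand.
SPACES = {"space_01": (120, 340, 60, 110), "space_02": (190, 340, 60, 110)}

empty_lot = cv2.imread("empty_lot.png", cv2.IMREAD_GRAYSCALE)   # reference frame

def occupied_spaces(frame_bgr, change_fraction=0.25):
    """Return the names of spaces where enough pixels differ from the reference."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    occupied = []
    for name, (x, y, w, h) in SPACES.items():
        diff = cv2.absdiff(gray[y:y + h, x:x + w], empty_lot[y:y + h, x:x + w])
        _, changed = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(changed) > change_fraction * w * h:
            occupied.append(name)
    return occupied

# frame = cv2.imread("current_view.png")   # or a frame pulled from cv2.VideoCapture
# print(occupied_spaces(frame))
```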

Similar concepts can work on other sorts of systems. State estimators using computable mathematical models of systems can compute the position of a motor shaft from readily accessible motor winding currents and voltages, or the state of a chemical reaction from external observations. In general, there appears to be a growing trend to favor a small number of remote sensors—often cameras—supported by computing resources, rather than a swarm of simple sensors with their attendant power, connectivity, reliability, and security issues.
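For the less intuitive cases, the machinery is well understood. Below is a minimal sketch of a linear Kalman filter, posed as the motor-shaft example: it refines an estimate of shaft angle and speed from a noisy angle value derived from winding measurements. The state model, noise covariances, and sample period are illustrative assumptions, not a worked motor design.

```python
import numpy as np

# Minimal linear Kalman filter sketch: estimate shaft angle and speed
# from a noisy, indirectly derived angle measurement (illustrative values only).
dt = 0.001                                  # 1 ms sample period (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition for [angle, speed]
H = np.array([[1.0, 0.0]])                  # we only "measure" the angle
Q = np.diag([1e-6, 1e-4])                   # process noise covariance
R = np.array([[1e-2]])                      # measurement noise covariance

x = np.zeros((2, 1))                        # state estimate [angle; speed]
P = np.eye(2)                               # estimate covariance

def kalman_step(z):
    """One predict/update cycle for a scalar angle measurement z."""
    global x, P
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.array([[z]]) - H @ x             # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x.flatten()                      # current [angle, speed] estimate
```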

That Changes Everything

The idea of substituting heavy computing algorithms—such as convolutional neural networks or Kalman filters—for clouds of simple sensors has obvious advantages. But it creates problems, too. Designers seem to face a dilemma. Do they preserve the spirit of virtualization by moving the raw data—potentially multiple streams of 4K video—up to the cloud for analysis? Or do they design-in substantial computing power close to the sensors? Both approaches have their challenges and their advocates.

Putting the computing in the cloud has obvious arguments in its favor. You can have as much computing power as you want. If you wish to experiment with big-data algorithms, you can have almost infinite storage. And you only pay for roughly what you use. But there are three categories of challenges: security, latency, and bandwidth. Security we will return to later, so let’s look at the other two, both of which are primarily Internet issues.

If your algorithm is highly intolerant of latency, you have no choice but to rely on local computing. But if you can tolerate some latency between sensor input and system response, the question becomes how much, and with how much variation. For example, some control algorithms can accommodate significant latency in the loop, but only if that latency is nearly constant. You can meet many such needs with dedicated optical links between your hubs and the data center. This is how the centralized radio access network (C-RAN) works, for example.

If instead you use the Internet—in best IoT spirit—there are more serious issues. In a keynote at the recent Ethernet Technology Summit, Internet pioneer and TSL Technologies CEO Larry Roberts warned that in the face of increasing user demands, the Internet requires fundamental changes.

Roberts warned that no matter the advertised bandwidth of a link, the protocols have speed limits of their own. “A TCP flow is stuck at megabits per second (Mbps), not gigabits per second (Gbps),” he said. “TCP can’t carry 4K video over long distances, for example.”

Work-arounds are possible but dangerous. “You can jam content through using user datagram protocol (UDP). But sooner or later that will break the Internet,” Roberts explained. In order to achieve adequate bandwidth, and to secure agreements about latency, the Internet has to change. “To support quality-of-service (QoS) requirements and greater bandwidth, we need to add flow management, congestion management, and recovery to the network. To do that, the Internet has to share more information with its protocols.”
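Roberts' numbers track a standard back-of-the-envelope calculation: a single TCP flow can move at most one window of data per round trip, and loss recovery caps throughput further (the Mathis approximation). The sketch below uses illustrative RTT and loss figures, not measurements from the talk.

```python
# Back-of-the-envelope sketch of why a single long-haul TCP flow stalls
# in the Mbps range; the RTT and loss figures below are assumptions.

def window_limited_bps(window_bytes, rtt_s):
    """Throughput ceiling from the TCP window: one window per round trip."""
    return window_bytes * 8 / rtt_s

def mathis_limited_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. approximation for loss-limited TCP throughput."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / loss_rate ** 0.5)

rtt = 0.100  # assume a 100 ms long-distance round trip
print(window_limited_bps(65535, rtt) / 1e6)        # ~5 Mbps with a classic 64 KB window
print(mathis_limited_bps(1460, rtt, 1e-4) / 1e6)   # ~14 Mbps even at 0.01% packet loss
```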

These issues are obviously not a concern when the amount of data moving to the cloud is small and time is not critical. But if a system design requires moving real-time 4K video from multiple cameras to the cloud, the limitations of the Internet become an issue.

Virtualization and Its Discontents

The requirements of our cloud-centric system extend through the network and into the data center, where profound change is already under way. As more compute-intensive, event-triggered applications descend upon the data center, server and storage virtualization become almost mandatory. The data center must be able to run an application on whatever resources are available, and still meet the outside system’s service-level requirements. Virtualization, in turn, puts greater demands on the data center’s internal networks, pushing toward 25 Gbps Ethernet in local ports, 400 Gbps between racks, and carrier-level service guarantees on the data-center network.

There is another difficult point as well. Some algorithms resist being spread across multiple cores on multiple servers. They depend on single-thread performance, and the only way to make them go faster is to run them on faster hardware. But having hardware accelerators—such as GPU, FPGA, or Xeon Phi chips—attached to the server CPUs disrupts the homogeneity that hypervisors depend upon. Some cloud architects suggest pools of accelerators within easy reach of all servers through the switch fabric. Others argue for an accelerator-per-CPU architecture.

But this discussion goes deeper than just the need for accelerator chips. Ideally, the cloud would provide applications with exactly the I/O, compute, and storage resources they need, linked by dedicated connections that guarantee the QoS each link requires. Put this desire into the context of the virtualized, software-defined data center, and you get Cisco’s vision of application-centric infrastructure, as described by Cisco CTO Tim Edsall at the Summit.

Edsall proposed deploying a software-defined network (SDN) within the data center. When an application is invoked, the cloud manager instantiates objects on available port, compute, and storage resources. It also requests links with specific QoS parameters from the SDN, and the network in turn deploys its transmission and switching resources to create links with the required bandwidth and latency.
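What such a request might look like is sketched below as a hypothetical REST call to an SDN controller. The endpoint URL, field names, and QoS values are assumptions for illustration, not Cisco's actual API.

```python
# Hypothetical sketch of an application-centric link request to an SDN
# controller; the endpoint, fields, and values are assumptions, not a real API.
import json
import urllib.request

link_request = {
    "app": "parking-analytics",
    "src": "compute-node-17",
    "dst": "storage-pool-03",
    "qos": {
        "min_bandwidth_mbps": 400,   # sustained video-analytics traffic
        "max_latency_ms": 5,         # keep the control loop responsive
        "max_jitter_ms": 1,
    },
}

req = urllib.request.Request(
    "https://sdn-controller.example.net/api/links",   # placeholder URL
    data=json.dumps(link_request).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# with urllib.request.urlopen(req) as resp:
#     provisioned = json.load(resp)   # controller would return the path and its QoS
```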

“The idea is to give the network some context about the apps,” Edsall explained, “and to collect statistics by application, not just by port. Today people know shockingly little about what is really going on in their data center.”

Even mass storage gets into the act. Flash-based storage on server cards and in the racks, connected to disks through a guaranteed-QoS network link, can make any data set in the cloud effectively local to the computing node that requires it.

The endpoint of this thinking is a cloud data center that appears entirely application-specific to the user, and entirely virtualized to the operator. To the user it offers access to processing, accelerator, and storage resources configured to serve her algorithms. To the operator, the data center is a sea of identical, software-definable resources.

The Fog

Let us take one more step. We have been discussing how to provision IoT applications that could do all their computing in the cloud. Now let’s look at applications that, for safety, bandwidth, latency, or determinism reasons, cannot. These applications will require significant local computing and storage resources: at the sensors (as in vision-processing surveillance cameras), in the hubs, or in the Internet switches themselves.

Today these resources are being designed into proprietary sensors and hubs as purely application-specific hardware, generally using light-weight CPUs supported by hardware accelerators. But Cisco and other planners are contemplating a different approach: what Cisco calls fog computing.

Imagine virtualization seeping through the walls of the data center, spreading out to engulf all the diverse computing, storage, and connectivity resources of the IoT. You could locate an application object anywhere: in the cloud, in intelligent hubs or smart sensors, eventually even inside the network fabric (Figure 2). You could move it at will, based on performance metrics and available resources. The system would be robust, flexible, and continually moving toward optimal use of resources.

Figure 2. In this distributed view of the IoT, the Internet switches, smart hubs, and smart sensors outside the data center may themselves become virtualized computing sites.

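The placement decision at the heart of this vision can be sketched simply: pick the tier closest to the sensor that still satisfies an application object's latency, bandwidth, and compute needs. The tiers, numbers, and first-fit policy below are assumptions for illustration only, not any vendor's orchestration logic.

```python
# Illustrative sketch of a fog-style placement decision: choose the closest
# tier that can meet an object's latency and bandwidth needs. The tiers,
# numbers, and policy are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    rtt_ms: float          # round-trip latency from the sensor to this tier
    uplink_mbps: float     # bandwidth available from the sensor to this tier
    free_cores: int

TIERS = [
    Tier("smart-sensor", rtt_ms=0.1,  uplink_mbps=10_000, free_cores=2),
    Tier("hub",          rtt_ms=2.0,  uplink_mbps=1_000,  free_cores=8),
    Tier("cloud",        rtt_ms=40.0, uplink_mbps=100,    free_cores=10_000),
]

def place(max_latency_ms, needed_mbps, needed_cores):
    """Return the first tier (nearest the sensor) that satisfies the object."""
    for tier in TIERS:
        if (tier.rtt_ms <= max_latency_ms
                and tier.uplink_mbps >= needed_mbps
                and tier.free_cores >= needed_cores):
            return tier.name
    return None   # no placement possible; the request must be renegotiated

# A latency-critical analytics object lands on the hub; a batch job goes to the cloud.
print(place(max_latency_ms=5,   needed_mbps=50, needed_cores=4))   # -> "hub"
print(place(max_latency_ms=500, needed_mbps=1,  needed_cores=64))  # -> "cloud"
```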

Many steps must be taken to reach this vision. Applications must run in a portable container, such as a Java virtual machine or an Open Computing Language (OpenCL™) platform, that allows them to execute without alteration on any of a huge range of different hardware platforms. The notion of application-directed networking must extend beyond the data center, into a version of the Internet that can support QoS guarantees on individual connections, and eventually computing tasks within nodes. And somehow, all of this must be made secure.

Oh, Yes—Security

The need for iron-bound security is already recognized inside cloud data centers. No one is going to let you store their data if they think you might let a third party modify, read, or snoop it. But preventing those things from happening, in a dynamic, virtualized environment where no one really knows the state of the total system, is a daunting challenge.

Countermeasures begin with secure storage systems that hold data in encrypted form and authenticate all requests. Steps continue with roots of trust and secure physical enclaves in servers, so that client data is decrypted only while it is within a trusted environment. But then there is the data-center network.

“About half the servers in the data-center world are virtualized now,” reported Ixia president and CEO Bethany Mayer in her Summit keynote. “But we are still thinking about security. We know there have been many attacks on virtualized cloud applications, but to a large extent the industry has not thought through what to do about them.”

One question is at what level to protect traffic between servers. An approach like IPsec encrypts just the payload inside the packets on the transmitting end, and decrypts it on the receiving end. The payload is protected from end to end, and the process is transparent to lower protocol layers. But the payload is also hidden from deep packet inspection engines, which can be a problem in SDNs. And IPsec protects the payloads from easy reading—it does not protect the packets themselves.

In contrast, there is MACsec. “MACsec is a hop-by-hop encryption of the entire packet,” explained Inside Secure vice president of field application engineering Steve Singer. This approach protects not only the secrecy of the payload; because it encrypts the headers as well, it is also effective against attacks on the packet stream itself, such as spoofing and man-in-the-middle attacks. But because each packet must be decrypted for switching, each switch on the path must be trusted (or MACsec must be used in conjunction with IPsec), and each media access controller in the path must have hardware capable of wire-speed cryptography. This can amount to some 400K gates to support a 10 Gbps port, Singer said.
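The difference in scope can be sketched conceptually. In the toy model below, an IPsec-style scheme leaves headers readable at every switch while the payload stays opaque end to end, whereas a MACsec-style scheme hides the whole frame on the wire but must decrypt and re-encrypt it at each trusted hop. The XOR "cipher" is a deliberate stand-in for real AES-GCM hardware, and the frame layout is simplified far past the real protocols.

```python
# Conceptual sketch only (not real packet formats): payload-only protection
# end to end versus whole-frame protection applied and removed at every hop.
# encrypt()/decrypt() are toy stand-ins for wire-speed AES-GCM hardware.

def encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))  # toy stand-in

decrypt = encrypt  # the XOR stand-in is its own inverse

headers, payload = b"SRC|DST|VLAN|IP|TCP|", b"sensor reading 42"
e2e_key, hop_key = b"end-to-end-key", b"per-hop-key"

# IPsec-style: only the payload is opaque; every switch still sees the headers,
# so it can route the packet, but it cannot read the data.
ipsec_packet = headers + encrypt(payload, e2e_key)

# MACsec-style: the frame is opaque on the wire, but each trusted switch must
# decrypt it to switch it, then re-encrypt it toward the next hop.
macsec_frame = encrypt(headers + payload, hop_key)
switched = decrypt(macsec_frame, hop_key)          # inside a trusted switch
next_hop_frame = encrypt(switched, hop_key)        # re-protected for the next link
```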

As cloud computing becomes fog computing, these security requirements expand into hubs and, eventually, into the public network, adding another computing load to both hub SoCs and SDN data planes. But with everything from biometric data to autonomous-vehicle control messages traversing the Internet, today’s attitude toward security would be catastrophic.

We have seen how a close examination of the IoT undermines the simplistic picture of a myriad of simple things all connected to the Internet. But moving beyond this view brings profound changes to the Things, their hubs, the structure of data centers, and the Internet itself. There may be no stable waypoints between where we are today and a fog-computing, fully secure new realization of the network and its data centers.


Categories: All, System Architecture | Author: Ron Wilson

2 comments to “Rethinking the Internet of Things”

  1. ATM: Asynchronous Transfer Mode. We made a really bad turn 15 years ago.

  2. I laughed a little at the Smart City concept. While I like the concept, having worked in that domain I can tell you it’s not that easy.

    The most obvious problem is redundancy. Then there is the fun problem of recognition algorithms. Then there are channel issues.

    Also, the author forgot all about the original rationale for a decentralized system. It wasn’t that long ago that motors were expensive. So you would have one motor and several pulleys/gears, so that one motor did multiple jobs. This changed when you could have multiple motors on site.
    IoT has done this to communications and how it connects to objects. Let’s not try to centralize it through a “main driveshaft” just because we can build a better driveshaft. If you want to shake up the IoT, question the “I” part.
    What’s stopping us from having transferable mediums?

    If I can take energy from gas to light to electricity to movement, and other wacky transformations, why do communications have to follow a 100-year-old standard?
