Learning from the Next-Gen Firewall

Torrents of packets will cascade into the data center: endless streams of data from the Internet of Things (IoT), massive flows of cellular network traffic into virtualized network functions, bursts of input to Web applications. And hidden in the cascades, far darker bits try to slip through: cyber attacks. They may seek to interfere with applications, steal private data, recruit servers into bot nets, infect data-center clients, encrypt and ransom vital files, or even do physical harm over the IoT. They are always out there, probing for an opening, altering their disguises, trying novel attacks.

Between our world and this mayhem stand firewalls—layers upon layers of defenses built over historically shaky foundations, always trying to catch up with the attackers (Figure 1). Over the years firewalls have grown from light-weight software packages into multilayered, hardware-reinforced defenses in depth. They have marshalled new computing technologies. And by watching their continuing battle, we can get an understanding of what security will mean, not just in data centers, but at the edge of the IoT—in edge computing and in endpoints.

Figure 1. Not all security measures last forever.

Humble Beginnings

The origins of firewalls are far simpler than one might think. The idea was to have a thin layer of software that could unobtrusively inspect Internet traffic for suspicious content. In the beginning, this could be as simple as inspecting the port numbers in TCP headers to make sure they made sense in context. But attackers quickly learned to use unsuspicious port numbers.

Firewalls then dug deeper, below the transport layer, to look at individual IP packet headers, attempting to spot suspicious source addresses. This is no mean task: IPv6 addresses are 128 bits wide, and the list of blocked addresses for a firewall can be long, with the addresses widely scattered across the space. And IP packet headers arrive much more frequently than TCP headers: as often as once every 20 bytes. On a 40 Gbps link, that gives you only 4 ns to deal with each packet at wire speed. Clearly that precludes comparing a source address against each entry in a large table. You need some sort of hardware acceleration.
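
To make the constraint concrete: a linear scan of a long blocklist cannot finish in a few nanoseconds, which is why real firewalls reach for hardware such as TCAMs or Bloom filters. The sketch below, using hypothetical addresses from the IPv6 documentation range, at least shows the constant-time shape such a lookup must have.

```python
import ipaddress

# Hypothetical blocklist entries from the IPv6 documentation range.
# A real firewall holds this table in hardware (TCAMs, Bloom filters);
# a hash set at least gives constant-time average lookups instead of
# a per-packet scan of the whole table.
BLOCKLIST = {
    ipaddress.ip_address("2001:db8::bad:1"),
    ipaddress.ip_address("2001:db8::bad:2"),
}

def drop_packet(src: str) -> bool:
    """Return True if the packet's source address is blocklisted."""
    return ipaddress.ip_address(src) in BLOCKLIST

print(drop_packet("2001:db8::bad:1"))  # True: blocked source
print(drop_packet("2001:db8::1"))      # False: innocent source
```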

But even if you are fast, source address inspection is no guarantee of safety. It is relatively easy to spoof the source address, so packets appear to come from a trusted—or at least unsuspicious—source. And if the attacker has recruited a botnet, malicious packets may come from many different perfectly innocent addresses.

With the decreasing effectiveness of address screening, firewalls have explored two quite different more aggressive strategies: inspecting payloads instead of headers, and monitoring the behavior of payloads after they have arrived.

Payload Inspection

With port and address screening losing their effectiveness against clever attacks, firewall developers shifted their attention to include the data portions of packets. Deep packet inspection (DPI) was born (Figure 2).

Figure 2. Two different layers of security depend on inspecting portions of the IP packet header.

The idea behind DPI is deceptively simple. Once you have seen an attack, you can identify signatures: odd system calls, code, or data that are characteristic only of the attack. Ideally, these fragments would be necessary for the attack to work. So you just scan the data portion of each packet for known signatures, and delete any messages that contain a match.

DPI presents its own computational problems. Even simple string matching is expensive, both in terms of space (enough to handle the huge variety of known attacks) and in terms of computational load. Open-source intrusion detection systems like Snort and Suricata use the Aho-Corasick string matching algorithm or Intel’s open-source Hyperscan library for this task, while hardware approaches – network processors, FPGAs, perhaps graphics processing units (GPUs) – are also options, depending on the throughput and latency requirements.
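
To make the technique concrete, here is a minimal sketch of the Aho-Corasick idea that Snort and Suricata build on: compile all signatures into a single automaton, then find every match in one pass over the payload. The two signatures are invented examples, not real attack strings.

```python
from collections import deque

class AhoCorasick:
    """Minimal Aho-Corasick automaton: match many signatures in one pass."""
    def __init__(self, patterns):
        self.goto = [{}]       # trie edges per state
        self.fail = [0]        # failure links
        self.out = [set()]     # signatures ending at each state
        for pat in patterns:
            state = 0
            for ch in pat:
                if ch not in self.goto[state]:
                    self.goto.append({})
                    self.fail.append(0)
                    self.out.append(set())
                    self.goto[state][ch] = len(self.goto) - 1
                state = self.goto[state][ch]
            self.out[state].add(pat)
        queue = deque(self.goto[0].values())   # build failure links by BFS
        while queue:
            s = queue.popleft()
            for ch, t in self.goto[s].items():
                queue.append(t)
                f = self.fail[s]
                while f and ch not in self.goto[f]:
                    f = self.fail[f]
                self.fail[t] = self.goto[f].get(ch, 0)
                self.out[t] |= self.out[self.fail[t]]

    def scan(self, data):
        """Return (start_index, signature) for every match in data."""
        state, hits = 0, []
        for i, ch in enumerate(data):
            while state and ch not in self.goto[state]:
                state = self.fail[state]
            state = self.goto[state].get(ch, 0)
            for pat in self.out[state]:
                hits.append((i - len(pat) + 1, pat))
        return hits

ac = AhoCorasick(["evil", "ill"])
print(ac.scan("devilled"))  # [(1, 'evil'), (3, 'ill')]
```

The point of the automaton is that the scan cost does not grow with the number of signatures, only with the payload length, which is why it remains the workhorse for software DPI.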

There is an easy way for attackers to elude string matching. They can just alter functionally irrelevant bits in signatures so that the string no longer matches. The countermeasure for this is to switch from exact matching of strings to evaluating regular expressions. A regular expression can exactly describe which bits are functionally necessary to the attack, and which the attacker might use to obscure the signature. But large-scale regular expression processing at wire-speed is even more expensive than string matching. Fortunately there are options: Intel’s Hyperscan software library was designed for this application, and there are also a variety of hardware approaches, including both the list above and dedicated regular-expression processor chips.
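
A small illustration of the difference, assuming a hypothetical signature built around `cmd.exe /c`: trivial case and whitespace changes defeat the exact string, while a regular expression pins down only the functionally necessary bits.

```python
import re

# Hypothetical signature: suppose the attack must invoke "cmd.exe /c".
# An exact string misses trivial obfuscation; a regular expression
# matches only the bits the attack actually needs.
exact = b"cmd.exe /c"
pattern = re.compile(rb"cmd\.exe\s+/c", re.IGNORECASE)

payload = b"...CMD.EXE   /c del backups..."
print(exact in payload)               # False: case and spacing defeat it
print(bool(pattern.search(payload)))  # True: the regex still matches
```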

Unfortunately, there is an embarrassingly simple way to circumvent many kinds of regular expression analysis—just make sure the signatures in your malware all lie across packet boundaries. Now no one packet contains a signature, and simple DPI systems that merely scan individual packet payloads in isolation will fail to detect your attack.

Packets and Objects

In this situation the regular expression processor can preserve state as it moves from one packet to the next, operating in streaming mode, and so detect signatures that have been split across packet boundaries. But some firewall and email-gateway developers are moving toward a more demanding solution: assembling the entire message in a buffer and examining the whole thing at once. This gives up the potentially low latency and wire-speed operation possible in streaming, but may provide more thorough analysis. To distinguish it from DPI, this approach is sometimes called deep content inspection (DCI).
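
The streaming fix can be sketched in a few lines: carry the last len(signature) - 1 bytes of each packet forward, so a signature straddling a boundary still falls inside one scan window. The signature and packets here are invented.

```python
SIGNATURE = b"EVILCODE"

def stream_scan(packets, sig=SIGNATURE):
    """Scan payloads while carrying state across packet boundaries.

    Keeping the last len(sig) - 1 bytes of each packet as overlap means
    a signature split between two packets still lands in one window.
    """
    carry = b""
    for pkt in packets:
        window = carry + pkt
        if sig in window:
            return True
        carry = window[-(len(sig) - 1):]
    return False

# The signature straddles the boundary: per-packet scanning misses it,
# the streaming scanner does not.
packets = [b"xxxxEVIL", b"CODEyyyy"]
print(any(SIGNATURE in p for p in packets))  # False
print(stream_scan(packets))                  # True
```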

Having the entire message available for inspection opens up all sorts of possibilities. You can do more complex regular expression evaluations to sniff out telltale signatures. You can, for the first time, put the message in context, and think of it as an object with meaning rather than as a meaningless string of bits. Is it an email with links or attachments? An image or video? A block of unstructured data, or of code? Once you’ve classified it, you can compare it to what the recipient should be expecting, and apply policies. Is someone sending an executable attachment to your CTO? Does that line of text end in a string of SQL? Does the message appear to be encrypted?
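
As a toy illustration of such policy checks on a reassembled object, the field names and both rules below are hypothetical stand-ins for what a real DCI engine would enforce:

```python
# Toy policy pass over a reassembled message. The field names and both
# rules are hypothetical stand-ins for real DCI policies.
def apply_policy(message):
    """Return the list of policy alerts this message raises."""
    alerts = []
    attachment = message.get("attachment", "")
    if message.get("recipient_role") == "CTO" and attachment.endswith(".exe"):
        alerts.append("executable attachment sent to an executive")
    if "' OR '1'='1" in message.get("body", ""):
        alerts.append("SQL fragment embedded in message text")
    return alerts

suspect = {"recipient_role": "CTO", "attachment": "invoice.exe",
           "body": "Please review: ' OR '1'='1"}
print(apply_policy(suspect))  # both rules fire
```

Note that neither rule could run at the packet level: both depend on knowing what the object is and who is receiving it.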

This new-found ability has its costs. We have quietly given up the notion of flow-through inspection at wire speed. Unpacking packets to reveal their payload imposes new latency and requires much larger buffers than does packet-level inspection. Also, the payload may require work to make it comprehensible. It may be encoded or compressed. Or it may be encrypted. The firewall will need access to the algorithms and keys to render the payload readable—a requirement that may be hard to meet, especially in a public cloud. But DCI opens up new possibilities as well. One is the ability to do static analysis of code segments to estimate their capabilities. Could this code alter system files, or export data to an unknown destination?

Machine Learning?

Another possibility getting a lot of attention now is the ability—now that you are looking at the entire object—to apply machine learning techniques. With their proven ability to recognize patterns in unstructured, noisy data, deep neural networks (DNNs) seem, intuitively at least, ideal for spotting signs of trouble in messages. But there are limitations. DNNs require training: experts must gather millions of examples, tag them to indicate the presence or absence of threats, and feed them one by one into the network. This process is hugely demanding of human expert hours, and places a lower bound on how long it takes to prepare the DNN to recognize a new threat.

Worse, the trained network is unlikely to recognize a novel threat without retraining, due to concept drift in the data. When a new threat appears, humans will have to identify its signatures, prepare examples, tag them, add them to the training set, and retrain the network. Whether this retraining can be incremental, or whether the entire set of millions of examples will have to be reprocessed, is an open question.

One potential solution is to employ a neural network with an entirely different kind of training: reinforcement learning. As Intel security architect Jason Martin explains, “Supervised-learning networks are successful when they get millions of labeled samples and data distribution doesn’t change frequently. But in security you don’t get that.

“In contrast, reinforcement learning is more of a feedback loop: the network takes in an observation, makes a prediction, acts on the prediction, gets a reward based on the results of its actions, and then modifies its weights based on the reward.”

For example, a security network would get positive feedback for passing a harmless message or flagging a suspicious one, negative feedback for a false alarm, and very negative feedback for passing a real threat.
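
That loop can be sketched with a toy Q-table standing in for a real network. The reward values mirror the scheme just described; everything else (the learning and exploration rates, the single binary feature) is illustrative.

```python
import random

random.seed(1)

# A toy version of the feedback loop described above: observe, act,
# receive a reward, update. A Q-table over one binary feature stands
# in for a real network; all numbers are illustrative.
ACTIONS = ("pass", "flag")
Q = {(f, a): 0.0 for f in (0, 1) for a in ACTIONS}

def reward(action, is_threat):
    if action == "flag":
        return 1.0 if is_threat else -1.0   # caught a threat vs false alarm
    return -5.0 if is_threat else 1.0       # passed a threat vs correct pass

def train(steps=2000, alpha=0.1, epsilon=0.1):
    for _ in range(steps):
        is_threat = random.random() < 0.3
        feature = 1 if is_threat else 0     # toy world: feature is reliable
        if random.random() < epsilon:       # explore occasionally
            action = random.choice(ACTIONS)
        else:                               # otherwise act greedily
            action = max(ACTIONS, key=lambda a: Q[(feature, a)])
        r = reward(action, is_threat)
        Q[(feature, action)] += alpha * (r - Q[(feature, action)])

train()
policy = {f: max(ACTIONS, key=lambda a: Q[(f, a)]) for f in (0, 1)}
print(policy)  # {0: 'pass', 1: 'flag'}
```

The heavy penalty for passing a real threat is what pushes the learned policy toward flagging anything suspicious, without any labeled training set.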

Martin says that reinforcement learning still requires lots of training, but instead of being fed training data, the reinforcement network “explores on its own, even after it is deployed.” This may allow such networks to be deployed long before a supervised-learning network could be trained, to cope with concept drift, and to respond more quickly to novel threats.


Artificial neural networks bring a new possibility to the firewall: the ability to identify an object as suspicious without having an exact match for a known signature or regular expression. The question that arises next is what to do about objects that are suspicious but not known to be dangerous. The answer will vary with the context. In low-risk, best-effort situations, it may make sense to pass the object to the receiving task with only a warning to the user. In high-security environments it may be necessary to destroy suspicious objects. But in cases where resources and latency tolerance allow, there is another option: a sandbox.

“Think of a sandbox as a special use of virtualization,” says Intel principal engineer Ravi Sahita, “a kind of detonation box where you can try a message to see what it will do.” (Figure 3).

Figure 3. A sandbox creates a secure artificial environment in which to test objects and learn their real behavior.

Say you have a suspicious email attachment. You can create an instance of the email environment in which the system calls go not to the normal hypervisor but to a security monitor that will inspect them for inappropriate requests. The security monitor may also watch address activity for inappropriate reads, writes, or fetches. Thus if the object in the sandbox attempts any aberrant behavior the monitor will observe and flag it. And because the sandbox is logically isolated by virtualization—and sometimes physically isolated in an appliance—the object cannot harm the real system.
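
In miniature, the monitor's job looks like the sketch below. The system-call names and the blocklist policy are hypothetical stand-ins for what a real security monitor would watch.

```python
# A toy "detonation box": replay an object's system-call trace against
# a monitor instead of a real hypervisor. Call names and the policy set
# are hypothetical stand-ins for a real security monitor's rules.
SUSPICIOUS = {"open_system_file_rw", "connect_unknown_host", "disable_logging"}

def detonate(syscall_trace):
    """Return the flagged calls the sandboxed object attempted."""
    return [call for call in syscall_trace if call in SUSPICIOUS]

benign = ["open_attachment", "render_text"]
malware = ["open_attachment", "open_system_file_rw", "connect_unknown_host"]
print(detonate(benign))   # []
print(detonate(malware))  # ['open_system_file_rw', 'connect_unknown_host']
```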

Sandboxes may be especially valuable when used with reinforcement learning. The network can judge an object suspicious, divert it to a sandbox, and then get immediate feedback from the actual behavior of the object, all with no risk to the real system. Such tactics can be implemented in a pipelined fashion so that the firewall responds to easily-recognized threats with very low latency, but passes questionable objects on to the sandbox for further analysis. Martin says that such an approach can also include a human expert in the loop, analyzing results from the sandbox and if necessary retraining the front-end of the pipeline to recognize new threats.

Sahita points out that attackers still have options. They have learned to delay the bad behavior of their code, outwaiting the object’s quarantine time in the sandbox. Some have learned to identify fingerprints that indicate their object is in a sandbox instead of in the real operating environment, allowing the malware to conceal itself until it is released to the real system.

Security developers are striking back. “Ideally, the sandbox merges with the endpoint,” Sahita says. This would mean building system-call monitoring and address snooping into the production hypervisor, so there is no distinctive sandbox fingerprint to detect—and, incidentally, no added delay for quarantining the object. But that requires additional protections: you would be well advised to monitor not only system calls and memory references, but also control flow to make sure the object can’t somehow link itself to privileged code somewhere else in the system, or otherwise run amok.

That is the point behind Intel’s Control Flow Enforcement Technology, which monitors return stacks and the registers used by indirect branches, looking for attempts to divert program flow. But Sahita suggests that surveillance could go even deeper.
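
The return-stack half of that idea can be illustrated with a toy shadow stack: keep a protected copy of every return address and compare on each return. The addresses below are arbitrary integers, not real program state.

```python
# Toy shadow stack: keep a protected copy of return addresses and check
# every return against it. Real hardware does this transparently; the
# addresses here are arbitrary integers, not real program state.
class ShadowStack:
    def __init__(self):
        self._stack = []

    def on_call(self, return_addr):
        self._stack.append(return_addr)   # record the legitimate target

    def on_return(self, return_addr):
        expected = self._stack.pop()
        if return_addr != expected:
            raise RuntimeError("control-flow violation: return diverted")

ss = ShadowStack()
ss.on_call(0x401000)
ss.on_return(0x401000)        # matches the shadow copy: fine
ss.on_call(0x401200)
try:
    ss.on_return(0xDEADBEEF)  # ROP-style diversion: flagged
except RuntimeError as err:
    print(err)
```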

“You can generate trace data directly from modern Intel® CPUs at runtime,” he says. “You get about 250 Mbytes/minute of control flow data, which you can score in real time with a GPU or FPGA. Interesting bits you can decode, annotate, and analyze to see what the code is up to.”

A novel speculation is that this may turn out to be another application for reinforcement-learning networks. They may prove both faster and more perceptive in examining trace data than would a human programmer.

What about the Edge?

These measures form a plausible roadmap for data-center security. Integrating advanced firewall functions, such as DCI, and sandbox-level protections into mail gateways and application environments can significantly increase security while keeping latencies within the limits envisioned by network functions virtualization and IoT architects. But what about the other end of the IoT: the edge computing nodes and the IoT devices?

Out there, firewall and sandbox appliances are unthinkable luxuries. Hypervisors, when present at all, are not intended for this level of security. Accelerators to speed neural network inference or reinforcement learning are virtually unknown, and for many embedded processors real-time trace with analysis is science fiction.

Looking at data centers may show us the level of threats IoT endpoints face, and the known ways of countering those threats. But it can’t tell us how to implement the increasingly demanding memory and compute loads that come with those countermeasures. We may find that we have to take another look at the resource levels really needed at the network edge. The edge of the IoT may not be as light-weight and power-thrifty as we had hoped.


CATEGORIES: Data Center, IoT, System Security / AUTHOR: Ron Wilson
