Next generation firewalls

User-centric and enterprise applications alike are taking advantage of commonly allowed communication ports and services to ensure their passage across security boundaries and to facilitate operation in the broadest set of networking scenarios. The result has been a steady erosion of the effectiveness of network firewalls and, consequently, the exposure of fundamental flaws in the initial design and subsequent modifications of these foundational elements of most enterprise security strategies.

For the last 15 years, port-blocking firewalls have been the cornerstone of enterprise network security. But much like a stone, they’ve stood still in the face of rapidly evolving applications and threats. It’s no secret that modern applications and threats easily circumvent the traditional network firewall – so much so that enterprises have deployed an entire crop of “firewall helpers” to try to manage applications and threats. But that hasn’t worked – applications and threats still easily make their way around these “helpers”, frustrating enterprise IT groups who have taken on additional complexity and costs without fixing the problem.

New application threats are extremely evasive

Over the past several years there have been a number of significant changes to both the application and threat landscapes.

Pervasive personal applications

To begin with, user-centric applications have become pervasive. Internet-oriented and originally intended primarily for personal communications, this class of applications includes instant messaging, peer-to-peer file sharing, web mail, and the plethora of social networking sites that have emerged in recent years. The issue is that their presence on enterprise networks is practically guaranteed, even if an organisation’s policies indicate otherwise. Not only are these applications extremely popular, but they’ve also been designed to evade traditional countermeasures, such as firewalls, by dynamically adjusting how they communicate.

Business applications mimic personal applications

Two closely related developments complicate matters further. First is the fact that many of these next-generation applications have proven to be extremely useful for more than just personal communications. These days enterprises worldwide are routinely employing them for legitimate business purposes as well – helping to accelerate key processes, improve customer service, and enhance collaboration, communications, and employee productivity in general.

The second development is that new business applications are often being designed to take advantage of the same types of evasion techniques. The intentions are typically positive in this case: to facilitate operation in the broadest set of scenarios and with the least amount of disruption for customers, partners, and the organisation’s own security and operations departments. However, the unintended side effect of IT further losing control over network communications is clearly negative.

Turning to the threat landscape, there have been significant changes there too. In particular, a shift in motivation – from building reputations to actually making money – means that hackers are now focused on evasion as well. In this regard, one of the general approaches they are pursuing is to build threats that operate at the application layer. This allows their creations to pass right through the majority of enterprise defenses, which have historically been designed to provide network-layer protection.

Today’s hackers are also paying considerable attention to the growing population of user-centric applications. This is evidenced by the SANS Institute routinely including instant messaging and peer-to-peer programs on its top-20 list of security risks. Such applications are interesting targets not only because of their high degree of popularity, but also because their evasion capabilities can be leveraged to provide threats with “free passage” into enterprise networks.

IT is no longer in control

The impact of all the ongoing changes to the application and threat landscapes is that IT has lost control. The inability of their security infrastructure to effectively distinguish good, desirable applications from bad or unwanted ones leaves most shops with no reasonable option. One possibility is to continue with business as usual, an approach that ensures the availability of desirable applications by allowing sessions associated with all types of next-generation applications to proceed unchecked. Alternatively, organisations can attempt to clamp down on bad and unwanted sessions as best they can with the tools on hand. Not only is this second approach highly unlikely to succeed, it also has a propensity to throw the good out with the bad.

To rectify this situation, enterprises need security technology with sufficient visibility and intelligence to discern:

  1. Which network traffic corresponds to applications that serve a legitimate business purpose.
  2. Which network traffic corresponds to applications that can serve a legitimate business purpose but, in a given instance, are being used for unsanctioned activities.
  3. Which communications traffic, even though it corresponds to legitimate business activities, should be blocked because it includes malware or other types of threats.

Legacy port-blocking firewalls are ineffective

Providing highly granular access control is functionality that would normally be expected of the enterprise firewall. Based on its ability to control the flow of communications traffic, this long-standing pillar of enterprise security has historically been used in strategic locations to establish the boundary between domains characterised by different levels of trust – such as at the internet gateway, on connections to partner networks, and, more recently, at the logical front door to the data centre.

The problem, though, is that most firewalls are far-sighted. They can see the general shape of things, but not the finer details of what is actually happening. This is because traditional firewalls operate by inferring the application-layer service that a given stream of traffic is associated with based on port numbers. They rely on a convention – not a requirement – that a given port corresponds to a given service (e.g., TCP port 80 corresponds to HTTP). As such, they are also incapable of distinguishing between different applications that use the same port/service.

Consequently, traditional “port-blocking” firewalls are basically blind to the new generation of applications. They can’t account for common evasion techniques such as port hopping, protocol tunnelling, and use of non-standard ports. And, therefore, they can’t even begin to address the visibility and intelligence requirements identified above. For enterprises that continue to rely on these products – as well as other countermeasures that suffer from the same limitations – the result is that their networks are becoming like the wild, wild west: users have free rein to do whatever they want with whichever applications they choose.
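The classification weakness described above can be illustrated with a short sketch. The snippet below is a simplified, hypothetical model – the port table and payload signatures are illustrative, not drawn from any real product – but it shows why a port-to-service lookup waves evasive traffic through, while even a glance at the payload reveals the mismatch:

```python
# Minimal sketch of legacy port-based classification (illustrative only).
# A traditional port-blocking firewall infers the service from the
# destination port alone, relying on convention rather than inspection.

PORT_TO_SERVICE = {
    80: "http",
    443: "https",
    25: "smtp",
    53: "dns",
}

def classify_by_port(dst_port: int) -> str:
    """Classify a flow the legacy way: destination port number alone."""
    return PORT_TO_SERVICE.get(dst_port, "unknown")

def looks_like_http(payload: bytes) -> bool:
    """Crude payload check: does the first token resemble an HTTP request?"""
    return payload.split(b" ")[0] in {b"GET", b"POST", b"HEAD", b"PUT"}

# An evasive application tunnelling over port 80 is waved through as HTTP:
flow = {"dst_port": 80, "payload": b"\x13BitTorrent protocol"}
print(classify_by_port(flow["dst_port"]))   # classified as "http"
print(looks_like_http(flow["payload"]))     # payload says otherwise: False
```

Port hopping and use of non-standard ports exploit exactly this gap: the lookup table is the only evidence the legacy firewall consults.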

Firewall remedies have failed

It doesn’t really help matters that the two most common steps taken to address the inadequacies of traditional firewalls have, for all intents and purposes, been completely unsuccessful.

Bolting-on deep packet inspection is fundamentally flawed

Many purveyors of traditional firewalls have attempted to correct the myopic nature of their products by incorporating deep packet inspection (DPI) capabilities. On the surface, adding a measure of application-layer visibility and control in this manner may appear to be a reasonable approach. However, the boost in security effectiveness that can be achieved in most cases is only incremental because (a) the additional capability is being “bolted on”, and (b) the foundation it is being bolted to is weak to begin with. In other words, the new functionality is integrated rather than embedded, and the port-blocking firewall, with its complete lack of application awareness, is still used for initial classification of all traffic. The problems and limitations this leads to include the following:

  1. Not everything that should be inspected necessarily gets inspected. Because the firewall is unable to accurately classify application traffic, deciding which sessions to pass along to the DPI engine becomes a hit or miss proposition.
  2. Policy management gets convoluted. Rules on how to handle individual applications essentially get “nested” within the DPI portion of the product – which itself is engaged as part of a higher/outer level access control policy.
  3. Inadequate performance forces compromises to be made. Inefficient use of system resources and CPU and memory-intensive application-layer functionality put considerable strain on the underlying platform. To account for this situation, administrators can only implement advanced filtering capabilities selectively.

Deploying firewall “helpers” doesn’t solve the problem, and leads to complex and costly appliance sprawl

Left with no choice, enterprises have also tried to compensate for their firewall’s deficiencies by implementing a range of supplementary security solutions, often in the form of standalone appliances. Intrusion prevention systems, antivirus gateways, Web filtering products, and application-specific solutions – such as a dedicated platform for instant messaging security – are just a handful of the more popular choices. Unfortunately, the outcome is disappointingly similar to that of the DPI approach, with one additional and often painful twist.

Not everything that should get inspected does because these firewall helpers either can’t see all of the traffic, rely on the same port- and protocol-based classification scheme that has failed the legacy firewall, or only provide coverage for a limited set of applications. Policy management is an even greater problem given that access control rules and inspection requirements are spread among several consoles. And performance is still an issue as well, at least in terms of having a relatively high aggregate latency.

Then comes the kicker: device sprawl. As one “solution” after another is added to the network, the device count, degree of complexity, and total cost of ownership all continue to rise. Capital costs for the products themselves and all of the supporting infrastructure that is required are joined by a substantial collection of recurring operational expenditures, including support/maintenance contracts, content subscriptions, and facilities costs (i.e., power, cooling, and floor space) – not to mention an array of “soft” costs such as those pertaining to IT productivity, training, and vendor management. The result is an unwieldy, ineffective, and costly endeavour that is simply not sustainable.

It’s time to fix the firewall

To be clear, because they are deployed in-line at critical network junctions, firewalls see essentially all traffic and are therefore the ideal resource for enforcing control. The challenge, as discussed, is that legacy firewalls are basically blind to the latest generation of applications and threats. That is only one part of the problem, though. The other part is that attempts to remedy the situation have focused solely on compensating for this deficiency. The far-from-stellar track record of these approaches raises a question, however: why not fix the problem at its core instead?

Indeed, why not avoid the need for “helpers” of any type by delivering a solution that natively addresses the essential functional requirements for a truly effective, modern firewall:

  1. The ability to identify applications regardless of port, protocol, evasive tactics or SSL encryption.
  2. The ability to provide granular visibility of and policy control over applications, including individual functions.
  3. The ability to accurately identify users and subsequently use identity information as an attribute for policy control.
  4. The ability to provide real-time protection against a wide array of threats, including those operating at the application layer.
  5. The ability to support multi-gigabit, in-line deployment with negligible performance degradation.
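The first of these requirements – identifying applications regardless of port – amounts to classifying on payload content first and falling back to the port only when nothing matches. The sketch below is a deliberately simplified illustration under assumed signatures (real classification engines combine protocol decoders, decryption, and heuristics, none of which are modelled here):

```python
# Hedged sketch of content-first application identification.
# The signature list is illustrative; a real engine would be far richer.

SIGNATURES = [
    (b"\x13BitTorrent protocol", "bittorrent"),
    (b"SSH-", "ssh"),
    (b"GET ", "http"),
    (b"POST ", "http"),
]

def identify_application(payload: bytes, dst_port: int) -> str:
    # 1. Try content signatures first -- these work on any port.
    for prefix, app in SIGNATURES:
        if payload.startswith(prefix):
            return app
    # 2. Only then fall back to a port-based guess, flagged as such.
    return f"unknown (port {dst_port})"

# SSH running on the HTTPS port is still identified as SSH:
print(identify_application(b"SSH-2.0-OpenSSH_9.0\r\n", 443))  # ssh
```

The design point is the ordering: because the port number is consulted last, port hopping and non-standard ports no longer determine the verdict.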

Palo Alto Networks and the next-generation firewall

Having recognised the challenges posed by the latest generation of applications and threats, Nir Zuk, security visionary and the co-inventor of Stateful Inspection, founded Palo Alto Networks in 2005. Backed by top-tier investors and a management team with extensive experience in the network security industry, its engineers set out to restore the effectiveness of the enterprise firewall by “fixing the problem at its core”. Starting with a blank slate, the team took an application-centric approach to traffic classification in order to enable full visibility and control of all types of applications running on enterprise networks, new-age and legacy ones alike. The result was the only firewall solution available in the market that fully delivers on the essential functional requirements identified in the previous section.

The new cornerstone for enterprise security

As a ground-breaking, enterprise-class security solution, the next-generation firewall affords today’s organisations the opportunity to realise a number of significant benefits. From a technological perspective it helps CIOs tackle a broad range of increasingly substantial challenges by:

  1. Enabling user-based visibility and control for all applications across all ports.
  2. Stopping malware and application vulnerability exploits in real time.
  3. Reducing the complexity of security infrastructure and its administration.
  4. Providing a high-speed solution capable of protecting modern applications without impacting their performance.
  5. Helping to prevent data leaks.
  6. Simplifying PCI compliance efforts.

Of course, it’s also important to consider matters from a business perspective. In this regard, the advantages of the Palo Alto Networks next-generation firewall are that it helps organisations:

  1. Better and more thoroughly manage risks and achieve compliance: by providing unmatched awareness and control over network traffic;
  2. Enable growth: by providing a means to securely take advantage of the latest generation of applications and new-age technologies; and,
  3. Reduce costs: by facilitating device consolidation, infrastructure simplification, and greater operational efficiency.

The net result is that today’s enterprises are provided with precisely what they need to take back control of their networks, to stop making compromises when it comes to information security, to put an end to costly appliance sprawl, and to get back to the business of making money.


Source:  EngineerIT


