By Prayson Pate, Chief Technologist, Overture
Cloud technology
has changed how users and developers think about applications. Why buy servers and maintain software if you
can pay per use? Why build applications
from scratch if you can make use of existing frameworks and software? Why configure web servers if all you want to
do is make a simple web page? Developers and users are now empowered to think
about the higher-level applications and services, rather than the underlying
machinery.
So far, this re-thinking has been limited to end-user
applications housed in data centers. The
network connection between the data center and the end user has been left out of
the equation. However, service providers
must find ways to couple high-performance networks with cloud and content
providers in order to participate in current revenue streams. Why have they not yet applied cloud
technology to accomplish this? In short,
the network is complicated, especially at the metro edge.
While data centers are closed
systems, where devices tend to be very similar and operated by the same entity,
the metro edge is the “Wild West” of disparate technologies and myriad
operators. Given this complexity, how
can cloud technology simplify how we deploy and manage the metro edge? How can it simplify and accelerate the building of services that traverse the edge? The answer involves virtualization, openness,
focusing on services and thinking differently.
Virtualization
The key points of virtualization are abstraction and
separation/layering.
- Abstraction - According to Wikipedia, “Abstractions may be formed by reducing the information content of a concept or an observable phenomenon, typically to retain only information which is relevant for a particular purpose.”
- Separation/Layering - The notion of layering allows a well-defined interface, which can then be used to separate functions. Separation lets us move functions to where they make the most sense (see the sketch below).
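To make the two bullets above concrete, here is a minimal Python sketch, with invented class and method names, of how a well-defined interface separates a function from its location: the same provisioning logic works whether the routing function is embedded in a device or running as software in the network.

```python
from abc import ABC, abstractmethod

# Illustrative only: the interface and classes below are invented to show
# abstraction and separation, not taken from any product or standard.
class ForwardingService(ABC):
    @abstractmethod
    def add_route(self, prefix: str, next_hop: str) -> None:
        """Install a route, regardless of the underlying platform."""

class EmbeddedRouter(ForwardingService):
    """Implementation living in purpose-built hardware at the customer site."""
    def add_route(self, prefix: str, next_hop: str) -> None:
        print(f"CLI push to CPE: ip route {prefix} {next_hop}")

class VirtualRouter(ForwardingService):
    """Implementation running as software on a standard server."""
    def add_route(self, prefix: str, next_hop: str) -> None:
        print(f"API call to vRouter: add {prefix} via {next_hop}")

def provision(service: ForwardingService) -> None:
    # The provisioning logic is layered above the interface, so the
    # function can move to where it makes the most sense without
    # changing this code.
    service.add_route("10.0.0.0/24", "192.0.2.1")

provision(EmbeddedRouter())
provision(VirtualRouter())
```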
Network
Function Virtualization (NFV) is one way that virtualization can be applied
to the metro edge. NFV is an initiative driven
by an international group of leading service providers to lower costs and
simplify networks. NFV replaces
purpose-built network devices with software applications running on standard servers.
Examples
include:
- Managed routing: Today, service providers deliver managed routing using routers deployed at the customer site; in particular, RFC 2547 Layer 3 VPNs are built using this approach. This adds cost and complexity to the deployment of the services. NFV provides the means to simplify the CPE (Customer Premises Equipment) by moving the routing protocol from the CPE into the network. Value-added services such as managed routers and Layer 3 VPNs can then be added on demand without changing the CPE.
- Service assurance: Network Interface Devices (NIDs) are deployed at customer sites to facilitate measurement of service parameters using Service OAM (Operations, Administration and Maintenance). This adds cost because configuration of SOAM is a nightmare in today’s network. NFV could apply abstraction to the NID and its MEGs (Maintenance Entity Groups), MIPs (MEG Intermediate Points) and MEPs (MEG End Points), and support the creation of applications to configure these functions (see the sketch after this list). The result would allow back office systems to treat service assurance as a utility, monitoring how the service is performing against its SLA (Service Level Agreement).
- Managed security: Today, managed security applications are implemented either in dedicated hardware at the customer site or in servers at central offices. Network hardware could provide some basic filtering and capture capabilities, enabling complex aspects of these applications (such as policy definition, storage and distribution) to be moved to a separate application.
- Analytics: For service providers, the majority of analytical data is generated at the edge of the network, but gathering it may require a dedicated appliance. Each service provider also has a different view of what data is important, as well as how it should be packaged. The NFV model provides a means to limit the network hardware to collecting basic data while moving the processing to an offboard server.
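To make the service assurance example concrete, here is a minimal Python sketch, with invented names and values, of how an application could hide the MEG and MEP details behind a simple “monitor this service” request:

```python
from dataclasses import dataclass, field

# Illustrative only: classes and values are invented to show the
# abstraction, not drawn from any standard's data model.
@dataclass
class MaintenanceEndpoint:
    mep_id: int
    port: str

@dataclass
class MaintenanceGroup:
    meg_id: str
    level: int                          # MEG level (0-7 in ITU-T Y.1731)
    meps: list = field(default_factory=list)

class AssuranceService:
    """Back-office view: ask for SLA monitoring, not for MEP plumbing."""
    def __init__(self) -> None:
        self._megs = {}

    def monitor(self, service_id: str, a_end: str, z_end: str) -> MaintenanceGroup:
        # The abstraction keeps only what matters for the purpose
        # (monitor this service end to end); the MEG/MEP details are
        # derived rather than hand-configured.
        meg = MaintenanceGroup(meg_id=service_id, level=4)
        meg.meps = [MaintenanceEndpoint(1, a_end), MaintenanceEndpoint(2, z_end)]
        self._megs[service_id] = meg
        return meg

svc = AssuranceService()
meg = svc.monitor("EVC-1001", a_end="NID-A:port1", z_end="NID-B:port1")
print(f"{meg.meg_id}: {len(meg.meps)} MEPs at level {meg.level}")
```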
These
applications are prime candidates for applying the principles of NFV, although concerns
about reliability and environmental compatibility remain. In addition, new requirements will be placed
on network equipment. Effective network elements won’t be truly “dumb”. The key is to maintain essential physical and
data processing elements in a carrier-grade product, while moving many value-added
functions into commercial-grade hardware, controlled by software in an open
fashion.
Openness
Having worked
in the embedded software business for a long time, I know the skills required
are specialized and uncommon. What’s
more, the compute and storage resources are constrained inside a network
element, and the tools for embedded development are limited. Any change to the software involves a
large-scale and expensive download and upgrade cycle, possibly requiring a
service outage.
Development
of cloud-style applications is quite different.
There is a large pool of talented software developers, compute and
storage resources are plentiful and inexpensive, and development tools are
sophisticated. Since the software is
usually running in a replicated server in a data center, upgrades are easily
managed and can easily be undone.
A critical
aspect of cloud development is the use of applications developed using open
interfaces and standard protocols. Cloud-style software facilitates code reuse
and construction of large systems composed of multiple smaller pieces. Interoperability is achieved through
well-defined Application Programming Interfaces (APIs) that support interaction
between systems at a black box level.
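As one hedged illustration of that black-box interaction, here is a minimal Python sketch of a client that depends only on an API contract (endpoint, verb, JSON payload) and knows nothing about the system behind it. The URL and payload fields are invented for this example.

```python
import json
from urllib import request

def activate_service(api_base: str, service: dict) -> dict:
    """POST a service order to a (hypothetical) orchestration API."""
    req = request.Request(
        url=f"{api_base}/services",
        data=json.dumps(service).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # The caller sees only the contract: a JSON document in, a JSON
    # document out. Everything behind the URL is a black box.
    with request.urlopen(req) as resp:
        return json.load(resp)

# A larger application can be built on this base capability without any
# knowledge of the implementation behind the interface.
order = {"type": "evpl", "bandwidth_mbps": 100, "uni_a": "site-1", "uni_z": "site-2"}
# result = activate_service("https://orchestrator.example.net/api/v1", order)
```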
Open
cloud-style applications can be developed more quickly and less expensively
than embedded applications. And, the
open and extensible nature of cloud-style applications means consumers of these
applications can themselves build larger applications that leverage and extend
the base capabilities.
Services, Services, Services
Applying
cloud technology to the metro edge of the network will require changes in how service
providers create, activate, and assure services.
Service creation – Replacing complexity
with a simple programming model, schema, and tools drastically reduces
development time and cost of new features.
Reducing the cost and time to develop new services will let a service
provider be more responsive to customers and market trends.
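As a sketch of what such a programming model might look like, here is a minimal Python example with invented field names: when a service is described by a simple schema, adding or validating an offering becomes a data change rather than an embedded-software release.

```python
# Illustrative only: the schema and field names are invented.
SERVICE_SCHEMA = {
    "name": str,
    "bandwidth_mbps": int,
    "cos": str,               # class of service
    "sla_latency_ms": float,
}

def validate(order: dict) -> list:
    """Return a list of schema violations; an empty list means valid."""
    errors = []
    for field_name, field_type in SERVICE_SCHEMA.items():
        if field_name not in order:
            errors.append(f"missing field: {field_name}")
        elif not isinstance(order[field_name], field_type):
            errors.append(f"{field_name}: expected {field_type.__name__}")
    return errors

order = {"name": "gold-eline", "bandwidth_mbps": 200,
         "cos": "gold", "sla_latency_ms": 10.0}
print(validate(order) or "order is valid")
```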
Service activation – Key to improving
the speed and accuracy of service activation is increasing the use of automation. Cloud-based technologies enable such
automation by tying together relevant systems, providing an efficient
development environment to deliver such benefits as:
- Zero-touch commissioning – Enabling a technician to install a device straight from the box, without need for local configuration.
- Flow-through provisioning – When an order goes in for a service, flow-through provisioning automatically propagates the needed changes down to relevant network elements (sketched after this list).
- Instantiation of virtual appliances – Turning up services such as routing, firewall, security and VPN without installing new physical equipment.
- Network optimization – As services are turned up and down, available capacity in the network changes. Automating network optimization based on changes maximizes use of network resources.
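Here is a minimal Python sketch of the flow-through provisioning item above, with invented device names and commands, showing a single order fanning out automatically to the elements on the service path:

```python
# Illustrative only: inventory lookup and commands are invented.
def elements_on_path(uni_a: str, uni_z: str) -> list:
    # A real implementation would consult inventory and topology systems.
    return [f"nid-{uni_a}", "agg-switch-7", f"nid-{uni_z}"]

def provision_service(order: dict) -> None:
    for element in elements_on_path(order["uni_a"], order["uni_z"]):
        # Each element receives only the configuration it needs; no
        # technician touches any of the devices.
        print(f"{element}: create EVC {order['id']} at {order['bandwidth_mbps']} Mb/s")

provision_service({"id": "EVC-2001", "uni_a": "site-1", "uni_z": "site-2",
                   "bandwidth_mbps": 100})
```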
Service assurance – Ethernet Service
OAM (SOAM) is the preferred, but inherently difficult, way to measure key SLA
parameters such as packet loss, latency and latency variation. Service providers must also make provision
for handling hard faults such as power, equipment or facility failures, as well
as degradations signaled by Threshold Crossing Alarms. Finally, there must be an efficient way to
diagnose, sectionalize, and repair faults when they occur. All of this is handled today using a variety
of disparate and isolated tools. What is
needed is an efficient way to tie them together to achieve benefits such as
automated configuration, proactive performance reporting, and automatic fault
isolation.
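As one hedged example of tying those tools together, here is a minimal Python sketch, with invented thresholds and measurements, of checking a SOAM measurement interval against an SLA and flagging Threshold Crossing Alarms:

```python
# Illustrative only: SLA limits and the measurement interval are invented.
SLA = {"loss_pct": 0.1, "latency_ms": 25.0, "jitter_ms": 5.0}

def threshold_crossings(measurement: dict) -> list:
    """Compare one measurement interval against the SLA limits."""
    return [f"TCA: {name} = {measurement[name]} exceeds {limit}"
            for name, limit in SLA.items()
            if measurement.get(name, 0) > limit]

# One 15-minute interval as it might be delivered by SOAM measurements.
interval = {"loss_pct": 0.05, "latency_ms": 31.2, "jitter_ms": 2.1}
for alarm in threshold_crossings(interval):
    print(alarm)    # -> TCA: latency_ms = 31.2 exceeds 25.0
```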
Thinking Differently
The changes
discussed above are large, but even bigger is the need to think differently.
- People Are Mobile; Services Should Be Too: We need to consider issues like authentication, security, peering, replication, policy, and multiple platforms when considering how to build services. Doing so is consistent with cloud-style development models and will support the creation of ubiquitous services.
- Services, Not Pipes: Service providers must find ways to couple their high-performance networks with cloud and content providers in order to participate in current revenue streams, with more focus on the end service or application the user is buying.
- Roles and Systems, Not Boxes: Stop thinking about installing nodes in a network and start thinking about enabling services whose elements play various roles but which can be instantiated both in network elements as well as using cloud resources.
- People Are Bad At Being Robots: People are inefficient at handling repetitive and mundane tasks. Applying cloud technology will help define new solutions to automate processes, which, in turn, will lead to greater efficiency by allowing people to focus on their creativity and problem-solving skills.
- Building for Today, Anticipating Tomorrow: The cloud has enabled a whole new generation of applications and services that were not envisioned by the builders of the first data centers. As Jason Kleeh of Brocade noted, “the best app is the one that we haven't thought of yet.”
Summary
We have an
opportunity to enable the next generation of services by applying cloud
technology to the metro edge of the network.
Doing so will require not only cloud technology itself but also the supporting technologies of SDN and NFV. This is a
big change, but the benefits will be even larger.
Prayson Pate co-founded Overture and brings more than 24 years of experience developing software and hardware for networking products to the company. Prayson is active in standards bodies such as the MEF and IETF, and he was chosen to be the co-editor of the pseudowire emulation edge-to-edge (PWE3) requirements and architecture documents (RFCs 3916 and 3985). He holds nine patents.