Sunday, October 25, 2015

Blueprint: NFV - the New and Improved Network…with Some of the Same Old Baggage

by Douglas Tait, Director, Product Marketing, Oracle Communications

Network function virtualization (NFV) is in motion. Software functions are being decoupled from hardware infrastructure and virtualized, becoming capable of running on a wide array of suitable hardware. In fact, NFV adoption is so strong that, according to Heavy Reading analyst Jim Hodges speaking at the recent NFV Everywhere event in Dallas, the global NFV market will grow from $485.3 million in 2014 to $2.2 billion by the end of this year. The promise here, of course, is that communications service providers (CSPs) can reduce the operational costs and expenditures related to updating, maintaining, or enhancing network functions by decoupling the software from the hardware. This gives CSPs more options to buy and deploy best-of-breed software components that run on best-of-breed hardware components.

If only it were that simple.

This article will cover why the path to NFV isn’t so clear-cut, and offer some ideas for overcoming the complexity.

The New and Improved Network

The NFV “divide and conquer” approach makes sense: the software lifecycle is completely different from the hardware lifecycle, and IT has made huge strides in developing and testing software virtualization technology. NFV provides the blueprint to virtualize software and deploy agile services when needed, or when upgrades are required, all without major, expensive network redeployments.

This separation of software from hardware is a significant first step forward for the communications industry, and it creates new ways to manage network elements. Now an open market for best-of-breed hardware is possible, which could drive down costs. Also possible is the encapsulation of software elements as “virtualized network functions” (VNFs), which allows CSPs to handle software lifecycle management separately so that upgrades and enhancements do not affect the hardware environment (except in the event of rare scaling and performance dependencies).

NFV is moving network technology in the right direction, and in many ways it is similar to the cloud computing revolution in IT. And, as with cloud computing, now that the deployment model has been revolutionized, the next step for NFV is to loosen the hold on software functions. Specifically, NFV has matured to the point where CSPs can deploy on any suitable best-of-breed hardware. Now it is time for CSPs to have more choices to deploy best-of-breed software on that hardware and build the best possible network.
   
But here is the fly in the ointment, or the “same old baggage”: while hardware components are interoperable, the network software components were never designed to be interoperable.

Same Old Baggage

For NFV software, interoperability is required on two levels: 1) between VNFs and 2) between the management and orchestration (MANO) systems and the VNFs. Regardless of NFV, the same old baggage comes from taking existing network functions that do not interoperate and virtualizing them into VNFs: you still have network functions that do not interoperate. In many ways, NFV is compounding the problem, because currently there are no agreed-upon standards between the MANO and the VNFs. As a result, the various suppliers are creating the best interface for just the NFV products they own.

If this situation sounds familiar, that is because the problem of NFV interoperability is not new. There have already been several attempts to create hard standards for network functions to fix it: TINA-C, JAIN, Parlay, and OneAPI. Each made a valiant effort at standardization, yet none fully achieved software interoperability in the communications network. Now the NFV community is pursuing interoperability with an open source approach, creating an open source reference implementation model for NFV and hoping that the network equipment providers will follow. This open source model has had some success in the IT industry; think of the Apache Software Foundation, Linux, and GNU. And for the communications industry, projects like OPNFV, OpenDaylight, ONF, OpenStack, and Open vSwitch offer an approach that would move the industry to a common software model, but without requiring NFV vendors to comply with a standard.

The original NFV whitepaper makes it clear that many of the largest and most influential CSPs want to allow VNFs to proliferate in an open market where network providers may mix, match, buy, and deploy best-of-breed VNFs that would automatically connect and run their networks. But to make this objective a reality, full interoperability between VNFs and MANO systems is required. So what is the best way for the industry to move forward from this stalemate?

NFV: Path to Software Interoperability 

To overcome these obstacles and achieve the full potential of NFV, the industry should consider not just one solution, but an integrated, multi-step path that jump-starts the VNF market and turns its premise into a promise backed by a real plan. Here are a few things the industry should consider:

  • Assemble a policing agency or an interoperability group that tests or runs the software and generates compliance reports.  As discussed, one of the major roadblocks to reaching NFV’s potential is that there is very little standardization enforcement across the communications industry. A standards body or policing agency could help by validating that vendors’ products and solutions meet the defined specifications required to call themselves “certified NFV suppliers,” and can therefore be deemed trustworthy by customers.
  • Continue with the open source community offerings.  Although the open source communities do not have a charter to enforce interoperability, CSPs may use the reference implementation the communities produce as a model or means to test the VNFs. 
  • Define a standard API for VNFs.  While this approach does not completely solve the interoperability issues and does not enforce a standard between the VNFs and the MANO, it would provide a universal programming interface for all VNFs. VNF providers could bring products to market even if they do not have their own MANO product (a minimal sketch of what such an interface could look like follows this list).
  • Define a standard protocol that the industry could adopt as a universal standard, or that at least would be enforceable via something like the Java Community Process. This would enable CSPs to compare vendors, supporting a fair and free market in which CSPs could buy the best product for their company without fear that the vendor is violating standards.
  • Provide an interface framework in the VNF manager.  In the absence of hard protocol standards, another way to accelerate the adoption of NFV is a VNF plugin framework. This would allow VNF suppliers to build and test executable plugins that interface with their products yet run within the VNF manager, promoting technical interoperability between the VNF manager and the VNF while opening the market for suppliers to work together. While a plugin framework does not solve the problem of interoperability between VNFs, VNF managers and the various VNF suppliers would be able to integrate their products rapidly. And when the industry finally advances and produces a standard, the only update required is the plugin; the VNF manager and the VNFs would require little change (the sketch after this list also illustrates this plugin idea).
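
To make the last two ideas more concrete, here is a minimal sketch, in Java, of what a standardized VNF lifecycle interface and a plugin-style VNF manager might look like. The names and methods below (VnfPlugin, instantiate, scale, terminate) and the ServiceLoader-based discovery are illustrative assumptions for this article only; they are not part of any ETSI NFV specification or vendor product.

    import java.util.Map;
    import java.util.ServiceLoader;

    /** Hypothetical standard lifecycle contract that every VNF plugin would implement. */
    interface VnfPlugin {
        String vendor();                                // identifies the VNF supplier
        void instantiate(Map<String, String> config);   // bring the virtualized function up
        void scale(int instances);                      // adjust capacity up or down
        void terminate();                               // tear the function down
    }

    /** Example plugin a VNF supplier would build, test, and ship alongside its product. */
    class ExampleFirewallPlugin implements VnfPlugin {
        public String vendor() { return "ExampleVendor"; }
        public void instantiate(Map<String, String> config) {
            System.out.println("Instantiating firewall VNF with " + config);
        }
        public void scale(int instances) {
            System.out.println("Scaling firewall VNF to " + instances + " instances");
        }
        public void terminate() {
            System.out.println("Terminating firewall VNF");
        }
    }

    /** A VNF manager that drives any vendor's plugin through the shared contract. */
    public class VnfManagerSketch {
        public static void main(String[] args) {
            // Register one plugin directly so the sketch stays self-contained.
            VnfPlugin plugin = new ExampleFirewallPlugin();
            plugin.instantiate(Map.of("flavor", "small", "network", "mgmt"));
            plugin.scale(3);
            plugin.terminate();

            // In practice the manager could discover vendor plugins on its classpath
            // (this requires a META-INF/services/VnfPlugin entry in each plugin jar).
            for (VnfPlugin discovered : ServiceLoader.load(VnfPlugin.class)) {
                System.out.println("Discovered plugin from " + discovered.vendor());
            }
        }
    }

The design point is that vendor-specific logic lives entirely inside the plugin: when a real standard finally emerges, only the plugin implementation needs to change, while the VNF manager and the VNFs it drives remain largely untouched.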

If the industry can develop standards against which vendors can build NFV solutions, and employ a policing body to enforce those standards, VNF interoperability will move forward, driving unprecedented innovation that brings new services and new revenue streams to market quickly and with much lower risk. In the meantime, the industry must keep moving: it should take action now to enable industry players to work together, promoting a culture of openness and innovation.

About the Author

Doug Tait is director of product marketing, Oracle Communications.

Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

Oracle Partners with Intel to Take on IBM as it Pivots to Cloud

Oracle and Intel announced a special partnership aimed at migrating customers running Oracle databases on IBM Power Systems to a new platform based on Intel Xeon silicon. The companies said they are able to deliver database-aware software enhancements optimized for Intel Xeon.


At this week's Oracle OpenWorld, the company is rolling out enhancements to its partner network as it makes a strategic "pivot to the cloud." “Cloud is our top priority and we are aligning our resources to that strategic initiative,” said Shawn Price, senior vice president, Cloud, Oracle. “We will work with our partner ecosystem to pivot to the cloud and fully capitalize on the historic opportunity before us. We remain committed to expanding our partner community and providing all of its valued members the tools, technology and expertise they need to deliver excellence to our joint customers and succeed in the market.”

  https://www.oracle.com/engineered-systems/exadata/exayourpower.html

Oracle OpenWorld 2015 Expects 60,000 Attendees

This week's Oracle OpenWorld 2015 in San Francisco is expected to attract 60,000 in-person attendees. The conference, which runs October 25-29, takes place at 18 locations throughout downtown San Francisco. It features 2,500 sessions, 3,000 speakers, and more than 400 Oracle demos, as well as partner and customer exhibitions.

Elton John will highlight the customer party at Treasure Island.

https://www.oracle.com/