Tuesday, June 4, 2019

Blueprint column: The importance of Gi-LAN in 5G

by Takahiro Mitsuhata, Sr. Manager, Technical Marketing at A10 Networks 

Today's 4G networks support mobile broadband services (e.g., video conferencing, high-definition content streaming, etc.) across millions of smart devices, such as smartphones, laptops, tablets and IoT devices. The number of connected devices is on the rise, growing 15 percent or more year-over-year and projected to reach 28.5 billion by 2022, according to Cisco's VNI forecast.

Adding network nodes to scale out capacity is a relatively easy change. Meanwhile, it is essential for service providers to keep offering innovative value-added services to differentiate the service experience and monetize new services. These services include parental control, URL filtering, content protection and endpoint protection from malware and identity theft, to name a few. Service providers, however, now face new challenges of operational complexity and extra network latency introduced by those services. Such challenges will become even more significant with 5G, which will drive an even more rapid proliferation of mobile and IoT devices. It will be critical to minimize latency to ensure there are no interruptions to the emerging mission-critical services that are expected to increase dramatically on 5G networks.

Gi-LAN Network Overview

In a mobile network, there are two segments between the radio network and the Internet: the evolved packet core (EPC) and the Gi/SGi-LAN. The EPC is the packet-based mobile core carrying both voice and data on 4G/LTE networks. The Gi-LAN is the network where service providers typically deliver homegrown and value-added services through a combination of IP-based service functions, such as firewall, carrier-grade NAT (CGNAT), deep packet inspection (DPI), policy control, and traffic and content optimization. These services are generally provided by a wide variety of vendors. Service providers need to steer traffic and direct it to specific service functions, which may be chained, only when necessary, in order to meet specific policy enforcement and service-level agreements for each subscriber.
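As a vendor-neutral illustration of the idea, the minimal sketch below maps hypothetical subscriber plans to ordered chains of Gi-LAN service functions, so traffic only traverses the functions its policy actually requires. The plan names and function list are assumptions for illustration, not any particular product's behavior.

# Minimal sketch of subscriber-aware traffic steering and service chaining.
# Plan names and service functions are hypothetical, not a vendor implementation.
SERVICE_CHAINS = {
    "basic":    ["cgnat", "gi_firewall"],
    "family":   ["dpi", "url_filtering", "parental_control", "cgnat", "gi_firewall"],
    "business": ["dpi", "content_optimization", "cgnat", "gi_firewall"],
}

def apply_service_function(name: str, packet: dict) -> dict:
    # Placeholder: a real Gi-LAN function would inspect, translate or filter the packet.
    packet.setdefault("traversed", []).append(name)
    return packet

def steer(subscriber_plan: str, packet: dict) -> dict:
    """Apply each service function in the subscriber's chain, in order."""
    for function_name in SERVICE_CHAINS.get(subscriber_plan, ["cgnat", "gi_firewall"]):
        packet = apply_service_function(function_name, packet)
    return packet

if __name__ == "__main__":
    print(steer("family", {"src": "10.0.0.1", "dst": "203.0.113.5"}))

In a real deployment, the chain selection would be driven by subscriber awareness and the network's policy function (e.g., PCRF) rather than a static table.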

The Gi-LAN network is an essential segment that enables enhanced security and value-added service offerings to differentiate and monetize services. Therefore, it's crucial to have an efficient Gi-LAN architecture to deliver a high-quality service experience.

 Figure: Gi-LAN with multiple service functions in the mobile network

Challenges in Gi-LAN Segment

In today's 4G/LTE world, a typical mobile service provider deploys an ADC, a DPI engine, a CGNAT and a firewall as part of its Gi-LAN service components. These are mainly deployed as independent network functions on dedicated physical devices from a wide range of vendors, which makes the Gi-LAN complex and inflexible from an operational and management perspective. This type of architecture, known as a monolithic architecture, is reaching its limits and does not scale to meet the demands of rising data traffic in 4G and 4G+ networks, and it will continue to be an issue in 5G infrastructure deployments. The two most serious issues are:

1. Increased latency
2. Significantly higher total cost of ownership

Latency is becoming a significant concern since, even today, online gaming and video streaming services demand lower latency. With the transition to 5G, ultra-reliable low-latency connectivity targets latencies of less than 1 ms for use cases such as real-time interactive AR/VR, the tactile Internet, industrial automation, and mission- and life-critical services such as remote surgery and self-driving cars. An architecture with individual service functions on separate hardware has a major impact on this promise of lower latency: multiple service functions are usually chained, and every hop a data packet traverses between service functions adds latency, degrading the overall service.
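A back-of-the-envelope calculation makes the point; the per-hop figure below is an assumed, representative value, not a measurement.

# Illustration (assumed numbers) of how per-hop latency accumulates when
# service functions run on separate appliances.
per_hop_latency_ms = 0.25          # assumed transit + processing cost per hop
chain = ["dpi", "url_filtering", "cgnat", "gi_firewall"]

total_added_latency_ms = per_hop_latency_ms * len(chain)
print(f"Added Gi-LAN latency: {total_added_latency_ms} ms")   # 1.0 ms
# Against a sub-1 ms 5G URLLC budget, the chain alone can consume the entire budget.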

Managing each solution independently is also a burden. The network operator must invest in monitoring, management and deployment for every device from every vendor individually, resulting in large operational expenses.

Solution – Consolidating Service Functions in Gi-LAN

There are a few approaches to overcoming these issues. A service-based architecture (SBA) or microservices architecture addresses the operational concerns, since it brings greater flexibility, automation and significant cost reduction. However, it is less likely to address the latency concern, because each service function, whether a VNF or a microservice, still contributes to the overall latency as long as it is deployed as an individual VM or microservice.

So, what if multiple service functions were consolidated into one instance? For example, CGNAT and a Gi firewall are fundamental components in the mobile network, and some subscribers may choose additional services such as DPI or URL filtering. Such consolidation is feasible only if the product/solution supports flexible traffic steering and service chaining capabilities alongside those service functions.

Consolidating Gi-LAN service functions into one instance/appliance drastically reduces the extra latency and simplifies network design and operation. The concept is not new, but few vendors can provide consolidated Gi-LAN service functions at scale.
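The latency benefit can be sketched with the same kind of rough model (all figures here are assumptions for illustration): consolidation removes the inter-appliance hops and leaves only the per-function processing cost.

# Rough comparison (assumed numbers) of chained appliances vs. a consolidated instance.
functions = ["dpi", "url_filtering", "cgnat", "gi_firewall"]
processing_ms_per_function = 0.05   # assumed
interhop_ms = 0.20                  # assumed wire + switching cost between appliances

chained = len(functions) * processing_ms_per_function + (len(functions) - 1) * interhop_ms
consolidated = len(functions) * processing_ms_per_function

print(f"Chained appliances:    {chained:.2f} ms")       # 0.80 ms
print(f"Consolidated instance: {consolidated:.2f} ms")  # 0.20 ms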

Therefore, when building an efficient Gi-LAN network, service providers need to consider a solution that can offer:
  • Multiple network and service functions on a single instance/appliance
  • Flexible service chaining support
  • Subscriber awareness and DPI capability for granular traffic steering
  • A variety of form-factor options - physical (PNF) and virtual (VNF) appliances
  • High performance and capacity with scale-out capability
  • Easy integration and transition to SDN/NFV deployments

About the author

Takahiro Mitsuhata, Sr. Manager, Technical Marketing at A10 Networks

About A10

A10 Networks (NYSE: ATEN) provides Reliable Security Always™, with a range of high-performance application networking solutions that help organizations ensure that their data center applications and networks remain highly available, accelerated and secure. Founded in 2004, A10 Networks is based in San Jose, Calif., and serves customers globally with offices worldwide. For more information, visit: www.a10networks.com and @A10Networks

A10 adds Zero-day Automated Protection to DDoS Defense

A10 Networks is bolstering its Thunder Threat Protection System (TPS) family of Distributed Denial of Service (DDoS) defense solutions with Zero-day Automated Protection (ZAP) capabilities.

A10's ZAP capabilities are designed to automatically recognize the characteristics of DDoS attacks and apply mitigation filters without advanced configuration or manual intervention.

A10 Networks’ ZAP comprises two components: dynamic attack pattern recognition by a machine learning algorithm, and heuristic behavioral analysis to dynamically identify anomalous behavior and block attacking agents. ZAP works in conjunction with A10 Networks’ adaptive DDoS security model and its five-level adaptive policy mitigation engines to provide a complete, in-depth defense system. This comprehensive approach blocks DDoS attacks while protecting legitimate users from the indiscriminate collateral damage typically associated with traditional DDoS protection methods.

ZAP policies can be enforced by a combination of hardware and software. Thunder SPE (Security and Policy Engine) appliances can serve up to 100,000 ZAP policies at line rate, with the remaining ZAP policies served in software. This provides better mitigation performance than traditional software-only solutions, enabling superior response time and scalability.
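As a rough illustration of that split (a hypothetical sketch, not A10's actual allocation logic), policies can be thought of as filling the fixed hardware capacity first, with any overflow handled in software:

# Illustrative split of mitigation policies between a hardware engine with a
# fixed line-rate capacity and a software path. Capacity per the article.
HARDWARE_POLICY_CAPACITY = 100_000

def split_policies(policies: list) -> tuple:
    """Fill the hardware engine first; overflow is handled in software."""
    hardware = policies[:HARDWARE_POLICY_CAPACITY]
    software = policies[HARDWARE_POLICY_CAPACITY:]
    return hardware, software

hw, sw = split_policies([f"zap-policy-{i}" for i in range(130_000)])
print(len(hw), "policies enforced in hardware,", len(sw), "in software")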

“In today’s climate with the dramatic increase in polymorphic multi-vector attacks and the chronic shortage of qualified security professionals, enterprises and service providers need intelligently automated defenses that can accomplish tasks autonomously,” said Lee Chen, CEO of A10 Networks. “Manual interventions are not only resource-intensive but too slow and ineffective, resulting in a greater potential of network downtime and high cost to the organization.”

Separately, A10 published a study, conducted by the Ponemon Institute, highlighting the critical need for DDoS protection that provides higher levels of scalability, intelligence integration, and automation. Some 325 IT and security professionals at ISPs, mobile carriers and cloud service providers participated in the survey.

Eighty-five percent of survey respondents expect DDoS attacks to either increase (54 percent) or remain at the same high levels (31 percent). Most service providers do not rate themselves highly in either prevention or detection of attacks: just 34 percent grade themselves as effective or highly effective at prevention, and 39 percent at detection.

The DDoS intelligence gap was highlighted by a number of survey findings:

  • Lack of actionable intelligence was cited as the number-one barrier to preventing DDoS attacks, followed by insufficient personnel and expertise, and inadequate technologies. 
  • Out-of-date intelligence, which is too stale to be actionable, was cited as the leading intelligence problem, followed by inaccurate information, and a lack of integration between intelligence sources and security measures. 
  • Solutions that provide actionable intelligence were seen as the most effective way to defend against attacks. 
  • The most important features in DDoS protection solutions were identified as scalability, integration of DDoS protection with cyber intelligence, and the ability to integrate analytics and automation to improve visibility and precision in intelligence gathering. 
  • Communications service providers who rated their DDoS defense capabilities highly were more likely to have sound intelligence into global botnets and weapon locations. 

“Communications service providers are right, both in their expectations for increased attacks and about their need for better intelligence to prevent them,” said Gunter Reiss, vice president, marketing at A10 Networks. “The continuing proliferation of connected devices and the coming 5G networks will only increase the potential size and ferocity of botnets aimed at service providers. To better prepare, providers will need deeper insights into the identities of these attack networks and where the weapons are located. They also need actionable intelligence that integrates with their security systems and the capacity to automate their response.”

https://www.a10networks.com

Arista's enterprise hybrid cloud leverages Azure

Arista Networks introduced its next-gen hybrid cloud architecture for the enterprise that leverages the Microsoft Azure global network.

The new architecture integrates Arista EOS with Azure and Azure Stack.

Arista said its EOS software enables seamless connectivity with elastic workload scaling across regions, accounts, and availability zones for the broadest array of workload, application, service, and data types. This now includes support for Azure Virtual Machines, Azure Virtual Networks, container infrastructure, Azure Web Functions, and trusted connectivity to the existing enterprise infrastructure.

“Hybrid computing, linking public to private clouds, and connecting the broadest set of resources to deliver amazing end-user experiences is the most network-centric computing architecture,” states Douglas Gourlay, vice president and general manager, Cloud Networking Software for Arista Networks. “Building on Microsoft Azure, we ensure a reliable and consistent experience for users, architects, and the operators of these critical systems.”

Yousef Khalidi, corporate vice president of Azure Networking, Microsoft, said, “The evolution from large central enterprise datacenters to a hybrid environment is changing the IT landscape. That’s why at Microsoft, we designed Azure to be hybrid from the beginning. Our differentiated offerings like Azure Stack, to consistently build and run hybrid applications across cloud boundaries, and Azure Virtual WAN, which provides a simple, unified global connectivity and security platform, deliver the ultimate consistent cloud experience to customers. We’re pleased Arista selected Azure to help businesses realize the benefits that hybrid can deliver.”

Stated benefits include:

  • Deployment and runtime workload portability with consistent policy, identity, and controls enabling any workload to be freely placed in the location that best suits the business requirements
  • Consistent workload, device, and user identity across the Enterprise
  • Application and user observability and telemetry to rapidly identify and resolve issues
  • Full user and application state history enabling supervised learning models and AI/ML processing to uncover issues and risks before they affect the Enterprise
  • A global connectivity model from the user and edge to the datacenter, cloud, and AI/ML pipeline
  • Autonomic operations to include elastic scaling of networking resources in the hybrid cloud
  • Cloud-based management and transactional model enabling an increasingly consistent operating model across all Enterprise computing and application assets

https://www.arista.com/en/company/news/press-release/7749-pr-20190604

Arista's 7800R switches deliver 400G scalability for cloud networks

Arista Networks introduced its new 7800R3 Series switches boasting 36 400G ports per line card, a 4X increase compared to prior modular systems, to address the largest cloud data center routing requirements.

The rollout includes the new Arista 7800R family for 400G cloud networks and the next generation of the Arista 7500R and 7280R Series. All R3 Series systems feature the proven VoQ deep-buffer architecture, with support for 400G, increased route scale and new telemetry functions.

The 7800R3 Series is initially available in 4- and 8-slot modular chassis, delivering up to 576 ports of 400G, or 460 Tbps of capacity, for the largest-scale environments. It offers a choice of 100G and 400G modules with optional MACsec for secure interconnect across data centers. The 7800R Series supports Segment Routing, MLAG, ECMP, EVPN and VXLAN technologies.
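As a quick sanity check on the headline numbers (this assumes the capacity is counted full duplex, which the release does not state explicitly), 576 ports of 400G works out to roughly 230 Tbps per direction, or about 460 Tbps bidirectionally:

# Sanity check of the quoted figures; full-duplex counting is our assumption.
ports = 576
port_speed_gbps = 400
one_direction_tbps = ports * port_speed_gbps / 1000   # 230.4 Tbps
full_duplex_tbps = 2 * one_direction_tbps              # 460.8 Tbps
print(one_direction_tbps, full_duplex_tbps)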

Dell EMC collaborates with ADVA on virtualized uCPE

ADVA and Dell EMC today announced a strategic collaboration to deliver open virtualized uCPE solutions for service provider and enterprise customers.

The new uCPE solution will be built on ADVA’s Ensemble Connector NFVi platform, which enables network operators to access virtual network functions (VNFs), including software-defined WAN (SD-WAN). New and existing Dell EMC customers will be able to take advantage of consolidated management as well as a comprehensive portfolio of over 50 onboarded VNFs. The joint solution has already been harnessed by Verizon to securely deploy multiple software-based services on a single uCPE installation.

“There is a real need among service providers and enterprises to update network operations to address distributed and cloud-based applications and capitalize on changing economics enabled by cloud models,” said Tom Burns, SVP and GM, Networking and Solutions, Dell EMC. “By infusing Open Networking into access networks to the cloud with the Virtual Edge Platform family, Dell EMC can help customers modernize infrastructure and transform operations while automating service delivery and processes.”

“Today’s network operators need solutions that are optimized for uCPE. Dell EMC’s new uCPE platforms are the answer to that challenge. Combined with our high-performance Ensemble Connector NFVi software, it provides an open architecture to support multiple simultaneous VNFs. By connecting the enterprise edge to the cloud, it provides unrivaled choice, drives growth and significantly improves end-user experience,” commented James Buchanan, GM, Edge Cloud, ADVA.

FCC: List of winning bidders for 28 GHz licenses

The FCC announced the winning bidders for its auction of 28 GHz upper microwave flexible use service licenses (Auction 101). The full results are available here:


https://www.fcc.gov/document/auction-101-results-public-notice


Seagate ships 16TB helium-based HDDs

Seagate Technology is now shipping 16TB helium-based enterprise drives as part of the Exos X16 family for hyperscale data centers. The company also updated its IronWolf and IronWolf Pro Network Attached Storage (NAS) drive lines with new 16TB capacity models.

Seagate’s new Exos X16 16TB drive delivers 33 percent more petabytes per rack compared to 12TB drives while maintaining the same footprint.
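The 33 percent figure follows directly from the drive capacities, assuming the same number of drive slots per rack (our assumption here):

# Capacity gain per slot, and hence per rack at the same footprint.
capacity_new_tb, capacity_old_tb = 16, 12
increase = (capacity_new_tb - capacity_old_tb) / capacity_old_tb
print(f"{increase:.0%} more capacity per slot")   # 33%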

“The Exos X16 is key in reducing total cost of ownership for enterprise system developers and cloud data centers while supporting multiple applications with varying workloads,” said Sai Varanasi, vice president of product line marketing at Seagate Technology. “The Exos X16 is the industry’s leading helium-based 16TB capacity drive. We are partnering with our cloud/enterprise customers to bring this product to the market to fulfill the pent-up exabyte demand in data centers.”

Achronix next-gen FPGAs leverage Rambus GDDR6 PHY

Achronix Semiconductor's next-generation Speedster7t FPGA family will use Rambus GDDR6 PHY for its top-end data rates.

Rambus says its GDDR6 PHY is the fastest memory IP on the market, at 16 Gbps. The Rambus GDDR6 PHY enables the communication to and from high-speed, high-bandwidth GDDR6 SDRAM memory, which is a high-performance memory solution that can be used in a variety of applications that require large amounts of data computation.

Rambus worked closely with Achronix on the package design to support the eight GDDR6 IP controllers on the first Speedster7t device. Providing up to 4 Tbps of memory bandwidth, the new Speedster7t FPGAs include a new 2D network-on-chip (NoC) and a high-density array of new machine learning processors (MLPs). Merging FPGA programmability with ASIC routing structures and compute engines, the Speedster7t family creates a new “FPGA+” class of technology, pushing the boundaries of high-performance compute acceleration.
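The 4 Tbps figure is consistent with eight controllers each running a 32-bit-wide GDDR6 interface at 16 Gbps per pin; the interface width is an assumption here, as the article does not state it:

# Rough check of the aggregate GDDR6 bandwidth figure (32-bit width is assumed).
data_rate_gbps_per_pin = 16
pins_per_controller = 32
controllers = 8
aggregate_tbps = data_rate_gbps_per_pin * pins_per_controller * controllers / 1000
print(aggregate_tbps)   # 4.096 Tbps, consistent with "up to 4 Tbps"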


NASCAR picks AWS as preferred cloud

The National Association for Stock Car Auto Racing (NASCAR) has chosen AWS as its standard for cloud-based machine learning and artificial intelligence workloads.

Specifically, NASCAR will leverage AWS services to enhance its full range of media assets including websites, mobile applications, and social properties for its 80 million fans worldwide. NASCAR will use the breadth of AWS technologies to build cloud-based services and automate processes, including a new video series on NASCAR.com called This Moment in NASCAR History powered by AWS.

“Amazon’s 20 years of machine learning experience, along with our broad analytics and machine learning capabilities, make us the best choice for organizations who want to use machine learning to gain insights into their data and establish new levels of engagement with their customers,” said Mike Clayville, Vice President, Worldwide Commercial Sales at AWS.