Tuesday, August 24, 2021

Windstream Wholesale announces 3 new fiber routes

Windstream Wholesale has begun the initial work to add fiber to three routes on its rapidly expanding transport network: New York City to Albany to Montreal; Jacksonville, Florida, to Savannah, Georgia, to Myrtle Beach, South Carolina, to Raleigh, North Carolina; and Tulsa to Muskogee, Oklahoma, to Little Rock, Arkansas. 

Windstream said these three new routes are part of its initiative to expand its fiber infrastructure and provide a rich set of high-bandwidth WAN advantages, such as low-latency transport to major data and international hubs across the U.S., including Miami, New York City, Dallas, Los Angeles and San Jose. The project will pull high-count fiber through existing conduit on some spans and construct new paths on other segments. Diverse routing options protect networks against outages while delivering coast-to-coast transport, as well as connectivity to thousands of Windstream-lit buildings across the U.S.

Windstream’s recently announced offerings include:

  • Dark fiber and lit transport services from the Hillsboro, Oregon, ecosystem to Portland, Seattle, Sacramento, Salt Lake City, and points beyond. Diversity options include multiple routes from Hillsboro to Portland to include or avoid the Pittock Block data centers.
  • Access from the Jacksonville and Boca Raton international cable landing stations to all major U.S. data centers.
  • Wave services up to 400G, with the first commercial deployment turned up in 2020.

Windstream Wholesale continues to leverage its new Intelligent Converged Optical Network (ICON) with efficiency, speed, and diversity in mind, providing a high-bandwidth infrastructure that will enable the digital transformation and innovation needed for today’s rapidly evolving businesses. 


Arista expands its R3-Series routers for next-gen edge

Arista Networks introduced new routers and enhancements to its EOS (Extensible Operating System) for next-generation network edge roles in the multi-cloud era. 

The new capabilities for virtual private networking and traffic engineering enable three additional edge use cases, for multi-cloud, metro and 5G RAN, based on Arista’s cloud-grade routing principles.

  • Multi-Cloud Edge: Demand for public cloud services is creating a new cloud edge, formed by extending the cloud footprint to the network edge to deliver services closer to the end customer. Cloud edges are being built globally, with 100G/400G connecting directly to the cloud, based on a repeatable Layer 3 architecture, common software-driven provisioning and programmatic traffic steering, to deliver uninterrupted service that scales globally.
  • Metro Edge: Legacy router designs are degrading quality and end-user experiences for service providers. To meet new bandwidth demands and deliver faster connectivity for E-LINE and E-LAN services, service providers are upgrading their metro Ethernet edge with Capex-efficient, high-density, merchant-silicon-based 100G/400G routing platforms. To achieve Opex efficiency, they are also simplifying protocol complexity by adopting a single protocol for multiple edge VPN services and driving consistent automation across the metro fabric, a combination that enables faster, more scalable service delivery.
  • 5G RAN Edge: 5G architecture is disaggregating traditional mobile backhaul, bringing the public cloud footprint into the RAN (Radio Access Network) for real-time, localized service updates. To address bandwidth demand from distributed nodes and radically scaled user/device and traffic profiles, the 5G edge requires a repeatable, scale-out routing design with high-speed, high-performance connectivity, a single OS with open standards-based protocols, and a consistent automation framework across mobile backhaul and Multi-access Edge Compute (MEC) deployments, enabling Opex efficiency and faster time to market for new services.

The features for the new routing use cases are available now in the latest EOS release.

Additions and updates to the Arista R3-Series include:

7800R3 and 7500R3 modular systems scaling to 460Tbps of capacity

  • New 7500R3 and 7800R3 line cards with native 25G SFP ports for flexible scale-out of edge routing applications
  • Expanded 7800R3 range with 16-slots for up to 576 400G or 768 100G ports

7280R3 Series of 1U and 2U systems scaling up to 9.6Tbps, with 24 ports of 400G in 1U and newly released options expanding the solutions for the network edge

  • 7280CR3MK-32P4S and 7280CR3MK-32D4S with 32 100G ports and four 400G ports providing both routing scale and integrated MACsec encryption at 100G
  • 7280CR3-36S and 7280CR3K-36S, offering 36 100G ports and flexible port modes for all network edge roles from 10G to 400G
  • 7280SR3M-48YC8, offering 48 25G ports plus 8 100G ports with MACsec encryption
  • 7280SR3-40YC6, offering 40 25G ports plus 6 100G ports for aggregation and access deployment
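As a quick sanity check, the headline capacity figures above line up with the per-port math, assuming (as is common in vendor datasheets) that the 460Tbps modular-system figure counts full-duplex capacity:

```python
# Back-of-envelope check of the Arista R3-Series headline figures.
# Assumption: the 460Tbps modular-system figure is full-duplex capacity.

# 7800R3: 16 slots, up to 576 x 400G ports
ports_per_slot = 576 // 16              # 36 x 400G ports per line card
one_way_tbps = 576 * 400 / 1000         # 230.4 Tbps unidirectional
full_duplex_tbps = 2 * one_way_tbps     # 460.8 Tbps, matching "460Tbps"

# 7280R3: 24 x 400G in a 1U fixed system
fixed_1u_tbps = 24 * 400 / 1000         # 9.6 Tbps, matching "9.6Tbps"

print(ports_per_slot, one_way_tbps, full_duplex_tbps, fixed_1u_tbps)
```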

Mavenir demos first 2G containerized architecture

Mavenir announced the commercial readiness of a containerized GSM 2G architecture that uses an enhanced fronthaul (FH) interface between the Multi Radio Access Technology (MRAT) Remote Radio Unit (RRU) and the Distributed Unit (DU). The demonstration showed full frequency hopping, multiple TRXs, multiple codecs, ciphering and handovers in readiness for commercial deployments.

Mavenir combined the 2G technology from its ip.access acquisition and containerized the 2G GSM layer 1, 2 and 3 protocols in a DU microservices architecture that can run in parallel, on the same platform, with the 4G/5G network architecture.

In addition, Mavenir has developed an enhanced 2G MRAT protocol on top of the O-RAN Alliance-based enhanced Common Public Radio Interface (eCPRI), with minimal adaptation, and will make this combined interface openly available by standardizing it in the O-RAN Alliance. The containerized solution scales on Mavenir's feature-rich, industry-leading Webscale Platform as well as on other third-party webscale platforms.

“Mavenir is excited to have developed and demonstrated a world’s first: the viability of an Open vRAN containerized architecture for 2G, with the addition of multiple air interface protocols, using an O-RAN based architecture,” said Pardeep Kohli, president and CEO of Mavenir. “This milestone enables us to readily support operators in making a complete swap from proprietary Radio Access Network (RAN) solutions to open interfaces and virtualized Web-Scale architectures.”

Mavenir will contribute to the O-RAN Alliance's standardization of the fronthaul interface for 2G. The Mavenir solution also brings the advantages of the Open RAN lower layer split (LLS) mode to the DU architecture, allowing CSPs to centralize 2G DUs as well, freeing up space and reducing complexity at the tower site. This also enables pooling of Central Processing Unit (CPU) resources across several sites.


Palo Alto Networks sees rapid growth in SASE

Palo Alto Networks reported revenue of $1.2 billion for its fiscal fourth quarter 2021, ended July 31, 2021, compared with total revenue of $950.4 million for the fiscal fourth quarter 2020. GAAP net loss for the fiscal fourth quarter 2021 was $119.3 million, or $1.23 loss per diluted share, compared with GAAP net loss of $58.9 million, or $0.61 loss per diluted share, for the fiscal fourth quarter 2020.

Non-GAAP net income for the fiscal fourth quarter 2021 was $161.9 million, or $1.60 per diluted share, compared with non-GAAP net income of $144.9 million, or $1.48 per diluted share, for the fiscal fourth quarter 2020. A reconciliation between GAAP and non-GAAP information is contained in the tables below.

"Our strong Q4 performance was the culmination of executing on our strategy throughout the year, including product innovation, platform integration, business model transformation and investments in our go-to-market organization," said Nikesh Arora, chairman and CEO of Palo Alto Networks. "In particular, we saw notable strength in large customer transactions with strategic commitments across our Strata, Prisma and Cortex platforms."  


Cerebras advances its "Brain-scale AI"

Cerebras Systems disclosed progress in its mission to deliver a "brain-scale" AI solution capable of supporting neural network models of over 120 trillion parameters in size. 

Cerebras’ new technology portfolio contains four innovations: Cerebras Weight Streaming, a new software execution architecture; Cerebras MemoryX, a memory extension technology; Cerebras SwarmX, a high-performance interconnect fabric technology; and Selectable Sparsity, a dynamic sparsity harvesting technology.

  • Cerebras Weight Streaming enables model parameters to be stored off-chip while delivering the same training and inference performance as if they were on-chip. This new execution model disaggregates compute and parameter storage – allowing researchers to flexibly scale size and speed independently – and eliminates the latency and memory bandwidth issues that challenge large clusters of small processors. It is designed to scale from one to 192 CS-2s with no software changes.
  • Cerebras MemoryX is a memory extension technology that will provide the second-generation Cerebras Wafer Scale Engine (WSE-2) with up to 2.4 Petabytes of high-performance memory, all of which behaves as if it were on-chip. With MemoryX, the CS-2 can support models with up to 120 trillion parameters.
  • Cerebras SwarmX is a high-performance, AI-optimized communication fabric that extends the Cerebras Swarm on-chip fabric to off-chip. SwarmX is designed to enable Cerebras to connect up to 163 million AI optimized cores across up to 192 CS-2s, working in concert to train a single neural network.
  • Selectable Sparsity enables users to select the level of weight sparsity in their model and provides a direct reduction in FLOPs and time-to-solution. Weight sparsity is an exciting area of ML research that has been challenging to study as it is extremely inefficient on graphics processing units. Selectable sparsity enables the CS-2 to accelerate work and use every available type of sparsity—including unstructured and dynamic weight sparsity—to produce answers in less time.
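The scaling figures quoted above are internally consistent, as a quick check shows (the bytes-per-parameter ratio in the second step is our inference, not a Cerebras statement):

```python
# Back-of-envelope checks of the Cerebras scaling figures.

cores_per_wse2 = 850_000    # AI-optimized cores per WSE-2
max_cs2 = 192               # maximum CS-2 systems over SwarmX
total_cores = cores_per_wse2 * max_cs2
print(total_cores)          # 163200000 -- the "163 million cores" figure

memoryx_bytes = 2.4e15      # 2.4 Petabytes of MemoryX capacity
max_params = 120e12         # 120 trillion parameters
bytes_per_param = memoryx_bytes / max_params
print(bytes_per_param)      # 20.0 bytes per parameter (our inference)
```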

“Today, Cerebras moved the industry forward by increasing the size of the largest networks possible by 100 times,” said Andrew Feldman, CEO and co-founder of Cerebras. “Larger networks, such as GPT-3, have already transformed the natural language processing (NLP) landscape, making possible what was previously unimaginable. The industry is moving past 1 trillion parameter models, and we are extending that boundary by two orders of magnitude, enabling brain-scale neural networks with 120 trillion parameters.”


Cerebras unveils 2nd-gen, 7nm Wafer Scale Engine chip

Cerebras Systems introduced its Wafer Scale Engine 2 (WSE-2) AI processor, boasting 2.6 trillion transistors and 850,000 AI optimized cores.

The wafer-sized processor, which is manufactured by TSMC on its 7nm node, more than doubles all performance characteristics of the chip - transistor count, core count, memory, memory bandwidth and fabric bandwidth - over the first-generation WSE. 
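The article does not restate the first-generation figures; using the widely reported WSE-1 specs (1.2 trillion transistors and 400,000 cores, assumed here rather than taken from the article), the "more than doubles" claim checks out for both transistor and core count:

```python
# Comparison of WSE-2 (figures from the article) against first-generation
# WSE figures (1.2T transistors, 400,000 cores -- assumed from public specs).

wse1_transistors = 1.2e12
wse1_cores = 400_000

wse2_transistors = 2.6e12
wse2_cores = 850_000

print(wse2_transistors / wse1_transistors)  # ~2.17x transistors
print(wse2_cores / wse1_cores)              # 2.125x cores
```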

“Less than two years ago, Cerebras revolutionized the industry with the introduction of WSE, the world’s first wafer scale processor,” said Dhiraj Mallik, Vice President Hardware Engineering, Cerebras Systems. “In AI compute, big chips are king, as they process information more quickly, producing answers in less time – and time is the enemy of progress in AI. The WSE-2 solves this major challenge as the industry’s fastest and largest AI processor ever made.”

The processor powers the Cerebras CS-2 system, which the company says delivers hundreds or thousands of times more performance than legacy alternatives, replacing clusters of hundreds or thousands of graphics processing units (GPUs) that consume dozens of racks, use hundreds of kilowatts of power, and take months to configure and program. The CS-2 fits in one-third of a standard data center rack.

Early deployment sites for the first generation Cerebras WSE and CS-1 included Argonne National Laboratory, Lawrence Livermore National Laboratory, Pittsburgh Supercomputing Center (PSC) for its groundbreaking Neocortex AI supercomputer, EPCC, the supercomputing centre at the University of Edinburgh, pharmaceutical leader GlaxoSmithKline, and Tokyo Electron Devices, amongst others.

“At GSK we are applying machine learning to make better predictions in drug discovery, so we are amassing data – faster than ever before – to help better understand disease and increase success rates,” said Kim Branson, SVP, AI/ML, GlaxoSmithKline. “Last year we generated more data in three months than in our entire 300-year history. With the Cerebras CS-1, we have been able to increase the complexity of the encoder models that we can generate, while decreasing their training time by 80x. We eagerly await the delivery of the CS-2 with its improved capabilities so we can further accelerate our AI efforts and, ultimately, help more patients.”

“As an early customer of Cerebras solutions, we have experienced performance gains that have greatly accelerated our scientific and medical AI research,” said Rick Stevens, Argonne National Laboratory Associate Laboratory Director for Computing, Environment and Life Sciences. “The CS-1 allowed us to reduce the experiment turnaround time on our cancer prediction models by 300x over initial estimates, ultimately enabling us to explore questions that previously would have taken years, in mere months. We look forward to seeing what the CS-2 will be able to do with more than double that performance.”


Keysight joins Google Cloud Partner Initiative

Keysight Technologies has joined Google Cloud’s partner initiative to support agile orchestration of 5G services at the network edge.

Keysight offers a wide range of solutions for validating early designs, system interoperability and performance, network and end-point security, as well as compliance to 3GPP and O-RAN specifications. Keysight’s end-to-end solution portfolio, built on common hardware and software platforms, enables a cloud-centric ecosystem to speed deployment of multi-access edge computing (MEC), network function virtualization (NFV) and artificial intelligence (AI) technology. This allows mobile operators to confidently orchestrate innovative wireless connectivity services at the edge of the network.

“As a Google Cloud partner, Keysight will support service providers transitioning to cloud and edge computing, which are needed for delivering advanced applications and use cases such as streaming media, cloud gaming, connected vehicles, private wireless networks and immersive experiences,” said Scott Bryden, vice president of Keysight’s operator industry solutions group. “Keysight’s solutions across wireless and wireline technologies enable hyperscalers and mobile operators to create unified, heterogeneous networks that support a wide range of use cases, requirements and applications.”

KGPCo to re-sell Ribbon's Cloud & Edge and IP Optical solutions

KGPCo, a leading communications network services and supply chain provider, will re-sell Ribbon's entire portfolio of Cloud & Edge and IP Optical solutions.

Ribbon and KGPCo, which already have a number of customers in the IP Optical market including joint wins at CL Tel, Eastern Slope Rural Telephone and Tombigbee Electric Cooperative, said the expanded partnership enables both organizations to leverage their respective expertise and further extend their offerings to help service providers and utilities modernize, enhance and improve their communications networks.

"We have enjoyed a number of strategic customer wins with KGPCo for our IP Optical business and are excited to broaden our relationship to now cover our Cloud & Edge solutions," said David Hogan, Vice President of Enterprise & Channel Sales for Ribbon. "Our expanded partnership allows both organizations to leverage our individual strengths to help service providers transform their legacy networks. Additionally, having the ability to now extend KGPCo's world-class services capabilities to our customers and prospects will be a critical component of our joint successes moving forward."