Thursday, June 22, 2023

Arelion doubles capacity per fiber pair on long-haul route with L-band

Arelion has upgraded its existing Infinera FlexILS flexible-grid line system to support the L-band on a high-traffic route from Ashburn, VA to Atlanta. The upgrade doubles capacity per fiber pair, marking Arelion’s first route with active L-band capacity in service.

Arelion, which plans additional L-band deployments on high-traffic routes in North America, said this innovation enables it to bring more capacity to market on a continent where long-distance fiber is in short supply in many regions.

“We’re seeing significant demand for additional capacity from our customers along this high-traffic route as these tech hubs grow rapidly,” said Georgios Tologlou, Senior Network Architect, Arelion. “This innovation is a strong business case for us to optimize the cost per bit and minimize operational expenditures. Leveraging L-band, we can maximize the capacity per fiber pair to quickly serve our customers’ demands and supercharge our sales growth on one of the most popular routes in our North American network.”

Infinera FlexILS is the industry’s most widely deployed flexible grid-compliant open optical line system, featuring C+L-band support and colorless-directionless-contentionless ROADM. It seamlessly doubles fiber capacity through L-band expansion from the adjacent C-band without impacting service or operational quality.

Infinera said its system enhances flexibility by supporting programmable configuration that optimizes operation based on performance, spectral efficiency, long span reach, and fiber conditions.

“Our platforms are purpose-built to enable seamless upgrades to provide the greatest amount of investment protection and enable our customers to meet relentlessly growing bandwidth demand,” said Ron Johnson, Infinera’s General Manager, Optical Systems and Network Solutions Group. “Expanding the network to support L-band doubles the spectrum that can be used to transmit optical signals, hence enabling Arelion to double the amount of services they can provide per fiber while simultaneously achieving simplification of their network through automation and flexible operation.”
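The capacity arithmetic behind the announcement can be sketched roughly: capacity per fiber pair scales with usable spectrum, so lighting an L-band of about the same width as the C-band doubles it. The band widths and spectral-efficiency figure below are typical illustrative values, not Arelion's or Infinera's specifications:

```python
# Back-of-the-envelope sketch of why adding the L-band doubles fiber capacity.
# Band widths are typical extended-band values, not vendor-specific figures.

C_BAND_GHZ = 4800   # conventional band, roughly 1530-1565 nm
L_BAND_GHZ = 4800   # long band, roughly 1565-1625 nm (usable portion)

def fiber_capacity_tbps(spectrum_ghz, spectral_efficiency_bps_hz=6.0):
    """Capacity ~ usable spectrum x spectral efficiency (~6 b/s/Hz is an
    illustrative value for a modern coherent transponder)."""
    return spectrum_ghz * 1e9 * spectral_efficiency_bps_hz / 1e12

c_only = fiber_capacity_tbps(C_BAND_GHZ)
c_plus_l = fiber_capacity_tbps(C_BAND_GHZ + L_BAND_GHZ)

print(f"C-band only: {c_only:.1f} Tb/s per fiber pair")
print(f"C+L band:    {c_plus_l:.1f} Tb/s per fiber pair")
print(f"ratio:       {c_plus_l / c_only:.1f}x")
```

Whatever the exact per-channel rates, the ratio is the point: doubling the spectrum doubles the sellable capacity on the same fiber pair.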

Tech Update: LLM-as-a-Service via Supercomputer

Hewlett Packard Enterprise will offer large language models on a cloud subscription basis. The upcoming service will run on HPE's supercomputing resources, gained through its 2019 acquisition of legendary supercomputing pioneer Cray Inc.

In this video, Edmondo Orlotti discusses the LLM-as-a-Service model, how and where it will be deployed, and additional resources that HPE brings to the table.

Recorded at HPE Discover 2023 in Las Vegas.

Verizon tests 5G network slicing

Verizon recently demonstrated multiple network slices using a commercially available smartphone, virtualized and non-virtualized RAN equipment in production in the field, and Verizon’s multi-vendor 5G standalone core.

This end-to-end test accessed network slicing capabilities from the device and validated that the device chipset, operating system, application, radio network base station, and network core can work together to carry data over a virtual network slice. Network slicing will be made available with the evolution of Verizon’s 5G standalone core.

Verizon said its 5G standalone core’s cloud-native virtualized applications, combined with built-in Artificial Intelligence (AI) and Machine Learning (ML), will enable the dynamic allocation of the appropriate resources, referred to as network slicing. It will also allow automated network configuration changes, including the ability to scale network function capacity up or down in real time, providing the right service levels and network resources for each use case.

“Matching network performance characteristics to specific application requirements, network slicing promises differentiated customer experiences to efficiently provide our customers with the type of service they need to complete the task they want to complete on our network and provide them an exceptional experience,” said Adam Koeppe, Senior Vice President of Technology Planning at Verizon.
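The slicing model Verizon describes (standardized slice types, per-slice service levels, and automated scaling of core capacity) can be sketched in code. The SST values below follow the 3GPP TS 23.501 standardized Slice/Service Types; the slice names, service-level figures, and scaling rule are invented for illustration, not Verizon's implementation:

```python
# Hypothetical sketch of per-slice service levels and automated scaling in a
# 5G standalone core. SST values are 3GPP-standardized (1=eMBB, 2=URLLC,
# 3=mMTC); everything else here is an invented placeholder.

from dataclasses import dataclass

@dataclass
class NetworkSlice:
    name: str
    sst: int                  # Slice/Service Type (3GPP TS 23.501)
    min_throughput_mbps: int  # invented service-level target
    max_latency_ms: int       # invented service-level target
    allocated_upf_units: int  # hypothetical core capacity units

    def scale(self, load_factor: float) -> None:
        """Crude stand-in for the AI/ML-driven scaling the article describes:
        grow or shrink core function capacity with observed load."""
        self.allocated_upf_units = max(1, round(self.allocated_upf_units * load_factor))

slices = [
    NetworkSlice("video-streaming", sst=1, min_throughput_mbps=50, max_latency_ms=50, allocated_upf_units=8),
    NetworkSlice("factory-control", sst=2, min_throughput_mbps=5, max_latency_ms=5, allocated_upf_units=4),
    NetworkSlice("sensor-fleet", sst=3, min_throughput_mbps=1, max_latency_ms=100, allocated_upf_units=2),
]

# Simulate a traffic spike on the broadband slice: scale it up, leave the rest alone.
slices[0].scale(1.5)
for s in slices:
    print(f"{s.name}: SST={s.sst}, UPF units={s.allocated_upf_units}")
```

The design point is isolation: each slice carries its own targets and resources, so scaling one up or down does not disturb the service levels of the others.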

Argonne National Lab advances Aurora supercomputer

The Aurora supercomputer at Argonne National Laboratory, a collaboration of Intel, Hewlett Packard Enterprise (HPE), and the Department of Energy (DOE), reached a milestone with the installation of all 10,624 compute blades, which house 63,744 Intel Data Center GPU Max Series GPUs and 21,248 Intel Xeon CPU Max Series processors.

The system incorporates more than 1,024 storage nodes (using DAOS, Intel’s distributed asynchronous object storage), providing 220 petabytes (PB) of capacity at 31 TB/s of total bandwidth, and leverages the HPE Slingshot high-performance fabric. Later this year, Aurora is expected to be the world’s first supercomputer to achieve a theoretical peak performance of more than 2 exaflops (an exaflop is 10^18, or a billion billion, operations per second) when it enters the TOP500 list.

Aurora's sleek rectangular blades contain processors, memory, networking and cooling technologies. Each blade consists of two Intel Xeon Max Series CPUs and six Intel Max Series GPUs. The Xeon Max Series product family is already demonstrating great early performance on Sunspot, the test bed and development system with the same architecture as Aurora. Developers are utilizing oneAPI and AI tools to accelerate HPC and AI workloads and enhance code portability across multiple architectures.
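As a sanity check, the per-blade counts multiply out to the system-wide totals quoted above:

```python
# Verifying the Aurora processor counts: per-blade counts times the number
# of blades should reproduce the quoted system-wide totals.

BLADES = 10_624
CPUS_PER_BLADE = 2   # Intel Xeon Max Series CPUs per blade
GPUS_PER_BLADE = 6   # Intel Data Center GPU Max Series GPUs per blade

total_cpus = BLADES * CPUS_PER_BLADE
total_gpus = BLADES * GPUS_PER_BLADE

print(total_cpus)  # 21,248 Xeon Max CPUs
print(total_gpus)  # 63,744 Max Series GPUs
```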

“Aurora is the first deployment of Intel’s Max Series GPU, the biggest Xeon Max CPU-based system, and the largest GPU cluster in the world. We’re proud to be part of this historic system and excited for the groundbreaking AI, science and engineering Aurora will enable,” said Jeff McVeigh, Intel corporate vice president and general manager of the Super Compute Group.

“While we work toward acceptance testing, we're going to be using Aurora to train some large-scale open source generative AI models for science," said Rick Stevens, Argonne National Laboratory associate laboratory director. "Aurora, with over 60,000 Intel Max GPUs, a very fast I/O system, and an all-solid-state mass storage system, is the perfect environment to train these models.”

Vertical Systems: 2022 Global Provider SD-WAN LEADERBOARD

Vertical Systems Group announces that the following eight companies attained a position on the year end 2022 Global Provider Carrier Managed SD-WAN LEADERBOARD (in rank order based on site share outside of home country as of December 31, 2022): AT&T (U.S.), Orange Business (France), Verizon (U.S.), BT Global Services (U.K.), NTT (Japan), Telefonica Global Solutions (Spain), Hughes (U.S.), and Vodafone (U.K.). This industry benchmark for multinational SD-WAN market presence ranks companies that hold a 4% or higher share of billable retail sites outside of their respective home countries.

Twelve companies qualify for the 2022 Global Provider Managed SD-WAN Challenge Tier (in alphabetical order): Aryaka (U.S.), Colt (U.K.), Comcast Business (U.S.), Deutsche Telekom (Germany), Global Cloud Xchange (India), GTT (U.S.), Liberty Networks [formerly Cable & Wireless] (Barbados), PCCW Global (Hong Kong), Singtel (Singapore), Tata (India), Telia (Sweden), and Telstra (Australia). The Challenge Tier includes companies with site share between 1% and 4% of this defined SD-WAN segment.
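The tiering rule behind these rankings reduces to a simple threshold check on out-of-country site share: 4% or higher for the LEADERBOARD, 1% to 4% for the Challenge Tier, below that the Market Player tier. The provider names and share figures below are invented placeholders, not Vertical Systems data:

```python
# Sketch of the Vertical Systems tiering rule described above, applied to
# invented example shares (not actual Vertical Systems figures).

def sdwan_tier(share_pct: float) -> str:
    """Classify a provider by share of billable retail sites outside
    its home country."""
    if share_pct >= 4.0:
        return "LEADERBOARD"
    if share_pct >= 1.0:
        return "Challenge Tier"
    return "Market Player"

for provider, share in [("Provider A", 6.2), ("Provider B", 2.5), ("Provider C", 0.4)]:
    print(f"{provider}: {share}% -> {sdwan_tier(provider and share)}")
```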

“Leading global SD-WAN providers continued to expand their footprints into dozens of new countries during 2022, with the goal of providing multinational customers with seamless connectivity,” said Rosemary Cochran, principal of Vertical Systems Group. “There was some shuffling of provider rankings since our last Leaderboard release, as competition for global customers is intense and share differentials in this segment are extremely tight.”

Research Highlights

  • Vertical’s initial benchmark for this specialized segment was the Mid-2021 Global Provider Managed SD-WAN LEADERBOARD, which included site installations as of June 30, 2021. The share comparisons provided in this analysis are based on these two time periods.
  • The roster of companies ranked on the LEADERBOARD increased to eight in 2022, up from seven previously.
  • AT&T advances to first position on the LEADERBOARD, up from second and displacing Orange Business. AT&T also ranks first on the 2022 U.S. Carrier Managed SD-WAN LEADERBOARD.
  • BT Global Services moves up to the fourth LEADERBOARD position, which drops NTT to fifth position.
  • Hughes enters the LEADERBOARD in seventh position, moving up from the Challenge Tier. Vodafone dips from seventh to the eighth and final position.
  • The 2022 Challenge Tier remains at twelve companies, albeit with lineup changes. Lumen drops from the Challenge Tier into the Market Player tier, and Comcast Business (which includes Masergy) moves up from the Market Player tier.
  • Carrier Managed SD-WAN solutions for multinational customers are typically custom hybrid network configurations that require global infrastructures and technical expertise, and may incorporate MPLS VPNs bundled with cloud connectivity, plus advanced security that is integral or provided with technology partners.
  • MEF 3.0 SD-WAN certification has been attained by the top three companies ranked on the 2022 Global Provider Carrier Managed SD-WAN LEADERBOARD – AT&T, Orange Business, and Verizon. Additionally, five companies cited in the Challenge Tier have MEF 3.0 SD-WAN certification as follows: Colt, Comcast Business, PCCW Global, Tata and Telia.
  • The primary technology suppliers utilized by the Global Provider SD-WAN LEADERBOARD and Challenge Tier companies are as follows (in alphabetical order): Cisco, Fortinet, HPE Aruba, Nuage Networks from Nokia, Palo Alto, Versa and VMware.

AWS to invest $100 million in Generative AI Innovation program

Amazon Web Services disclosed its intention to invest $100 million in an AWS Generative AI Innovation Center. The program aims to connect AWS AI and machine learning (ML) experts with customers around the globe to help them envision, design, and launch new generative AI products, services, and processes.

“Amazon has more than 25 years of AI experience, and more than 100,000 customers have used AWS AI and ML services to address some of their biggest opportunities and challenges. Now, customers around the globe are hungry for guidance about how to get started quickly and securely with generative AI,” said Matt Garman, senior vice president of Sales, Marketing, and Global Services at AWS. “The Generative AI Innovation Center is part of our goal to help every organization leverage AI by providing flexible and cost-effective generative AI services for the enterprise, alongside our team of generative AI experts to take advantage of all this new technology has to offer. Together with our global community of partners, we’re working with business leaders across every industry to help them maximize the impact of generative AI in their organizations, creating value for their customers, employees, and bottom line.”

The center will offer workshops, engagements, and training. Customers will work closely with generative AI experts from AWS and the AWS Partner Network to select the right models, define paths to navigate technical or business challenges, develop proofs of concepts, and make plans for launching solutions at scale. The Generative AI Innovation Center team will provide guidance on best practices for applying generative AI responsibly and optimizing machine learning operations to reduce costs. 

Engagements will deliver strategy, tools, and assistance to help customers use AWS generative AI services, including Amazon CodeWhisperer, an AI-powered coding companion, and Amazon Bedrock, a fully managed service that makes foundation models (FMs) from AI21 Labs, Anthropic, and Stability AI, along with Amazon’s own family of FMs, Amazon Titan, accessible via an API. Customers can also train and run their models using high-performance infrastructure, including AWS Inferentia-powered Amazon EC2 Inf1 instances, AWS Trainium-powered Amazon EC2 Trn1 instances, and Amazon EC2 P5 instances powered by NVIDIA H100 Tensor Core GPUs. Additionally, customers can build, train, and deploy their own models with Amazon SageMaker or use Amazon SageMaker JumpStart to deploy some of today’s most popular FMs, including Cohere’s large language models, Technology Innovation Institute’s Falcon 40B, and Hugging Face's BLOOM.

Yokogawa enhances its Optical Spectrum Analyzer

Yokogawa Test & Measurement Corporation released an upgraded optical spectrum analyzer equipped with features that enhance performance and improve usability. The instrument (AQ6370E) can be used to characterize a wide range of components, including lasers for optical communications, optical transceivers, and optical amplifiers.

One of the new features is an HCDR (high close-in dynamic range) mode, which lets a user measure a single longitudinal mode laser with a high close-in dynamic range. Close-in dynamic range is a key performance criterion when developing lasers and optical devices.
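Close-in dynamic range itself is a simple quantity: the peak power of the laser line minus the measured level at a small wavelength offset from the peak. A rough sketch on a synthetic spectrum (the offset and all spectrum values are illustrative, not AQ6370E specifications):

```python
# Illustrative close-in dynamic range computation for a single longitudinal
# mode laser: peak power minus the level at a small offset from the peak.
# The spectrum below is synthetic.

def close_in_dynamic_range_db(wavelengths_nm, powers_dbm, offset_nm=0.2):
    """Peak power minus the measured level offset_nm away from the peak."""
    peak_idx = max(range(len(powers_dbm)), key=lambda i: powers_dbm[i])
    target = wavelengths_nm[peak_idx] + offset_nm
    # nearest sample to the offset wavelength
    off_idx = min(range(len(wavelengths_nm)), key=lambda i: abs(wavelengths_nm[i] - target))
    return powers_dbm[peak_idx] - powers_dbm[off_idx]

# Synthetic spectrum: a sharp peak near 1550.0 nm over a -60 dBm noise floor.
wl = [1549.6 + 0.05 * i for i in range(17)]   # 1549.6 .. 1550.4 nm
pw = [-60.0] * 17
pw[8] = 0.0                                   # peak at 1550.0 nm

print(close_in_dynamic_range_db(wl, pw))      # 60.0 dB
```

In a real measurement the limit is the instrument's own close-in dynamic range: how far below the peak it can resolve a signal that close in wavelength, which is what the HCDR mode improves.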

There is also an SMSR (side mode suppression ratio) mode, which can reduce SMSR measurement time, and an APP mode.