Tuesday, November 7, 2023

RISC-V gains momentum

RISC-V, the open-source instruction set architecture (ISA) based on established reduced instruction set computer (RISC) principles, is gaining traction beyond embedded systems to include aerospace, AI/ML, Android smartphones and IoT devices, automotive, data center accelerators, HPC, 5G infrastructure, and security applications. RISC-V is already present in tens of billions of cores on the market, with hundreds of design wins.

“The biggest takeaway for the RISC-V community this year is that we’re going to see RISC-V everywhere. More and more industries and companies are turning to RISC-V to innovate faster and take advantage of the vibrant culture of collaboration,” said Calista Redmond, CEO of RISC-V International. “Our biggest priorities looking ahead are to continue to drive technical progress and deepen community engagement, while offering even more value and resources to accelerate the RISC-V ecosystem.”


Some additional highlights:

  • RISC-V is being used by Meta in its own silicon designs to accelerate a number of workloads because it offers 64-bit addressing, in-core vector processing, and custom instructions.
  • RISC-V is used in Meta's first in-house silicon for video processing. This implementation currently processes 100% of all video uploads to Meta's platforms.
  • Meta is also using RISC-V for training and inference acceleration.
  • A new RISC-V Labs will provide resources for developers to build and test their software, from porting existing projects to developing new components that will power the next wave of computing innovation.
  • The RISC-V Exchange, a directory of RISC-V hardware and software solutions for different markets, has grown over 40% since the beginning of 2023. Additionally, the RISC-V Developer Boards program is helping to make development boards more accessible to the global RISC-V community to further spur innovation.


Microsoft runs inference processing in Oracle Cloud Infrastructure

Microsoft is using Oracle Cloud Infrastructure (OCI) AI infrastructure, along with Microsoft Azure AI infrastructure, for inferencing of AI models that are being optimized to power Microsoft Bing conversational searches daily. Oracle confirmed that it has a multi-year agreement with Microsoft supporting this application.

Leveraging the Oracle Interconnect for Microsoft Azure, Microsoft is able to use managed services like Azure Kubernetes Service (AKS) to orchestrate OCI Compute at massive scale to support increasing demand for Bing conversational search.

“Generative AI is a monumental technological leap and Oracle is enabling Microsoft and thousands of other businesses to build and run new products with our OCI AI capabilities,” said Karan Batta, senior vice president, Oracle Cloud Infrastructure. “By furthering our collaboration with Microsoft, we are able to help bring new experiences to more people around the world.”

“Microsoft Bing is leveraging the latest advancements in AI to provide a dramatically better search experience for people across the world,” said Divya Kumar, global head of marketing for Search & AI at Microsoft. “Our collaboration with Oracle and use of Oracle Cloud Infrastructure along with our Microsoft Azure AI infrastructure, will expand access to customers and improve the speed of many of our search results.”

OCI Superclusters include OCI Compute Bare Metal instances, ultra-low latency RDMA cluster networking, and a choice of HPC storage. OCI Superclusters can scale up to 4,096 OCI Compute Bare Metal instances with 32,768 A100 GPUs or 16,384 H100 GPUs, and petabytes of high-performance clustered file system storage to efficiently process massively parallel applications.
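The quoted maxima imply a GPU density per bare metal instance, which can be checked with quick arithmetic (the per-instance counts below are derived from the figures above, not stated by Oracle):

```python
# Sanity check on the OCI Supercluster scale figures quoted above.
# GPU-per-instance densities are implied by the maxima, not stated directly.
max_instances = 4096        # OCI Compute Bare Metal instances per Supercluster
a100_total = 32768          # max A100 GPUs per Supercluster
h100_total = 16384          # max H100 GPUs per Supercluster

a100_per_instance = a100_total // max_instances
h100_per_instance = h100_total // max_instances
print(a100_per_instance)    # implied A100 GPUs per instance
print(h100_per_instance)    # implied H100 GPUs per instance
```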

https://www.oracle.com/news/announcement/oracle-cloud-infrastructure-utilized-by-microsoft-for-bing-conversational-search-2023-11-07/

VMware collaborates with Intel on Private AI stack

VMware and Intel are working to deliver a jointly validated AI stack that will enable customers to use their existing general-purpose VMware and Intel infrastructure and open source software to simplify building and deploying AI models. 

The partnership leverages VMware Cloud Foundation and Intel’s AI software suite, Intel Xeon® processors with built-in AI accelerators, and Intel Max Series GPUs.

“When it comes to AI, there is no longer any reason to debate trade-offs in choice, privacy, and control. Private AI empowers customers with all three, enabling them to accelerate AI adoption while future-proofing their AI infrastructure,” said Chris Wolf, vice president of VMware AI Labs. “VMware Private AI with Intel will help our mutual customers dramatically increase worker productivity, ignite transformation across major business functions, and drive economic impact.”

“For decades, Intel and VMware have delivered next-generation data center-to-cloud capabilities that enable customers to move faster, innovate more, and operate efficiently,” said Sandra Rivera, executive vice president and general manager of the Data Center and AI Group (DCAI) at Intel. “With the potential of artificial intelligence to unlock powerful new possibilities and improve the life of every person on the planet, Intel and VMware are well equipped to lead enterprises into this new era of AI, powered by silicon and software.”

Infinera: Anatomy of a Coherent Optical Engine

Telenet picks Ciena for 400G upgrade across Belgium

Telenet, a Liberty Global company and the largest provider of cable broadband services in Belgium, is deploying Ciena 6500 Reconfigurable Line System (RLS) and WaveLogic 5 Nano (WL5n) 400G QSFP-DD coherent pluggable transceivers to support its network expansion across Belgium.

The deployment also includes Ciena’s Waveserver 5 interconnect platform and its Manage, Control and Plan (MCP) domain controller, for controlling and automating the network throughout its entire operational lifecycle. Telenet is also using Ciena’s PinPoint app to ensure network availability by quickly isolating and troubleshooting fiber faults.

Virginie Hollebecque, Vice President, EMEA at Ciena commented: “We greatly appreciate and value Telenet's choice and commitment to our products and service. We understand Telenet's requirements and with our mix of leading optical, network management and software innovations, Telenet can support the growing demand for AI services, streaming, content, gaming, and other data applications, while also making the network simpler and more efficient.”

https://www.ciena.com

Dell’Oro: Telecom server market to grow 19% CAGR

Dell’Oro Group has lowered its forecast for the telecom server market relative to its October 2022 report due to several factors, including economic headwinds leading to reduced spending on mobile network infrastructure, challenges with the adoption of Open RAN and MEC, and a realization that its previous estimates for servers deployed for Operations Support Systems (OSS) and Business Support Systems (BSS) were overstated.

The new forecast anticipates the telecom server market to experience a 19% five-year CAGR, reaching $12.5 billion by 2027, slightly outpacing the overall data center growth rate of 15%. This adjustment comes as the firm has raised its projections for the data center market, driven by increased investments in accelerated computing for artificial intelligence (AI) applications.
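As a back-of-the-envelope check, a 19% five-year CAGR ending at $12.5 billion in 2027 implies a 2022 base of roughly $5.2 billion (the base figure is implied by the forecast, not stated in the report):

```python
# Hypothetical sanity check on the Dell'Oro forecast figures quoted above.
def implied_base(end_value, cagr, years):
    """Back out the starting value implied by an ending value and a CAGR."""
    return end_value / (1 + cagr) ** years

base_2022 = implied_base(12.5, 0.19, 5)  # $ billions
print(f"Implied 2022 telecom server market: ~${base_2022:.1f}B")
# → Implied 2022 telecom server market: ~$5.2B
```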

Additional highlights from the November 2023 Telecom Server Forecast include:

  • Centralized data center use cases, encompassing MCN and other internal IT workloads, are expected to grow at an 11% CAGR in revenue from 2022 to 2027, while edge data center use cases, covering MEC, RAN, and Broadband Access, are anticipated to grow at a 38% CAGR over the same period.
  • Server revenue for edge data centers is expected to achieve a significantly higher growth rate compared to that of centralized data centers for telecom applications. The report anticipates changes in the ecosystem as telecom edge infrastructure adoption increases. Servers in centralized data centers for telecom applications will resemble those from traditional IT vendors like Dell, HPE, and Lenovo. In contrast, we anticipate a wider range of solutions from both server and telecom equipment vendors for edge data centers. These systems will be designed to handle harsh environmental conditions and security challenges in remote areas.
  • The report anticipates the implementation of AI at both the network core and edge. At the core, AI engines could be used to automate real-time and near-real-time decision-making based on raw data from internet traffic and to dynamically allocate network resources in real time. At the edge, new AI use cases, primarily related to computer vision, could enable applications such as industrial automation, autonomous driving, security, and various consumer services.

https://www.delloro.com/news/telecom-server-market-to-grow-19-cagr-over-the-next-5-years-driven-by-nfv-and-edge-use-cases/

Arelion expands in Mexico, Connects to Oracle Cloud

Arelion will offer direct connectivity to Oracle Cloud Infrastructure (OCI) via OCI FastConnect in the new Oracle Cloud Monterrey Region in Monterrey, Mexico. 

Arelion Cloud Connect will provide customers with self-provisioned, flexible, and scalable private network connectivity to OCI FastConnect locations at speeds of 1, 2, 5, or 10 Gb/s over Arelion's global Internet backbone. Via OCI FastConnect, the Oracle Cloud Monterrey Region will provide customers throughout Mexico with access to a wide range of applications and infrastructure services through an elastic, resilient connection, featuring higher bandwidth, lower latency, and more consistent performance versus public Internet-based connections.

“As a global carrier providing FastConnect services to the Oracle Cloud Monterrey Region, we are proud to provide the global reach and direct, private connectivity to cloud services that enterprise customers demand for success”, said Luis Velasquez, Mexico Business Manager, Arelion. “With this collaboration we will serve businesses that seek to deploy local services, content, and applications via OCI, helping enable continued investment and innovation across Mexico”.


A10 Networks posts Q3 in line with expectations

A10 Networks reported Q3 revenue of $57.8 million, in line with preliminary expectations and down $14.3 million year-over-year due to delays in North American service provider customers’ capital expenditures. Non-GAAP net income was $12.0 million (20.8% of revenue), or $0.16 per diluted share (non-GAAP EPS).

“Our intentional revenue diversification and proven business model is enabling A10 to navigate a challenging period while maintaining profitability, cash generation and the continued return of capital to shareholders,” said Dhrupad Trivedi, President and Chief Executive Officer of A10 Networks. “Growth in enterprise revenue partially offset delays in service provider spending that resulted in a decline in short-term service provider revenue. We continue to believe opportunities have been delayed, not lost, and that the long-term demand for security and network expansion solutions remains robust, supporting our intermediate-term outlook.”

https://www.a10networks.com/