Monday, November 13, 2023

JUPITER supercomputer to leverage NVIDIA Grace Hopper, InfiniBand

JUPITER, an exascale supercomputer being built at the Forschungszentrum Jülich facility in Germany, will be powered by the NVIDIA Grace Hopper accelerated computing architecture. NVIDIA says it will be the world’s most powerful AI system when completed in 2024, delivering extreme-scale computing power for AI and simulation workloads. 

JUPITER, which is owned by the EuroHPC Joint Undertaking and contracted to Eviden and ParTec, is being built in collaboration with NVIDIA, ParTec, Eviden and SiPearl to accelerate the creation of foundational AI models in climate and weather research, materials science, drug discovery, industrial engineering and quantum computing.

JUPITER marks the debut of a quad NVIDIA GH200 Grace Hopper Superchip node configuration, based on Eviden’s BullSequana XH3000 liquid-cooled architecture, with a booster module comprising close to 24,000 NVIDIA GH200 Superchips interconnected with the NVIDIA Quantum-2 InfiniBand networking platform.  

The NVIDIA Quantum-2 family of switches offers 64 400Gb/s ports or 128 200Gb/s ports on 32 physical octal small form factor pluggable (OSFP) connectors. The compact 1U switch design includes air-cooled and liquid-cooled versions that are either internally or externally managed. The family delivers an aggregate 51.2 terabits per second (Tb/s) of bidirectional throughput with a capacity of more than 66.5 billion packets per second.
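
For readers who want to check the headline figure, here is a quick back-of-the-envelope tally (a sketch, not taken from NVIDIA's documentation) showing how 64 ports at 400 Gb/s yield the quoted 51.2 Tb/s of bidirectional throughput:

```python
# Back-of-the-envelope check of the switch figures quoted above, assuming the
# 51.2 Tb/s number counts both directions of all 64 ports.

ports = 64               # 400 Gb/s OSFP-connected ports per switch
port_rate_gbps = 400     # per port, per direction

one_way_tbps = ports * port_rate_gbps / 1000   # 25.6 Tb/s
bidirectional_tbps = 2 * one_way_tbps          # 51.2 Tb/s

print(f"{one_way_tbps} Tb/s one way, {bidirectional_tbps} Tb/s bidirectional")
```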

NVIDIA's quad GH200 features a node architecture with 288 Arm Neoverse cores capable of achieving 16 petaflops of AI performance using up to 2.3 terabytes of high-speed memory. Four GH200 processors are networked through a high-speed NVIDIA NVLink connection.
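
The node-level numbers can be reconstructed from per-superchip figures. The sketch below does that arithmetic; the per-chip core count and memory split are assumptions used for illustration rather than figures from the JUPITER announcement:

```python
# Rough tally of the quad-GH200 node figures above. The per-superchip numbers
# (72 Grace cores, 480 GB LPDDR5X + 96 GB HBM) are assumed for illustration.

superchips_per_node = 4
cores_per_grace = 72          # Arm Neoverse cores per Grace CPU (assumed)
lpddr5x_gb = 480              # CPU-attached memory per superchip (assumed)
hbm_gb = 96                   # GPU-attached memory per superchip (assumed)

total_cores = superchips_per_node * cores_per_grace                    # 288
total_memory_tb = superchips_per_node * (lpddr5x_gb + hbm_gb) / 1000   # ~2.3 TB

print(total_cores, "cores and roughly", round(total_memory_tb, 1), "TB of fast memory per node")
```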

“The JUPITER supercomputer powered by NVIDIA GH200 and using our advanced AI software will deliver exascale AI and HPC performance to tackle the greatest scientific challenges of our time,” said Ian Buck, vice president of hyperscale and HPC at NVIDIA. “Our work with Jülich, Eviden and ParTec on this groundbreaking system will usher in a new era of AI supercomputing to advance the frontiers of science and technology.”

“At the heart of JUPITER is NVIDIA’s accelerated computing platform, making it a groundbreaking system that will revolutionize scientific research,” said Thomas Lippert, director of the Jülich Supercomputing Centre. “JUPITER combines exascale AI and exascale HPC with the world’s best AI software ecosystem to boost the training of foundational models to new heights.”

NVIDIA debuts HGX H200 Tensor Core GPU

NVIDIA launched its H200 Tensor Core GPU, based on the Hopper architecture and designed with advanced memory to handle massive amounts of data for generative AI and high-performance computing workloads.

The NVIDIA H200 is the first GPU to offer HBM3e — faster, larger memory to fuel the acceleration of generative AI and large language models, while advancing scientific computing for HPC workloads. With HBM3e, the NVIDIA H200 delivers 141GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4x more bandwidth compared with its predecessor, the NVIDIA A100.
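
The "nearly double" and "2.4x" claims can be sanity-checked against the A100's published figures. The short sketch below does so, assuming the baseline is the 80 GB A100 with roughly 2 TB/s of memory bandwidth (assumed figures, not stated in the announcement):

```python
# Sanity check of the H200 vs. A100 comparison above, assuming the 80 GB A100
# with ~2 TB/s of HBM2e bandwidth as the baseline.

h200_capacity_gb, h200_bandwidth_tbps = 141, 4.8
a100_capacity_gb, a100_bandwidth_tbps = 80, 2.0   # assumed A100 80 GB figures

print(f"capacity: {h200_capacity_gb / a100_capacity_gb:.2f}x")           # ~1.76x, i.e. "nearly double"
print(f"bandwidth: {h200_bandwidth_tbps / a100_bandwidth_tbps:.1f}x")    # 2.4x
```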

H200-powered systems from the world’s leading server manufacturers and cloud service providers are expected to begin shipping in the second quarter of 2024.

NVIDIA H200 will be available in NVIDIA HGX H200 server boards with four- and eight-way configurations, which are compatible with both the hardware and software of HGX H100 systems.

NVIDIA says the introduction of H200 will lead to further performance leaps, including nearly doubling inference speed on Llama 2, a 70 billion-parameter LLM, compared to the H100. Additional performance leadership and improvements with H200 are expected with future software updates. Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first cloud service providers to deploy H200-based instances starting next year, in addition to CoreWeave, Lambda and Vultr.

“To create intelligence with generative AI and HPC applications, vast amounts of data must be efficiently processed at high speed using large, fast GPU memory,” said Ian Buck, vice president of hyperscale and HPC at NVIDIA. “With NVIDIA H200, the industry’s leading end-to-end AI supercomputing platform just got faster to solve some of the world’s most important challenges.”


U.S. outlines National Spectrum Strategy

The White House published a National Spectrum Strategy that aims to expand access to advanced wireless broadband networks and technologies, whether terrestrial-, airspace-, satellite-, or space-based, for all Americans. 

The 23-page paper was developed by the National Telecommunications and Information Administration (NTIA), in collaboration with the FCC and in coordination with other Federal agencies.

The Strategy has four pillars:

  • Pillar One: A Spectrum Pipeline to Ensure U.S. Leadership in Advanced and Emerging Technologies
  • Pillar Two: Collaborative Long-Term Planning to Support the Nation’s Evolving Spectrum Needs
  • Pillar Three: Unprecedented Spectrum Innovation, Access, and Management through Technology Development
  • Pillar Four: Expanded Spectrum Expertise and Elevated National Awareness

Significantly, this Strategy identifies five spectrum bands in government hands totaling 2,786 megahertz of mostly mid-band spectrum for in-depth, near-term study to determine suitability for potential repurposing to address evolving needs, including terrestrial wireless broadband, innovative space services, and unmanned aviation and other autonomous vehicle operations. This includes the following bands (a quick tally after the list confirms the 2,786 megahertz total):

Lower 3 GHz (3.1-3.45 GHz)

  • The Department of Defense (DoD) has studied the potential for sharing 350 megahertz of spectrum with the private sector, determining that sharing is feasible with advanced interference-mitigation features and a coordination framework.
  • The Departments of Commerce and Defense will co-lead follow-on studies focusing on future use of the 3.1-3.45 GHz band, exploring dynamic spectrum sharing and private-sector access while preserving Federal mission capabilities.

5030-5091 MHz

  • The FCC, in coordination with NTIA and the Federal Aviation Administration, will facilitate limited deployment of unmanned aircraft systems (UAS) in this band, followed by studies to optimize UAS spectrum access while avoiding harmful interference to other operations.

7125-8400 MHz

  • This 1,275 megahertz of spectrum will be studied for wireless broadband use, with some sub-bands potentially studied for other uses, while protecting incumbent users from harmful interference.

18.1-18.6 GHz

  • This 500 megahertz of spectrum will be studied for expanded Federal and non-Federal satellite operations, consistent with the U.S. position at the 2023 World Radiocommunication Conference.

37.0-37.6 GHz

  • This 600 megahertz of spectrum will be further studied to implement a co-equal, shared-use framework allowing Federal and non-Federal users to deploy operations in the band.
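
The 2,786 megahertz figure quoted above is simply the sum of the five band widths; the short sketch below shows the tally:

```python
# Tally of the five study bands listed above; the widths follow directly from
# the band edges and sum to the Strategy's 2,786 MHz figure.

bands_mhz = {
    "Lower 3 GHz (3.1-3.45 GHz)": 3450 - 3100,    # 350
    "5030-5091 MHz":              5091 - 5030,    # 61
    "7125-8400 MHz":              8400 - 7125,    # 1275
    "18.1-18.6 GHz":              18600 - 18100,  # 500
    "37.0-37.6 GHz":              37600 - 37000,  # 600
}

print(sum(bands_mhz.values()), "MHz in total")    # 2786
```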

HPE tunes its supercomputing solutions for Gen AI

At Supercomputing 23 in Denver, HPE announced a supercomputing solution for generative AI designed for large enterprises, research institutions, and government organizations to accelerate the training and tuning of artificial intelligence (AI) models using private data sets.

Key elements:

  • AI/ML acceleration software – A suite of three software tools will help customers train and tune AI models and create their own AI applications.
  • HPE Machine Learning Development Environment is a machine learning (ML) software platform that enables customers to develop and deploy AI models faster by integrating with popular ML frameworks and simplifying data preparation.
  • NVIDIA AI Enterprise provides security, stability, manageability, and support, offering extensive frameworks, pretrained models, and tools that streamline the development and deployment of production AI.
  • HPE Cray Programming Environment suite offers programmers a complete set of tools for developing, porting, debugging and refining code.
  • Scale – Based on the HPE Cray EX2500, an exascale-class system, and featuring NVIDIA GH200 Grace Hopper Superchips, the solution can scale up to thousands of graphics processing units (GPUs) with an ability to dedicate the full capacity of nodes to support a single AI workload for faster time-to-value. The system is the first to feature the quad GH200 Superchip node configuration.
  • HPE Slingshot Interconnect offers an open, Ethernet-based high performance network designed to support exascale-class workloads. 

“The world’s leading companies and research centers are training and tuning AI models to drive innovation and unlock breakthroughs in research, but to do so effectively and efficiently, they need purpose-built solutions,” said Justin Hotard, executive vice president and general manager, HPC, AI & Labs at Hewlett Packard Enterprise. “To support generative AI, organizations need to leverage solutions that are sustainable and deliver the dedicated performance and scale of a supercomputer to support AI model training. We are thrilled to expand our collaboration with NVIDIA to offer a turnkey AI-native solution that will help our customers significantly accelerate AI model training and outcomes.”

https://www.hpe.com/us/en/newsroom/press-release/2023/11/hewlett-packard-enterprise-and-nvidia-accelerate-ai-training-with-new-turnkey-solution.html

Utah’s Strata Networks picks Ekinops for optical network

 Ekinops has been selected by Strata Networks, Utah's largest telecommunications cooperative, to upgrade its optical transport network using the Ekinops360 with FlexRate technology. 

Strata Networks, based in Roosevelt, Utah, extends its network throughout the Uintah Basin, into the Wasatch Front, and to Denver, serving a diverse mix of mid-sized urban and remote rural communities. Strata also chose Ekinops' advanced network management system.

Ekinops says its optical solution, along with 200G and 400G FlexRate modules and Celestis NMS, will enhance the scope and performance of Strata's optical transport network. This upgrade allows Strata to extend 100G links all the way to its point-of-presence in Denver, serving its customers in Colorado. 

The Ekinops PM400FR05 utilizes high-power pluggable coherent optics to provide up to 400G of capacity for metro/regional connectivity at a lower cost than traditional transponders. 

Additionally, Celestis NMS provides Strata with complete control over its network, allowing for monitoring, troubleshooting, and service upgrades from a centralized network operations center, thereby minimizing truck rolls and reducing the company's carbon footprint.

https://www.ekinops.com

Linux Foundation to form the High Performance Software Foundation

The Linux Foundation announced an intention to form the High Performance Software Foundation (HPSF) with an aim to build, promote, and advance a portable software stack for high performance computing (HPC).

HPSF intends to leverage investments made by the United States Department of Energy's (DOE) Exascale Computing Project (ECP), the EuroHPC Joint Undertaking, and other international projects in accelerated HPC to exploit the performance of an increasingly diverse set of hardware architectures. 

HPSF will be organized as an umbrella project under the Linux Foundation. It will provide a neutral space for pivotal projects in the high performance software ecosystem, enabling industry, academia, and government entities to collaborate on the scientific software stack.

The HPSF is launching with the following initial open source technical projects:

  • Spack: the HPC package manager
  • Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way.
  • AMReX: a performance-portable software framework designed to accelerate solving partial differential equations on block-structured, adaptively refined meshes.
  • WarpX: a performance-portable Particle-in-Cell code with advanced algorithms that won the 2022 Gordon Bell Prize.
  • Trilinos: a collection of reusable scientific software libraries, known in particular for linear, non-linear, and transient solvers, as well as optimization and uncertainty quantification.
  • Apptainer: a container system and image format specifically designed for secure high-performance computing.
  • VTK-m: a toolkit of scientific visualization algorithms for accelerator architectures.
  • HPCToolkit: performance measurement and analysis tools for computers ranging from laptops to the world’s largest GPU-accelerated supercomputers.
  • E4S: the Extreme-scale Scientific Software Stack
  • Charliecloud: HPC-tailored, lightweight, fully unprivileged container implementation.

O-RAN Alliance awards $200K to Northeastern University

 The O-RAN ALLIANCE (O-RAN) has awarded $200,000 in seed funding to the Institute for the Wireless Internet of Things at Northeastern University for their proposal to develop an O-RAN digital twin platform based on the Colosseum network emulator, with the capability to automate end-to-end AI/ML development, integration, and testing.

The funding initiative was led by O-RAN ALLIANCE's next Generation Research Group (nGRG). Its objective is to provide a forum to facilitate O-RAN related 6G research efforts and determine how O-RAN may evolve to support mobile wireless networks in the 6G timeframe and beyond, by leveraging industry and academic 6G research efforts worldwide. The purpose of the seed funding is to be a significant enabler for broader funding of research platforms for next generation infrastructure.

In addition to the winning proposal, the O-RAN ALLIANCE also recognized two other proposals with honorable mentions for their excellent quality and value for the industry:

  • EURECOM and OpenAirInterface Software Alliance – Evolving 5G End-to-End Network Platform towards a Next Generation Infrastructure
  • Virginia Polytechnic Institute and State University and George Mason University – FEMO-CLOUD: Federated, Multi-site O-Cloud Platform for Next generation RAN Research and Experimentation

“The O-RAN ALLIANCE continues to focus on developing a stable and mature specification framework for open and intelligent RAN, enabling the RAN industry to deliver commercial products and solutions,” said Alex Jinsung Choi, Chair of the Board of O-RAN ALLIANCE, and SVP Network Technology at Deutsche Telekom. “It’s great to see such high interest and cooperation in research for open innovations in future RAN generations, which will provide the basis for upcoming detailed specifications by the O-RAN ALLIANCE to enable even higher-performing and more feature-rich mobile networks.”

www.o-ran.org

Arm appoints Ami Badani as Chief Marketing Officer

Arm appointed Ami Badani as chief marketing officer (CMO).

Badani joins Arm from NVIDIA where she held the role of Vice President of Marketing and Developer Products. At NVIDIA, her responsibilities included cultivating the developer ecosystem for Data Processing Units (DPUs), driving the data strategy for Generative AI, and leading the company’s product and technical marketing efforts for the data center portfolio, one of NVIDIA’s largest growth areas. Prior to NVIDIA, Ami was CMO at Cumulus Networks, a provider of enterprise-class software that was acquired by NVIDIA in 2020. Badani also held marketing and product management leadership roles at several technology companies, including Cisco Systems. Prior to joining Cisco, Badani worked as an investment banker at Goldman Sachs and J.P. Morgan.

“As we continue to advance the Arm compute platform, reaching a more diverse set of customers and developers in the AI era is critical,” said Rene Haas, chief executive officer, Arm. “Ami’s experience in AI and proven track record in creating awareness among developer ecosystems make her a natural fit to lead our marketing efforts in building the future of computing on Arm.”

https://www.arm.com

Arelion activates PoP at DataVerge interconnect in Brooklyn

Arelion established a point of presence (PoP) for its Internet backbone, AS1299, at DataVerge, the owner and operator of the only carrier-neutral interconnection facility in Brooklyn. This provides DataVerge customers with access to Arelion’s portfolio of leading connectivity services, including high-speed IP Transit, Dedicated Internet Access (DIA), Cloud Connect, Global 40G Ethernet Virtual Circuit (VC), IPX, and DDoS Mitigation services. High availability is guaranteed by dual entry points and diverse paths into the DataVerge Datacenter.

https://www.arelion.com/about-us/press-releases/new-pop-in-new-york

Sunday, November 12, 2023

NTT demos low-latency transport and precision control on IOWN

NTT demonstrated  Sony's precision bilateral control technology running over low-latency transport through the IOWN All-Photonics Network.

The demo aimed at achieving precise remote manipulation with haptic feedback, unimpeded by the distance between separated locations. 

Some of the results of this research will be exhibited at this week’s NTT R & D Forum - IOWN ACCELERATION event.

Bilateral control technology allows a robot arm at a distant location to be synchronized with the operator's movements.

NTT provided uncompressed video transmission by mapping data directly from the SDI signal to SMPTE ST 2110 streams. This minimizes delay from video input on the sending side to video output on the receiving side to less than 1 millisecond. The demo also leverages RDMA acceleration technology, enabling direct memory-to-memory data transfer without CPU involvement.
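
To put the sub-millisecond figure in context, the rough arithmetic below (assuming a 1080p/60 source, which the release does not specify) shows that a full video frame takes far longer than the stated budget, implying the gateway forwards SDI lines into ST 2110 packets as they arrive rather than buffering whole frames:

```python
# Rough context for the <1 ms claim, assuming a 1080p/60 source with 1125
# total raster lines (assumed figures for illustration only).

frames_per_second = 60
total_lines_per_frame = 1125

frame_time_ms = 1000 / frames_per_second                      # ~16.7 ms per frame
line_time_us = frame_time_ms * 1000 / total_lines_per_frame   # ~14.8 us per line

print(f"frame time: {frame_time_ms:.1f} ms, line time: {line_time_us:.1f} us")
```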

https://group.ntt/en/newsrelease/2023/11/10/pdf/231110ba.pdf

Deutsche Telekom raises guidance on strong Q3

Deutsche Telekom posted third quarter revenue of 27.6 billion euros, up 0.7 percent in organic terms. High-margin service revenues were up 4.1 percent in organic terms.

Net profit was up 21.9 percent to 1.9 billion euros.

“In these uncertain times, Deutsche Telekom continues to grow unabated on both sides of the Atlantic,” said Tim Höttges, CEO of Deutsche Telekom. “We want our shareholders to participate in this positive development by way of a higher dividend.”

Deutsche Telekom raised its guidance for the third time this year. For the full year, the Group now expects adjusted EBITDA AL of around 41.1 billion euros and free cash flow AL of more than 16.1 billion euros, in each case 0.1 billion euros more than planned as of the midpoint of 2023. At the start of the year, expectations for adjusted EBITDA AL were still at around 40.8 billion euros, and for free cash flow AL, at more than 16 billion euros. Adjusted earnings per share are still expected to reach more than 1.60 euros.

Germany

In its home market, Deutsche Telekom recorded a very positive trend in its customer numbers and financial figures in the third quarter. The company once again led the market with 96,000 broadband net additions. Some 6.7 million Telekom consumers, or 45 percent, have now subscribed to lines offering bandwidths of 100 Mbit/s or higher. The MagentaTV customer base increased by 51,000 in the quarter to 4.3 million.

The new mobile rate plan structure continues to draw in strong numbers. Telekom recorded 350,000 branded contract customer additions between July and September. Telekom remains market leader in mobile service revenues, which were up 2.9 percent.

Deutsche Telekom has managed to increase its earnings in Germany in every quarter for seven years now. In the third quarter of 2023, adjusted EBITDA AL recorded organic growth of 3.1 percent year-on-year, increasing to 2.6 billion euros. At the same time, revenue increased by 2.1 percent in organic terms to 6.3 billion euros. 

U.S.

Between July and September, T-Mobile US recorded postpaid net additions of 1.2 million. The year-on-year decline is attributable to the deactivation of SIM cards issued to students during the coronavirus pandemic, which are now no longer required. In the postpaid phone customer segment, which is particularly important for service revenues, net customer additions were on a par with the prior-year level at 850,000. Both figures represent the best in the U.S. mobile industry. Another 557,000 users opted for the fixed-network high-speed internet substitute product in the third quarter, bringing the customer base for this offering to 4.2 million.

The company recorded organic year-on-year growth in service revenues of 4.7 percent in the quarter to reach 15.9 billion U.S. dollars. The key earnings indicator, adjusted core EBITDA, which eliminates effects from the planned withdrawal from the terminal equipment lease business, grew by 12.7 percent in organic terms to 7.3 billion U.S. dollars.

Europe

The Europe operating segment once again delivered strong financials. In organic terms, adjusted EBITDA AL increased by 3.3 percent year-on-year in the third quarter to 1.1 billion euros. Revenue generated by the European national companies increased by 3.7 percent in organic terms to 3.0 billion euros. This growth was primarily driven by organic growth of 5.2 percent in mobile service revenues.

Customer numbers in Europe also saw good growth. Mobile contract net adds totaled 223,000, the number of broadband lines increased by 76,000, and the number of TV customers by 52,000.

SpaceX launches SES’s Fifth and Sixth O3b mPOWER Satellites

SES confirmed the successful launch of its fifth and sixth O3b mPOWER satellites by a SpaceX Falcon 9 rocket from Cape Canaveral Space Force Station in Florida.

The duo completes the set of six medium Earth orbit (MEO) satellites required for SES to offer high-performance network services delivering high throughput, predictable low latency, unique flexibility and service availability.

The first four O3b mPOWER satellites launched in the last year have arrived at their target orbital position and are undergoing in-orbit checks, including a series of system validation tests encompassing both space and ground components. 

O3b mPOWER commercial service is expected to begin during the second quarter of 2024.

https://www.ses.com/press-release/sess-fifth-and-sixth-o3b-mpower-satellites-successfully-launched

  • Last month, SES announced it will add to the constellation two more satellites built by Boeing, bringing the total number of O3b mPOWER satellites to 13.

Softbank deploys new optical network in Japan with Fujitsu

SoftBank completed the nationwide deployment of an all-optical network in Japan using Fujitsu’s next-generation optical transmission platform, the “1FINITY Ultra Optical System T900”.

Fujitsu says the new network reduces power consumption by up to 90% compared with previous networks by interconnecting equipment compatible with all-optical technology and applying liquid cooling technology. 

The network has a maximum capacity of 48.8 Tbps over an optical pair.

SoftBank's IP routers are now equipped with coherent optical transceivers connected via the Fujitsu 1FINITY platforms.

Groq scales AI inference processing

Groq, an AI start-up based in Mountain View, California, that offers a Tensor Streaming Processor (TSP), announced a new performance bar of more than 300 tokens per second per user on Meta AI's Llama-2 70B LLM.

The benchmark was set using Groq’s Language Processing Unit (LPU) system. 

Jonathan Ross, CEO and founder of Groq commented, "When running LLMs, you can't accurately generate the 100th token until you've generated the 99th. An LPU™ system is built for the sequential and compute-intensive nature of GenAI language processing. Simply throwing more GPUs at LLMs doesn't solve for incumbent latency and scale-related issues. Groq enables the next level of AI."
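
Ross's point about the 99th and 100th tokens is the defining property of autoregressive decoding. The toy loop below (a generic sketch, not Groq's LPU software) shows why the steps cannot be parallelized across tokens, and what the 300 tokens-per-second figure implies per step:

```python
# Toy illustration of the sequential dependency in LLM generation: each new
# token is computed from all tokens generated so far, so the 100th token
# cannot be produced before the 99th. Generic sketch, not Groq's software.

def generate(model, prompt_tokens, n_new_tokens):
    tokens = list(prompt_tokens)
    for _ in range(n_new_tokens):
        next_token = model(tokens)   # depends on every previously generated token
        tokens.append(next_token)    # the next iteration cannot start until this one finishes
    return tokens

# At 300 tokens per second per user, each iteration of this loop must
# complete in roughly 1000 / 300 = 3.3 milliseconds.
```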

https://groq.com

  • Prior to founding Groq, Jonathan began what became Google’s Tensor Processing Unit (TPU) as a 20% project where he designed and implemented the core elements of the first generation TPU chip.

Trillion Parameter Consortium targets AI systems

A new Trillion Parameter Consortium (TPC) has been formed by federal laboratories, research institutes, academia, and industry to address the challenges of building large-scale artificial intelligence (AI) systems and advancing trustworthy and reliable AI for scientific discovery.

Training large language models (LLMs) at this scale requires exascale-class computing resources, such as those being deployed at several U.S. Department of Energy (DOE) national laboratories and at multiple TPC founding partners in Japan, Europe, and elsewhere. 

TPC will:

  • Build an open community of researchers interested in creating state-of-the-art large-scale generative AI models aimed broadly at advancing progress on scientific and engineering problems by sharing methods, approaches, tools, insights, and workflows.
  • Incubate, launch, and coordinate projects voluntarily to avoid duplication of effort and to maximize the impact of the projects in the broader AI and scientific community.
  • Create a global network of resources and expertise to facilitate the next generation of AI and bring together researchers interested in developing and using large-scale AI for science and engineering.

Founding partners of TPC:

  • AI Singapore
  • Allen Institute For AI
  • AMD
  • Argonne National Laboratory
  • Barcelona Supercomputing Center
  • Brookhaven National Laboratory
  • CalTech
  • CEA
  • Cerebras Systems
  • CINECA
  • CSC - IT Center for Science
  • CSIRO
  • ETH Zürich
  • Fermilab National Accelerator Laboratory
  • Flinders University
  • Fujitsu Limited
  • HPE
  • Intel
  • Juelich Supercomputing Center
  • Kotoba Technologies, Inc.
  • LAION
  • Lawrence Berkeley National Laboratory
  • Lawrence Livermore National Laboratory
  • Leibniz Supercomputing Centre
  • Los Alamos National Laboratory
  • Microsoft
  • National Center for Supercomputing Applications
  • National Institute of Advanced Industrial Science and Technology (AIST)
  • National Renewable Energy Laboratory
  • National Supercomputing Centre, Singapore
  • NCI Australia
  • New Zealand eScience Infrastructure
  • Northwestern University
  • NVIDIA
  • Oak Ridge National Laboratory
  • Pacific Northwest National Laboratory
  • Pawsey Institute
  • Princeton Plasma Physics Laboratory
  • RIKEN
  • Rutgers University
  • SambaNova
  • Sandia National Laboratories
  • Seoul National University
  • SLAC National Accelerator Laboratory
  • Stanford University
  • STFC Rutherford Appleton Laboratory, UKRI
  • Texas Advanced Computing Center
  • Thomas Jefferson National Accelerator Facility
  • Together AI
  • Tokyo Institute of Technology
  • Université de Montréal
  • University of Chicago
  • University of Delaware
  • University of Illinois Chicago
  • University of Illinois Urbana-Champaign
  • University of New South Wales
  • University of Tokyo
  • University of Utah
  • University of Virginia

https://www.anl.gov/article/new-international-consortium-formed-to-create-trustworthy-and-reliable-generative-ai-models-for


Thursday, November 9, 2023

Nokia powers SCinet with 1.6 Tbps optical backbone

Nokia will deploy its IP and optical networking gear to support SCinet, the multi-vendor network created to support the SC 2023 conference, which is slated for next week in Denver.

As part of the network, Nokia will deploy its:

  • Nokia 1830 PSI-M compact modular optical transport platform, which will deliver 1.6Tb/s of capacity to support distribution of live customer traffic on the SC23 show floor.  
  • Nokia 7750 SR-1x FP5-based routers, which will provide layer 3 network services and border router capabilities with port speeds ranging from 10GE to 800GE, and a system capacity of 6.0Tb/s full duplex.  
  • Nokia Data Center Fabric solution, consisting of 7220 IXR data center switches running on SR Linux, an open, extensible and programmable NOS, which will provide L2 access to exhibitor booths. 

Vach Kompella, Head of Nokia’s IP business, said: “We are excited to be part of SC 2023, the world’s most important supercomputing event, and contribute our high-performance portfolio to SCinet, the world’s fastest, most powerful and advanced live network. Our IP routing and optical transport equipment provides the capacity, resiliency, security and flexibility that this community requires to conduct projects and collaborate on a global scale – especially as the data sets involved grow ever larger and security remains a paramount concern.”


Adtran opens Terafactory in Germany

Adtran opened a new "Terafactory" in Meiningen, Germany.  

The company says the new facility brings production back to Central Europe, fortifies supply chain resilience and creates local jobs. The Terafactory also streamlines workflows and reduces resource consumption by harnessing advanced automation technologies. The move towards supply chain autonomy for Adtran’s core European market echoes a similar strategy in the US, where the company recently expanded its manufacturing facility in Huntsville, Alabama.

As part of the BMBF-sponsored 6G-Terafactory project, Adtran will deploy an Open RAN-based private mobile network across the campus, enabling automated processes and making the production of hardware, such as the company’s flagship FSP 3000 open optical transport platform, more efficient. Quality control is also simplified, as it can now be conducted by experts at the Meiningen site prior to distribution. And with its photovoltaic solar power system, Adtran is further reducing its carbon footprint as it moves towards energy self-sufficiency throughout the Terafactory. This significant initiative has been bolstered by a substantial investment from the Thuringian government.

“Our new Terafactory helps us mitigate against supply chain challenges like those we experienced during the Covid-19 pandemic. By enhancing the production and logistics side of our business, we’re not just reducing our dependency on third parties but also putting us in control of our own destiny. This strategic move makes us more responsive and resilient to shifting supply chain pressures,” said Christoph Glingener, CTO of Adtran. “Our new Terafactory generates a significant portion of the power it needs, making day-to-day operations more energy efficient. And by bringing the production of our world-leading optical transport technology back to Germany, we can more easily ensure precision and quality. What’s more, it will strengthen Europe’s position in optical transport technology, fostering regional innovation and setting new benchmarks for the industry worldwide.”


Arista unveils Zero Trust Networking Vision with Open API

Arista announced an expanded zero trust networking architecture that uses the underlying network infrastructure to break down security silos, streamline workflows and enable an integrated zero trust program. 

Arista’s strategy combines in-house developed technologies and strategic alliances with key partners to address the harder-to-implement zero trust controls across the domains of devices, workloads, identity, and data.

 The key components of this integrated security solution are:

  • Arista CloudVision AGNI greatly simplifies the secure onboarding and troubleshooting for users and devices, as well as ongoing posture analysis and network access control.
  • Arista Macro Segmentation Service (MSS) enables the creation and enforcement of microperimeters through edge switches that can protect or isolate each asset without requiring the deployment of firewalls all across the enterprise network. Segmentation policies can be defined once in Arista CloudVision and enforced dynamically based on real-time network, application, device, or user identity information.
  • Arista NDR autonomously discovers, profiles, and classifies every device, user, and application across the distributed network. Based on this deep understanding of the attack surface, the platform detects threats to and from these entities while providing the context necessary to respond rapidly.
  • Arista natively supports encryption capabilities such as MACsec and Tunnelsec, enabling organizations to encrypt data to and from legacy applications and workloads without changing those systems but instead relying on the network to protect data from unauthorized access, interception, and tampering.

The Arista zero trust architecture is designed to be open and API-friendly. 

Partners within the Arista zero trust ecosystem include Microsoft, CrowdStrike, and its newest partner, Zscaler. Arista is a member of the Microsoft Intelligent Security Association (MISA), having integrated with Microsoft’s security technology offerings.