Tuesday, July 16, 2024

Juniper opens Ops4AI Lab in Silicon Valley

Juniper Networks is opening a multivendor lab dedicated to validating end-to-end automated AI Data Center solutions. This lab will support automated operations with switching, routing, storage, and compute solutions from leading vendors. Juniper is also introducing new Juniper Validated Designs (JVDs) to accelerate the deployment of AI clusters, along with key software enhancements to optimize AI workloads over Ethernet.

A core component of Juniper’s AI-Native Networking Platform, the Networking for AI solution features a spine-leaf data center architecture with AI-optimized 400G and 800G QFX Series Switches and PTX Series Routers. Managed by Juniper Apstra and Marvis Virtual Network Assistant (VNA), this solution enhances AI workload performance through intent-based networking, multivendor switch management, and proactive AIOps actions. Juniper’s solution aims to reduce AI training job completion times (JCTs), lower latency, increase GPU utilization, and decrease deployment and operational costs significantly.
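For readers unfamiliar with the term, a spine-leaf fabric is a two-tier Clos topology in which every leaf switch connects to every spine switch. The sketch below illustrates only the basic sizing arithmetic of such a fabric; the port counts and radix are hypothetical examples, not the specifications of any QFX or PTX product.

```python
# Rough sizing sketch for a generic two-tier (spine-leaf) Clos fabric.
# All numbers are hypothetical examples, not Juniper product specifications.

def size_fabric(leaf_ports: int, spine_ports: int, downlinks_per_leaf: int):
    """Return (num_leaves, num_spines, endpoints) for a non-blocking fabric.

    leaf_ports: total ports per leaf switch
    spine_ports: total ports per spine switch
    downlinks_per_leaf: leaf ports reserved for servers/GPUs
    """
    uplinks_per_leaf = leaf_ports - downlinks_per_leaf
    if uplinks_per_leaf < downlinks_per_leaf:
        raise ValueError("fabric would be oversubscribed")
    num_spines = uplinks_per_leaf      # one uplink from each leaf to each spine
    num_leaves = spine_ports           # each spine connects to every leaf
    endpoints = num_leaves * downlinks_per_leaf
    return num_leaves, num_spines, endpoints

# Example: hypothetical 64-port 800G switches, split 32 down / 32 up per leaf.
leaves, spines, gpus = size_fabric(leaf_ports=64, spine_ports=64, downlinks_per_leaf=32)
print(f"{leaves} leaves x {spines} spines -> {gpus} GPU-facing 800G ports")
```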

Highlights:

  • Fabric Autotuning for AI: Uses telemetry from routers and switches to optimize congestion control automatically, enhancing AI workload performance (a simplified sketch of this feedback loop follows the list).
  • Global Load-Balancing: Provides real-time load-balancing of AI traffic to reduce latency, improve network utilization, and decrease JCTs.
  • End-to-End Visibility: Offers a holistic view of the network, including SmartNICs from Nvidia and others.
  • Ops4AI Lab: Located in Sunnyvale, CA, this lab allows customers and partners to test AI workloads with advanced GPU compute, storage technologies, Ethernet-based networking fabrics, and automated operations.
  • Juniper Validated Designs (JVDs): These detailed implementation documents offer pre-validated blueprints for AI data centers, ensuring faster, more reliable deployments.
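The fabric autotuning item above describes a telemetry-driven feedback loop: the management layer watches congestion signals from the switches and adjusts congestion-control parameters accordingly. The sketch below is a deliberately simplified illustration of that loop; the telemetry fields, thresholds, and tuning rule are invented for the example and do not reflect Juniper's or Apstra's actual implementation.

```python
# Minimal, hypothetical sketch of telemetry-driven congestion-control tuning.
# Not Juniper's algorithm: the telemetry fields and tuning rule are invented
# purely to illustrate the feedback loop described above.

from dataclasses import dataclass

@dataclass
class SwitchTelemetry:
    queue_depth_pct: float   # average egress queue occupancy, 0-100
    ecn_marked_pct: float    # share of packets ECN-marked, 0-100
    pfc_pause_events: int    # priority flow control pauses in the interval

def tune_ecn_threshold(current_kb: int, samples: list[SwitchTelemetry]) -> int:
    """Nudge the ECN marking threshold based on recent fabric telemetry."""
    avg_queue = sum(s.queue_depth_pct for s in samples) / len(samples)
    pauses = sum(s.pfc_pause_events for s in samples)
    if pauses > 0 or avg_queue > 70:
        # Congestion building: mark earlier so senders back off sooner.
        return max(100, int(current_kb * 0.8))
    if avg_queue < 20:
        # Fabric under-utilized: relax marking to keep throughput high.
        return min(4000, int(current_kb * 1.2))
    return current_kb

# Example interval of (invented) telemetry from three leaf switches.
window = [SwitchTelemetry(82.0, 12.5, 3),
          SwitchTelemetry(76.0, 9.1, 1),
          SwitchTelemetry(68.0, 4.0, 0)]
print("new ECN threshold (KB):", tune_ecn_threshold(current_kb=1000, samples=window))
```

In a real fabric the adjusted values would be pushed back to the switches through the management system rather than printed.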

The Ops4AI Lab represents Juniper’s commitment to openness and collaboration, moving AI Data Centers from early adoption to mass market deployment. The lab includes participation from partners like Broadcom, Intel, Nvidia, and WEKA, providing a platform for testing and validating AI workloads.

Juniper’s new software enhancements and JVDs further simplify AI cluster deployment and maximize network performance, offering unique value to customers looking to optimize their AI infrastructure.

Tech Update: What can AIOps actually do for networks?

How long will it be before AI becomes not just an add-on, but a transformative force enhancing every aspect of networking, from design and deployment to management and optimization? What can AIOps do today? Jean English, Chief Marketing Officer at Juniper Networks, explains:

  • The importance of starting with experience-first questions to enhance the experience for both the end user and the operator, and how AI plays a crucial role in this process.
  • The...

Tech Update: Can AI proactively predict the user experience?

It is the dream of the CIO that someday IT problems will fix themselves before the user notices. How far are we from this bright future? Sudheer Matta, Group VP of Product Management, AI-driven Enterprise at Juniper, explains:

  • The introduction of Marvis Minis, a proactive AI tool that anticipates and resolves network issues before they affect the user experience.
  • The concept of a 'digital twin' of the user experience, built natively into the...

CoreSite's 2024 State of the Data Center report

CoreSite has published its 2024 State of the Data Center Report, which explores the latest trends, strategies, and requirements in data centers and cloud computing. Now in its fifth year, the report highlights a consensus among IT leaders on the increasing variety of data management and processing needs, necessitating highly customized IT solutions.

Despite high confidence in the economy among C-suite executives, a volatile, uncertain, complex, and ambiguous (VUCA) environment has led many business leaders to proceed cautiously with their IT and data strategies. The emphasis is on cost control, predictability, flexibility, and risk management. This approach must also accommodate the growing volume of resource-intensive AI and high-density workloads essential for growth and innovation. Consequently, there is a significant shift towards hybrid IT ecosystems, with 98% of organizations having adopted or planning to adopt a hybrid model using colocation, private cloud, and public cloud to manage their workloads.

Key Insights from the 2024 Report:

  • Cloud Interconnection: Direct connections to the cloud and interconnection systems are crucial for transferring large-scale data efficiently. Nearly half of the surveyed workloads use colocation primarily for cloud interconnection, though only 31% of respondents report their current colocation provider offers interconnection to a variety of cloud providers. Additionally, 95% of respondents consider native, direct connections to major cloud providers important, with 69% rating it as very important.
  • Shift from Public Cloud: Organizations are increasingly valuing “cloud smart” hybrid IT infrastructures over an “all-in” cloud approach due to cost, performance, and compliance considerations. Many survey participants are considering moving from public cloud to colocation for various workloads, especially for generative AI applications, BI/analytics, and IoT connectivity and management. The use of public cloud is trending down across all workloads compared to 2023.
  • AI Driving Hybrid IT Adoption: The rising use of AI, which demands more computing resources and high data volumes, is prompting IT leaders to re-evaluate their hosting options. The 2024 report shows a shift of AI-specific workloads from on-prem environments to colocation data centers. Over three-quarters of respondents are considering moving AI-related workloads from public cloud to colocation, including GenAI applications (91%), chatbots (81%), predictive analytics (79%), and augmented AI applications (76%).

The 2024 State of the Data Center report is based on a quantitative survey of 300 CIOs, CTOs, and other IT decision-makers, along with in-depth interviews with senior technology executives from various sectors. The research was conducted by Foundry, an IDG, Inc. company.



Dell'Oro: AI buildouts expanding data center switch market by 50%

Spending on switches deployed in AI back-end networks is forecast to expand the data center switch market by 50 percent, according to a new report from Dell'Oro Group. Current data center switch spending goes mostly to front-end networks used primarily to connect general-purpose servers; AI workloads will require a new back-end infrastructure buildout. Competition between InfiniBand and Ethernet is intensifying as manufacturers vie for dominance in AI back-end networks. While InfiniBand is expected to maintain its lead, Ethernet is forecast to make substantial gains, on the order of 20 revenue-share points by 2027.

"Generative AI applications usher in a new era in the age of AI, standing out for the sheer number of parameters that they have to deal with," said Sameh Boujelbene, Vice President at Dell'Oro Group. "Several large AI applications currently handle trillions of parameters, with this count increasing tenfold annually. This rapid growth necessitates the deployment of thousands or even hundreds of thousands of accelerated nodes. Connecting these accelerated nodes in large clusters requires a data center-scale fabric, known as the AI back-end network, which differs from the traditional front-end network used mostly to connect general-purpose servers.

"This predicament poses the pivotal question: what is the most suitable fabric that can scale to hundreds of thousands and potentially millions of accelerated nodes while ensuring the lowest Job Completion Time (JCT)? One could argue that Ethernet is one speed generation ahead of InfiniBand. Network speed, however, is not the only factor. Congestion control and adaptive routing mechanisms are also important. We analyzed AI back-end network build-outs by the major Cloud Service Providers (such as Google, Amazon, Microsoft, Meta, Alibaba, Tencent, ByteDance, Baidu, and others) as well as various considerations driving their choices of the back-end fabric to develop our forecast," continued Boujelbene.

Additional highlights from the AI Networks for AI Workloads Report:

  • AI networks will accelerate the transition to higher speeds. For example, 800 Gbps is expected to comprise the majority of the ports in AI back-end networks by 2025, within just two years of the latest 800 Gbps product introduction.
  • While most of the market demand will come from Tier 1 Cloud Service Providers, Tier 2/3 providers and large enterprises are forecast to be significant, approaching $10 billion over the next five years. This latter group is expected to favor Ethernet.

https://www.delloro.com/news/ai-back-end-networks-to-drive-80-b-of-data-center-switch-spending-over-the-next-five-years/

New Plans for 324 MW Hyperscale Campus in Atlanta

TA Realty LLC and EdgeConneX have announced a joint venture to develop a 324 MW hyperscale data center campus in Atlanta, GA. This project marks a significant step in addressing the growing demand for high-performance computing infrastructure driven by advancements in AI, cloud services, and other emerging technologies. This collaboration combines TA Realty’s real estate acumen with EdgeConneX’s data center expertise to deliver a state-of-the-art facility designed to meet the requirements of hyperscale customers.

TA Realty, through its TA Digital Group, will manage the site acquisition, power procurement, and secure necessary utilities and permits, leveraging its extensive real estate expertise and deep market knowledge in Atlanta. EdgeConneX will apply its comprehensive experience in designing, building, and operating data centers to ensure the project’s success, aiming for an on-time and on-budget delivery.

Key Points:

  • Project Scope: Development of a 324 MW hyperscale data center campus.
  • Location: Atlanta, GA, strategically positioned in a key sub-market.
  • Construction Start: Later this year, with the first phase operational by 2026.
  • TA Realty’s Role: Site acquisition, power procurement, securing utilities, zoning approvals, permits, and entitlements.
  • EdgeConneX’s Role: Designing, building, and operating the data center.
  • Strategic Importance: Supports the demand for low latency and high-performance computing driven by AI and cloud services.

www.edgeconnex.com

EchoStar opens Open RAN Center for Integration and Deployment

EchoStar has inaugurated the Open RAN Center for Integration and Deployment (ORCID) at its data center in Cheyenne, Wyoming. This cutting-edge lab, supported by a $50 million grant from the U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) Public Wireless Supply Chain Innovation Fund, allows vendors to test and validate O-RAN solutions using EchoStar’s live commercial-grade cloud-native Open RAN network.

ORCID provides a “living laboratory” environment, facilitating the development, deployment, and adoption of open and interoperable standards-based radio access networks. Managed by EchoStar, the ORCID consortium includes partners like Fujitsu, Mavenir, and VMware by Broadcom. Together, they have validated O-RAN technology at scale, building a 5G network that now provides connectivity to over 240 million Americans.

Key Points:

  • Facility Location: Cheyenne, Wyoming data center.
  • Funding: $50 million grant from NTIA’s Innovation Fund.
  • Launch Timing: The lab opened six months after the grant was announced by NTIA Administrator Alan Davidson and Innovation Fund Director Amanda Toman.
  • Consortium Partners: Includes Fujitsu, Mavenir, VMware by Broadcom, among others.
  • Network Reach: O-RAN 5G network providing connectivity to more than 240 million Americans.
  • Purpose: To drive the O-RAN ecosystem from lab to commercial deployment through real field test setups.

"The Open RAN Center for Integration and Deployment (ORCID) is now open for business. We appreciate the trust and partnership of NTIA in this effort, which includes a historic $50 million grant from the Innovation Fund," said Charlie Ergen, co-founder and chairman, EchoStar. "ORCID represents a significant milestone in both EchoStar and the U.S.'s journey to drive and lead the adoption of open and interoperable radio access networks. We look forward to the groundbreaking advancements expected to emerge from this initiative."

https://www.orcid.us/home

Vectara adds $25 million for LLMs without hallucinations

Vectara, a start-up based in Palo Alto, California, closed a $25 million Series A round for its Generative AI product platform aimed at advancing the state of Retrieval Augmented Generation (RAG) as a Service for regulated industries.

The new funding was led by FPV Ventures and Race Capital. Additional investors include Alumni Ventures, WVV Capital, Samsung Next, Fusion Fund, Green Sands Equity, and Mack Ventures. This funding round, combined with last year’s $28.5 million seed funding round, brings the total funding to $53.5 million.

In addition, Vectara introduced Mockingbird, a fine-tuned generative Large Language Model (LLM) designed for Retrieval-Augmented Generation (RAG) applications. Mockingbird aims to reduce hallucinations and improve structured output, offering reliable performance with low latency and cost efficiency. It is particularly beneficial for regulated industries such as health, legal, finance, and manufacturing, where accuracy, security, and explainability are crucial.

Mockingbird, combined with Vectara’s Hughes Hallucination Evaluation Model (HHEM), excels at producing structured outputs, which are essential for integrating AI with downstream systems and autonomous agents. Sunir Shah, Founder of HuckAI, praised Mockingbird for providing clearer, more direct responses that enhance user productivity. According to Vectara, Mockingbird surpasses GPT-4 by 26% in Bert-F1 for RAG output quality and integrates within Vectara’s ecosystem without third-party dependencies.
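For readers unfamiliar with the RAG pattern that Mockingbird targets, the sketch below shows the basic retrieve-then-generate flow in generic form. It is not Vectara's API: the corpus, the toy retriever, and the prompt-building stub are placeholders, and a production system would use a real vector index and an actual LLM call.

```python
# Generic Retrieval-Augmented Generation (RAG) flow, for illustration only.
# This is not Vectara's API; the retriever and prompt builder are toy stand-ins.

import re
from collections import Counter

CORPUS = {
    "doc1": "The policy covers water damage caused by burst pipes.",
    "doc2": "Claims must be filed within 30 days of the incident.",
    "doc3": "Fire damage is covered under section 4 of the policy.",
}

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (toy retriever)."""
    q = Counter(tokenize(query))
    scored = sorted(CORPUS.values(),
                    key=lambda doc: -sum(q[w] for w in tokenize(doc)))
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Stand-in for the generation step: a real RAG system sends this prompt to
    an LLM instructed to answer only from the retrieved passages, which is the
    main lever for reducing hallucinations."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only these passages:\n{context}\nQuestion: {query}"

question = "When must claims be filed?"
print(build_prompt(question, retrieve(question)))
```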

“The recent $25 million Series A funding will enable us to further innovate and expand our offerings, ensuring we continue to lead the way in trusted generative AI technology,” said Amr Awadallah, Co-Founder and CEO of Vectara. “With Mockingbird, we’re not just pushing the boundaries of AI trustworthiness; we’re empowering regulated industries to leverage reliable AI solutions with confidence, paving the way for a future where AI can be a dependable partner in mission-critical tasks.”

About the Founders of Vectara


Dr. Amr Awadallah - Co-Founder & CEO: Before co-founding Vectara, Amr was the VP of Developer Relations for Google Cloud. He co-founded Cloudera, where he developed enterprise tools for big data. Previously, he was VP of Product Intelligence Engineering at Yahoo, which acquired his first startup, Aptiva, a search engine company. Amr holds a PhD in Electrical Engineering from Stanford University and an MA from Cairo University.

Amin Ahmad - Co-Founder & CTO: Amin spent a decade as a senior engineer at Google Research, where he led the development of question-answering and neural information retrieval systems. With 20 years of search industry experience, he has worked with Fortune 500 companies, startups, and government entities. Amin holds a BS in Computer Science and Mathematics from Bowling Green State University.

Dr. Tallat Shafaat - Co-Founder & Chief Architect: Tallat was a Senior Software Engineer for Google Search and Google Ads before founding Vectara. As part of the Google Knowledge Graph indexing team, he designed systems that processed petabytes of data and handled up to 200,000 queries per second. Tallat holds a PhD in Distributed Systems from KTH Royal Institute of Technology in Sweden.


https://vectara.com/

MEF announces 2024-2025 Board of Directors

MEF announced its newly appointed 2024-2025 Board of Directors:

  • Debika Bhattacharya, Chair, Chief Technology Officer, Verizon Business
  • Franck Morales, Secretary, Senior Vice President, Marketing & Business Development, Orange Wholesale International
  • Bob Victor, Treasurer, Senior Vice President of Product Management, Comcast Business
  • Colin Bannon, Chief Technology Officer, BT Business
  • Nan Chen, Chief Executive Officer, MEF
  • Paul Gampe, Chief Technology Officer, Console Connect by PCCW Global
  • Shawn Hakl, Vice President, 5G Strategy, Microsoft
  • Silke Hoesch, Senior Vice President Wholesale, Telekom Deutschland
  • Daniele Mancuso, Chief Marketing & Product Management, Sparkle
  • Mike Troiano, Senior Vice President, Product & Pricing, AT&T Business
  • Mirko Voltolini, Vice President, Technology & Innovation, Colt
  • Dave Ward, Chief Technology Officer, Lumen

MEF Officers

  • Nan Chen, Chief Executive Officer, MEF 
  • Kevin Vachon, Chief Operating Officer, MEF 
  • Pascal Menezes, Chief Technology Officer, MEF 
  • Daniel Bar-Lev, Chief Product Officer, MEF 
  • Sunil Khandekar, Chief Enterprise Development Officer, MEF

“We’re excited to welcome these industry trailblazers to MEF’s Board of Directors,” said Nan Chen, CEO, MEF. “Their diverse expertise and innovative perspectives will be crucial as we advance our global digital transformation agenda and focus on automating the full lifecycle of multi-provider NaaS services. This new leadership brings fresh perspectives that will accelerate network innovation and strengthen MEF’s ability to power the digital economy through collaborative, standards-based solutions.” 

https://www.mef.net

Viavi launches NITRO Fiber Sensing for critical infrastructure monitoring

Viavi Solutions introduced NITRO Fiber Sensing, an advanced real-time asset monitoring and analytics solution designed to safeguard critical infrastructure such as oil, gas, and water pipelines, electrical power transmission, border and perimeter security, and data center interconnects.

NITRO Fiber Sensing integrates Distributed Temperature Sensing (DTS), Distributed Temperature and Strain Sensing (DTSS), and Distributed Acoustic Sensing (DAS) technologies. These technologies collectively provide essential intelligence to quickly identify and pinpoint threats to infrastructure.

The solution employs remote Fiber Test Heads (FTH), or interrogators, to monitor fiber optic cables and fiber-enabled infrastructure. These FTHs perform real-time distributed fiber optic sensing, measuring temperature and strain along the fiber and detecting acoustic vibrations nearby. Strategically deployed along power cables and pipelines, FTHs offer valuable data on infrastructure health, enabling proactive maintenance and preventing downtime.

Operators receive alerts about potential threats such as human interference, vehicle movement, digging operations, and fishing nets or ship anchors encroaching on assets. These alerts include precise location information, aiding maintenance, response, and repair teams in addressing issues promptly.
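As a rough illustration of how distributed acoustic sensing can localize a disturbance along a fiber, the sketch below flags unusually energetic channels in a synthetic acoustic trace and converts each channel index to a distance. The channel spacing, threshold, and data are invented for the example; this is not Viavi's processing pipeline.

```python
# Toy distributed acoustic sensing (DAS) example: flag unusually energetic
# channels and report their distance along the fiber. The data, channel
# spacing, and threshold are synthetic; this is not Viavi's implementation.

import random

CHANNEL_SPACING_M = 10.0      # hypothetical gauge: one channel every 10 m
ENERGY_THRESHOLD = 5.0        # hypothetical alert threshold

def detect_events(energies: list[float]) -> list[tuple[float, float]]:
    """Return (distance_m, energy) for channels exceeding the threshold."""
    return [(i * CHANNEL_SPACING_M, e)
            for i, e in enumerate(energies) if e > ENERGY_THRESHOLD]

# Synthetic trace: quiet fiber with a disturbance (e.g. digging) near 1.2 km.
random.seed(0)
trace = [random.uniform(0.0, 1.0) for _ in range(500)]
trace[120] = 9.3
trace[121] = 7.8

for distance, energy in detect_events(trace):
    print(f"ALERT: disturbance at {distance/1000:.2f} km (energy {energy:.1f})")
```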

https://www.viavisolutions.com/en-us/news-releases/viavi-introduces-sensing-solutions-fiber-optic-cables-and-fiber-enabled-critical-infrastructure