Monday, August 5, 2019

Amdocs and Microsoft expand cloud alliance

Amdocs and Microsoft are expanding their alliance to help communication service providers (CSPs) with differentiated, cloud-based services.  Amdocs and Microsoft will collaborate across several domains, including Data and AI, NFV and virtualized networking, IoT (including eSIM) and Media.

Under the collaboration, SES, a global satellite operator, will be the first to deliver virtual network services, such as SD-WAN, orchestrated and managed using Amdocs NFV Powered by ONAP on Microsoft Azure. 

“As the communications and media industries merge, CSPs are jockeying to bring fresh, new offerings to their brand to retain and grow their customer base and gain market share,” said Gary Miles, chief marketing officer, Amdocs.  “With today’s expanded agreement, CSPs can now offer a one-stop shop of new and differentiated cloud services to drive growth, stickiness and value-add, while also streamlining operations, improving service agility and reducing complexity.”

Bob De Haven, general manager, worldwide media & communications industries at Microsoft Corp. said, “Amdocs and Microsoft have been working together for several years to enable and develop services to accelerate CSPs’ transformation to the cloud. Through this expansion of our work together, Microsoft and Amdocs will collaborate on new work across several of the industry’s most important growth drivers, including expanding into the media and entertainment business, leveraging artificial intelligence and evolving to open, cloud-based services.”

Amdocs also announced that its scalable, Hadoop-based data management platform and self-service visualization and reporting solution are now available hosted on Azure. CSPs can now bring real-time data from multiple sources, both cloud-based and on-premises, into a communications industry-specific data model, with best-practice, prebuilt reports and visualization capabilities.
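
As a rough illustration of the kind of consolidation such a platform performs (a minimal sketch only, not Amdocs code; the sources, table and column names below are hypothetical), the following Python fragment merges a cloud-based usage feed and an on-premises subscriber extract into one industry-style data model and derives a simple report:

```python
import pandas as pd

# Hypothetical cloud-based usage feed (e.g., streamed in near real time)
usage = pd.DataFrame({
    "subscriber_id": [101, 102, 101, 103],
    "service": ["SD-WAN", "IoT", "SD-WAN", "Media"],
    "gb_used": [12.4, 0.8, 7.1, 22.0],
})

# Hypothetical on-premises subscriber master extract
subscribers = pd.DataFrame({
    "subscriber_id": [101, 102, 103],
    "segment": ["Enterprise", "IoT fleet", "Consumer"],
    "region": ["EMEA", "APAC", "NA"],
})

# Conform both sources to a single, industry-style model keyed on subscriber
model = usage.merge(subscribers, on="subscriber_id", how="left")

# A prebuilt-style report: usage per segment and service
report = (model.groupby(["segment", "service"])["gb_used"]
               .sum()
               .reset_index())
print(report)
```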

http://www.amdocs.com

SES operationalizes ONAP for satellite services

SES will create an open, standards-based network automation and service orchestration platform, built on Open Network Automation Platform (ONAP) and powered by Amdocs’ network functions virtualization (NFV) technology for scalable, automated delivery of satellite-enabled network services on Microsoft Azure.

Specifically, SES is implementing ONAP with Amdocs on Microsoft Azure so that it can extend network services and activate virtualized network functions quickly and at scale.
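
To give a feel for what activating a service at scale looks like operationally, the sketch below instantiates an SD-WAN-style service through the ONAP Service Orchestrator's REST API. This is a minimal, illustrative example, not SES's or Amdocs' actual integration: the host, credentials and model identifiers are placeholders, and the exact API path and version segment vary by ONAP release.

```python
import requests

# All values below are placeholders; real deployments supply their own host,
# credentials and service-model UUIDs from the ONAP service catalog.
SO_HOST = "https://onap-so.example.com"
AUTH = ("so_user", "so_password")

payload = {
    "requestDetails": {
        "modelInfo": {
            "modelType": "service",
            "modelName": "sdwan_service",            # hypothetical SD-WAN service model
            "modelVersion": "1.0",
            "modelInvariantId": "REPLACE-WITH-INVARIANT-UUID",
            "modelVersionId": "REPLACE-WITH-VERSION-UUID",
        },
        "requestInfo": {
            "instanceName": "customer-sdwan-01",
            "source": "VID",
            "requestorId": "demo",
        },
        "requestParameters": {"subscriptionServiceType": "SD-WAN"},
    }
}

# ONAP's Service Orchestrator exposes service instantiation under a path of this
# general form; the version segment differs between releases.
resp = requests.post(
    f"{SO_HOST}/onap/so/infra/serviceInstantiation/v7/serviceInstances",
    json=payload,
    auth=AUTH,
)
print(resp.status_code, resp.text)
```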

SES is a founding member of Linux Foundation Networking (LFN), which hosts the ONAP project, an initiative with widespread adoption as the preferred platform for open network automation and orchestration. By standardizing on the same orchestration platform as leading telcos and mobile network operators, SES says it can make it easier and faster for its customers to deliver services over its high-performance satellite-based network.

“Our vision is to make satellite-based networks a seamless and wholly integrated part of a global, cloud-scale network ecosystem. Central to this vision is an open, automated operational environment that allows our customers to easily create and deliver new, innovative services anywhere,” said JP Hemingway, CEO of SES Networks. “To make our vision a reality, we are pleased to be the first satellite operator to develop ONAP with Amdocs on Microsoft Azure. SES envisions delivering cloud-scale connectivity services and virtualized network functions such as SD-WAN, virtualized Evolved Packet Core (vEPC), security and more, creating massive value for our customers well into the future.”


Intel ships FPGA acceleration card for HPE Gen10 servers

The new high-performance Intel FPGA Programmable Acceleration Card (Intel FPGA PAC) D5005 is now shipping in the HPE ProLiant DL380 Gen10 server.

The Intel FPGA PAC D5005 acceleration card, which is based on an Intel Stratix 10 SX FPGA, provides high-performance inline and lookaside workload acceleration to servers based on Intel Xeon Scalable processors using the Intel Acceleration Stack, which includes acceleration libraries and development tools. Initial workloads specifically developed for the Intel FPGA PAC D5005 accelerator card include:

  • AI (speech-to-text translation) from Myrtle
  • Network security from Algo-Logic
  • Image transcoding from CTAccel
  • Video transcoding from IBEX


Compared with the Intel programmable acceleration card with Intel Arria 10 GX FPGA, the Intel FPGA PAC D5005 accelerator card offers significantly more resources including three times the amount of programmable logic, as much as 32 GB of DDR4 memory (a 4x increase) and faster Ethernet ports (two 100GE ports versus one 40GE port). With a smaller physical and power footprint, the Intel PAC with Intel Arria 10 GX FPGA fits a broader range of servers, while the Intel PAC D5005 is focused on providing a higher level of acceleration.

“The HPE ProLiant Gen10 server family is the world’s most secure, manageable and agile server platform available on the market today. By integrating the Intel FPGA PAC D5005 accelerator into the HPE ProLiant DL380 Gen10 server, we are now delivering optimized configurations for an increasing number of workloads, including AI inferencing, big data and streaming analytics, network security and image transcoding. Combined with our broad portfolio of services from HPE Pointnext, we enable our customers to accelerate time-to-value and increase ROI,” stated Bill Mannel, vice president and general manager, HPC and AI, at Hewlett Packard Enterprise.

Intel launches FPGA-based accelerator for 5G core and vRAN

Intel is introducing an FPGA-based acceleration card for 5G core and virtualized radio access network solutions.

The Intel FPGA Programmable Acceleration Card N3000 is designed to accelerate network traffic at up to 100 Gbps and supports up to 9GB DDR4 and 144MB QDR IV memory for high-performance applications. The programmability and flexibility of an FPGA allow customers to create tailored solutions by utilizing reference IPs for networking function acceleration workloads such as vRAN, vBNG, vEPC, IPsec and VPP.

Affirmed Networks is using Intel’s FPGA PAC in a new solution for the 5G core network (CN)/evolved packet core – a 200 Gbps-per-server design that provides smart load balancing and CPU cache optimizations.

Rakuten, soon to be the operator of Japan’s newest mobile network, is using Intel x86 processors and FPGA-based PACs for acceleration from the core to the edge to deliver the first end-to-end cloud-native mobile network. The Intel FPGA PAC N3000 serves as the distributed unit accelerator alongside the Intel Xeon Scalable processor, where Layer 1 functions, such as forward error correction and fronthaul transmission, are offloaded onto an Intel FPGA.

NeoPhotonics posts revenue of $81.7M, adjusts for Huawei ban

NeoPhotonics reported Q2 revenue of $81.7 million, up 3% quarter-over-quarter and up 1% year-over-year. Gross margin was 19.2%, down from 19.8% in the prior quarter. Diluted net loss per share was $0.16, an improvement from a net loss of $0.30 per share in the prior quarter.

“Q2 was a volatile quarter for NeoPhotonics and I am proud of our team and their continued focus and execution to extend our leadership position in high-speed digital optoelectronics while making changes needed to adjust for the Huawei ban,” said Tim Jenks, NeoPhotonics Chairman and CEO. “Market drivers are well aligned with our advanced technologies and high-speed capabilities. These trends transcend the current Huawei ban and, coupled with continued demand from hyperscale data centers, we are optimistic about NeoPhotonics’ new product prospects,” concluded Mr. Jenks.

http://www.neophotonics.com

HPE acquires MapR assets for its intelligent data platform

Hewlett Packard Enterprise has acquired the business assets of MapR, a start-up that developed a data platform for artificial intelligence and analytics applications powered by scale-out, multi-cloud and multi-protocol file system technology. This transaction includes MapR’s technology, intellectual property, and domain expertise in artificial intelligence and machine learning (AI/ML) and analytics data management.  Financial terms were not disclosed.

HPE said it welcomes MapR customers and partners and plans to support existing deployments along with ongoing renewals.

“The explosion of data is creating a new era of intelligence where the winners will be the ones who harness the power of data, wherever it lives,” said Antonio Neri, president and CEO of Hewlett Packard Enterprise. “MapR’s file system technology enables HPE to offer a complete portfolio of products to drive artificial intelligence and analytics applications and strengthens our ability to help customers manage their data assets end to end, from edge to cloud.”

“At HPE, we are working to simplify our customers’ and partners’ adoption of artificial intelligence and machine learning,” said Phil Davis, president, Hybrid IT, Hewlett Packard Enterprise. “MapR’s enterprise-grade file system and cloud-native storage services complement HPE’s BlueData container platform strategy and will allow us to provide a unique value proposition for customers. We are pleased to welcome MapR’s world-class team to the HPE family.”

Western Digital intros data center NVMe SSDs

Western Digital unveiled two new 96-layer 3D flash NVMe SSD families for enterprise data centers.

The Ultrastar DC SN640 family is optimized for extreme performance in mixed-workload applications such as SQL Server, MySQL, virtual desktops, and other business-critical workloads running on hyperconverged infrastructure (HCI) such as VMware vSAN and Microsoft Azure Stack HCI. It delivers 2x the performance in sequential writes compared to its predecessor. Supporting a variety of system designs, the new family comes in three form factors and offers a broad range of capacity points up to 30.72TB.

The Ultrastar DC SN340 Gen3 x4 PCIe SSD is optimized for power efficiency and a low heat signature, drawing less than 7W at full performance. It is ideal for very read-intensive workloads such as warm storage and other applications that write in large block sizes. These include content delivery networks (CDN) and video caching, where data is written in large sequential blocks and which benefit significantly from the high bandwidth of Gen3 x4 and the low read latency of NVMe. Distributed NoSQL databases like Apache Cassandra® and MongoDB® can also take advantage of the large-block write characteristics of the drive. The Ultrastar DC SN340 comes in capacities of up to 7.68TB. The drive will be sampling to select customers this quarter.
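
To make that write pattern concrete, the short Python sketch below writes data in large, sequential blocks, the access pattern CDN and caching tiers tend to generate. The file path, block size and total size are arbitrary choices for illustration, not vendor guidance.

```python
import os

BLOCK_SIZE = 1024 * 1024            # 1 MiB blocks: large, sequential writes
NUM_BLOCKS = 64                     # 64 MiB total for the sketch
path = "/tmp/large_block_demo.bin"  # arbitrary location

block = os.urandom(BLOCK_SIZE)

# Sequential, large-block writes keep the drive streaming at high bandwidth
# instead of forcing many small random I/Os.
with open(path, "wb") as f:
    for _ in range(NUM_BLOCKS):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())

print(f"wrote {NUM_BLOCKS * BLOCK_SIZE // (1024 * 1024)} MiB sequentially to {path}")
```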

“Customers are rapidly transitioning to a variety of purpose-built NVMe storage solutions to improve storage performance, efficiency, density and overall TCO,” said Eyal Bek, vice president of product marketing for Enterprise Devices at Western Digital. “It’s no longer a one-size-fits-all world. Our Ultrastar NVMe SSDs are based on our deep understanding of evolving workloads and trends within the data center and are aligned to our proven and reliable 96L NAND nodes. We take pride in knowing that our new Ultrastar DC SN640 and Ultrastar DC SN340 SSDs are optimized to support the purpose-built workloads and data volume demands of today, while laying the foundation for the future of zettabyte scale.”

Lenovo and Intel enter multiyear alliance

Intel and Lenovo announced a multiyear collaboration focused on the convergence of high-performance computing (HPC) and artificial intelligence (AI).

The collaboration plans to focus on three areas:

  • Systems and solutions: bringing together Lenovo TruScale Infrastructure and Intel technologies, including Intel Xe computing architecture; Intel Optane™ DC persistent memory; Intel oneAPI programming framework; and both current and future generations of Intel Xeon Scalable processors.
  • Software optimization for HPC and AI convergence: A key focus area will be building out Lenovo’s smarter software offerings, including optimizing Lenovo’s LiCO HPC/AI software stack for Intel’s next-generation technologies, and alignment with the Intel oneAPI programming framework. Additionally, the collaboration will work to enable DAOS advanced storage frameworks and other exascale-class software optimizations, targeted at helping HPC and AI users run their applications with greater ease than before.
  • Ecosystem enablement: Additionally, Intel and Lenovo plan to partner to help create the new ecosystem for the convergence of HPC and AI. This includes building joint “HPC & AI centers of excellence” around the world to further enable research and university centers to develop solutions that address some of the most pervasive world challenges, including genomics, cancer, weather and climate, space exploration and more.

“Our goal is to further accelerate innovation into the Exascale era, aggressively waterfalling these solutions to scientists and businesses of all sizes to speed discovery and outcomes. We are passionate in helping researchers solve humanity’s greatest challenges,” said Kirk Skaugen, executive vice president of Lenovo and president of Lenovo Data Center Group. “Lenovo’s Neptune™ liquid cooling, in combination with the 2nd Gen Intel Xeon Scalable platform, helps customers unlock new insights and deliver unprecedented outcomes at new levels of energy efficiency.”

Toshiba intros highest-performing NAND

Toshiba Memory America launched a new Storage Class Memory (SCM) called "XL-FLASH" that is based on its BiCS FLASH 3D flash memory and sits between DRAM and NAND flash in the memory hierarchy.

The new XL-FLASH is designed for low latency and high performance in data center and enterprise storage. Sample shipments will start in September, with mass production expected to begin in 2020. XL-FLASH will initially be deployed in an SSD format but could be expanded to memory channel attached devices that sit on the DRAM bus, such as future industry standard non-volatile dual in-line memory modules (NVDIMMs).

Key Features

  • 128 gigabit (Gb) die (in a 2-die, 4-die, 8-die package)
  • 4KB page size for more efficient operating system reads and writes
  • 16-plane architecture for more efficient parallelism
  • Fast page read and program times. XL-FLASH provides a low read latency of less than 5 microseconds, approximately 10 times faster than existing TLC NAND.