
Wednesday, August 17, 2022

Napatech milestone: 350,000 SmartNIC ports shipped

Napatech announced a significant company milestone: 350,000 programmable SmartNIC port shipments to date.

Napatech notes that annual demand for data center servers is forecast to grow from 12 million to 18 million units. Each server requires connectivity from network interface cards (NICs), with anywhere from two to eight NIC ports per server. This massive increase in connectivity underpins the projected growth of the NIC market from $2.6 billion in 2021 to $7 billion in 2026. The fastest-growing segment within the NIC market is programmable SmartNICs.
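As a back-of-the-envelope check (an illustrative calculation, not part of Napatech's announcement), the compound annual growth rate implied by that market forecast works out to roughly 22% per year:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by growth from start to end over years."""
    return (end / start) ** (1 / years) - 1

# NIC market forecast: $2.6B in 2021 growing to $7B in 2026 (5 years)
implied = cagr(2.6, 7.0, 5)
print(f"Implied CAGR: {implied:.1%}")  # roughly 22% per year
```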

Napatech's SmartNICs have won more than 400 customers globally.

Jarrod Siket, chief marketing officer, Napatech, said: "Hyperscale cloud operators were the early adopters of SmartNICs and chose FPGA-based designs to overcome their most complex networking challenges. As a result, more than 70% of all SmartNIC ports deployed globally are based on FPGAs. Napatech is building upon this proven architecture to make those same solutions available to the next wave of cloud, telco and enterprise datacenter operators who are fueling the monumental growth within the programmable SmartNIC market."

https://www.napatech.com

SmartNIC Offload for 5G User Plane Function

Optimizing server utilization is important to carriers as they look to maximize the ROI of their network infrastructure. In this video, Charlie Ashton, Senior Director of Business Development at Napatech, discusses the business benefits of offloading the 5G user plane function (UPF) to dedicated accelerators like SmartNICs and details Napatech’s UPF offload solution. Download the 2022 SmartNICs and Infrastructure Acceleration Report:...





Wednesday, August 10, 2022

Dell'Oro: Smart NICs to drive Ethernet adapter market to $5B by 2026

Driven by SmartNICs, the Ethernet Controller and Adapter market is expected to reach $5 billion in 2026, according to a new report from Dell'Oro Group. Server network connectivity will transition to higher speeds, with 100 Gbps and higher-speed ports accounting for 44 percent of shipments in five years.

“We predict Smart NICs will account for 38 percent of the total Ethernet Controller and Adapter market by 2026,” said Baron Fung, Research Director at Dell’Oro Group. “Smart NICs will displace traditional NICs for most of the hyperscale cloud infrastructure for general-purpose and high-end workloads such as accelerated computing. There are also opportunities for Smart NICs in the Tier 2 Cloud, Enterprise and Telco segments, with compelling use cases such as network protocol offloads, distributed storage, and virtualized network security applications. However, vendors would first need to address cost-of-ownership and implementation challenges before we see broader Smart NIC adoption outside of the hyperscale cloud market,” added Fung.

Additional highlights:

  • Total Ethernet Controller and Adapter market revenue is forecast to grow 10 percent by 2026.
  • 100 and 200 Gbps will be the dominant server port speeds for the Top 4 US Cloud SPs—Amazon, Google, Meta, and Microsoft—over the next five years.
  • Smart NIC revenues are projected to grow at a 21 percent compound annual growth rate over five years, compared to 5 percent growth for traditional NICs.

https://www.delloro.com/news/smart-nics-to-drive-ethernet-adapter-market-to-5-billion-by-2026/

Saturday, June 25, 2022

Video: Emulation fabrics for SmartNICs


https://youtu.be/feIX3PpgbbQ

Builders of advanced data center networks powered by SmartNICs will be interested in understanding traffic patterns and measuring performance of their systems. 

Rezvan Stan, Senior Engineering Manager, Keysight, talks about the emulation of complex data center topologies.

Get up to speed on SmartNICs by downloading our complimentary 2022 SmartNICs and Infrastructure Acceleration Report https://ngi.how/ia-2022

Wednesday, June 22, 2022

Linux Foundation kicks off Open Programmable Infrastructure Project

The Linux Foundation has launched a new Open Programmable Infrastructure (OPI) Project aimed at establishing an open ecosystem for next-generation architectures and frameworks based on DPU and IPU technologies.

The OPI Project aims to define the architecture and frameworks for the DPU and IPU software stacks that can be applied to any vendor’s hardware offerings. The OPI Project also aims to foster a rich open source application ecosystem, leveraging existing open source projects, such as DPDK, SPDK, OvS, P4, etc., as appropriate.  

The project intends to:

  • Define DPU and IPU, 
  • Delineate vendor-agnostic frameworks and architectures for DPU- and IPU-based software stacks applicable to any hardware solutions, 
  • Enable the creation of a rich open source application ecosystem,
  • Integrate with existing open source projects aligned to the same vision such as the Linux kernel, and, 
  • Create new APIs for interaction with, and between, the elements of the DPU and IPU ecosystem, including hardware, hosted applications, host node, and the remote provisioning and orchestration of software

With several working groups already active, the initial technology contributions will come in the form of the Infrastructure Programmer Development Kit (IPDK) that is now an official sub-project of OPI governed by the Linux Foundation. IPDK is an open source framework of drivers and APIs for infrastructure offload and management that runs on a CPU, IPU, DPU or switch. 

In addition, NVIDIA DOCA, an open source software development framework for NVIDIA’s BlueField DPU, will be contributed to OPI to help developers create applications that can be offloaded, accelerated, and isolated across DPUs, IPUs, and other hardware platforms. 

Founding members of OPI include Dell Technologies, F5, Intel, Keysight Technologies, Marvell, NVIDIA and Red Hat, with a growing number of contributors representing leading companies in fields ranging from silicon and device manufacturers, ISVs, test and measurement partners, and OEMs to end users. 

“When new technologies emerge, there is so much opportunity for both technical and business innovation but barriers often include a lack of open standards and a thriving community to support them,” said Mike Dolan, senior vice president of Projects at the Linux Foundation. “DPUs and IPUs are great examples of some of the most promising technologies emerging today for cloud and datacenter, and OPI is poised to accelerate adoption and opportunity by supporting an ecosystem for DPU and IPU technologies."

https://opiproject.org





Tuesday, May 31, 2022

AMD completes its $1.9 billion acquisition of Pensando

AMD completed its previously announced acquisition of Pensando Systems in a transaction valued at approximately $1.9 billion. 

Pensando’s distributed services platform will expand AMD’s data center product portfolio with a high-performance data processing unit (DPU) and software stack that are already deployed at scale across cloud and enterprise customers including Goldman Sachs, IBM Cloud, Microsoft Azure and Oracle Cloud.  The Pensando team will join the AMD Data Center Solutions Group, led by AMD Senior Vice President and General Manager Forrest Norrod. 

“The data center remains one of the largest growth opportunities for AMD. The addition of the Pensando Systems team with their hardware and software portfolio will enable us to offer cloud, enterprise and edge customers a broader portfolio of leadership compute engines that can be optimized for their specific workloads,” said AMD Chair and CEO Dr. Lisa Su. “Pensando’s leadership DPU complements our data center product portfolio, enabling AMD to offer solutions that can significantly accelerate data transfer speeds while providing additional levels of security and analytics that will play a larger role in defining the performance of next-generation data centers.”

https://www.amd.com/en/corporate/pensando-acquisition





Monday, May 23, 2022

AWS launches instances powered by Graviton3 processors

AWS announced the general availability of Amazon Elastic Compute Cloud (Amazon EC2) C7g instances, the next generation of compute-optimized instances powered by AWS-designed Graviton3 processors.

AWS says its Graviton3 processors powering C7g instances provide up to 25% better compute performance for compute-intensive applications than current-generation C6g instances powered by AWS Graviton2 processors. The higher performance of C7g instances makes it possible for customers to run a wide range of compute-intensive workloads more efficiently, from web servers, load balancers, and batch processing to electronic design automation (EDA), high performance computing (HPC), gaming, video encoding, scientific modeling, distributed analytics, machine learning inference, and ad serving. 

“Customers of all sizes are seeing significant performance gains and cost savings using AWS Graviton-based instances,” said David Brown, Vice President of Amazon EC2 at AWS. “Since we own the end-to-end chip development process, we’re able to innovate and deliver new instances to customers faster. With up to 25% better performance than current generation Graviton instances, new C7g instances powered by AWS Graviton3 processors make it easy for organizations to get the most value from running their infrastructure on AWS.”

New C7g instances are built on the AWS Nitro System, a collection of AWS-designed hardware and software that streamlines the delivery of isolated multi-tenancy, private networking, and fast local storage. The AWS Nitro System offloads CPU virtualization, storage, and networking functions to dedicated hardware and software, delivering performance that is nearly indistinguishable from bare metal. 

AWS also promises that C7g instances in the coming weeks will include support for Elastic Fabric Adapter (EFA), which allows applications to communicate directly with network interface cards, providing lower and more consistent latency. C7g instances are available for purchase as On-Demand Instances, with Savings Plans, as Reserved Instances, or as Spot Instances. C7g instances are available today in US East (N. Virginia) and US West (Oregon), with availability in additional AWS Regions coming later this year.
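Purchasing options aside, an EFA is requested at launch time by declaring the interface type on the network interface. A minimal sketch using boto3 (the AMI ID, subnet, security group, and the helper function itself are our own illustrative placeholders, not AWS-published values):

```python
def c7g_run_instances_params(subnet_id, sg_id, key_name):
    """Build run_instances parameters for a C7g instance with an
    Elastic Fabric Adapter (EFA) network interface."""
    return {
        "ImageId": "ami-0123456789abcdef0",  # placeholder: an Arm64 (aarch64) AMI
        "InstanceType": "c7g.4xlarge",
        "KeyName": key_name,
        "MinCount": 1,
        "MaxCount": 1,
        "NetworkInterfaces": [{
            "DeviceIndex": 0,
            "InterfaceType": "efa",  # request an EFA instead of a standard ENA
            "SubnetId": subnet_id,
            "Groups": [sg_id],
        }],
    }

# With AWS credentials configured, the dict would be passed to boto3:
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.run_instances(**c7g_run_instances_params("subnet-...", "sg-...", "my-key"))
```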

http://www.aws.amazon.com/ec2/instance-types/c7g

AWS launches EC2 instances powered by its own Graviton2 processor

Amazon Web Services announced the general availability of its sixth generation of Amazon Elastic Compute Cloud (Amazon EC2) instances with three new instances powered by AWS-designed, Arm-based Graviton2 processors. Graviton2 is a custom AWS design that is built using a 7nm manufacturing process and based on 64-bit Arm Neoverse cores. AWS says it can deliver up to 7x the performance of the A1 instances, including twice the floating point performance....

Reports: AWS May Buy Israeli Start-up

Amazon is looking to acquire Annapurna Labs, a start-up based in Israel believed to be developing data center switching chipsets. According to various media sources, the deal could exceed US$350 million. The company was founded in 2011 by Avigdor Willenz, who previously founded Galileo Technology. The companies have not yet commented on the reports. http://www.annapurnalabs.com/ https://aws.amazon.com/blogs/a...


Thursday, May 12, 2022

Vietnam's Viettel builds with Qualcomm 5G RAN Accelerator Card

Viettel High Technology, which is the R&D arm of Viettel and in charge of design, development, manufacturing, and commercialization of telecommunications solutions, is using the Qualcomm X100 5G RAN Accelerator Card and Massive MIMO Qualcomm QRU100 5G RAN Platform to accelerate the development and commercialization of high-performance Open RAN massive MIMO solutions.

Viettel's 4G infrastructure covers 97% of Vietnam's population, and the operator's 5G services are available in 16 cities and provinces in Vietnam to date. 

Viettel develops a full set of network elements, including devices, radio access network (RAN), transmission network, and core network, which together form a strong foundation for a digital society.

“Viettel has been a pioneer in adopting new telecommunications technologies including 5G. We are delighted to have Qualcomm Technologies as a key technology provider in our 5G gNodeB project,” said Nguyen Vu Ha, general director, Viettel High Technology. “This collaboration between Qualcomm Technologies and Viettel Group will be the cornerstone of Vietnam’s national strategy for Made in Vietnam 5G infrastructure.”

“As the need for reliable, robust, and powerful mobile experiences increases across Vietnam, we anticipate a new wave of demand for 5G services from both end users and enterprises. Joining forces with Viettel will allow us to innovate through and launch technology that will advance the cellular ecosystem and accelerate the enablement and deployment of modern networks at scale,” said ST Liew, vice president, QUALCOMM CDMA Technologies Asia-Pacific Pte. Ltd. and president, Qualcomm Taiwan and South East Asia. “We look forward to working closely with Viettel for the rollout of advanced 5G infrastructure and services for Vietnam and globally.”

https://www.qualcomm.com/news/media-center

HPE offers 5G RAN virtualized DU powered by Qualcomm accelerator

Hewlett Packard Enterprise (HPE) will deliver 5G distributed units powered by the Qualcomm Technologies X100 5G RAN inline accelerator card and its HPE ProLiant DL110 Gen10 Plus Telco Server. The Qualcomm X100 5G RAN accelerator card, which leverages a combination of DSPs and ARM CPUs, offloads Layer 1 and Layer 2 MAC functionality, including compute-intensive 5G baseband processing. HPE's 5G RAN virtualized distributed unit (vDU) solution is designed...

Rakuten Symphony and Qualcomm target Massive MIMO RU and DU

Rakuten Mobile and Qualcomm Technologies agreed to collaborate to develop a next-generation 5G Radio Unit (RU) with Massive MIMO capabilities and distributed units (DUs). The new products, which will leverage the Qualcomm X100 5G RAN Accelerator Card and high-performance Massive MIMO Qualcomm QRU100 5G RAN Platform, are designed to enhance Rakuten Symphony’s Symware product portfolio of Open RAN solutions. Rakuten Mobile, Rakuten Symphony and Qualcomm...

Qualcomm intros 5G DU accelerator card

Qualcomm introduced its 5G DU X100 PCIe inline accelerator card with concurrent Sub-6 GHz and mmWave baseband support. The accelerator card is designed to ease deployment with O-RAN fronthaul and 5G NR layer 1 High (L1 High) processing. It plugs into standard Commercial-Off-The-Shelf (COTS) servers to offload CPUs from latency-sensitive and compute-intensive 5G baseband functions such as demodulation, beamforming, channel coding, and Massive MIMO...

Tuesday, May 10, 2022

Intel unveils IPU roadmap with ASIC and FPGA designs

Intel unveiled its IPU roadmap extending through 2026, featuring new FPGA + Intel architecture platforms (code-named Hot Springs Canyon), the Mount Morgan (MMG) ASIC, and next-generation 800 Gbps products. The discussion also included a look at Intel's open-source software foundation, including the infrastructure programmer development kit (IPDK), which builds upon SPDK, DPDK and P4.

In terms of the timeline, Intel's roadmap includes:

  • 2022: Mount Evans, the company's first ASIC IPU; and Oak Springs Canyon, Intel’s second-generation FPGA IPU shipping to Google and other service providers.
  • 2023/24: introduction of 400 Gbps IPUs, code-named Mount Morgan and Hot Springs Canyon.
  • 2025/26: introduction of 800 Gbps IPUs.

Here are some details on the 200 Gbps and 400 Gbps IPUs:

Mount Evans -- the code name for Intel’s first ASIC IPU, architected and developed with Google Cloud

  • Hyperscale-ready, it offers high-performance network and storage virtualization offload while maintaining a high degree of control.
  • Provides a programmable packet processing engine enabling use cases like firewalls and virtual routing.
  • Implements a hardware accelerated NVM storage interface scaled up from Intel Optane technology to emulate NVMe devices.
  • Deploys advanced crypto and compression acceleration, leveraging high-performance Intel Quick Assist technology.
  • Can be programmed using commonly deployed, existing, software environments, including DPDK, SPDK; the pipeline can be configured utilizing P4 programming.
  • Shipping is expected to begin in 2022 to Google and other service providers; broad deployment is expected in 2023.
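To make the P4 point above concrete: P4 configures pipelines of match-action tables, where a key extracted from the packet header selects an action installed by the control plane. A toy sketch of that model in Python (conceptual only; these names are not Intel's or P4's actual APIs):

```python
# Toy match-action stage in the spirit of a P4 pipeline (illustrative only).
def make_table(entries, default_action=None):
    """entries maps a match key to an (action, params) pair installed by the
    control plane; unmatched keys fall through to the default action (drop)."""
    def apply(packet):
        action, params = entries.get(packet["dst_ip"], (default_action, None))
        if action == "forward":
            packet["egress_port"] = params   # forward out the given port
        else:
            packet["egress_port"] = None     # default: drop
        return packet
    return apply

# Exact-match forwarding table, as a control plane might install it
ipv4_fwd = make_table({
    "10.0.0.1": ("forward", 1),
    "10.0.0.2": ("forward", 2),
})

print(ipv4_fwd({"dst_ip": "10.0.0.1"})["egress_port"])   # 1
print(ipv4_fwd({"dst_ip": "192.0.2.9"})["egress_port"])  # None (dropped)
```

In a real P4 target the table structure and key fields are fixed at compile time and entries are installed at runtime; the sketch only mirrors that split between pipeline definition and control-plane population.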

Oak Springs Canyon -- the code name for Intel’s 2nd generation FPGA-based IPU platform built with the Intel Xeon D and the Intel Agilex FPGA, the industry’s leading FPGA in power, efficiency, and performance.

  • Network virtualization function offload for workloads like open virtual switch (OVS) and storage functions like NVMe over fabric and RoCE v2
  • Standard yet customizable platform that enables customers to customize their data path and their solutions with FPGA and Intel Xeon-D with software like Intel Open FPGA Stack, a scalable, source-accessible software and hardware infrastructure
  • Programmable using commonly deployed existing software environments, including DPDK and SPDK, which have been optimized on x86.
  • A more secure, high-speed 2x 100 gigabit Ethernet network interface with a hardened crypto block
  • VirtIO support in hardware for native Linux support

Mount Morgan -- a next-generation ASIC IPU expected in 2023/2024.

Hot Springs Canyon -- a next-generation FPGA-based IPU platform expected in 2023/2024.

https://www.intel.com/content/www/us/en/newsroom/home.html

Intel demos Xeon + Tofino switch + Mount Evans IPU

As part of the Intel Innovation event this week, Intel demonstrated an Intelligent Fabric based on its Xeon Scalable processors and next-generation Xeon D processors, Tofino 3 programmable switching silicon and new "Mount Evans" infrastructure processing unit (IPU). The idea is to leverage P4 programming across all 3 processing platforms for use cases such as near real-time telemetry and analytics with the Intel Deep Insight Network Analytics...

Google collaborates on Intel's ASIC-based infrastructure processor

Intel and Google Cloud announced a deep collaboration to develop an ASIC P4-programmable infrastructure processing unit (IPU). Code-named “Mount Evans,” this open solution supports open source standards, including an infrastructure programmer development kit (IPDK) to simplify developer access to the technology in Google Cloud data centers. Machine learning, large-scale data processing and analytics, media processing, and high-performance computing...

Intel rolls FPGA-based Infrastructure Processing Unit (IPU)

Intel outlined its vision for the infrastructure processing unit (IPU), a programmable network device that intelligently manages system-level infrastructure resources by securely accelerating those functions in a data center. In a video, Guido Appenzeller, chief technology officer with Intel's Data Platforms Group, says the idea is to cleanly separate the processing of client workloads from workloads of the cloud service provider. Intel cites several...

Tuesday, May 3, 2022

Where did the concept of SmartNICs originate?

https://youtu.be/TPeyQgKxyn0

The concept of SmartNICs can be traced back a long way to a networking company called FORE Systems, and subsequently to the founding of Netronome, says Niel Viljoen, CEO of Netronome.

The follow-up question: "is DPU just another name for network flow processor?"

Download the 2022 SmartNICs and Infrastructure Acceleration Report: https://ngi.how/ia-2022

Wednesday, April 27, 2022

Do customers want to program their SmartNICs?

https://youtu.be/0XlSXN1r5Lk

In this video, Mike Bushong, Group Vice President, Cloud-Ready Data Center at Juniper Networks, looks at the bifurcation between the hyperscalers and the rest of the market around programmable SmartNICs and the challenges that need to be solved.

Download the 2022 SmartNICs and Infrastructure Acceleration Report: https://ngi.how/ia-2022



Tuesday, April 19, 2022

Fungible launches a partner program

Fungible is launching a partnership program for its composable data center solutions.

The Fungible Partner Exchange (FunPX) partner program allows resellers, distributors, and technology providers worldwide to connect, collaborate, and grow their composable infrastructure capabilities to cloudify the world’s data centers.

“It’s my pleasure to announce the FunPX partner program. We recognize the importance of the channel. Our partners have been instrumental in fueling Fungible’s success and are a critical component in reaching our growth targets,” said Brian McCloskey, Chief Revenue Officer at Fungible. “We are investing in the tools, people, and processes to enable our partners to grow with us by providing immediate access to the essential information they need to win and help our mutual customers realize the benefits that the Fungible portfolio can bring to their infrastructure.”

https://www.fungible.com/partners

Fungible leverages its DPU to centralize GPUs across Ethernet

Fungible introduced a means by which data centers could centralize their existing GPU assets into a single resource pool to be attached to servers on demand. The Fungible GPU-Connect (FGC) solution leverages the company's DPU to dynamically compose GPU and CPU resources across an Ethernet network. Instead of dedicated GPUs sitting idle most of the time, data centers can provide new users with access to the GPU pool, making greater use of existing assets...


Wednesday, April 13, 2022

Join our DPU-powered Infrastructure Acceleration series

 

AMD's decision to acquire Pensando has given the market 1.9 billion reasons to believe that data processing units (DPUs) and the SmartNICs they enable are indeed a really HOT networking idea.

Join us as we explore solutions from the leading players in our upcoming 2022 Infrastructure Acceleration showcase and report.

If you would like to contribute to our series, please reach out to us at research@avidthink.com or sales@avidthink.com

https://youtu.be/dHhrvkShU3I

Monday, April 4, 2022

AMD to acquire Pensando for its DPU + software stack

AMD agreed to acquire Pensando, a Silicon Valley start-up offering a software-defined edge services platform powered by a custom packet processor, for approximately $1.9 billion before working capital and other adjustments. 

Pensando ("thinking" in Spanish) is led by Cisco’s legendary “MPLS” team — Mario Mazzola, Prem Jain, Luca Cafiero, Soni Jiandani and Randy Pond. Its platform is designed to accelerate networking, security, storage and other services for cloud, enterprise and edge applications. Its architecture leverages the programmable packet processor distributed throughout a network to efficiently accelerate multiple infrastructure services simultaneously, offloading workloads from the CPU and increasing overall system performance. The company says it can achieve between 8x and 13x greater performance compared to competitive solutions.

Pensando claims multiple deployments at scale across cloud and enterprise customers, including Goldman Sachs, IBM Cloud, Microsoft Azure and Oracle Cloud. 

“To build a leading-edge data center with the best performance, security, flexibility and lowest total cost of ownership requires a wide range of compute engines,” said Dr. Lisa Su, AMD chair and CEO. “All major cloud and OEM customers have adopted EPYC processors to power their data center offerings. Today, with our acquisition of Pensando, we add a leading distributed services platform to our high-performance CPU, GPU, FPGA and adaptive SoC portfolio. The Pensando team brings world-class expertise and a proven track record of innovation at the chip, software and platform level which expands our ability to offer leadership solutions for our cloud, enterprise and edge customers.”

“We are excited to join the AMD family. Our shared cultures of innovation, excellence and relentless focus on partners and customers make this an ideal combination. Together, we have the talent and tools to deliver on our customers’ vision for the future of computing,” said Pensando CEO Prem Jain. “In less than five years Pensando has assembled a best-in-class engineering team that are experts in building systems together with a rich, deep ecosystem of partners and customers who have currently deployed over 100,000 Pensando platforms into production. Joining together with AMD will help accelerate growth in our core business and enable us to pursue a much larger customer base across more markets.”

“Industry leadership is based on catching business model disruptions enabled by new technologies,” said John Chambers, chair of the board of Pensando. "Pensando is built upon strong customer relationships and a solution that is at least two years ahead in cloud, edge and enterprise. For example, the performance and scale of Pensando’s distributed services platform is 8x-13x of the largest cloud provider and uses less power. Pensando’s smart switching architecture has 100x the scale, 10x the performance at one-third the cost of ownership of any comparable products in the enterprise market.  Pensando’s leadership position in software-defined cloud, compute, networking, security and storage services as part of the much larger AMD portfolio is in my opinion a perfect fit to shape the data center computing landscape for the next decade.”

CEO Prem Jain and the Pensando team will join AMD as part of the Data Center Solutions Group, led by AMD Senior Vice President and General Manager Forrest Norrod. Pensando will remain focused on executing their product and technology roadmaps, now with additional scale to accelerate their business and address growing market opportunities across a broader number of customers.

http://pensando.io

Pensando emerges from stealth, led by "MPLS" team from Cisco

Pensando Systems, a start-up based in San Jose, California, emerged from stealth to unveil its first product -- a software-defined edge services platform that was developed in collaboration with the world’s largest cloud, enterprise, storage, and telecommunications companies.

Pensando ("thinking" in Spanish) is led by Cisco’s legendary “MPLS” team — Mario Mazzola, Prem Jain, Luca Cafiero, Soni Jiandani and Randy Pond. Hewlett Packard Enterprise and Lightspeed Venture Partners led a Series C round to raise up to $145 million in funding. This brings the total amount raised to $278 million, after an earlier founder-led Series A round of $71 million and a customer-led Series B round of $62 million. Cited customers, investors, and partners include HPE, Goldman Sachs, NetApp, and Equinix.

The Pensando platform is a custom programmable processor optimized to execute a software stack delivering highly programmable, software-defined cloud, compute, networking, storage, and security services wherever data is located, all managed via its Venice Centralized Policy and Services Controller.


The platform promises an improved security posture through distributed network protection and east-west security. It offloads networking and security functions at wire speed to dedicated accelerators, and it is designed to scale to > 1000 tenants per server and >1M routes.

The company claims its capability means that cloud providers can now gain a technological advantage over the current market leader, Amazon Web Services Nitro, delivering 5-9x improvements in productivity, performance, and scale when compared to current architectures with no risk of lock-in.

The portfolio includes:

  • Naples 100 and Naples 25 cards for installation in standard servers. The Naples Distributed Services Card (DSC) delivers high-performance cloud, compute, networking, storage and security functions. 
  • Venice Centralized Policy and Services Controller - Centrally-managed enterprise-grade security and visibility at every level of the stack enables seamless distribution of all infrastructure services policies to active Naples nodes. In addition, Venice handles lifecycle management such as deploying in-service software upgrades to Naples nodes and delivers always-on telemetry, deep end-to-end observability, and operational simplicity across the environment. 
Pensando also announced that Mark Potter, chief technology officer of Hewlett Packard Enterprise (HPE), and Barry Eggers, a partner of Lightspeed Venture Partners, joined the board of directors, with John Chambers, CEO of JC2 Ventures, leading as chairman.
http://pensando.io

Are next-gen SmartNICs gaining traction with top cloud providers?

https://youtu.be/fEEFyIJYzXI

In this video, Prem Jain, co-founder and CEO of Pensando, discusses SmartNIC adoption trends by major cloud providers. Hot topics include bare-metal-as-a-service acceleration and virtualized storage. For more insights from industry thought leaders check out: https://nextgeninfra.io/

VMware's Project Monterey for SmartNICs - Pensando's perspective

Pensando Systems is working with VMware on Project Monterey to integrate the next generation of SmartNIC technology into fully virtualized enterprise networks. The project aims to rearchitect VMware Cloud Foundation to enable disaggregation of the server, including extending support for bare metal servers, thereby allowing physical resources to be dynamically accessed by applications based on policy or via software API. In this video, Silvano Gai of...

AMD completes acquisition of Xilinx

AMD completed its previously-announced acquisition of Xilinx in an all-stock transaction. Xilinx stockholders received 1.7234 shares of AMD common stock and cash in lieu of any fractional shares of AMD common stock for each share of Xilinx common stock. AMD expects the acquisition to be accretive to non-GAAP margins, non-GAAP EPS and free cash flow generation in the first year. Former Xilinx CEO Victor Peng will join AMD as president of the newly...

Xilinx looks beyond FPGAs with Adaptive Compute Acceleration Platform

At its second annual Xilinx Developer Forum (XDF) in San Jose, Xilinx unveiled strategic moves beyond its mainstay field-programmable gate array (FPGAs) with the introduction of its own accelerator line cards and, more significantly, a new Adaptive Compute Acceleration Platform (ACAP). Xilinx, which got its start in 1984 and now sells a broad range of FPGAs and complex programmable logic devices (CPLDs), is transforming itself into a higher-value...

Xilinx to acquire Solarflare for SmartNIC solutions

Xilinx agreed to acquire Solarflare Communications, a provider of high-performance, low latency networking solutions for customers spanning FinTech to cloud computing. Financial terms were not disclosed. Xilinx said the acquisition enables it to combine its FPGA, MPSoC and ACAP solutions with Solarflare's ultra-low latency network interface card (NIC) technology and Onload application acceleration software. The target is new converged SmartNIC solutions,...