Friday, August 31, 2018

ColorChip to showcase 100G-400G PAM4 optical interconnects

ColorChip will showcase a family of PAM4 optical interconnects ranging from 100G to 400G, with reaches up to 40km, at the CIOE 2018 exhibition in Shenzhen, China.

ColorChip's 100G CWDM4 2km and 4WDM-10 10km QSFP28 solutions leverage its proprietary "SystemOnGlass" technology.

"To support the massive use of fiber in fronthaul and backhaul networks, the evolving 5G infrastructure will require unparalleled volumes of high speed optical modules," commented Yigal Ezra, ColorChip's CEO. "ColorChip is well positioned to leverage existing 100G QSFP28 CWDM4 production lines, already proven and scaled for massive mega datacenter demand, to support the growing needs of the 5G market, with capacity ramping up to millions of units per year."

https://www.color-chip.com

Thursday, August 30, 2018

OpenStack's "Rocky" release enhances bare metal provisioning

OpenStack, which powers more than 75 public cloud data centers and thousands of private clouds at a scale of more than 10 million compute cores, has now advanced to its 18th major release.

OpenStack "Rocky" includes dozens of enhancements, the most significant being refinements to Ironic (the bare metal provisioning service) and fast forward upgrades. There are also several emerging projects and features designed to meet new user requirements for hardware accelerators, high availability configurations, serverless capabilities, and edge and internet of things (IoT) use cases.

OpenStack bare metal clouds, powered by Ironic, enable both VMs and containers to support emerging use cases like edge computing, network functions virtualization (NFV) and artificial intelligence (AI) /machine learning.

New Ironic features in Rocky include:

  • User-managed BIOS settings—BIOS (basic input output system) performs hardware initialization and has many configuration options that support a variety of use cases when customized. Options can help users gain performance, configure power management options, or enable technologies like SR-IOV or DPDK. Ironic now lets users manage BIOS settings, supporting use cases like NFV and giving users more flexibility.
  • Conductor groups—In Ironic, the “conductor” is what uses drivers to execute operations on the hardware. Ironic has introduced the “conductor_group” property, which can be used to restrict what nodes a particular conductor (or conductors) have control over. This allows users to isolate nodes based on physical location, reducing network hops for increased security and performance.
  • RAM Disk deployment interface—A new interface in Ironic for diskless deployments. This is seen in large-scale and high performance computing (HPC) use cases when operators desire fully ephemeral instances for rapidly standing up a large-scale environment.
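As a sketch of how conductor groups might be wired up in practice: the conductor declares its group in its own configuration, and nodes are then assigned to that group. The option and flag names below follow the Rocky-era Ironic documentation and should be verified against your release; the group name "rack1" is hypothetical.

```ini
# /etc/ironic/ironic.conf on the conductor that should manage only
# nodes assigned to the hypothetical "rack1" group
[conductor]
conductor_group = rack1
```

Nodes would then be mapped to that conductor with something like `openstack baremetal node set <node-uuid> --conductor-group rack1`, keeping hardware management local to its physical location.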

“OpenStack Ironic provides bare metal cloud services, bringing the automation and speed of provisioning normally associated with virtual machines to physical servers,” said Julia Kreger, principal software engineer at Red Hat and OpenStack Ironic project team lead. “This powerful foundation lets you run VMs and containers in one infrastructure platform, and that’s what operators are looking for.”

"At Oath, OpenStack manages hundreds of thousands of bare metal compute resources in our data centers. We have made significant changes to our supply chain process using OpenStack, fulfilling common bare metal quota requests within minutes,” said James Penick, IaaS Architect at Oath.

Database for the Instant Experience -- a profile of Redis Labs

The user experience is the ultimate test of network performance. For many applications, this often comes down to the lag after clicking and before the screen refreshes. We can trace the packets back from the user's handset, through the RAN, mobile core, metro transport, and perhaps long-haul optical backbone to a cloud data center. However, even if this path traverses the very latest generation infrastructure, if it ends up triggering a search in an archaic database, the delayed response time will be more harmful to the user experience than the network latency. Some databases are optimized for performance. Redis, an open source, in-memory, high-performance database, claims to be the fastest -- a database for the Instant Experience. I recently sat down with Ofer Bengal to discuss Redis, Redis Labs and the implications for networking and hyperscale clouds.



Jim Carroll:  The database market has been dominated by a few large players for a very long time. When did this space start to open up, and what inspired Redis Labs to jump into this business?

Ofer Bengal: The database segment of the software market had been on a stable trajectory for decades. If you had asked me ten years ago if it made sense to create a new database company, I would have said that it would be insane to try. But cracks started to open when large Internet companies such as Amazon and Facebook, which generated huge amounts of data and had very stringent performance requirements, realized that the relational databases provided by market leaders like Oracle were not good enough for their modern use cases. With a relational database, when the amount of data grows beyond the size of a single server, it is very complex to cluster and performance goes down dramatically.

About fifteen years ago, a number of Internet companies started to develop internal solutions to these problems. Later on, the open source community stepped in to address these challenges and a new breed of databases was born, which today is broadly categorized under “unstructured" or "NoSQL" databases.

Redis Labs was started in a bit of an unusual way, and not as a database company. The original idea was to improve application performance, because we, the founders, came from that space. We always knew that databases were the main bottleneck in app performance and looked for ways to improve that. So, we started with database caching. At that time, Memcached was a very popular open source caching system for accelerating database performance. We decided to improve it and make it more robust and enterprise-ready. And that's how we started the company.

In 2011, when we started to develop the product, we discovered a fairly new open source project by the name "Redis" (which stands for "Remote Dictionary Server"), started by Salvatore Sanfilippo, an Italian developer who lives in Sicily to this day. He essentially created his own in-memory database for a certain project he worked on and released it as open source. We decided to adopt it as the engine under the hood for what we were doing. However, shortly thereafter we started to see the amazing adoption of this open source database. After a while, it was clear we were in the wrong business, and so we decided to focus on Redis as our main product and became a Redis company. Salvatore Sanfilippo later joined the company and continues to lead the development of the open source project with a group of developers. A much larger R&D team develops Redis Enterprise, our commercial offering.

Jim Carroll: To be clear, there is an open source Redis community and there's a company called Redis Labs, right?

Ofer Bengal:  Yes. Both the open source Redis and Redis Enterprise are developed by Redis Labs, but by two separate development teams. This is because a different mindset is required for developing open source code and an end-to-end solution suitable for enterprise deployment.
 
Jim Carroll: Tell us more about Redis Labs, the company.

Ofer Bengal: We have a monumental number of open source Redis downloads. Its adoption has spread so widely that today you find it in most companies in the world. Our mission, at Redis Labs, is to help our customers unlock answers from their data. As a result, we invest equally in open source Redis and our enterprise-grade offering, Redis Enterprise, and deliver disruptive capabilities that help our customers find answers to their challenges and deliver the best applications and services to their own customers. We are passionate about our customers, community, people and product. We're seeing a noticeable trend where enterprises that adopt OSS Redis mature their implementation with Redis Enterprise to better handle scale, high availability, durability and data persistence. We have customers from all industry verticals, including six of the top Fortune 10 companies and about 40% of the Fortune 100. To give you a few examples of some of our customers, we have AMEX, Walmart, DreamWorks, Intuit, Vodafone, Microsoft, TD Bank, C.H. Robinson, Home Depot, Kohl's, Atlassian, eHarmony – I could go on.

Redis Labs now has over 220 employees across our Mountain View, CA headquarters, our R&D center in Israel, our London sales office and other locations around the world. We've completed a few investment rounds, totaling $80 million, from Bain Capital Ventures, Goldman Sachs, Viola Ventures (Israel) and Dell Technologies Capital.

Jim Carroll: So, how can you grow and profit in an open source market as a software company?

Ofer Bengal:  The market for databases has changed a lot. Twenty years ago, if a company adopted Oracle, for example, any software development project carried out in that company had to be built with this database. This is not the case anymore. Digital transformation and cloud adoption are disrupting this very traditional model and driving the modernization of applications. New-age developers now have the flexibility to select their preferred solutions and tools for their specific problem at hand or use cases. They are looking for the best-of-breed database to meet each use case of their application. With the evolution of microservices, which is the modern way of building apps, this is even more evident. Each microservice may use a different database, so you end up with multiple databases for the same application. A simple smartphone application, for instance, may use four, five or even six different databases. These technological evolutions opened the market to database innovations.

In the past, most databases were relational, where the data is modeled in tables, and tables are associated with one another. This structure, while still relevant for some use cases, does not satisfy the needs of today’s modern applications.

Today, there are many flavors of unstructured NoSQL databases, starting with simple key value databases like DynamoDB, document-based databases like MongoDB, column-based databases like Cassandra, graph databases like Neo4j, and others.  Each one is good for certain use cases. There is also a new trend called multi-model databases, which means that a single database can support different data modeling techniques, such as relational, document, graph, etc.  The current race in the database world is about becoming the optimal multi-model database.

Tying it all together, how do we expect to grow and profit in an open source market? We have never settled for the status quo. We looked at today's environments and the challenges that come with them and figured out a way to deliver Redis as a multi-model database. We continually strive to lead and disrupt this market. With the introduction of modules, customers can now use Redis Enterprise as a key-value store, document store, graph database, search engine and much more. As a result, Redis Enterprise is a best-of-breed database suited to the needs of modern applications. In addition, Redis Enterprise delivers the simplicity, ease of scale and high availability large enterprises desire. This has helped us become a well-loved database and a profitable business.

Jim Carroll: What makes Redis different from the others?

Ofer Bengal: Redis is by far the fastest and most powerful database. It was built from day one for optimal performance: besides processing entirely in RAM (or any of the new memory technologies), everything is written in C, a low-level programming language. All the commands, data types, etc., are optimized for performance. All this makes Redis super-fast. For example, from a single, average-size cloud instance on Amazon, you can easily generate 1.5 million transactions per second at sub-millisecond latency. Can you imagine that? This means that the average latency of those 1.5 million transactions will be less than one millisecond. There is no database that comes even close to this performance. You may ask, what is the importance of this? Well, the speed of the database is by far the major factor influencing application performance, and Redis can guarantee instant application response.

Jim Carroll: How are you tracking the popularity of Redis?

Ofer Bengal: If you look at Docker Hub, which is the marketplace for Docker containers, you can see the stats on how many containers of each type were launched there. The last time I checked, over 882 million Redis containers had been launched on Docker Hub. This compares to about 418 million for MySQL and 642 million for MongoDB. So, Redis is way more popular than both MongoDB and MySQL. And we have many other similar data points confirming the popularity of Redis.

Jim Carroll: If Redis puts everything in RAM, how do you scale? RAM is an expensive resource, and aren’t you limited by the amount that you can fit in one system?

Ofer Bengal: We developed very advanced clustering technology which enables Redis Enterprise to scale infinitely. We have customers with tens of terabytes of data in RAM. The notion that RAM is tiny and used only for very special purposes is no longer true, and as I said, we see many customers with extremely large datasets in RAM. Furthermore, we developed a technology for running Redis on Flash, with near-RAM performance at about 20% of the server cost. The intelligent data tiering that Redis on Flash delivers allows our customers to keep their most used data in RAM while moving less utilized data onto cheaper flash storage. This has organizations such as Whitepages saving over 80% of their infrastructure costs, with little compromise to performance.

In addition to that, we’re working very closely with Intel on their Optane™ DC persistent memory based on 3D Xpoint™. As this technology becomes mainstream, the majority of the database market will have to move to being in-memory.


Jim Carroll: What about the resiliency challenge? How does Redis deal with outages?

Ofer Bengal: Normally with memory-based systems, if something goes wrong with a node or a cluster, there is a risk of losing data. This is not the case with Redis Enterprise, because it is redundant and persistent.  You can write everything to disk without slowing down database operations. This is important to note because persisting to disk is a major technological challenge due to the bottleneck of writing to disk. We developed a persistence technology that preserves Redis' super-fast performance, while still writing everything to disk. In case of memory failures, you can read everything from disk. On top of that, the entire dataset is replicated in memory.  Each database can have multiple such replicas, so if one node fails, we instantly fail-over to a replica. With this and some other provisions, we provide several layers of resiliency.
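In open source Redis terms, the write-everything-to-disk behavior Bengal describes corresponds to the append-only file (AOF) persistence mechanism. A minimal redis.conf sketch using standard open source directives (Redis Enterprise's tuned persistence layer is proprietary and not shown here):

```conf
# redis.conf: append-only persistence for open source Redis
appendonly yes
# fsync the AOF once per second: at most about one second of
# writes is at risk, while keeping write latency low
appendfsync everysec
```

The `everysec` policy is the common middle ground between `always` (fsync on every write, slowest) and `no` (leave flushing to the OS, fastest but riskiest).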

We have been running our database-as-a-service for five years now, with thousands of customers, and never lost a customer's data, even when cloud nodes failed.

Jim Carroll: So how is the market for in-memory databases developing? Can you give some examples of applications that run best in memory?

Ofer Bengal: Any customer-facing application today needs to be fast. The new generation of end users expects an instant experience from all their apps and is not tolerant of slow responses, whether caused by the application or by the network.

You may ask, "how is 'instant experience' defined?" Let's take an everyday example to illustrate what 'instant' really means. When browsing on your mobile device, how long are you willing to wait before your information is served to you? What we have found is that the expected time from tapping your smartphone or clicking on your laptop until you get the response should not be more than 100 milliseconds. As end consumers, we are all dissatisfied with waiting and expect information to be served instantly. What really happens behind the scenes, however, is that once you tap your phone, a query goes over the Internet to a remote application server, which processes the request and may generate several database queries. The response is then transmitted back over the Internet to your phone.

Now, the round trip over the Internet (on a "good" Internet day) is at least 50 milliseconds, and the app server needs at least 50 milliseconds to process your request. This means that at the database layer, the response time should be sub-millisecond, or you're pretty much exceeding what is considered the acceptable standard wait time of 100 milliseconds. At a time of increasing digitization, consumers expect instant access to the service, and anything less will directly impact the bottom line. And, as I already mentioned, Redis is the only database that can respond in less than one millisecond under almost any load of transactions.
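The arithmetic behind that latency budget can be made explicit. The figures below come from the interview (100 ms total, ~50 ms network, ~50 ms application), while the fan-out of four database queries per request is an illustrative assumption:

```python
def required_db_latency_ms(total_budget_ms: float,
                           network_rtt_ms: float,
                           app_processing_ms: float,
                           queries: int = 1) -> float:
    """Per-query database latency budget left over after the network
    round trip and application processing are subtracted."""
    remaining = total_budget_ms - network_rtt_ms - app_processing_ms
    return remaining / queries

# 100 ms "instant" budget, ~50 ms network RTT, ~49 ms app time,
# and (hypothetically) four database queries per request:
budget = required_db_latency_ms(100, 50, 49, queries=4)
print(f"{budget:.2f} ms per query")  # sub-millisecond per query
```

With the whole 100 ms consumed by network and application time, anything more than a fraction of a millisecond per query at the database layer blows the budget, which is the argument for in-memory stores in this tier.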

Let me give you some use case examples. Companies in the finance industry (banks, financial institutions) have been using relational databases for years. Any change, such as replacing an Oracle database, is analogous to open-heart surgery. But when it comes to new customer-facing banking applications, such as checking your account status or transferring funds, they would like to deliver an instant experience. Many banks are now moving this type of application to other databases, and Redis is often chosen for its blazing-fast performance.

As I mentioned earlier, the world is moving to microservices. Redis Enterprise fits the needs of this architecture quite nicely as a multi-model database. In addition, Redis is very popular for messaging, queuing and time series capabilities. It is also strong when you need fast data ingest, for example, when massive amounts of data are coming in from IoT devices, or in other cases where you have huge amounts of data that needs to be ingested in your system. What started off as a solution for caching has, over the course of the last few years, evolved into an enterprise data platform.

Jim Carroll: You mentioned microservices, and that word is almost becoming synonymous with containers. And when you mention containers, everybody wants to talk about Kubernetes, and managing clusters of containers in the cloud. How does this align with Redis?

Ofer Bengal: Redis Enterprise maintains a unified deployment across all Kubernetes environments, such as Red Hat OpenShift, Pivotal Container Service (PKS), Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), Amazon Elastic Container Service for Kubernetes (EKS) and vanilla Kubernetes. It guarantees that each Redis Enterprise node (with one or more open source Redis servers) resides on a pod that is hosted on a different VM or physical server. And by using the latest Kubernetes primitives, Redis Enterprise can now be run as a stateful service across these environments.

We use a layered architecture that splits responsibilities between tasks that Kubernetes does efficiently, such as node auto-healing and node scaling, tasks that Redis Enterprise cluster is good at, such as failover, shard level scaling, configuration and Redis monitoring functions, and tasks that both can orchestrate together, such as service discovery and rolling upgrades with zero downtime.
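The "one database node per VM" placement described above is, in generic Kubernetes terms, a StatefulSet with pod anti-affinity on the hostname topology key. The manifest below is an illustrative sketch with made-up names, not Redis Labs' actual operator output:

```yaml
# Generic Kubernetes pattern for "one stateful pod per node";
# all names and the image are hypothetical placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-enterprise-node
spec:
  serviceName: redis-enterprise
  replicas: 3
  selector:
    matchLabels:
      app: redis-enterprise
  template:
    metadata:
      labels:
        app: redis-enterprise
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: never schedule two of these pods on one host
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: redis-enterprise
              topologyKey: kubernetes.io/hostname
      containers:
        - name: redis-enterprise
          image: example/redis-enterprise:latest  # placeholder image
```

The `topologyKey: kubernetes.io/hostname` line is what spreads replicas across distinct VMs or physical servers, so a single host failure takes out at most one database node.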

Jim Carroll: How are the public cloud providers supporting Redis?

Ofer Bengal:  Most cloud providers, such as AWS, Azure and Google, have launched their own versions of Redis database-as-a-service, based on open source Redis, although they hardly contribute to it.

Redis Labs, the major contributor to open source Redis, has launched services on all those clouds, based on Redis Enterprise.  There is a very big difference between open source Redis and Redis Enterprise, especially if you need enterprise-level robustness.

Jim Carroll: So what is the secret sauce that you add on top of open source Redis?

Ofer Bengal: Redis Enterprise brings many additional capabilities to open source Redis. For example, as I mentioned earlier, sometimes an installation requires terabytes of RAM, which can get quite expensive. Redis Enterprise has built-in capabilities that allow our customers to run Redis on SSDs with almost the same performance as RAM. This is great for reducing the customer's total cost of ownership. By providing this capability, we can cut the underlying infrastructure costs by up to 80%. For the past few years, we've been working with most vendors of advanced memory technologies such as NVMe and Intel's 3D Xpoint. We will be one of the first database vendors to take advantage of these new memory technologies as they become more and more popular. Databases like Oracle, which were designed to write to disk, will have to undergo a major facelift in order to take advantage of these new memory technologies.

Another big advantage Redis Enterprise delivers is high availability. With Redis Enterprise, you can create multiple replicas in the same data center, across data centers, across regions, and across clouds. You can also replicate between cloud and on-premises servers. Our single-digit-seconds failover mechanism guarantees service continuity.

Another differentiator is our active-active global distribution capability. If you would like to deploy an application in both the U.S. and Europe, for example, you will have application servers in a European data center and in a US data center. But what about the database? Would it be a single database for those two locations? While this helps avoid data inconsistency, it's terrible when it comes to performance for at least one of those two data centers. If you have a separate database in each data center, performance may improve, but at the risk of consistency. Let's assume that you and your wife share the same bank account, and that you are in the U.S. and she is traveling in Europe. What if both of you withdraw funds at an ATM at about the same time? If the app servers in the US and Europe are linked to the same database, there is no problem, but if the bank's app uses two databases (one in the US and one in Europe), how would they prevent an overdraft? Having a globally distributed database with full sync is a major challenge. If you try to do conflict resolution over the Internet between Europe and the U.S., database operation will slow down dramatically, which is a no-go for the instant experience end users demand. So, we developed a unique technology for Redis Enterprise based on the mathematically proven CRDT concept, developed in universities. Today, with Redis Enterprise, our customers can deploy a global database in multiple data centers around the world while assuring local latency and strong eventual consistency. Each one works as if it is fully independent, but behind the scenes we ensure they are all in sync.
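Redis Enterprise's CRDT implementation is proprietary, but the core idea can be illustrated with the textbook grow-only counter: each replica increments only its own slot, and merging takes the element-wise maximum, so replicas converge to the same total no matter the order in which updates are exchanged. A minimal sketch (not Redis Labs' code):

```python
class GCounter:
    """Minimal grow-only counter CRDT: one slot per replica."""

    def __init__(self, replica_id: int, n_replicas: int):
        self.replica_id = replica_id
        self.slots = [0] * n_replicas

    def increment(self, amount: int = 1) -> None:
        # A replica only ever mutates its own slot.
        self.slots[self.replica_id] += amount

    def merge(self, other: "GCounter") -> None:
        # Element-wise max is commutative, associative and idempotent,
        # so any gossip order yields the same converged state.
        self.slots = [max(a, b) for a, b in zip(self.slots, other.slots)]

    @property
    def value(self) -> int:
        return sum(self.slots)


# Two replicas increment concurrently, then sync in either direction:
us = GCounter(0, 2)   # e.g. a US data center
eu = GCounter(1, 2)   # e.g. a European data center
us.increment(3)
eu.increment(2)
us.merge(eu)
eu.merge(us)
assert us.value == eu.value == 5  # both converge to the same total
```

Because merges need no coordination, each site serves reads and writes at local latency, which is exactly the property the active-active deployment relies on.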

Jim Carroll: What is the ultimate ambition of this company?

Ofer Bengal: We have the opportunity to build a very big software company. I'm not a kid anymore and I do not live on fantasies. Look at the database market – it's huge! It is projected to grow to $50–$60 billion (depending on which analyst firm you ask) in sales in 2020. It is the largest segment in the software business, twice the size of the security/cyber market. The crack in the database market that opened up with NoSQL will represent 10% of this market in the near term. However, the border line between SQL and NoSQL is becoming blurred, as companies such as Oracle add NoSQL capabilities and NoSQL vendors add SQL capabilities. I think that over time, it will become a single large market. Redis Labs provides a true multi-model database. We support key-value with multiple data structures, graph, search, JSON (document-based), all with built-in functionality, not just APIs. We constantly increase the use case coverage of our database, and that is ultimately the name of the game in this business. Couple all that with Redis' blazing-fast performance, the massive adoption of open source Redis and the fact that it is the "most loved database" (according to StackOverflow), and you would agree that we have a once-in-a-lifetime opportunity!





Ciena posts strong quarter as revenue rises to $818.8m

Ciena reported revenue of $818.8 million for its fiscal third quarter 2018, as compared to $728.7 million for the fiscal third quarter 2017.

Ciena's GAAP net income for the fiscal third quarter 2018 was $50.8 million, or $0.34 per diluted common share, which compares to a GAAP net income of $60.0 million, or $0.39 per diluted common share, for the fiscal third quarter 2017.

Ciena's adjusted (non-GAAP) net income for the fiscal third quarter 2018 was $74.3 million, or $0.48 per diluted common share, which compares to an adjusted (non-GAAP) net income of $56.4 million, or $0.35 per diluted common share, for the fiscal third quarter 2017.

"The combination of continued execution against our strategy and robust, broad-based customer demand resulted in outstanding fiscal third quarter performance," said Gary B. Smith, president and CEO of Ciena. "With our diversification, global scale and innovation leadership, we remain confident in our business model and our ability to achieve our three-year financial targets.”

Some highlights:

  • U.S. customers contributed 57.3% of total revenue
  • Three customers accounted for greater than 10% of revenue and represented 33% of total revenue
  • 37% of revenue came from non-telco customers; in Q3, three of the top ten revenue accounts were webscale customers, including one that exceeded 10% of total quarterly sales – a first for Ciena.
  • Secured wins with tier one global service providers – many of whom are new to Ciena – including Deutsche Telekom in support of its international wholesale business entity. The project includes a Europe-wide network deployment leveraging our WaveLogic technology.
  • APAC sales were up nearly 50%, with India once again contributing greater than 10% of global revenue. India grew 100% year-over-year, and Japan doubled in the same period. Australia also remained a strong contributor to quarterly results.
  • The subsea segment was up 23% year-over-year, largely driven by webscale company demand. Ciena noted several new and significant wins in Q3, including four new logos, and Ciena was selected as the preferred vendor for two large consortia cables.
  • The Networking Platforms business was up more than 14% year-over-year.
  • Adjusted gross margin was 43.4%
  • Headcount totaled 5,889
https://investor.ciena.com/events-and-presentations/default.aspx





ZTE counts its losses for 1H18, renews focus on 5G

ZTE Corporation reported revenue of RMB 39.434 billion for the first six months of 2018, down 27% from RMB 54.010 billion for the same period in 2017.

For the six months, ZTE's net profit attributable to holders of ordinary shares of the listed company amounted to RMB -7.824 billion, representing a year-on-year decline of 441.24%. Basic earnings per share amounted to RMB -1.87, which mainly reflected the company's payment of the US$1 billion penalty to the U.S. government.

ZTE's operating revenue from the domestic market amounted to RMB25.746 billion, accounting for 65.29% of the Group’s overall operating revenue, while international sales amounted to RMB13.688 billion, accounting for 34.71% of the total.

ZTE's operating revenue for carriers’ networks, government and corporate business and consumer business amounted to RMB23.507 billion, RMB4.433 billion and RMB11.494 billion, respectively.

Management's commentary included the following:  "Looking to the second half of 2018, the Group will welcome new opportunities for development, given rapid growth in the volume of data flow over the network and the official announcement of the complete fully-functional 5G standards of the first stage. Specifically, such opportunities will be represented by: the acceleration of 5G commercialisation with the actual implementation of trial 5G deployment backed by ongoing upgrades of network infrastructure facilities; robust demand for smart terminals; as well as an onrush of new technologies and models with AI, IOT and smart home, among others, providing new growth niches. "

"In the second half of 2018, the Group will step up technological innovation and enhance cooperation with customers and partners in the industry, with an ongoing focus on high-worth customers and core products. In the meantime, we will improve our internal management by enhancing human resources, compliance and internal control to ensure our Group's prudent and sustainable development."



QSFP-DD MSA Group completes mechanical plugfest

The Quad Small Form Factor Pluggable Double Density (QSFP-DD) Multi Source Agreement (MSA) group completed a mechanical plugfest to validate compatibility and interoperability among members' designs.

The MSA said the event confirmed that the maturity of design experience resulted in a highly successful outcome. A key value proposition of the QSFP-DD form factor is its backward compatibility with the widely adopted QSFP28.

The areas of focus for this event included testing the electrical, latching and mechanical designs, all of which address the industry need for a high-density, high-speed networking solution.

In total, 15 companies participated in the private plugfest, which was hosted by Cisco at its headquarters in San Jose, California.

http://www.qsfp-dd.com/

New QSFP-DD MSA Targets 400G


The Quad Small Form Factor Pluggable Double Density (QSFP-DD) Multi Source Agreement (MSA) group has released a specification for the new QSFP-DD form factor, which is a next generation high-density, high-speed pluggable module with a QSFP28 compatible double-density interface. QSFP-DD pluggable modules can quadruple the bandwidth of networking equipment while remaining backwards compatible with existing QSFP form factors used across Ethernet, Fibre...

Nutanix says software sales growing at 49% annual clip

Nutanix reported revenue of $303.7 million for its fourth quarter ended July 31, 2018, up from $252.5 million a year earlier, reflecting the elimination of approximately $95 million in pass-through hardware revenue in the quarter as the company continues to execute its shift toward increasing software revenue.

Software and support revenue amounted to $267.9 million in the quarter, growing 49% year-over-year from $179.6 million in the fourth quarter of fiscal 2017.

GAAP net loss was $87.4 million, compared to a GAAP net loss of $66.1 million in the fourth quarter of fiscal 2017. Non-GAAP net loss was $19.0 million, compared to a non-GAAP net loss of $26.0 million in the fourth quarter of fiscal 2017.

“We ended the year on a high note with a record quarter on many fronts, positioning us extremely well for the future. We will continue to invest in talent and hybrid cloud technology while incubating strategic multi-cloud investments such as Netsil, Beam, and now Frame,” said Dheeraj Pandey, Chairman, Founder and CEO of Nutanix. “Frame increases our addressable market, brings another service to our growing platform, and adds employees with insurgent mindsets who will help us continue to challenge the status quo.”

“The company’s strong achievement of 78 percent non-GAAP gross margin, the best in our history, is the direct result of our successful execution toward a software-defined business model,” said Duston Williams, CFO of Nutanix. “We’re also tracking above our target performance we set using the ‘Rule of 40’ framework, demonstrating our ability to balance growth and cash flow.”


Dell'Oro: Sales of 25 Gbps NICs take off

Sales of 25 Gbps controller and adapter ports are forecast to grow at a 45 percent compound annual growth rate over the next five years, according to a new report from Dell'Oro Group, as 25 Gbps advances to become the mainstream speed in cloud and enterprise servers.

“25 Gbps has seen a strong initial ramp-up and is now expected to be the dominant speed over the next five years. We have seen Amazon and Facebook as early adopters of 25 Gbps technology, but more end users are transitioning as product availability increases," said Baron Fung, Senior Business Analysis Manager at Dell'Oro Group. "There's been a steady wave of 10 Gbps to 25 Gbps migration as other cloud service providers and high-end enterprises renew and upgrade their servers. Shipment of 25 Gbps ports is expected to peak in 2021, when 50 and 100 Gbps products based on 56 Gbps serial lanes start to ramp-up," said Fung.

Additional highlights from the Server 5-Year Forecast Report:
  • The total controller and adapter market is forecast to grow at a four percent compound annual growth rate, with 25 Gbps sales driving most of the growth.
  • Smart NICs could offer adapter vendors an opportunity to introduce innovative new products at higher price points, which could lower the total cost of ownership in the data center.

Dell'Oro: Server landscape shifts toward white box cloud servers

The server market is on track to surge $10 billion higher in 2018 before growth rates taper, according to a new report from Dell'Oro Group. The vendor landscape is trending toward lower-cost white box cloud servers.

“Although we forecast a five-year compounded annual growth rate of only two percent, the growth of the server market in 2018 will be at an unprecedented level,” said Baron Fung, Senior Business Analysis Manager at Dell’Oro Group. “However, the cloud segment, which consists of a high proportion of lower-cost custom designed servers, will continue to gain unit share over the Enterprise, putting long-term revenue growth under pressure.  Furthermore, the vendor landscape will continue to shift from OEM to white box Servers as the market is shifting towards the cloud,” added Fung.

Additional highlights from the Server 5-Year Forecast Report:


  • The 2018 growth is primarily attributed to rising average selling prices, resulting from vendors passing on higher commodity prices and end-users purchasing higher-end server configurations.
  • We estimate half of all servers shipping this year go to the cloud, and foresee this share growing to two-thirds by 2022.

A10 announces preliminary revenue of $60.7 million

A10 Networks announced preliminary revenue of $60.7 million for the quarter ended June 30, 2018, up 12% year-over-year. GAAP net loss was $4.5 million, or $0.06 per share, and non-GAAP net income was $1.6 million, or $0.02 per share.

“We have made steady progress across our key initiatives including strengthening our team, increasing our pace of innovation, and targeting our R&D investments in cloud, security and 5G. While our first quarter was impacted by our sales transformation, we were pleased to see improved momentum in the second quarter,” said Lee Chen, president and chief executive officer of A10 Networks. “There are a number of trends in the market that play to A10's strengths that we believe present many opportunities for growth over the long-term. We are focused as a management team and believe we are on the right path to continue to improve our execution and drive growth.”

Wednesday, August 29, 2018

Google hands over management of Kubernetes project to the community

Kubernetes, the container orchestration system introduced by Google in 2014, is taking the next step in its evolution.

Since introducing the project, Google has provided the cloud resources that support its development: CI/CD testing infrastructure, container downloads, and other services such as DNS, all running on Google Cloud Platform (GCP).

Since 2015, Kubernetes has been part of the Cloud Native Computing Foundation (CNCF) under the direction of the Linux Foundation.

Google said now that Kubernetes has become one of the world’s most popular open-source projects, it is time to hand over control. Google hosts the Kubernetes container registry and last month it served 129,537,369 container image downloads of core Kubernetes components. That’s over 4 million per day—and a lot of bandwidth!
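The per-day figure follows directly from the monthly total; a quick back-of-the-envelope check (assuming a 31-day month):

```python
# Back-of-the-envelope check of the Kubernetes image download rate.
# Assumes the 129,537,369 figure covers a 31-day month.
monthly_downloads = 129_537_369
per_day = monthly_downloads / 31
print(f"{per_day:,.0f} downloads/day")  # roughly 4.18 million per day
```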

Google will hand over all project operations of Kubernetes to the community (including many Googlers), who will take ownership of day-to-day operational tasks such as testing and builds, as well as maintaining and operating the image repository and download infrastructure.

Under the new plan, Google will make a $9 million grant of GCP credits to the CNCF, split over three years, to cover infrastructure costs. In addition to the world-wide network and storage capacity required to serve all those container downloads, a large part of this grant will be dedicated to funding scalability testing, which regularly runs 150,000 containers across 5,000 virtual machines.

Equinix to offer private connectivity to VMware cloud on AWS

Equinix will offer private connectivity to VMware Cloud on AWS via AWS Direct Connect at its data centers globally. VMware Cloud on AWS is an on-demand service that enables enterprises to run applications across vSphere-based cloud environments with access to a broad range of AWS services. Powered by VMware Cloud Foundation, this service integrates vSphere, vSAN and NSX along with VMware vCenter management, and is optimized to run on dedicated, elastic, bare-metal AWS infrastructure.

The service is now available in 24 metros around the world by connecting to an AWS Direct Connect edge node deployed at Equinix IBX data centers within the same metro or via Equinix Cloud Exchange Fabric (ECX Fabric).

Equinix said it already deploys more AWS Direct Connect onramps than any other data center provider.

Currently, AWS Direct Connect is available to customers in Equinix IBX data centers across 24 strategic markets including Amsterdam, Chicago, Dallas, Frankfurt, Helsinki, Los Angeles, London, Madrid, Manchester, Miami, Munich, New York, Osaka, Paris, Rio de Janeiro, São Paulo, Seattle, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Warsaw and Washington, D.C.

Mellanox ships 200G LinkX Copper and Optical Cables

Mellanox Technologies is now shipping 200GbE Ethernet and InfiniBand HDR LinkX optical transceivers, Active Optical Cables (AOCs) and Direct Attach Copper cables (DACs) for use in upcoming 200 Gbps systems.

The new LinkX 200 Gbps product line provides comprehensive options for switch, server, and storage network connectivity for HDR InfiniBand and 200/400GbE infrastructures. LinkX is part of the Mellanox "end-to-end" ecosystem, which includes Spectrum-2 200GbE and Quantum HDR systems and ConnectX-6 network adapters. The new offerings include:

  • 200G SR4/HDR Transceiver: Designed and manufactured by Mellanox, the 4x50G PAM4 transceiver uses the QSFP56 form-factor and forms the basis for transceivers and AOC products for Mellanox’s upcoming 200G systems.
  • 200GbE and HDR DAC and AOC cables: Designed and manufactured by Mellanox, available in both straight and y-splitter 100GbE and HDR100 form-factors.
  • 400GbE DAC Cables: Mellanox LinkX kicks off its 400GbE line with first shipments of its 400G 8x50G PAM4 DAC cables in the QSFP-DD form-factor.
  • Live Demos: At ECOC we will host a live demo with Keysight/Ixia showing 200Gb/s SR4 transceivers and 400Gb/s QSFP-DD DAC cables.
  • 400G SR8 Transceiver: Mellanox-designed, 8-channel parallel transceiver will be on display.
  • Low-Loss DAC Cables: Extending one of the industry's largest offerings of interconnect products with new low-loss DAC cables that enable simplified or even FEC-less links for the Mellanox SN2000 series of 25/50/100G network switches and ConnectX network adapters. The new cables offer lengths up to 5 meters and support the IEEE CA-N and CA-L specifications, enabling considerable interconnect latency savings.

Mellanox also began shipping 400G QSFP-DD DAC cables for use in next-generation systems.

Facebook establishes Express Wi-Fi Certified program

Facebook launched an Express Wi-Fi Certified partner program for access point manufacturers.

The initial list of Express Wi-Fi Certified partners includes Arista Networks, Cambium Networks, and Ruckus Networks, an ARRIS Company.

Facebook's Express Wi-Fi enables local entrepreneurs, internet service providers, and mobile network operators to offer fast, affordable internet access in local communities. Currently, Express Wi-Fi is available with 10 partners in five countries — India, Indonesia, Kenya, Nigeria, and Tanzania. People typically access Express Wi-Fi hotspots by signing up with a participating retailer and purchasing a prepaid data pack.

Facebook said Express Wi-Fi Certified hotspots must be able to perform two key tasks: authenticate people who want to use a hotspot and account for the Wi-Fi data they use.
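Facebook does not publish the certification interfaces, but the two required functions map onto a familiar pattern: a captive-portal-style authentication step followed by per-session byte accounting against a prepaid data pack. A minimal sketch of that flow, with all names hypothetical:

```python
# Hypothetical sketch of hotspot authentication and data accounting.
# None of these class or method names come from the Express Wi-Fi program.
class DataPack:
    def __init__(self, quota_bytes):
        self.remaining = quota_bytes

class Hotspot:
    def __init__(self):
        self.accounts = {}  # user_id -> DataPack

    def register(self, user_id, quota_bytes):
        """Retailer provisions a prepaid data pack for a user."""
        self.accounts[user_id] = DataPack(quota_bytes)

    def authenticate(self, user_id):
        """Admit only users with a valid pack and remaining quota."""
        pack = self.accounts.get(user_id)
        return pack is not None and pack.remaining > 0

    def account(self, user_id, bytes_used):
        """Debit usage; further traffic is denied once the pack is exhausted."""
        pack = self.accounts[user_id]
        pack.remaining = max(0, pack.remaining - bytes_used)
        return pack.remaining

hs = Hotspot()
hs.register("alice", 100 * 1024 * 1024)  # 100 MB prepaid pack
assert hs.authenticate("alice")
hs.account("alice", 100 * 1024 * 1024)   # pack exhausted
assert not hs.authenticate("alice")
```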

Facebook sets 2020 target for 100% renewable power

Facebook has set 2020 as its target to reduce greenhouse gas emissions by 75% and to power 100% of global operations with renewable energy. Three years ago, the company set a goal of powering 50% of its operations with renewable power by 2018. It reached this goal by the end of 2017.

To date, Facebook has signed contracts for over 3 gigawatts of new solar and wind energy. Most of the contracts have been signed over the past 12 months for new solar and wind projects that will deliver energy to its hyperscale data operations.

https://newsroom.fb.com/news/2018/08/renewable-energy/

AT&T adds Indianapolis to its mobile 5G rollout

AT&T added Indianapolis to its list of 5G rollout cities, joining Atlanta, Charlotte, Dallas, Oklahoma City, Raleigh and Waco.

“Indy is a city on the forefront of innovation and technology. Home to a variety of large and small businesses, thriving communities, and a local government that understands the importance of technology to fuel innovation and boost economic growth,” said Bill Soards, president of AT&T Indiana. “Whether you’re a retailer, car wash owner, hospital, manufacturer, public safety entity or a bank, we expect 5G will eventually change the customer experience and provide new economic opportunities for your business. It was a natural choice for AT&T to name Indy as one of the twelve introductory 5G cities.”

AT&T invested nearly $425 million in its Indianapolis area wireless and wired networks during 2015-2017. In 2017, AT&T made more than 525 wireless network upgrades in the Indianapolis area. These include new cell sites, boosting network capacity and new wireless high-speed internet connections. The upgrades included its 5G Evolution technologies and LTE-LAA.

Cloudian raises $94 million for hyperscale data fabric

Cloudian, a start-up offering a hyperscale data fabric for enterprises, raised $94 million in a Series E funding, bringing the company’s total funding to $173 million.

“Cloudian redefines enterprise storage with a global data fabric that integrates both private and public clouds — spanning across sites and around the globe — at an unprecedented scale that creates new opportunities for businesses to derive value from data,” said Cloudian CEO Michael Tso. “Cloudian’s unique architecture offers the limitless scalability, simplicity, and cloud integration needed to enable the next generation of computing driven by advances such as IoT and machine learning technologies.”

The funding round included participation from investors Digital Alpha, Eight Roads Ventures, Goldman Sachs, INCJ, JPIC (Japan Post Investment Corporation), NTT DOCOMO Ventures, Inc. and WS (Wilson Sonsini) Investments.

“Computing now operates without physical boundaries, and customers need storage solutions that also span from the data center to the edge,” said Takayuki Inagawa, president & CEO of NTT DOCOMO Ventures. “Cloudian’s geo-distributed architecture creates a global fabric of storage assets that supports the next generation of connected devices.”

Dell’Oro: WLAN market to hit $18.2 billion by 2022

The WLAN market revenue will grow to $18.2 billion by 2022, according to a new report from Dell'Oro Group. Broadband CPE and Mesh routers will fuel growth for SOHO class WLAN.

“We estimate the Enterprise and SOHO markets to maintain modest growth over the next five years,” said Ritesh Patel, Business Analyst at Dell’Oro Group. “Specifically in the SOHO market, the need for better Wi-Fi coverage and in-home management will be driven by the demand for improved network visibility and application mobility, i.e. having the ability from anywhere in the home to experience high bandwidth applications such as FaceTime, Netflix, and YouTube. Broadband CPE and Wi-Fi Router vendors will fulfill this need by integrating new wireless technology, 802.11ax, and by improving visibility into analytics and management controls. Similarly, vendors will address the need for better coverage through Mesh Routers, which will continue to see modest to high growth over the next five years,” added Patel.

The WLAN 5-Year Forecast Report highlights other key trends, including:


  • Enterprise Cloud license revenue will surpass Cloud Managed AP revenue for the first time.
  • Ethernet switch sales to aggregate WLAN are poised to eclipse switch sales lost to WLAN. See our Campus Networks Advanced Research report.
  • China will eclipse North America, currently the largest region, in unit terms by 2020.

Tuesday, August 28, 2018

Huawei intros Universal Transport for electrical grids

Huawei announced the commercial release of its next-generation Universal Transport Solution for smart grids.

Huawei said traditional low-speed connectivity services for the power grid, such as Supervisory Control and Data Acquisition (SCADA) and relay protection, will continue to exist for a long time, providing high levels of security. However, the rapid expansion of the global power grid is driving a spike in the construction of power transmission and transformation networks that have greater requirements for more types of service and higher bandwidth.

Electric power companies also have the opportunity to launch commercial connectivity services based on their investment in network resources.

The Huawei Next-Generation Universal Transport Solution integrates OTN, SDH, packet, PCM, and other technologies based on the MS-OTN architecture. It supports single wavelengths at 100 Gbps or 200 Gbps, and with DWDM a single pair of optical fibers can achieve an ultra-large bandwidth of 20 Tbps. The platform enables power transmission bearer networks to carry multiple services, including commercial traffic.
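The 20 Tbps figure is consistent with a standard DWDM channel plan; for example (the 100-channel count is an illustrative assumption, not a number Huawei states):

```python
# 100 DWDM wavelengths at 200 Gbps each fill a 20 Tbps fiber pair.
# The 100-channel count is an illustrative assumption, not Huawei's spec.
wavelengths = 100
per_wavelength_gbps = 200
total_tbps = wavelengths * per_wavelength_gbps / 1000
print(f"{total_tbps} Tbps")  # 20.0 Tbps
```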

https://www.huawei.com/us/press-events/news/2018/8/next-generation-universal-transport-solution

Huawei adapts 4.5G/5G solution for China's power companies

Huawei announced a 4.5G-based and 5G-oriented eLTE-DSA (eLTE Discrete Spectrum Aggregation) solution to help global power companies build a last mile neural network for managing smart grids.

Huawei says the traditional VHF (30~300MHz) / UHF (300~3000MHz) narrowband discrete spectrum used in the energy industry cannot meet the requirements of power IoT development because it is commonly based on data radio technology, which creates technical bottlenecks due to long latency, small capacity, insufficient bandwidth, and high power consumption.

In China, discrete spectrum in the 230 MHz VHF narrowband is allocated to the power industry. The spectrum is aggregated to achieve a minimum latency of 20 ms, a maximum of 4,000 users per cell, and single-user transmission rates ranging from Kbps to Mbps. The minimum static power consumption of the module is 0.15 W.

Earlier this month, Huawei conducted a performance and service verification test of its 4.5G-based and 5G-oriented eLTE-DSA solution with the China Electric Power Research Institute (CEPRI). The results showed excellent performance in terms of speed, capacity, security and reliability, and that the solution can fully meet the intelligent control service requirements, such as precise load control and power distribution automation.

Huawei also notes that its 4.5G-based and 5G-oriented eLTE-DSA solution has strong anti-interference capabilities, as it can run stably in a complex radio environment where data transmission stations coexist.

Huawei expects its solution to be widely adopted by the electric power utilities in China for carrying mission-critical services such as precise load control, power distribution automation, and collection of power consumption information.

Open source "Zowe" framework bridges apps to mainframes

The Open Mainframe Project, which is backed by IBM and CA Technologies amongst others, introduced Zowe, a new open source software framework that bridges the divide between modern applications and the mainframe.

Zowe is the first open source project based on z/OS. Its mission is to enable better integration capabilities for z/OS through an extensible open source framework and the creation of an ecosystem of Independent Software Vendors (ISVs), system integrators, clients and end users.

Initial Zowe modules will include:

  • An extensible z/OS framework that provides new APIs and z/OS REST services to transform enterprise tools and DevOps processes that can incorporate new technology, languages and modern workflows.
  • A unifying workspace providing a browser-based desktop app container that can host both traditional and modern user experiences and is extensible via the latest web toolkits.
  • An innovative interactive and scriptable Command Line interface (CLI) allowing new ways to seamlessly integrate z/OS in cloud and distributed environments.  
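To give a sense of what "z/OS REST services" enables, a client in any language can drive the mainframe over plain HTTP. The sketch below builds an authenticated request to list data sets via a z/OSMF-style endpoint; the host, credentials, and exact path are illustrative placeholders, not a documented Zowe API:

```python
# Illustrative REST client for a z/OSMF-style endpoint of the kind Zowe
# builds on. Host, credentials, and path are placeholders, not a
# documented Zowe API.
import base64
import json
import urllib.request

def build_request(host, user, password, hlq):
    """Build an authenticated request listing data sets under a high-level qualifier."""
    url = f"https://{host}/zosmf/restfiles/ds?dslevel={hlq}"
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("X-CSRF-ZOSMF-HEADER", "")  # z/OSMF rejects requests without it
    return req

def list_datasets(req):
    """Issue the request and return the parsed data set entries."""
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["items"]
```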

In addition to this technical milestone, Rocket Software is now part of the Open Mainframe Project as a Platinum Member, joining Platinum members IBM and CA Technologies as key contributors to the Zowe framework.

"The mainframe continues to be a critical platform offering new possibilities for next generation applications. We are excited to participate with the OMP and Zowe community members to streamline the development process for applications leveraging the platform," said Greg Lotko, General Manager, Mainframe, CA Technologies. "We are committed to the Zowe initiative as it provides simplified and familiar infrastructure services for the mainframe benefiting both experienced and newer developers and will help our customers accelerate the time-to-market as they deploy their mission critical digital transformation strategies."

ARM and Facebook join Yocto Project for Linux in embedded devices

The Yocto Project, the open source collaboration project that launched in 2011 to help developers create custom Linux-based systems for embedded products, announced ARM and Facebook as new platinum members, joining its 20 other member companies.

The Yocto Project provides a flexible set of tools and a space where embedded developers worldwide can share technologies, software stacks, configurations, and best practices to create tailored Linux images for embedded and Internet of Things (IoT) devices. An upcoming release is expected this fall.


“The next release will demonstrate Yocto Project’s ability to efficiently build and importantly, test complete Linux software stacks which are reproducible, easily audited and totally customizable in a maintainable way,” said Richard Purdie, Project Architect of the Yocto Project.

Cloudera Data Warehouse handles 50 PB loads

Cloudera Data Warehouse has entered general availability. The service is a modern hybrid cloud data warehouse for storing, analyzing and managing data in public clouds and on-premises. The company said its hybrid, cloud-native architecture routinely handles 50 PB data workloads, delivering sub-second query performance and serving clusters with hundreds of compute nodes.

NYSE is using Cloudera to run over 80,000 queries a day on petabytes of data, while adding 30 TB of fresh data daily.

"Before Cloudera, several data warehouse appliances were necessary to support our complex analytic requirements including market surveillance and member compliance analysis. Because the warehouse appliances could not scale we were forced to silo our data by market," said Steve Hirsch, Chief Data Officer, Intercontinental Exchange / NYSE.

The company also announced the availability of Cloudera Altus Data Warehouse, a data warehouse as-a-service, built with the same Cloudera Data Warehouse hybrid, cloud-native architecture.


Lattice Semiconductor appoints AMD exec as its new CEO

Lattice Semiconductor appointed Jim Anderson as its new President and Chief Executive Officer, and to the company’s Board of Directors. He most recently served at Advanced Micro Devices (AMD) as Senior Vice President and General Manager of the Computing and Graphics Business Group.

Jeff Richardson, Chairman of the Board, said, “On behalf of the Board, we are pleased to announce the appointment of Jim Anderson as Lattice’s new President and Chief Executive Officer. Jim brings a strong combination of business and technical leadership with a deep understanding of our target end markets and customers. The transformation he drove of AMD’s Computing and Graphics business over the past few years is just a recent example of his long track record of creating significant shareholder value.”

President Trump blocks sale of Lattice Semi citing National Security

President Trump signed an order blocking the sale of Lattice Semiconductor to Canyon Bridge Capital Partners on national security grounds. The issue was referred to the President by the Committee on Foreign Investment in the United States (CFIUS) due to concerns regarding China Venture Capital Fund Corporation Limited and its interest in Canyon Bridge Capital Partners.

Darin G. Billerbeck, CEO of Lattice Semiconductor, issued the following statement:

“The transaction with Canyon Bridge was in the best interests of our shareholders, our customers, our employees and the United States. We also believe our CFIUS mitigation proposal was the single most comprehensive mitigation proposal ever proposed for a foreign transaction in the semiconductor industry and would have maximized United States national security protection while still enabling Lattice to accept Canyon Bridge’s investment and double American jobs. While it is disappointing that we were not able to prevail, the Board and I would like to thank Canyon Bridge for their support during this time.”

https://www.whitehouse.gov/the-press-office/2017/09/13/order-regarding-proposed-acquisition-lattice-semiconductor-corporation

Private Equity Firm Acquires Lattice Semi for $1.3 Billion - FPGAs

Canyon Bridge Capital Partners agreed to acquire all outstanding shares of Lattice Semiconductor Corporation (NASDAQ:LSCC) for approximately $1.3 billion inclusive of Lattice’s net debt, or $8.30 per share in cash. This represents a 30% premium to Lattice’s last trade price on November 2, 2016, the last trading day prior to announcement.

Lattice supplies low power FPGA, video ASSP, 60 GHz millimeter wave, and IP products to the consumer, communications, industrial, computing, and automotive markets worldwide. The company is based in Portland, Oregon.

Dell'Oro: Huawei recaptures microwave transmission market share

The microwave transmission market maintained its positive momentum in 2Q 2018, according to a new report from Dell'Oro Group. Huawei regained the highest market share in the quarter.

“This was a second consecutive quarter of year-over-year growth for the microwave transmission equipment market,” stated Jimmy Yu, Vice President with Dell’Oro Group. “We are increasingly confident that demand for mobile backhaul is improving, and that it will expand this year following two years of revenue contraction,” continued Yu.

Top Four Microwave Transmission Manufacturers in 2Q 2018

Rank    Company     Revenue Share
1       Huawei      24%
2       Ericsson    21%
3       Nokia       11%
4       NEC         10%

Additional highlights from the 2Q 2018 Microwave Transmission Quarterly Report:

  • The point-to-point Microwave Transmission market grew six percent year-over-year in the quarter.
  • Huawei regained the highest market share in the quarter after briefly losing this top spot to NEC in 1Q 2018.
  • Ericsson and Nokia grew their revenue in the quarter, giving them the second and third highest market shares, respectively.
  • The country with the largest demand for Microwave Transmission equipment was India. We estimate radio shipments to India grew about 45 percent year-over-year in the first half of 2018.

http://www.delloro.com

Intel brings 802.11ac and LTE to its latest mobile CPUs

New additions to the 8th Gen Intel Core processor family offer integrated Wi-Fi and LTE capabilities.

The new U-series (formerly code-named Whiskey Lake) and Y-series (formerly code-named Amber Lake) processors are optimized for connectivity in thin, light laptops and 2-in-1s.

The U-series processors bring integrated Gigabit Wi-Fi (with 2x2 160 MHz channels) while delivering up to twice the performance of a 5-year-old PC, plus double-digit gains over the previous generation in office productivity, everyday web browsing and light content creation.
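The "Gigabit Wi-Fi" claim follows from the standard 802.11ac rate math: two spatial streams on a 160 MHz channel with 256-QAM and the short guard interval yield a PHY rate of roughly 1.73 Gbps:

```python
# Peak 802.11ac PHY rate for 2x2 at 160 MHz (VHT MCS 9, short guard interval).
data_subcarriers = 468     # data subcarriers in a 160 MHz VHT channel
bits_per_symbol  = 8       # 256-QAM
coding_rate      = 5 / 6   # MCS 9 coding rate
symbol_time_s    = 3.6e-6  # OFDM symbol incl. 400 ns short guard interval
streams          = 2

rate_bps = streams * data_subcarriers * bits_per_symbol * coding_rate / symbol_time_s
print(f"{rate_bps / 1e9:.2f} Gbps")  # ~1.73 Gbps
```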

The Y-series processors also deliver fast connectivity options, including fast Wi-Fi and LTE capabilities, while offering double-digit gains in performance compared with the previous generation, enabling fresh innovations in sleek and compact form factor designs with extended battery life.

Gremlin offers Failure-as-a-Service for Docker

Gremlin, a start-up based in San Francisco, has developed a "failure injection platform" that allows developers to stress test Docker environments to better prepare for real-world disasters by simulating compounding issues.

The company said its Failure-as-a-Service platform aims to make containerized infrastructure more resilient.

In December 2017, Gremlin launched the first iteration of its platform alongside a $7.5 million Series A funding round, recreating common failure states within hybrid cloud infrastructure.

“The concept of purposefully injecting failure into systems is still new for many companies, but chaos engineering has been practiced at places like Netflix and Amazon for over a decade,” said Matthew Fornaciari, CTO and Co-Founder of Gremlin. “We like to use the vaccine analogy: injecting small amounts of harm can build immunity that proactively avoids disasters. With today’s updates to the Gremlin platform, DevOps teams will be able to drastically improve the reliability of Docker in production.”
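Gremlin's platform itself is proprietary, but the underlying idea of injecting small, controlled doses of failure can be sketched in a few lines. The toy decorator below randomly adds latency or raises an error so that callers must prove they tolerate it; this is an illustration of the technique, not Gremlin's implementation:

```python
# Toy chaos-engineering sketch: randomly inject latency or failures.
# This illustrates the general technique, not Gremlin's product.
import random
import time

def chaos(failure_rate=0.1, max_delay_s=0.5, seed=None):
    """Decorator that randomly injects latency or an exception into a call."""
    rng = random.Random(seed)
    def wrap(fn):
        def inner(*args, **kwargs):
            time.sleep(rng.uniform(0, max_delay_s))  # latency injection
            if rng.random() < failure_rate:          # fault injection
                raise RuntimeError(f"chaos: injected failure in {fn.__name__}")
            return fn(*args, **kwargs)
        return inner
    return wrap

@chaos(failure_rate=0.2, max_delay_s=0.05, seed=42)
def fetch_order(order_id):
    return {"id": order_id, "status": "shipped"}

# Callers must now tolerate slow or failing responses, which is the point:
for i in range(5):
    try:
        fetch_order(i)
    except RuntimeError as err:
        print(err)
```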

http://www.gremlin.com


The Series A funding came from Index Ventures and Amplify Partners.


Monday, August 27, 2018

VMware positions NSX for multi-cloud, containerized networking

VMware is well positioned at the intersection of major industry trends, including virtualization, containerization, cloud, SDN, hyperconverged infrastructure, application security, and AI, said company CEO Pat Gelsinger, kicking off the annual VMworld conference in Las Vegas and marking the 20th anniversary of the Palo Alto, California-based firm. The VMware vision is to become "the essential, ubiquitous digital foundation for any device, any application, and any cloud."

Gelsinger said NSX, VMware's SDN network virtualization and security platform, now accounts for 80 million installed switch ports across more than 7,500 customers, including 82% of the Fortune 100 running NSX in their networks. VMware estimates this footprint at 10X its largest competitor. Gelsinger argues that this positions NSX as the best platform to address the complexity, security and scale of multi-cloud corporate environments and containerized networking. The latest version of NSX extends multi-cloud networking and security capabilities to AWS, in addition to Microsoft Azure and on-premises environments. NSX is also adding support for bare metal hosts alongside hypervisor and container environments, including Linux-based workloads and containers running on bare-metal servers without a hypervisor.

VMware showcased a collaboration with Arista which enables NSX security policies to be enforced natively on Arista switches across a multi-cloud enterprise, extending security policies across both virtual and physical workloads, from mainframes to the data center to public clouds. The collaboration also integrates Arista’s Macro-Segmentation Services with VMware NSX micro-segmentation capabilities.

VMware's vSAN, which is a hyper-converged, software-defined storage (SDS) product, is likewise making strong inroads into software-defined data centers. The company claims over 15,000 customers, a 50% presence in Global 2000 enterprises, and a 37% market share.

The partnership with Amazon is growing. Earlier, the company announced that VMware Cloud on AWS is now available in Amazon Web Services’ (AWS) Asia-Pacific (Sydney) region. AWS CEO Andy Jassy appeared on-stage at VMworld to announce that new VMware Cloud + AWS solutions will be arriving soon for GovCloud, the Amazon initiative for secure government cloud infrastructure. AWS and VMware are also developing new NSX + AWS Direct Connect networking connectivity options.

Amazon Web Services and VMware announced Amazon Relational Database Service (Amazon RDS) on VMware. This service will make it easy for customers to set up, operate, and scale databases in VMware-based software-defined data centers and hybrid environments and to migrate them to AWS or VMware Cloud on AWS.

VMware agreed to acquire CloudHealth Technologies, a start-up based in Boston. Financial terms were not disclosed. CloudHealth Technologies delivers a cloud operations platform across AWS, Microsoft Azure and Google Cloud. It claims 1,300 customers.

VMware will now support live migration of virtual machines using Nvidia Quadro vDWS vGPUs. Operators can move a guest to another compatible host while performing maintenance on the original server.