Monday, August 13, 2018

Intel's Future of the Enterprise looks to cloud repatriation

A fundamental question regarding the astonishing rise of public cloud companies is how far they will go in capturing the enterprise networking and IT services businesses.

Nearly every week, this journal tracks another "all-in" customer migration story to AWS, Azure, Google Cloud, Alicloud, etc. However, reading the press announcements in detail, we often find caveats. Sometimes only one division of a corporation is fully migrating to the cloud, or only a next-gen application is all-in, or the news merely expresses an intent to go all-in on cloud, with no timeline given.

This topic was the subject of a presentation given last week at the Intel Data Centric Innovation Summit by Rajeeb Hazra, Corporate Vice President of Intel's Data Center Group. Earlier in the day, Intel had spoken quite enthusiastically about partnerships with the big cloud providers to build custom silicon for their pressing scalability challenges. It is clear that Intel is benefitting from these providers' explosive buildout of hyperscale data centers. The forthcoming Intel Xeon processors with integrated persistent memory are an effort to fan those flames.

The afternoon talk by Hazra tackled the "Future of the Enterprise" from the Intel perspective. Plenty of Xeon chips have shipped to enterprise customers over the past two decades, and the company is determined that this party won't end for any reason, not even cannibalization by Xeon sales to the public cloud.

The key takeaway from Hazra's presentation is a data point cited from IDC: 80% of enterprises are considering repatriating workloads from public clouds to their on-premises infrastructure.

One argument against this trend is that the biggest enterprise applications require large clusters of high-performance systems, which today are best procured in a public cloud. However, thinking about the mid- to long-term future of computing, we should question whether this assumption will always hold true. How big are the largest corporate applications? What would it take to run them in-memory? Hundreds of cores and thousands of threads? Terabytes of persistent memory?

The development of persistent memory technologies, such as Intel's Optane, promises to redraw the boundaries between computing and storage. In an era of microservices and containerization, could very efficient on-premises infrastructure do the job better than an all-in public cloud or hybrid cloud solution? Ten years from now, or maybe sooner, the amount of compute, storage, and AI resources that can be packed into one standard rack of equipment may have caught up to the big data needs of most enterprise applications. In some sense, this is the same old question of whether it is better to rent or own. The answer depends on many factors, and it may well shift away from public clouds once resources are cheap and powerful enough.
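The rent-or-own question can be made concrete with a back-of-the-envelope break-even calculation. All dollar figures below are hypothetical, chosen only to illustrate the logic, not drawn from any vendor pricing.

```python
# Back-of-the-envelope rent-vs-own comparison. All dollar figures are
# hypothetical illustrations, not data from the article.

def break_even_months(purchase_cost, monthly_opex, monthly_cloud_rent):
    """Months after which owning becomes cheaper than renting, or None."""
    monthly_savings = monthly_cloud_rent - monthly_opex
    if monthly_savings <= 0:
        return None  # owning never pays for itself
    return purchase_cost / monthly_savings

# Hypothetical: a $200k rack costing $3k/month to run vs. $12k/month cloud rent.
months = break_even_months(200_000, 3_000, 12_000)
print(f"Owning breaks even after roughly {months:.1f} months")
```

As rack-scale compute and persistent memory get denser and cheaper, the purchase-cost term shrinks and the break-even point moves earlier, which is exactly the shift anticipated above.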

A video of Rajeeb Hazra's "Future of the Enterprise" presentation is archived here:

IETF completes Transport Layer Security 1.3

The IETF has officially completed and published Transport Layer Security (TLS) Protocol Version 1.3, which is the most important security protocol update for the Internet since SSL was completed twenty years ago.

The preceding TLS 1.2 version was known to have several high profile vulnerabilities. TLS 1.3 brings major improvements in security, performance, and privacy in the way that client/server applications communicate over the Internet.

The IETF says it engaged with the cryptographic research community to analyze, improve, and validate the security of TLS 1.3 so as to prevent eavesdropping, tampering, and message forgery.

Some highlights:

  • TLS 1.3 encrypts more of the negotiation handshake to protect it from eavesdroppers.  
  • TLS 1.3 enables forward secrecy by default, which means that the compromise of long-term secrets used in the protocol does not allow the decryption of data communicated while those long-term secrets were in use. This ensures that past communications remain secure even if those secrets are compromised in the future.
  • TLS 1.3 shaves an entire round trip from the connection establishment handshake. In the common case, new TLS 1.3 connections will complete in one round trip between client and server.
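A client can observe which protocol version a server actually negotiates. The sketch below uses only Python's standard-library ssl module; it assumes Python 3.7+ built against OpenSSL 1.1.1+ for TLS 1.3 support, and the hostname is a placeholder.

```python
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect to host:port and return the negotiated TLS version string."""
    context = ssl.create_default_context()
    # Refuse anything older than TLS 1.2 so a downgrade is visible, not silent.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3" or "TLSv1.2"

# Example (requires network access): negotiated_tls_version("www.example.com")
```

Setting `context.minimum_version = ssl.TLSVersion.TLSv1_3` instead turns the check into a hard requirement: the handshake fails unless the server speaks TLS 1.3.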

TLS v1.3: How Do We Get from Here to There?

Spirent has been leading the way in providing companies with the tools they need to prepare for TLS v1.3. I had a chance to sit down with David DeSanto, Director of Products and Threat Research at Spirent, to talk about the transition to TLS v1.3 and some of the hurdles that organizations face as they make the switch to the new de facto standard.

Jim Carroll, OND: Tell me about the evolution of TLS. How did we get to this point?

David DeSanto, Spirent: Transport Layer Security, or TLS, is a cryptographic protocol used by many applications and services such as web browsing, email communications, and multimedia communications.  It made Secure Sockets Layer, or SSL, obsolete as it offered better encryption properties such as perfect forward secrecy (PFS), newer cipher suites, etc.

TLS v1.2 is considered the current de facto standard for cryptography when paired with a strong cipher suite and a large private key (i.e., asymmetric key).  However, this comes at a cost to the user’s experience, as the protocol itself and the cipher suites offering elliptic curve key exchange with PFS and large asymmetric keys carry a performance hit.  Even in today’s world of data breach roulette, organizations often choose a lower encryption standard or cipher suite so that cryptographic steps do not overburden the user and potentially drive them to stop using their services altogether.
TLS v1.3 looks to address the concerns commonly seen with TLS v1.2.  The new standard includes performance improvements so that the user does not bear as much overhead in initializing the secure connection.  It also makes obsolete additional insecure cryptographic practices that can allow attackers to improperly gain access to encrypted communications.

Jim Carroll, OND: What's the current status of TLS v1.3 and what is the next phase of specification development?

David DeSanto: TLS v1.3 is still in draft (specifically draft-28), which was submitted to the IETF in March as the proposed standard and is now going through the IETF ratification process.  It is expected to be ratified this year, and you can track its progress at

Jim Carroll, OND: What is the market reaction so far?  Are customers implementing TLS v1.3 in big numbers?

David DeSanto: The market adoption has varied depending on the specific technology and vertical.
As TLS v1.3 is a cryptographic protocol used by a client and server to provide privacy and data integrity, users can be put into a “forced adoption” model without realizing it.  The best example of this is with one of the bigger champions of TLS v1.3: Google.  Google rolled out TLS v1.3 earlier this year within its services and consumer solutions.  If you have a Gmail account and access it using the Chrome browser, you are using TLS v1.3 and may not even know it.

There is a parallel effort—started by development at Google in 2016—to build a new transport layer protocol named QUIC (short for Quick UDP Internet Connections).  It was first submitted to the IETF in 2016 and is currently still in draft with draft-12 being the current working draft.  QUIC has encryption requirements built right into its standard and these requirements are based on TLS v1.3.  

Just these two examples show strong adoption of TLS v1.3 so far and it is expected to grow at a consistent rate.  TLS v1.3 is expected to be adopted at a much faster rate than previous iterations of the TLS protocol—due in large part to the providers we rely on today who are actively making the switch to support it quickly.  Google is joined by many others who have already implemented and have enabled support by default.

Jim Carroll, OND: What technical hurdles are there to implementing TLS v1.3?

David DeSanto: There are three crucial considerations that organizations need to keep in mind as they prepare to migrate to TLS v1.3:

1. How to handle zero round trip time resumption (0-RTT)
2. Preparing for downgrades to TLS v1.2
3. The need for infrastructure and application testing

The 0-RTT option has the potential to significantly increase performance during an encrypted session between endpoints. With TLS v1.2, secure web communications require two round trips between the client and server before the client can make an HTTP request and the server can generate a response. TLS v1.3 reduces this to one round trip and offers the ability to inherit trust from a previous session to accomplish zero round trips, or 0-RTT. The performance gain, however, creates a significant security risk: a 0-RTT transaction is easy prey for a replay attack, in which a threat actor intercepts an encrypted client message and resends it to the server, tricking the server into improperly extending trust and thus potentially granting the threat actor access to sensitive data. Organizations should be wary of allowing or using 0-RTT. Unless your application or service is highly latency-sensitive, the new option is simply not worth the security risk.

Another concern is that TLS v1.3 is backward compatible with TLS v1.2 to allow for interoperability with legacy clients and servers during the transition to the new standard. It’s important to configure security settings so that any fallback to TLS v1.2 still uses high security standards. Organizations should disable weaker cryptographic algorithms to prevent attacks such as man-in-the-middle, and select strong cipher suites: ones that use elliptic curve key exchange, large asymmetric keys, and PFS.
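As a sketch of what that server-side hardening can look like, the snippet below uses Python's standard-library ssl module (Python 3.7+/OpenSSL 1.1.1+ assumed; the certificate paths are placeholders). It pins the protocol floor at TLS 1.2 and restricts the fallback path to ECDHE suites with AEAD ciphers, preserving forward secrecy.

```python
import ssl

def hardened_server_context(certfile: str, keyfile: str) -> ssl.SSLContext:
    """Server context that never falls below TLS 1.2 and keeps PFS on fallback."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Floor the protocol version: legacy clients get TLS 1.2, nothing older.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    # For the TLS 1.2 fallback path, allow only ECDHE key exchange (forward
    # secrecy) with AES-GCM or ChaCha20-Poly1305. TLS 1.3 cipher suites are
    # managed separately by OpenSSL and already require forward secrecy.
    context.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
    context.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return context
```

The equivalent settings exist in most TLS stacks and web servers; the point is that the downgrade path, not just the TLS 1.3 path, must be explicitly configured.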

Testing is also crucial. The change to TLS v1.3 may be disruptive, and it’s important to discover and address issues proactively. Businesses should test for interoperability, security, and performance in a combined, holistic manner: use a realistic load, generating, inspecting, and processing appropriate levels of encrypted traffic; validate how internal and external users will interact with your systems; and consider what this change in encryption may mean for an employee, customer, partner, or any other relevant stakeholder. You also have to test all clients, including mobile devices and tablets, as well as the entire network infrastructure, such as identity and access management systems, firewalls, and web proxies.

NVIDIA's 8th GPU architecture for real time graphics and AI

NVIDIA CEO Jensen Huang unveiled the company's eighth-generation GPU architecture, which he described as "the greatest leap since the invention of the CUDA GPU in 2006."

The new GPU architecture, which is codenamed "Turing", leverages dedicated ray-tracing processors — called RT Cores — to accelerate the computation of how light and sound travel in 3D environments. It also employs Tensor Core processors to accelerate deep learning training and inferencing, providing up to 500 trillion tensor operations per second.

These hardware capabilities, along with a new software stack merging rastering and ray tracing, are expected "to fundamentally change how computer graphics is done."

Initial Turing-based products include the NVIDIA Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000 GPUs.

Samsung supplies 16Gb GDDR6 Memory for NVIDIA Quadro

Samsung Electronics has supplied its latest 16-gigabit (Gb) Graphics Double Data Rate 6 (GDDR6) memory for NVIDIA’s new Turing architecture-based Quadro RTX™ GPUs.

Samsung's 16Gb GDDR6 doubles the device capacity of the company’s 20-nanometer 8Gb GDDR5 memory. The new solution performs at a 14 Gbps pin speed with data transfers of 56 gigabytes per second (GB/s) per device, a 75 percent increase over 8Gb GDDR5 with its 8 Gbps pin speed.
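The stated figures are internally consistent: a GDDR6 device exposes a 32-bit data interface (two 16-bit channels), so the per-device bandwidth and the uplift over GDDR5 follow directly from the pin speeds. A quick arithmetic check:

```python
# Sanity-check the announced GDDR6 numbers.
pin_speed_gbps = 14          # per-pin data rate (Gbps)
data_pins = 32               # 32-bit data interface per GDDR6 device
device_bandwidth = pin_speed_gbps * data_pins / 8   # bits -> bytes
print(device_bandwidth)      # 56.0 GB/s, matching the stated figure

gddr5_pin_speed = 8          # Gbps pin speed of the 8Gb GDDR5 part
uplift = (pin_speed_gbps - gddr5_pin_speed) / gddr5_pin_speed
print(f"{uplift:.0%}")       # 75%, matching the claimed increase
```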

Samsung says its GDDR6 consumes 35 percent less power than that required by the leading GDDR5 graphics solutions.

"It’s a tremendous privilege to have been selected by NVIDIA to launch Samsung’s 16Gb GDDR6, and to have enjoyed the full confidence of their design team in making our key contribution to the NVIDIA Quadro RTX GPUs," said Jim Elliott, Corporate Senior Vice President at Samsung Semiconductor, Inc.

BAE Systems partners with Flexera for government cloud migration

BAE Systems, the British defence, security and aerospace company, has formed a partnership with Flexera to help government agencies moving to the cloud better manage their software licenses and more accurately plan and budget for their future information technology (IT) needs.

Specifically, BAE Systems will integrate Flexera’s asset and license management tools into its scalable, hybrid cloud environment for government. The federated secure cloud, developed by BAE Systems and Dell EMC, is designed for any U.S. Intelligence Community, Department of Defense (DoD), or federal/civilian government organization.

Flexera is based in Itasca, Illinois.

“With our federated secure cloud, we’re helping government agencies rethink how they share data, analyze information, and collaborate across their enterprises real-time while remaining consistent with strict governance and security requirements,” said Peder Jungck, vice president and general manager of BAE Systems’ Intelligence Solutions business. “It’s only natural that we’d partner with Flexera – a company reimagining how government IT assets and software licenses are bought, sold, managed, and secured.”

U.S. Defense Authorization Act bans Huawei and ZTE from government purchase

The John S. McCain National Defense Authorization Act for FY 2019, which was signed into law by President Trump, officially prohibits the  U.S. government from purchasing telecommunications equipment produced by Huawei Technologies, ZTE Corporation, or any of their affiliates.  The U.S. government is also prohibited from using telecommunications or video surveillance services from any entities using such equipment.

Earlier versions of the legislation had threatened more severe action against Huawei and ZTE, but those provisions were later removed from the bill.

Nokia supplies Optical LAN for new Korean resort hotel

Nokia has completed the first phase of an Optical LAN deployment at Jeju Shinhwa World in Jeju, South Korea, a massive resort project that will incorporate a theme park and other attractions.

The installation brings fiber to each of the 1,326 hotel rooms to support in-room IT services such as IPTV, VoIP, room control and Wi-Fi.

Technology used for the deployment:

  • Nokia's 7360 Intelligent Services Access Manager (ISAM) FX serves as a high-capacity access node
  • Nokia's 7368 Intelligent Service Access Manager (ISAM) Optical Network Terminals (ONTs) deliver superior triple-play services with high bandwidth capacity to the end users.
  • Nokia's 5571 PCC controls all LAN systems from one centralized advanced management platform optimized for performance and usability.

256 GB microSD cards are aimed at mobile gamers

Kingston Technology introduced a 256GB microSD card under its HyperX brand aimed at gamers.

The new Gaming microSD Card line is designed for mobile gamers who need additional storage to store and play games. The HyperX Gaming microSD cards feature read speeds of 100MB/s and write speeds of 80MB/s, meeting or exceeding Nintendo Switch requirements. The new product line is available in 64GB, 128GB and 256GB capacities.

The HyperX Gaming microSD Card is compatible with Nintendo Switch, mobile phones, tablets and other portable gaming devices that have a microSD slot for extended storage.

Deutsche Telekom activates vectoring upgrade in more areas

Deutsche Telekom announced another milestone for its broadband upgrade program as 226,400 households in 151 municipalities can now surf faster on the Internet with download speeds of up to 100 Mbps and upload speeds of up to 40 Mbps.

The cities benefiting include Haltern am See, with 12,000 households, Werdau, with 11,200, Aachen with 10,900, Illingen with 8,300, and Peißenberg with another 6,400 households.

“We’re not just building information superhighways between major metropolises and population centers; our network also extends to rural areas. We are the only company pursuing comprehensive broadband expansion," says Tim Höttges, CEO of Deutsche Telekom. “Some of our build-out projects are designed to serve tens of thousands of households, while others benefit just a handful. For us, every line counts. It doesn’t matter if it’s in Aachen, Chemnitz or Munich or in Aulendorf, Bisingen or Schwindegg.” No other company is investing as much in broadband expansion in rural areas as Deutsche Telekom. The next wave of commissioning is scheduled for September 17.

Switch posts $102 million in revenue

Switch, the Las Vegas-based company that develops and operates the SUPERNAP data centers, reported record quarterly revenue of $102.2 million, compared to $92.1 million for the same quarter in 2017, an increase of 11%. Operating income was $15.8 million, compared to $23.5 million for the same quarter last year, a decrease of 33%. Operating income in the second quarter of 2018 includes the impact of $8.2 million in equity-based compensation expense, compared with $1.3 million in the same quarter of 2017; a significant portion of this expense relates to the continued vesting of Common Unit awards granted in connection with Switch's initial public offering.

Adjusted EBITDA was $50.3 million, compared to $46.8 million for the same quarter in 2017. Adjusted EBITDA margin was 49.2%, compared to 50.8% for the same quarter in 2017, a decrease of 160 basis points.
Capital expenditures were $99.4 million, compared to $112.9 million for the same quarter in 2017, a decrease of 12%.
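The reported growth, margin, and margin-change figures can be cross-checked with simple arithmetic:

```python
# Cross-check Switch's reported Q2 2018 figures ($ millions, from the release).
revenue, revenue_prior = 102.2, 92.1
adj_ebitda, adj_ebitda_prior = 50.3, 46.8

growth = (revenue - revenue_prior) / revenue_prior
margin = adj_ebitda / revenue
margin_prior = adj_ebitda_prior / revenue_prior

print(f"revenue growth:      {growth:.1%}")          # ~11.0%
print(f"EBITDA margin:       {margin:.1%}")          # 49.2%
print(f"prior-year margin:   {margin_prior:.1%}")    # 50.8%
print(f"margin change (bps): {(margin - margin_prior) * 10_000:.0f}")  # ~-160
```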

"The logistics and timing required for customer implementation of our holistic cloud solution impacted our expectations for the year," said Thomas Morton, president of Switch. "We firmly believe in the long-term growth prospects of our business, and that the unique and market-defining solutions available only at the Switch PRIME campus ecosystems will establish our organization as the recognized pillar of enterprise hybrid cloud."

BigBear cites rapid growth in government cloud services

BigBear, a privately held company offering cloud-based big data analytics solutions for government and commercial customers, reports more than 220 percent revenue growth last year and is on track for another 50 percent boost to revenue by the end of this year.

The company has office locations in Charlottesville, Virginia, and San Diego, California, and is opening a new office in Reston, Virginia.

“We’re excited about our new office location in the Washington, D.C. area as well as the progress our work has made serving our nation’s most critical defense missions,” said Frank Porcelli, CEO of BigBear. “By having our senior technology experts and engineers located near our customers, it enables the kind of close collaboration that is required to provide the high-level mission-critical support we deliver. We look forward to continuing to bring the incredible cost savings and productivity-enhancing benefits of our platform and expert team to more customers throughout the defense and intelligence communities.”