
Thursday, August 14, 2014

Intel to Acquire LSI's Axxia Networking Business from Avago

Intel agreed to acquire LSI's Axxia Networking Business and related assets from Avago Technologies for $650 million in cash.

The Axxia Networking Business generated revenues of $113 million in calendar 2013 and employs approximately 650 people.

The sale follows Avago Technologies' acquisition of LSI earlier this year in a deal valued at $6.6 billion.

http://www.avagotech.com/

In November 2013, LSI announced the first commercial delivery of its Axxia 4500 family of communication processors for enterprise and data center networking applications such as evolving application-aware networking.

LSI's Axxia 4500, the company's first ARM technology-based communication processor, features a new "StreamSight" next-generation Deep Packet Inspection (DPI) engine that improves Software-Defined Networking (SDN) controller traffic inspection, network management efficiency, QoS, security and other network functions. The Axxia 4500 processors combine LSI's acceleration engines and up to four ARM Cortex-A15 cores with a CoreLink CCN-504 coherent, QoS-aware interconnect, built in 28nm process technology. The processors also include up to 100 Gbps of L2 switching functionality.

The company said that by combining its Ethernet switches, networking accelerators and "Virtual Pipeline" technology with power-efficient ARM cores, its new Axxia 4500 can address the performance challenges facing next-generation networks. Customers using the Axxia 4500 are able to solve the control plane scalability challenges of SDN and extract the necessary information required to enable Network Functions Virtualization (NFV) implementations without burdening the server processor.

In 2013, LSI introduced its ARM-based family of Axxia 5500 processors designed for multi-radio LTE base stations, cell site routers, gateways and mobile backhaul equipment.

The new processors combine up to 16 ARM Cortex-A15 cores with LSI's specialized networking accelerators to optimize performance and power efficiency. On-chip network accelerators deliver up to 50 Gbps of packet processing, 20 Gbps of security processing and 160 Gbps of Ethernet switching via sixteen 10G Ethernet interfaces. LSI plans to offer several versions with different core counts and throughput capabilities. The chip design leverages ARM's CoreLink CCN-504 low-latency interconnect in 28nm process technology.

LSI said its combination of networking expertise, specialized acceleration engines and Virtual Pipeline technology with ARM’s power-efficient processors and interconnect IP delivers communication processors that are uniquely suited for building intelligent, heterogeneous networks.

Wednesday, August 6, 2014

LSI's 3rd Generation SandForce Flash Controller Hits 1800 MB/s

LSI's third-generation SandForce SF3700 flash controller has demonstrated sequential performance of 1800 MB/s, as well as mixed 80/20 (read/write) workload performance of up to 1300 MB/s with a native PCIe interface. The company said this industry-leading performance is supported by its integrated SHIELD error correction with hard and soft LDPC and DSP technology.

The controller is optimized and architected for bi-directional PCIe traffic.

"Flash storage solutions used for client computing, big data, and hyperscale enterprises are continuing to grow at unprecedented levels, driving the need for more advanced flash controllers to manage these data-intensive environments," said Thad Omura, vice president of marketing, Flash Components Division, LSI. "The SF3700 is the ideal building block for next-generation storage solutions with its full-duplex architecture. Both enterprise and client applications will benefit from our proprietary LDPC engine, which enables customers to significantly extend NAND flash life by dynamically balancing performance and reliability with minimal latency."

http://www.lsi.com/sandforce


  • LSI is now a subsidiary of Avago Technologies.

Tuesday, January 14, 2014

LSI Supplies PCIe Flash for Oracle's Next-Gen Exadata X4 Systems

LSI's Nytro flash accelerator cards have been selected as the PCIe flash acceleration technology for Oracle’s next-generation Database Machine, Oracle Exadata X4. In addition, the companies have collaborated on bringing a new LSI Nytro technology called Dynamic Logical Capacity (DLC) to Exadata customers.

“Oracle Exadata X4 is a fully integrated and optimized database platform combining hardware and software designed from the ground up to work together to deliver maximum performance and value to customers,” said Juan Loiaza, senior vice president for Exadata systems at Oracle. “Flash-based storage is a key element of Exadata, and LSI Nytro flash accelerator cards deliver the performance and reliability that Oracle demands for Exadata. The expanded flash capacity enabled by DLC technology further enhances cost-effectiveness, raising the value we’re able to deliver customers.”

LSI said its Nytro DLC technology can extend the logical flash storage capacity of Exadata beyond the system’s physical flash capacity by leveraging intelligent caching and management software.
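LSI has not published DLC's internals, but the general pattern it describes — a smaller physical flash tier fronting a larger logical address space via intelligent caching — can be sketched with a toy write-back LRU cache. All names and sizes below are hypothetical, for illustration only:

```python
from collections import OrderedDict

class FlashCache:
    """Toy LRU cache: a small 'flash' tier fronting a larger logical space.

    Illustrative only -- not LSI's DLC implementation. The logical
    capacity (the backing store) exceeds the physical cache capacity.
    """

    def __init__(self, physical_blocks):
        self.capacity = physical_blocks
        self.cache = OrderedDict()   # block_id -> data (hot flash tier)
        self.backing = {}            # block_id -> data (full logical space)

    def write(self, block_id, data):
        self.backing[block_id] = data
        self.cache[block_id] = data
        self.cache.move_to_end(block_id)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least-recently-used

    def read(self, block_id):
        if block_id in self.cache:           # hit: served from flash
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        return self.backing.get(block_id)    # miss: served from slower tier

cache = FlashCache(physical_blocks=2)
for i in range(4):       # logical capacity (4 blocks) exceeds physical (2)
    cache.write(i, f"data-{i}")
print(len(cache.cache), len(cache.backing))  # 2 4
```

The point of the sketch is only the capacity relationship: hot blocks live in the limited physical tier while the logical address space presented upward is larger.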

http://www.lsi.com/products/flash-accelerators/pages/default.aspx

Monday, December 16, 2013

Avago to Acquire LSI for $6.6 Billion

Avago Technologies agreed to acquire LSI Corporation for $11.15 per share in an all-cash transaction valued at $6.6 billion.

The acquisition creates a highly diversified semiconductor market leader with approximately $5 billion in annual revenues by adding enterprise storage to Avago's existing wired infrastructure, wireless and industrial businesses.

The combined company will be strongly positioned to capitalize on the growing opportunities created by the rapid increases in data center IP and mobile data traffic.

"This highly complementary and compelling acquisition positions Avago as a leader in the enterprise storage market and expands our offerings and capabilities in wired infrastructure, particularly system-level expertise," stated Hock Tan, President and Chief Executive Officer of Avago. "This combination will increase the Company's scale and diversify our revenue and customer base. In addition to these powerful strategic benefits, as we integrate LSI onto the Avago platform, we expect to drive LSI's operating margins toward Avago's current levels, creating significant additional value for stockholders."

"This transaction provides immediate value to our stockholders, and offers new growth opportunities for our employees to develop a wider range of leading-edge solutions for customers," said Abhi Talwalkar, President and Chief Executive Officer of LSI.  "Our leadership positions in enterprise storage and networking, in combination with Avago, create greater scale to further drive innovations into the datacenter."

Regarding financials, Avago said it expects the deal will be significantly and immediately accretive to its non-GAAP free cash flow and earnings per share. Avago currently anticipates achieving annual cost savings at a run rate of $200 million by the end of the fiscal year ending November 1, 2015, the first full fiscal year after closing.  Avago will fund the acquisition with $1.0 billion of cash from the combined balance sheet, a $4.6 billion term loan from a group of banks, and a $1 billion investment from Silver Lake Partners.

http://investors.avagotech.com/phoenix.zhtml?c=203541&p=irol-newsArticle&ID=1884827
http://www.lsi.com/

Sunday, December 8, 2013

Blueprint Tutorial: SDN and NFV for Optimizing Carrier Networks, Part II

By Raghu Kondapalli, Director of Strategic Planning at LSI

This is the second article in a two-part series. The first article, which discussed the drivers, benefits and trends of unified datacenter-carrier networks, and introduced SDN and NFV technology, is available here.

This article provides some examples of how SDN and NFV can be applied to various segments of a carrier network, and how the functions of a traditional carrier network can be offloaded to a virtualized datacenter to improve end-to-end performance.

Application of SDN and NFV to a Unified Datacenter-Carrier Network

With roots in voice, carrier networks are connection-oriented, while datacenter networks, with roots in data, utilize connectionless protocols. Carriers wanting to fully integrate their datacenters will therefore need a common set of protocols. Possible choices include VxLAN (Virtual Extensible LAN) and NvGRE (Network Virtualization using Generic Routing Encapsulation), which are both extensible and scalable with the ability to support thousands of virtual machines (VMs), as well as tunneling protocols, such as IPSec, which can be used to establish end-to-end virtual private networks (VPNs).
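VXLAN's scalability comes largely from its 24-bit VXLAN Network Identifier (VNI), which supports roughly 16 million virtual segments versus the 4,094 of a traditional VLAN. A minimal sketch of the 8-byte VXLAN header defined in RFC 7348 (toy code, not a production encapsulation path):

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header from RFC 7348.

    Byte 0: flags (0x08 = VNI-valid bit), bytes 1-3 reserved,
    bytes 4-6: 24-bit VNI, byte 7 reserved.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit field")
    return struct.pack("!B3xI", 0x08, vni << 8)

def parse_vni(header):
    """Recover the VNI from a VXLAN header."""
    flags, word = struct.unpack("!B3xI", header)
    assert flags & 0x08, "VNI-valid flag not set"
    return word >> 8

hdr = vxlan_header(5000)
print(len(hdr), parse_vni(hdr))  # 8 5000
```

In a real deployment this header sits between an outer UDP header and the encapsulated Ethernet frame; the sketch shows only the field layout that gives VXLAN its large segment space.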

In addition to these well-known protocol-level techniques, network-level abstraction based on SDN similarly enables the integration of datacenter and carrier networks. Here are two examples.

Offloading of network control functions to a centralized datacenter using SDN

Control plane components, such as discovery and dissemination of network state, can be decoupled and executed in a centralized datacenter using commodity servers. Centralizing the control plane has the advantage of providing an end-to-end network state view, and enables the network operator to allocate hardware resource pools based on different application needs. Centralizing the control plane also enables the network operator to use standard APIs to monitor and manage the network, and to provision the network according to changing conditions, such as the number of active subscribers.
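The practical value of that end-to-end state view is that path and resource decisions can be made against one global graph rather than hop-by-hop. A minimal sketch of a centralized control plane (hypothetical class and method names; real controllers expose comparable functions through standard APIs):

```python
from collections import deque

class ControlPlane:
    """Sketch of a centralized control plane: the controller holds the
    full network graph, so path computation sees end-to-end state.
    Names here are illustrative, not a real controller API."""

    def __init__(self):
        self.links = {}   # node -> set of neighbours

    def link_up(self, a, b):
        """Record a discovered bidirectional link in the global view."""
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def path(self, src, dst):
        """BFS over the global view -- no per-hop discovery needed."""
        seen, queue = {src: None}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                hops = []
                while node is not None:   # walk parents back to src
                    hops.append(node)
                    node = seen[node]
                return hops[::-1]
            for nxt in self.links.get(node, ()):
                if nxt not in seen:
                    seen[nxt] = node
                    queue.append(nxt)
        return None

cp = ControlPlane()
for a, b in [("ran1", "agg"), ("agg", "core"), ("core", "dc")]:
    cp.link_up(a, b)
print(cp.path("ran1", "dc"))  # ['ran1', 'agg', 'core', 'dc']
```

Because every link-state update lands in one place, the same structure can answer provisioning questions (e.g., spare capacity per segment) that a distributed control plane would have to aggregate across elements.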

Offloading of network application software to a virtualized datacenter using SDN

SDN’s centralized control platform for managing hardware resources also supports the virtualization and execution of core applications in one or more datacenters. For a “software-on-demand” capability, for example, a network operator could designate core application software to run on any hardware platform in any datacenter that provides the required processing capacity. Or to provide LTE services in a certain city, the operator might program serving gateway (SGW) or mobility management entity (MME) software to run on a local platform. A major benefit of having network services being fully abstracted is that the operator need not manage the underlying hardware.
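The "software-on-demand" idea reduces to a placement decision: the operator names a function and a capacity requirement, and the platform picks any datacenter that can host it. A toy sketch (hypothetical helper and data shapes, not a real orchestration API):

```python
def place_vnf(vnf_name, required_capacity, datacenters):
    """Pick a datacenter with enough spare capacity to host the function.

    Illustrative only: the operator names a function (e.g. 'SGW'),
    not a specific hardware box. Prefers the emptiest datacenter.
    """
    for dc in sorted(datacenters, key=lambda d: d["spare"], reverse=True):
        if dc["spare"] >= required_capacity:
            dc["spare"] -= required_capacity
            dc.setdefault("running", []).append(vnf_name)
            return dc["name"]
    return None   # no datacenter can host the function right now

dcs = [{"name": "dc-east", "spare": 4},
       {"name": "dc-west", "spare": 10}]
print(place_vnf("SGW", 6, dcs))   # dc-west
print(place_vnf("MME", 3, dcs))   # dc-east
```

The benefit described in the paragraph above — the operator need not manage the underlying hardware — shows up here as the absence of any hardware identifier in the request: only the function name and its capacity need are specified.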

Application of SDN and NFV to Carrier Network Segments

Carrier networks are composed of access networks, transport or backhaul networks, and core networks as shown in Figure 6. Note the use of both connection-oriented (Time Division Multiplexing and Asynchronous Transfer Mode) and connectionless (IP) networks end-to-end across the infrastructure.

Applying virtualization schemes based on SDN and NFV enables the entire carrier network to run on a common, commodity and multi-purpose hardware resource pool. This reduces network cost and complexity significantly, which also simplifies network management. By leveraging centralized control and virtualized hardware platforms, core applications share a common hardware pool that improves both scalability and resource utilization. The use of virtualized resource pools can also enable new services and upgrades to be implemented in many cases without costly hardware upgrades.

Figure 7 shows a conceptual view of the SDN-based carrier network. Note how the hardware platform is decoupled from the software platform, and how this enables different cellular technologies to run as virtualized network elements concurrently and independently of any specific hardware platform.

Application of SDN and NFV to Core Networks

Mobile core networks consist of network elements that reside between connection-oriented radio access networks (RANs) and connectionless backbone networks, including the Internet, that employ packet switching. Core networks now also need to support a growing variety of cellular technologies, including 3G, LTE and 4G—all concurrently. The underlying core network functions, such as packet forwarding, as well as control tasks, such as mobility management, session handling and security, are implemented today using dedicated network elements.

Consider, for example, an SGW that forwards packets and an MME that is responsible for activation or authentication in an LTE network. Because these functions are typically executed on common and proprietary hardware platforms, they are visible to one another, resulting in management, resource sharing and security problems. Abstraction with SDN enables the use of commodity hardware, while also mitigating the management, resource sharing and security issues.

Another example is shown in Figure 8. In this example, dedicated application software, which implements network functions for each dedicated core network element like MME or Gateway and Serving GPRS Support Nodes (GGSN and SGSN), can be virtualized and centralized with SDN to run in a private cloud, on virtualized commercial server platforms, or on multi-vendor, non-proprietary hardware.

Application of SDN and NFV to Carrier Access Networks

Subscribers interface with the carrier network via basestation nodes in the RAN, as shown in Figure 6. Owing to the explosion in mobile device adoption and mobile data usage worldwide, the RAN must now be optimized to address these challenges:

  • rapid increase in the number of more closely-spaced base stations needed to cover a given area with LTE eNodeB deployments
  • relatively low basestation utilization with relatively high power consumption
  • similarly low utilization of RF bandwidth resulting from RF interference and limited network capacity in a multi-standard environment

Virtualization of resources in a basestation based on SDN and NFV holds tremendous promise for confronting these and other challenges. As shown in Figure 9, the virtualized real-time operating system and the multiple, multi-standard basestation instances run on top of resource pools, which have been allocated from the physical processing resources. The virtualized operating system dynamically allocates the processing resources based on each virtualized basestation’s changing requirements. Virtualization also enables different basestation instances using different standards and different application software to be provisioned dynamically through resource reconfigurations performed exclusively in software.

Under an SDN architecture, basestation pools with high-bandwidth, low-latency interconnects can be centralized to form virtualized “basestation clouds.” A centralized control plane, which has a global view of all physical processing resources throughout the cloud, enables network operators to program basestation processing tasks for different standards. For example, operators can deploy 3G or 4G RANs by programming different virtual basestations, and then adjust the capacity of each, all through software reconfigurations.

Figure 10 shows an implementation of such a “Cloud-based RAN” (C-RAN) architecture that has been proposed by China Mobile (CMCC). The wireless remote radios connect to a cloud-based, virtualized basestation cluster, which can be implemented using SDN running on heterogeneous hardware processors.

Application of SDN and NFV to Carrier Transport Networks

The transport network in a wireless infrastructure serves as the backhaul network connecting the basestations in the access network to the core network. Transport networks can utilize many different technologies, including SONET, TDM, carrier Ethernet and IP, each of which exhibits different operating characteristics. For example, TDM has a simple operational model characterized by static routes and traffic flows across the network’s centralized control. By contrast, the IP network operational model routes traffic packet-by-packet across the network under distributed network control.

SDN is able to combine these different networking technologies in a way that leverages their respective strengths; in the example above, the simplicity of static network routing is combined with the flexibility and economic advantages of IP. This is possible because SDN decouples the network control and traffic-forwarding functions, thereby eliminating any interdependencies. For this reason, a distributed transport network element is able to support both static routes and dynamic traffic flows.
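One way to picture a transport element supporting both models is a two-stage lookup: controller-installed dynamic flows are matched first, and static TDM-style routes act as the fallback. A toy sketch (hypothetical structure, not any vendor's pipeline):

```python
import ipaddress

class HybridForwarder:
    """Sketch of a transport element under SDN control: exact-match
    dynamic flows (installed by the controller) are checked first,
    then static routes by longest prefix. Illustrative only."""

    def __init__(self):
        self.dynamic = {}    # (src, dst) -> output port
        self.static = []     # (network, output port)

    def add_static(self, prefix, port):
        self.static.append((ipaddress.ip_network(prefix), port))
        # keep most-specific prefixes first for longest-prefix match
        self.static.sort(key=lambda e: e[0].prefixlen, reverse=True)

    def add_flow(self, src, dst, port):
        self.dynamic[(src, dst)] = port

    def forward(self, src, dst):
        if (src, dst) in self.dynamic:        # dynamic flow wins
            return self.dynamic[(src, dst)]
        addr = ipaddress.ip_address(dst)
        for net, port in self.static:          # static fallback
            if addr in net:
                return port
        return None

fwd = HybridForwarder()
fwd.add_static("10.0.0.0/8", 1)
fwd.add_static("10.1.0.0/16", 2)
fwd.add_flow("10.9.9.9", "10.1.2.3", 7)
print(fwd.forward("10.9.9.9", "10.1.2.3"))  # 7 (dynamic flow)
print(fwd.forward("10.8.8.8", "10.1.2.3"))  # 2 (longest static prefix)
```

Because the two tables are independent, the controller can install or remove dynamic flows without disturbing the static routing baseline — the decoupling the paragraph above attributes to SDN.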

Summary

The telecommunications industry today, fueled by exploding growth in mobile subscribers and data usage, is undergoing an unprecedented transformation. As a result, service providers are under enormous pressure to deploy new value-added services while lowering costs to remain competitive. To achieve these objectives, carriers are integrating datacenters into their networks to create a more versatile and affordable unified datacenter-carrier network model.

Service providers also need to increase average revenue per user (ARPU) while reducing capital and operational expenditures through hardware consolidation, network resource optimization and ease-of-service deployment. Virtualization is a proven technology that has been adopted universally in datacenters to enhance resource utilization and scalability. By extending virtualization principles to the carrier infrastructure, service providers can optimize the unified datacenter-carrier network end-to-end and top-to-bottom.

Software-defined Networking and Network Function Virtualization enable this versatility in all three segments of a carrier network. SDN enables network functions and applications to leverage virtualized datacenter resources, while a combination of SDN and NFV enables carriers to deploy and scale innovative services more cost-effectively than ever before.

 Raghu Kondapalli is director of technology focused on Strategic Planning and Solution Architecture for the Networking Solutions Group of LSI Corporation.

Kondapalli brings rich experience and deep knowledge of the cloud-based, service provider and enterprise networking business, specifically in packet processing, switching and SoC architectures.

Most recently he was a founder and CTO of cloud-based video services company Cloud Grapes Inc., where he was the chief architect for the cloud-based video-as-a-service solution.  Prior to Cloud Grapes, Kondapalli led technology and architecture teams at AppliedMicro, Marvell, Nokia and Nortel. Kondapalli has about 25 patent applications in process and has been a thought leader behind many technologies at the companies where he has worked.

Kondapalli received a bachelor’s degree in Electronics and Telecommunications from Osmania University in India and a master’s degree in Electrical Engineering from San Jose State University.

Wednesday, November 20, 2013

LSI Delivers its ARM-based Axxia 4500 Processor for SDN

LSI announced the first commercial delivery of its Axxia 4500 family of communication processors for enterprise and data center networking applications such as evolving application-aware networking.

LSI's Axxia 4500, the company's first ARM technology-based communication processor, features a new "StreamSight" next-generation Deep Packet Inspection (DPI) engine that improves Software-Defined Networking (SDN) controller traffic inspection, network management efficiency, QoS, security and other network functions. The Axxia 4500 processors combine LSI's acceleration engines and up to four ARM Cortex-A15 cores with a CoreLink CCN-504 coherent, QoS-aware interconnect, built in 28nm process technology. The processors also include up to 100 Gbps of L2 switching functionality.

The company said that by combining its Ethernet switches, networking accelerators and "Virtual Pipeline" technology with power-efficient ARM cores, its new Axxia 4500 can address the performance challenges facing next-generation networks. Customers using the Axxia 4500 are able to solve the control plane scalability challenges of SDN and extract the necessary information required to enable Network Functions Virtualization (NFV) implementations without burdening the server processor.

“With so much content and so many applications residing in the cloud, the need to manage network traffic both more efficiently and securely is increasingly important,” said Jim Anderson, senior vice president and general manager, Networking Solutions Group, LSI. “The Axxia 4500 communication processor family, with its combination of intelligence and acceleration, delivers the high performance necessary for efficient SDN implementations.”

http://www.lsi.com/company/newsroom/Pages/20131120bpr.aspx

Thursday, October 17, 2013

Blueprint Tutorial: SDN and NFV for Optimizing Carrier Networks

By Raghu Kondapalli, Director of Strategic Planning at LSI

The ongoing convergence of video and cloud-based applications, along with the exploding adoption of mobile devices and services, is having a profound impact on carrier networks. Carriers are under tremendous pressure to deploy new, value-added services to grow subscriber numbers and increase revenue per user, while simultaneously lowering capital and operational expenditures.

To help meet these challenges, some carriers are creating new services by more tightly integrating the traditionally separate data center and carrier networks. By extending the virtualization technologies that are already well-established in data centers into the telecom network domain, overall network utilization and operational efficiency can be improved end-to-end, resulting in a substantially more versatile and cost-effective infrastructure.

This two-part article series explores the application of two virtualization techniques—software-defined networking (SDN) and network function virtualization (NFV)—to the emerging unified datacenter-carrier network infrastructure.

Drivers for virtualization of carrier networks in a unified datacenter-carrier network

In recent years, user expectations for “anywhere, anytime” access to business and entertainment applications and services are changing the service model needed by carrier network operators. For example, e-commerce applications are now adopting cloud technologies, as service providers continue incorporating new business applications into their service models. For entertainment, video streaming content now includes not only traditional movies and shows, but also user-created content and Internet video. The video delivery mechanism is evolving, as well, to include streaming onto a variety of fixed and mobile platforms. Feature-rich mobile devices now serve as e-commerce and entertainment platforms in addition to their traditional role as communication devices, fueling deployment of new applications, such as mobile TV, online gaming, Web 2.0 and personalized video.

Figures 1 and 2 show some pertinent trends affecting carrier networks. Worldwide services revenue is expected to reach $2.1 trillion in 2017, according to an Insight research report, while the global number of mobile subscribers is expected to reach 2.6 billion by 2016, according to Infonetics Research.



To remain profitable, carriers need to offer value-added services that increase the average revenue per user (ARPU), and to create these new services cost-effectively, they need to leverage the existing datacenter and network infrastructures. This is why the datacenters running these new services are becoming as critical as the networks delivering them when it comes to providing profitable services to subscribers.

Datacenter and carrier networks are quite different in their architectures and operational models, which can make unifying them potentially complex and costly. According to The Yankee Group, about 30 percent of the total operating expenditures (OpEx) of a service provider are due to network costs, as shown in Figure 3. To reduce OpEx and, over time, capital expenditures (CapEx), service providers are being pushed to find solutions that enable them to leverage a more unified datacenter-carrier network model as a means to optimize their network and improve overall resource utilization.

Virtualization of the network infrastructure is one strategy for achieving this cost-effectively. Virtualization is a proven technique that has been widely adopted in enterprise IT based on its ability to improve utilization and operational efficiency of datacenter server, storage and network resources. By extending the virtualization principles into the various segments of a carrier network, a unified datacenter-carrier network can be fully virtualized—end-to-end and top-to-bottom—making it far more scalable, adaptable and affordable than ever before.

Benefits of integrating datacenters into a carrier network

Leveraging the virtualized datacenter model to virtualize the carrier network has several benefits that can help address the challenges associated with a growing subscriber base and more demanding performance expectations, while simultaneously reducing CapEx and OpEx. The approach also enables carriers to seamlessly integrate new services for businesses and consumers, such as Software-as-a-Service (SaaS) or video acceleration. Google, Facebook and Amazon, for example, now use integrated datacenter models to store and analyze Big Data. Integration makes it possible to leverage datacenter virtualization architectures, such as multi-tenant compute or content delivery networks, to scale or deploy new services without requiring expensive hardware upgrades. Incorporating the datacenter model can also enable a carrier to centralize its business support system (BSS) and operation support system (OSS) stacks, thereby doing away with distributed, heterogeneous network elements and consolidating them to centralized servers. And by using commodity servers instead of proprietary network elements, carriers are able to further reduce both CapEx and OpEx.

Integrated datacenter-carrier virtualization technology trends

The benefits of virtualization derive from its ability to create a layer of abstraction above the physical resources. For example, hypervisor software creates and manages multiple virtual machines (VMs) on a single physical server to improve overall utilization.

While the telecom industry has lagged behind the IT industry in virtualizing resources, most service providers are now aggressively working to adapt virtualization principles in their carrier networks. Network function virtualization (NFV), for example, is being developed by a collaboration of service providers as a standard means to decouple and virtualize carrier network functions from traditional network elements, and then distribute these functions across the network more cost-effectively. By enabling network functions to be consolidated onto VMs running on a homogenous hardware platform, NFV holds the potential to minimize both CapEx and OpEx in carrier networks.

Another trend in virtualized datacenters is the abstraction being made possible with software-defined networking, which is enabling datacenter networks to become more manageable and more open to innovation. SDN shifts the network paradigm by decoupling or abstracting the physical topology to present a logical or virtual view of the network. SDN technology is particularly applicable to carrier networks, which usually consist of disparate network segments based on heterogeneous hardware platforms.

Technical overview of network virtualization

Here is a brief overview of the two technologies currently being used in unified datacenter-carrier network infrastructures: SDN and NFV.

Software-Defined Networking

SDN is a network virtualization technique based on the logical separation and abstraction of both the control and data plane functions, as shown in Figure 4. Using SDN, the network elements, such as switches, routers, etc., can be implemented in software, virtualized as shown, and executed anywhere in a network, including in the cloud.


SDN decouples the network functions from the underlying physical resources using OpenFlow®, the vendor-agnostic standard interface being developed by the Open Networking Foundation (ONF). With SDN, a network administrator can deploy a new network application by writing a program that simply manipulates the logical map for a “slice” of the network.
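A logical "slice" of this kind boils down to an ordered match-action table that an application writes to and a controller compiles into OpenFlow flow-mods. The sketch below is a toy model of that programming surface — the class and field names are illustrative, not the OpenFlow wire protocol:

```python
class NetworkSlice:
    """Toy logical slice: an application installs match->action rules
    against a logical map; a real controller would translate these
    into OpenFlow flow-mod messages. Names are hypothetical."""

    def __init__(self, name):
        self.name = name
        self.rules = []   # (priority, match dict, action)

    def install(self, match, action, priority=0):
        self.rules.append((priority, match, action))
        self.rules.sort(key=lambda r: -r[0])   # highest priority first

    def apply(self, packet):
        """Return the action of the first rule the packet matches."""
        for _, match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"   # table-miss default

video = NetworkSlice("video-slice")
video.install({"tcp_dst": 554}, "queue:high", priority=10)
video.install({}, "forward:default")          # catch-all rule
print(video.apply({"tcp_dst": 554}))          # queue:high
print(video.apply({"tcp_dst": 80}))           # forward:default
```

The "simply manipulates the logical map" claim in the paragraph above corresponds to the two `install` calls: the application never touches switch hardware, only the slice's rule table.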

Because most carrier networks are implemented today with a mix of different platforms and protocols, SDN offers some substantial advantages in a unified datacenter-carrier network. It opens up the network for incorporating innovation. It makes it easier for network administrators to manage and control the network infrastructure. It reduces CapEx by facilitating the use of commodity servers and services, potentially by mixing and matching platforms from different vendors. In the datacenter, for example, network functions could be decoupled from the network elements, like line and control cards, and moved onto commodity servers. Compared to expensive proprietary networking solutions, commodity servers provide a far more affordable yet fully mature platform based on proven virtualization technologies, and industry-standard processors and software.

To ensure robust security—always important in a carrier network—the OpenFlow architecture requires authentication when establishing connections between end-stations, and operators can leverage this capability to augment existing security functions or add new ones. This is especially beneficial in carrier networks where there is a need to support a variety of secure and non-secure applications, and third-party and user-defined APIs.

Network Function Virtualization

NFV is an initiative being driven by network operators with a goal to reduce end-to-end network expenditures by applying virtualization techniques to telecom infrastructures. Like SDN, NFV decouples network functions from traditional network elements, like switches, routers and appliances, enabling these task-based functions to then be centralized or distributed on other (less expensive) network elements. With NFV, the various network functions are normally consolidated onto commodity servers, switches and storage systems to lower costs. Figure 5 illustrates a virtualized carrier network in which network functions, such as a mobility management entity (MME), are run on VMs on a common hardware platform and an open source hypervisor, such as KVM.
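The consolidation step — packing many virtualized network functions onto a few commodity servers — can be pictured as a simple bin-packing problem. A first-fit sketch, with hypothetical function names and "vCPU" sizing units (not any real orchestrator's algorithm):

```python
def consolidate(vnfs, server_capacity):
    """First-fit packing of virtualized network functions (VNFs) onto
    identical commodity servers. Illustrative sketch only; sizes are
    in hypothetical vCPU units."""
    servers = []   # each server: {"free": remaining vCPUs, "vnfs": [...]}
    for name, vcpus in vnfs:
        for srv in servers:
            if srv["free"] >= vcpus:       # fits on an existing server
                srv["free"] -= vcpus
                srv["vnfs"].append(name)
                break
        else:                               # no fit: start a new server
            servers.append({"free": server_capacity - vcpus, "vnfs": [name]})
    return servers

layout = consolidate([("MME", 8), ("SGW", 12), ("GGSN", 6), ("SGSN", 4)],
                     server_capacity=16)
print(len(layout))      # 2 commodity servers replace 4 dedicated boxes
for srv in layout:
    print(srv["vnfs"])
```

The CapEx argument in the paragraph above is visible in the output: four functions that would each have occupied a dedicated network element share two servers.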

NFV and SDN are complementary technologies that can be applied independently of each other. Or NFV can provide a foundation for SDN. By using an NFV foundation combined with SDN’s separation of the control and data planes, carrier network performance can be enhanced, its management can be simplified, and new services can be more easily deployed. 


***********************************

 Raghu Kondapalli is director of technology focused on Strategic Planning and Solution Architecture for the Networking Solutions Group of LSI Corporation.

Kondapalli brings rich experience and deep knowledge of the cloud-based, service provider and enterprise networking business, specifically in packet processing, switching and SoC architectures.

Most recently he was a founder and CTO of cloud-based video services company Cloud Grapes Inc., where he was the chief architect for the cloud-based video-as-a-service solution.  Prior to Cloud Grapes, Kondapalli led technology and architecture teams at AppliedMicro, Marvell, Nokia and Nortel. Kondapalli has about 25 patent applications in process and has been a thought leader behind many technologies at the companies where he has worked.

Kondapalli received a bachelor’s degree in Electronics and Telecommunications from Osmania University in India and a master’s degree in Electrical Engineering from San Jose State University.

Wednesday, March 13, 2013

Networks to Get Smarter and Faster in 2013 and Beyond

By Greg Huff, Chief Technology Officer at LSI 

Architects and managers of networks of all types – enterprise, storage and mobile – are struggling under the formidable pressure of massive data growth. To accelerate performance amid this data deluge, they have two options: the traditional brute force approach of deploying systems beefed up with more general-purpose processors, or turning to systems with intelligent silicon powered by purpose-built hardware accelerators integrated with multi-core processors.

Adding more and faster general-purpose processors to routers, switches and other networking equipment can improve performance but adds to system costs and power demands while doing little to address latency, a major cause of performance problems in networks. By contrast, smart silicon minimizes or eliminates performance choke points by reducing latency for specific processing tasks. In 2013 and beyond, design engineers will increasingly deploy smart silicon to achieve the benefits of its order of magnitude higher performance and greater efficiencies in cost and power.

Enterprise Networks

In the past, Moore’s Law was sufficient to keep pace with increasing computing and networking workloads. Hardware and software largely advanced in lockstep: as processor performance increased, more sophisticated features could be added in software. These parallel improvements made it possible to create more abstracted software, enabling much higher functionality to be built more quickly and with less programming effort. Today, however, these layers of abstraction are making it difficult to perform more complex tasks with adequate performance.

General-purpose processors, regardless of their core count and clock rate, are too slow for functions such as classification, cryptographic security and traffic management that must operate deep inside each and every packet. What’s more, these specialized functions must often be performed sequentially, restricting the opportunity to process them in parallel in multiple cores. By contrast, these and other specialized types of processing are ideal applications for smart silicon, and it is increasingly common to have multiple intelligent acceleration engines integrated with multiple cores in specialized System-on-Chip (SoC) communications processors.
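To make the classification workload concrete, the snippet below models a first-match lookup over an ordered rule table, the kind of per-packet operation the article argues is better offloaded to a dedicated engine than run on a general-purpose core. The rule fields, actions and addresses are invented for illustration and do not describe any particular vendor's classifier.

```python
# Illustrative software model of per-packet flow classification.
# A hardware acceleration engine would do this same first-match lookup
# at line rate; rules and actions here are invented for the example.

import ipaddress

RULES = [
    # (src_prefix, dst_ip, proto, dst_port) -> action; None is a wildcard
    (("10.0.0.0/8", None, "tcp", 22), "drop"),
    ((None, None, "tcp", 443), "priority_queue"),
    ((None, None, None, None), "default_queue"),
]

def matches(rule, pkt):
    src, dst, proto, port = rule
    if src is not None and ipaddress.ip_address(pkt["src"]) not in ipaddress.ip_network(src):
        return False
    if dst is not None and pkt["dst"] != dst:
        return False
    if proto is not None and pkt["proto"] != proto:
        return False
    if port is not None and pkt["dst_port"] != port:
        return False
    return True

def classify(pkt):
    """First-match lookup over the ordered rule table."""
    for rule, action in RULES:
        if matches(rule, pkt):
            return action
    return "default_queue"

print(classify({"src": "10.1.2.3", "dst": "192.0.2.1", "proto": "tcp", "dst_port": 22}))
```

Note that every field comparison must complete before the next rule can be tried, which is exactly the sequential dependency the article cites as the reason these functions parallelize poorly across cores.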

The number of function-specific acceleration engines available continues to grow, and shrinking geometries now make it possible to integrate more engines onto a single SoC. It is even possible to integrate a system vendor’s unique intellectual property as a custom acceleration engine within an SoC. Taken together, these advances make it possible to replace multiple SoCs with a single SoC to enable faster, smaller, more power-efficient networking architectures.

Storage Networks

The biggest bottleneck in data centers today is caused by the five orders of magnitude difference in I/O latency between main memory in servers (100 nanoseconds) and traditional hard disk drives (10 milliseconds). Latency to external storage area networks (SANs) and network-attached storage (NAS) is even higher because of the intervening network and performance restrictions resulting when a single resource services multiple, simultaneous requests sequentially in deep queues. 

Caching content to memory in a server or in a SAN on a Dynamic RAM (DRAM) cache appliance is a proven technique for reducing latency and thereby improving application-level performance. But today, because the amount of memory possible in a server or cache appliance (measured in gigabytes) is only a small fraction of the capacity of even a single disk drive (measured in terabytes), the performance gains achievable from traditional caching are insufficient to deal with the data deluge.
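A back-of-the-envelope calculation shows why even a modest cache hit rate pays off: average access time is the hit-rate-weighted blend of cache latency and disk latency. The DRAM and disk figures below are the ones cited above; the flash latency and the 90 percent hit rate are assumed values for illustration only.

```python
# Why caching pays off: effective latency is a weighted average of
# cache latency (on a hit) and disk latency (on a miss).

DRAM_NS  = 100            # main memory latency cited in the article
HDD_NS   = 10_000_000     # 10 ms hard disk latency cited in the article
FLASH_NS = 100_000        # ~100 us NAND flash read, an assumed figure

def effective_latency_ns(hit_rate, cache_ns, miss_ns):
    """Hit-rate-weighted average access latency in nanoseconds."""
    return hit_rate * cache_ns + (1.0 - hit_rate) * miss_ns

# With no cache, every access pays the full disk penalty (10 ms).
print(effective_latency_ns(0.0, FLASH_NS, HDD_NS))
# An assumed 90% flash hit rate cuts the average to roughly 1.09 ms,
# nearly a 10x improvement.
print(effective_latency_ns(0.9, FLASH_NS, HDD_NS))
```

The same arithmetic also shows why cache capacity matters: a gigabyte-scale DRAM cache in front of terabyte-scale drives cannot sustain a high hit rate on large working sets, which is the scalability barrier flash-based caching addresses.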

Advances in NAND flash memory and flash storage processors, combined with more intelligent caching algorithms, break through the traditional caching scalability barrier to make caching an effective, powerful and cost-efficient way to accelerate application performance going forward. Solid state storage is ideal for caching as it offers far lower latency than hard disk drives with comparable capacity. Besides delivering higher application performance, caching enables virtualized servers to perform more work, cost-effectively, with the same number of software licenses.

Solid state storage typically produces the highest performance gains when the flash cache is placed directly in the server on the PCIe® bus. Intelligent caching software is used to place hot, or most frequently accessed, data in low-latency flash storage. The hot data is accessible quickly and deterministically under any workload since there is no external connection, no intervening network to a SAN or NAS and no possibility of associated traffic congestion and delay. Exciting to those charged with managing or analyzing massive data inflows, some flash cache acceleration cards now support multiple terabytes of solid state storage, enabling the storage of entire databases or other datasets as hot data.
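The "hot data" placement described above is typically driven by a recency-based eviction policy. The toy model below keeps the most recently used blocks in a small fast tier and evicts the least recently used one when capacity is exceeded; the capacity, block names and helper function are invented for the example and do not represent any vendor's actual caching software.

```python
# Toy model of a "hot data" cache: recently used blocks stay in the
# fast tier, the least recently used block is evicted when full.
# Capacity and block names are invented for illustration.

from collections import OrderedDict

class HotDataCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # iteration order tracks recency

    def access(self, block_id, read_from_disk):
        """Return the block, promoting it to most recently used."""
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)      # cache hit
            return self.blocks[block_id]
        data = read_from_disk(block_id)            # cache miss: slow path
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)        # evict coldest block
        return data

cache = HotDataCache(capacity=2)
disk_reads = []
fetch = lambda b: disk_reads.append(b) or f"data:{b}"
cache.access("A", fetch)
cache.access("B", fetch)
cache.access("A", fetch)   # hit: no disk read
cache.access("C", fetch)   # evicts B, the least recently used block
print(disk_reads)          # ['A', 'B', 'C'] -- only misses touch the disk
```

Production caching software layers considerably more on top of this (write handling, block-level statistics, admission policies), but the hit/miss/evict cycle is the core mechanism that keeps frequently accessed data in low-latency flash.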

Mobile Networks

Traffic volume in mobile networks is doubling every year, driven mostly by the explosion of video applications. Per-user access bandwidth is also increasing rapidly as we move from 3G to LTE and LTE-Advanced.  This will in turn lead to the advent of even more graphics-intensive, bandwidth-hungry applications.

Base stations must rapidly evolve to manage rising network loads. In the infrastructure, multiple radios are now being used in cloud-like distributed antenna systems, and network topologies are flattening. Operators are planning to deliver advanced quality of service with location-based services and application-aware billing. As in the enterprise, handling these complex, real-time tasks is increasingly feasible only with acceleration engines built into smart silicon.


To deliver higher 4G data speeds reliably to a growing number of mobile devices, access networks need more, and smaller, cells, which drives the deployment of SoCs in base stations. Reducing component count with SoCs has another important advantage: lower power consumption. From the edge to the core, power consumption is now a critical factor in all network infrastructures.

The use of SoC ICs with multiple cores and multiple acceleration engines will be essential in 3G and 4G mobile networks.

Enterprise networks, datacenter storage architectures and mobile network infrastructures are in the midst of rapid, complex change. The best and possibly only way to efficiently and cost-effectively address these changes and harness the opportunities of the data deluge is by adopting smart silicon solutions that are emerging in many forms to meet the challenges of next-generation networks.

About the Author 

Greg Huff is Chief Technology Officer at LSI. In this capacity, he is responsible for shaping the future growth strategy of LSI products within the storage and networking markets.  Huff joined the company in May 2011 from HP, where he was vice president and chief technology officer of the company’s Industry Standard Server business. In that position, he was responsible for the technical strategy of HP’s ProLiant servers, BladeSystem family products and its infrastructure software business. Prior to that, he served as research and development director for the HP Superdome product family.  Huff earned a bachelor's degree in Electrical Engineering from Texas A&M University and an MBA from the Cox School of Business at Southern Methodist University.  

Wednesday, March 6, 2013

LSI and NetApp Collaborate on Server-based Flash Acceleration


LSI's Nytro WarpDrive family of application acceleration PCIe flash cards has been validated for use with NetApp Flash Accel software. The combined server flash caching solution can be used to speed application performance by converting server-based flash into "hot" data cache for critical business applications.

Specifically, the LSI and NetApp solution delivers automated and intelligent caching of hot data to PCIe flash storage. The LSI Nytro WarpDrive cards deployed in conjunction with Flash Accel software intelligently place the most frequently accessed or "hot" data on ultra-low latency, high-performance PCIe flash storage. The companies said test results have shown a reduction in application and server latency by up to 90 percent while increasing throughput by up to 80 percent. By allowing infrequently accessed data to remain on HDD storage, organizations can deploy an economical mix of flash and hard-disk storage, optimizing both cost per IOPS and cost per gigabyte of storage capacity.

"Flash memory adoption in the enterprise is a powerful complement to hard-disk-based network storage," said Tim Russell, vice president, Data Lifecycle Ecosystem Group, NetApp. "Deploying flash as a high-speed cache in the server is a simple and cost-effective way to significantly reduce latency and I/O bottlenecks, while providing enterprise-level data protection and manageability for the entire infrastructure. Working with our server cache partners, we're able to offer customers a complete end-to-end, high-speed solution."

http://www.lsi.com/acceleration
http://www.netapp.com

Tuesday, February 19, 2013

LSI Introduces ARM-based Axxia 5500 Processors for 4G Base Stations

LSI introduced its ARM-based family of Axxia 5500 processors designed for multi-radio LTE base stations, cell site routers, gateways and mobile backhaul equipment.

The new processors combine up to 16 ARM Cortex-A15 cores with LSI specialized networking accelerators to optimize performance and power efficiency.  On-chip network accelerators deliver up to 50 Gbps packet processing, 20 Gbps security processing and 160 Gbps of Ethernet switching via 16 10G Ethernet interfaces. LSI plans to offer several versions with different core counts and throughput capabilities.  The chip design leverages ARM’s CoreLink CCN-504 low-latency interconnect in 28nm process technology.

LSI said its combination of networking expertise, specialized acceleration engines and Virtual Pipeline technology with ARM’s power-efficient processors and interconnect IP delivers communication processors that are uniquely suited for building intelligent, heterogeneous networks.

“The combination of ARM’s leading coherent interconnect, energy-efficient multicore processor technology and advanced physical IP in the LSI Axxia 5500 series delivers significant gains in performance density. This will enable highly efficient systems with new levels of software flexibility and scalability for heterogeneous networks,” said Ian Ferguson, vice president of Segment Marketing, ARM. “LSI and ARM enjoy a strong relationship, and we are working to meet the needs of customers who are focusing on building next-generation network infrastructure.”

http://www.lsi.com

Wednesday, January 23, 2013

LSI's Revenues Hit $600 Million, Up 15% YoY

LSI reported Q4 2012 revenues from continuing operations of $600 million, compared to $523 million generated from continuing operations in the fourth quarter of 2011, and compared to $624 million generated from continuing operations in the third quarter of 2012. GAAP income from continuing operations for Q4 was $29 million or $0.05 per diluted share, compared to fourth quarter 2011 GAAP income from continuing operations of $11 million or $0.02 per diluted share.


"2012 was a year of exciting progress for LSI as we delivered 23% revenue growth, strong expansion in operating margin and earnings per share from continuing operations, and record design wins. We introduced several important new products, and customers are increasingly looking to new LSI solutions for mega datacenters, mobile networks and flash," said Abhi Talwalkar, LSI's president and CEO. "LSI's intelligent silicon offers proven solutions as businesses turn to the cloud and look for new ways to accelerate their ability to quickly analyze, store, share and protect data. While there is uncertainty in the macro environment and softness in some end markets, we are centered in dynamic new growth cycles that are expected to drive long-term growth in our flash, server and networking businesses."

http://www.lsi.com

Wednesday, November 14, 2012

LSI Debuts Syncro Storage Architecture

LSI introduced Syncro, a new storage architecture that enhances direct-attached storage (DAS) with advanced sharing capabilities in multi-server data center applications.

The first member of the Syncro product family, the Syncro MX-B Boot Appliance, is a standalone, pre-configured, 1U form factor rack boot device for as many as 48 servers.  It eliminates boot drives, helping to reduce the overall system and maintenance costs for the largest cloud and mega-datacenter environments.

“The Syncro architecture is the result of customer requests to increase data protection across multiple server and storage systems by enabling levels of enterprise-class availability and flexibility never before offered for DAS,” said Bill Wuertz, senior vice president and general manager, RAID Storage Division, LSI. “The new Syncro architecture and product family demonstrate LSI’s long, proven commitment to delivering the most innovative and productive solutions for next-generation server and storage systems.”

http://www.lsi.com