Consumers these days are hungrier than ever for data, and smart devices are the forks that feed. To keep pace, service providers have begun adopting higher-bandwidth networks, such as 100G infrastructures, to meet ever-increasing demand. But the advent of this new technology also brings a heightened need to test and measure the resulting infrastructure so that providers can ensure they are delivering content to consumers as expected.
100G Helps Cool Data Explosion
Not far behind, service providers have begun to boost their bandwidth to keep up. To bolster network infrastructures, carriers began volume implementations of 100G optical equipment in 2012. 100G took off much more quickly than 40G because all network players bought into the overarching need for 100G and developed a healthy market with a variety of competing solutions. Many experts believe that 100G is the new 10G: it creates the new baseline for network performance, and its architecture will provide the basis for future technologies. Implementation of 100G started with line cards, much as 10G did in its day, but the technology will quickly become smaller thanks to new developments in components such as modules, driving down costs and power requirements.

To properly enable a 100G-based network ecosystem, the test and measurement building blocks must be set in place. We see novel technology at the physical layer to guarantee that ultra-high-bandwidth signals can cross a circuit board. Because 100G carries a rich mix of traffic types, the equipment used to test these networks must be capable of validating performance with real 100G signals forged from this dynamic mix. Looking forward, as standards solidify and providers move to 400G in several years, test equipment must continue to provide deeper insight into the real issues and root causes at ever-higher bit rates. Even temperature management poses a challenge, as the equipment must work across a wide range of temperatures while offering more insightful applications and more test ports in ever-smaller boxes.
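As a rough illustration of what testing with signals "forged from this dynamic mix" can mean in practice, the sketch below assembles a hypothetical 100G load-test profile from several traffic classes and checks that the shares fill the line rate. The class names, shares and frame sizes are assumptions for illustration only, not a specification from any vendor.

# Hypothetical 100G traffic-mix profile for a load test.
# The classes, frame sizes and shares below are illustrative assumptions only.

LINE_RATE_GBPS = 100

traffic_mix = {
    # class name: (share of line rate, typical frame size in bytes)
    "streaming_video":      (0.45, 1400),
    "web_and_cloud":        (0.30, 512),
    "mobile_backhaul":      (0.15, 256),
    "voice_and_signalling": (0.10, 128),
}

# The shares must fill the 100G pipe exactly.
assert abs(sum(share for share, _ in traffic_mix.values()) - 1.0) < 1e-9

for name, (share, frame_bytes) in traffic_mix.items():
    rate_gbps = share * LINE_RATE_GBPS
    frames_per_sec = rate_gbps * 1e9 / (frame_bytes * 8)
    print(f"{name:22s} {rate_gbps:5.1f} Gb/s  ~{frames_per_sec/1e6:6.2f} M frames/s")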
Expect 400G in the Near Future
Most agree that 400G will be implemented in one of two forms: four carrier wavelengths at 100G each, using dual-polarization QPSK modulation, or two carrier wavelengths at 200G each, using dual-polarization 16QAM modulation. Each has its benefits and drawbacks. Four carriers at 100G provide better performance over long distances but consume more spectrum within the fiber. Two carriers at 200G will have a shorter reach but make more efficient use of spectrum.

Expect to see more solid standards developed around 400G sometime in the next 18 to 36 months, a timeline reminiscent of 100G development.
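The trade-off comes down to bits per symbol: DP-QPSK carries 4 bits per symbol (2 per polarization), while DP-16QAM carries 8, so a 16QAM carrier delivers twice the rate in the same spectral slot at the cost of reach. The short Python sketch below works through that arithmetic; the 32 Gbaud symbol rate and 20 percent FEC overhead are common illustrative assumptions, not figures taken from this article.

# Back-of-the-envelope comparison of the two 400G options.
# Symbol rate and FEC overhead are assumed values for illustration.

SYMBOL_RATE_GBAUD = 32   # assumed per-carrier symbol rate
FEC_OVERHEAD = 0.20      # assumed ~20% forward-error-correction overhead

def carrier_line_rate(bits_per_symbol_per_pol, polarizations=2,
                      symbol_rate=SYMBOL_RATE_GBAUD):
    """Raw line rate of one optical carrier in Gb/s."""
    return bits_per_symbol_per_pol * polarizations * symbol_rate

def payload_rate(line_rate, fec=FEC_OVERHEAD):
    """Client payload left after stripping FEC overhead, in Gb/s."""
    return line_rate / (1 + fec)

# Option 1: four carriers, dual-polarization QPSK (2 bits/symbol per polarization)
qpsk_carrier = carrier_line_rate(bits_per_symbol_per_pol=2)   # ~128 Gb/s raw
option1 = 4 * payload_rate(qpsk_carrier)                       # ~427 Gb/s payload

# Option 2: two carriers, dual-polarization 16QAM (4 bits/symbol per polarization)
qam16_carrier = carrier_line_rate(bits_per_symbol_per_pol=4)  # ~256 Gb/s raw
option2 = 2 * payload_rate(qam16_carrier)                      # ~427 Gb/s payload

print(f"4 x DP-QPSK carriers:  {option1:.0f} Gb/s payload, 4 spectral slots")
print(f"2 x DP-16QAM carriers: {option2:.0f} Gb/s payload, 2 spectral slots")

Both options land in the same 400G class; the 16QAM route simply packs it into half the spectrum, which is why its reach is shorter.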
Self-Aware Networks Take Root
Self-Aware Networks also made significant strides in 2012, and their first full deployments will come in 2013. These are networks that can automatically restore and rebalance bandwidth, optimize performance, and lower overall costs for network providers.

These intelligent networks will require test systems to interact not just with the data plane but also with the network's control plane. They will also require test visibility across the network as a whole, not just an isolated snapshot. One solution currently on the market enables tremendous flexibility and real-time insight into all reaches of the network, ensuring that operators can deliver the service benefits of such networks. With this insight, network managers can see their network as their customers experience it, in real time, leading to a successful network and happy customers.
Carriers have now bought into the concept of Self-Aware Networks and are committed to their implementation. Other optical equipment makers are now working to get involved and create their own solutions for this promising opportunity.
Test and Measurement Gets Faster, Smarter
Bandwidth increases unquestionably have implications for how companies test and measure network effectiveness. However, providers must also remember that it is not only the sheer quantity of bandwidth that matters; the nature of that bandwidth becomes just as critical. More latency-sensitive real-time video, more complex traffic structures that need to interact with the self-aware network, and far higher port densities all affect the test strategy. This is all set against a background not only of increased opex and capex pressure but also of continually rising customer expectations. Test and measurement equipment must continue to offer novel applications that are ‘aware’ of the nature of the traffic and ‘aware’ of the nature of high-performance, dynamic networks.

About the Author
Dr. Paul Brooks is the Product Manager for the JDSU high-speed test portfolio. He covers a wide range of technologies, including 100 GE and OTU3/4, and has been closely involved in developing test procedures for 100G systems and components. He was formerly a principal engineer, leading engineering teams that developed a wide range of products for communications test and measurement. Prior to JDSU, Paul was a weapons officer in the Royal Navy specializing in electronic warfare. He currently lives in southern Germany, where he laments the lack of first-class rugby.
About JDSU
JDSU (NASDAQ: JDSU; TSX: JDU) innovates and collaborates with customers to build and operate the highest-performing and highest-value networks in the world. Our diverse technology portfolio also fights counterfeiting and enables high-powered commercial lasers for a range of applications. Learn more about JDSU at www.jdsu.com.