Saturday, April 16, 2011

Optical Switching in Transport Networks

Backbone transmission infrastructures have gone through several incarnations over the past 20 years: from PDH to SDH/SONET, then WDM, and more recently the Optical Transport Network (OTN). Instead of switching circuits or packets, the core network elements will now be switching wavelengths.

This is a realization of the optical switching technologies that were developed about 10 years ago (micro-mirrors, bubbles, liquid crystals, ...), combined with technologies that vastly increased the bandwidth x distance product of optical transmission, such as amplification, advanced modulation techniques, and dispersion compensation. Around that time, I was employed by Corning Inc. as a research scientist, and our research group was developing network architecture concepts based on these new enabling technologies. While some of the scientists at Corning were leading the field of optical materials and components, our group focused more on practical aspects: how those technologies could best be applied in networks, and how network management and provisioning would change. We did traffic studies and cost modeling, and we were involved in early initiatives on signaling in optical networks which led to the generalized MPLS (GMPLS) concepts.

An important outcome of our research work at that time was that large cost savings could be realized through optical switching technologies, since the bulk of the cost in a network results from opto-electrical conversions and electronic switching. We also concluded that the end-to-end traffic demands in those days were not yet big enough to require switching of entire wavelengths in the network core. As a result, many of those technologies were shelved, and only today are we seeing the demand for capacity that justifies switching of wavelengths.

Equipment suppliers are now filling those needs with devices like all-optical cross-connects (OXCs) and reconfigurable optical add-drop multiplexers (ROADMs). This enables the cost savings that we predicted 10 years ago, but it also means that networks become analog again: there is more to deal with than a single point-to-point optical link budget, since you are no longer connecting a box on one end of the fiber to a similar box on the other end. Every circuit has to be engineered considering all the components it crosses, and switching one circuit on or off on a link will affect other circuits because amplifiers need to be retuned. All this puts a critical emphasis on the software and tools that the equipment suppliers provide with their equipment, and on the functionality they offer carriers to support their operations.
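To make the "analog engineering" point concrete, here is a toy end-to-end power budget for a wavelength crossing several optical elements. This is an illustrative sketch, not from the post; all loss and gain figures are assumed placeholder values, not vendor specifications, and real designs also account for OSNR, dispersion, and nonlinear effects.

```python
# Toy optical power budget: accumulate gains/losses (in dB) along a circuit
# and check the remaining margin at the receiver. All figures are assumed.

LAUNCH_POWER_DBM = 0.0      # assumed transmitter launch power
RX_SENSITIVITY_DBM = -24.0  # assumed receiver sensitivity

# Elements the circuit crosses: (description, gain in dB; losses are negative)
path = [
    ("80 km fiber span @ 0.25 dB/km", -20.0),
    ("EDFA amplifier",                +18.0),
    ("ROADM node (filtering + drop)",  -6.0),
    ("60 km fiber span @ 0.25 dB/km", -15.0),
    ("EDFA amplifier",                +18.0),
    ("OXC insertion loss",             -5.0),
]

power = LAUNCH_POWER_DBM
for element, gain_db in path:
    power += gain_db
    print(f"{element:35s} -> {power:6.1f} dBm")

margin = power - RX_SENSITIVITY_DBM
print(f"Margin at receiver: {margin:.1f} dB")  # prints 14.0 dB here
```

With these made-up numbers the circuit arrives at -10.0 dBm, leaving a 14 dB margin. The design implication in the text follows directly: adding or dropping a wavelength at any node changes the input power seen by downstream amplifiers, so every element in the path matters, not just the two endpoints.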

In my professional life, I've seen telecommunications go through several migrations, evolutions, revolutions, and both successful and failed introductions of new technologies: the introduction of SDH as the successor to PDH and 'plain old telephony' in the early 90s, then WDM to increase capacity and distance, and now optical switching technologies; the convergence of telecommunications and data communications, and the shift from circuit-based voice to packet-based and connectionless data; and the deregulation of telecommunications services, which triggered massive investments in infrastructure, but also overcompetition and overcapacity, leading to bankruptcies, consolidation, and an industry-wide deferral of new builds for several years that put equipment and component suppliers out of business.

Today the telecommunications industry is out of the recession, but everybody, operators and vendors alike, should be cautious not to fall into the same trap as 10 years ago. Network infrastructure builds must follow real demands for capacity, resulting in real revenues. To compete in a deregulated market, operators must choose solutions that provide optimum flexibility and keep operational expenses low. Those are nice buzzwords: any equipment vendor will claim their products meet the carrier's needs. But it takes a good understanding of the technology and its implications to know what best serves those needs.

1 Comment:

Blogger Michael said...

Leo,

One of the issues you fail to mention is that the consolidation has led to oligopoly pricing for bandwidth at the edge. As a result, applications are still stuck in the kilo-mega range, while all technology today points to and supports giga-tera bandwidth. Because of the gap between back-haul/mid-mile costs and many of the access pricing topologies, the spread between retail and economic cost/bit is the widest it has been since 1984. Also, because of the vertical orientation of the industry, the distribution model is inefficient with respect to selling an ocean of capacity at the core one drop at a time at the edge. Until we resolve these issues, there is little need to overinvest in the core.

Michael Elling

11:10 AM  
