Microsoft’s Network: Hitches with Quasi-Carrier Mindset

Microsoft has been caught in an undesirable middle ground with its Data Center (DC) infrastructure: on one side, striving to innovate with the lowest-cost, state-of-the-art optical solutions as a huge enterprise player, and on the other, reconciling that goal with a historical propensity to use standard service provider equipment, which often has proprietary aspects that make it a more expensive proposition. The proclivity toward the latter appears to have fostered a carrier mentality, a "pay-as-you-go" posture of constantly responding to bandwidth growth spurts throughout its global network. There are apparently no signs of an effort to fully redesign its intra- and inter-data-center build-out to get out in front of the rising needs for data transmission. While such a redesign would be quite costly, so is the firm's evident strategy of continuing to throw more human resources at the vital matter of forecast capacity planning, an approach that may be ineffective almost by definition without major changes to the infrastructure, and even potentially counterproductive in creating bureaucratic inertia.

Compared with other hyperscale DC operators, such as Google, Facebook, and Amazon, Microsoft arrived at its current situation through both its history and the nature of its business. The company has been in existence for 40 years, and by the time optics started to be used in a big way for enterprise applications in the second half of the 1990s, its data traffic dwarfed that of most other large businesses, and it was looked upon as something of a small telecom operator. Consequently, for its internal, corporate metro network, the software giant initially used a good number of SONET rings (including BLSRs), ATM switches, cross-connects, and other transport devices. (An important reason for the wide collection of products was to simulate service provider conditions in order to see how its own offerings would work in those surroundings.)

Since that time, the vast majority of Microsoft's DC connectivity has been Ethernet-based. While the company is not even close to being as encumbered as an incumbent local exchange carrier, which has multiple layers of legacy technology, it would not be surprising to find legacy gear at the enterprise, such as SONET/SDH and Fibre Channel, as well as certainly some InfiniBand clusters for its High Performance Computing Group. More importantly, the company has hired many engineers with extensive service provider experience, with the major goal of adapting technology from that space to better address the scaling of bandwidth at the firm, such as integrating its own passive, short-reach DWDM links. This heavy dependence on the service provider industry, which as usual is slow to change, has been a definite problem for Microsoft.

To be fair, Google started major construction of its internal network at a later time. Yet, there was out-of-the-box thinking right from the beginning (including employing optical engineers with a range of experiences), with an emphasis on simple transport pipes, along with building its own routers and switches without the unneeded (as well as costly) bells and whistles. (It is not clear whether Microsoft has decided to do the same, although Microsoft does build its own servers.) In addition, Google has always viewed the optical network as a whole, whereas Microsoft appears to have been more focused on finding the most cost-effective optical connections to support the deployment of the next group of 10,000 or so servers.

[written by Mark Lutkowitz]

(To read "Google's Surrender to Extreme Environmentalist Pressure – Hit to Optics," please click here.)
