Quite a few articles have been written over the years about Microsoft’s jealousy of Alphabet’s Google, but they have not really addressed that the same dynamic exists in the optics realm. One cannot help but discern the palpable pain that the dominant search engine provider causes the software giant by having a bigger fiber optic network, despite the latter having started building out its infrastructure roughly two decades earlier. Most interestingly, the characterizations of Microsoft’s partnership with Inphi as leaving Google (as well as others) in the “digital drone dust” or offering “dramatic new ways of running big data applications” are way over-the-top rhetoric: the adoption of this technology is simply an extension of Microsoft’s strategy at 10G for short distances.
At an OIF workshop that took place in Anaheim during the week of OFC 2016, a presentation by Brad Booth, a Principal Architect with Azure Networking, rather inadvertently called attention to the strain surrounding Google’s perceived leadership role among enterprise networks. While fully acknowledging that Microsoft had the second-biggest Data Center (DC) network, Booth made sure it was remembered that the company installed its first DC in the 1980s. He also asked the audience about the significance of 2009, hoping it would be understood that it was only a relatively short time ago that his company became one of a small number of significant hyperscale players, driven by a spike in cloud growth. Unfortunately, one audience member answered that it was the time of the Great Recession, which obviously had to be recognized as true, and after Booth quickly moved back to his narrative, the exchange somewhat undercut the uniqueness of Microsoft’s position, because a depressed market meant that large businesses were rushing to reduce their costs by farming out their data center requirements. Even more surprising, despite the fact that both Google and Microsoft have networks that stretch internationally, Booth suggested, with what could only be described as a serious face, that “Google Fiber,” which in actuality represents a comparatively paltry number of strands, was a major reason for the disparity in the overall size of the two networks.
Of course, the overblown announcement during OFC, at the OSA Executive Forum, regarding the collaboration with Inphi, was an attempt to really set Microsoft apart from Google. However, Microsoft has not been reticent for quite a while about telling current and potential suppliers that for the shorter distances between buildings at 100G, the company has been looking to duplicate its process at 10G of purchasing colored optics and integrating them directly into the routers and switches, eliminating the need for external chassis and transponders, saving space, and excluding features that are not desired. Huge volumes at low cost would trump ultimate performance as well as the longest reach, or even squeezing the last dB of OSNR out of a link in most cases; hence, PAM-4 fit the bill. Obviously, the use of a single vendor within these confined distances obviates any interoperability concerns.
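As background on the modulation tradeoff alluded to above, a minimal sketch of why PAM-4 trades signal margin for throughput (illustrative numbers only; the article does not specify lane or baud rates):

```python
import math

def bits_per_symbol(levels):
    """A multilevel signal with N amplitude levels carries log2(N) bits per symbol."""
    return math.log2(levels)

def line_rate_gbps(baud_gbd, levels):
    """Line rate = symbol rate x bits per symbol."""
    return baud_gbd * bits_per_symbol(levels)

# At the same symbol rate, PAM-4 (4 levels) doubles the throughput of NRZ (2 levels).
nrz_rate = line_rate_gbps(28, 2)    # 28.0 Gb/s
pam4_rate = line_rate_gbps(28, 4)   # 56.0 Gb/s per lane

# The cost: PAM-4 eye openings are one third of the NRZ amplitude,
# roughly a 20*log10(3) ~ 9.5 dB SNR penalty -- the "last dB of OSNR"
# that high-volume, short-reach links can afford to give up.
penalty_db = 20 * math.log10(3)
```

The arithmetic is the whole point: for confined inter-building distances, doubling capacity per lane at low cost matters more than the lost margin.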
As we have talked about in the past, Microsoft has been somewhat of an enterprise trapped in a traditional carrier’s body and mindset, unlike Google, which by starting from scratch later on could take advantage of constructing a simpler, flatter, and cheaper network of optical pipes. While the Redmond-based corporation must realize that the traditional standards bodies, such as the IEEE 802.3 working group, were designed to cater more to the needs of historically slower-moving customers, such as the service providers, the firm feels compelled to continue to participate; Google does not attend these meetings, as the individual requirements of very large Web 2.0 entities can differ significantly anyway. In addition, as Microsoft is caught between two worlds, although it will not admit it publicly, it has to be concerned on some level that the leadership position it has taken in transitioning to an open line approach (perhaps initially another way to one-up Google) could lead to an all-out war, as component suppliers move upstream and systems integrators move downstream. Also, given its schizophrenic position in the optics space, Microsoft has to be a little bit more worried than Facebook (the originator of the concept) apparently is that the push to a dollar per gig, which is within striking distance, is not sustainable long term in terms of the health of the entire ecosystem.
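For scale, the dollar-per-gig target mentioned above implies a hard price ceiling per module; a sketch of the arithmetic (illustrative only, as the article quotes no actual module prices):

```python
def implied_module_price(rate_gbps, dollars_per_gig=1.0):
    """Price ceiling implied by a cost-per-capacity target ($/Gb/s)."""
    return rate_gbps * dollars_per_gig

# A $1/G target means a 100G optical module cannot sell for much above $100,
# regardless of what it costs to develop and build.
ceiling_100g = implied_module_price(100)   # 100.0
```

That single multiplication is why suppliers question whether the target is sustainable for the ecosystem: the ceiling falls as a fixed dollar figure even as R&D cost per generation rises.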
So, this Google-envy syndrome can have the effect of somewhat blinding Microsoft to the prospect that farming out certain optical componentry may become problematic in the very long term, because, naturally, it will never be economical for the company to build everything by itself. For example, its push for the very complex and R&D-intensive On-Board Optics (OBOs), which again may partially be an attempt to differentiate itself from Google and others, may be one of the final straws. Oclaro eloquently expressed the idea at the Executive Forum that after all of the intensive cutting of costs to the bone to get to a healthy situation (and of course, it was fully understood that all large component suppliers have been making a massive investment in the myriad iterations of 100G), it could not afford to shoulder that much of the burden for OBOs.
[written by Mark Lutkowitz]