More 400G Insanity at ECOC

At the recent ECOC Exhibition 2015, the optical industry continued its preposterous public narrative that the deployment of 400G is within sight. No doubt a very small number of ports at that data rate will appear on routers, and perhaps even in data centers, in the foreseeable future, and many analysts and reporters can be counted on to disingenuously declare that the market for that speed has truly arrived. Nevertheless, looking at the current migration path and the historic pace of change, volume deployment of 400G even within 20 years could easily be a stretch, and it may be so far out as to be irrelevant to current transport planning by both service providers and large enterprises for future bandwidth requirements.

Even when it came to the aberration of aberrations, the Internet explosion, the market did not move beyond 10G and DWDM for quite an extensive period of time. Without that boom, today's discussion would probably only concern moving to 40G, instead of the seemingly knee-jerk annual predictions by a number of players over the past several years that the rate is on its way out. Obviously, the optical business cannot count on a similar abnormality, like the Internet, to boost the need for capacity in such a substantial way.

There also seems to be a lack of appreciation for the impact of lower-rate deployments. A record number of 1-gig ports are being installed, 10-gig devices will be needed to aggregate them, and so there will be more opportunities for 4x10G going forward, which will continue to serve growth in networks. There is also the matter of 25-gig servers only barely arriving now because they have not exactly been a piece of cake to develop, and so 50-gig servers, along with the optical devices at that speed, cannot be expected to become real anytime soon. That lag will push any meaningful appearance of 400G even further down the road (because we are simply not believers in 8x50G becoming a reality).
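For readers less familiar with the lane shorthand used above and below (4x10G, 4x50G, 8x50G), the aggregate rate is simply the lane count multiplied by the per-lane rate. The following is a minimal illustrative sketch; the function name and the list of configurations are our own, chosen only to decode the shorthand, not anything defined by the standards bodies or vendors discussed here.

```python
# Illustrative only: decode the "N x M gig" lane shorthand used in this post.
# Aggregate rate (Gb/s) = number of lanes * per-lane rate (Gb/s).

def aggregate_rate(lanes: int, lane_rate_gbps: int) -> int:
    """Return the total interface rate for a given lane configuration."""
    return lanes * lane_rate_gbps

# Configurations mentioned in the discussion above and below.
configs = {
    "4x10G": (4, 10),   # 40G built from 10G lanes
    "4x25G": (4, 25),   # the lane arrangement behind 100G QSFP28
    "4x50G": (4, 50),   # 200G, the likely interim client step
    "8x50G": (8, 50),   # one proposed path to 400G
}

for name, (lanes, rate) in configs.items():
    print(f"{name} -> {aggregate_rate(lanes, rate)}G")
```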

Most importantly, 100G will become the new currency in the same way 10G has been over the last 20 years or so, as suppliers look to drive economies of scale, and the former may easily be leveraged even longer than the latter, with the lion's share of the development funding having been generated internally by those same vendors (as we have noted in the past, the first time that has happened in the optical space). So, with a real opportunity to move down the cost curve on 100G, there will hardly be any rush to take on the R&D expense of 200G (4x50G) on the client side, the likely interim step before any substantial move to 400G.

There are other practical considerations. For example, after hefty investments by large service providers in test systems with 100G capability, it is unrealistic to expect budgets to be available for much more expensive test equipment at higher data rates within a relatively short period of time. In addition, if the large component vendors are to have any chance of survival, they need a long break to recover after developing and producing CFP, CFP2, and CFP4/QSFP28 for 100G, with probably too short a window between the introductions of those form factors to sell enough units for an adequate return on investment.

[written by Mark Lutkowitz]
