What exactly is a “real” client-side 400G device? We can certainly count on many marketing folks, and just about all research firms, to define it as broadly as possible, because for them it is all about making the size of the opportunity as big as possible. Certainly, in the past and more recently, when fibeReality talked about the slow arrival of 400GbE, it was in the context of how the vast majority of optical technologists in the industry define it: after standardization of the physical medium dependent sublayers by the IEEE, in particular, moving forward, the efforts of the 802.3ck Task Force, as the main presumption is that the industry needs to get the SerDes line rates to 100 Gb/s.

Of course, two of the Web 2.0 operators, Google and Amazon, have chosen for their internal optical networks to go with what at least some engineers might call band-aid approaches: the former already with a 2x200G FR4, while the latter claims it will initially be using four 100GBASE-DRs packaged into one QSFP-DD. Yet, one of the most prominent optics engineers in the space, Chris Cole, Vice President, Advanced Development at Finisar, has been very fervent in his remarks, such as during several presentations at OFC 2019, that these approaches are not 400G, and he presented arguments asserting that there is no demand for the data rate on the client side outside of transport, and that this has always been the case with every other speed. Cole also pointed out that 400G adoption is tied to switch bandwidth, and will not be deployed in volume until the 25.6T and 51.2T gear becomes available.

All in all, if one defines 400GbE as all-encompassingly as possible, then one million switch ports shipped next year is not out of the question. Yet, if one goes by what many technical people in the space would argue is the true rate, then fibeReality currently foresees (with some level of hopefulness) quite moderate quantities starting to reach customers in 2023, with somewhat higher volumes beginning in 2024, but still well under a million switch ports in the total installed base. Therefore, one should not be surprised that we also do not believe any forecast approaching a million modules shipped in total before 2025. So, at a minimum, we are around a couple of years more pessimistic than Cole’s projections.
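To put the definitional point in concrete terms, here is a minimal back-of-the-envelope sketch of our own (not Cole’s, and not vendor data) showing why the hyperscaler modules reach 400G in the aggregate without presenting a single 400GbE MAC; the function name and labels are purely illustrative.

```python
# Back-of-the-envelope sketch: aggregate capacity of the module configurations
# named above versus a single 400GbE MAC. Simplifying assumption: each module
# is modeled only as (MAC rate) x (number of MACs).

def aggregate_gbps(mac_rate_gbps: int, mac_count: int) -> int:
    """Total client-side capacity presented by a module built from N MACs."""
    return mac_rate_gbps * mac_count

configs = {
    "Google 2x200G FR4 (two 200GbE MACs)":      aggregate_gbps(200, 2),
    "Amazon 4x100GBASE-DR in QSFP-DD (4 MACs)": aggregate_gbps(100, 4),
    "True 400GbE module (one 400GbE MAC)":      aggregate_gbps(400, 1),
}

for name, gbps in configs.items():
    print(f"{name}: {gbps} Gb/s aggregate")

# All three land at 400 Gb/s of aggregate capacity, but only the last presents
# a single 400GbE MAC, which is the definition the purists insist on.
```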
It should be noted that 12.8T switches may remain viable substantially longer than the conventional wisdom suggests, as decades of experience teach us that the rate of obsolescence of infrastructure is frequently well underestimated, most conspicuously in traditional networks. Not only could there be a desire to prolong the use of the Broadcom Tomahawk 3 ASICs, but the Tomahawk 4 switch might be postponed over technical issues, such as initial struggles in adequately packaging all of that functionality. Furthermore, we heard over the last year that 100G lane rates coming off the switch have proven to be harder than designers first supposed.
The great obstacles to achieving 100G electrical lanes have not been a secret. Certainly, some development people anticipated early on that the electronics were starting to be pushed to their limits even at 50G. Also, with the collapse of the datacom ecosystem, we think our forecast out to 2025 already gives the industry some benefit of the doubt.
In addition, one has to assume that samples of both 25.6T switch ASICs and PAM4 chips are available well in advance of the 802.3ck Task Force finishing its work. Then several matters have to be resolved by both line-card designers and optics suppliers, especially signal integrity. Naturally, another reason to be careful with any timeline is that history teaches us that the drive toward a standard should not be confused with the actual timing of market deployment.
On the other hand, there is theoretically a more bullish case for 400G in the meantime: configurations with 8x50G electrical interfaces, potentially in volume, on the existing 12.8T devices, which could be used in parts of the data center. However, in general, these operators look to get as many ports of a given MAC rate as feasible. Additionally, waiting for the higher-capacity solutions allows for a more elegant path toward their future architecture.
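To make the port-count argument concrete, here is a rough, assumption-laden sketch of ours (not based on any vendor’s published specifications) of how many front-panel ports of a given MAC rate different ASIC capacities can support.

```python
# Rough port-count arithmetic: front-panel ports of a given MAC rate that a
# switch ASIC of a given capacity can serve, ignoring oversubscription and
# real-world faceplate limits (a simplifying assumption on our part).

def ports(switch_capacity_tbps: float, mac_rate_gbps: int) -> int:
    return int(switch_capacity_tbps * 1000 // mac_rate_gbps)

for capacity in (12.8, 25.6):
    for rate in (100, 400):
        print(f"{capacity}T ASIC at {rate}GbE: {ports(capacity, rate)} ports")

# A 12.8T ASIC yields only 32 ports of 400GbE (via 8x50G lanes today), while a
# 25.6T device with 100G SerDes doubles that to 64, one reason operators may
# prefer to wait rather than deploy 400G on the current generation of silicon.
```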
Returning to the “debate” over definition, while the earlier versions of transceivers being adopted by the hyperscalers do not carry a 400GbE MAC rate, in fairness to a module vendor, it is still pretty much 400G in the aggregate. It can also be legitimately argued that Google, for example, sees itself as constrained by its architectural requirements, and is taking a reasonable risk with a rather minor step-up in functionality. Nevertheless, there is widespread agreement in the engineering community that matching electrical and optical lane rates is imperative for reaching the lowest cost and using the minimal amount of power.
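As a hedged illustration of why matched lane rates matter, the sketch below encodes only the widely cited point that a lane-count mismatch between the host and the optics forces a gearbox (an extra mux/demux or DSP stage) inside the module; the function and example cases are hypothetical.

```python
# Illustrative check: a mismatch between electrical and optical lane counts
# forces a gearbox stage in the module, which adds silicon, power, and cost.

def needs_gearbox(electrical_lanes: int, optical_lanes: int) -> bool:
    return electrical_lanes != optical_lanes

cases = {
    "8x50G host lanes into 4x100G optics (today's 400G)":  (8, 4),
    "4x100G host lanes into 4x100G optics (post-802.3ck)": (4, 4),
}

for name, (elec, opt) in cases.items():
    verdict = "gearbox required" if needs_gearbox(elec, opt) else "no gearbox"
    print(f"{name}: {verdict}")

# Matched 100G electrical and optical lanes eliminate the gearbox, which is
# why they are widely seen as the path to the lowest cost and power.
```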
For our fast-growing, totally separate, quick update blog, which is exclusively on fibeReality’s LinkedIn page, please follow us here.
As always, fibeReality does not recommend any securities, and this writer does not invest in any companies being analyzed by us.
[written by Mark Lutkowitz]