At least in fantasy football, the actions are based on activities that genuinely exist in the present world. At a time when the telecom/datacom optical market is arguably as bad as ever, it is unseemly for a relatively large amount of capital to be spent on engineers traipsing around the globe developing standards based on technology that is not real or proven. Using history as a guide, the lion’s share of the individuals attending these meetings will never see 400GbE in volume while they are still on the job.
While we complimented Finisar in our last blog article for advocating NRZ, why did it apparently take a full year for anyone on the IEEE P802.3bs 400 GbE Task Force to formally bring up something so fundamental – the advantage of using existing lanes to achieve higher data rates (for example, 4x10G, 4x25G)? It apparently took the same vendor the same amount of time to be the first to present another basic tenet of optics – the Shannon-Hartley Theorem, which is nearly a century old – and show how it relates to the optical signal-to-noise ratio. One could guess that if these doctrines had been brought up in March 2014, there might not have been much to talk about over the following 12 months.
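For readers who want to see why the Shannon-Hartley Theorem matters here, a minimal sketch follows. The theorem bounds the capacity of any channel as C = B·log2(1 + SNR), so pushing more bits through a lane of fixed bandwidth demands exponentially more signal-to-noise ratio. The bandwidth and SNR figures below are purely illustrative, not taken from any Task Force presentation:

```python
import math

def shannon_capacity_gbps(bandwidth_ghz: float, snr_db: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + SNR).

    bandwidth_ghz: channel bandwidth in GHz (illustrative value).
    snr_db: signal-to-noise ratio in dB (illustrative value).
    Returns the capacity ceiling in Gb/s.
    """
    snr_linear = 10 ** (snr_db / 10)  # convert dB to a linear ratio
    return bandwidth_ghz * math.log2(1 + snr_linear)

# Hypothetical 25 GHz-bandwidth lane at a few SNR levels:
for snr_db in (10, 15, 20):
    print(f"{snr_db} dB -> {shannon_capacity_gbps(25, snr_db):.1f} Gb/s")
```

The point of the exercise is that each doubling of the per-lane rate costs far more than a doubling of SNR, which is exactly why reusing proven lane rates (4x25G, 8x50G) is the pragmatic path.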
Given the current vaporware situation, Finisar could only advocate the most attractive option available, 8x50G (which signals to us that the supplier knows full well that 400GbE is hardly around the corner). The systems integrators are still getting their gear ready for just 25GbE, and nobody will seriously favor 16x25G. All in all, the basic building blocks of capacity needed to legitimately develop a 400GbE standard will not exist for quite a while.
Another aspect to consider is whether such standards bodies are becoming anachronistic. Representatives from enterprise powerhouses such as Google and Facebook evidently do not show up to these events. With the confiscation of the Internet by the US federal government, large investment in fiber optic equipment by service providers will likely slow to a crawl for a period of time (at least until everything is sorted out by the courts), and the dependence on these mega-data center operators (although, again, terribly inflated in size by the public conventional wisdom) will only become more acute.
Certainly, executives at the large telecom carriers would start hyperventilating if such standards entities were to disappear, given the multiple layers of infrastructure installed over several decades. As more time goes by, we hope there will be a growing sense of inevitability around developing a more rational standard – like 200GbE – a data rate whose timing is closer to that of the available technology, so that the result at least has the appearance of being authentic. In addition, 200GbE would be a more realistic rate for large users to drive forward within anything resembling a foreseeable future.
[written by Mark Lutkowitz]