400Gbit/s is right around the corner for data center switching, with significant implications for how data centers get built and connected. Cloud customers will be the first to deploy this new speed and the technology behind it, and the cloud will be that much larger in two years, so we're looking at a massive inflection point for the market and the vendors that serve it.

Let's look at 400Gbit/s from two perspectives: first the technology itself, and second what that technology will ultimately enable.

First Generation

On the technology front, 400Gbit/s will actually hit the market twice, based on different SERDES technologies. The first generation will start sampling in 2018, with some early shipments in 2019. It will be based on 50Gbit/s lanes, and the most popular form factor will be a 32-port 400Gbit/s 1RU switch, very similar in look and feel to what the cloud uses today for 100Gbit/s. There's debate over the form factor of that port, with one camp pushing QSFP-DD and the other OSFP. To avoid a long debate here and lay out the main difference that matters to the customer: QSFP-DD is backward compatible with existing optics modules, while OSFP is forward looking. Given the supply constraints still in the market for optics at 100Gbit/s, one can see why some end customers are demanding QSFP-DD. I find this funny, as the cloud is always thought of as bleeding edge, but in some cases it remains risk averse.

Second Generation

The second generation of 400Gbit/s will be based on a 100Gbit/s SERDES, with four lanes instead of eight. This second generation is what the entire industry, from supplier to customer, wants to get to. It will have a much longer life and will spawn many other important transitions in the data center based on that 100Gbit/s SERDES. Second-generation products will hit next decade. Given the difference between the two generations, the forecast looks odd, because this is not one continuous technology: different pieces will interface drastically differently in each generation. For example, server access in generation one is likely a splitter cable down to lower-speed server NIC ports, while in generation two customers might transition to purpose-built 100Gbit/s ports at top-of-rack. Form factor, thermals, and many other factors will creep into the network topology as well.
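The lane math behind the two generations is simple enough to sketch. This is purely illustrative: the lane rates, lane counts, and the 32-port 1RU figure come from the discussion above, and the helper names are my own.

```python
# Illustrative sketch of the lane arithmetic behind the two 400Gbit/s
# generations. Figures come from the article; helper names are hypothetical.

def port_speed_gbps(lane_rate_gbps: int, lanes: int) -> int:
    """Total port speed is simply lane rate times lane count."""
    return lane_rate_gbps * lanes

# Generation 1: 50Gbit/s SERDES, eight lanes per port
gen1 = port_speed_gbps(50, 8)   # -> 400

# Generation 2: 100Gbit/s SERDES, four lanes per port
gen2 = port_speed_gbps(100, 4)  # -> 400

# A 32-port 1RU switch at 400Gbit/s per port
total_gbps = 32 * gen1          # -> 12800, i.e. 12.8Tbit/s of capacity

# Generation-1 server access via splitter cable: one 400Gbit/s port broken
# out into N lower-speed NIC ports, e.g. 8 x 50G or 4 x 100G.
splits = {n: gen1 // n for n in (2, 4, 8)}  # {2: 200, 4: 100, 8: 50}
```

The same arithmetic explains why the generations deploy so differently: generation one reaches today's servers only by splitting lanes, while generation two's 100Gbit/s lanes line up naturally with purpose-built 100Gbit/s server ports.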

Terabit and Beyond

Both generations will be challenging and will likely have supply constraints. Let's face it, there hasn't been a data center networking transition without supply constraints, so we shouldn't get ahead of ourselves here. We also look at second-generation 400Gbit/s as providing the building blocks for what's next: the push to reach and break the 1Tbit/s mark for a single port. If the industry thinks ahead about future speeds and how to carry its investments forward, it will help mitigate some of these supply constraints, along with the general concern over component supplier survival, because the industry simply can't support every speed, reach, and form factor along the journey.

Looking at what this journey to 400Gbit/s will enable, first and foremost the line between switching and routing will all but go away in the market, vanishing completely in the cloud and blurring to an almost unrecognizable form even in the enterprise space. While some vendors might call it a switch and others a router, it doesn't really matter, as it will be the same box. Merchant silicon will play a key role in driving this transition and pushing the rapid innovation needed to make a universal switch/router platform ubiquitous at 400Gbit/s.

400Gbit/s will also change and push the definition of data center interconnect (DCI). The universal switch/router will take coherent line cards and be used for some DCI use cases; we already see this in today's switch/router platforms to support the bleeding edge. How the cloud deploys DCI will look far different from the telco space, solving a different problem and moving a different set of bits around.

All of the above transitions, plus the sheer size of the cloud two to four years out, will have a significant impact on the market. Data center switching is about to celebrate its 10-year anniversary as a unique set of products, and the journey to 400Gbit/s will be the most important transition the market has ever undergone.