If there's one thing that can be said about the emerging distributed data environment, it's that it will be chaotic.

At the moment, most long-haul connections between data centers and remote sites are built for largely predictable workloads: a daily data dump or at least semi-regular file transfers of reasonable size. As the enterprise gravitates toward multi-cloud infrastructure, however, the wide area network will take on more of the characteristics of the local area network: rapid, dynamic interchanges of data and applications that can wreak havoc on capacity planning, bandwidth management and other functions.

Part of the solution to this problem is the data center interconnect (DCI), which provides broadband connectivity over great distances, usually on optical transport. But simply building a wider pipe is not enough. In order to gain optimal efficiency, the DCI will also have to scale dynamically to suit this constantly shifting mix of traffic.

A primary consideration when building scale into the DCI is selecting a platform that can handle it natively. One of the most effective ways to do this is an architecture without an active backplane or other limiting factors such as client port lock-in. In this way, a single 400 Gbit/s line card can accommodate a wide variety of Ethernet data rates as they become standardized, allowing enterprises and service providers to use a single architecture across their DCI chassis. This provides a high degree of scale without succumbing to hardware sprawl.
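To make the flexibility concrete, the sketch below models a line card with no client port lock-in: any mix of standardized Ethernet client rates can be mapped onto the same 400 Gbit/s of capacity. The rates and combinations are illustrative assumptions, not tied to any vendor's hardware.

```python
# Hypothetical sketch: one 400 Gbit/s line card serving shifting mixes of
# Ethernet clients via flexible port mapping, rather than fixed client ports.

LINE_CARD_CAPACITY_GBPS = 400

# Standardized Ethernet client rates the card might be asked to serve.
CLIENT_RATES_GBPS = {10, 25, 40, 100, 200, 400}

def fits(clients, capacity=LINE_CARD_CAPACITY_GBPS):
    """Check whether a given mix of client ports fits on one line card."""
    assert all(rate in CLIENT_RATES_GBPS for rate in clients)
    return sum(clients) <= capacity

# As the traffic mix shifts, it is re-mapped onto the same hardware:
print(fits([100, 100, 100, 100]))        # four 100GbE clients -> True
print(fits([200, 100, 40, 25, 25, 10]))  # mixed rates, 400G total -> True
print(fits([400, 100]))                  # oversubscribed -> False
```

The point of the sketch is that scaling is a remapping exercise on one chassis, not a forklift upgrade per client rate.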

Beyond innovative hardware design, however, the next-generation DCI will require substantial management and configuration prowess that can only come from software-defined functionality and the adoption of open protocols and APIs such as the Open Optical Line System (OOLS) and OpenConfig. Indeed, the same flexibility that is currently being implemented within the data center must extend over the wide area if the enterprise expects to achieve the geographically distributed yet highly integrated data ecosystem necessary to accommodate Big Data, the Internet of Things and other emerging data-intensive initiatives.
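As a rough illustration of what that software-defined control looks like in practice, the sketch below expresses an OpenConfig-style intent for a DCI optical channel as structured data of the kind a controller might push over gNMI or NETCONF. The paths and values are simplified assumptions for illustration, not a complete or validated instance of the OpenConfig models.

```python
# Illustrative, simplified OpenConfig-style intent for one optical channel.
# Field names and values are assumptions sketched after the public models;
# a real deployment would validate against the published YANG schemas.
optical_channel_intent = {
    "openconfig-platform:components": {
        "component": [{
            "name": "OCH-1-1",
            "optical-channel": {
                "config": {
                    "frequency": 193100000,       # MHz (193.1 THz, C-band)
                    "target-output-power": -1.0,  # dBm
                    "operational-mode": 1,        # vendor-defined mode ID
                },
            },
        }],
    },
}

def flatten(tree, prefix=""):
    """Flatten nested intent into path/value pairs, as a controller's
    Set request might carry them. List entries are keyed by 'name'."""
    out = {}
    for key, val in tree.items():
        path = f"{prefix}/{key}"
        if isinstance(val, dict):
            out.update(flatten(val, path))
        elif isinstance(val, list):
            for item in val:
                keyed = f"{path}[name={item.get('name')}]"
                rest = {k: v for k, v in item.items() if k != "name"}
                out.update(flatten(rest, keyed))
        else:
            out[path] = val
    return out

for path, value in flatten(optical_channel_intent).items():
    print(path, "=", value)
```

Because the intent is open, vendor-neutral data rather than a proprietary CLI session, the same management stack can drive line systems from multiple suppliers.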

In this light, the standard approach of overprovisioning long-haul connectivity is a non-starter for the DCI. A typical long-haul network runs at only 30 to 40 percent of its provisioned capacity, which means the enterprise is paying for more than twice the bandwidth it actually needs on any given day just to ensure headroom for temporary peak loads. By extending SDN to the transport layer and coupling it with an intelligent, automated management stack, resources can be adjusted up or down on the fly to meet actual, rather than anticipated, data loads.
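The back-of-the-envelope arithmetic behind that claim can be sketched directly: at 30 to 40 percent utilization, the ratio of provisioned to used capacity shows how much idle bandwidth is being paid for.

```python
# Overprovisioning cost implied by the utilization figures in the text.

def overprovision_factor(utilization):
    """Provisioned capacity purchased per unit of capacity actually used."""
    return 1.0 / utilization

for util in (0.30, 0.40):
    print(f"{util:.0%} utilization -> paying for "
          f"{overprovision_factor(util):.1f}x actual need")
# 30% utilization -> paying for 3.3x actual need
# 40% utilization -> paying for 2.5x actual need
```

Even at the optimistic end of the range, the enterprise buys two and a half times the bandwidth it uses, which is the gap dynamic, SDN-driven allocation is meant to close.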

A key aspect in gauging the need for broad scalability on the data center interconnect is the fact that the DCI is not simply a means of transporting bits from one place to another. Rather, it is intended to lend full support to the applications and content that drive revenue. As such, it places a premium on port costs, capacity requirements and even the physical footprint, since no one wants to tie up multiple rack units just to maintain connectivity to the outside world.

The DCI, then, is not just a new kind of networking but an integral component of a new, geographically distributed data ecosystem. And in order to fulfill its role as the link between highly dynamic virtual data centers, it must be designed from the ground up with scalability in mind.