3 principles of data center network design

More ports with higher data rates

A guest post by Lewis White*


The increase in global data traffic and resource-intensive applications such as big data, IoT, AI and machine learning require greater capacity and lower latency in the data center. Operators must provide more ports at higher data rates and a larger number of optical fibers. Among other things, this calls for well-thought-out scaling with more flexible deployment options.

Lewis White, vice president of Enterprise Infrastructure Europe at Commscope, explains three principles for designing data centers to achieve the lowest possible latency.

(Image: Public Domain: Pexels / Pixabay)

The underlying infrastructure in a data center must be well thought out and equipped with flexible deployment options. There are three design principles:

1. Application-based building blocks

Typically, application support is limited by the maximum number of I/O ports on the switch faceplate. For a 1RU switch, that capacity is currently limited to 32 QSFP/QSFP-DD/OSFP ports. The key to maximizing port efficiency is therefore the ability to make full use of the capacity behind each of those ports.

The traditional four-lane quad design enabled a stable migration from 50G to 100G and 200G. Beyond 400G, however, the 12- and 24-fiber configurations used to support quad-based applications become less efficient, leaving significant switch-port capacity wasted. This is where octal technology comes into play.

From 400G onward, eight-fiber technology and 16-fiber MPO breakouts become the most efficient multi-pair building blocks for trunk applications. Moving from quad-based implementations to octal configurations doubles the number of breakouts, allowing network administrators to remove some switch layers.
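
To make that arithmetic concrete, here is a minimal sketch of the breakout math for a fully populated 1RU switch. The 32-port faceplate matches the figure above; the lane counts and the 100G lane rate are illustrative assumptions, not vendor data.

```python
# Minimal sketch: breakout capacity of a fully populated 1RU switch.
# Port count (32) is from the text; lane counts and lane rate are
# illustrative assumptions.

FACEPLATE_PORTS = 32  # QSFP/QSFP-DD/OSFP cages on a 1RU switch


def breakout_capacity(lanes_per_port: int, lane_rate_gbps: int) -> tuple[int, float]:
    """Return (breakout links, total faceplate bandwidth in Tbps)."""
    links = FACEPLATE_PORTS * lanes_per_port
    return links, links * lane_rate_gbps / 1000


quad_links, quad_tbps = breakout_capacity(lanes_per_port=4, lane_rate_gbps=100)
octal_links, octal_tbps = breakout_capacity(lanes_per_port=8, lane_rate_gbps=100)

print(f"Quad  (4 lanes/port): {quad_links} breakouts, {quad_tbps:.1f} Tbps faceplate")
print(f"Octal (8 lanes/port): {octal_links} breakouts, {octal_tbps:.1f} Tbps faceplate")
# Quad  (4 lanes/port): 128 breakouts, 12.8 Tbps faceplate
# Octal (8 lanes/port): 256 breakouts, 25.6 Tbps faceplate
```

Doubling the lanes behind each port is what lets the same 32-port faceplate serve twice as many breakout links, which is where the removed switch layers come from.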

Switches are evolving to provide more lanes at higher speeds while reducing network costs and power consumption. Octal modules enable these additional connections across the 32 ports of a 1U switch. Preserving the higher radix is achieved by using lane breakout from the optical module.

(Image: Commscope: “The Migration to 400G/800G: The Fact File”)

In addition, today’s applications are designed for 16-fiber cabling. Data centers that support 400G and higher applications with 16-fiber technology can significantly expand their switching capacity. This 16-fiber design—including matching transceivers, trunk/array cables, and distribution modules—will be the common building block that enables data centers to go from 400G to 800G, 1.6T and beyond.

But not every data center is ready to ditch its old 12- and 24-fiber deployments. Operators must also be able to support and manage applications without wasting fiber or losing throughput. Therefore, efficient application-based building blocks for 8-, 12-, and 24-fiber configurations are also needed.
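
One way to see why the size of the building block matters is to check how cleanly an application's fiber requirement divides into each trunk size. The short sketch below computes the stranded fibers for a few combinations; the application fiber counts are illustrative assumptions.

```python
# Minimal sketch: fibers left stranded when an application's fiber
# requirement is mapped onto a given trunk/MPO building block.
# Application fiber counts are illustrative assumptions.

TRUNK_SIZES = [8, 12, 16, 24]  # fibers per trunk building block
APPLICATIONS = {"duplex (2 fibers)": 2, "quad (8 fibers)": 8, "octal (16 fibers)": 16}

for app, needed in APPLICATIONS.items():
    for trunk in TRUNK_SIZES:
        fits = trunk // needed                # whole applications per trunk
        stranded = trunk - fits * needed      # fibers that cannot be used
        utilization = 100 * (trunk - stranded) / trunk
        print(f"{app:<18} on {trunk:>2}-fiber trunk: {fits} application(s), "
              f"{stranded:>2} stranded fibers ({utilization:.0f}% utilization)")
```

For example, an 8-fiber application on a 12-fiber trunk leaves four fibers stranded (67 percent utilization), which is exactly the kind of inefficiency that matching the building block to the application avoids.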

2. Flexibility through design

Another key requirement is a more flexible design that allows data center managers and their partners to quickly redistribute fiber capacity on the patch panel and adapt their network to changes in resource allocation. One way to do this is to develop modular panel components that allow alignment between point-of-delivery (POD) and network design architectures.

In a traditional fiber-optic platform design, components such as modules, cassettes and adapter packs are panel-specific. Replacing components with a different configuration therefore means replacing the panel as well, which adds time and cost for supplying both new components and new panels, and it leaves data center customers with additional ordering and inventory overhead.

In contrast, a design in which all panel components are essentially interchangeable and fit into a single common panel allows fiber capacity to be reconfigured and delivered quickly and at lower cost.
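
As a rough illustration of the difference, the following sketch models a common panel whose slots accept interchangeable components; all names and slot counts are hypothetical and only serve to show the design idea, not any specific product.

```python
# Minimal sketch (hypothetical names and sizes): a common panel whose
# slots accept any component footprint, so a reconfiguration swaps
# components instead of replacing the panel itself.
from dataclasses import dataclass, field


@dataclass
class Component:
    name: str        # e.g. "MPO16 cassette" or "LC adapter pack"
    slots_used: int  # common-footprint slots this component occupies


@dataclass
class CommonPanel:
    slots: int = 4
    installed: list[Component] = field(default_factory=list)

    def free_slots(self) -> int:
        return self.slots - sum(c.slots_used for c in self.installed)

    def install(self, component: Component) -> None:
        if component.slots_used > self.free_slots():
            raise ValueError("No free slots: remove a component, not the panel")
        self.installed.append(component)

    def swap(self, old_name: str, new: Component) -> None:
        # Reconfiguration touches only the component, never the panel.
        self.installed = [c for c in self.installed if c.name != old_name]
        self.install(new)


panel = CommonPanel()
panel.install(Component("MPO16 cassette", slots_used=1))
panel.install(Component("LC adapter pack", slots_used=1))
panel.swap("LC adapter pack", Component("MPO8 cassette", slots_used=1))
```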

3. Fiber deployment and management routines

Routine tasks involved in deploying, upgrading and managing fiber-optic infrastructure should be simplified and accelerated. While panel and blade designs have seen advances in functionality and style over the years, there is still room for significant improvement.

There is also room for optimization in fiber polarity. As fiber installations become more complex, it becomes increasingly difficult to ensure proper alignment of transmit and receive paths across the entire link.

In the worst case scenario, installers must reverse modules or cable assemblies to establish correct polarity. Errors may not be detected until the connection is already established, and it takes time to resolve the problem.

However, there are already fiber-optic platforms on the market that solve this problem. They offer standardized polarity, which simplifies alignment and avoids having to reverse cable assemblies.
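
As a rough illustration of what keeping polarity correct means, the sketch below models each link component as a mapping of fiber positions and checks that every transmit position ends up on a receive position at the far end. The mappings and the odd/even position convention are simplifying assumptions and do not correspond to a specific standardized polarity method.

```python
# Minimal sketch: verify end-to-end polarity of a link built from several
# components, each modeled as a permutation of 12 fiber positions.
# The mappings below are illustrative, not a specific polarity method.


def compose(*stages: dict[int, int]) -> dict[int, int]:
    """Chain position mappings: the output of one stage feeds the next."""
    link = {pos: pos for pos in stages[0]}
    for stage in stages:
        link = {start: stage[end] for start, end in link.items()}
    return link


straight = {i: i for i in range(1, 13)}                           # position-for-position
pair_flip = {i: i + 1 if i % 2 else i - 1 for i in range(1, 13)}  # 1<->2, 3<->4, ...


def polarity_ok(link: dict[int, int]) -> bool:
    # Assumption: transmit fibers sit on odd positions, receive on even ones.
    return all(link[tx] % 2 == 0 for tx in range(1, 13, 2))


print(polarity_ok(compose(straight, pair_flip, straight)))  # True: one flip in the link
print(polarity_ok(compose(pair_flip, pair_flip)))           # False: the two flips cancel
```

The point of a standardized polarity is that installers never have to reason through this kind of check in the field: the components can only be combined in ways that keep the transmit-to-receive mapping correct.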

* Lewis White is Vice President Enterprise Infrastructure Europe at Commscope.
