Three key considerations for creating a flexible, future-ready fibre platform for data centres

By Ken Hall, Data Centre Solutions Architect at CommScope.

  • Friday, 21st October 2022

We all know data is one of the most valuable assets an organisation can have. And as it continues to be generated at breakneck speed (the latest estimates suggest 2.5 quintillion bytes of data are produced every day), the demand for more storage space grows with it.

 

This has resulted in unprecedented growth in the global data centre market, with recent research estimating it will reach USD 288.3 billion by 2027, growing at a CAGR of 4.95%.

 

It’s no great revelation, therefore, that the data centre environment - whether hyperscale, global-scale, multi-tenant, or enterprise - is in a state of constant change, and now is a time of great opportunity.

 

It is also the case that with data centre applications becoming more resource-intensive and fluid, network managers must up their infrastructure game to meet these rapid technological advances head on.

 

However, some of the changes in data centre evolution - characterised by ever-increasing fibre density, bandwidth, and lane speeds - can be complex to implement and have greater and further-reaching consequences than others.

 

And it’s not just the data centre that feels them. Everyone in the ecosystem — from designers, integrators and installers to OEMs and infrastructure partners — must adapt to these fundamental changes.

 

So, what’s causing them in the first place? For a start, we’re witnessing the next great migration in speed — the octal era — where 400G applications make the jump to 800G (and beyond). At the same time, global data consumption keeps climbing, and demand for resource-intensive applications like big data, IoT, AI and machine learning has pushed the need for more capacity and reduced latency within the data centre.

 

At switch level, the use of higher-capacity ASICs (application-specific integrated circuits: the brains of the switch) makes this possible. Data centre managers need to provision more ports at higher data rates and higher optical lane counts. That’s no small feat, and it requires thoughtful scaling with more flexible deployment options at a time when data centres are being forced to achieve more with fewer resources.
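
To make that arithmetic concrete, here is a minimal sketch - my own illustration, using an assumed generic 25.6T ASIC rather than any particular vendor’s figures - of how an ASIC’s electrical lanes map onto front-panel ports, and why eight-lane optics let a 32-port faceplate expose the full switch capacity.

```python
# Illustrative only: assumed figures for a generic 25.6T switch ASIC,
# not the specification of any particular product.

def front_panel(total_serdes: int, serdes_gbps: int, lanes_per_port: int):
    """Return (number of ports, per-port speed in Gb/s) for a given optic type."""
    ports = total_serdes // lanes_per_port
    port_speed_gbps = serdes_gbps * lanes_per_port
    return ports, port_speed_gbps

# A hypothetical ASIC built from 256 x 100G electrical lanes (25.6 Tb/s):
for lanes in (4, 8):
    ports, speed = front_panel(total_serdes=256, serdes_gbps=100, lanes_per_port=lanes)
    print(f"{lanes}-lane optics: {ports} ports at {speed}G per port")

# 4-lane (quad) optics:  64 ports at 400G - more ports than a 1RU faceplate holds
# 8-lane (octal) optics: 32 ports at 800G - matches a 32-port QSFP-DD/OSFP faceplate
```

In other words, at the same electrical lane speed, octal optics keep every ASIC lane usable within the physical limits of a 1RU faceplate.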

 

This task might fall at the feet of data centre network managers, but installers, integrators, system designers and OEMs are all invested in the data centre’s success. The value of the physical layer infrastructure depends in large part on how easy it is to deploy, reconfigure, manage, and scale.

 

Identifying the criteria for a flexible, future-ready fibre platform

We began focusing on the next generation of fibre platform at CommScope several years ago. After our own initial research, we went straight to our customers and partners. We asked them: ‘Knowing what you know now about network design, migration and installation challenges, and application requirements — how would you design your next-generation fibre platform?’

The answers were illuminating - and the dominant themes matched our own research. These advanced platforms must enable easier, more efficient migration to higher speeds, ultra-low-loss optical performance, faster deployment, and more flexible design options.

By synthesising this customer input with lessons learned from over 40 years in the industry, we have identified three critical design requirements necessary for addressing the changes affecting both data centre customers and their design, installation, and integration partners. These are:

 

Application-based building blocks

 

Generally, application support is limited by the maximum number of I/O ports on the front of a switch. For a 1RU switch, for example, capacity is currently limited to 32 QSFP/QSFP-DD/OSFP ports. Making the best use of that capacity is key to maximising port efficiency.

 

The more traditional four-lane quad designs offered a steady migration path to 50G, 100G and 200G. But where we are now, at 400G and above, the 12- and 24-fibre configurations used to support quad-based applications become less efficient, leaving significant capacity stranded at the switch port.

 

This is where octal technology comes into its own. At 400G and above, the most efficient multi-pair building block for trunk applications is eight-lane octal technology with 16-fibre MPO breakouts.
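
As a rough sketch of that efficiency argument - my own arithmetic, not CommScope figures - an eight-lane application needs 16 fibres (one transmit and one receive per lane), so the size of the trunk connector determines how many fibres sit idle.

```python
# Rough illustration: fibres used and stranded when an 8-lane (16-fibre)
# application lands on different MPO trunk sizes. Assumed layout, not a spec.

APP_FIBRES = 16  # 8 lanes x (1 transmit + 1 receive fibre)

for trunk_fibres in (12, 16, 24):
    apps_per_connector = trunk_fibres // APP_FIBRES
    stranded = trunk_fibres - apps_per_connector * APP_FIBRES
    print(f"{trunk_fibres}-fibre MPO: {apps_per_connector} application(s) "
          f"per connector, {stranded} fibre(s) stranded")

# 12-fibre MPO: 0 applications per connector, 12 stranded (must pair connectors)
# 16-fibre MPO: 1 application per connector, 0 stranded
# 24-fibre MPO: 1 application per connector, 8 stranded
```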

 

By doubling the number of breakouts, we give network managers the ability to eliminate some switch layers. Not to mention that today’s applications are being designed with 16-fibre cabling in mind. By supporting 400G and higher applications with 16-fibre technology, we are allowing data centres to deliver full switch capacity.

 

This 16-fibre design — including matching transceivers, trunk/array cables and distribution modules — is the foundation for enabling data centres to move forward from 400G to 800G, 1.6T and beyond.

 

Of course, data centres have to be ready to move away from 12- and 24-fibre deployments — and not all are. They can’t waste fibres or lose port counts while supporting and managing applications, so efficient application-based building blocks for 8f, 12f and 24f configurations are just as important.

 

Design flexibility

 

The second requirement is flexibility in design. This is important so data centre managers and their design partners can quickly redistribute fibre capacity at the patch panel and adapt their networks to support changes in resource allocation. The best way of achieving this is to build modularity into the panel components, enabling alignment between point-of-delivery (POD) and network design architectures.

 

Let’s consider a traditional fibre platform design. The components — including modules, cassettes, and adapter packs — are all specific to the panel. Changing to components with different configurations therefore means swapping out the panel as well. That adds time and cost to deploy both new components and new panels, and it forces data centre customers to order more products and carry the related inventory costs.

 

Compare that to a design where all panel components are essentially interchangeable and can be installed in a single, common panel. This allows designers and installers to reconfigure and deploy fibre capacity in much less time and at a much lower cost, helping data centre customers streamline their infrastructure inventory and reduce the associated costs.
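
A back-of-the-envelope illustration of the inventory effect, using numbers I have assumed purely for the example: with panel-specific components, every panel-and-module pairing is a separate item to stock, whereas with a common panel the panels and modules are counted once each.

```python
# Assumed, illustrative counts only.
panel_families = 3    # e.g. different generations or densities
module_options = 4    # e.g. MPO adapter packs, LC cassettes, splice modules

panel_specific_skus = panel_families * module_options   # every pairing is unique
common_panel_skus = 1 + module_options                   # one panel plus the modules

print(panel_specific_skus, common_panel_skus)            # 12 vs 5 items to stock
```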

 

Simplifying and accelerating fibre deployment and management

 

The last piece of the puzzle is the need to make the routine tasks involved in deploying, upgrading, and managing the fibre infrastructure easier and quicker. While panel and blade designs have advanced functionality to some extent over the years, there is plenty of room for improvement.

 

There’s also the issue of polarity management. As fibre deployments get more complex, it becomes harder to ensure the transmit and receive paths stay aligned throughout the link. At worst, maintaining polarity means installers have to flip modules or cable assemblies, and mistakes can easily be missed before deployment and take time to resolve.
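
One way to catch such mistakes on paper is to treat every trunk, module and patch cord as a mapping of fibre positions and compose those maps end to end. The sketch below is a deliberately simplified model of that idea - not a CommScope tool, and not a full TIA polarity method - for a 12-fibre link.

```python
# Simplified illustration: model each component as a map of fibre positions
# and compose them to see whether the end-to-end channel is straight or flipped.

FIBRES = range(1, 13)                       # positions 1..12 on a 12-fibre MPO
straight = {p: p for p in FIBRES}           # straight-through component
flipped = {p: 13 - p for p in FIBRES}       # position-reversing component

def end_to_end(*components):
    """Compose component maps in order and return the overall position map."""
    result = {}
    for start in FIBRES:
        position = start
        for comp in components:
            position = comp[position]
        result[start] = position
    return result

# Two reversing segments cancel out; mixing one reversing and one straight
# segment leaves the link flipped, so the wrong module or cord at one end
# shows up here before anything is installed.
print(end_to_end(flipped, flipped) == straight)   # True  - polarity preserved
print(end_to_end(flipped, straight) == straight)  # False - flipped end to end
```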

 

And finally, the end-face performance of fibre connections can have a significant impact on network performance and the ability to support these applications. The choice of ultra-low-loss fibre connections - whether single-mode or multimode, flat or angled - can determine the architectural options for these critical networks.
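
End-face performance translates directly into how many connection points a channel can afford. A quick loss-budget sketch - using assumed, illustrative figures for the budget, fibre attenuation and connector losses, not values from any standard or product - makes the point.

```python
# Assumed, illustrative figures only; worked in tenths of a dB so the
# arithmetic stays exact.
channel_budget = 19      # 1.9 dB assumed total channel insertion-loss budget
fibre_loss = 3           # 0.3 dB assumed fibre attenuation over a 100 m channel

for label, per_mated_pair in (("standard", 5), ("ultra-low-loss", 2)):  # 0.5 / 0.2 dB
    mated_pairs = (channel_budget - fibre_loss) // per_mated_pair
    print(f"{label} connectors ({per_mated_pair / 10} dB each): "
          f"up to {mated_pairs} mated pairs in the channel")

# standard connectors (0.5 dB each): up to 3 mated pairs in the channel
# ultra-low-loss connectors (0.2 dB each): up to 8 mated pairs in the channel
```

More headroom for mated pairs is what gives designers the freedom to add cross-connects or extra patching points without breaking the link budget.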

 

What’s the solution?

 

With data speeds and infrastructure complexity on the rise, forward-thinking data centres have to evolve quickly to keep up. Hyperscale environments feel this most acutely, with lane speeds accelerating to 100G, 200G and beyond, and fibre counts multiplying across all layers of the network.

 

We need everyone involved in this process — from network managers and designers to integration professionals and installers — to work together and help data centre operators to make the very most of their existing infrastructure investments, while also preparing for future applications. Early alignment in the process is critical.