Next-generation traffic monitoring for the modern data centre

Over the past decade or so, advancements in technology and evolving ways of working have caused a shift in the way data centres are designed and managed. While IT has enabled businesses to do far more, allowing workers to accomplish things that were not possible just a few years ago, the impact on data centre operations cannot be denied – and IT professionals must keep pace with these changes to maintain efficient processes and reliable services. By Trevor Dearing, EMEA marketing director, Gigamon.

  • Monday, 2nd September 2013. Posted by Phil Alsop.

While simple, static infrastructures were adequate to support the IT operations of the past, modern demands have created a level of complexity that can make it difficult to deliver a secure, flexible and well-maintained data centre infrastructure. Earlier this year, Gartner forecast that the global cloud market would rise to $131 billion, up from $111 billion in 2012, with strong demand across all types of cloud service offerings. As this becomes a reality, and as more businesses turn to cloud computing for its cost and agility benefits, infrastructure providers have seen a fundamental shift in what is required of them, and the complexity of service delivery has increased dramatically.


As an example, different business units within a single enterprise now have entirely different requirements when it comes to data retrieval and management. The security team wants information related to security event monitoring, the marketing team wants data to analyse the customer experience, and so forth – taking the organisation from a single entity to a collection of discrete, multi-faceted teams with disparate needs. In this respect, the enterprise is beginning to operate more like a service provider, in that it is servicing multiple tenants – though these tenants are internal to the organisation.


The importance of network monitoring
While the data centre industry has undergone great transformations to address the challenges of power, cooling, space and IT assets, comparatively little thought has gone into the tools needed to manage, monitor and secure these transformed data centre environments.
Traditionally, managing IT infrastructure and understanding the user experience involved monitoring the various IT elements – devices, storage, servers and the network – separately, with IT attempting to correlate the information manually. This siloed approach was far from ideal, but it worked when IT infrastructure was static and simple. Today, these legacy monitoring tools are ineffective, and it is critical that service providers adopt a network-centric strategy to monitor traffic in a dynamic data centre environment, where infrastructure is no longer deployed in tightly defined silos. In doing so, they gain the ability to bridge the physical and virtual infrastructure and develop a better understanding of traffic flows throughout the facility. Modern monitoring tools can also deliver unparalleled insight into the user experience – which, more often than not, is determined by network performance.
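
To make that last point concrete, here is a minimal, illustrative sketch – using synthetic packet records, and standing in for no particular vendor's product – of deriving a user-experience signal (per-flow response time) directly from packet timestamps, rather than by polling devices, servers and storage separately:

```python
# Illustrative only: derive per-flow response times from packet timestamps.
# The packet records below are synthetic stand-ins for captured traffic.

from collections import defaultdict

# Each record: (flow_id, direction, timestamp_in_seconds), where direction
# is "request" (client -> server) or "response" (server -> client).
packets = [
    ("10.0.0.5:443", "request",  0.000),
    ("10.0.0.5:443", "response", 0.042),
    ("10.0.0.9:80",  "request",  0.010),
    ("10.0.0.9:80",  "response", 0.310),
]

pending = {}                      # flow -> timestamp of outstanding request
latencies = defaultdict(list)    # flow -> observed response times

for flow, direction, ts in packets:
    if direction == "request" and flow not in pending:
        pending[flow] = ts
    elif direction == "response" and flow in pending:
        latencies[flow].append(ts - pending.pop(flow))

for flow, samples in latencies.items():
    avg_ms = 1000 * sum(samples) / len(samples)
    print(f"{flow}: avg response {avg_ms:.1f} ms")
```

A slow flow stands out immediately, without any device-level polling – which is the essence of a network-centric view of user experience.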


While many operators choose to deploy a wide range of monitoring tools – such as security appliances, performance management systems, network forensics applications, application monitoring solutions and network analytics systems – each delivering part of the overall picture, this approach is inherently flawed. Such tools are only as valuable as the information and traffic that is fed into them.
For instance, if a tool is fed only the traffic of a specific LAN segment or branch, its visibility is limited to those boundaries. In the simple, static IT world described above, this was fine: network managers could simply connect the tool wherever its value would be maximised. But as business operations become more fluid, that area is constantly shifting, and the value of the tools diminishes accordingly.
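
The limitation is easy to picture in code. In the hypothetical sketch below, a "tool" is fed only the traffic tapped from one LAN segment; the segment address and traffic samples are invented for illustration:

```python
# A toy illustration of why a tool's value is bounded by what it is fed:
# this hypothetical tool only ever sees packets from the one tapped segment.

import ipaddress

TAPPED_SEGMENT = ipaddress.ip_network("192.168.10.0/24")  # the tap's scope

def reaches_tool(packet_src: str) -> bool:
    """Return True if the monitoring tool ever sees this packet."""
    return ipaddress.ip_address(packet_src) in TAPPED_SEGMENT

traffic = ["192.168.10.7", "192.168.10.44", "10.1.2.3", "172.16.0.9"]

seen   = [src for src in traffic if reaches_tool(src)]
missed = [src for src in traffic if not reaches_tool(src)]

print("tool sees:  ", seen)    # only the tapped segment
print("tool misses:", missed)  # everything else in the data centre
```
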
Finally, as IT continues to expand beyond the physical walls of the enterprise and into the cloud, the problem only gets worse: understanding the ‘who, what, where and when’ of IT problems becomes virtually impossible without a proper network monitoring strategy.


See what you’re missing
With more than 70 vendors of packet monitoring tools on the market, and several other players moving in, enterprise and service provider infrastructure teams now face the difficult challenge of getting the right packets delivered to the right tools.


They also face the challenge of giving their business units and customers direct visibility into the specific pieces or types of data they need to satisfy their own requirements. This is especially apparent in multi-tenanted facilities, where stakeholders all need to see different slices of the traffic, or to tailor monitoring policies to their individual needs.
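
As a rough sketch of the idea – with entirely hypothetical tenant names and match rules – each tenant's monitoring policy can be expressed as a predicate over the shared traffic stream, so the same capture is fanned out differently to each stakeholder:

```python
# Illustrative per-tenant monitoring policies over one shared capture.
# Tenants, rules and packets here are all invented for the example.

packets = [
    {"proto": "https", "dst_port": 443,  "payload": "customer web"},
    {"proto": "dns",   "dst_port": 53,   "payload": "name lookup"},
    {"proto": "https", "dst_port": 8443, "payload": "admin console"},
]

# Tenant -> predicate describing the packets that tenant is entitled to see.
policies = {
    "security-team": lambda p: True,                  # sees everything
    "marketing":     lambda p: p["dst_port"] == 443,  # customer web only
    "dns-ops":       lambda p: p["proto"] == "dns",   # DNS traffic only
}

for tenant, match in policies.items():
    view = [p["payload"] for p in packets if match(p)]
    print(f"{tenant}: {view}")
```

Each tenant receives only its own slice, from a single underlying stream – no packets are duplicated to tools that have no use for them.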


IT infrastructure is typically designed around elements such as applications, servers, storage and networks, while networks usually comprise core, aggregation and access layers. However, many links in these networks are now running at or near peak utilisation, leaving no bandwidth to spare for the management traffic that operations teams need, or for the tailored data that business units must analyse in order to make better-informed decisions.
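
Some back-of-the-envelope arithmetic, using purely illustrative figures, shows why monitoring traffic no longer fits in-band: mirroring a link's traffic roughly doubles the load on that link, and a link near peak utilisation has no such headroom:

```python
# Illustrative figures only, not measurements from any real network.

link_capacity_gbps   = 10.0
production_load_gbps = 9.2   # a link running near peak utilisation

headroom      = link_capacity_gbps - production_load_gbps
mirror_demand = production_load_gbps  # a full copy of production traffic

print(f"headroom:      {headroom:.1f} Gbps")
print(f"mirror demand: {mirror_demand:.1f} Gbps")
print("in-band mirroring fits:", mirror_demand <= headroom)  # -> False
```

The copies have to travel somewhere other than the production links – which is precisely the case for an out-of-band approach.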


A tailored approach
In 2005, Gigamon introduced a new IT infrastructure layer – the Visibility Fabric – an out-of-band network that carries monitoring and analytics traffic at wire speed across 1Gb, 10Gb and 40Gb network links, essentially taking copies of production traffic off the production network. With products that have demonstrated the ability to optimise visibility at every layer of the IT infrastructure stack, including the virtual network and, eventually, software-defined data centres, Gigamon has helped deliver pervasive visibility across varied topologies for all tools.
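
The underlying pattern can be sketched in a few lines. This is an illustration of the general tap-and-copy idea, not Gigamon's implementation or API; the names here are assumptions for the example. Production forwarding is untouched, while a copy of each packet leaves on an out-of-band path:

```python
# A minimal tap-and-copy sketch: the production path is unchanged, and a
# copy of each packet is queued for the monitoring network. The queue
# stands in for the out-of-band fabric; all names are hypothetical.

from queue import Queue

monitoring_queue: Queue = Queue()  # the out-of-band path to the tools

def forward(packet: dict) -> dict:
    """Production path: deliver the packet as normal."""
    return packet

def tap(packet: dict) -> dict:
    """Copy the packet to the monitoring queue, then forward it unchanged."""
    monitoring_queue.put(dict(packet))  # the copy leaves out-of-band
    return forward(packet)

for pkt in [{"id": 1, "dst": "app-server"}, {"id": 2, "dst": "db-server"}]:
    tap(pkt)

print("production delivered: 2 packets")
print("copies for tools:    ", monitoring_queue.qsize())
```
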


Following a recent enhancement to the Visibility Fabric, Gigamon now makes it possible for IT organisations and service providers to deliver Visibility as a Service to multiple internal stakeholders, meaning that end users within the data centre are able to obtain the visibility they need, based on their unique requirements. These tenants may be various IT operations teams or different business functions within a single enterprise.
With this capability, the marketing team can, for instance, obtain a complete picture of what customers experience when visiting the company website, enabling it to make decisions that both improve the user experience and isolate any problems users encounter more quickly. Similarly, other teams can view and extract data relevant to their own business function – without impacting the monitoring services of other tenants or business functions, and without overloading existing tools on the network.


In the future, when cloud service providers begin to deploy solutions such as this, they will benefit from a unique competitive edge, enabling them to offer more flexible services that truly make a difference in today’s demanding IT environments.