Data centre downtime: a thing of the past

By Trevor Dearing, EMEA Marketing Director, Gigamon.

Monday, 8th July 2013

The data centre is the lifeblood of any large organisation – and within it, data is constantly traversing the networked infrastructure, providing critical services to both internal and external users. With increasingly complex and diverse technologies working together at incredible speeds, bottlenecks and outages can easily occur – causing headaches for operators and their clients, as well as hurting performance against SLAs. To prevent this, and to ensure the data centre operates as efficiently as possible, it is essential to carefully monitor and analyse all traffic passing through the networks.


Service providers typically understand the need to monitor the data centre – the vast number of clients they serve and the complexity of the tools used to operate a data centre make it virtually impossible not to. The most common approach, however, is to build networks with a myriad of monitoring and security tools – yet connecting a monitoring tool to every critical data path is neither an efficient nor a cost-effective practice. Service providers often run into trouble when monitoring and security tools become overburdened by the sheer volume of data they receive. This can lead to dropped packets and the loss of critical information, as well as service disruptions and unscheduled downtime. In addition, as network operators upgrade to higher-speed networks, monitoring and security tools come under further pressure to perform at a level they were not designed for.


The key to improved monitoring is a layer that filters, aggregates, consolidates and replicates traffic to the monitoring and security tools already in place within the data centre. Millions of traffic flows, thousands of events and hundreds of changes occur within the infrastructure every day and, to secure and monitor all of this activity effectively, pervasive, intelligent and dynamic visibility is required. The tools used need to recognise when specific traffic is significant to more than one management system, and to see across physical and virtual boundaries, in order to provide the clarity required to secure, maintain and support the whole data centre.
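To make the idea concrete, here is a minimal Python sketch of the filter-and-replicate principle. The class names, ports and rules are invented for illustration – a real visibility node does this in hardware at line rate – but the logic is the same: one packet, once filtered, can be copied to every tool that needs it.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    protocol: str   # e.g. "tcp" or "udp"
    dst_port: int

class Tool:
    """Stand-in for a monitoring or security appliance."""
    def __init__(self, name):
        self.name = name
    def receive(self, packet):
        print(f"{self.name} received {packet.protocol}/{packet.dst_port}")

class VisibilityNode:
    """Filters incoming traffic and replicates it to every subscribed tool."""
    def __init__(self):
        self.rules = []   # list of (predicate, [tools]) pairs

    def add_rule(self, predicate, tools):
        self.rules.append((predicate, tools))

    def forward(self, packet):
        # A packet matching a rule is copied to each subscribed tool,
        # so one management system never starves another of data.
        for predicate, tools in self.rules:
            if predicate(packet):
                for tool in tools:
                    tool.receive(packet)

ids = Tool("intrusion-detection")
apm = Tool("app-performance")
node = VisibilityNode()
# Web traffic is significant to both the security and performance tools...
node.add_rule(lambda p: p.dst_port in (80, 443), [ids, apm])
# ...while all other TCP traffic only concerns security.
node.add_rule(lambda p: p.protocol == "tcp" and p.dst_port not in (80, 443), [ids])
node.forward(Packet("tcp", 443))   # delivered to both tools
```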


Adding complexity to the management of data centre networks is the fact that they are continuously evolving. New applications, services and tools are added frequently, each requiring constant re-evaluation of how it is managed – which often involves downtime. However, when all traffic that needs to be monitored is routed through a traffic visibility node, users can connect new tools or monitor new applications without disturbing existing monitoring connections, as the data is automatically filtered and routed to the most appropriate tool. With this technology in place, new tools can be added without lengthy change management processes and without downtime.
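One way to picture why this avoids downtime: attaching a tool to a visibility node is a rule-table update, not a re-cabling exercise. The tool names and filter strings in this sketch are hypothetical:

```python
# Hypothetical rule table on a visibility node: each tool is a
# configuration entry, not a physical connection to a data path.
rules = {
    "intrusion-detection": "tcp",
    "app-performance":     "tcp and port 443",
}

def attach_tool(rules, name, traffic_filter):
    """Bring a new tool online without touching existing mappings."""
    if name in rules:
        raise ValueError(f"{name} is already connected")
    rules[name] = traffic_filter

attach_tool(rules, "voip-analyser", "udp and portrange 16384-32767")
# The two existing entries are untouched: no monitoring connection is
# disturbed and no downtime is incurred in adding the new tool.
```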


Through its filtering technology, a traffic visibility node also virtually eliminates the problem of tools becoming overburdened. Filtering reduces the amount of data sent to a tool so that it receives only the traffic it needs to see, rather than processing vast amounts of unnecessary information. Not only does this remove the issue of dropped packets and compromised analysis, it also improves the longevity, and therefore the ROI, of existing management and analysis tools.
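A rough sketch of the effect, using an invented traffic mix: once filtered, the tool only has to keep pace with the traffic it actually analyses.

```python
import random

# Invented traffic mix: destination ports seen on a busy link.
random.seed(1)
traffic = [random.choice([80, 443, 53, 123, 5060, 3306]) for _ in range(100_000)]

# A VoIP analyser only needs SIP traffic (port 5060 here); without a
# filter it would have to inspect and discard everything else itself.
relevant = [port for port in traffic if port == 5060]

print(f"packets on the link:  {len(traffic)}")
print(f"packets sent to tool: {len(relevant)} "
      f"({100 * len(relevant) / len(traffic):.0f}% of the load)")
```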


As networks are upgraded to 10Gbps and beyond, data centres will often operate underutilised high-speed links that must be monitored before they can be used to full capacity – yet lack tools able to keep up with those speeds. Through traffic filtering, the data passing over these links can be reduced to manageable levels that existing 1Gbps tools can monitor – significantly increasing visibility into the data, while removing the need to purchase monitoring tools specifically for the 10Gbps networks.
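A back-of-the-envelope calculation shows how this works in practice; the utilisation figure below is assumed purely for illustration.

```python
# Assumed figures for illustration only.
link_rate_gbps = 10.0
link_utilisation = 0.35          # average load on the 10Gbps link
tool_capacity_gbps = 1.0         # legacy 1Gbps monitoring tool

offered_gbps = link_rate_gbps * link_utilisation
max_fraction = tool_capacity_gbps / offered_gbps

print(f"traffic on the link:            {offered_gbps:.1f} Gbps")
print(f"share a 1Gbps tool can handle:  {max_fraction:.0%}")
# Filters must therefore select under ~29% of the traffic – realistic
# when a tool only needs one application's or one tenant's flows.
```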


Employing a network traffic visibility solution allows service providers to see exactly what is happening on the network at all times – from threats to performance issues – and to maximise data centre performance while lowering the total cost of management. With increased visibility, operators can see what they would otherwise miss, keep downtime to a minimum and adhere to those all-important SLAs.