Network monitoring is dead, says cPacket

Current solutions aren’t agile enough to keep up with today’s demands.

Thursday, 17th October 2013, by Phil Alsop

Network monitoring is dead, that is, unless monitoring solutions become agile enough to deliver real-time visibility while keeping up with the increasing complexity, volume and speed of network traffic in virtualised data centre and cloud application delivery environments.


Today’s legacy monitoring solutions cannot address the demands of modern networks because they were not conceived to handle the problems that arise in large-scale data centres and high-speed networks. In simple terms, the legacy architecture relies on aggregating all the traffic and analysing it after the fact in a centralised location. The increasing volume and speed of network traffic makes this legacy approach a bottleneck by design.
Such a bottleneck means lower operational agility, restricted visibility, and slow response to situations that require corrective action, because operations teams lack proactive situational awareness and real-time access to ground truth.


Rony Kay, founder and CEO of cPacket, explained: “The traditional monitoring architecture is like the cashier in a struggling legacy retailer. At legacy retail stores, during busy times the centralised cashier is a bottleneck that leaves customers standing around waiting. In an Apple store, by contrast, the approach is more agile and distributed: every employee can help you with your needs and take your payment from anywhere on the floor, instantly. This distributed model delivers higher customer satisfaction and enables more efficient use of space and time.


“Traditional monitoring solutions aim to aggregate all the traffic from across the network for centralised post-processing, during which potential issues can be unearthed. In contrast, ‘Pervasive Network Intelligence’, which is physically distributed across the network and virtually centralised, performs the heavy lifting of inspecting every packet and every flow in real time, where the traffic actually flows. This distributed approach to network monitoring is inherently more effective for large, complex environments, giving operators of modern networks real-time situational awareness of what is happening anywhere in their networks.”


Old-fashioned monitoring systems do not scale because they try to post-process massive amounts of aggregated data centrally, after the fact. That hindsight approach does little to address problems before they impact users.
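
As a rough, hypothetical illustration of the contrast (not cPacket’s implementation), the Python sketch below keeps per-packet counters at each link and forwards only compact summaries to a “virtually centralised” view, instead of shipping raw traffic to a single collector. All class, field and link names are invented.

    # Hypothetical sketch, not cPacket's code: distributed monitors do the
    # per-packet work at each link and forward only compact summaries,
    # so no central collector has to ingest the raw traffic.
    from collections import Counter
    from dataclasses import dataclass, field

    @dataclass
    class LinkMonitor:
        """Inspects packets where they flow and keeps local counters."""
        name: str
        packets: int = 0
        bytes_seen: int = 0
        flows: Counter = field(default_factory=Counter)

        def inspect(self, src, dst, size):
            # The heavy lifting happens here, at the link itself.
            self.packets += 1
            self.bytes_seen += size
            self.flows[(src, dst)] += 1

        def summary(self):
            # Only this small record crosses the network, not raw packets.
            return {"link": self.name, "packets": self.packets,
                    "bytes": self.bytes_seen,
                    "top_flow": self.flows.most_common(1)}

    def central_view(monitors):
        """A 'virtually centralised' view built from distributed summaries."""
        return [m.summary() for m in monitors]

    edge, core = LinkMonitor("edge-1"), LinkMonitor("core-1")
    edge.inspect("10.0.0.1", "10.0.0.9", 1500)
    edge.inspect("10.0.0.1", "10.0.0.9", 1500)
    core.inspect("10.0.0.9", "10.0.0.1", 64)
    for row in central_view([edge, core]):
        print(row)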


Kay suggested that network professionals assess whether they need more agile network monitoring solutions by considering a few questions:
· Is it easy enough to immediately and conclusively resolve intermittent problems that negatively impact business activities and users' quality-of-experience?
· Is timely, consistent, and proactive situational awareness available based on granular performance and health indicators?
· Is it possible to drive capacity planning and traffic engineering based on granular information about temporal behaviours like spikes and jitter? (A sketch of this kind of analysis follows the list.)
· Is it possible to interactively search network traffic in real time for tell-tale signs of imminent problems, such as a distributed denial-of-service (DDoS) attack?
· Do you have the information agility needed to optimise your operational efficiency and infrastructure utilisation?
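
To make the “spikes and jitter” question concrete, here is a minimal sketch of spike (microburst) detection over per-interval byte counts. The mean-plus-k-standard-deviations threshold is an assumption chosen for illustration, not cPacket’s algorithm.

    # Illustrative only: flag intervals whose byte count exceeds the mean
    # by k population standard deviations. The threshold rule is an
    # assumption for this example, not cPacket's algorithm.
    import statistics

    def find_spikes(bytes_per_interval, k=2.0):
        """Return indices of intervals that look like microbursts."""
        mean = statistics.fmean(bytes_per_interval)
        stdev = statistics.pstdev(bytes_per_interval)
        return [i for i, b in enumerate(bytes_per_interval)
                if b > mean + k * stdev]

    # Mostly steady traffic with one burst at interval 5.
    samples = [1200, 1100, 1250, 1180, 1220, 9600, 1190, 1210]
    print(find_spikes(samples))  # -> [5]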


To address all of these issues, a novel hardware-software architecture can be deployed to deliver pervasive real-time intelligence. It should combine real-time hardware inspection of every bit in every packet and every flow across the entire network environment with radically simplified software that allows instant access to relevant information from any web browser.
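
As a toy sketch of that “any web browser” access model, and assuming (purely for illustration) that each distributed monitor serves its local counters as JSON over HTTP, the endpoint and payload shape below are invented, not a cPacket API.

    # A toy sketch of browser-accessible monitoring: each distributed
    # monitor serves its local counters as JSON over HTTP. The /stats
    # endpoint and payload shape are invented for this illustration.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    STATS = {"link": "edge-1", "packets": 0, "bytes": 0}  # updated elsewhere

    class StatsHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path != "/stats":
                self.send_error(404)
                return
            body = json.dumps(STATS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    # Browse to http://localhost:8080/stats to read this monitor's counters.
    HTTPServer(("", 8080), StatsHandler).serve_forever()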


“It’s time for monitoring solutions to catch up with the rest of the networking and data centre world,” concluded Kay. “Otherwise, they risk the fate of legacy retail models that could not modernise to work in today’s world.”