More than 40% of companies have inadequate real-time breach detection capabilities

With security breaches being a certainty, it makes great practical sense to have a “Plan B” in place, says Varonis.

  • Thursday, 4th July 2013, by Phil Alsop

Recent research by data protection specialist Varonis has revealed that more than 40% of respondents had no, or only very limited, capability to detect data breaches through automation, whether real-time alerting or daily/weekly reporting. A remarkable 24%, almost one-quarter of those surveyed, had no automation in place to detect breaches by monitoring for privilege escalations, suspicious data access, file access changes or unusual email activity. A further 19% could detect only some of these events automatically, and Varonis found that just 6% of those surveyed could monitor all of them in real time.


The survey of 248 security professionals, conducted at Infosecurity events in London and Orlando, was aimed at better understanding how well companies can spot breaches in progress.


The findings were particularly alarming in light of the fact that no system of safeguards is perfect: a breach by hackers, by other unauthorised users, or by authorised users who abuse their access is inevitable, says David Gibson, VP at Varonis.


With security breaches being a certainty, it makes great practical sense to have a “Plan B” in place: a strategy for mitigating the liabilities from a data break-in, he adds.


Topping risk mitigation lists are techniques for detecting and monitoring unusual system events. Detective controls that track and analyse user, file system and OS activity for anomalous patterns become a critical layer of defence, and are as important as preventive controls such as authentication, access control lists and firewalls, he says.
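
To make this concrete, a detective control of this kind might, at its simplest, baseline each user’s daily file-access volume and alert on sharp deviations. The Python sketch below is purely illustrative; the event format, field names and threshold are assumptions, not Varonis’s implementation.

    from collections import defaultdict
    from statistics import mean, stdev

    def find_anomalous_users(events, threshold=3.0):
        """events: iterable of (user, day, access_count), ordered by day.

        Hypothetical input; a real detective control would draw these
        counts from OS audit logs or file-server APIs."""
        history = defaultdict(list)
        for user, _day, count in events:
            history[user].append(count)
        alerts = []
        for user, counts in history.items():
            if len(counts) < 5:
                continue  # too little history to establish a baseline
            baseline, spread = mean(counts[:-1]), stdev(counts[:-1])
            latest = counts[-1]
            # Flag users whose latest daily volume sits more than
            # `threshold` standard deviations above their own baseline.
            if spread > 0 and (latest - baseline) / spread > threshold:
                alerts.append((user, latest, baseline))
        return alerts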


Once corporate defences have been breached, hackers look for high-value content, such as personal information, intellectual property, credit card numbers, and other sensitive data, says Gibson.


An IT department’s ability to track this data is key to breach mitigation efforts. Unfortunately, respondents fared poorly here: only 29% had the ability to detect when files containing sensitive data had been accessed or created. And with the rise of cloud services such as Dropbox, often adopted informally by employees, companies have yet another place to search for sensitive content.


The survey results showed that organisations need to improve their cloud monitoring as well: only 22% could track data uploaded to the cloud.
On the positive side, large enterprises showed they do a better job of spotting anomalous file and system events: 36% of them use automated techniques to detect file access control changes, against an overall average of 28%, and 37% use automation to spot privilege escalation, against a 30% average.


Finally, Gibson says that although it is widely accepted that auditing and analysing OS, security, application and, especially, file system logs is critical to good breach mitigation practice, the survey results were again less than encouraging, particularly on discovering breaches involving human-readable sensitive data on corporate file systems. “A mere 28% of respondents report being able to detect suspicious access to data.”
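
As a rough illustration of what that log analysis involves, the sketch below cross-references a file-access audit log against files already classified as sensitive, surfacing reads by users outside each file’s expected audience. The CSV columns, data structures and function names are assumptions for illustration, not any vendor’s API.

    import csv

    def suspicious_accesses(audit_log_path, sensitive_paths, expected_users):
        """Yield (user, path, timestamp) for reads of sensitive files by
        users outside that file's expected audience.

        Assumes a CSV audit log with timestamp, user, action and path
        columns; real file-system audit formats vary by platform."""
        with open(audit_log_path, newline="") as f:
            for row in csv.DictReader(f):
                if (row["path"] in sensitive_paths
                        and row["action"] == "read"
                        and row["user"] not in expected_users.get(row["path"], set())):
                    yield row["user"], row["path"], row["timestamp"]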


There is no doubt that first-line defences are critical in preventing breaches. However, cyber criminals have too many successful attack vectors, which, in combination with advanced persistent threats, cannot always be blocked. Organisations need to be able to detect what they don’t prevent.
“In other words, businesses must assume that as long as they store sensitive data, someone will try to get to it, and a hacker or an insider will gain access at some point. Therefore, ‘Plan B’ detection methods are vital in stopping breaches as soon as they start, thereby limiting the damage,” he concludes.