Given the conversations that abound across the web, you might think that the main causes of data loss are ransomware and other cyber-security threats. Reports from the likes of New Era Technology, Recovery Explorer and BackupVault nevertheless point out that the leading cause of data loss is hardware failure, which is often cited as responsible for roughly 50–67% of data loss incidents. Human error accounts for up to a further 30% of incidents.
Despite this, BackupVault worryingly points to the results of a survey of 500 UK businesses, conducted by independent Internet Service Provider (ISP) Beaming, which discloses that well over 50% of UK businesses risk losing critical data. It also says: “Nearly 4 million businesses put their very existence in danger by having inadequate backup strategies, with a staggering 700,000+ businesses having no strategy at all.”
Robust data management
David Trossell, CEO and CTO of Bridgeworks, emphasises the importance of robust data management and backup strategies, noting that organisations often underestimate the risks posed by hardware failure and human error. He says that, while ransomware and cyber-security threats are highly publicised, the day-to-day threats to data integrity are just as significant.
This calls for a comprehensive approach to data protection – one that combines stringent cyber-security with a frequent, ideally real-time, backup strategy spanning three disparate locations. In essence, organisations need to regularly review and strengthen their backup protocols to safeguard against all forms of data loss. This includes ensuring that all backups go to three disaster recovery sites that sit outside each other’s circles of disruption.
Cost of doing nothing
The alternative – doing nothing, or next to nothing – is not worth contemplating. Datto, a provider of security and cloud-based software solutions for Managed Service Providers (MSPs), writes in its article ‘The backup myth that is putting businesses at risk’ for BleepingComputer:
“When systems are down, employees can’t work, customers can’t access services and revenue stops immediately. According to research by Oxford Economics, downtime costs businesses roughly $9,000 per minute, or $540,000 per hour. At that scale, even short interruptions are no longer acceptable. Organisations need more than data protection: They need business continuity.” In fact, they need more than business continuity, which occurs after the fact; they need service continuity to prevent any disruption and to keep entire organisations and their ecosystems going.
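The figures Datto quotes are easy to sanity-check. A minimal sketch follows – the $9,000-per-minute rate is from the Oxford Economics research cited above, while the half-hour outage scenario is purely illustrative:

```python
# Downtime cost rate as quoted from the Oxford Economics research cited
# by Datto; the extrapolations below are simple arithmetic, not claims
# from the article itself.
COST_PER_MINUTE = 9_000  # USD

# $9,000/min scales to the quoted $540,000/hour.
cost_per_hour = COST_PER_MINUTE * 60
print(f"${cost_per_hour:,} per hour")  # $540,000 per hour

# What even a modest, illustrative 30-minute outage costs at that rate.
outage_minutes = 30
outage_cost = COST_PER_MINUTE * outage_minutes
print(f"${outage_cost:,} for a {outage_minutes}-minute outage")  # $270,000
```

Even at a fraction of this headline rate, the arithmetic makes the case for investing in continuity before an outage rather than after one.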
General guidance for achieving this includes using WAN Acceleration to ensure that data can be sent at speed while remaining encrypted. With WAN Optimisation, by contrast, data must be decrypted before it can be sent and then re-encrypted at the other end. The zero-touch approach of WAN Acceleration means that encrypted data is sent as it is, without any human interaction.
General CISO best practices
Trossell’s colleague, Mihai Popa, Chief Information Security Officer (CISO) at Bridgeworks, also offers the following tips for data backups to prevent data loss:
1. Treat backups as a Core Security Control: This is critical because backups are the primary defence against ransomware, other forms of cyber-attack, system failures and human error.
2. Follow the 3‑2‑1 (and 3‑2‑1‑1‑0) Backup Principle: 3 copies of critical data, 2 different media types, 1 offsite copy, 1 air-gapped copy and 0 backup errors through regular testing. This is based on best practices endorsed by CISA and NIST-aligned guidance. By following this guidance – and by using WAN Acceleration to achieve it – organisations avoid a single point of failure.
3. Use immutable and isolated backups: They can neither be deleted nor altered, and they can be kept in logical or physical isolation from production environments.
4. Encrypt and strictly control access to backup data: Data might be encrypted or air-gapped, but it could still be at risk from bad actors operating within a company or organisation. Due to this, measures need to be put in place to limit access to authorised personnel. To prevent data from being diverted in transit, it’s always best to ensure that it is encrypted at rest and in transit. Another tip is to segregate duties between administrators and security teams.
Three further best practices are: testing backups and restore procedures regularly (with WAN Acceleration speeding up both backup and restore), validating data integrity, and auditing backup coverage. All of this should be aligned with business risk – including measures to prevent data from being diverted to bad actors while in transit.
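The 3‑2‑1‑1‑0 principle in point 2 lends itself to a simple automated check. The sketch below is illustrative only – the `BackupCopy` structure and `meets_3_2_1_1_0` function are hypothetical names, not part of any vendor’s API:

```python
# Hypothetical sketch: checking a backup plan against the 3-2-1-1-0 rule.
# The BackupCopy fields are illustrative assumptions, not a product API.
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media_type: str        # e.g. "disk", "tape", "cloud-object"
    offsite: bool          # stored outside the primary site
    air_gapped: bool       # logically or physically isolated
    last_test_errors: int  # errors found in the most recent restore test

def meets_3_2_1_1_0(copies: list[BackupCopy]) -> bool:
    """True only if all five parts of the 3-2-1-1-0 rule are satisfied."""
    return (
        len(copies) >= 3                                   # 3 copies
        and len({c.media_type for c in copies}) >= 2       # 2 media types
        and any(c.offsite for c in copies)                 # 1 offsite
        and any(c.air_gapped for c in copies)              # 1 air-gapped
        and all(c.last_test_errors == 0 for c in copies)   # 0 test errors
    )

plan = [
    BackupCopy("disk", offsite=False, air_gapped=False, last_test_errors=0),
    BackupCopy("cloud-object", offsite=True, air_gapped=False, last_test_errors=0),
    BackupCopy("tape", offsite=True, air_gapped=True, last_test_errors=0),
]
print(meets_3_2_1_1_0(plan))  # True: all five conditions hold
```

Encoding the rule as a check rather than a slogan means coverage can be audited continuously – dropping any one copy from the example plan above makes the check fail.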
Business impact analyses
This requires frequently undertaking business impact analyses and defining recovery time objectives (RTOs) and recovery point objectives (RPOs), differentiating between critical, sensitive and low‑value data. Any backup strategy should also be integrated into incident response and recovery plans.
Popa adds: “From a CISO perspective, backups are no longer just an IT hygiene issue—they are a core cyber‑resilience control. What matters is not simply having backups, but ensuring they are secure, isolated where appropriate, regularly evaluated and aligned with business risk.”
“In today’s threat landscape, incidents are assumed to happen, so the ability to recover safely is critical. I strongly support the focus on immutability, restore testing and integration with incident response. Finally, backup strategy must be risk‑based and proportionate, reflecting business priorities rather than just technical convenience.”
Protect backups
This also relates to how backups are protected. Bad actors are increasingly targeting the backups themselves – even when they have been air-gapped – so measures need to be put in place to ensure their security cannot be breached. Backups shouldn’t therefore be considered absolute protection, particularly if they are stored in the same location as production systems. They remain crucial, however, for regaining access to breached or failed systems and for maintaining service continuity.
Datto writes: “With a traditional backup setup, the response is straightforward. Identify the breach, wipe affected systems and begin restoring data from backups. Depending on the size of the environment, this process can take hours or even days.” With WAN Acceleration, however, days can be reduced to hours or minutes – enabling organisations to achieve high levels of uptime.
So, the key causes of data loss aren’t always about cyber-security, and any business continuity and service continuity plan needs to factor in the other causes too. By thinking ahead and putting in place technologies that support accelerated data backups and restores, organisations can prevent downtime, reduce the chances of data leaks, and avoid failing to meet regulatory requirements. Prevention is cheaper than a cure – protecting operations and avoiding GDPR fines.