Coming out of the cloud – why businesses are repatriating their data

By Jon Fielding, Managing Director, EMEA, Apricorn.


Once hyped as the future of technology and business operations, the cloud promised unlimited scalability and potential. But there is now a growing realisation that some data and applications simply don’t belong in the ether, and a sense of unease over the lack of control. As a result, more organisations are choosing to repatriate their data, cancelling migrations and drawing data back down from the cloud onto on-premises infrastructure.

In many respects, it’s a move that indicates the maturation of the market. There’s now much greater awareness of where the cloud can add value versus where it can create lag. Those that adopted a ‘lift and shift’ approach, porting their applications lock, stock and barrel into the cloud, have come to realise that many of those applications then suffer from performance issues. Problems tend to arise when an application requires input from external databases, devices or humans, or depends on constant connectivity.

But it’s also becoming significantly easier to take that data back in-house. The EU Data Act, which became applicable on the continent in September 2025, removes many of the technical and contractual obstacles that previously made it complex and onerous for organisations to switch service providers or move data back on-premises. Cloud Service Providers (CSPs) are now only permitted to charge the costs they incur in making the switch, and by January 2027 they will not be able to impose any migration charges at all. If the UK follows suit, it will effectively see the end of CSP lock-in.

Politics and protectionism

The appetite for cloud also seems to be on the wane in light of recent geopolitical events. There’s a much greater awareness of the need for data sovereignty, for example, following the protectionist turn that has seen governments advocate home-grown solutions and seek to reduce the dominance of GAMMA (Google, Apple, Meta, Microsoft and Amazon).

It’s a movement that has resulted in some interesting repercussions with respect to data sharing. The US, for example, chose to extend the FISA provisions that allow US authorities to seize data domiciled outside the US. This has caused concern over the reach of the US authorities, as it effectively enables the government to demand access to data housed abroad by US companies that would not normally fall under US jurisdiction. And just how data is managed between nations continues to be subject to change. In the UK, which moved to UK GDPR post-Brexit, we benefit from the adequacy decision that allows data to flow freely between us and Europe, but this is due to expire on 27 December 2025.

With respect to the US, the flow of data is mainly protected by the UK-US Data Bridge, which came into effect in October 2023 as an extension of the EU-US Data Privacy Framework (DPF). It allows data transfer across the Atlantic provided the US recipient is DPF certified; otherwise, the business will need to fall back on a UK International Data Transfer Agreement. However, processing data overseas is costly because of the need to carry out due diligence, complete legal documentation and develop compliance strategies, making it simpler to store data in the country of origin.

Taking back control

The cloud exodus is also in part due to a general feeling that CSPs have been calling the shots for too long, with the Shared Responsibility Model a case in point. This sees the bulk of the responsibility for data protection, i.e. how data is secured, managed and accessed, fall to the organisation, with the CSP held accountable for the security monitoring of the cloud environment and infrastructure. It’s contentious because it’s not always clear who is responsible for what, particularly as many CSPs offer security as an add-on. And it’s an issue that’s about to become a whole lot more complex with the advent of AI, making it incredibly difficult to discern who is responsible if, for example, data leaks via a large language model (LLM).

Of course, another major reason for the retreat is the rising cost of capacity. A survey conducted by Forrester found that 72% of IT decision makers exceeded their cloud budgets in 2024, predominantly due to excessive storage. This suggests cloud optimisation strategies that tier data according to access priorities may not be working. Businesses now need not just to access data but to replicate it, for data analysis for instance, or to draw it down as part of disaster recovery, both of which result in surges in usage.

While halting migration is one thing, taking data and applications out of the cloud is likely to be quite painful, at least initially. The business will need to crunch the numbers on identifying and investing in servers, storage, networking systems and other associated hardware, as well as the cost of managing and keeping them updated. Yet it’s undoubtedly the right move for sensitive data sets and for improving resiliency.

Get set, 3-2-1-1-0

Consider the 3-2-1 rule of data storage, which advocates holding at least three copies of such data on at least two different types of media, with one copy stored offsite. This has now been superseded by the slightly different 3-2-1-1-0 rule. It recommends that three copies of the data are held in addition to the original, with the two different media options stored in a different location from the original.

The one-copy-held-offsite rule remains, although this should not be on the same platform as the original: if the original lives in the cloud, the offsite version will need to be either with a different provider or held on hardware. The second ‘one’ in the sequence stands for an air-gapped or immutable copy of the data which remains completely isolated, while the zero stands for zero errors. All too often, backups are created that cannot be restored, so the idea is to identify and resolve such errors during the process and through regular testing, so that the backup is error-free.
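As a rough illustration, the minimal sketch below checks a backup inventory against the five conditions the rule describes: three copies, two media types, one offsite, one air-gapped or immutable, and zero verification errors. It assumes a hypothetical inventory in which each copy is tagged with its media type and whether it is offsite, immutable and verified as restorable; the field names are illustrative rather than drawn from any particular backup product.

```python
from dataclasses import dataclass


@dataclass
class BackupCopy:
    media: str          # e.g. "disk", "tape", "cloud"
    offsite: bool       # stored away from the original's location
    immutable: bool     # air-gapped or write-once (immutable) copy
    verified_ok: bool   # last restore test completed with zero errors


def meets_3_2_1_1_0(copies: list[BackupCopy]) -> bool:
    """Check the five conditions of the 3-2-1-1-0 rule against an inventory."""
    return (
        len(copies) >= 3                              # 3: at least three copies
        and len({c.media for c in copies}) >= 2       # 2: at least two media types
        and any(c.offsite for c in copies)            # 1: at least one copy offsite
        and any(c.immutable for c in copies)          # 1: one air-gapped/immutable copy
        and all(c.verified_ok for c in copies)        # 0: zero errors on restore tests
    )


if __name__ == "__main__":
    inventory = [
        BackupCopy(media="disk", offsite=False, immutable=False, verified_ok=True),
        BackupCopy(media="cloud", offsite=True, immutable=False, verified_ok=True),
        BackupCopy(media="tape", offsite=True, immutable=True, verified_ok=True),
    ]
    print("3-2-1-1-0 compliant:", meets_3_2_1_1_0(inventory))
```

In practice the verification flag would be fed by the results of regular restore tests, which is precisely the discipline the ‘zero’ in the rule is meant to enforce.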

The new sequence aims to make backups far more reliable and robust, but it also emphasises that data storage needs to be varied. Relying on the cloud for backups can be problematic because there are multiple charges for storage, testing and data transfer, and as those begin to mount, the business inevitably opts for a bare-bones backup function, leaving it more exposed. In contrast, localised data storage can reduce the time needed to recover from backup following an incident, making it easier to get back to business as usual.

Both have their place, but there’s little doubt that many are now beginning to think much more selectively about how they use the two when storing their data. While the cloud might provide scalability and redundancy, it also comes at a cost, from the material cost of consumption to the cost of control and ownership. On-premises storage, on the other hand, can be used to wrest back some of that control while providing more immediate access, making it a necessary adjunct for data storage.
