If uptime is the new black, edge computing can keep you on trend

Alan Conboy, Office of the CTO at Scale Computing, explains how edge computing and hyperconvergence can ensure businesses remain “always-on”.

  • Monday, 17th February 2020, by Phil Alsop

We live in a world increasingly driven by technology and data, so any length of downtime is bad news for everyone. Every business should be concerned with mitigating dreaded downtime, as the threat of damage to customers and reputation looms alongside the inevitable mammoth cost of recovery and recompense. The real cost of downtime was demonstrated clearly by British Airways’ recent high-profile data centre outage, which led the company to fork out an estimated £100 million after cancelling more than 400 flights and stranding 75,000 passengers in one day.

Unlike a well-established enterprise such as BA, a small business may never fully recover from the monetary and reputational consequences of downtime. On top of this, SMEs are often stretched for budget and skilled staff compared with an established enterprise that can afford the latest threat-detecting, data-protecting technology and a multitude of expert IT staff to manage it all.

But there is an answer for smaller businesses that need to remain “always-on”. In the face of increasingly sophisticated cyber threats and customer demand for 24/7 uptime, SMEs, along with businesses in distributed industries such as retail or financial services, can look to the latest technologies like hyperconvergence and edge computing. Combined, these solutions provide that crucial high availability, along with a lower total cost of ownership (TCO) and easy deployment and management without the need for on-site IT experts. Thanks to hyperconvergence and edge computing, distributed organisations are realising a sophisticated, cutting-edge cyber-security strategy that mitigates the risk of costly and damaging downtime.

Making the single point of failure a thing of the past

In a traditional architecture, the central data centre is a single point of failure: an outage there can affect every branch location that connects to it, such as point-of-sale tills in retail stores. Edge computing, by contrast, is all about putting computing resources close to where they are needed. A failure at the central data centre no longer has to bring everything down, because each branch can run independently of it. A solid virtualised environment at each site can run all of the different applications needed to provide customers with the high-tech services they have come to expect.
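
To make this concrete, here is a minimal sketch, assuming a hypothetical branch point-of-sale service and a hypothetical client for the central data centre (neither reflects any specific vendor’s API). Sales are committed locally at the edge and merely queued for head office, so a central outage never stops the tills.

```python
import queue

# A minimal sketch, assuming a hypothetical "central_api" client for the head-office
# data centre; the class and method names are illustrative, not a real product API.
class BranchPOS:
    """A branch till that commits sales locally and syncs to head office when it can."""

    def __init__(self, central_api):
        self.central_api = central_api   # assumed to expose an upload(sale) method
        self.outbox = queue.Queue()      # stand-in for a durable local journal

    def record_sale(self, sale):
        # 1. Commit the sale against local, highly available edge infrastructure first,
        #    so the customer is never kept waiting on a WAN link.
        print(f"Sale {sale['id']} committed at the branch")
        # 2. Queue it for head office; a central outage only delays reporting.
        self.outbox.put(sale)

    def sync_to_central(self):
        # Called periodically: drain the outbox while the central data centre is reachable.
        while not self.outbox.empty():
            sale = self.outbox.get()
            try:
                self.central_api.upload(sale)
            except ConnectionError:
                self.outbox.put(sale)    # central site is down: keep trading, retry later
                break
```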

Some might ask why this hasn’t been done before, and there is a straightforward answer: until very recently, it was not cost-effective to implement the highly available infrastructure needed to make it work. That infrastructure would mean investing a sizeable amount in a shared-storage appliance, multiple host servers and hypervisor licensing, as well as a disaster recovery solution.

Why hyperconvergence works

While not all hyperconverged infrastructure (HCI) solutions are cost-effective for edge computing, hyperconvergence can consolidate those components into an easy-to-deploy, low-cost solution. The issue is that some HCI solutions are designed like traditional virtualisation architectures and emulate SAN technology to support that legacy approach. The result is resource inefficiency and a need for bigger systems whose cost does not fit edge computing.

However, HCI with hypervisor-embedded storage can offer a smaller, highly available and cost-effective infrastructure that allows each branch location to run independently, even if the central data centre goes down. A cluster of three HCI appliances will keep running through drive failures, or even the failure of an entire appliance. There is no way to prevent downtime entirely, but edge computing, built on the right highly available infrastructure, insulates branches so they can continue operating independently.
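
As a rough, back-of-the-envelope illustration of why three appliances is the usual minimum, assume each piece of data is mirrored on two nodes and the cluster needs a simple majority of nodes to stay up (a common design, though not something spelled out here, and not a statement about any particular HCI product):

```python
def tolerable_appliance_failures(nodes: int, mirror_copies: int = 2) -> int:
    """Rough estimate of how many appliances a cluster can lose and keep running,
    assuming two-way mirrored storage and a simple majority quorum (assumptions
    for illustration only)."""
    quorum_limit = (nodes - 1) // 2    # failures before a majority of nodes is lost
    data_limit = mirror_copies - 1     # failures before some data has no surviving copy
    return min(quorum_limit, data_limit)

print(tolerable_appliance_failures(3))   # -> 1: a three-node cluster rides out one failure
print(tolerable_appliance_failures(2))   # -> 0: why a pair of nodes isn't enough on its own
```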

With HCI, the central data centre remains a vital piece of the overall IT infrastructure, and edge computing doesn’t change that. HCI can consolidate data from all of the branch locations for the analysis behind key business decisions. On-site edge computing platforms provide local computing while communicating key data back to the central data centre for reporting and analytics. With the central data centre no longer a single point of failure, an outage at any one location shouldn’t have wide-reaching effects across the entire organisation.
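
Continuing the earlier sketch, a branch might push only a daily summary of key figures back for central reporting, with local trading unaffected if the link is down. The function and field names here are again illustrative assumptions, not a product API.

```python
from datetime import date

def daily_summary(branch_id, sales):
    """Condense a branch's local transaction log into the key figures head office needs."""
    return {
        "branch": branch_id,                          # e.g. "store-042" (hypothetical ID)
        "date": date.today().isoformat(),
        "transactions": len(sales),
        "revenue": sum(sale["amount"] for sale in sales),
    }

def report_to_central(central_api, branch_id, sales):
    # Best effort: if the central data centre is unreachable, the branch keeps
    # trading locally and simply reports when connectivity returns.
    try:
        central_api.upload_summary(daily_summary(branch_id, sales))  # assumed method
    except ConnectionError:
        pass  # retry on the next reporting cycle
```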

Keep up with the rest of the world  

Technology is not just revolutionising business; it has become second nature in our everyday lives, both at work and at home. Because of this, expectations and demands have shifted dramatically, and high availability keeps rising to the top of our priorities.

Whether your business is a single small location or hundreds of distributed branches, we all have to keep up with the changing landscape. HCI and edge computing can replace traditional virtualisation infrastructure and make that necessary high availability possible for any business.