Mapping modern IT – build and maintain dynamic visibility across all your assets

A map is a useful tool. It can help you understand where you are, where you plan to be and how to get there. However, it is also constrained by how its makers understand the world. The Hereford Mappa Mundi was state of the art when it was created in around 1300 AD, bringing together geography, Christian history and Greco-Roman myth in one place. To eyes used to views from space, Google Earth and 3D maps on a phone, however, it looks anachronistic.

By Marco Rottigni, Qualys.

Friday, 6th September 2019

For IT teams, visibility of IT assets presents a similar problem. When every IT asset sat on the company network in a single data centre and lived on a specific desk, getting a view of all these items in one place was simple. Today, with more assets held in the cloud and more mobile endpoints than ever before, what was effective in the past is no longer enough. State-of-the-art visibility must be dynamic; any approach that does not cover all of your IT assets is anachronistic, and leaves stale assets unaccounted for.

 

Viewing on-premise IT in context

For traditional data centres and networks, the challenge is gaining visibility across the sheer volume of IT assets involved. In the past, an enterprise admin might have managed 1,000 devices; today the ratio is closer to one admin per 10,000 devices; and with the explosive growth of IoT, it will head towards one per 100,000. In this environment, visibility means looking across all the servers, clients, and network and security devices.

 

To complicate this further, all of these devices and assets will be running a variety of operating system software, depending on how up to date they are. There is the potential for different operating system platforms, different versions of each, and device-specific OS builds, such as the custom Linux distributions used on IP phones and IoT devices.

 

Alongside this, virtualisation remains the de facto standard in most data centres because it provides greater agility and flexibility in instantiating new servers and responding to business demands. Virtualisation can also be extended to desktops, where end clients are imaged and made available for rapid access.

 

Obtaining dynamic, centralised visibility is challenging, and managing all of these assets is harder still, but gaining continuous, global visibility across a heterogeneous and rapidly growing set of devices and assets is extremely important. Just as an accurate map is pivotal for tasks such as navigating from point A to point B or delineating territory, visibility in IT is a fundamental requirement for many IT processes and initiatives. Visibility facilitates a true understanding of risk, drives efficiency and cost reduction, and ensures compliance. In fact, most security frameworks and compliance regimes list a complete, up-to-date inventory as their first requirement.

Each platform needs tending to stop problems from developing over time and to ensure that machines are not left vulnerable. This means understanding the processes that should run to keep each set of assets up to date. Contemporary map platforms have to supply up-to-date street, satellite, terrain and traffic tilesets, while new data overlays, such as business data or traffic information, are needed to reflect a world in constant change. In the same way, today's IT asset lists have to keep up with the rapid, ongoing evolution of IT.

 

For example, how do you identify and decommission end-of-life or end-of-sale software? Equally, when software installations are decommissioned because they are too vulnerable, how do you know they have actually been removed from your machine images? How do you prioritise remediation, based on the real exposure and exploitability of the vulnerable surface? And crucially, how long does it take you to become aware of the situation in the first place?

 
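At its simplest, answering the first of those questions means joining an asset inventory against an end-of-life reference list. The Python sketch below is purely illustrative: the inventory.csv columns and the hard-coded EOL_DATES table are assumptions, not any particular product's schema or data feed.

```python
import csv
from datetime import date

# Hypothetical end-of-life reference data; in practice this would come
# from vendor lifecycle feeds rather than a hard-coded table.
EOL_DATES = {
    ("Windows Server", "2008 R2"): date(2020, 1, 14),
    ("CentOS", "6"): date(2020, 11, 30),
}

def find_eol_assets(inventory_path, today=None):
    """Yield assets whose installed software is past its end-of-life date."""
    today = today or date.today()
    with open(inventory_path, newline="") as f:
        # Assumed CSV columns: hostname, product, version
        for row in csv.DictReader(f):
            eol = EOL_DATES.get((row["product"], row["version"]))
            if eol and eol < today:
                yield row["hostname"], row["product"], row["version"], eol

for host, product, version, eol in find_eol_assets("inventory.csv"):
    print(f"{host}: {product} {version} reached end of life on {eol}")
```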

All of these questions apply to deployed assets – from desktops and devices through to servers and virtual machines. Each will have its own deployment method and set of standard images that need to be maintained and updated, and each device will have to be scanned for new issues or variances over time. Bringing information on all of these asset sets together in one place makes the process easier; mapping out asset lists from this data – and keeping those lists continuously up to date – also makes it easier to manage priorities around changes and security issues.

 
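One way to picture that consolidation is to normalise the records coming from each tool (agent, scanner, CMDB export) into a common shape keyed on a stable identifier. The sketch below is a toy model under that assumption; the field names and the choice of hostname as the key are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# A deliberately minimal, hypothetical record; real inventories track far
# more attributes (owner, location, installed software, open ports, ...).
@dataclass
class Asset:
    hostname: str
    sources: set = field(default_factory=set)
    os: Optional[str] = None
    last_seen: Optional[str] = None  # ISO-8601 strings compare lexically

def merge(records):
    """Fold per-source records into a single asset list keyed by hostname."""
    assets = {}
    for rec in records:  # assumed keys: hostname, source, os, last_seen
        asset = assets.setdefault(rec["hostname"], Asset(rec["hostname"]))
        asset.sources.add(rec["source"])
        asset.os = asset.os or rec.get("os")
        seen = rec.get("last_seen")
        if seen and (asset.last_seen is None or seen > asset.last_seen):
            asset.last_seen = seen  # keep the most recent sighting
    return list(assets.values())
```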

Cloud, containers and context

However, just as maps have to change based on new ways of creating or handling data, IT is evolving rapidly too. This year, IDC reported that investment in cloud infrastructure has, for the first time, surpassed investment in traditional IT hardware: in Q3 2018, quarterly vendor revenues from IT infrastructure products sold into cloud environments exceeded revenues from sales into traditional IT environments, up from a 43.6 per cent share a year earlier.

 

Cloud adoption is a subtle expansion of the way companies have to approach IT, taking advantage of powerful and flexible infrastructure available on demand. However, it is not possible simply to transpose traditional computing management approaches into the cloud and hope for success. Even moving existing applications into the cloud will mean using different platforms and implementing new tools to manage those services.

 

Alongside the simplest "lift and shift" deployments, there will be new cloud-based applications that take advantage of new services. These applications are often deployed as microservices, which involves disassembling traditional applications into their component parts: storage, application logic, functions, networking, load balancing, database, identity and access management, and so on.

 

Each of these elements will have to be managed and kept up to date, and this visibility raises additional questions. How can you know where and when these parts are instantiated? How do you manage their security? How do you add this expanded universe to your existing approach? Whereas on-premise IT could be mapped through traditional agent deployments and regular passive network scanning, these techniques have to evolve to keep pace with cloud deployments that can change far more frequently.

 
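Cloud providers do at least expose APIs that make this enumeration scriptable, provided it is run continuously rather than as an occasional audit. As a hedged illustration, the sketch below polls one provider (AWS, via boto3) for running instances; other clouds offer equivalent calls, and the region and fields selected here are arbitrary choices.

```python
import boto3  # AWS SDK for Python; one provider used purely as an example

def running_instances(region="eu-west-1"):
    """Snapshot the EC2 instances running right now, so they can be fed
    into the same inventory as on-premise assets."""
    ec2 = boto3.client("ec2", region_name=region)
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                yield {
                    "id": inst["InstanceId"],
                    "type": inst["InstanceType"],
                    "launched": inst["LaunchTime"].isoformat(),
                    "name": tags.get("Name", ""),
                }
```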

Alongside cloud, another deployment approach is growing in popularity. Containerisation uses small, dedicated software wrappers to package together everything an application component needs to run. Rather than including a full operating system, a container holds only what the specific component requires and no more. Limiting the container in this way makes it smaller and more efficient, while the ability to run multiple containers alongside each other helps applications scale up when they need more power.

 

Containers can, therefore, meet enterprise demands for agility, flexibility and raw power. However, they can reduce visibility of what is deployed and where. The ability to respond quickly to peaks and troughs in demand helps enormously when lots of customers hit your applications at the same time; conversely, this ephemeral infrastructure is difficult to track. The images themselves need to be updated and any remediation managed over time, which is all the more challenging when instances come and go purely on demand.

 
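To make the tracking problem concrete, here is a minimal sketch that snapshots the containers on a single host using the Docker SDK for Python. It illustrates the visibility gap rather than solving it, and assumes local access to the Docker daemon.

```python
import docker  # Docker SDK for Python

# A single Docker host, used for illustration; a real estate would query
# the orchestrator (e.g. the Kubernetes API) across every node instead.
client = docker.from_env()

# Snapshot what is running at this moment. Because containers are
# ephemeral, this only has value if it is repeated continuously.
for c in client.containers.list():
    image = c.image.tags[0] if c.image.tags else c.image.short_id
    print(f"{c.name}: image={image} status={c.status}")
```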

For cloud and container deployments that run continuously, providing management information therefore means looking at what is taking place continuously as well. By gathering information on activity all the time, IT can keep a check on every running asset and whether it is up to date. More importantly, this information should be brought into the same place as the data on traditional endpoints and on-premise IT assets, so that a complete picture of the estate can be generated. This provides context on any issues that come up, and helps determine which updates are deployed first based on business and IT priorities.

 
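Tying the earlier sketches together, "looking continuously" can be as simple in outline as a scheduled loop that polls each collector and folds the results into one store. The shape below is an assumption-laden sketch; real platforms stream events rather than poll, and persist to a proper database.

```python
import time

def refresh_inventory(collectors, store, interval=300):
    """Poll every source on a fixed cadence and fold the results into one
    store. `collectors` are callables returning iterables of records with
    an 'id' key; `store` is any dict-like object. Both are stand-ins for
    real agents, scanners and cloud connectors."""
    while True:
        for collect in collectors:
            for record in collect():
                record["last_seen"] = time.time()
                # Merge the fresh record over whatever was known before.
                store[record["id"]] = {**store.get(record["id"], {}), **record}
        time.sleep(interval)
```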

Managing IT in an age of cloud

To be successful in mapping out IT in this distributed era, it’s important to look at five key areas:

 

        Visibility – are you able to look across your whole IT landscape and get data on what is in place, including campus, cloud and data centre?

        Immediacy – beyond knowing what is in place, can you get that information updated right now?

        Accuracy – can you bring that data together and normalise it, so that you have a single source of truth on what is installed? More importantly, is it as accurate as possible?

        Scale – can you cope with all the different environments in place, and can you keep up as the number of physical, geographical and computing environments grows?

        Situational awareness – can you monitor your data in real time and understand what is taking place? More importantly, can you adjust your priorities and requirements to see the most pressing issues at any point in time?

 

By looking at these five priorities, you can improve your management of IT assets over time, regardless of the platforms deployed or the changes taking place. With this accurate data, you can improve your response to new challenges around security, compliance or risk. And by understanding what makes each platform tick – and the processes teams follow around it – you can help your IT teams collaborate and prioritise their efforts too.

 

Mapping out your IT relies on good-quality, accurate data that can be updated continuously. Bringing this data together in one place is essential if you want to be able to take the right path in the future.