Rapid insights and enhanced AI are more important than ever

By Ram Chakravarti, chief technology officer, BMC Software.


When I've talked with business leaders in recent months, the conversation inevitably turns to ChatGPT and the rise of generative AI.

Nearly universally, executives recognise that the technology represents a paradigm shift for business, a tool that could radically change their enterprise. Invariably, I also hear them say that they and the people who work for them already have full workloads. They're not entirely sure how to proceed, and they don't have time to learn the intricacies of the new AI models, develop a plan that would dramatically transform their business and internal workflows, and then implement it.

There is a path forward to realising the value of AI in the enterprise—and it doesn't involve a wholesale transformation of the business in one sweeping change, as some chest-thumping futurists would tell you. However, it does leverage existing assets that many businesses have long tried to gain more value from.

For executives wondering how to move forward with AI, three main points can help guide the process. First, AI and big data are inextricably linked. Second, it's still early days for large language models (LLMs), and they are best deployed in domain-specific models. Third, the absolute key to extracting value from AI is the ability to operationalise it.

AI And Data Are Intertwined

A few short years ago, we used to hear about data lakes—that single repository of firm-wide data built to accelerate insights. Today, companies talk about data oceans. In this age of cloud computing, IoT devices and social media, data volumes only keep climbing.

Yet, depending on the industry, somewhere between 40% and 90% of data goes unused. Companies have so much data that they don't know what to do with it.

For AI to have value, it must be trained on high-quality datasets. For many use cases, the quality of the datasets matters just as much as the volume of data. At the same time, data volumes are so large that organisations can't unravel meaning from their data without AI. AI and data are intricately intertwined, one unlocking the value of the other.

LLMs Are Still In A Nascent Stage

There's tremendous hype over LLMs because they allow users to interact with systems using the same language we would use to talk to a friend or colleague. The potential for the democratisation of once-complex tasks is immense. At first glance, LLMs seem to have almost unlimited potential.

Most organisations should look to apply AI to specific use cases using domain-specific models that can provide immediate value. Further, they should team up with strategic partners (software vendors and systems integrators) from concept through implementation and value realisation. Finally, they should ensure that the solution addresses all of the elements of risk—security, accuracy, quality, privacy, biases and ethics—to be viable for operationalisation.

Company-wide transformation isn't going to happen overnight. However, organisations can look for well-defined projects that can generate success and provide their teams with the experience they need for future iterations.

Operationalising Innovation

What does a successful marriage of data and domain-specific AI look like in action? Let's consider IT operations and service management to illustrate the concept. On the IT operations side, organisations have a large volume of data—metrics, logs, events, traces, network, storage, application performance data and cloud monitoring data—extracted from various environments. This data can be linked to the service context of the business such as tickets, downtime and maintenance requests.
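To make that linking concrete, here is a minimal, purely illustrative sketch in Python of correlating operations events with service-desk tickets by shared configuration item and time window. The class names, fields and 30-minute window are assumptions for illustration, not any particular product's implementation.

```python
# Illustrative sketch: link operations events to service tickets by
# configuration item (CI) and a time window before the ticket opened.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class OpsEvent:
    ci: str          # configuration item, e.g. "payments-db-01"
    severity: str
    message: str
    timestamp: datetime

@dataclass
class Ticket:
    ticket_id: str
    ci: str
    opened: datetime
    summary: str

def link_events_to_tickets(events, tickets, window=timedelta(minutes=30)):
    """Attach each ticket to the events on the same CI that occurred
    within `window` before the ticket was opened."""
    linked = {}
    for ticket in tickets:
        linked[ticket.ticket_id] = [
            e for e in events
            if e.ci == ticket.ci
            and ticket.opened - window <= e.timestamp <= ticket.opened
        ]
    return linked

# Example: a disk-latency warning precedes a payments outage ticket.
events = [OpsEvent("payments-db-01", "warning",
                   "disk latency above threshold",
                   datetime(2024, 4, 1, 9, 40))]
tickets = [Ticket("INC0012345", "payments-db-01",
                  datetime(2024, 4, 1, 10, 0), "Payments service degraded")]
print(link_events_to_tickets(events, tickets))
```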

This marriage of service and operations data has become known as ServiceOps, and it's commonly used to drive collaboration across the organisation, automating routine tasks and providing advance warning of disruptions. By training and fine-tuning an LLM on ServiceOps domain-specific data, organisations can identify patterns and generate previously unobtainable information such as resolution insights, business risk prediction and more.
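Once operations and service records are linked, they can be reshaped into training material. The sketch below, again illustrative only, turns linked records into prompt/completion pairs written to a JSONL file; the field names and output format are assumptions, not a prescribed fine-tuning pipeline.

```python
# Illustrative sketch: convert linked ServiceOps records into
# supervised fine-tuning examples (prompt/completion pairs in JSONL).
import json

records = [
    {
        "ticket_summary": "Payments service degraded",
        "events": ["disk latency above threshold on payments-db-01"],
        "resolution": "Moved payments-db-01 to a faster storage tier",
    },
]

with open("serviceops_finetune.jsonl", "w") as f:
    for r in records:
        example = {
            "prompt": (
                "Incident: " + r["ticket_summary"] + "\n"
                "Preceding events: " + "; ".join(r["events"]) + "\n"
                "Suggest a resolution:"
            ),
            "completion": r["resolution"],
        }
        f.write(json.dumps(example) + "\n")
```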

Generative AI has the potential to democratise complex tasks by letting users interact with systems in natural language. It can also automate cybersecurity defences against attacks that are themselves launched at machine speed, faster than humans could possibly react.

Despite generative AI being in a nascent stage, organisations can reap its benefits by operationalising high-value use cases with a pragmatic approach: domain-specific LLMs complemented by the requisite guardrails, such as prompt engineering orchestration, security and regulatory compliance, built into the solution.
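As a small illustration of what such a guardrail might look like in practice, the sketch below wraps a model call with basic redaction of email addresses and card-like numbers before a prompt leaves the organisation. The redaction patterns and the call_llm placeholder are assumptions; a production solution would add authentication, audit logging and policy checks on the model's response as well.

```python
# Illustrative guardrail sketch: redact obvious identifiers, then
# forward the prompt to whatever model endpoint call_llm wraps.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact(text: str) -> str:
    """Mask personal and payment identifiers before the prompt is sent."""
    text = EMAIL.sub("[REDACTED_EMAIL]", text)
    text = CARD.sub("[REDACTED_CARD]", text)
    return text

def guarded_prompt(user_text: str, call_llm) -> str:
    """Apply redaction, then hand the prompt to the supplied model function."""
    return call_llm(redact(user_text))

# Usage with a stand-in model function:
print(guarded_prompt("Contact jane.doe@example.com about ticket INC0012345",
                     call_llm=lambda p: f"[model would answer: {p!r}]"))
```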
