Critical gaps in responsible AI practices

Qlik has sponsored a study by TechTarget’s Enterprise Strategy Group (ESG) to shine a light on the state of responsible AI practices across industries.

  • Friday, 26th April 2024, posted by Phil Alsop

This landmark research delves into the pressing need for robust ethical frameworks, transparent AI operations, and cross-industry collaboration to navigate the complexities of AI integration into business processes. The findings underscore the urgency for organisations to prioritise responsible AI, not only to adhere to emerging regulations but also to foster trust and inclusivity in AI-driven innovations.

The ESG research report reveals insightful data on the adoption, challenges, and strategic initiatives surrounding responsible AI:

Widespread Adoption of AI Technologies: An overwhelming 97% of surveyed organisations are actively engaging with AI, with a significant portion (74%) already incorporating generative AI technologies in production. This marks a notable shift towards AI-driven operations across sectors.

Investment Versus Strategy Gap: While all respondents report active investment in AI, 61% are dedicating a substantial budget to these technologies. However, there is a notable gap in strategic planning: 74% of organisations admit they still lack a comprehensive, organisation-wide approach to responsible AI.

Challenges in Ethical AI Practices: The report highlights several key challenges faced by organisations, including:

  • A significant 86% face challenges in ensuring transparency and explainability in AI systems, pointing to a critical need for solutions that demystify AI processes.

  • Nearly all organisations (99%) face hurdles in staying compliant with AI regulations and standards, underscoring the complex regulatory landscape surrounding AI technologies.

Operational Impact and Prioritisation of Responsible AI: Despite the challenges, a robust 74% of organisations rate responsible AI as a top priority, signalling a growing recognition of its importance. Yet over a quarter of organisations have encountered increased operational costs, regulatory scrutiny, and market delays due to inadequate responsible AI measures.

Stakeholder Engagement in AI Decision-making: The research emphasises a broad stakeholder landscape in the realm of responsible AI, with a significant emphasis on IT departments playing a proactive role. This highlights the necessity for inclusive and collaborative approaches in ethical AI deployment and governance.

In light of the ESG Research findings, Qlik recognises the imperative of aligning AI technologies with responsible AI principles. The company’s initiatives in this area are grounded in providing robust data management and analytics capabilities, essential for any organisation aiming to navigate the complexities of AI responsibly. Qlik underscores the importance of a solid data foundation, which is critical for ensuring transparency, accountability, and fairness in AI applications.

Qlik's commitment to responsible AI extends to its approach to innovation, where ethical considerations are integrated into the development and deployment of its solutions. By focusing on creating intuitive tools that enhance data literacy and governance, Qlik aims to address key challenges identified in the report, such as ensuring AI explainability and managing regulatory compliance effectively.

Brendan Grady, General Manager, Analytics Business Unit at Qlik, said, “The ESG Research echoes our stance that the essence of AI adoption lies beyond technology—it's about ensuring a solid data foundation for decision-making and innovation. At Qlik, we empower businesses not just to deploy AI but to integrate it meaningfully, aligning with their strategic objectives. This study underscores the importance of responsible AI integration as a catalyst for sustainable and impactful organisational growth.”

Michael Leone, Principal Analyst at ESG, commented, “Our research confirms the growing adoption of AI across industries, but it also highlights a gap in effectively implementing responsible AI practices. As organizations accelerate their AI initiatives, the necessity for a solid foundation that supports ethical guidelines and robust data governance becomes crucial. This research aims to guide enterprises in fostering responsible innovation that aligns with both business objectives and ethical standards.”
