Majority of IT professionals welcome greater AI regulation

Government action to address AI security and privacy concerns is supported by 88% of IT pros.

  • Wednesday, 18th September 2024 | Posted by Phil Alsop

A striking 88% of IT professionals are calling for stronger government regulation of artificial intelligence (AI) — according to a new survey from SolarWinds.

The survey of nearly 700 IT professionals reveals that security tops the list of AI concerns, with nearly three-quarters (72%) emphasising the need for measures to secure infrastructure. Privacy is another major worry, with almost two-thirds (64%) of IT professionals calling for stronger rules to safeguard sensitive information.

Additionally, a majority (55%) believe government intervention is crucial to curb the spread of misinformation through AI, while half (50%) support regulations focused on ensuring transparency and ethical practices in AI development. These findings come as the EU’s AI Act comes into effect; the new Labour government also proposed an AI act in the most recent King’s Speech.

Closer to home for IT pros, the survey further reveals a troubling lack of trust in data quality — which is essential for successful AI implementation. Only 38% of respondents consider themselves ‘very trusting’ of the data quality and training used in AI systems. Additionally, 40% of IT leaders who have encountered issues with AI attribute these problems to algorithmic errors stemming from insufficient or biased data.

As a result, data quality is identified as the second most significant barrier to AI adoption (16%), following security and privacy risks.

Concerns about database readiness are also widespread. Less than half (43%) of IT professionals are confident in their company’s ability to meet the increasing data demands of AI. This lack of preparedness is compounded by the fact that 46% of respondents believe their companies are not moving quickly enough to implement AI, partly due to ongoing data quality challenges.

Rob Johnson, VP and Global Head of Solutions Engineering at SolarWinds, commented on these findings:

“It is understandable that IT leaders are approaching AI with caution. As technology rapidly evolves, it naturally presents challenges typical of any emerging innovation. Security and privacy remain at the forefront, with ongoing scrutiny by regulatory bodies. However, it is incumbent upon organisations to take proactive measures by enhancing data hygiene, enforcing robust AI ethics and assembling the right teams to lead these efforts.

“This proactive stance not only helps with compliance with evolving regulations but also maximises the potential of AI. High-quality data is the cornerstone of accurate and reliable AI models, which in turn drive better decision-making and outcomes. Trustworthy data builds confidence in AI among IT professionals, accelerating the broader adoption and integration of AI technologies.”
