Data Integrity in AI: Navigating the Path to Trust and Transparency in Tech

By Barb Hyman, CEO and founder of Sapia.ai.

Tuesday, 27th February 2024. Posted by Phil Alsop.

It's hard to think of another technology that has generated as much buzz and discourse in its first year of existence. Within 12 months, AI has evolved from a fringe technology issue into a board-level talking point in organizations. Given that pace of development, it's difficult to predict where the trend will stand in 2024, but I would hazard a guess that it will revolve around two key themes: data and trust.

First off, data is everything with AI. An AI business is only as valuable as the data pool it has access to. In this regard, the EU has recently taken steps to mandate that AI tools disclose the data sources they use.

This should prompt a broader debate about which data should be used to build and fuel these models. The long-running question of data ownership and individuals' rights to their own data has largely been ignored.

That is because, for the individual, there has been little consequence to loose controls over personal data. The unauthorized use of that data in AI models, however, is likely to change opinions on the matter. This feeds into my second point: trust. While regulation may not come into effect for at least a few years, AI companies should be building trust with their customers now by being transparent about their data.

We’re at a crucial juncture in AI adoption, where building and maintaining trust is everything. Many of the questions we receive about our platform concern data usage, and understandably so: much of the information candidates provide during the hiring process is personal. Applicants often ask, for instance, whether the company will retain their responses and whether they have the right to request their removal or deletion after the hiring process. Our policy is that while we store the data, users retain ownership and can manage it as they see fit.

I also strongly emphasize to our team the importance of ensuring our platform does not use external data sources, such as web searches or social media, in our models. These decisions weren’t made to pre-empt regulation; we put them in place early to build trust in our platform. No matter how sophisticated our technology is, if we don’t have that trust, it won’t be used.

Right now, we are only scratching the surface of what AI is capable of, and in doing so we are testing ethical boundaries. The ordinary person is also warming to the idea of actively engaging with an AI rather than a person. My hope is that AI companies will go into 2024 with these points in mind, acting in the best interests of society ahead of any potential regulation. It will be a fascinating conversation as we progress through 2024 and continue to find new and exciting ways to leverage AI.
