AI Cloud leader DataRobot has released its State of AI Bias Report, produced in collaboration with the World Economic Forum and global academic leaders. The report examines how the risk of AI bias can impact today's organisations, and how business leaders can manage and mitigate that risk. Based on a survey of more than 350 organisations across industries, the findings reveal that many leaders share deep concerns about the risk of bias in AI (54%) and a growing desire for government regulation to prevent it (81%).
AI is an essential technology for accelerating business growth and driving operational efficiency, yet many organisations struggle to implement it effectively and fairly at scale. More than one in three (36%) organisations surveyed have experienced challenges or direct business impact from AI bias in their algorithms, including:
Lost revenue (62%)
Lost customers (61%)
Lost employees (43%)
Incurred legal fees due to a lawsuit or legal action (35%)
Damaged brand reputation/media backlash (6%)
“DataRobot’s research shows what many in the artificial intelligence field have long known to be true: the line of what is and is not ethical when it comes to AI solutions has been too blurry for too long,” said Kay Firth-Butterfield, Head of AI and Machine Learning, World Economic Forum. “The CIOs, IT directors and managers, data scientists, and development leads polled in this research clearly understand and appreciate the gravity and impact at play when it comes to AI and ethics.”
While organisations want to eliminate bias from their algorithms, many struggle to do so effectively. The research found that 77% of organisations had an AI bias or algorithm test in place prior to discovering bias (a sketch of what such a test can look like follows the list below). Despite significant focus and investment in removing AI bias across the industry, organisations still face clear challenges in eliminating it:
Understanding the reasons for a specific AI decision
Understanding the patterns between input values and AI decisions
Developing trustworthy algorithms
Determining what data is used to train AI
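The report does not describe the specific tests these organisations ran, but the kind of "AI bias or algorithm test" referenced above can be illustrated with a simple fairness check. The sketch below, using hypothetical decisions and group labels in plain Python, computes the demographic parity difference: the gap between groups in the rate of favourable model outcomes, one common screening metric for bias.

```python
# Minimal illustrative sketch of an algorithmic bias check (not
# DataRobot's methodology). It computes the demographic parity
# difference: the gap between groups in the rate of favourable
# model decisions.
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Return the gap in positive-prediction rate across groups,
    plus the per-group rates."""
    rates = {
        group: float(y_pred[sensitive == group].mean())
        for group in np.unique(sensitive)
    }
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model decisions (1 = approved) and group membership.
y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap, rates = demographic_parity_difference(y_pred, groups)
print(f"Positive rates by group: {rates}")          # {'A': 0.8, 'B': 0.2}
print(f"Demographic parity difference: {gap:.2f}")  # 0.60 -> flag for review
```

A gap near zero does not prove fairness (demographic parity is only one of several competing definitions), but a large gap gives teams a concrete, auditable signal of the kind these tests screen for.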
“The market for responsible AI solutions will double in 2022,” wrote Forrester VP and Principal Analyst Brandon Purcell in his report Predictions 2022: Artificial Intelligence. Purcell continues, “Responsible AI solutions offer a range of capabilities that help companies turn AI principles such as fairness and transparency into consistent practices. Demand for these solutions will likely double next year as interest extends beyond highly regulated industries into all enterprises using AI for critical business operations.”
DataRobot’s Trusted AI Team of subject-matter experts, data scientists, and ethicists is pioneering efforts to build transparent and explainable AI with businesses and industries across the globe. Led by Ted Kwartler, VP of Trusted AI at DataRobot, the team’s mission is to deliver ethical AI systems and actionable guidance for a customer base that includes some of the largest banks in the world, top U.S. health insurers, and defence, intelligence, and civilian agencies within the U.S. federal government.
“The core challenge to eliminating bias is understanding why algorithms arrived at certain decisions in the first place,” said Kwartler. “Organisations need guidance when it comes to navigating AI bias and the complex issues attached. There has been progress, including the EU’s proposed AI principles and regulations, but there’s still more to be done to ensure models are fair, trusted and explainable.”
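One widely used way to approach that "why" question, shown here as an illustration rather than as DataRobot's own tooling, is to measure how much each input feature drives a model's decisions. The sketch below applies scikit-learn's permutation importance to a hypothetical model trained on synthetic data.

```python
# Illustrative sketch (not DataRobot's tooling): permutation importance
# asks "how much worse does the model get if we scramble one feature?",
# a first step toward understanding why a model decides as it does.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical tabular data standing in for a real decisioning dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: mean accuracy drop = {score:.3f}")
```

Feature-level importance scores like these are only a starting point for explainability, but they give teams a concrete answer to the question of which inputs a decision actually rested on.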