“For years, AI has been the focus of the technology world,” said Mike Loukides, vice president of content strategy at O’Reilly and the report’s author. “Now that the hype has died down, it’s time for AI to prove that it can deliver real value, whether that’s cost savings, increased productivity for businesses, or building applications that generate real value in human lives. This will no doubt require practitioners to develop better ways for AI systems and humans to collaborate, and more sophisticated methods for training AI models that can get around the biases and stereotypes that plague human decision-making.”
Despite the need to maintain the integrity and security of data in enterprise AI systems, a large number of organisations lack AI governance. Among respondents with AI products in production, the proportion whose organisations had a governance plan in place to oversee how projects are created, measured, and observed (49%) was roughly the same as the proportion that did not (51%).
As for evaluating risks, unexpected outcomes (68%) remained the biggest focus for mature organisations, followed closely by model interpretability and model degradation (both 61%). Privacy (54%), fairness (51%), and security (42%)—issues that may have a direct impact on individuals—were among the risks least cited by organisations. While there may be AI applications where privacy and fairness aren’t issues, companies with AI practices need to place a higher priority on the human impact of AI.
“While AI adoption is slowing, it is certainly not stalling,” said Laura Baldwin, president of O’Reilly. “There are significant venture capital investments being made in the AI space, with 20% of all funds going to AI companies. What this likely means is that AI growth is experiencing a short-term plateau, but these investments will pay off later in the decade. In the meantime, businesses must not lose sight of the purpose of AI: to make people’s lives better. The AI community must take the steps needed to create applications that generate real human value, or we risk heading into a period of reduced funding for artificial intelligence.”
Other key findings include:
Among respondents with mature practices, TensorFlow and scikit-learn (both 63%) are the most widely used AI tools, followed by PyTorch (50%), Keras (40%), and AWS SageMaker (26%); a brief sketch of typical use of the two front-runners appears after these findings.
Significantly more organisations with mature practices are using AutoML to automatically generate models: 67% reported using AutoML tools, up from 49% the prior year, a relative increase of roughly 37% (the arithmetic is shown after these findings).
Among organisations with mature practices, there was also a 20% increase in the use of automated tools for deployment and monitoring. The most popular tools are MLflow (26%), Kubeflow (21%), and TensorFlow Extended (TFX, 15%); a minimal MLflow tracking sketch follows these findings.
Similar to the results of the previous two years, the biggest bottlenecks to AI adoption are a lack of skilled people and a lack of data or data quality issues (both at 20%). However, organisations with mature practices were more likely to see issues with data, a hallmark of experience.
Organisations with mature practices and those currently evaluating AI agreed that the lack of skilled people is a significant barrier to AI adoption, though only 7% of respondents in each group ranked it as the most important bottleneck.
Organisations with mature practices saw the most significant skills gaps in these areas: ML modeling and data science (45%), data engineering (43%), and maintaining a set of business use cases (40%).
The retail and financial services industries have the highest percentage of mature practices (37% and 35%, respectively). Education and government (both 9%) have the lowest percentage of mature practices but the highest share of respondents who are considering AI (46% and 50%, respectively).
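To make the tooling findings above more concrete, here is a minimal, hypothetical sketch of the two most-cited tools, scikit-learn and TensorFlow, applied to the same synthetic classification task; the dataset, model architecture, and hyperparameters are illustrative assumptions and are not drawn from the survey.

```python
# Hypothetical sketch: the survey's two most-cited tools (scikit-learn and
# TensorFlow) applied to the same synthetic classification task. All data,
# models, and settings here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
import tensorflow as tf

# Synthetic tabular data as a stand-in for an enterprise dataset.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# scikit-learn: a classical baseline model.
baseline = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print("scikit-learn accuracy:", baseline.score(X_test, y_test))

# TensorFlow/Keras: a small neural network on the same task.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5, verbose=0)
print("Keras accuracy:", model.evaluate(X_test, y_test, verbose=0)[1])
```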
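The reported 37% growth in AutoML use refers to the relative change from 49% to 67%, not the percentage-point difference; a quick check of that arithmetic:

```python
# Quick arithmetic check on the reported AutoML adoption figures.
prior, current = 0.49, 0.67              # share of organisations using AutoML
point_change = current - prior           # 0.18 -> an 18 percentage-point rise
relative_change = point_change / prior   # ~0.367 -> roughly a 37% increase
print(f"{point_change * 100:.0f} percentage points, "
      f"{relative_change:.0%} relative increase")
```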
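Finally, as an illustration of the deployment-and-monitoring tooling cited above, here is a minimal, hypothetical MLflow experiment-tracking sketch; the experiment name, parameters, and metric value are invented for the example and are not taken from the report.

```python
# Hypothetical sketch of experiment tracking with MLflow, the most-cited
# deployment/monitoring tool in the survey. Names, parameters, and the
# metric value are invented for illustration.
import mlflow

mlflow.set_experiment("demo-churn-model")  # assumed experiment name

with mlflow.start_run():
    # Log the configuration used for this (imaginary) training run...
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_param("max_iter", 1000)
    # ...and the resulting evaluation metric, so runs can be compared later.
    mlflow.log_metric("test_accuracy", 0.91)
```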