Access to sufficient, reliable, and timely data will be a key determinant of success for enterprises over the coming years as AI transforms business workflows.

As enterprises become more data-driven, the old computing adage "garbage in, garbage out" (GIGO) has never been truer. The application of AI to many business processes will only accelerate the need to ensure the veracity and timeliness of the data used, whether generated internally or sourced externally.

The costs of bad data

Gartner has estimated that organizations lose an average of $12.9 million a year from using poor-quality data, and IBM calculates that bad data costs the US economy more than $3 trillion a year. Most of these costs relate to the work carried out within enterprises checking and correcting data as it moves through and across departments; IBM believes that half of knowledge workers' time is wasted on these activities.

Apart from these internal costs, there's the greater problem of reputational damage among customers, regulators, and suppliers when organizations act improperly on bad or misleading data. Sports Illustrated and its CEO found this out recently when it was revealed the magazine had published articles written by fake authors with AI-generated images. While the CEO lost his job, the parent company, Arena Group, lost 20% of its market value. There have also been several high-profile cases of law firms getting into hot water by citing fake, AI-generated cases as precedents in legal disputes.

The AI black box

Although costly, checking and correcting the data used in corporate decision-making and business operations has become an established practice for most enterprises. However, understanding how some large language models (LLMs) have been trained, on what data, and whether their outputs can be trusted is another matter, especially given the increasing rate of hallucinations.
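Much of the checking and correcting described above can be partially automated with lightweight validation rules applied where data enters a pipeline, so bad records are quarantined rather than corrected by hand downstream. A minimal sketch in Python; the field names and rules here are hypothetical, not from any real schema:

```python
from datetime import date

def validate_record(record: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means the record passes."""
    problems = []
    # Rule 1: required identifier must be present and non-empty.
    if not record.get("customer_id"):
        problems.append("missing customer_id")
    # Rule 2: crude email sanity check (a real pipeline would use a stricter validator).
    email = record.get("email", "")
    if "@" not in email:
        problems.append(f"malformed email: {email!r}")
    # Rule 3: dates should not be in the future.
    signup = record.get("signup_date")
    if signup is not None and signup > date.today():
        problems.append("signup_date is in the future")
    return problems

# Usage: keep clean records, quarantine the rest for review.
records = [
    {"customer_id": "C1", "email": "a@example.com", "signup_date": date(2020, 1, 1)},
    {"customer_id": "", "email": "not-an-email", "signup_date": None},
]
clean = [r for r in records if not validate_record(r)]
```

Rules like these are cheap to run on every record, which is why pushing validation upstream tends to cost far less than the manual reconciliation IBM describes.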
In Australia, for instance, an elected regional mayor threatened to sue OpenAI after the company's ChatGPT falsely claimed he had served prison time for bribery when, in fact, he had been a whistleblower on the criminal activity. Training an LLM on trusted data and adopting approaches such as iterative querying, retrieval-augmented generation, or reasoning can significantly lessen the danger of hallucinations, but none of these can guarantee they won't occur.

Training on synthetic data

As companies seek a competitive advantage through deploying AI systems, the rewards may go to those with access to sufficient and relevant proprietary data to train their models. But what about the majority of enterprises without access to such data? Researchers have predicted that the stock of high-quality text data used for training LLMs will run out before 2026 if current trends continue.

One answer to this impending problem will be increased use of synthetic training data; Gartner estimates that by 2030, synthetic data will overtake real data in AI models. However, returning to the GIGO warning, over-reliance on synthetic data risks accelerating the dangers of inaccurate outputs and poor decision-making: such data is only as good as the models that created it. A longer-term danger is "data inbreeding," in which AI models trained on substandard synthetic data produce outputs that are then fed back into the training of later models.

Moving with caution

The AI genie is out of the bottle, and while the widespread digital revolution promised by some overly enthusiastic technology vendors and consultants will take more time to arrive, AI will continue to transform businesses in ways we can't yet imagine. However, access to reliable and trusted data at the scale enterprises need is already a bottleneck, one that CIOs and other business leaders must find ways to remedy before it's too late.
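Retrieval-augmented generation, mentioned above as one mitigation, works by retrieving passages from a trusted corpus and placing them in the prompt, so the model answers from supplied evidence rather than from its training data alone. A toy sketch of the pattern; the corpus, the naive word-overlap retriever (standing in for a real embedding model and vector store), and the prompt template are all illustrative assumptions:

```python
# Toy retrieval-augmented generation (RAG) pipeline. A production system
# would use embeddings and a vector database instead of word overlap.
TRUSTED_CORPUS = [
    "Gartner estimates poor-quality data costs organizations $12.9M per year on average.",
    "Retrieval-augmented generation grounds model answers in retrieved documents.",
]

def retrieve(question: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by naive word overlap with the question; return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Instruct the model to answer ONLY from the supplied context."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say so.\n\nContext:\n{context}\n\nQuestion: {question}"
    )

question = "What does retrieval-augmented generation do?"
prompt = build_prompt(question, retrieve(question, TRUSTED_CORPUS))
# `prompt` would then be sent to the LLM of your choice.
```

The constraint in the prompt ("answer only from the context") is what narrows the model's room to hallucinate, though, as the article notes, it cannot eliminate hallucinations entirely.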