The C-Suite has Trust Issues with AI | 7wData

Despite rising investments in artificial intelligence (AI) by today's enterprises, trust in the insights delivered by AI can be hit or miss with the C-suite. Are executives just resisting a new, unknown, and still unproven technology, or is their hesitancy rooted in something deeper? Executives have long resisted data analytics for higher-level decision-making, preferring gut-level decisions based on field experience to AI-assisted ones.

AI has been adopted widely for tactical, lower-level decision-making in many industries — credit scoring, upselling recommendations, chatbots, or managing machine performance are examples where it is being successfully deployed. However, its mettle has yet to be proven for higher-level strategic decisions — such as recasting product lines, changing corporate strategies, re-allocating human resources across functions, or establishing relationships with new partners.

Whether it's AI or high-level analytics, business leaders are not yet ready to stake their business on machine-made decisions. An examination of AI activities among financial and retail organizations by Amit Joshi and Michael Wade of IMD Business School in Switzerland finds that "AI is mainly being used for tactical rather than strategic purposes — in fact, finding a cohesive long-term AI strategic vision is rare."

More than two in three executives (67%) responding to a Deloitte survey say they are "not comfortable" accessing or using data from advanced analytic systems; even in companies with strong data-driven cultures, 37% of respondents still express discomfort. Similarly, 67% of CEOs in a KPMG survey indicate they often prefer to make decisions based on their own intuition and experience rather than on insights generated through data analytics. The study confirms that many executives lack a high level of trust in their organization's data, analytics, and AI, with uncertainty about who is accountable for errors and misuse. Data scientists and analysts see this reluctance as well: a recent SAS survey finds that 42% of data scientists say their results are not used by business decision-makers.

When will executives be ready to take AI to the next step, and trust it enough to act on more strategic recommendations that will impact their business? There are many challenges, but there are four actions that can be taken to increase executive confidence in making AI-assisted decisions:

Executive hesitancy may stem from negative experiences, such as an AI system delivering misleading sales results. Almost every failed AI project has a common denominator: a lack of data quality. In the old enterprise model, structured data predominated; it was classified as it arrived from the source, which made it relatively easy to put to immediate use.

While AI can use quality structured data, it also consumes vast amounts of unstructured data to build machine learning (ML) and deep learning (DL) models. That unstructured data (videos, images, audio, text, and logs) is easy to collect in its raw form, but it is unusable until it has been properly classified, labeled, and cleansed so that AI systems can create and train models before those models are deployed in the real world. As a result, data fed into AI systems may be outdated, irrelevant, redundant, limited, or inaccurate. Partial data fed into AI/ML models will provide only a partial view of the enterprise. AI models may also be constructed to reflect the way business has always been done, without the ability to adjust to new opportunities or realities, as we saw with the supply-chain disruptions caused by the global pandemic. This means data needs to be fed in real time so that models can be created or updated in real time.
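The data-quality gate described above can be made concrete with a small sketch. The snippet below is a hypothetical, minimal example (the `Record` type, field names, and 30-day freshness threshold are illustrative assumptions, not a reference to any specific system): before training, it filters out records that are unlabeled, empty, or stale, the same defects the paragraph identifies as causes of misleading AI results.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import List, Optional


@dataclass
class Record:
    """One unit of collected data awaiting use in model training."""
    text: str                    # raw content (could stand in for any media)
    label: Optional[str]         # None means unlabeled: unusable for supervised ML
    collected_at: datetime       # when the data was gathered


def training_ready(records: List[Record], max_age_days: int = 30) -> List[Record]:
    """Keep only records that are labeled, non-empty, and fresh enough.

    Everything else (unlabeled, blank, or outdated data) is excluded,
    since feeding it to a model would yield the partial or inaccurate
    view of the business described above.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [
        r for r in records
        if r.label is not None          # must be labeled
        and r.text.strip()              # must have content
        and r.collected_at >= cutoff    # must be recent
    ]
```

In practice the freshness check is what supports the article's closing point: if new data flows through a gate like this continuously, models can be retrained or adjusted in near real time rather than frozen around historical patterns.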
