Building trust in artificial intelligence

AI in the enterprise

Artificial intelligence (AI) is about to play an important role in daily life and business operations. However, people often don’t trust AI-generated insights. This fear of the unknown can be a significant impediment for project leaders tasked with getting buy-in across an organisation for adopting AI-based tools.

The key to overcoming this reluctance is to help users understand how AI works so that they can learn to trust it. Project leaders must give users insight into the important variables and trends around the outputs the AI tool is targeting, letting people see how it came to its conclusions. There are four key ways to do this:

1. Change the variables the algorithm uses. By showing that the algorithm’s outputs are sensitive to changes in certain variables, users can understand which variables the tool relies on to make its recommendations and why it may have discounted others (see the first sketch after this list).

2. Change the algorithm itself. An AI algorithm is typically a complex network of many nodes. Removing a layer of nodes and then assessing the impact this has on the output can help people understand how it works. For example, shifting the threshold for a certain variable slightly may change the output significantly; showing examples of this makes it clear that the variable played a big role in the outcome (see the second sketch after this list).

3. Build global surrogate models. Surrogate models are built in parallel to an AI algorithm but are simpler and easier to explain. These could be a decision tree, a linear model, or a regression that mimics the more complex AI network. The results will never align perfectly, but if the surrogate model’s results strongly echo the AI tool’s, users will understand some of the steps involved in the AI process (see the third sketch after this list).

4. Build LIME models. Local interpretable model-agnostic explanations (LIME) are localised surrogate models. Rather than replicating the entire model, LIME generates synthetic samples around a single event and fits a simple linear model to them. From this, users can see which features mattered most in the classification of that event (see the fourth sketch after this list).
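
As a rough illustration of the first approach, the sketch below trains a model on synthetic data, then nudges each input variable in turn and records how much the predicted score moves. The data, model choice, and feature indices are placeholders rather than part of any particular AI tool.

```python
# A minimal sketch of probing output sensitivity to individual input variables.
# The model, data, and feature indices are hypothetical placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

case = X[0].copy()
baseline = model.predict_proba([case])[0, 1]

# Nudge each variable in turn and record how much the predicted score moves.
for i in range(X.shape[1]):
    perturbed = case.copy()
    perturbed[i] += X[:, i].std()          # shift by one standard deviation
    delta = model.predict_proba([perturbed])[0, 1] - baseline
    print(f"feature {i}: score change {delta:+.3f}")
```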
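
The second approach can be sketched by training two variants of a simple neural network, one with a hidden layer removed, and comparing their behaviour. The network sizes and data here are purely illustrative assumptions.

```python
# A minimal sketch of comparing a model with and without one hidden layer
# to see how much that layer contributes. Data and layer sizes are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

full = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                     random_state=0).fit(X_tr, y_tr)
reduced = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                        random_state=0).fit(X_tr, y_tr)

print("with both layers :", full.score(X_te, y_te))
print("one layer removed:", reduced.score(X_te, y_te))

# Cases whose prediction flips between the two variants show where the
# removed layer actually changes the decision.
flips = (full.predict(X_te) != reduced.predict(X_te)).mean()
print("share of predictions that flip:", flips)
```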
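
The third approach, a global surrogate, might look like the following: a shallow decision tree is trained to mimic a more complex model’s predictions, and its fidelity to the black box is reported along with its human-readable rules. The specific models are assumptions made for the sake of the example.

```python
# A minimal sketch of a global surrogate: a shallow decision tree trained to
# mimic a more complex model's predictions. All model choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Fit the surrogate to the black box's outputs, not to the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate agrees with the black box on {fidelity:.1%} of cases")
print(export_text(surrogate))   # human-readable rules to show users
```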
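
The fourth approach is sketched below in simplified form: synthetic samples are drawn around a single case, weighted by proximity, and a local linear model is fitted to the black box’s scores. The open-source `lime` package implements the technique more fully; this hand-rolled version, with its made-up data and neighbourhood settings, only illustrates the idea.

```python
# A simplified LIME-style sketch: sample synthetic points around one case,
# weight them by proximity, and fit a local linear model to the black box's
# scores. Everything here (data, model, kernel width) is illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

case = X[0]
rng = np.random.default_rng(0)

# Synthetic samples drawn in a neighbourhood of the case under scrutiny.
samples = case + rng.normal(scale=X.std(axis=0) * 0.5, size=(500, X.shape[1]))
scores = model.predict_proba(samples)[:, 1]

# Closer samples count more when fitting the local explanation.
distances = np.linalg.norm(samples - case, axis=1)
weights = np.exp(-(distances ** 2) / (2 * distances.std() ** 2))

local = Ridge(alpha=1.0).fit(samples, scores, sample_weight=weights)
for i, coef in enumerate(local.coef_):
    print(f"feature {i}: local weight {coef:+.3f}")
```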

Once understanding has been established, project leaders then need to build trust before presenting controversial or challenging hypotheses. There are three ways to do this effectively:

i Detect events and trends that conform to people’s expectations. Producing analyses that sit within users’ domain knowledge and context, and that confirm their expectations, increases the level of buy-in and demonstrates that the AI model will get it right.

ii Use different criteria for event and non-event cases. When a person is trying to detect fraud, for example, the brain goes through different processes when examining a case that looks like fraud and one that doesn’t. Taking this same intuitive approach with AI can show users that the tool operates in a familiar and trustworthy way.

iii Ensure detected outcomes remain consistent. For results to be statistically significant, they must remain consistent over time and be replicable. The same is true for AI. When a possible fraud event is run through an AI model, it should be flagged consistently each time. Stability is key to establishing trust. Companies can build user interfaces that help bring the backend of the AI tool into the daylight, illustrating to users what is occurring (a simple stability check is sketched below).
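
A stability check of this kind could be as simple as the sketch below, which scores the same hypothetical suspect case repeatedly and asserts that the result never changes; the fraud model and data are stand-ins, not a real system.

```python
# A minimal sketch of a stability check for a hypothetical fraud model:
# the same case should receive the same score on every run.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
# Fixing the random seed keeps training, and therefore scoring, reproducible.
model = RandomForestClassifier(random_state=0).fit(X, y)

suspect_case = X[:1]
scores = [model.predict_proba(suspect_case)[0, 1] for _ in range(100)]
assert len(set(scores)) == 1, "the same case was scored inconsistently"
print("flagged score is stable across 100 runs:", scores[0])
```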

To make AI less threatening and easier to understand, project leaders must first help users understand how the AI tool works so that they can then trust its results. It’s natural to view any new technology sceptically but, because the insights generated by AI can so dramatically help to reshape how a company operates, companies need to do everything they can to ensure buy-in.

Alec Gardner, Director – Global Services and Strategy, Think Big Analytics