Can we trust AI to make decisions?

AI promises to make our lives easier by automating repetitive tasks, freeing people to focus on higher-value work and removing the subjectivity from human judgement calls.

Yet sometimes it goes wrong – people are denied visas, wrongly sent bills and stopped at airports, all because AI has mistaken them for someone else. Compounding the problem, when AI makes a wrong call or delivers an incorrect result, you cannot simply ask it why.

As Douglas Adams once said: “The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair.”

As private enterprises, solution providers and public organisations recognise the potential of AI, AI-enabled platforms are becoming involved in more decisions than ever before. How should these organisations provide the transparency and oversight needed to maintain trust?

An essential first step is to understand how the AI came to a decision. This is no easy task: you need to go back through potentially massive volumes of complex training data and root out unwanted biases in the algorithm or in the hidden layers of the neural network. The issue of transparency is compounded when the AI is embedded within a platform that the organisation has purchased from a third party. Want another layer of complexity? The AI-enabled platform may be hosted in the cloud or leased from a platform service provider, limiting access to little beyond what the supplier intended.

The three vital steps to building trust in AI:

Step 1: AI should be designed to meet the need directly. Until artificial general intelligence is a reality, AI needs to be customised to the task at hand. The business or organisational context is often not given enough weight, particularly in off-the-shelf, AI-enabled platforms. AI should be designed not just by technical teams, but by people who understand the underlying data and what it actually represents within that business context. We see far too many data projects focused on clever algorithms and advanced technology without considering the actual needs of the people who must consume the insights generated.

An AI-enabled data discovery platform will spot a pattern between a behaviour and a common outcome (whether between spending patterns and credit card fraud, or overheating and impending engine failure). Someone who understands the context can build a significantly more sophisticated and accurate algorithm that recognises when that behaviour is normal and when it requires intervention. In short, the areas and situations where oversight is required are understood at the design phase and can be built into the business processes that use AI.
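To make that concrete, here is a minimal sketch in Python of the difference between a context-free rule and one built around each customer's own normal behaviour. The customers, transaction histories and thresholds are all invented for illustration:

```python
# Illustrative sketch only: data, customers and thresholds are invented.
from statistics import mean, stdev

# Hypothetical transaction histories per customer.
history = {
    "alice": [42.0, 55.5, 38.2, 60.1, 47.3],
    "bob": [900.0, 1100.0, 950.0, 1020.0],  # a habitual big spender
}

def naive_flag(amount, global_limit=500.0):
    """Context-free rule: anything over a fixed limit looks suspicious."""
    return amount > global_limit

def contextual_flag(customer, amount, n_sigma=3.0):
    """Context-aware rule: suspicious only if far outside this
    customer's own spending pattern."""
    past = history[customer]
    return amount > mean(past) + n_sigma * stdev(past)

print(naive_flag(1050.0))                # True:  false alarm for Bob
print(contextual_flag("bob", 1050.0))    # False: normal for Bob
print(contextual_flag("alice", 1050.0))  # True:  genuinely unusual for Alice
```

The naive rule raises a false alarm on Bob's routine purchase; the contextual rule knows his baseline and only intervenes when the behaviour is genuinely unusual.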

Step 2: Apply a rigorous focus on the training data. Machine learning, deep learning and neural networks – the toolset of AI – are only as good as the data they are trained on. Training data often holds unseen biases, as shown by the embarrassing beauty contest AI which, having been trained to recognise beauty mostly on images of white people, rated white people as more attractive. What is needed is a scientific approach that scrutinises the training data sets and designs experiments to eliminate these otherwise hidden biases.
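A first pass at such scrutiny can be as simple as comparing how each group is represented in the training set and how the labels are distributed across groups. The records below are invented for the sketch:

```python
# Illustrative sketch with invented data: auditing a labelled training
# set for skewed representation and skewed label rates across groups.
from collections import Counter

# Hypothetical records: (group, label), where label 1 is the positive class.
training_data = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0),
]

counts = Counter(group for group, _ in training_data)
positives = Counter(group for group, label in training_data if label == 1)

for group, n in counts.items():
    share = n / len(training_data)
    pos_rate = positives[group] / n
    print(f"{group}: {share:.0%} of the data, positive-label rate {pos_rate:.0%}")

# group_a: 80% of the data, positive-label rate 75%
# group_b: 20% of the data, positive-label rate 0%
```

One group is both under-represented and never labelled positive – exactly the kind of hidden skew, like the beauty contest example, that a trained model will go on to reproduce.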

Step 3: Get the right people. We now know that AI does not end with the algorithm (see step 2) – there needs to be ongoing oversight from people who understand what the AI-generated insights are telling them. It is not reasonable to expect a government official to understand complex deep learning topologies and dive into them when someone raises a concern. But a data scientist who knows the data, the algorithm and the context can quickly identify whether an error was made and take corrective action – an immediate intervention – whilst also using that feedback to improve the model.
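One common pattern for this kind of oversight is to route low-confidence decisions to a human expert and keep the corrections as fresh training data. Here is a minimal sketch; the toy model, reviewer and confidence threshold are invented placeholders standing in for the real system:

```python
# Illustrative sketch: the model, reviewer and threshold are invented
# placeholders, not a real API.

def toy_model(features):
    """Stand-in for a trained model: returns (label, confidence)."""
    score = sum(features) / len(features)
    label = "intervene" if score > 0.5 else "normal"
    confidence = abs(score - 0.5) * 2  # 0 = unsure, 1 = certain
    return label, confidence

def human_review(features, proposed_label):
    """Stand-in for the expert who knows the data, algorithm and context."""
    print(f"Review requested: {features} -> model proposed {proposed_label!r}")
    return proposed_label  # in practice the expert may overrule this

corrections = []  # (features, confirmed_label) pairs fed back into training

def decide(features, threshold=0.8):
    label, confidence = toy_model(features)
    if confidence < threshold:                 # unsure: escalate to a human
        label = human_review(features, label)
        corrections.append((features, label))  # feedback to improve the model
    return label

print(decide([0.95, 0.98, 0.97]))  # confident -> automated decision
print(decide([0.45, 0.55]))        # borderline -> routed for human oversight
```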

Incorrect billing and unpleasant airport experiences are bad enough, but AI is now helping to design and build planes, manufacturing plants, nuclear reactors and cars; it is assessing the safety of new medicines and deciding whether someone needs urgent medical attention, and more. AI is being used in higher-impact roles, with higher stakes, providing answers to more complex and important questions than ever before. And as the barrier to entry for AI is lowered, this list is growing rapidly.

Recent calls for a watchdog to be set up should be welcomed, in theory. But overseeing an AI-enabled decision-informing machine is not the same as overseeing a human decision, and it demands a different, more informed and considered approach.

Matt Jones
