Artificial Intelligence could be about to start a processing chip arms race

Recent converts to AI and machine learning’s amazing potential may be disappointed to discover that many of the field’s more interesting aspects have been around for a couple of decades.

AI works by learning to distinguish between different types of information. An example is the use of a neural network for medical diagnosis. Inspired by the way the human brain learns, neural nets can be trained to analyse images tagged by experts (this image shows a tumour, this image does not). Over time, and with enough data, they become as good as the experts at making the judgement, and they can start making diagnoses from new scans.
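
As a rough illustration of that training process (and nothing like a real diagnostic model), here is a minimal sketch in Python using scikit-learn, with randomly generated feature vectors standing in for expert-labelled scans:

    # Toy sketch only: a small neural network learning from "expert-labelled" examples.
    # Random numbers stand in for features extracted from scans; a real diagnostic
    # model would need far more data, imaging-specific architectures and validation.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 32))                 # 1,000 "scans", 32 features each
    y = (X[:, :4].sum(axis=1) > 0).astype(int)      # 1 = "tumour", 0 = "no tumour" (synthetic rule)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    net.fit(X_train, y_train)                       # learn from the labelled examples
    print("accuracy on unseen 'scans':", net.score(X_test, y_test))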

The algorithms themselves are not simple, but it is not their complexity that has held these tools back until now. In fact, data scientists have been using smaller-scale neural networks for many years.

Rather prosaically, the limiting factor for the past 20 years has been processing power and scalability.

Processors have improved exponentially for years (see Moore’s law). But a few years ago, NVIDIA launched GPUs that were not just powerful, but capable of running thousands of tasks in parallel, with an instruction set exposed for developers to use.

This was the step change for machine learning and other AI tools. It allowed huge amounts of data to be processed simultaneously. Neural nets, like the many synapses in your brain, process lots of information at once to reach their conclusion, with calculations performed at each node, of which there can be thousands. Before highly parallel processing was available, this was a slow process. Imagine looking at a picture and taking hours to work out what it was.

The availability of consumer-level GPUs with massive parallelisation via NVIDIA CUDA cores has meant deep neural networks can now be run in reasonable times and at reasonable cost. A grid of GPUs is considerably cheaper, and more effective, than the corresponding level of compute from traditional CPUs.
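
To make the parallelism concrete: each layer of a neural net is essentially one large matrix multiplication, and its thousands of multiply-adds are independent of one another. Below is a hedged sketch using PyTorch, one of several libraries that expose NVIDIA’s CUDA cores from Python; the absolute timings depend entirely on your hardware, but the gap illustrates the point:

    # Sketch: the same layer-sized matrix multiplication on CPU and, if present, a CUDA GPU.
    # Numbers vary by machine; the point is the parallel speed-up, not the exact figures.
    import time
    import torch

    x = torch.randn(4096, 4096)        # activations for a batch of inputs
    w = torch.randn(4096, 4096)        # weights of one large layer

    start = time.time()
    _ = x @ w                          # thousands of node calculations on the CPU
    print(f"CPU: {time.time() - start:.3f}s")

    if torch.cuda.is_available():      # requires an NVIDIA GPU and the CUDA toolkit
        xg, wg = x.cuda(), w.cuda()
        torch.cuda.synchronize()
        start = time.time()
        _ = xg @ wg                    # the same work spread across thousands of CUDA cores
        torch.cuda.synchronize()       # wait for the GPU to finish before stopping the clock
        print(f"GPU: {time.time() - start:.3f}s")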

Neural nets have long been used in labs to analyse datasets, but due to compute limitations a single run could take weeks or even months to complete. They found applications where lengthy data analysis beyond human capacity (or patience) was needed, but where speed wasn’t critical, such as predicting how drug-like molecules interact with target receptors in medical research.

Today’s neural nets – and deep learning (large, combined neural networks) – can do the same compute in hours or minutes. Computationally intensive AI processes which previously took hours can now be applied to real-time tasks, such as diagnostics and making safety-critical industrial decisions.

On the shoulders of giants

This has been critical to the rapid rise of AI. Off the back of this, commoditisation has followed. Google, Microsoft and Facebook, amongst others, have all developed AI programming tools. These allow anyone to build their own AI on the tech giants’ platforms and run the associated processing in their data centres – examples include diagnosing disease, managing oil rig drilling, and predicting when to take aeroplanes out of service. AI became democratised.
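
For a flavour of how approachable these tools have become, here is a minimal, purely illustrative sketch using the Keras API of TensorFlow (Google’s open-source machine learning framework). The random data stands in for a real, domain-specific dataset, and essentially the same code can be run against a cloud provider’s GPUs:

    # Illustrative sketch with TensorFlow/Keras: define, train and query a small network.
    # The random data below stands in for a real, labelled, domain-specific dataset.
    import numpy as np
    import tensorflow as tf

    X = np.random.rand(500, 20).astype("float32")
    y = (X.sum(axis=1) > 10).astype("float32")            # synthetic yes/no labels

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of the "yes" class
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)   # learn from the labelled examples
    print(model.predict(X[:3]))                           # scores for three inputs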

Amid the excitement around applying AI to a brand new set of possibilities, NVIDIA quietly cornered the market for AI processor chips.

Slightly late to the party, but never to be written off, the usual suspects are catching up. Microsoft (using Intel FPGA technology) recently launched Brainwave, a hardware architecture designed specifically for deep learning. Google has also started building its own chip for AI applications, the Tensor Processing Unit (TPU).

This means a chip arms race is around the corner. Expect AI announcements soon from other chip manufacturers, and an aggressive push from NVIDIA to defend its leadership.

None of this is a bad thing for us machine learning professionals. If the capacity to process data increases at an ever faster rate, it expands what we can do with AI and how fast we can do it. Better, faster, more parallelised processing can mean ever deeper learning algorithms and more complex neural networks. Data processing tasks which previously took minutes, hours or days are gradually being brought into the realm of real-time decision making.

With AI tools and processing power readily available, the desire to harness AI is growing rapidly. The tech giants, innovative startups, and companies undergoing digital transformation all want a piece of the action. Technology advances apace, but the limiting factor now is skills, which have not been able to keep up with AI’s meteoric rise.

Truly harnessing AI requires a wide range of highly specialised skills covering advanced maths, programming languages, and an understanding of the tools themselves. In most cases a degree of expertise in the subject the AI is being designed for (oncology or oil rig engineering, for example) is necessary. AI is now seen as a serious career choice, but these skills still take the uninitiated a good few years to learn: the PhD and many years’ industry experience needed for most AI roles do not come overnight.

Meanwhile, a generation of scientists – who have spent the last 20 years in a lab patiently waiting for their meticulously designed neural network to work its way through months of data – are suddenly finding themselves in high demand.

This is a guest blogpost by Matt Jones, lead analytics strategist, Tessella

Original source: Computer Weekly 


© Copyright 2017 Tessella
All rights reserved