The State of AI 2019: Divergence

Chapter 3: Why has AI come of age?

Specialised hardware, availability of training data, new algorithms and increased investment, among other factors, have enabled an inflection point in AI capability. After seven false dawns since the 1950s, AI technology has come of age.


  • After seven false dawns since its inception in 1956, AI technology has come of age.
  • The capabilities of AI systems have reached a tipping point due to the confluence of seven factors: new algorithms; the availability of training data; specialised hardware; cloud AI services; open source software resources; greater investment; and increased interest.
  • Together, these developments have transformed results while slashing the difficulty, time and cost of developing and deploying AI.
  • A virtuous cycle has developed. Progress in AI is attracting investment, entrepreneurship and interest. These, in turn, are accelerating progress.



For executives

  • Recognise that AI technology has come of age and will be a key enabler, and potential threat, in the coming decade.
  • Peers are deploying AI at an accelerating rate. Familiarise yourself with the dynamics of enterprise AI adoption (Chapter 4).
  • Explore the many applications of AI (Chapter 2), and AI’s implications (Chapter 8), to lead and contribute to AI initiatives in your organisation.


For entrepreneurs

  • AI technology can deliver tangible benefits today. Seek opportunities to incorporate AI within your software, where appropriate, whether or not you are an ‘AI company’.
  • Familiarise yourself with the latest developments in AI technology (Chapter 5) and talent (Chapter 6) to enable your AI initiatives.


For investors

  • AI will be a powerful enabler for portfolio companies – and a threat. Evaluate whether portfolio companies are embracing AI as a means of competitive advantage.
  • With AI technology at a tipping point, seek opportunities to invest directly or indirectly in companies taking advantage of AI.
  • Explore recent developments in AI technology (Chapter 5) to identify emerging areas of opportunity.


For policy-makers

  • Review policy-makers’ key initiatives and identify opportunities for further sector support. In the UK, key programmes and studies include: the UK Government’s £1bn ‘AI sector deal’; recommendations from the House of Lords Select Committee on AI (‘AI in the UK: ready, willing and able?’); and findings of the All-Party Parliamentary Group on AI.

Explore our AI Playbook, a blueprint for developing and deploying AI, at

There are seven enablers of AI

Research into AI began in 1956. After seven false dawns, in which results from unsophisticated systems fell short of expectations, AI capability has reached a tipping point. AI is now delivering significant utility and its abilities are advancing rapidly.

AI capabilities have been transformed in the last four years due to:

  1. the development of improved AI algorithms;
  2. increased availability of data to train AI systems;
  3. specialised hardware to accelerate training of AI algorithms;
  4. cloud-based AI services to catalyse developer adoption;
  5. open source AI software frameworks that enable experimentation;
  6. increased investment in AI by large technology companies and venture capitalists;
  7. greater awareness of AI among investors, executives, entrepreneurs and the public.

Together, these developments have improved results from AI systems and increased the breadth of challenges to which they can be applied. They have also irreversibly reduced the difficulty, time and cost of developing basic AI systems.

1. Enhanced algorithms provided improved results
Deep learning, a fruitful form of machine learning, is not new; the first specification for an effective, multilayer neural network was published in 1965. In the last decade, however, evolutions in the design of deep learning algorithms have transformed results, delivering breakthrough applications in areas including computer vision (Fig. 18) and language (Fig. 19).

Convolutional Neural Networks (CNNs) have dramatically improved computers’ ability to recognise objects in images. Employing a design inspired by the visual cortexes of animals, each layer in a CNN acts as a filter for the presence of a specific pattern. In 2015, Microsoft’s CNN-based computer vision system identified objects in pictures more effectively (95.1% accuracy) than humans (94.9% accuracy) (Microsoft). In the last 36 months, performance has improved further (Fig. 18). Broader applications of CNNs include video classification and speech recognition.
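The idea that each CNN layer acts as a filter for a specific pattern can be sketched with a single convolution in plain NumPy. This is an illustrative toy, not Microsoft’s system; the vertical-edge kernel is a standard textbook example of the kind of pattern an early CNN layer learns.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D convolution (strictly cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A tiny image containing a vertical edge: dark left half, bright right half.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A vertical-edge filter: it responds where intensity changes left-to-right.
kernel = np.array([
    [-1, 1],
    [-1, 1],
], dtype=float)

response = conv2d(image, kernel)
print(response)  # strongest response in the middle column, where the edge lies
```

A trained CNN stacks many such filters in successive layers, with the filter weights learned from data rather than hand-specified, so that later layers respond to progressively more complex patterns (edges, textures, object parts, objects).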

Fig 18. Convolutional neural networks are delivering human-level image recognition


Recurrent Neural Networks (RNNs) are delivering improved results in speech recognition and beyond. While data progresses in a single direction in conventional (‘feed forward’) neural networks, RNNs have feedback connections that enable data to flow in a loop. With additional connections and memory cells, RNNs ‘remember’ data processed thousands of steps ago and use it to inform their analysis of what follows. This is valuable for speech recognition, where interpretation of an additional word is enhanced by analysis of preceding ones.
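The feedback loop described above can be sketched in a few lines of NumPy. This is a minimal, untrained recurrent step with toy dimensions (real speech models are far larger and use learned weights); it shows how the hidden state carries information from earlier inputs forward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, chosen for illustration only.
input_dim, hidden_dim = 3, 4
W_xh = rng.normal(scale=0.5, size=(hidden_dim, input_dim))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(hidden_dim, hidden_dim))  # hidden -> hidden: the feedback loop

def rnn_step(x, h):
    """One recurrent step: the new hidden state mixes the current input
    with the previous hidden state, so earlier inputs inform later ones."""
    return np.tanh(W_xh @ x + W_hh @ h)

h = np.zeros(hidden_dim)  # initial 'memory'
sequence = [rng.normal(size=input_dim) for _ in range(5)]
for x in sequence:
    h = rnn_step(x, h)    # h now summarises everything seen so far

print(h.shape)  # (4,)
```

In speech recognition, each `x` would be a frame of audio features and `h` the network’s running summary of the utterance so far, which is what lets analysis of preceding words improve interpretation of the next one.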

The Long Short-Term Memory (LSTM) model is a particularly effective recent RNN architecture. From 2012, Google used LSTMs to power speech recognition in the Android platform. In October 2016, Microsoft reported that its LSTM speech recognition system achieved a word error rate of 5.9% – human-level speech recognition for the first time in history (Microsoft) (Fig. 19). By August 2017, word error rate had been reduced to 5.1% (Microsoft). Improvements are continuing.
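The word error rate metric quoted above is the standard benchmark for speech recognition: the number of word substitutions, insertions and deletions needed to turn the system’s transcript into the reference transcript, divided by the number of reference words. A minimal sketch of the computation, using the standard Levenshtein dynamic programme over words:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / words in reference."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

wer = word_error_rate("the cat sat on the mat", "the cat sat on a mat")
print(wer)  # one substitution in six words
```

On this scale, Microsoft’s reported 5.9% means roughly one word in seventeen transcribed incorrectly, comparable to professional human transcribers on the same benchmark.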

“Microsoft reported that its speech recognition system achieved human-level recognition for the first time in history. Improvements are continuing.”

Fig 19. Recurrent neural networks are delivering human-level speech recognition


2. Extensive data enabled AI systems to be trained
Training neural networks typically requires large volumes of data – thousands or millions of examples, depending on the domain. The creation and availability of data has grown exponentially in recent years, enabling AI.

Today, humanity produces 2.5 exabytes (2,500 million gigabytes) of data daily (Google). 90% of all data has been created in the last 24 months (SINTEF). Data has ballooned as humanity passed through two waves of data creation, and now enters a third.

The first wave of data, beginning in the 1980s, involved the creation of documents and transactional data. It was catalysed in the 1990s by the proliferation of internet-connected desktop PCs. Then, in the 2000s and 2010s, pervasive, connected smartphones drove a second wave of data with an explosion of unstructured media (emails, photos, music and videos), web data and metadata.

Today we enter the third age of data. Machine sensors deployed in industry and the home provide additional monitoring, analytics and metadata. Because much of the data created today is transmitted for use via the internet, growing internet traffic is a proxy for humanity’s increasing data production. In 1992, humanity transferred 100GB of data per day. By 2020, we will transfer 61,000GB per second (Fig. 20) (Cisco, MMC Ventures).
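The scale of that jump is easy to understate, since the two figures use different units (per day versus per second). A quick calculation with the numbers quoted above puts them on a common footing:

```python
# Figures quoted in the text (Cisco, MMC Ventures).
gb_per_day_1992 = 100
gb_per_second_2020 = 61_000

seconds_per_day = 24 * 60 * 60  # 86,400
gb_per_day_2020 = gb_per_second_2020 * seconds_per_day
growth_factor = gb_per_day_2020 / gb_per_day_1992

print(f"{gb_per_day_2020:,} GB/day in 2020")   # 5,270,400,000 GB/day
print(f"growth factor: {growth_factor:,.0f}x")  # roughly 53 million-fold
```

That is, daily internet traffic grew more than fifty-million-fold between 1992 and 2020.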

The development of AI has been catalysed further by the creation of specialist data resources. ImageNet, a free database of 14.2 million hand-labelled images, has supported the rapid development of deep learning algorithms used to classify objects in images.

Fig 20. Global internet traffic is increasing exponentially, reflecting growth in data production

Source: Cisco, MMC Ventures

3. Specialised hardware accelerated AI system training
Graphical Processing Units (GPUs) are specialised electronic circuits that slash the time required to train the neural networks used in deep learning-based AI.

Modern GPUs were developed in the 1990s to accelerate 3D gaming and 3D development applications. Panning or zooming a camera in a simulated 3D environment relies on a mathematical process called matrix computation.

Microprocessors with serial architectures, including the Central Processing Units (CPUs) that interpret and execute commands in today’s computers, are poorly suited to the task. GPUs were developed with massively parallel architectures (NVIDIA’s GeForce RTX 2080 Ti GPU has 4,352 cores) to perform matrix calculations efficiently. Training a neural network involves numerous matrix computations. GPUs, while conceived for 3D gaming, therefore proved ideal for accelerating deep learning.
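The link between neural networks and matrix computation can be made concrete with a short NumPy sketch (illustrative dimensions; the layer is untrained). The forward pass of a single dense layer is one large matrix multiplication, and every element of the result can be computed independently of the others, which is exactly the workload thousands of GPU cores parallelise.

```python
import numpy as np

rng = np.random.default_rng(42)

# A single dense layer: 784 inputs (e.g. a 28x28 pixel image) -> 128 hidden units,
# applied to a batch of 64 examples at once.
batch = rng.normal(size=(64, 784))
weights = rng.normal(size=(784, 128))
bias = np.zeros(128)

# The forward pass is one big matrix multiplication plus a bias, followed by
# an elementwise activation (ReLU). All 64 x 128 outputs are independent.
activations = np.maximum(0, batch @ weights + bias)

print(activations.shape)  # (64, 128)
```

Training repeats this operation (and its gradient counterpart) millions of times across many layers, which is why moving from serial CPUs to parallel GPUs transformed training times.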

Even a basic GPU can increase five-fold the speed at which a neural network is trained; ten-fold or larger gains are possible. When combined with Software Development Kits (SDKs) tuned for popular deep learning frameworks, even greater improvements can be realised. In a 36-month period beginning in 2013, successive GPUs and SDKs enabled a 50x increase in the speed at which certain neural networks could be trained (Fig. 21).

In the last 36 months, advances in AI hardware have created new possibilities. Custom silicon, designed from inception for AI, is enabling a new generation of AI accelerators (Chapter 5).

Fig 21. GPUs enabled neural networks to be trained 50x faster

AlexNet training throughput based on 20 iterations. Source: NVIDIA

4. Cloud AI services fuelled adoption
Leading cloud technology providers including Google, Amazon, IBM and Microsoft offer cloud-based AI infrastructure and services, catalysing developers’ use of AI.

The providers’ infrastructure platforms include environments in which to develop and deploy AI algorithms, and ‘GPUs-as-a-service’ to power them.

Their services comprise a burgeoning range of on-demand AI capabilities, from image recognition to language translation, which developers can incorporate in their own applications.

Google Machine Learning offers application programming interfaces (APIs) for: computer vision (object identification, explicit content detection, face recognition and image sentiment analysis); speech (speech recognition and speech-to-text); text analysis (entity recognition, sentiment analysis, language detection and translation); and more. Microsoft Cognitive Services include over 24 services in the fields of vision, speech, language, knowledge and search.

The accessibility and relative affordability of cloud providers’ AI infrastructure and services are significantly increasing adoption of AI among developers.

5. Open source software catalysed experimentation
The release of open source AI software frameworks has lowered barriers to entry for experimentation and proficiency in AI.

Researchers, and providers of cloud infrastructure and AI services who benefit from the proliferation of AI and data-intensive applications, have open-sourced AI frameworks and libraries of algorithms to catalyse developers’ adoption of AI. Popular open source platforms include TensorFlow (Google), Caffe2 (Facebook), Cognitive Toolkit (Microsoft), TorchNet (Facebook), H2O (H2O.ai) and Mahout (Apache Software Foundation).

Each framework offers distinct benefits. Caffe2 is a scalable deep learning framework that processes images at speed. Cognitive Toolkit provides high performance on varying hardware configurations. H2O reduces time-to-value for AI-powered enterprise data analysis. Mahout provides scalability and premade algorithms for tools such as H2O. Google’s decision to open source TensorFlow in November 2015 was particularly significant, given the software’s sophistication. Engagement with TensorFlow has been rapid (Fig. 22). Within two years, the framework had attracted 30,000 developer commits and 80,000 stars on GitHub, where developers store projects (Google).

Fig 22. Engagement with the TensorFlow framework has been significant and rapid

Source: GDG-Shanghai 2017 TensorFlow Summit

6. Investment in AI increased fifteen-fold
Venture capital firms are investing aggressively in AI, given scope for value creation. Investment dollars into early stage AI companies globally have increased fifteen-fold in five years (Fig. 23), to an estimated $15bn in 2018 (CB Insights, MMC Ventures).
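A fifteen-fold increase over five years implies a striking compound annual growth rate, which can be checked from the figures quoted above:

```python
# Implied compound annual growth: investment grew 15x over five years.
growth_multiple = 15
years = 5

cagr = growth_multiple ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 72% per year
```

In other words, early stage AI investment grew at roughly 72% per year over the period.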

Today’s leading technology companies – including Apple, Amazon, Facebook, Google, IBM, Microsoft and Salesforce – are also spending heavily on research and personnel to develop and deploy AI. Internal corporate investment in AI, among just the top 35 high tech and advanced manufacturing companies investing in AI, may be 2.0x to 4.5x greater than the capital invested by venture capital firms, private equity firms and other sources of external funding combined (McKinsey), further catalysing progress.

“Investment dollars into early stage AI companies globally have increased fifteen-fold in five years, to an estimated $15bn in 2018.”

(CB Insights, MMC Ventures)

Fig 23. Venture capital investment in AI has increased 15-fold in five years

Source: CB Insights, MMC Ventures

7. Awareness of AI has grown significantly
Public interest in AI, measured by the proportion of Google searches for ‘machine learning’, has increased more than seven-fold in six years (Fig. 24).

Executives’ awareness of AI has grown following extensive coverage in business publications. In the last 12 months, 5,700 articles referencing AI have appeared in business publications including Bloomberg Businessweek, the Financial Times, Forbes, Fortune, the Harvard Business Review, Investors Chronicle, Thomson Reuters and The Wall Street Journal (Signal). One third of these references have appeared in the last 12 weeks.

In the popular press, whether relevant (the opportunities and threats posed by automation) or less so (‘killer robots’), 21,800 articles in US and UK newspapers have referred to AI, fuelling public interest (Signal).

“Public interest in AI, measured by the proportion of Google searches for ‘machine learning’, has increased more than sevenfold in six years.”

Fig 24. Interest in AI has increased 7-fold

Source: Google Trends