The State of AI 2019: Divergence

Chapter 5: The advance of technology

Advances in AI technology are creating new possibilities. Custom silicon is enabling a new generation of AI hardware. Emerging software techniques are delivering breakthroughs in multiple domains and decoupling progress from the constraints of human experience.

Summary

  • While graphics processing units (GPUs) catalysed AI development in the past, and will continue to evolve, hardware innovations are expanding AI’s potential. Hardware is being optimised, customised or re-imagined to deliver a new generation of AI accelerators.
  • Hardware with ‘tensor architectures’ is accelerating deep learning AI. Vendors, including NVIDIA and Google, are optimising or customising hardware to support the use of popular deep learning frameworks.
  • We are entering the post-GPU era. Leading hardware manufacturers are creating new classes of computer processor designed, from inception, for AI. Custom silicon offers transformational performance and greater versatility.
  • Custom silicon is also taking AI to the ‘edge’ of the internet – to IoT devices, sensors and vehicles. New processors engineered for edge computing combine high performance with low power consumption and small size.
  • As quantum computing matures, it will create profound opportunities for progress in AI and enable humanity to address previously intractable problems, from personalised medicine to climate change. While nascent, quantum computing is advancing rapidly. Researchers have developed functioning neural networks on quantum computers.
  • Reinforcement learning (RL) is an alternative approach to developing AI that enables a problem to be solved without knowledge of the domain. Instead of learning from training data, RL systems reward and reinforce progress towards a specified goal. AlphaGo Zero, an RL system developed by DeepMind to play the board game Go, developed unrivalled ability after just 40 days of operation. In 2019, developments in RL will enable groups of agents to interact and collaborate effectively.
  • Progress in RL is significant because it decouples system improvement from the constraints of human knowledge. RL is well suited to creating agents that perform autonomously in environments for which we lack training data.
  • Transfer learning (TL) enables programmers to apply elements learned from previous challenges to related problems. TL can deliver stronger initial performance, more rapid improvement and better long-term results. Interest in TL has grown seven-fold in 24 months, and TL is enabling a new generation of systems with greater adaptability.
  • By learning fundamental properties of language, TL-powered models are improving the state of the art in language processing – in areas of universal utility. 2018 was a breakthrough year for the application of TL to language processing.
  • TL is also: enabling the development of complex systems that can interact with the real world; delivering systems with greater adaptability; and supporting progress towards artificial general intelligence, which remains far from possible with current AI technology.
  • Generative Adversarial Networks (GANs) will reshape content creation, media and society. An emerging AI software technique, GANs enable the creation of artificial media, including pictures and video, with exceptional fidelity. GANs will deliver transformational benefits in sectors including media and entertainment, while presenting profound challenges to societies – beware ‘fake news 2.0’.

Recommendations

Executives

  • Ensure your organisation, or suppliers, are taking advantage of the latest advances in AI hardware for faster solutions to more complex challenges.
  • Custom silicon for edge computing is enabling ‘edge’ devices – drones, robots, embedded devices and sensors – with greater AI capabilities. Explore whether AI-enabled edge applications could offer your company, or customers, utility.
  • Reinforcement learning can be usefully applied to tackle problems of control (such as warehouse automation) and coordination (including fleet optimisation). Explore whether reinforcement learning could deliver efficiency improvements and cost savings for your company.

Entrepreneurs

  • Take advantage of hardware with tensor architectures to accelerate the development of deep learning systems.
  • Offer more advanced language processing in your solutions by drawing on recent breakthroughs in transfer learning.
  • Generative Adversarial Networks (GANs) can be usefully applied to a wide variety of domains beyond media, from signal normalisation to network security. Explore whether they could provide utility for your application.

Investors

  • The ‘post-GPU era’ will create new winners. Explore companies developing custom silicon for AI, for the data centre and edge devices.
  • Reinforcement learning offers solutions to a range of challenging problems. Identify companies taking advantage of reinforcement learning for competitive advantage.
  • Identify opportunities for portfolio companies to take advantage of advances in computer vision and language enabled by transfer learning.
  • Explore the field of quantum computing. While nascent, it will gain significance rapidly in the years ahead.

Policy-makers

  • Transfer learning, reinforcement learning and generative AI enable AI systems with greater capability and adaptability – and pose new risks to society. Explore the implications of emerging AI technology in Chapter 8.
  • The UK is an emerging leader in the nascent field of quantum computing. Review the National Quantum Technologies Programme to explore the UK’s strengths and challenges in quantum technology and identify opportunities for policy-makers’ support.

Explore our AI Playbook, a blueprint for developing and deploying AI, at www.mmcventures.com/research.

AI hardware is being optimised, customised and reimagined

Training the neural networks that power many AI systems is computationally intensive. Graphics Processing Units (GPUs) – hardware that is efficient at performing the matrix mathematics required – have enabled extensive progress and transformed the field of AI (see Chapter 3). In the last decade, computing performance for AI has improved at a rate of 2.5x per year (IBM). The performance of GPUs will continue to increase.

However, GPUs were designed for graphics processing – not AI. Manufacturers exploited GPUs’ ability to perform matrix calculations when it became apparent that AI benefited from the same mathematics. Frequently, just a third of a GPU’s core area is used for AI.
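
The link between GPUs and AI is easy to see in code: the core of a neural network’s forward pass is a set of large matrix multiplications, exactly the operation GPUs parallelise well. A minimal, illustrative sketch in NumPy (shapes and sizes are arbitrary):

```python
import numpy as np

# A single dense layer: outputs = activation(inputs @ weights + bias).
# Training repeats operations like this millions of times, which is why
# hardware that accelerates matrix multiplication accelerates AI.
batch, n_in, n_out = 64, 1024, 512            # arbitrary illustrative sizes
inputs = np.random.randn(batch, n_in)
weights = np.random.randn(n_in, n_out)
bias = np.zeros(n_out)

outputs = np.maximum(inputs @ weights + bias, 0)   # ReLU activation
print(outputs.shape)                               # (64, 512)
```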

As AI matures, greater demands are being placed on the hardware that powers it. Larger data sets, more model parameters, deeper networks, moving AI to ‘edge’ devices, and an ambition to tackle new challenges demand improved capability. “Current hardware was holding developers back.” (Nigel Toon, Graphcore)

Below, before describing breakthroughs in AI software techniques, we highlight three dynamics shaping AI hardware – the optimisation, customisation and reimagination of hardware for AI.

Competition among hardware providers is fierce. In response to recent industry benchmarking, which compared Google’s and NVIDIA’s processors (https://mlperf.org/results/), both parties claimed victory (https://bit.ly/2IgWK2T; https://bit.ly/2SYLEQd). Developers and consumers alike will benefit from intense competition, as new hardware:

  • lowers the cost of compute for AI, democratising access and accelerating proliferation of the technology;
  • increases the speed at which systems can be trained and iterated, shortening development cycles;
  • reduces required power consumption, enabling AI on ‘edge’ devices such as Internet of Things (IoT) units, autonomous vehicles, implanted medical devices and sensors;
  • enables more complex and effective models – better models can improve existing applications and enable new ones (in December 2018, Google used sophisticated deep learning to predict the 3D structure of proteins, based solely on their genetic sequences, for the first time); and
  • accelerates new approaches to AI, such as reinforcement learning (RL) and transfer learning (TL), which we explain below.

Tensor architectures are accelerating deep learning

Deep learning AI continues to offer myriad breakthroughs and benefits – in domains including computer vision and language and applications ranging from autonomous vehicles to medical diagnosis and language translation.

In response, vendors are optimising or customising hardware to support the use of popular deep learning frameworks. While supporting a more limited set of instructions, this hardware enables faster training and inference with common AI frameworks – with varying degrees of specialisation.

NVIDIA has introduced GPUs with architectures optimised for deep learning on a range of frameworks. The company’s Tesla GPUs contain hundreds of Tensor Cores that accelerate the matrix calculations at the heart of deep learning AI. Tesla GPUs deliver faster results with common AI frameworks, particularly for the convolutional neural networks used in computer vision systems.

Tesla GPUs enable suitable neural networks to be trained in a third of the time previously required (Fig. 43) and operate four times faster (Fig. 44). Compared with a traditional CPU, Tesla GPUs offer a 27-fold improvement.

Fig 43. Tesla GPUs enable suitable neural networks to be trained in a third of the previous time

Source: NVIDIA

Fig 44. Tesla GPUs allow suitable neural networks to operate four times faster than previously

Source: NVIDIA
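
Tensor Cores are exercised when frameworks run eligible operations in reduced (mixed) precision. A minimal sketch of mixed-precision training, assuming PyTorch and a CUDA-capable NVIDIA GPU; the model and data here are placeholders, not any benchmark described above:

```python
import torch
import torch.nn as nn

device = "cuda"                                    # assumes an NVIDIA GPU is available
model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()               # scales losses to avoid FP16 underflow

inputs = torch.randn(64, 1024, device=device)      # placeholder batch
targets = torch.randint(0, 10, (64,), device=device)

for _ in range(100):
    optimiser.zero_grad()
    with torch.cuda.amp.autocast():                # runs eligible ops in half precision,
        loss = nn.functional.cross_entropy(model(inputs), targets)  # where Tensor Cores apply
    scaler.scale(loss).backward()
    scaler.step(optimiser)
    scaler.update()
```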

Google’s Tensor Processing Unit (TPU) is an application-specific integrated circuit (ASIC) – a custom microchip – designed specifically to accelerate AI workloads on the popular TensorFlow framework.

After publicising its use of TPUs in May 2016, Google announced its second-generation TPU in May 2017 and third generation in May 2018. While first generation TPUs were limited to inferencing (processing queries through a trained network), subsequent generations accelerate system training as well as inference.

Optimised to process the mathematics required by TensorFlow, TPUs offer exceptional performance for TensorFlow applications. Moving from Google’s second-generation TPU to its third alone reduced the time required to train ResNet-50, an industry-standard image classification model, by nearly 40%.

Fig 45. Moving from Google’s second-generation TPU to its third reduced the time required to train an image classification model by nearly 40%

Source: Google

Initially, Google used TPUs only within its own data centres, to accelerate Google services including Google Photos (one TPU can process 100 million photos per day), Google Street View and Google’s RankBrain search facility. TPUs are now accessible to general developers and researchers via the Google Cloud Platform.
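
In practice, targeting a Cloud TPU from TensorFlow is largely a matter of wrapping model construction in a TPU distribution strategy. A minimal sketch, assuming recent TensorFlow 2.x and an already-provisioned Cloud TPU; the TPU name, model and dataset are illustrative placeholders:

```python
import tensorflow as tf

# Connect to a provisioned Cloud TPU (the name "my-tpu" is a placeholder).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Any Keras model built under the strategy scope is replicated across the TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# model.fit(train_dataset, epochs=5)   # train_dataset would be a tf.data.Dataset
```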

The post-GPU era: custom silicon is enabling new possibilities

Leading hardware manufacturers are diverging from architectures used in the past. In 2019 a new class of computer processors designed, from inception, for AI will emerge. Custom silicon, designed from first principles for AI, offers transformational performance, capability similar to existing systems for a fraction of the power or space, and greater versatility.

Incumbent microchip manufacturers, global technology companies and dozens of disruptive early stage companies including Cerebras, Graphcore and Mythic are developing next-generation processors for AI.

Graphcore, a privately-held ‘scale-up’ company in the UK that has attracted over $300m of venture capital funding, has developed an Intelligence Processing Unit (IPU) (Fig. 46). Graphcore’s IPU combines a bespoke, parallel architecture with custom software to offer greater performance than existing systems. Graphcore’s benchmarking suggests that its IPU can deliver 200-fold performance improvements in selected tasks, compared with GPUs (Fig. 47).

The IPU’s architecture and software enable large quantities of data to be consumed in parallel, instead of sequentially, and from multiple locations (‘graph computing’ in place of ‘linear addressing’). Data is transported across the IPU’s 1,000+ sub-processors more efficiently, and the IPU provides faster access to greater volumes of memory to reduce bandwidth limitations.

As well as enabling existing workloads to be processed more rapidly, new hardware architectures such as IPUs will enable developers to tackle previously intractable challenges.

Fig 46. Graphcore’s IPU is designed, from inception, for Artificial Intelligence

Source: Graphcore

Fig 47. Graphcore’s IPU could deliver 200-fold performance improvements in selected tasks

Source: Graphcore

Custom silicon is taking AI to the edge

While cloud computing proliferates, a ‘barbell’ effect is emerging as a new class of AI hardware is optimised for edge computing instead of the data centre.

Edge computing moves the processing of data from the cloud to the ‘edge’ of the internet – onto the devices where it is created, such as autonomous vehicles, drones, sensors and IoT devices. Increasingly, edge computing is required: as devices proliferate, connectivity and latency constraints demand on-device processing for many of them.

Numerous hardware manufacturers are developing custom silicon for AI at the edge. In October 2018, Google released the Edge TPU – a custom processor to run TensorFlow Lite models on edge devices. A plethora of early stage companies, including Gyrfalcon, Mythic and Syntiant, are also developing custom silicon for the edge.
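
As an illustration of the edge workflow, a trained model is typically converted to a compact on-device format before deployment. A minimal sketch, assuming TensorFlow 2.x; the model is a small placeholder, and Edge TPU deployment would additionally require full integer quantisation and a separate compilation step:

```python
import tensorflow as tf

# A small placeholder model; in practice this would be your trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert to TensorFlow Lite for on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # post-training quantisation shrinks the model

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```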

In 2019, as well as enabling next generation AI in the cloud, custom silicon will transform AI at the edge by coupling high performance with low power consumption and small size.

Quantum computing will unlock profound opportunities

Quantum computing is a paradigm shift in computing that exploits the properties of nature – quantum mechanics – to offer profound new possibilities. While nascent, quantum computing hardware and software are advancing rapidly. 2019 may be the year of ‘quantum supremacy’ – the first time a quantum computer solves a problem beyond the practical reach of any classical computer.

Quantum hardware, and associated software to accelerate AI, are emerging. In addition to building quantum processors, Google is developing quantum neural networks. In November 2018, an Italian team of researchers developed a functioning quantum neural network on an IBM quantum computer (https://bit.ly/2Gx1pee). Rigetti, a manufacturer of quantum computers and software, has developed a method for quantum computers to run certain AI algorithms.

While quantum computing technology will take time to mature, in the decade ahead quantum-powered AI will enable humanity to address previously intractable problems – from climate change to personalised medicine.

“In 2019, as well as enabling next generation AI in the cloud, custom silicon will transform AI at the edge by coupling high performance with low power consumption and small size.”

Breakthroughs in software development are delivering transformational results

While novel hardware will enable more powerful AI, recent breakthroughs in software development are delivering transformational results.

Below, we explain how advances in two alternative approaches to developing AI systems – RL and TL – are enabling the creation of programs with unrivalled capabilities. We also describe how a new AI software technique – the Generative Adversarial Network (GAN) – has reached a tipping point in capability that will reshape media and society.

Reinforcement learning is creating powerful AI agents

Recent advances in RL, an alternative approach to developing AI systems, are delivering breakthrough results – and raising expectations regarding AI’s long-term potential.

Typically, an AI system analyses training data and develops a ‘function’ – a way of relating an output to an input – that is used to assess new samples provided to the system (‘supervised learning’).

RL is an alternative approach that uses principles of exploration and reward. Human parents encourage children’s development through emotional rewards (smiling, clapping and verbal encouragement) and physical prizes (toys or sweets). Similarly, after an RL system is presented with a goal, it experiments through trial and error and is rewarded for progress towards the goal. While the system will initially have no knowledge of the correct steps to take, through cycles of exploration RL systems can rapidly improve.
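
The reward-and-exploration loop can be illustrated with tabular Q-learning, one of the simplest RL algorithms. The sketch below is a generic toy example (an agent learning to walk to a goal on a line), not a description of any system discussed in this chapter:

```python
import random

# Toy task: the agent starts at position 0 on a line and must reach position 5.
# Actions: 0 = step left, 1 = step right. Reaching the goal yields reward 1.
GOAL, EPISODES = 5, 300
q = [[0.0, 0.0] for _ in range(GOAL + 1)]        # Q-value table: q[state][action]
alpha, gamma, epsilon = 0.1, 0.9, 0.1            # learning rate, discount, exploration rate

def greedy(values):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(values)
    return random.choice([i for i, v in enumerate(values) if v == best])

for _ in range(EPISODES):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise exploit the best-known action.
        action = random.randint(0, 1) if random.random() < epsilon else greedy(q[state])
        next_state = max(0, min(GOAL, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == GOAL else 0.0
        # Nudge the estimated value of (state, action) towards the observed return.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

print([round(max(v), 2) for v in q])             # learned values rise as states near the goal
```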

RL is an efficient approach for teaching an agent to interact with its environment. Developers begin by specifying a goal and the elements within the agent’s control – for example, in robotics, the joints that a robot can move and the directions in which it can travel. By rewarding useful progress and penalising failure, researchers demonstrated as early as 1997 that RL could produce a robot that walked in a dynamic environment – without knowledge of the environment, or of how to walk (Benbrahim and Franklin).

Developments in RL are enabling profound milestones in the training of individual AI agents and, by teaching cooperation, groups.

18 months ago AlphaGo Zero, an RL system developed by DeepMind to play the board game Go, outperformed DeepMind’s previous AI Go system that had been trained using traditional, supervised learning. Provided only with the rules of Go, and without knowledge of any prior games, by playing against itself AlphaGo Zero reached the level of AlphaGo Master in 21 days. After 40 days, AlphaGo Zero surpassed all prior versions of AlphaGo to become, arguably, the strongest Go player in the world (Fig. 48). “Humans seem redundant in front of its self-improvement” (Ke Jie, World No. 1 Go player).

15 months ago, DeepMind developed a more general program – AlphaZero – that could play Chess, Shogi and Go at levels surpassing existing programs.

RL is well suited to creating agents that can perform autonomously in environments for which we lack training data, and enabling agents to adapt to dynamic environments. In 2019 RL will catalyse the development of autonomous vehicles. In the longer-term the exploration of space, where training data is limited and real-time adaptation is required, is likely to draw on RL.

Progress in RL is significant, more broadly, because it decouples system improvement from the constraints of human knowledge. RL enables researchers to “achieve superhuman performance in the most challenging domains with no human input” (DeepMind). We explore this profound implication of AI in Chapter 8.

Fig 48. Reinforcement learning enabled AlphaGo Zero, a system developed by DeepMind to play Go, to achieve unrivalled capability after 40 days of play

Source: Google DeepMind

Reinforcement learning is enabling multi-agent collaboration

In 2019, developments in RL will also enable groups of agents to interact and collaborate with each other more effectively.

Games, which present a safe and bounded environment for learning, are valuable for training RL systems (Aditya Kaul). Defence of the Ancients 2 (Dota2) is a cooperative online game for teams of five players (Fig. 49). While previous environments required AI agents to optimise only for their own success when responding to the actions of other teams, Dota2 requires agents to consider the success of their team.

OpenAI 5 is a Dota2 team developed by OpenAI, a non-profit AI research company building safe artificial general intelligence. OpenAI used RL, in a similar manner to DeepMind’s AlphaGo Zero, to train its team.

OpenAI 5 agents initially played against themselves to learn individual and cooperative skills. Subsequently, they were able to improve rapidly (Fig. 50) and defeat all but the top professional human teams.

Developing RL systems remains challenging. Designing reward functions can be difficult because RL agents will ‘game the system’ to obtain the greatest reward. OpenAI discovered that when it offered agents rewards for collecting power-ups – intended to help agents complete their task faster – the agents abandoned the task to collect power-ups instead. Even with sound reward functions, it can be difficult to avoid ‘overfitting’ solutions to their local environment.

Fig 49. Reinforcement learning is enabling effective multi-agent collaboration (AI agents playing Defence of the Ancients 2)

Source: OpenAI/Dota2

Fig 50. Reinforcement learning enabled the OpenAI 5 team to rapidly surpass the performance of most human teams

Source: OpenAI

Transfer learning is delivering breakthroughs in language AI – and beyond

Traditionally, AI systems must either be trained from a standing start, which demands data and time, or accept the outputs of existing, pre-trained networks whose training data is inaccessible. Accordingly, AI development is frequently inefficient or sub-optimal.

Transfer learning (TL) is an emerging approach for developing AI software, which enables programmers to create novel solutions by re-using structures or features of pre-trained networks with their own data. By drawing upon skills learned from a previous problem, and applying them to a different but related challenge, TL can deliver systems with stronger initial performance, more rapid system improvement, and better long-term results (Fig. 51).

Fig 51. Transfer learning can offer strong initial performance, faster improvement and better long-term results

Source: Torrey and Shavlik
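
In computer vision, TL typically means re-using a network pre-trained on a large dataset and training only a small new ‘head’ on the target task. A minimal sketch, assuming TensorFlow/Keras and the ImageNet-pretrained ResNet50 that ships with Keras; the class count and dataset are placeholders:

```python
import tensorflow as tf

# Re-use a network pre-trained on ImageNet as a fixed feature extractor.
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False                       # freeze the transferred weights

# Add a small, trainable head for the new task (e.g. 5 target classes).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(new_task_images, new_task_labels, epochs=5)  # far less data than training from scratch
```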

TL has been used to accelerate the development of AI computer vision systems for over a decade. In the last 24 months, however, interest in TL has grown 7-fold (Fig. 52). In 2019 TL is being applied to broader domains – particularly natural language processing.

Fig 52. Interest in transfer learning has grown 7-fold in 24 months

Source: Google trends

To date, natural language processing has operated at a shallow level, struggling to infer meaning at the level of sentences and paragraphs instead of words. Word embedding, an historically popular technique for inferring the meaning of a word based on the words that frequently appear near to it, is limited and susceptible to bias. The absence of extensive, labelled training data for natural language processing has compounded practitioners’ challenges.
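
Word embedding of this kind can be sketched with a small word2vec model. The example below is illustrative only, assuming the gensim library (version 4.x) and a toy corpus; meaningful neighbours require training on very large corpora:

```python
from gensim.models import Word2Vec

# A toy corpus; real embeddings are trained on billions of words.
corpus = [
    ["the", "doctor", "treated", "the", "patient"],
    ["the", "nurse", "treated", "the", "patient"],
    ["the", "dog", "chased", "the", "cat"],
    ["the", "cat", "chased", "the", "mouse"],
]

# Each word is mapped to a vector based on the words that frequently appear near it.
model = Word2Vec(sentences=corpus, vector_size=16, window=2, min_count=1, epochs=200)
print(model.wv.most_similar("doctor", topn=2))   # nearest words by context of use
```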

“By enabling better results with less training data, transfer learning is delivering transformational results. 2018 was a breakthrough year for the application of transfer learning in language processing.”

By enabling better results with less training data, TL is offering transformational results. 2018 was a breakthrough year for the application of transfer learning in language processing:

  • In March 2018, the Allen Institute for Artificial Intelligence used TL to deliver ELMo (Embeddings from Language Models), which improved the state of the art for a broad range of natural language tasks including question answering and sentiment analysis (https://bit.ly/2HY61MZ).
  • In May 2018, research institution Fast.AI released ULMFiT (Universal Language Model Fine-tuning for Text classification). ULMFiT underscored that TL can be applied to language processing tasks and introduced techniques for fine-tuning language models. By using TL, with only 100 labelled examples ULMFiT matched the performance of systems trained with 100-fold more data. Their method also offered improved text classification and reduced error rates by 18-24% on many data sets (https://bit.ly/2Hmur1d).
  • In mid-2018, OpenAI demonstrated the ability to achieve impressive results on a diverse range of language tasks from a single starting point. OpenAI’s general, task-agnostic model outperformed models that used architectures specifically crafted for tasks including question answering and textual entailment (https://bit.ly/2t9cjyM).
  • In October 2018, Google open-sourced BERT (Bidirectional Encoder Representations from Transformers), a TL-based language model that achieved state-of-the-art results on 11 natural language processing benchmarks (https://bit.ly/2OqmY5D). The ‘bidirectionality’ of BERT allows context to be carried between sentences for improved textual responses (see the fine-tuning sketch below).
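
To make the TL workflow concrete, the sketch below fine-tunes a pre-trained BERT model for a two-class text classification task. It is illustrative only, assuming PyTorch and a recent version of the Hugging Face transformers library; the example texts and labels are placeholders:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Load a network pre-trained on large text corpora, plus a new two-class head.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Placeholder labelled examples; transfer learning needs far fewer than training from scratch.
texts = ["the film was wonderful", "a dull and lifeless plot"]
labels = torch.tensor([1, 0])

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimiser = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                                  # a few fine-tuning passes
    outputs = model(**inputs, labels=labels)        # loss computed against the new head
    outputs.loss.backward()
    optimiser.step()
    optimiser.zero_grad()
```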

New, TL-powered models “learn fundamental properties of language” (Matthew Peters, ELMo). By doing so, they may unlock higher-level capabilities in language processing with universal utility – including text classification, summarisation, text generation, question answering and sentiment analysis.

Transfer learning enables complex systems to interact with the real world

In many situations, gathering data to train AI systems is laborious, expensive or dangerous. Amassing data to train an autonomous vehicle, for example, could require millions of hours of labour, billions of dollars and considerable risk. Simulation, combined with transfer learning, offers a solution. Instead of capturing real-life data, environments are simulated. Using TL, learnings from the simulation can then be applied to the real-world asset.

In the field of robotics, similarly, training models on real-world robots is slow and costly. Learning from a simulation, and transferring the knowledge to a physical machine, can be preferable.

TL may be “a pre-requisite for large-scale machine learning projects that need to interact with the real world” (Sebastian Ruder). As a result, “transfer learning will be the next driver of machine learning commercial success” (Andrew Ng).

Transfer learning offers adaptability and progress towards artificial general intelligence

TL offers profound as well as pragmatic benefits.

By reducing the volume of training data required to solve a problem, TL enables humans to develop systems in domains where we lack large numbers of labelled data-points for system training.

By offering greater adaptability, TL also supports progress towards artificial general intelligence (AGI) – systems that can undertake any intellectual task a human can perform. While AGI is far from possible with current AI technology, developments in TL are enabling progress. “I think transfer learning is the key to general intelligence. And I think the key to doing transfer learning will be the acquisition of conceptual knowledge – knowledge that is abstracted away from perceptual details of where you learned it, so you can apply it to a new domain” (Demis Hassabis, DeepMind).

“I think transfer learning is the key to general intelligence.”

Demis Hassabis

DeepMind

GANs will transform media and society

First proposed in 2014, Generative Adversarial Networks (GANs) are a novel, emerging AI software technique for the creation of lifelike media – including pictures, video, music and text. Exceptional recent progress in the development of GANs (Fig. 53) has enabled breakthrough results. Today, GANs can generate highly realistic media, which – despite being artificially generated – are virtually impossible to differentiate from real content (Fig. 54).

“Today, GANs can generate highly realistic media, which – despite being artificially generated – are virtually impossible to differentiate from real content.”

Fig 53. GANs’ ability to create lifelike media has rapidly improved

Source: Goodfellow et al, Radford et al, Liu and Tuzel, Karras et al, https://bit.ly/2GxTRot

Fig 54. GANs can generate artificial images that appear real (none of these individuals exist)

Source: NVIDIA

While GANs are frequently used to create images, their utility is broader. Additional uses include:

  • Alternative media: GANs can create different forms of media, such as music or text in the style of particular individuals.
  • System training: GANs can be used to improve the training of AI classification systems. Neural networks used for image classification are easily misled by minor changes to images, including those invisible to the human eye. A classifier can be made more robust by using it as a GAN discriminator, and using the GAN to create altered images.
  • Data manipulation: Frequently, it is important to remove personal information from data – such as the number plate of a vehicle or the face of a child in an image. Combining GANs with additional techniques, such as autoencoding, enables the addition or removal of features from data.
  • Data normalisation: GANs enable data from different sources to be normalised. Instead of feeding random noise into a GAN’s generator, developers can input types of signal data that are different from the desired output. The GAN will normalise the result. For example, health data collected from different devices will have different sampling frequencies and accuracy tolerances. GANs can normalise the signals for greater comparability.
  • Network security: Because GANs are structured to distinguish between the real and the counterfeit, they are valuable for domains such as cybersecurity where it is a priority to detect anomalies in network access or activity.
  • Data creation: AI classification systems are inhibited by the volume and quality of data available to train them. GANs can produce additional training data to improve classifiers’ accuracy. This technique has been used to improve the classification of liver lesions. Creating data using GANs poses challenges as well as opportunities. The GAN discriminator will have been trained using a limited data set. While the generator’s outputs may appear realistic, the images produced may not correctly reflect the appearance of a human body with the same disease.

GANs will deliver transformational benefits. The ability to generate lifelike images to a desired specification will reshape the media sector. Further, GANs will enable agencies to capture footage of brand ambassadors and then repurpose it to create an infinite range of convincing variations. Ambassadors could appear to speak in foreign languages (to promote goods and services in international markets) and discuss new products – without recording any additional footage.

GANs also present profound ethical and societal risks. GANs can be used to: splice individuals’ faces onto existing video without their consent; develop video in which individuals appear to speak words they have not spoken; create counterfeit evidence for criminal cases; and generate or alter footage to create ‘fake news’. We discuss the implications of GANs for society in Chapter 8.

“GANs will deliver transformational benefits. They also present profound risks. We discuss the implications of GANs in Chapter 8.”

GANs operate by two networks working in opposition

GANs operate by two networks – a ‘generator’ and ‘discriminator’ – working in opposition to create increasingly lifelike media.

For a visual GAN, a generator receives a random input, such as a matrix of numbers, and follows a series of mathematical transformations to convert the input into a picture. Initial results will be poor, resembling random sets of pixels (Fig. 55).

Fig 55. GANs operate with two networks working in opposition

Source: Naoki Shibuya

“GANs operate by two networks – a ‘generator’ and ‘discriminator’ – working in opposition to create increasingly lifelike media.”

The output of the generator is then passed to the discriminator. The discriminator is a separate convolutional neural network that has been trained to recognise counterfeit images of the type in question – in this example, handwritten digits. The discriminator assesses whether the image received from the generator is authentic or has been artificially generated. Following the discriminator’s decision, the correct answer is revealed.

If the discriminator correctly determines that the output is artificially generated, training adjusts the weights in the generator responsible for the output recognised as counterfeit, and reinforces the weights in the discriminator that led to the correct conclusion.

If the discriminator incorrectly assesses the output from the generator, the weights in the generator that led to a convincing image are reinforced, and the features in the discriminator that led to an incorrect result are down-weighted, to yield a better assessment in future.

As the two networks work in parallel, influencing one another, the output from the generator improves until the accuracy of the discriminator is no better than chance (a 50/50 probability of correctly determining the authenticity of the generated image).
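
The adversarial loop described above can be sketched in a few dozen lines. The toy example below, assuming PyTorch, trains a generator to mimic samples from a simple one-dimensional Gaussian rather than images, but the alternating generator and discriminator updates follow the same principle:

```python
import torch
import torch.nn as nn

# Toy 'real' data: samples from a Gaussian with mean 3 and standard deviation 0.5.
def real_batch(n):
    return torch.randn(n, 1) * 0.5 + 3.0

# Generator maps random noise to a (hopefully realistic) sample.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator outputs the probability that its input is real rather than generated.
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()
batch = 64

for step in range(3000):
    # 1) Train the discriminator to label real samples 1 and generated samples 0.
    real = real_batch(batch)
    fake = generator(torch.randn(batch, 8)).detach()   # detach: don't update the generator here
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake), torch.zeros(batch, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to produce samples the discriminator labels as real.
    fake = generator(torch.randn(batch, 8))
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster around the real distribution (mean near 3).
print(generator(torch.randn(1000, 8)).mean().item())
```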

“The discriminator assesses whether the image received from the generator is authentic or has been artificially generated.”