The State of AI 2019: Divergence

Chapter 8: The implications of AI

AI will have profound implications for companies and societies. AI will reshape sector value chains, enable new business models and accelerate cycles of creative destruction. While offering societies numerous benefits, AI poses risks of job displacement, increased inequality and the erosion of trust.

Summary

  • AI’s benefits can be abstracted to: innovation (new products and services); efficacy (perform tasks more effectively); velocity (complete tasks more quickly); and scalability (free activity from the constraints of human capacity). These benefits will have profound implications for consumers, companies and societies.
  • By automating capabilities previously delivered by human professionals, AI will reduce the cost and increase the scalability of services, broadening global participation in markets including healthcare and transport.
  • In multiple sectors including insurance, legal services and transport, AI will change where, and the extent to which, profits are available within a value chain.
  • New commercial success factors – including ownership of large, private data-sets and the ability to attract data scientists – will determine a company’s success in the age of AI.
  • New platforms, leaders, laggards and disruptors will emerge as the paradigm shift to AI causes shifts in companies’ competitive positioning.
  • AI, ‘x-as-a-service’ consumption, and subscription payment models will obviate select business models and offer new possibilities in sectors including transport and insurance.
  • As AI gains adoption, the skills that companies seek, and companies’ organisational structures, will change.
  • By reducing the time required for process-driven work, AI will accelerate innovation. This will compress cycles of creative destruction, reducing the period of time for which all but select super-competitors maintain value.
  • AI will provide profound benefits to societies, including: improved health; greater manufacturing and agricultural capability; broader access to professional services; more satisfying retail experiences; and greater convenience. AI also presents significant challenges and risks.
  • AI-powered automation may displace jobs. AI will enable the automation of certain occupations that involve routine. In other occupations, AI will augment workers’ activities. The short period of time in which large numbers of workers may be displaced could prevent those who lose their jobs from being rapidly reabsorbed into the workforce. Social dislocation, with political consequences, may result.
  • Biased systems could increase inequality. Data used to train AI systems reflects historic biases, including those of gender and race. Biased AI systems could cause individuals economic loss, loss of opportunity and social stigmatisation.
  • Artificial media may undermine trust. New AI techniques enable the creation of lifelike artificial media. While offering benefits, they enable convincing counterfeit videos. Artificial media will make it easy to harass and mislead individuals, and weaken societies by undermining trust.
  • AI offers trade-offs between privacy and security. As AI-powered facial recognition advances, to what extent will citizens be willing to sacrifice privacy to detect crime?
  • AI enables the high-tech surveillance state, with greater powers for control. China is combining real-time recognition with social scoring to disincentivise undesirable activity.
  • Autonomous weapons may increase conflict. The risk of ‘killer robots’ turning against their masters may be overstated. Less considered is the risk that conflict between nations may increase if the human costs of war are lower.

Executives

  • Evaluate how the benefits unleashed by AI – innovation, efficacy, velocity and scalability – will impact your industry.
  • Consider if AI can be used to reach new market participants and expand your addressable market.
  • Assess the shifts in your industry value chain that will occur as adoption of AI grows.
  • Evaluate the business model a disruptor might adopt in the age of AI, if freed from the “innovator’s dilemma”. What would the Netflix to your Blockbuster look like?
  • Assess the extent to which your company is developing the commercial success factors required for the age of AI.
  • Companies’ competitive positioning will change as adoption of AI increases. Develop an AI strategy to become a leader rather than a laggard.
  • Evaluate the suitability of your company’s skills and organisational design in light of changes AI will necessitate.
  • Recognise the need for responsible stewardship. AI presents risks to society – including issues of job displacement, bias, and privacy. Develop rigorous ethical frameworks to govern the AI systems you develop and use.

Entrepreneurs

  • Identify opportunities to take advantage of probable shifts in sector value chains that AI will cause.
  • Develop initiatives that will take advantage of the new market participants and business models that AI will present.
  • Identify weaknesses in incumbents’ competitive positioning that are likely to persist, or worsen, given their structure or strategy.
  • Be mindful of the risks AI poses to society. Develop robust frameworks for ethical development and regulatory compliance. Explore Chapter 8 of our AI Playbook (www.mmcventures.com/research) for an actionable guide.

Investors

  • Assess how the innovation, efficacy, velocity and scalability enabled by AI will impact your portfolio companies.
  • Identify investment opportunities in sectors that will be transformed as a result of AI altering value chains and enabling new market participants.
  • Evaluate opportunities to invest in companies structured around business models that will come of age as AI disrupts existing markets.
  • Assess entrepreneurs’ awareness of AI’s ethical risks, their mitigation strategies and compliance with regulatory best practices.

Policy-makers

  • Engage with experts in the field of AI bias to highlight the risks posed by prejudiced systems, create frameworks for best practice and highlight non-compliance.
  • Engage the public in debate regarding the trade-off desired between privacy and AI-enabled security.
  • Anticipate the proliferation of artificial media, and work with technology and media companies to support the creation of systems of trust.

AI will deliver innovation, efficacy, velocity and scalability

AI’s value, from finding patterns in data more effectively to automating previously manual tasks, can be abstracted to four key benefits (Fig. 94):

Fig 94. AI offers innovation, efficacy, velocity and scalability

Source: MMC Ventures

AI will have significant implications for markets and societies

Innovation, efficacy, velocity and scalability will have significant implications for economic systems, employees, consumers and society.

Below, we explain how AI will disrupt companies and markets by enabling:

  1. New market participants
  2. Shifts in sector value chains
  3. New business models
  4. New commercial success factors
  5. Changes in companies’ competitive positioning
  6. Shifts in skills and organisational design
  7. Accelerated cycles of innovation.

For societies, in addition to numerous benefits AI presents challenges and risks. Below, we describe how:

  1. AI-powered automation may displace jobs
  2. Biased systems could increase inequality
  3. Artificial media will undermine trust
  4. AI offers states greater control and presents trade-offs between privacy and security
  5. Autonomous weapons may increase conflict between nations.

1. New market participants
By automating capabilities previously delivered by human professionals, AI will reduce the cost and increase the scalability of services, significantly broadening participation in select markets.

Today, access to sectors including healthcare and financial services is limited to subsets of the global population.

Medical diagnosis, for example, is inaccessible to people in developing economies and expensive for those in developed nations. Diagnosis has been undertaken by experienced professionals, whose training is time consuming and whose scalability is limited, inhibiting supply and increasing cost.

AI will provide automated diagnosis for a growing proportion of conditions. The marginal cost of diagnosing a patient with an AI algorithm will be nil. With smartphone adoption in developing economies increasing rapidly, from 37% in 2017 to an estimated 57% by 2020 (GSMA), barriers to access are also falling rapidly. By transferring the burden of diagnosis from people to software, global access to primary care will increase. Millions of additional individuals will benefit from primary care, while the market for providers of relevant and associated technologies will expand.

“By automating capabilities previously delivered by human professionals, AI will reduce the cost and increase the scalability of services, broadening participation in select markets.”

2. Shifts in sector value chains
In multiple sectors AI will change where, and the extent to which, profits are made within a value chain.

In the insurance sector, revenue from car insurance accounts for 42% of global insurance premiums (Autonomous Research). As AI-powered autonomous vehicles gain adoption, the frequency of accidents will fall – and with it, insurers’ revenue.

UK car insurance premiums are expected to fall by as much as 63%, causing profits for insurers to fall by 81% (Autonomous Research). Insurers must anticipate and plan for a profound shift in their sector’s value chain.

In the legal services sector, clients are increasingly aware of, and less willing to pay for, deliverables that have not required the time or expertise of an experienced lawyer. In March 2017, Deutsche Bank announced that it would no longer pay City law firms for legal work undertaken by trainees and newly qualified lawyers. The automation enabled by AI will broaden the range of tasks that can be provided to clients at low cost. As clients expect greater use of AI, cost pressures on routine work will increase and value will shift further to high-end work.

In the transport sector, automotive finance provides 19%, on average, of car manufacturers’ pre-tax profits (MMC Ventures). Large automotive finance companies, including Ford Motor Credit, Toyota Financial Services, Nissan Motor Acceptance Corp and Hyundai Motor Finance, lend consumers money to buy new cars. As we describe next (‘New business models’), private vehicle ownership will decline as subscription-based services provide consumers with on-demand access to fleets of autonomous vehicles. Demand for, and value in, automotive finance for consumers is likely to decline.

42%

of global insurance premiums come from car insurance.

Source: Autonomous Research

3. New business models
AI, growth of ‘x-as-a-service’ consumption, and subscription payment models will obviate select business models and offer new possibilities in sectors including transport, insurance and healthcare.

The greatest impact of new corporate and consumer technologies is the new business models they enable, not the technical capabilities they provide.

In the transport sector, AI will transform the economic fabric of ownership and insurance. Cars are parked for an average of 96% of their lives (UITP Millennium Cities Database). Despite the cost and inefficiency of private car ownership, the model has been necessary to enable spontaneity, point-to-point convenience, comfort, privacy and security during travel.

An autonomous vehicle, summoned whenever required from a distributed fleet and used for the duration of a journey, will offer the same benefits while optimally utilising a fleet.

With the cost of the driver removed, and the cost of the vehicle and insurance divided over a greater volume of trips in a given period, the marginal cost of a journey will be lower. With growing use of transport-as-a-service subscription models, in which consumers pay a low monthly fee for on-demand access to a fleet of autonomous vehicles, private car ownership is likely to decline.

The impact on ‘downstream’ market participants will be as significant. The business models of local car dealerships, vehicle repair centres, petrol stations and charging centres will change as local ownership of private vehicles is displaced by large, managed fleets.

In the insurance sector, associated business models will be disrupted. The object of car insurance is likely to change, from the driver (who will play no role in an autonomous vehicle’s operations) to the vehicle manufacturer or service provider. The immediate buyer of car insurance will also change, from the end user to the manufacturer or service provider. (Ultimately, the cost will be passed on to the end user as a small component of their monthly subscription fee.) Accordingly, insurers’ business models in the automotive sector may shift from private policies to fleet-based agreements. Today, 87% of car insurance policies are personal, not commercial. This may fall to 40% (Autonomous Research).

“In the transport sector, AI will transform the economic fabric of car ownership and insurance.”

4. New commercial success factors
New commercial success factors will determine a company’s success in the age of AI.

A paradigm shift in technology offers companies new benefits while demanding new competencies. Cloud computing, for example, offered flexibility, scalability, reduced capital expenditure and faster upgrade cycles. However, it demanded new diligence processes, different supplier relationships and dynamics, internal competencies in change management and greater attention to security.

Success factors in the age of AI include:

  • The vision to embrace AI and the organisational changes it requires;
  • Ownership of large, non-public data sets to train and deploy market-leading AI algorithms;
  • A willingness to evaluate the opportunities and risks of sharing training data with partners and competitors;
  • The ability to attract, develop, retain and integrate data scientists within an organisation;
  • The ability to form effective partnerships with best-of-breed third-party AI software and service providers;
  • The ability to diligence AI partners effectively;
  • A willingness to understand and respond to regulatory challenges posed by AI;
  • A shift in mindset to the use of software that provides probabilistic instead of binary recommendations;
  • The ability to manage workflow changes that result from the implementation of AI systems;
  • The ability to manage challenges of organisational design and culture as AI augments, and in some cases replaces, personnel.

5. Changes in companies’ competitive positioning

New platforms, leaders, laggards and disruptors will emerge as the paradigm shift to AI causes significant shifts in companies’ competitive positioning.

Paradigm shifts in technology destabilise incumbents and enable new leaders to emerge. As adoption of cloud computing continues, for example, IT spend is being reallocated to cloud-native platforms (such as Amazon) and applications at the expense of incumbents.

AI will cause greater shifts as it alters value chains, enables new business models and demands different success factors from competitors. We expect ‘Platforms’, ‘Disruptors’, ‘Leaders’ and ‘Laggards’ to emerge.

“New leaders, laggards, platforms and disruptors will emerge.”

Among providers of AI:

Platforms – primarily Google, Amazon, IBM and Microsoft (GAIM) – provide the AI infrastructure, development environments and ‘plug and play’ AI services used by many developers and consumers of AI. With vast data sets, world-class AI teams and extensive resources, select GAIM vendors are well positioned to accrue value as platforms that support the provision of AI.

GAIM do not, however, have the data advantage, expertise or strategic desire to address the myriad domain-specific use cases required by businesses in sectors ranging from manufacturing, agriculture and education to retail, professional services and finance. This presents opportunities for disruptors.

Disruptors are early-stage, AI-led software companies tackling business problems in a novel way using AI. For incumbents, disruptors are a double-edged sword. Disruptors will enable the enterprises and small- and medium-sized businesses that embrace them, while eroding the value of those that lack the foresight to do so. Select disruptors will become tomorrow’s incumbents or be acquired by today’s.

Among buyers of AI (today’s enterprises, and small and medium-sized businesses):

Leaders will emerge in key industries by: anticipating the shifts in value chains and business models caused by AI; taking advantage of their large, proprietary data sets to train and deploy AI algorithms; having the organisational ability to deploy AI effectively; and having sufficient resources and reputation to attract high-quality AI talent. Leaders will extend their competitive advantage and enjoy particular benefits:

  1. In the ‘data economy’, economic returns will accrue disproportionately to companies that can extract value from information most effectively.
  2. Data network effects create wider competitive moats. Larger volumes of training data enable better algorithms, which deliver better products and services, which win more customers, who provide more data. Leaders will benefit from data network effects that competitors will struggle to overcome.

Laggards are buyers that lack the will or organisational ability to use AI effectively. While some enterprises will lack the foresight to adapt, more will falter due to limited organisational capability. Laggards will: move slowly to partner with disruptors or invest in their own AI teams; fail to take advantage of the extensive data sets and resources at their disposal; and struggle to attract AI talent. In the ‘data economy’, laggards will lose competitive advantage and market share significantly and rapidly.

“New leaders will anticipate the shifts in value chains and new business models enabled by AI.”

6. Shifts in skills and organisational design
As AI gains adoption, the skills that companies seek, and companies’ organisational structures, will change.

As companies vie for leadership in the AI era, they will seek different personnel and change the organisational principles around which they are structured.

41% of companies are considering the impact of AI on future skill requirements (PwC). A shift in hiring mix towards data scientists is likely. Data scientists extract meaning from data by collating, cleaning and processing it and then applying statistical techniques and AI algorithms. Companies’ engagement with data scientists is limited today. For example, while the world’s largest professional services and consulting firms average 5,000 to 15,000 in-house analytics professionals, we estimate that fewer than 8% of these are data scientists (MMC Ventures). Some large companies have as few as 100 data scientists. Tomorrow’s leaders are aggressively expanding their data science teams, recognising that time to market is key because of the potential for competitive advantage through data network effects (more data yields better algorithms, which provide improved products that attract more clients and data).

While adjusting their mix of personnel, companies will alter their organisational design. Hiring for adaptability will be increasingly important, as the range of tasks supported or undertaken by AI systems increases. One in three companies are redesigning their organisational structures from traditional hierarchies to multi-disciplinary teams (Deloitte) to enable greater adaptability.

7. Accelerating cycles of innovation
By reducing the time required for process-driven work, AI will accelerate the pace of business and innovation. This may compress cycles of creative destruction, reducing the period of time for which all but a select number of super-competitors maintain value.

With several occupations, and numerous constituent activities, automated or augmented with AI, the speed at which tasks can be completed will increase. By accelerating the pace of business, AI is likely to shorten cycles of innovation, adoption and consumption that have been compressing since the 1950s (Fig. 95).

Fig 95. Cycles of innovation, adoption and consumption are compressing

Source: European Environment Agency, based on Kurzweil

Historically, accelerating cycles of innovation have reduced the period of time for which large companies retain value. In 1965, companies in the S&P 500 stayed in the index for an average of 33 years (Innosight). By 1990, average longevity had fallen to 20 years. By 2012, 18 years was typical. By 2026, average tenure in the S&P 500 is forecast to shrink to 14 years (Innosight). While reduced longevity in stock market indices arises partly from technical factors, such as increasing merger and acquisition activity, creative destruction of incumbents has been accelerating. Faster cycles of disruption due to AI could further reduce large companies’ ability to maintain value.

However, the dynamics of AI, and today’s market leaders, may result in a divergence in longevity and the emergence of a small number of super-competitors. Three factors could lead to the emergence of super-competitors that maintain value for longer than companies in recent history.

First, AI offers network effects through data. Because training AI algorithms typically requires large volumes of data, companies with large, proprietary data sets can deliver more effective AI systems. Superior systems provide better results, which attract more customers, who bring additional data – creating a virtuous circle and powerful defensibility. Several of today’s largest technology companies, including Google, Amazon, Apple and Microsoft, have vast consumer data sets inaccessible to disruptors.

Second, today’s leading technology companies are investing in, and expanding into, emerging technologies and product categories more forcefully than many companies in the past. Leading technology companies are disrupting themselves. Google, a company conceived to index pages on the world wide web, has become a leader in autonomous vehicles and quantum computing. Amazon, a company that sold books online, is becoming a force in so many sectors that the company is mentioned on 10% of all US company quarterly earnings calls (Reuters).

Third, select 21st century technology companies are consolidating power by expanding up, and down, the technology ‘stack’. Providers of cloud storage, such as Amazon and Microsoft, are layering ever-higher levels of functionality – such as AI and security – into the environments they provide. Technology leaders are also expanding down the technology stack. Google and Apple now develop their own microprocessors for competitive advantage in mobile and AI computing. By expanding up and down the technology stack, companies can consolidate control and customer spend.

The combination of data network effects, greater investment in emerging technologies and product categories, and expansion up and down the technology stack may enable a small number of super-competitors to capture and maintain economic influence for a longer period of time than has been possible in recent history – amidst a broader bifurcation in corporate longevity.

AI offers benefits and risks to societies

AI will deliver numerous, profound benefits for societies. They include: accelerated cycles of innovation; broader access to better, less expensive healthcare; increased manufacturing capability and agricultural productivity; enhanced mobility with fewer accidents; improved management of financial assets and risk; broader access to lower-cost professional services; more efficient and satisfying retail experiences; and greater day-to-day convenience.

AI also presents significant challenges and risks. Below, we describe how:

  1. AI-powered automation may displace jobs;
  2. biased systems could increase inequality;
  3. artificial media will undermine trust;
  4. AI offers states greater control and presents trade-offs between privacy and security; and
  5. autonomous weapons may increase conflict between nations.

Increasingly, AI is enabling divergent futures. The extent to which risks crystallise will depend upon the choices and actions of citizens, organisations, companies and governments.

1. AI-powered automation may displace jobs
Job displacement is a significant risk associated with the proliferation of AI. AI will directly enable the automation of several occupations that involve routine and repetition – from truck-driving to telemarketing. Truck driving comprises 3.6 million jobs in the US (American Trucking Association). In many other occupations, AI will augment and then displace some workers in more complex roles, while reducing the need for additional workers to be hired as companies expand. In approximately 60% of occupations, at least 30% of constituent activities are technically automatable by adapting currently proven AI technologies (McKinsey Global Institute).

Analysis of UK census data since 1871 shows that, historically, contracting employment in agriculture and manufacturing – a result, in part, of automation – has been more than offset by rapid growth in the caring, creative, technology and business service sectors (Deloitte).

Greater automation of both manual and business service roles, however, may concentrate employment further in occupations resistant to automation, including care work and teaching. Whether or not, over time, AI creates more jobs than it destroys, the short time frame in which a large number of workers could be displaced, coupled with a reduction in the availability of similar roles, could prevent those who lose their jobs from being rapidly re-absorbed into the workforce. Social dislocation, with political consequences, may result.

2. Biased systems could increase inequality
Theoretically, AI has the potential to free decision-making from human bias by finding objective patterns in large data sets. However, AI systems typically learn by processing training data. Available data sets frequently reflect systemic historic biases, including those of gender and race.

The results from ‘word embedding’, an AI technique used to interpret written and spoken language, are an example. Word embedding creates mathematical representations of language: the meaning of each word is abstracted to a set of numbers (a vector) based on the words that frequently appear near it. However, when trained on the Common Crawl data set (a 145-terabyte collection of material published online), the word ‘women’ is closely associated with occupations in the humanities and the home, while ‘man’ is closely associated with science and technology professions (Caliskan, Bryson and Narayanan).
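
For illustration, the sketch below shows, in the spirit of Caliskan, Bryson and Narayanan’s analysis, how such an association can be measured: the cosine similarity between word vectors quantifies how closely two words are related, and the difference in similarity to ‘woman’ versus ‘man’ exposes a gendered lean. The four-dimensional vectors are invented placeholders for illustration only, not values from any real embedding model.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def gender_association(word_vec, female_vec, male_vec):
    """Similarity to the female term minus similarity to the male term.
    Positive values indicate the word sits closer to 'woman' than to 'man'."""
    return cosine(word_vec, female_vec) - cosine(word_vec, male_vec)

# Toy, invented 4-dimensional vectors for illustration only; real embeddings
# trained on corpora such as Common Crawl have hundreds of dimensions.
vectors = {
    "woman":     np.array([0.9, 0.1, 0.2, 0.0]),
    "man":       np.array([0.1, 0.9, 0.2, 0.0]),
    "homemaker": np.array([0.8, 0.2, 0.3, 0.1]),
    "engineer":  np.array([0.2, 0.8, 0.4, 0.1]),
}

for occupation in ("homemaker", "engineer"):
    score = gender_association(vectors[occupation], vectors["woman"], vectors["man"])
    print(f"{occupation}: gender association score = {score:+.3f}")
```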

Lack of diversity among AI development teams is compounding the problem. Groups representing majorities in the population are less likely to notice that data regarding minorities is lacking in training data they use. In a popular data set for training facial recognition systems, over 75% of faces are male and 80% are lighter-skinned (Buolamwini, Gebru).

Inadequate or imbalanced training data are causing AI systems to perform poorly and problematically, particularly when serving minorities. For example, AI-powered facial recognition systems that offer gender classification misgender just 1% of lighter-skinned males – but up to 7% of lighter-skinned females, 12% of darker-skinned males and 35% of darker-skinned females (Fig. 96) (Buolamwini, Gebru).

Fig 96. AI-powered facial recognition systems misgender 1% of lighter-skinned males but 35% of darker-skinned females

Source: J Buolamwini, M.I.T. Media Lab, via The New York Times

Algorithms will make decisions that have significant ramifications for individuals’ lives, in a growing range of domains from recruitment to credit. If bias is not recognised and removed from AI systems, individuals will suffer economic loss, loss of opportunity and social stigmatisation (Fig. 97). “If we fail to make ethical and inclusive AI, we risk losing gains made in civil rights and gender equity under the guise of machine neutrality.” (Joy Buolamwini).

“There is a battle going on for fairness, inclusion and justice in the digital world.” (Darren Walker, via The New York Times). To avoid ‘automating inequality’, developers can:

  • recognise the challenge, as a starting point for action;
  • develop diverse teams that reflect the communities they serve;
  • create balanced, representative data sets;
  • deploy ethics and testing frameworks for system validation.

“If we fail to make ethical and inclusive AI, we risk losing gains made in civil rights and gender equity under the guise of machine neutrality.”

Joy Buolamwini

Fig 97. There are potential harms from algorithmic decision-making

Source: Megan Smith via gendershades.org

3. Artificial media will undermine trust – ‘fake news 2.0’
Generative Adversarial Networks (GANs) are a novel, emerging AI software technique that enables the creation of lifelike media – including pictures, video, music and text (Chapter 5). Exceptional recent progress in the development of GANs (Fig. 98) has enabled breakthrough results. Today, GANs can generate highly realistic media, which – despite being artificially generated – are virtually impossible to differentiate from real content.

Fig 98. GANs’ ability to create lifelike media has rapidly improved

Source: Goodfellow et al, Radford et al, Liu and Tuzel, Karras et al, https://bit.ly/2GxTRot

GANs will have many positive implications. Individuals and companies will have the power to create and adapt media at unprecedented scale and low cost, democratising content creation. Brands will have the ability to re-purpose a single video, such as influencer footage, with new speech – offering infinite adaptation. Game designers will create more lifelike characters. Individuals will use GANs to create new music. Because GANs are technically structured to distinguish between real and counterfeit items, they also have useful applications beyond media, in sectors ranging from network security to healthcare.
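
For readers who want a concrete picture of that adversarial structure, the sketch below is a minimal, hypothetical illustration in PyTorch: a generator learns to produce one-dimensional samples that a discriminator cannot tell apart from ‘real’ samples drawn from a Gaussian. It shows the training loop only, and is far removed from the large models used to synthesise lifelike media.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator maps noise to 1-D samples; the discriminator scores real vs fake.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # 'real' data drawn from N(2, 0.5)
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator step: label real samples 1 and generated samples 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster around the real mean (~2.0).
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```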

GANs also present profound ethical and pragmatic risks. As GANs commoditise, individuals with limited resources will be able to create damaging media that appears counterfeit only on close scrutiny, if at all.

“Today, GANs can generate highly realistic media, which – despite being artificially generated – are virtually impossible to differentiate from real content.”

While Photoshop enabled photographs to be manipulated, GANs can be used to splice individuals’ faces onto existing video without their consent. GANs have already been used to create adult content – ‘deep fake’ pornography – in which a celebrity’s face, or the face of a private individual, is convincingly superimposed onto graphic material. “Nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look… eerily realistic.” (Scarlett Johansson, via The Washington Post). When abused, GANs can be used to embarrass and humiliate.

GANs can also be used to alter video so it appears that an individual has spoken words he or she has not. The speaker’s lips are convincingly re-mapped to synchronise with new audio. Given video of former President Barack Obama, researchers synthesised photorealistic, new lip-synched video (Fig. 99) (Suwajanakorn, Seitz and Kemelmacher-Shlizerman).

Fig 99. Given video of former President Obama, researchers synthesised photorealistic, new lip-synched video

Source: Suwajanakorn, Seitz and Kemelmacher-Shlizerman

GANs will progress from synthesising individuals to scenes. Footage of individuals and events will be generated, or altered, with little cost and effort, to create ‘fake news 2.0’ for political purposes or counterfeit evidence in criminal cases. As smartphones are used to record high-definition video, and videoconferencing solutions such as Skype and FaceTime are used pervasively, source material is becoming plentiful.

The proliferation of artificial media poses immediate and secondary risks. In the short term, artificial media will make it easy to mislead – to damage individuals by ascribing to them words they have not said and actions they have not performed.

In the longer term, the rise of artificial media will undermine trust. Positively, citizens will learn to question whether the media they see is authentic. However, if any media can be counterfeit, all media is open to challenge. What can be believed? Adversaries have recognised that sowing doubt and confusion to divide populations and inhibit collective action is frequently more powerful than direct action over the long term. In Nineteen Eighty-Four, the dystopian novel by George Orwell in which a ruling party persecutes independent thinking, citizens are taught to ignore what they see and hear. “The party told you to reject the evidence of your eyes and ears. It was their final, most essential command” (Nineteen Eighty-Four, George Orwell). In the decade ahead, as the unreal becomes real, society will grapple with challenges of truth and trust.

“In the decade ahead, society will grapple with challenges of truth and trust.”

4. AI offers states greater control and presents trade-offs between privacy and security
In the age of AI, citizens and governments must re-evaluate the balance between security and privacy they desire – while states could enjoy greater powers of social control.

AI-powered facial recognition systems offer unprecedented capability. Technical maturation coincides with the proliferation of high-resolution cameras. Every smartphone owner carries a camera in their pocket. Over 1.85 million CCTV cameras were in place in the UK as early as 2011; on average, a citizen is captured on CCTV an estimated 68 times per day (Cheshire Constabulary Camera Survey). To what extent will citizens and governments be willing to sacrifice anonymity and privacy to prevent and detect crime?

Further, the combination of AI and real-time analytics is enabling the high-tech surveillance state, with greater capacity for social control. With increasing accuracy, AI-powered gait analysis can recognise individuals from their shape and movement – even if their faces are hidden. “You don’t need people’s cooperation for us to be able to recognise their identity” (Huang Yongzhen, Watrix, via the Associated Press). China intends to combine real-time recognition with social scoring, to rate citizens according to their behaviour and habits. Individuals with undesirable behaviour may be inhibited from travelling, suffer reduced internet connectivity, be penalised when applying for government roles and be impeded from placing their children in desired schools.

“In the age of AI, citizens and governments must re-evaluate the balance between security and privacy they desire.”

5. Autonomous weapons may increase conflict between nations
Weapon systems have incorporated a degree of autonomy for decades. The Phalanx CIWS, for example, defends ships in 20 countries’ navies from missile attacks. The Phalanx combines a 20mm rotating Vulcan cannon with an automated system to interpret radar data, decide whether a target is a threat and engage it.

However, the combination of AI-powered computer vision systems, AI-based decision-making algorithms and improved robotics is enabling humanoid and aerial drones with greater capability and autonomy. The risk of ‘killer robots’ turning against their masters may be overstated. Less considered is the possibility that conflict between nations may increase if the human costs of war are lower. A country that thinks twice about sending young people into conflict may be more adventurous if the only assets in harm’s way are equipment.