Artificial Intelligence: A Case for Optimism

2017

In October 2014, Elon Musk posited that artificial intelligence (AI) is humanity’s “biggest existential threat.”1 Lest you find this statement rhetorical or hyperbolic, a year later Musk and several other technology leaders committed $1 billion to create a non-profit research company, OpenAI, with the goal of advancing AI in ways that are most likely to benefit humanity, rather than harm it.2

The view that AI poses an existential risk to humanity is not new,3 but it has acquired a new degree of urgency and appeal thanks to endorsement from public intellectuals like Musk, Bill Gates, and Stephen Hawking.4 Superintelligence, a 2014 bestseller by Oxford philosopher Nick Bostrom, added further academic credence to the argument. In comprehensive detail, Bostrom describes how the creation of an artificial general intelligence (AGI, or AI able to complete any intellectual task that a human being can) may lead to our demise as soon as the middle of this century. Bostrom argues that an AGI would be able to use its human-level intelligence to research how to design a better AI; therefore, recursive self-improvement would eventually lead to an “intelligence explosion” and the creation of a superintelligence that far exceeds human intellect. This superintelligence would pose an existential risk to humanity if its goals were not perfectly aligned with humanity’s values, since it might take unanticipated actions harmful to humanity but optimal with respect to its pre-programmed goals. Bostrom proposes several ways we might minimize the existential risk of creating an AGI, but he concludes that there is a nontrivial probability that humanity’s existence will be threatened by superintelligence before the end of this century. Given the noteworthy support and potentially catastrophic consequence of this argument, we should examine technological advances towards AGI with great scrutiny.

While futurists debate whether AI is going to destroy us, AI is already permeating our lives. You can ask Alexa to turn off your lights, or Siri to make your dinner reservation. When you upload a photo, Facebook’s neural networks immediately recognize who’s in the picture as accurately as a human.5 When you read the news, you’re bombarded with fantastical headlines like “Google’s Self-Driving Cars Have More Driving Experience Than Any Human”6 or “Google’s AI translation tool seems to have invented its own secret internal language.”7 Since 2012, AI-based startups have raised over $12 billion, growing more than 70 percent annually.8 Likewise, the world’s largest technology companies – including Amazon, Facebook, Google, IBM, and Microsoft – are each investing heavily in advancing AI as part of their core business strategies.9 Of course, a true AGI does not exist today. The current wave of AI technology is considered narrow AI, or AI specific to a narrow set of tasks like driving cars, making medical diagnoses, or winning games of Go. But given the colossal investment and manifest progress, surely we’re well on our way toward achieving AGI and its potentially disastrous consequences?

The answer seems non-obvious, so I surveyed the literature to pit the most compelling arguments against each other. Based on current evidence, I’m unconvinced that AGI or superintelligence poses an existential risk to humanity anytime soon. On the contrary, the evidence invites a measured optimism about advances in narrow AI technology benefiting humanity in the near and foreseeable future.

Specifically, I’ll advance three claims:

  1. It is theoretically possible for us to engineer an AGI and superintelligence.
  2. However, it is extremely unlikely that we will create an AGI in the foreseeable future.
  3. The proliferation of narrow AI applications will cause significant structural changes to the global economy long before AGI is feasible. While it’s difficult to predict whether narrow AI’s impact will be net positive or net negative for society, there is cause for optimism.

1. It is theoretically possible for us to engineer an AGI and superintelligence.

Given enough time and computational power, we could build an AGI. Evolution already created a proof of concept (homo sapiens), so with sufficient resources we could just simulate natural selection until an intelligent machine emerges. Of course, this would be an extraordinarily inefficient way to achieve AGI, but the thought illustrates that in theory it is possible to do. Human brains produce general intelligence, and like silicon computer chips they are made of matter and bound by the laws of physics. In principle, there is no reason to believe we cannot build an artificial replica of the human brain and, thus, human intelligence.

It is also likely that an AGI would be able to recursively improve itself to the point of superintelligence, where it is significantly more intelligent than any human. Bostrom cites numerous advantages that silicon has inherently over biological material, implying that an AGI would have a much higher upper bound on possible intelligence. For example, silicon neurons would be significantly faster than their biological analogs. A 2 GHz CPU (modest by today’s standards) operates 10 million times faster than a biological neuron, and CPUs are able to relay signals 2.5 million times faster than biological axons.10 With these speed advantages alone, an AGI could complete millennia of human intellectual work in minutes. Add in other advantages like an always-on work ethic, a longer lifespan, and a cloning process roughly as easy as “copy-paste,” and it is clear that an AGI could become much more intelligent than any human.
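
These ratios follow directly from the rough figures Bostrom uses (biological neurons firing at roughly 200 Hz and axons conducting signals at roughly 120 m/s, versus a 2 GHz clock and signals traveling near the speed of light). A quick arithmetic check, as a sketch under those assumed figures:

```python
# Rough check of the speed ratios, using Bostrom's approximate figures.
neuron_hz, cpu_hz = 200, 2e9        # ~200 Hz biological firing rate vs. a 2 GHz clock
axon_m_s, signal_m_s = 120, 3e8     # ~120 m/s axon conduction vs. near light speed
print(cpu_hz / neuron_hz)           # 10,000,000 -> "10 million times faster"
print(signal_m_s / axon_m_s)        # 2,500,000  -> "2.5 million times faster"
```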

Objection 1a: AGI is impossible. There is something uniquely biological about human-level intelligence that cannot emerge on silicon.

This objection is reminiscent of philosopher John Searle’s famous “Chinese Room” thought experiment.11 Imagine you do not speak Chinese and are locked in a room with a book of rules, which maps input Chinese characters to output Chinese characters. When given an input, you are instructed to supply the corresponding output specified in the rulebook. If you diligently follow the rules, Chinese speakers outside the room may believe you actually understand Chinese, even though you have no idea what the characters mean and are simply following a set of preprogrammed rules. Searle argues that computer programs similarly operate without true understanding, and that true understanding is a biological phenomenon caused by physical-chemical properties of the human brain.
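
To make the thought experiment concrete, here is a toy sketch of the room as a lookup table; the entries are invented for illustration, and the point is only that the program produces plausible replies without representing meaning anywhere:

```python
# A toy "Chinese Room": canned input-to-output rules, no understanding anywhere.
# The entries below are invented purely for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",
    "你叫什么名字？": "我没有名字。",
}

def room(message: str) -> str:
    # Follow the rulebook exactly; fall back to a stock reply for unknown inputs.
    return RULEBOOK.get(message, "请再说一遍。")

print(room("你好吗？"))   # Looks like fluent Chinese from outside the room.
```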

Perhaps biological processes are required for subjective conscious experience or “true understanding” or “qualia.” We don’t know for sure. But Bostrom would argue that qualia are orthogonal and not required for an AGI or superintelligence. A superintelligence just has to “greatly exceed the cognitive performance of humans in virtually all domains of interest.”10 With a sufficiently sophisticated rulebook and superhuman speed, a man in a Chinese Room would suffice.

On a related note, the “AI will never be able to do XYZ” argument has been made many times in the past and repeatedly proven wrong. In the 1950s, AI researchers at Carnegie Mellon declared: “If one could devise a successful chess machine, one would seem to have penetrated to the core of human intellectual endeavor.”12 Of course, now that computers routinely beat humans at chess, the field has revised the criteria for a machine to possess human-level intelligence such that mastery of chess is no longer sufficient. In short: if you argue that AGI is impossible or that there are certain cognitive tasks that computers will never be able to do, you may be falling into the same trap as AI researchers of the past.

Objection 1b: Human-level intelligence cannot be improved to the point of superintelligence. The computational complexity of real-world problems limits how much more intelligent a machine could be.

Technologist Ramez Naam argues that although we’re likely to create an AGI eventually, it’s unlikely we’ll create a superintelligence because real-world problems (like building a better AI) have a high degree of computational complexity.13 In other words, the more intelligent an AI gets, the harder it is to improve: If it takes X computational units to create an AI of intelligence 1, it will take more than 2X computational units to create an AI of intelligence 2. The nonlinear complexity of real world problems is likely to stifle Bostrom’s “intelligence explosion.” The machines will never get that much smarter than us.

However, Gwern Branwen offers compelling counterarguments to this objection.14 First, there are many ways to bypass a problem’s computational complexity: approximate solutions are often sufficient; randomized algorithms can yield approximate answers with high probability; and complex problems can often be solved in less time than the theoretical worst case. Second, an AGI has the potential to scale nonlinearly, too. Companies like Amazon, Microsoft, and Google have millions of computer cores in data centers around the globe, so there’s plenty of room for an AGI’s computational power to grow exponentially. Third, even small advantages in intelligence tend to have a big cumulative impact. For example, the top-rated chess AI, “Komodo,” has an Elo rating of 3358, about 17 percent higher than the peak rating of human world champion Magnus Carlsen. This rating difference implies that Komodo would lose 1 time out of 20; however, in best-of-five matches Komodo would only lose 1 time out of 500.
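
To see how a modest per-game edge compounds over a match, here is a minimal sketch that ignores draws and treats games as independent (so the exact figure differs somewhat from the one quoted above, but the order of magnitude is the same):

```python
from math import comb

# If the stronger player loses any single game with probability p, losing a
# best-of-five match means the opponent wins at least 3 of the 5 games.
def best_of_five_loss(p: float) -> float:
    return sum(comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(3, 6))

print(best_of_five_loss(1 / 20))   # ~0.0012: roughly one match lost in many hundred
```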

Summing up, an AGI could scale its computational power to tackle even highly complex, real-world problems. It probably won’t achieve omniscience as in science fiction, but an AGI with even minor advantages over humans would likely lead to an intellectually dominating superintelligence.

2. It is extremely unlikely that we will create an AGI in the foreseeable future.

While it is theoretically possible for us to create an AGI, it’s unlikely that this will happen in the near future. We’ve made marked progress in applying narrow AI to solve real-world problems, but we’re not much closer to discovering how to create AGI. Consider Andrej Karpathy’s tweet in response to Elon Musk’s assertion that AI may be more dangerous than nuclear weapons: “As an AI PhD student at Stanford, easy to see that AI right now is cheap tricks and regression. Can sleep well for many years.”15 To a degree, he’s right. Recent narrow AI milestones like superhuman facial recognition, self-driving cars, and mastery of the game Go are less the result of breakthroughs in understanding human intelligence, and more the result of old algorithms being run with more data and computational power. The resulting programs are extraordinarily good at the tasks they were designed to do, but they are unable to generalize in a way that remotely resembles human cognition.

One of the most generalizable AI programs we’ve seen to date comes from Google DeepMind’s 2015 paper in Nature, “Human-level control through deep reinforcement learning.”16 The DeepMind team trained a single AI agent to play 49 different Atari games, using only raw pixels and the game score as inputs, and it achieved at least human-level performance on 23 of them. It’s an impressive result, which deservedly received widespread acclaim. But it’s worth acknowledging just how far away this is from being anything like AGI. To be precise, the AI performed worse than a human on the other 26 games it was trained to play (and was completely incapable of playing several), so we have some ways to go before we consider Atari games “solved.” More importantly, we must appreciate that these 2D Atari games are vastly less complex than any real-world problem. An intelligent human’s reward function is not a simple integer counter, and the sensory inputs to the human brain have many orders of magnitude higher dimensionality than a tiny 210 x 160 pixel screen. Even if we had an AI agent that could learn to master the whole Atari game catalog, it would certainly not be capable of learning how to drive a car through the streets of New York or tell you why a Monty Python skit is funny or write an original philosophical response to Peter Singer’s Animal Liberation. The state of the art is very, very far away from a generalizable AI, or AGI.
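
For a sense of what learning from “only the game score” means mechanically, here is a minimal sketch of the learning rule at the core of the method, shown in its simplest tabular form on a toy corridor task. DeepMind’s agent replaces the table with a convolutional network reading raw pixels; none of the constants, environment details, or code below come from their paper:

```python
import random

random.seed(0)

# Tabular Q-learning on a toy corridor: the agent only ever sees a scalar
# reward (its "score"), yet learns a policy that reaches the goal.
N_STATES, GOAL = 6, 5              # states 0..5, reward only at state 5
ACTIONS = (-1, +1)                 # move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for _ in range(500):                               # training episodes
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        s2, r, done = step(s, a)
        # Bellman update: nudge Q(s, a) toward reward + discounted best future value.
        target = r + (0.0 if done else GAMMA * max(Q[(s2, b)] for b in ACTIONS))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

print({s: greedy(s) for s in range(GOAL)})         # learned policy: move right everywhere
```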

In spite of our achievements in narrow AI, we still don’t know how our brains produce general intelligence, and we likely need numerous breakthroughs in our understanding of neuroscience if we are to engineer AGI. Entrepreneur Maciej Ceglowski likens today’s AI researchers to 17th century alchemists: we have a lot of clues about where intelligence comes from, but we need some critical scientific breakthroughs before we will be able to measure and understand which clues are pointing us in the right direction versus which ones are leading us astray.17 Of course, breakthroughs by their nature are rare and unpredictable. As a result, AGI does not seem like a promising bet in the short term.

Objection 2a: On average, experts say we’ll have AGI around 2050.

While we’d be foolish to ignore expert opinions, this statement is misleading. By combining the results of several recent expert surveys, Bostrom calculates that the median expert estimate places a 50 percent probability on AGI by 2040 and 90 percent probability by 2075.10 However, Bostrom also cautions that there is a “wide spread of opinion” and that historically AI researchers have had a poor record of predicting the future of their field. A separate study by Armstrong and Sotala (2012) assesses predictions from 62 AI experts and similarly finds that expert predictions vary widely with little correlation, and that there’s no significant difference between “expert” and “non-expert” predictions – indicating that most expert predictions may be biased by psychological factors rather than grounded in actual expertise.18 One popular theory is that prognosticators tend to predict that breakthroughs will occur roughly two decades into the future, since that’s “near enough to be attention-grabbing,” but distant enough that a string of breakthroughs is believable and the reputational risk of an incorrect prediction is low.10 In any case, the “average expert opinion” does not seem to provide compelling evidence that AGI is on the horizon.

Objection 2b: If Moore’s Law continues and our computational power keeps doubling, we’ll be able to simulate a whole human brain by 2050. Hence, AGI.

Moore’s Law refers to the observation that the number of transistors on a chip – and with it, our computational power – has roughly doubled every two years since the 1970s. Ray Kurzweil’s popular “Law of Accelerating Returns” contends that this observation applies more generally, and that technological progress itself is exponential.19 Proponents of this view argue that if we extrapolate our current rate of progress, we will have enough computational power to simulate every neuron and synapse in the human brain by 2050. Effectively, we will be able to emulate an entire human brain and thereby create AGI.

However, this reasoning falls short for two reasons. The first is the problem of induction: Moore’s Law makes no claim that it will continue indefinitely; it’s simply an observation of the past. In the words of Paul Allen, “These ‘laws’ will work until they don’t.”20 So like Nassim Taleb’s turkey on the day before Thanksgiving,21 we cannot blindly assume that what happened in the past will happen in the future.

The second reason is that to simulate a whole human brain, we need more than increased computational power; we also need scientific breakthroughs in our understanding of cognition. As Neil Lawrence, a professor of machine learning and computational biology at the University of Sheffield, points out, we cannot possibly simulate each neuron down to the quantum level – the computational requirements would be absurd.22 Instead, we must model neurons with some level of abstraction, and without breakthroughs in our understanding of how the brain creates intelligence, we will not know what elements of biochemistry and physics are required for these models. Moreover, Paul Allen argues that our progress in understanding human cognition is currently slowed by a “complexity brake”: the more we learn about the brain, the more we realize we don’t understand. How does the brain enable us to experience feelings like happiness or fear? How does the brain enable us to learn and remember? How does the brain enable us to dream and imagine? These are fundamental questions that we currently lack the tools to answer.23 Unfortunately, breakthroughs are inherently unpredictable, so it is irrational to assume that we are on the cusp of understanding how to emulate an entire human brain, even if we acquire the computational power suggested by present-day extrapolations.

Further, let’s assume that the requisite intellectual breakthroughs do occur and Moore’s Law continues unabated. Even if we understood how to model and simulate the human brain in a way that would produce AGI, the computational requirements would likely be much higher than Kurzweilian forecasters assume. The neuron models used in today’s simulations, like the Blue Brain Project, are almost certainly too simple to generate intelligence. We cannot assume that simulating 86 billion of these simplistic neurons at human speed in 2050 would be sufficient for AGI. In all likelihood, we will need significantly more sophisticated neuron models, which would increase the computational requirements by orders of magnitude. Even in this best-case scenario, whole brain emulation appears a long way off.
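
To see how sensitive the “2050” date is to these assumptions, here is a back-of-the-envelope extrapolation. The starting capacity, doubling period, and candidate emulation requirements below are illustrative assumptions, not measured values (published estimates of the compute needed for whole brain emulation span many orders of magnitude):

```python
import math

# Back-of-the-envelope version of the "brain by 2050" extrapolation.
current_year = 2017
current_ops = 1e17                 # assume ~100 petaFLOPS available today
doubling_years = 2.0               # Moore's-Law-style doubling period
requirements = [1e18, 1e21, 1e25]  # candidate whole-brain-emulation costs (ops/s)

for target in requirements:
    doublings = math.log2(target / current_ops)
    print(f"{target:.0e} ops/s reached around {current_year + doublings * doubling_years:.0f}")
# ~1e18 -> ~2024, 1e21 -> ~2044, 1e25 -> ~2070: shifting the assumed requirement
# by a few orders of magnitude moves the date by decades.
```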

Objection 2c: Narrow AI applications are becoming as good or better than humans at an increasing number of tasks. If we create enough narrow AI programs that are capable of completing different tasks and put them together, surely we’ll have created the equivalent of AGI?

It’s true that narrow AI applications are becoming proficient in an expanding set of domains; however, there are many intellectually important tasks that a narrow AI likely cannot solve without being an AGI itself – for instance, making policy recommendations on how to address social unrest relating to income inequality. Narrow AI lacks the ability to generalize to unforeseen problems, which is an essential feature of human-level intelligence and a prerequisite for tasks like original, creative, and abstract thinking. Current methods for creating narrow AI are insufficient for tackling such tasks. Absent scientific breakthroughs that yield AGI directly, it is unlikely that we will create narrow AI programs capable of all the tasks required for an AGI, even if combined.

Objection 2d: What if we create superintelligence by either genetically enhancing humans or adding neural prosthetics?

Bostrom argues that creating a superintelligence through human-machine interfaces is likely an “AI-complete” problem. That is, in order to reliably and bi-directionally transfer information between the human and machine components of a brain, you’d likely need a full model of how the human brain produces intelligence. At that point, you’d be able to create an AGI through whole brain emulation anyway. Moreover, even if we could transfer information between human and machine, to create a superintelligence you would likely need to make the whole brain mechanical because biological neurons are significantly slower than their silicon counterparts. Thus, superintelligence is more likely to arise from whole brain emulation than from neural prosthetics.

Separately, the topic of genetic enhancement is a complicated one. In the interest of scoping this essay to artificial intelligence, I will not address it here.

Objection 2e: Even if it’s extremely unlikely that we will create an AGI soon, you must concede that there is a chance that it will happen. After all, you said it’s unpredictable! So given the existential risk, we must devote resources now to developing safe AGI.

I’m not opposed to research on how to create a safe AGI; however, it’s important that this research not detract from humanity’s other, more pressing research needs. For example, nuclear proliferation, income inequality, and internet surveillance are all immediate problems that are exacerbated by technology and carry existential risk.24 Our research efforts – including those related to AI – must not neglect the issues that threaten humanity in the short term. I’ll touch on this more in the next section.

It’s important we not forget common sense when considering what to do about AGI. We must be wary of Pascal’s Wager and Ceglowski’s “parlor trick,” where “by multiplying such astronomical numbers by tiny probabilities, you can convince yourself that you need to do some weird stuff.”17 The mere possibility of superintelligence being an existential risk does not entail that we need to focus on it at the expense of more imminent threats.

3. Narrow AI will cause significant structural changes to the global economy in the foreseeable future. It’s difficult to predict whether AI’s impact will be net positive or net negative for society, but there is cause for optimism.

As historian Melvin Kranzberg observed, “Technology is neither good nor bad; nor is it neutral.”25 Indeed, the goodness or badness of narrow AI will depend on the contexts in which it is used, since the same underlying technology may be beneficial to humanity in one context but harmful in another. For example, the same natural language processing methods used to automatically flag “fake news” on Facebook26 might also be used by authoritarian governments to automatically censor controversial stories.27 The same image recognition methods used to detect breast cancer in mammograms28 might also be used by weaponized drones to autonomously identify targets.29 Clearly, technological factors alone are insufficient to predict whether the net impact of narrow AI will be positive or negative.

However, the impact will certainly not be neutral. It seems probable that for better or for worse, narrow AI will cause significant structural changes to the global economy in the coming decades. Already, narrow AI is making machines “good enough” to replace humans at an increasing number and variety of tasks, and it’s likely that this trajectory will continue. Three reasons in particular stand out:

  1. We are generating and digitizing an increasing amount and variety of data, which enables us to train narrow AI programs to perform new types of tasks.
  2. Hardware is becoming more powerful, which enables us to train more complex models. Moreover, powerful hardware is becoming more affordable and accessible, enabling more people to participate in building narrow AI programs.
  3. Investor funding and media attention around narrow AI is growing, which incentivizes more students, researchers, entrepreneurs, corporations, and governments to devote resources toward applying and improving AI across a broader set of domains.

Examples of significant AI-driven change are already on the horizon. Self-driving vehicles, for instance, are likely to replace human-driven ones sooner rather than later. The U.S. Council of Economic Advisers estimates that 2.2 to 3.1 million jobs will be affected.30 Whether the new equilibrium will be better or worse for society is hard to say. On the one hand, millions of jobs may be lost; on the other hand, new jobs may be created (e.g., urban planning) and workers may become more productive due to lower transportation costs. One way or another, the impact will be significant.

More generally, narrow AI seems likely to accelerate the transition of human labor towards non-routine jobs. Non-routine jobs (e.g., personal health aides or public relations managers) involve tasks that aren’t rule-based and tend to emphasize social interaction, empathy, and creativity, making these jobs difficult for machines to do. By contrast, routine jobs (e.g., factory workers or office secretaries) involve tasks that are rule-based and tend to require minimal discretion, making these jobs relatively easy for machines to learn to do via pattern recognition and learning algorithms. As narrow AI progresses, machines will acquire proficiency in routine jobs across an increasingly broad range of industries. As a result, humans will shift toward non-routine jobs where they have a comparative advantage. This represents a significant structural shift in the economy.

Thus, the evidence points to narrow AI effecting widespread changes to society in the foreseeable future. From today’s vantage point, it’s difficult to predict whether these changes will ultimately be positive or negative for society. Nonetheless, we have two things going for us: Narrow AI has the potential to dramatically improve our lives, and we control our own destiny. I see this as cause for optimism.

Objection 3a: Narrow AI will negatively impact society. It will cause permanent and widespread job loss for unskilled workers in the coming decades, and in the long term it will cause permanent and widespread job loss even for skilled workers. As narrow AI machines consume an increasing proportion of the job market, capitalism will lose all semblance of meritocracy. The lack of socioeconomic mobility for humans will cause global unrest and instability.

The jobs-versus-automation debate is not new. For narrow AI, the debate boils down to how you weigh the evidence of the past against the idiosyncrasies of the present.

Viewed through a historical lens, narrow AI appears unlikely to cause widespread joblessness. It’s tempting to fear that artificially intelligent machines are going to replace us in the workforce, but past waves of technology-driven automation, like the Industrial Revolution, have consistently created more jobs than they destroyed. To explain this phenomenon, David Autor, an economist from MIT, argues that when tasks are completed more quickly or cheaply through automation, demand tends to increase for human workers to do related tasks that have not been automated.31 For example, consider the spread of ATMs in the 1970s: while these machines did reduce the number of tellers required to operate a bank branch by automating deposits and withdrawals, they also reduced the cost of operating a branch. Lower operating costs enabled banks to open branches in new locations based on customer demand, which ultimately resulted in a net increase in bank teller jobs. The new status quo merely skewed the work responsibilities of bank tellers towards non-automated tasks like sales and customer service.31

But if history suggests that technology-driven automation is more creative than destructive, then why does the Luddite argument still have such appeal? Stewart et al. (2015), a group of economists from Deloitte, argue that the jobs-versus-automation debate is biased against technological change because technology’s job-destroying effects are so visible, while its job-creating effects are unpredictable.32 Again, it’s tempting to assume – incorrectly – that there is a fixed amount of work in the economy, based on the jobs that exist today. However, Stewart et al. argue that there are many “pressing, unmet needs even in the rich world,” such as “the care of the elderly and the frail, lifetime education and retraining, health care, physical and mental well-being.”32 Even as machines become broadly responsible for routine tasks, there will be plenty of work left demanding a human touch.

Still, futurists like Martin Ford argue that the current wave of AI-driven automation is different from the past. Ford likens the job market to a pyramid, where the top consists of a small number of skilled workers who drive innovation, and the bottom consists of a large number of less-skilled workers doing relatively routine and predictable tasks. Historically, Ford argues, workers at the bottom of the pyramid have navigated automation by transitioning from one routine job to another: “The person who would have worked on a farm in 1900, or in a factory in 1950, is today scanning bar codes or stocking shelves at Walmart.”33 But advances in narrow AI are making machines exponentially smarter, to the point where machines will eventually dominate the base of the job skills pyramid and even encroach on the “safe area” at the top. Even if we invest heavily in education and training, Ford finds it unlikely that we are “somehow going to cram everyone into that shrinking region at the top.”33 To maintain stability in an economy where machines have largely replaced human labor, Ford suggests we consider instituting a basic income guarantee.33

While Ford’s outlook seems overly pessimistic, the thrust of his argument has merit: narrow AI is enabling machines to automate an increasing variety of predictable tasks, and a lot of humans are employed to do predictable things. It’s likely that the spread of narrow AI will automate away a large number of jobs that exist today. A widely cited study by Frey and Osborne (2013) examines the social intelligence, creativity, and perception and manipulation requirements of 702 occupations, and estimates that 47 percent of total US employment is at risk in the coming decades due to computerization.34 Subsequent analysis by the Organization for Economic Cooperation and Development (OECD) argues that Frey and Osborne overestimate job automatability by ignoring the fact that many occupations contain a variety of tasks, making it more difficult to automate the occupation entirely. By accounting for the heterogeneity of tasks within occupations, the OECD estimates that only 9 percent of jobs are at risk.35 Of course, Ford would argue that these proportions will only rise as narrow AI gets smarter and enables increasingly non-routine tasks to be automated. Even if we conservatively accept the OECD’s 9 percent figure, we can anticipate a nontrivial fraction of the workforce being forced to find new work as a result of AI-driven automation.

However, at least with respect to the foreseeable future, Ford seems to underestimate the capacity of AI to create new jobs for humans. While the top of Ford’s job skills pyramid will gradually shrink as AI gets smarter, it is also likely to expand as the spread of AI generates new job opportunities for humans. A recent study by the U.S. Council of Economic Advisers attempts to identify broad areas where AI-driven automation is likely to create jobs. Beyond the obvious case where demand increases for human labor to develop AI technologies, human labor may complement narrow AI in areas like medical care where human judgment and social interaction are important. Similarly, paradigm shifts like an infrastructural transition to self-driving cars may necessitate new occupations and more employment.30 It’s difficult to predict exactly how much opportunity AI will create, but it appears that Ford’s job skills pyramid is larger and less static than his pessimism suggests.

Summing up, advances in narrow AI seem unlikely to cause widespread joblessness in the foreseeable future; however, these advances will likely force a large number of humans (particularly less-skilled ones) to switch to less routine occupations. To avoid negative outcomes, policymakers should be vigilant and ensure that these workers are properly trained to transition jobs. Since such policymaking is entirely under our control, I am optimistic – or hopeful, at least – we can navigate these challenges successfully.

Objection 3b: Narrow AI will negatively impact society for at least two other reasons:

  1. Narrow AI will enable increasingly broad and intrusive surveillance by governments. By using automated tools to monitor, predict, and manipulate the behavior of every individual, governments will become dangerously more powerful than their citizenry. Those in power will be tempted to abuse this power imbalance.
  2. The proliferation of automated systems will make it easy for small mistakes to harm large populations. For example, automated weapons systems will make it easy to kill entire populations, and “black box” automated profiling systems will make it easy to discriminate against various subpopulations unnoticed. Even if not intentional, mistakes are inevitable because the codebases and machinery behind these automated systems will be complex.

These are powerful objections that we cannot ignore in our pursuit of ever-better AI. Also, these are probably not the only other ways that advances in narrow AI might harm humanity. However, these problems are avoidable. We should not respond to these objections by giving up our pursuit of AI; rather, we should proactively tackle these problems head-on by building safeguards into our systems and enacting policies that protect those most vulnerable to AI-driven change.

On the matter of surveillance, the core problem isn’t narrow AI, which is just a tool that enables humans to interpret and act upon large amounts of data more efficiently and at scale. The problem is the data itself, because knowledge is what gives power to intelligent agents – whether human or machine. So, to the extent that citizens’ private data are available to government actors, citizens will be vulnerable to exploitation. Advances in narrow AI simply exacerbate this more fundamental problem. To prevent narrow AI from facilitating intrusive government surveillance and popular manipulation, we should enact policies that limit the government’s – or frankly, anyone’s – ability to gather information about the private lives of its citizens. Given the scope of the government’s existing infrastructure for collecting such information, for instance by tapping into the internet backbone,36 this issue commands a high degree of urgency already.

On the matter of proliferating automated systems, there are two important considerations. First, the likelihood of small mistakes snowballing into disproportionate damage is a general property of automated systems, regardless of whether they are powered by AI. We must exercise particularly extreme caution whenever weapons are automated, but advances in narrow AI shouldn’t make us any more pessimistic than we already are about, for example, nuclear proliferation.

Second, the proliferation of automated systems powered by AI introduces a much subtler concern around algorithmic bias. Decision-making algorithms are necessarily biased by the humans who design and interact with them. Kristian Hammond, a professor of computer science at Northwestern University, identifies several mechanisms by which this can occur:37

  1. Data-driven bias: Systems that learn from skewed training datasets will reflect similar biases in their decision-making. For example, algorithms used to predict criminal recidivism and inform sentencing decisions in courtrooms have been shown to be unfairly biased against black defendants and in favor of white defendants.38 (A minimal synthetic sketch of this mechanism follows the list.)
  2. Bias through interaction: Systems that learn can be manipulated by interactions with humans. For example, Tay, a Twitter-based chatbot developed by Microsoft in 2016, lasted only 24 hours before users had taught it to make racist and misogynistic tweets.39
  3. Emergent and similarity bias: Systems that offer personalized information, like news, to humans tend to lock us inside “echo chambers” or “filter bubbles” where we only see information that confirms our existing beliefs.40
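
As a minimal synthetic sketch of the first mechanism (the numbers are invented; the point is only that a model fit to skewed records inherits the skew):

```python
import random

random.seed(0)

# Synthetic illustration of data-driven bias: two groups re-offend at the same
# true rate, but offenses in group A are recorded more often (e.g., heavier
# policing), so the historical labels used for training are skewed.
TRUE_RATE = 0.30
RECORD_PROB = {"A": 0.9, "B": 0.5}      # probability an offense enters the data

def observed_rate(group, n=100_000):
    """Fraction of people in the data with a *recorded* re-offense."""
    recorded = sum(
        1 for _ in range(n)
        if random.random() < TRUE_RATE and random.random() < RECORD_PROB[group]
    )
    return recorded / n

# A model trained to predict the recorded label can do no better than these
# observed rates, so it scores group A as "higher risk" than group B even
# though the underlying behavior is identical.
for group in ("A", "B"):
    print(group, round(observed_rate(group), 3))    # roughly: A 0.27, B 0.15
```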

To mitigate the threat of pervasive algorithmic bias as narrow AI systems take on increasing responsibility, we should a) promote diversity and inclusion in the AI community, and b) strive to make decision-making algorithms interpretable rather than black boxes, so humans can audit them for fairness.

Ultimately, advances in narrow AI are not without danger. Increasingly powerful technologies will be available to increasingly many people. However, with careful thought and planning, I believe we can navigate these dangers and move toward a world in which narrow AI is net beneficial to society.

So what should we do?

Between high-profile technologists like Elon Musk predicting that AGI is humanity’s greatest existential threat, and market research firms projecting that investment in narrow AI will triple this year,41 AI has generated a staggering amount of hype. We cannot ignore it.

Based on current evidence, it appears unlikely that AGI or superintelligence poses an existential risk to humanity requiring urgent action. The bigger risks, it seems, come from more imminent advances in narrow AI technology. However, these risks are navigable, and on balance I believe that we can reasonably afford a measured optimism about advances in narrow AI technology being net beneficial to humanity in the near and foreseeable future.

Specifically, I have argued:

  1. In theory, we can build an AGI and superintelligence.
  2. It is extremely unlikely that we will do so in the foreseeable future.
  3. In the foreseeable future, the proliferation of narrow AI applications will cause significant structural changes to the global economy. While it’s difficult to predict whether the outcome will be net positive or net negative for society, I see cause for optimism.

Of course, the future is not predetermined, and these beliefs are conditioned only on what we know from the past and present. Advances in AI technology will not necessarily benefit society. However, if we’re in the business of speculating on the future of humanity, AI seems a worthy bet.

Thanks to Midori Takasaki for reading drafts of this essay.

References

  1. https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/ 

  2. https://openai.com/blog/introducing-openai/ 

  3. http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html 

  4. http://futureoflife.org/ai-open-letter/ 

  5. https://research.fb.com/publications/deepface-closing-the-gap-to-human-level-performance-in-face-verification/ 

  6. http://fortune.com/2016/10/05/google-self-driving-cars-milestone/ 

  7. https://techcrunch.com/2016/11/22/googles-ai-translation-tool-seems-to-have-invented-its-own-secret-internal-language 

  8. https://www.cbinsights.com/blog/artificial-intelligence-startup-funding/ 

  9. https://www.partnershiponai.org/ 

  10. Nick Bostrom, Superintelligence (2014)

  11. http://faculty.arts.ubc.ca/rjohns/searle.pdf 

  12. http://aitopics.org/sites/default/files/classic/Feigenbaum_Feldman/C&T-Newll-Shaw-Simon.pdf 

  13. http://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html 

  14. https://www.gwern.net/Complexity%20vs%20AI 

  15. https://twitter.com/karpathy/status/495772988361277440 

  16. https://storage.googleapis.com/deepmind-data/assets/papers/DeepMindNature14236Paper.pdf 

  17. http://idlewords.com/talks/superintelligence.htm

  18. https://intelligence.org/files/PredictingAI.pdf 

  19. http://www.kurzweilai.net/the-law-of-accelerating-return 

  20. https://www.technologyreview.com/s/425733/paul-allen-the-singularity-isnt-near/ 

  21. Nassim Taleb, Black Swan (2007) 

  22. http://inverseprobability.com/2016/05/09/machine-learning-futures-6 

  23. https://www.weforum.org/agenda/2014/09/understanding-human-brain 

  24. https://alexcbecker.net/blog.html#against-ai-risk 

  25. https://www.jstor.org/stable/3105385 

  26. http://www.wsj.com/articles/facebook-could-develop-artificial-intelligence-to-weed-out-fake-news-1480608004 

  27. https://www.nytimes.com/2016/11/22/technology/facebook-censorship-tool-china.html 

  28. http://www.forbes.com/sites/janetwburns/2016/08/29/artificial-intelligence-can-help-doctors-assess-breast-cancer-risk-thirty-times-faster/ 

  29. https://www.washingtonpost.com/national/national-security/a-future-for-drones-automated-killing/2011/09/15/gIQAVy9mgK_story.html 

  30. https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF

  31. http://www.economist.com/news/special-report/21700758-will-smarter-machines-cause-mass-unemployment-automation-and-anxiety

  32. https://www2.deloitte.com/uk/en/pages/finance/articles/technology-and-people.html

  33. Martin Ford, Rise of the Robots: Technology and the Threat of a Jobless Future (2015)

  34. http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf 

  35. http://www.oecd-ilibrary.org/social-issues-migration-health/the-risk-of-automation-for-jobs-in-oecd-countries_5jlz9h56dvq7-en;jsessionid=2xkfwu3bkffsf.x-oecd-live-02 

  36. https://www.nytimes.com/2014/07/03/world/privacy-board-backs-nsa-program-that-taps-internet-in-us.html 

  37. https://techcrunch.com/2016/12/10/5-unexpected-sources-of-bias-in-artificial-intelligence/ 

  38. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing 

  39. http://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist 

  40. https://en.wikipedia.org/wiki/Filter_bubble 

  41. https://go.forrester.com/wp-content/uploads/Forrester_Predictions_2017_-Artificial_Intelligence_Will_Drive_The_Insights_Revolution.pdf