The Future of Employment (Part III)

How Susceptible are Jobs to Computerisation?

by Carl Benedikt Frey and Michael A Osborne

http://www.oxfordmartin.ox.ac.uk (September 17 2013)

III. The Technological Revolutions of the Twenty-First Century

The secular price decline in the real cost of computing has created vast economic incentives for employers to substitute computer capital for labour {15}. Yet the tasks computers are able to perform ultimately depend upon the ability of a programmer to write a set of procedures or rules that appropriately direct the technology in each possible contingency. Computers will therefore be relatively more productive than human labour when a problem can be specified – in the sense that the criteria for success are quantifiable and can readily be evaluated (Acemoglu and Autor, 2011). The extent of job computerisation will thus be determined by technological advances that allow engineering problems to be sufficiently specified, which sets the boundaries for the scope of computerisation. In this section, we examine the extent of tasks that computer-controlled equipment can be expected to perform over the next decades. In doing so, we focus on advances in fields related to Machine Learning (“ML”), including Data Mining, Machine Vision, Computational Statistics and other subfields of Artificial Intelligence (“AI”), in which efforts are explicitly dedicated to the development of algorithms that allow cognitive tasks to be automated. In addition, we examine the application of ML technologies in Mobile Robotics (“MR”), and thus the extent of computerisation in manual tasks.

Our analysis builds on the task categorisation of Autor, et al (2003), which distinguishes between workplace tasks using a two-by-two matrix, with routine versus non-routine tasks on one axis, and manual versus cognitive tasks on the other. In short, routine tasks are defined as tasks that follow explicit rules that can be accomplished by machines, while non-routine tasks are not sufficiently well understood to be specified in computer code. Each of these task categories can, in turn, be of either manual or cognitive nature – that is, they relate to physical labour or knowledge work. Historically, computerisation has largely been confined to manual and cognitive routine tasks involving explicit rule-based activities (Autor and Dorn, 2013; Goos, et al, 2009). Following recent technological advances, however, computerisation is now spreading to domains commonly defined as non-routine. The rapid pace at which tasks defined as non-routine only a decade ago have become computerisable is illustrated by Autor, et al’s (2003) assertion that: “Navigating a car through city traffic or deciphering the scrawled handwriting on a personal check – minor undertakings for most adults – are not routine tasks by our definition”. Today, the problems of navigating a car and deciphering handwriting are sufficiently well understood that many related tasks can be specified in computer code and automated (Veres, et al, 2011; Plotz and Fink, 2009).

Recent technological breakthroughs are, in large part, due to efforts to turn non-routine tasks into well-defined problems. Defining such problems is helped by the provision of relevant data: this is highlighted in the case of handwriting recognition by Plotz and Fink (2009). The success of an algorithm for handwriting recognition is difficult to quantify without data to test on – in particular, determining whether an algorithm performs well for different styles of writing requires data containing a variety of such styles. That is, data is required to specify the many contingencies a technology must manage in order to form an adequate substitute for human labour. With data, objective and quantifiable measures of the success of an algorithm can be produced, which aid the continual improvement of its performance relative to humans.
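
To make this concrete, the following purely illustrative sketch (not the method of Plotz and Fink, 2009) uses the scikit-learn library and its small bundled digits dataset as a stand-in for a handwriting corpus: once labelled data is available, the success of a recognition algorithm reduces to a quantifiable measure – its accuracy on examples held out from training.

```python
# A minimal sketch: with labelled data, the success of a handwriting-recognition
# algorithm becomes an objective, quantifiable measure -- accuracy on examples
# the algorithm has never seen.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

digits = load_digits()                      # small stand-in for a handwriting corpus
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

model = SVC(gamma=0.001).fit(X_train, y_train)

# Held-out accuracy is the quantifiable criterion of success described in the text;
# more (and more varied) data exposes contingencies such as different writing styles.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```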

As such, technological progress has been aided by the recent production of increasingly large and complex datasets, known as big data {16}. For instance, with a growing corpus of human-translated digitalised text, the success of a machine translator can now be judged by its accuracy in reproducing observed translations. Data from United Nations documents, which are translated by human experts into six languages, allow Google Translate to monitor and improve the performance of different machine translation algorithms (Tanner, 2007).
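
The following toy example (invented sentences, not Google Translate’s evaluation pipeline) illustrates the underlying idea: a candidate machine translation can be scored against a human reference by n-gram overlap, a BLEU-style precision measure.

```python
# A toy illustration: given a human reference translation, a candidate machine
# translation is scored by how many of its n-grams also appear in the reference.
from collections import Counter

def ngram_precision(candidate, reference, n=2):
    """Fraction of the candidate's n-grams that also appear in the reference."""
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return overlap / total if total else 0.0

# Invented example sentences.
reference = "the committee adopted the resolution without a vote"
candidate = "the committee adopted the resolution with no vote"
print("bigram precision:", round(ngram_precision(candidate, reference), 2))
```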

Further, ML algorithms can discover unexpected similarities between old and new data, aiding the computerisation of tasks for which big data has newly become available. As a result, computerisation is no longer confined to routine tasks that can be written as rule-based software queries, but is spreading to every non-routine task where big data becomes available (Brynjolfsson and McAfee, 2011). In this section, we examine the extent of future computerisation beyond routine tasks.

III.A. Computerisation in non-routine cognitive tasks

With the availability of big data, a wide range of non-routine cognitive tasks are becoming computerisable. That is, further to the general improvement in technological progress due to big data, algorithms for big data are rapidly entering domains reliant upon storing or accessing information. The use of big data is afforded by one of the chief comparative advantages of computers relative to human labour: scalability. Little evidence is required to demonstrate that, in performing the task of laborious computation, networks of machines scale better than human labour (Campbell-Kelly, 2009). As such, computers can better manage the large calculations required in using large datasets. ML algorithms running on computers are now, in many cases, better able to detect patterns in big data than humans.

Computerisation of cognitive tasks is also aided by another core comparative advantage of algorithms: their absence of some human biases. An algorithm can be designed to ruthlessly satisfy the small range of tasks it is given. Humans, in contrast, must fulfil a range of tasks unrelated to their occupation, such as sleeping, necessitating occasional sacrifices in their occupational performance (Kahneman, et al, 1982). The additional constraints under which humans must operate manifest themselves as biases. Consider an example of human bias: Danziger, et al (2011) demonstrate that experienced Israeli judges are substantially more generous in their rulings following a lunch break. It can thus be argued that many roles involving decision-making will benefit from impartial algorithmic solutions.

Fraud detection is a task that requires both impartial decision making and the ability to detect trends in big data. As such, this task is now almost completely automated (Phua, et al, 2010). In a similar manner, the comparative advantages of computers are likely to change the nature of work across a wide range of industries and occupations.
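
A minimal sketch of the general approach (synthetic data; real fraud systems are far more elaborate than this) shows how an unsupervised anomaly detector flags transactions that deviate from the bulk of observed behaviour, with none of the fatigue or bias discussed above.

```python
# A hedged sketch of automated fraud screening: an unsupervised anomaly detector
# flags transactions that look unlike the bulk of the (synthetic) data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[50, 1.0], scale=[20, 0.5], size=(1000, 2))   # [amount, hours since last purchase]
fraud = rng.normal(loc=[900, 0.05], scale=[100, 0.02], size=(5, 2))   # unusually large, rapid purchases
transactions = np.vstack([normal, fraud])

# The detector isolates roughly the most unusual one percent of transactions.
detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = detector.predict(transactions)       # -1 marks suspected anomalies
print("flagged transaction indices:", np.where(flags == -1)[0])
```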

In health care, diagnostics tasks are already being computerised. Oncologists at Memorial Sloan-Kettering Cancer Center are, for example, using IBM’s Watson computer to provide chronic care and cancer treatment diagnostics. Knowledge from 600,000 medical evidence reports, 1.5 million patient records and clinical trials, and two million pages of text from medical journals is used for benchmarking and pattern recognition purposes. This allows the computer to compare each patient’s individual symptoms, genetics, family and medication history, et cetera, to diagnose and develop a treatment plan with the highest probability of success (Cohn, 2013).

In addition, computerisation is entering the domains of legal and financial services. Sophisticated algorithms are gradually taking on a number of tasks performed by paralegals, contract and patent lawyers (Markoff, 2011). More specifically, law firms now rely on computers that can scan thousands of legal briefs and precedents to assist in pre-trial research. A frequently cited example is Symantec’s Clearwell system, which uses language analysis to identify general concepts in documents, can present the results graphically, and proved capable of analysing and sorting more than 570,000 documents in two days (Markoff, 2011).
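
The internals of systems such as Clearwell are proprietary, but the following hedged sketch illustrates the kind of language analysis involved: TF-IDF features and clustering group invented documents by broad concept, so that a reviewer could triage thousands of briefs by theme.

```python
# An illustrative sketch of concept-based document triage for pre-trial research:
# TF-IDF features plus clustering group documents by broad theme. Documents are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "licensee shall pay royalties on net sales of the patented device",
    "the patent claims cover a method for wireless data transmission",
    "employee agrees not to disclose confidential information after termination",
    "the non-disclosure obligations survive expiry of the employment contract",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)

# Two clusters here: roughly "patent/licensing" versus "confidentiality/employment".
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for doc, label in zip(docs, labels):
    print(label, doc[:50])
```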

Furthermore, the improvement of sensing technology has made sensor data one of the most prominent sources of big data (Ackerman and Guizzo, 2011). Sensor data is often coupled with new ML fault- and anomaly-detection algorithms to render many tasks computerisable. A broad class of examples can be found in condition monitoring and novelty detection, with technology substituting for closed-circuit TV (CCTV) operators, workers examining equipment defects, and clinical staff responsible for monitoring the state of patients in intensive care. Here, the fact that computers lack human biases is of great value: algorithms are free of irrational bias, and their vigilance need not be interrupted by rest breaks or lapses of concentration. Following the declining costs of digital sensing and actuation, ML approaches have successfully addressed condition monitoring applications ranging from batteries (Saha, et al, 2007), to aircraft engines (King, et al, 2009), water quality (Osborne, et al, 2012) and intensive care units (ICUs) (Clifford and Clifton, 2012; Clifton, et al, 2012). Sensors can equally be placed on trucks and pallets to improve companies’ supply chain management, used to measure the moisture in a field of crops, or used to track the flow of water through utility pipes. The latter allows for automatic meter reading, eliminating the need for personnel to gather such information. For example, the cities of Doha, Sao Paulo, and Beijing use sensors on pipes, pumps, and other water infrastructure to monitor conditions and manage water loss, reducing leaks by forty to fifty percent. In the near future, it will be possible to place inexpensive sensors on light poles, sidewalks, and other public property to capture sound and images, likely reducing the number of workers in law enforcement (MGI, 2013).
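
As a simple illustration of condition monitoring (far simpler than the cited ML approaches, and using simulated readings), a novelty detector can be built from nothing more than a baseline of healthy sensor data and a threshold on how far new readings depart from it.

```python
# An illustrative novelty detector for condition monitoring: flag sensor readings
# that depart from a baseline of normal operation. All readings are simulated.
import numpy as np

rng = np.random.default_rng(1)
baseline = rng.normal(loc=72.0, scale=1.5, size=500)       # e.g. healthy bearing temperature
mu, sigma = baseline.mean(), baseline.std()

def is_anomalous(reading, threshold=4.0):
    """Flag a reading whose z-score against the healthy baseline exceeds the threshold."""
    return abs(reading - mu) / sigma > threshold

stream = [72.3, 71.8, 73.1, 79.6, 85.2]                     # the last values drift upward
for t, value in enumerate(stream):
    if is_anomalous(value):
        print(f"t={t}: reading {value} flagged for inspection")
```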

Advances in user interfaces also enable computers to respond directly to a wider range of human requests, thus augmenting the work of highly skilled labour, while allowing some types of jobs to become fully automated. For example, Apple’s Siri and Google Now rely on natural user interfaces to recognise spoken words, interpret their meanings, and act on them accordingly. Moreover, a company called SmartAction now provides call computerisation solutions that use ML technology and advanced speech recognition to improve upon conventional interactive voice response systems, realising cost savings of sixty to eighty percent over an outsourced call centre consisting of human labour (CAA, 2012). Even education, one of the most labour-intensive sectors, will most likely be significantly impacted by improved user interfaces and algorithms building upon big data. The recent growth in MOOCs (Massive Open Online Courses) has begun to generate large datasets detailing how students interact on forums, their diligence in completing assignments and viewing lectures, and their ultimate grades (Simonite, 2013; Breslow, et al, 2013). Such information, together with improved user interfaces, will allow for ML algorithms that serve as interactive tutors, with teaching and assessment strategies statistically calibrated to match individual student needs (Woolf, 2010). Big data analysis will also allow for more effective predictions of student performance, and of students’ suitability for post-graduation occupations. These technologies can equally be implemented in recruitment, most likely resulting in the streamlining of human resource (HR) departments.
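
A schematic sketch of such performance prediction (the features and data below are invented) fits a logistic model that maps simple interaction signals – lectures watched, assignments completed, forum posts – to an estimated probability of passing a course.

```python
# An illustrative predictor of student outcomes from invented MOOC interaction data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
lectures = rng.integers(0, 40, n)        # lectures viewed
assignments = rng.integers(0, 10, n)     # assignments completed
forum_posts = rng.integers(0, 30, n)     # forum activity
X = np.column_stack([lectures, assignments, forum_posts])

# Synthetic ground truth: diligence drives the pass probability.
score = 0.05 * lectures + 0.3 * assignments + 0.02 * forum_posts
passed = (score + rng.normal(0, 0.5, n) > 2.0).astype(int)

model = LogisticRegression().fit(X, passed)
new_student = np.array([[35, 9, 12]])
print("estimated pass probability:", round(model.predict_proba(new_student)[0, 1], 2))
```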

Occupations that require subtle judgement are also increasingly susceptible to computerisation. For many such tasks, the unbiased decision making of an algorithm represents a comparative advantage over human operators. In the most challenging or critical applications, as in ICUs, algorithmic recommendations may serve as inputs to human operators; in other circumstances, algorithms will themselves be responsible for appropriate decision-making. In the financial sector, such automated decision-making has played a role for quite some time. AI algorithms are able to process a greater number of financial announcements, press releases, and other information than any human trader, and then act faster upon them (Mims, 2010). Services like Future Advisor similarly use AI to offer personalised financial advice at larger scale and lower cost. Even the work of software engineers may soon largely be computerisable. For example, advances in ML allow a programmer to leave complex parameter and design choices to be appropriately optimised by an algorithm (Hoos, 2012). Algorithms can further automatically detect bugs in software (Hangal and Lam, 2002; Livshits and Zimmermann, 2005; Kim, et al, 2008), with a reliability that humans are unlikely to match. Big databases of code also offer the eventual prospect of algorithms that learn how to write programs to satisfy specifications provided by a human. Such an approach is likely to eventually improve upon human programmers, in the same way that human-written compilers eventually proved inferior to automatically optimised compilers. An algorithm can better keep the whole of a program in working memory, and is not constrained to human-intelligible code, allowing for holistic solutions that might never occur to a human. Such algorithmic improvements over human judgement are likely to become increasingly common.
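
As an illustration of leaving design choices to an optimiser (generic random search over hyperparameters, not the specific methods surveyed by Hoos, 2012), the sketch below lets an automated search select a classifier’s parameters and score them by cross-validation, with no hand-tuning by the programmer.

```python
# A small sketch of automated parameter selection: candidate hyperparameters are
# proposed at random and scored by cross-validation, replacing manual tuning.
from scipy.stats import loguniform
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
search = RandomizedSearchCV(
    SVC(),
    param_distributions={"C": loguniform(1e-2, 1e3), "gamma": loguniform(1e-5, 1e-1)},
    n_iter=20,
    cv=5,
    random_state=0,
)
search.fit(X, y)
print("chosen parameters:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
```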

Although the extent of these developments remains to be seen, estimates by MGI (2013) suggest that sophisticated algorithms could substitute for approximately 140 million full-time knowledge workers worldwide. Hence, while technological progress throughout economic history has largely been confined to the mechanisation of manual tasks, requiring physical labour, technological progress in the twenty-first century can be expected to contribute to a wide range of cognitive tasks, which, until now, have largely remained a human domain. Of course, many occupations being affected by these developments are still far from fully computerisable, meaning that the computerisation of some tasks will simply free up time for human labour to perform other tasks. Nonetheless, the trend is clear: computers increasingly challenge human labour in a wide range of cognitive tasks (Brynjolfsson and McAfee, 2011).

III.B. Computerisation in non-routine manual tasks

Mobile robotics provides a means of directly leveraging ML technologies to aid the computerisation of a growing scope of manual tasks. The continued technological development of robotic hardware is having a notable impact upon employment: over the past decades, industrial robots have taken on the routine tasks of most operatives in manufacturing. Now, however, more advanced robots are gaining enhanced sensors and manipulators, allowing them to perform non-routine manual tasks. For example, General Electric has recently developed robots to climb and maintain wind turbines, and more flexible surgical robots with a greater range of motion will soon perform more types of operations (Robotics-VO, 2013). In a similar manner, the computerisation of logistics is being aided by the increasing cost-effectiveness of highly instrumented and computerised cars. Mass-production vehicles, such as the Nissan LEAF, contain on-board computers and advanced telecommunication equipment that render the car a potential fly-by-wire robot. {17} Advances in sensor technology mean that vehicles are likely to soon be augmented with even more advanced suites of sensors. These will permit an algorithmic vehicle controller to monitor its environment to a degree that exceeds the capabilities of any human driver: such a controller can simultaneously look both forwards and backwards, natively integrate camera, GPS and LIDAR data, and is not subject to distraction. Algorithms are thus potentially safer and more effective drivers than humans.

The big data provided by these improved sensors is offering solutions to many of the engineering problems that had hindered robotic development in the past. In particular, the creation of detailed three-dimensional maps of road networks has enabled autonomous vehicle navigation, most notably illustrated by Google’s use of large, specialised datasets collected by its driverless cars (Guizzo, 2011). It is now completely feasible to store representations of the entire road network on-board a car, dramatically simplifying the navigation problem. Designing algorithms that can navigate throughout the changing seasons, particularly after snowfall, has been viewed as a substantial challenge. However, the big data approach can address this by storing records from the last time snow fell, against which the vehicle’s current environment can be compared (Churchill and Newman, 2012). ML approaches have also been developed to identify unprecedented changes to a particular piece of the road network, such as roadworks (Mathibela, et al, 2012). This emerging technology will affect a variety of logistics jobs. Agricultural vehicles, forklifts and cargo-handling vehicles are imminently automatable, and hospitals are already employing autonomous robots to transport food, prescriptions and samples (Bloss, 2011). The computerisation of mining vehicles is further being pursued by companies such as Rio Tinto, seeking to replace labour in Australian mine-sites. {18}
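
A toy analogue of this idea (not the actual system of Churchill and Newman, 2012; the descriptors are invented) keeps several stored records of the same stretch of road – summer, after snowfall, night – and matches the vehicle’s current sensor snapshot against the closest stored experience before localising.

```python
# A toy sketch of matching the current view of a road segment against stored records.
import numpy as np

# Stored experiences: crude appearance descriptors for one road segment.
experiences = {
    "summer": np.array([0.82, 0.10, 0.35]),
    "after_snowfall": np.array([0.25, 0.90, 0.15]),
    "night": np.array([0.05, 0.12, 0.80]),
}

def closest_experience(current_descriptor):
    """Return the stored record most similar to the current view (Euclidean distance)."""
    return min(experiences, key=lambda k: np.linalg.norm(experiences[k] - current_descriptor))

current = np.array([0.30, 0.85, 0.20])     # a snowy-looking scene
print("localising against stored record:", closest_experience(current))
```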

With improved sensors, robots are capable of producing goods with higher quality and reliability than human labour. For example, El Dulze, a Spanish food processor, now uses robotics to pick up heads of lettuce from a conveyor belt, rejecting heads that do not comply with company standards. This is achieved by measuring their density and replacing them on the belt (IFR, 2012a). Advanced sensors further allow robots to recognise patterns. Baxter, a 22,000 US Dollar general-purpose robot, provides a well-known example. The robot features an LCD screen displaying a pair of eyes that take on different expressions depending on the situation. When the robot is first installed or needs to learn a new pattern, no programming is required. A human worker simply guides the robot arms through the motions that will be needed for the task. Baxter then memorises these patterns and can communicate that it has understood its new instructions. While the physical flexibility of Baxter is limited to performing simple operations such as picking up objects and moving them, different standard attachments can be installed on its arms, allowing Baxter to perform a relatively broad scope of manual tasks at low cost (MGI, 2013).
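
A highly simplified sketch of such programming by demonstration (Baxter’s actual software is far richer) records the arm poses through which a worker guides the robot and replays the stored trajectory on demand.

```python
# An illustrative record-and-replay scheme for programming a robot arm by demonstration.
recorded_trajectory = []

def record_pose(joint_angles):
    """Store one arm pose captured while the human guides the robot."""
    recorded_trajectory.append(tuple(joint_angles))

def replay(move_to):
    """Replay the demonstrated motion by sending each stored pose to the controller."""
    for pose in recorded_trajectory:
        move_to(pose)

# Demonstration phase: the worker guides the arm over a bin, down to an object, and back.
for pose in [(0.0, 0.5, 1.2), (0.3, 0.7, 0.9), (0.3, 0.7, 0.4), (0.0, 0.5, 1.2)]:
    record_pose(pose)

# Execution phase: a stand-in controller simply prints the commanded poses.
replay(lambda pose: print("moving to joint angles", pose))
```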

Technological advances are contributing to declining costs in robotics. Over the past decades, robot prices have fallen about ten percent annually and are expected to decline at an even faster pace in the near future (MGI, 2013). Industrial robots, with features enabled by machine vision and high-precision dexterity, which typically cost 100,000 to 150,000 US Dollars, will be available for 50,000 to 75,000 US Dollars in the next decade, with higher levels of intelligence and additional capabilities (IFR, 2012b). Declining robot prices will inevitably place them within reach of more users. For example, in China, employers are increasingly incentivised to substitute robots for labour, as wages and living standards are rising – Foxconn, a Chinese contract manufacturer that employs 1.2 million workers, is now investing in robots to assemble products such as the Apple iPhone (Markoff, 2012). According to the International Federation of Robotics, robot sales in China grew by more than fifty percent in 2011 and are expected to increase further. Globally, industrial robot sales reached a record 166,000 units in 2011, a forty percent year-on-year increase (IFR, 2012b). Most likely, there will be even faster growth ahead as low-priced general-purpose models, such as Baxter, are adopted in simple manufacturing and service work.

Expanding technological capabilities and declining costs will make entirely new uses for robots possible. Robots will likely continue to take on an increasing set of manual tasks in manufacturing, packing, construction, maintenance, and agriculture. In addition, robots are already performing many simple service tasks such as vacuuming, mopping, lawn mowing, and gutter cleaning – the market for personal and household service robots is growing by about twenty percent annually (MGI, 2013). Meanwhile, commercial service robots are now able to perform more complex tasks in food preparation, health care, commercial cleaning, and elderly care (Robotics-VO, 2013). As robot costs decline and technological capabilities expand, robots can thus be expected to gradually substitute for labour in a wide range of low-wage service occupations, where most US job growth has occurred over the past decades (Autor and Dorn, 2013). This means that many low-wage manual jobs that have been previously protected from computerisation could diminish over time.

III.C. The task model revisited

Note: The initial portion of this section is too technical for most readers, so I have not included it here. Those who wish to read it can find it at http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf.

PERCEPTION AND MANIPULATION TASKS. Robots are still unable to match the depth and breadth of human perception. While basic geometric identification is reasonably mature, enabled by the rapid development of sophisticated sensors and lasers, significant challenges remain for more complex perception tasks, such as identifying objects and their properties in a cluttered field of view. As such, tasks that relate to an unstructured work environment can make jobs less susceptible to computerisation. For example, most homes are unstructured, requiring the identification of a plurality of irregular objects and containing many cluttered spaces which inhibit the mobility of wheeled objects. Conversely, supermarkets, factories, warehouses, airports and hospitals have been designed for large wheeled objects, making it easier for robots to navigate in performing non-routine manual tasks. Perception problems can, however, sometimes be sidestepped by clever task design. For example, Kiva Systems, acquired by Amazon.com in 2012, solved the problem of warehouse navigation by simply placing bar-code stickers on the floor, informing robots of their precise location (Guizzo, 2008).

The difficulty of perception has ramifications for manipulation tasks, and, in particular, the handling of irregular objects, for which robots are yet to reach human levels of aptitude. This has been evidenced in the development of robots that interact with human objects and environments. While advances have been made, solutions tend to be unreliable over the myriad small variations on a single task, repeated thousands of times a day, that many applications require. A related challenge is failure recovery – that is, identifying and rectifying the mistakes of the robot when it has, for example, dropped an object. Manipulation is also limited by the difficulties of planning out the sequence of actions required to move an object from one place to another. There are yet further problems in designing manipulators that, like human limbs, are soft, have compliant dynamics and provide useful tactile feedback. Most industrial manipulation makes use of workarounds to these challenges (Brown, et al, 2010), but these approaches are nonetheless limited to a narrow range of tasks. The main challenges to robotic computerisation, perception and manipulation, thus largely remain and are unlikely to be fully resolved in the next decade or two (Robotics-VO, 2013).

CREATIVE INTELLIGENCE TASKS. The psychological processes underlying human creativity are difficult to specify. According to Boden (2003), creativity is the ability to come up with ideas or artifacts that are novel and valuable. Ideas, in a broader sense, include concepts, poems, musical compositions, scientific theories, cooking recipes and jokes, whereas artifacts are objects such as paintings, sculptures, machinery, and pottery. One process of creating ideas (and similarly for artifacts) involves making unfamiliar combinations of familiar ideas, requiring a rich store of knowledge. The challenge here is to find some reliable means of arriving at combinations that “make sense”. For a computer to make a subtle joke, for example, would require a database with a richness of knowledge comparable to that of humans, and methods of benchmarking the algorithm’s subtlety.

In principle, such creativity is possible, and some approaches to creativity already exist in the literature. Duvenaud, et al (2013) provide an example of automating the core creative task required in order to perform statistics, that of designing models for data. As to artistic creativity, AARON, a drawing program, has generated thousands of stylistically similar line drawings, which have been exhibited in galleries worldwide. Furthermore, David Cope’s EMI software composes music in many different styles, reminiscent of specific human composers.
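
In a similar spirit to the automated model construction of Duvenaud, et al (2013) – though greatly simplified and not their implementation – the sketch below composes candidate Gaussian-process kernels from simple building blocks and ranks them by how well each explains some synthetic data, automating part of the statistician’s modelling choice.

```python
# A simplified sketch of automated model construction: compose candidate Gaussian-process
# kernels from simple building blocks and rank them by log marginal likelihood.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, DotProduct, WhiteKernel

rng = np.random.default_rng(3)
X = np.linspace(0, 10, 60).reshape(-1, 1)
y = 0.5 * X.ravel() + np.sin(2 * X.ravel()) + rng.normal(0, 0.1, 60)  # trend + periodicity

base = {"smooth": RBF(), "periodic": ExpSineSquared(), "linear": DotProduct()}
candidates = {name: kernel + WhiteKernel() for name, kernel in base.items()}
candidates.update({
    "linear + periodic": DotProduct() + ExpSineSquared() + WhiteKernel(),
    "smooth * periodic": RBF() * ExpSineSquared() + WhiteKernel(),
})

scores = {}
for name, kernel in candidates.items():
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    scores[name] = gp.log_marginal_likelihood_value_   # model evidence after fitting

best = max(scores, key=scores.get)
print("selected model structure:", best)
```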

In these and many other applications, generating novelty is not particularly difficult. Instead, the principal obstacle to computerising creativity is stating our creative values sufficiently clearly that they can be encoded in a program (Boden, 2003). Moreover, human values change over time and vary across cultures. Because creativity, by definition, involves not only novelty but value, and because values are highly variable, it follows that many arguments about creativity are rooted in disagreements about value. Thus, even if we could identify and encode our creative values, to enable the computer to inform and monitor its own activities accordingly, there would still be disagreement about whether the computer appeared to be creative. In the absence of engineering solutions to overcome this problem, it seems unlikely that occupations requiring a high degree of creative intelligence will be automated in the next decades.

SOCIAL INTELLIGENCE TASKS. Human social intelligence is important in a wide range of work tasks, such as those involving negotiation, persuasion and care. To aid the computerisation of such tasks, active research is being undertaken within the fields of Affective Computing (Scherer, et al, 2010; Picard, 2010), and Social Robotics (Ge, 2007; Broekens, et al, 2009). While algorithms and robots can now reproduce some aspects of human social interaction, the real-time recognition of natural human emotion remains a challenging problem, and the ability to respond intelligently to such inputs is even more difficult. Even simplified versions of typical social tasks prove difficult for computers, as is the case when social interaction is reduced to pure text. The social intelligence of algorithms is partly captured by the Turing test, which examines the ability of a machine to communicate indistinguishably from an actual human. Since 1990, the Loebner Prize, an annual Turing test competition, has awarded prizes to the textual chat programmes considered most human-like. In each competition, a human judge simultaneously holds computer-based textual interactions with both an algorithm and a human. Based on the responses, the judge is to distinguish between the two. Sophisticated algorithms have so far failed to convince judges of their human resemblance. This is largely because humans possess much ‘common sense’ information, which is difficult to articulate, but which would need to be provided to algorithms if they are to function in human social settings.

Whole brain emulation, the scanning, mapping and digitalising of a human brain, is one possible approach to achieving this, but it is currently only a theoretical technology. For brain emulation to become operational, additional functional understanding is required to recognise what data is relevant, as well as a roadmap of the technologies needed to implement it. While such roadmaps exist, present implementation estimates, under certain assumptions, suggest that whole brain emulation is unlikely to become operational within the next decade or two (Sandberg and Bostrom, 2008). When or if it does, however, the employment impact is likely to be vast (Hanson, 2001).

Hence, in short, while sophisticated algorithms and developments in MR, building upon big data, now allow many non-routine tasks to be automated, occupations that involve complex perception and manipulation tasks, creative intelligence tasks, and social intelligence tasks are unlikely to be substituted by computer capital over the next decade or two. The probability of an occupation being automated can thus be described as a function of these task characteristics. As suggested by Figure I, the low degree of social intelligence required of a dishwasher makes this occupation more susceptible to computerisation than that of a public relations specialist, for example. We proceed to examine the susceptibility of jobs to computerisation as a function of the above-described non-susceptible task characteristics.
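
Purely as a schematic illustration of this functional relationship (the paper’s own estimation strategy is described in later sections, and the occupations, scores and labels below are invented), a probabilistic classifier can map scores on the three bottleneck characteristics to an estimated probability of computerisation.

```python
# A schematic sketch only: treat the probability of computerisation as a function of
# the bottleneck task characteristics -- perception and manipulation, creative
# intelligence, and social intelligence -- using invented training examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [perception/manipulation, creativity, social intelligence], scaled 0-1.
occupations = {
    "dishwasher":                  [0.2, 0.1, 0.1],
    "telemarketer":                [0.1, 0.2, 0.4],
    "truck driver":                [0.5, 0.1, 0.2],
    "public relations specialist": [0.2, 0.7, 0.9],
    "surgeon":                     [0.9, 0.6, 0.8],
    "choreographer":               [0.7, 0.9, 0.7],
}
labels = [1, 1, 1, 0, 0, 0]   # 1 = assumed computerisable, 0 = not (illustrative labels)

X = np.array(list(occupations.values()))
model = LogisticRegression().fit(X, labels)

probe = np.array([[0.3, 0.2, 0.2]])   # low requirements on all three bottlenecks
print("estimated automation probability:", round(model.predict_proba(probe)[0, 1], 2))
```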

Notes:

{15} We refer to computer capital as accumulated computers and computer-controlled equipment by means of capital deepening.

{16} Predictions by Cisco Systems suggest that Internet traffic in 2016 will be around one zettabyte (1×10^21 bytes) (Cisco, 2012). In comparison, the information contained in all books worldwide is about 480 terabytes (5×10^14 bytes), and a text transcript of all the words ever spoken by humans would represent about five exabytes (5×10^18 bytes) (UC Berkeley School of Information, 2003).

{17} A fly-by-wire robot is a robot that is controllable by a remote computer.

{18} Rio Tinto’s computerisation efforts are advertised at http://www.mineofthefuture.com.au.

References: See URL below.

http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf
