robot in hannover

Reality Check: Robots Are Here to Automate Your Job, or Not

You know these numbers. 47 percent of US jobs are at high risk of automation over the next two decades. Do you hear the clunking sounds? Those are robots marching in to take your job and push you to the brink of grim unemployment survival.

Before you start frantically examining your job description, let’s figure out what’s going on.

47 percent: where do the numbers come from and what do they mean?

Even if you are someone who bothers to read the copy that follows the headlines, succumbing to alarmist stories is not hard to do, Huffington Post readers included. Reporters often fail to explain what exactly stands behind the numbers.

The most quoted study estimating the jobs susceptible to automation is the work by Carl Frey and Michael Osborne (hereafter FO), The Future of Employment, published in 2013 by Oxford University. This research, conducted four years ago, still serves as the foundation of many predictions, lending them academic credibility. And yes, they are the ones who estimated the 47 percent.

So, what did Frey and Osborne do?

The first thing to keep in mind is that automating whole occupations isn’t really the right way to think about automation. Machines automate the specific skills and abilities that these occupations entail. So, the researchers brainstormed (i.e. held a workshop) with their colleagues from the Oxford University Engineering Sciences Department to figure out which groups of skills and abilities are NOT likely to be automated. The reason they approached the problem from the other side is... machine learning.

A short excursus here. Before machine learning and big data became so influential, we looked at automation as a phenomenon that impacts routine, manual tasks, things that are very repetitive and involve brute and not-so-brute force, like screwing the same type of bolts on an assembly line. As digital transformation continued and computer software evolved, automation also touched on cognitive, yet still repetitive, tasks. Think of Excel formulas or macros that you can write to automate your car loan calculations. But the growth of data and the means to collect it expanded the areas of automation. Now machines can do some non-repetitive, manual tasks like driving a car, or even non-repetitive, cognitive tasks like playing Go. Learning from millions of examples, machines develop complex algorithms of actions themselves, without the need for programmers, who would have a hard time explicitly elaborating the instructions.
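To make the “repetitive cognitive task” idea concrete, here is a minimal sketch of the kind of calculation a spreadsheet macro typically automates: the standard monthly payment formula for a fixed-rate car loan. The loan figures are made up purely for illustration.

```python
# Standard annuity formula for a fixed-rate loan:
# payment = P * r / (1 - (1 + r) ** -n), with monthly rate r and n monthly payments.

def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    r = annual_rate / 12           # monthly interest rate
    if r == 0:
        return principal / months  # zero-interest edge case
    return principal * r / (1 - (1 + r) ** -months)

# Illustrative example: a $25,000 loan at 4.5% APR over 60 months.
print(round(monthly_payment(25_000, 0.045, 60), 2))  # roughly 466 per month
```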

So, the surge of machine learning narrowed down the set of tasks that are still very resistant to clanks and silicon. Frey and Osborne identified the main groups of abilities that aren’t likely to be automated, assuming the rest is fairly automatable.

3 bottlenecks for automation

FO and the Oxford scientists basically relied on their technical experience, prior studies, and the advancements in data science observable in 2013 to outline these three limitations: things that machines can’t do and that we don’t know how to teach them.

Perception and manipulation tasks. While machines are fine with manual tasks, the actions must be repetitive, as any alterations intensify perception problems. For instance, Amazon employs forklift and pod carrier robots in its warehouses, and to address perception, the company uses barcode stickers on the floor for machines to navigate around. But things get harder if we consider, say, housekeeping robots, the ones more complex than your Roomba. Imagine they would have to clean tables and shelves, and understand why soil in a flower pot is okay and why the same soil on the floor (if your cat flipped the flowerpot after being freaked out by a robot) is not. And repotting your favorite flower would also require high precision and custom manipulation abilities.

robot carries a box

Credit: Boston Dynamics

Creative intelligence tasks. Today we can make machines create paintings, compose music, or even write movie scripts. The main problem here is that feeding a program thousands of artistic examples can teach it to replicate a style and other surface features, but it’s nearly impossible to infuse human values into these works. We can’t do that yet. More on that in a bit.

Social intelligence tasks. Arguably, all jobs entail some level of social intelligence: the abilities to perceive emotions, negotiate, or assist. Even fire lookout loners in national parks need to break solitude sometimes to keep sane. But the point of Frey and Osborne is that social intelligence abilities are essential for only a specific set of occupations: psychiatrists, nurses, sales executives, etc., ultimately, all who read human reactions, persuade, reconcile, and empathize for a living. These are perhaps the hardest skills to automate.

Given these three major bottlenecks and their level of importance for an occupation, the scientists had to find the actual number of jobs susceptible to automation.

Estimating the number of jobs to be automated

To explore how these bottlenecks correspond to real job data, Frey and Osborne used the O*NET service developed for the US Department of Labor. This database was initially collected by labor market analysts and has since been updated by expert surveys on different occupations. The 2010 O*NET version - the one that FO used - contains detailed data on 903 occupations. After some dataset preparation, FO ended up with 702 occupations that are assumed to cover the entire US labor market.

What’s specific about O*NET is that each occupation in it contains an elaborate set of features (knowledge, skills, abilities, etc.) that are assigned 0-100 ranks: an importance rank for that occupation and a level rank that positions the ability relative to all occupations. For instance, the social perceptiveness skill has an importance rank of 88 and a level rank of 77 for psychiatrists. The same skill has an importance rank of 0 and a level rank of 5 for mathematical technicians. The ranks are based on expert analysis conducted by O*NET contributors.
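To make the structure concrete, here is a minimal illustrative sketch of what such records might look like in code. The two rank pairs come straight from the examples above; the surrounding structure (class name, field names) is just an assumption for illustration, not O*NET’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class SkillRating:
    occupation: str
    skill: str
    importance: int  # 0-100 rank: how important the skill is for this occupation
    level: int       # 0-100 rank: the level of the skill relative to all occupations

# The two examples mentioned in the text.
ratings = [
    SkillRating("Psychiatrists", "Social Perceptiveness", importance=88, level=77),
    SkillRating("Mathematical Technicians", "Social Perceptiveness", importance=0, level=5),
]
```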

social perceptiveness skill level and importance

O*NET Service. Top: Social Perceptiveness skill ranked with importance and level for different occupations; Bottom: Skillset of the psychiatrist occupation with skills ranked by importance

Frey and Osborne handpicked 9 skills and abilities from O*NET that corresponded to the 3 bottlenecks the researchers considered. For example, creative intelligence corresponded to the originality and fine arts skills. These 9 skills and abilities with their level ranks (FO didn’t include the importance rank) became the main variables of the research.

how bottlenecks match specific skills

Source: The Future of Employment. 9 skills corresponding with technological bottlenecks were used as the main variables of the research

Instead of going through all 702 occupations, Frey and Osborne took 70 and subjectively labeled them as either 1 (highly susceptible to automation) or 0 (can’t be automated). These 70 were the ones the scientists were most confident about. For example, FO labeled dishwashers as 1 and family therapists as 0.

With 9 skills and abilities as variables and 70 labeled occupations, FO used this dataset as a training set to build a machine learning classifier that could predict the risk of automation for the rest of the 702 occupations.
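A minimal sketch of this kind of workflow is shown below, assuming a table with the 9 bottleneck variables per occupation. The feature values are random stand-ins, and scikit-learn’s GaussianProcessClassifier is used here only to illustrate the train-then-score pattern, not to reproduce FO’s exact model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

# Hypothetical feature matrix: one row per hand-labeled occupation,
# one column per bottleneck variable (9 in FO's setup), values scaled to 0-1.
X_train = np.random.rand(70, 9)          # stand-in for the 70 labeled occupations
y_train = np.random.randint(0, 2, 70)    # 1 = automatable, 0 = not automatable

clf = GaussianProcessClassifier().fit(X_train, y_train)

# Score the remaining occupations with an automation probability.
X_rest = np.random.rand(632, 9)          # 702 total minus the 70 labeled ones
risk = clf.predict_proba(X_rest)[:, 1]   # probability of the "automatable" class
print(risk[:5])
```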

So, what did they get?

FO prediction on automation

Source: The Future of Employment, 2013

To yield the final number of 47 percent, FO linked the occupations with corresponding employment numbers from the 2010 Bureau of Labor Statistics data.
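The aggregation step itself is simple arithmetic: weight each occupation by its employment and compute the share of workers in the high-risk bucket. A hedged sketch follows, with made-up inputs and an assumed 0.7 probability threshold for “high risk.”

```python
# Illustrative numbers only; real inputs would be per-occupation automation
# probabilities and BLS employment counts.
occupations = [
    # (automation probability, employment)
    (0.97, 500_000),
    (0.35, 1_200_000),
    (0.04, 800_000),
]

HIGH_RISK = 0.7  # assumed cutoff for the "high risk" bucket

high_risk_jobs = sum(emp for p, emp in occupations if p > HIGH_RISK)
total_jobs = sum(emp for _, emp in occupations)

print(f"Share of employment at high risk: {high_risk_jobs / total_jobs:.0%}")  # 20% here
```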

The authors note that they “focus on estimating the share of employment that can potentially be substituted by computer capital, from a technological capabilities point of view, over some unspecified number of years” and that they “make no attempt to estimate how many jobs will actually be automated.”

The keyword here is potentially. You see, it’s not exactly “47% of Jobs Will Disappear in the next 25 Years, According to Oxford University,” as Big Think puts it. An average American can purchase roughly 12,400 Big Macs per year if you consider the median wage, but it doesn’t mean that this will ever happen! In other words, while the technology may be there to automate occupations, not all companies will be ready to shift from people to robots right away. The impact of social organizations and unions may also be high, and the authors directly mention this.

But besides the misleading media coverage, there are some debatable things in the study itself.

Drawbacks of the Frey and Osborne study

Do you remember that occupations aren’t automated in their entirety? The core idea of automation is that companies carve discrete tasks out of an employee’s job structure and assign these tasks to machines, so the employee can focus on other tasks that aren’t yet automated. And if most (e.g. 70 percent) of the tasks are automated, then that employee faces a high risk of being completely replaced by a machine and, worst case scenario, living in a box.

The main problem with Frey and Osborne’s research is that it looks at how the 9 bottleneck skills are associated with average occupations and how critical they are for these occupations, rather than for individual task structures.

The leading opposing research that criticizes the FO approach was published in 2016 by Melanie Arntz, Terry Gregory, and Ulrich Zierahn (hereafter AGZ) from the Organisation for Economic Co-operation and Development: The Risk of Automation for Jobs in OECD Countries.

Instead of focusing on the average task structures per occupation that O*NET provides, AGZ assumed that every job is different across companies, industries, and countries. Although the occupation name may be the same, task structures differ greatly from company to company. And they managed to support this assumption with actual records.

AGZ, unlike FO, focused on individual-level data from the Programme for the International Assessment of Adult Competencies (PIAAC). This database consists of survey results from individual workers and thus captures many unique characteristics of skills, competencies, and tasks. In other words, the PIAAC database has varying tasks within each occupation, while FO’s database provides an average task structure for each occupation. AGZ call their approach task-based, as opposed to FO’s occupation-based one.

According to the method employed by Arntz, Gregory, and Zierahn,  “Jobs with larger shares of automatable tasks are more exposed to automatability than jobs with larger shares of non-automatable tasks (bottlenecks, using the wording of FO). The procedure allows for differences in task-structures within occupations and specifically focuses on the individual job”.

These researchers took the same 9 bottleneck indicators of automatability from FO and matched them with PIAAC job data. More importantly, they considered that even if core tasks can be automated, many employees can’t do their jobs without some level of these bottleneck skills. For example, Frey and Osborne’s research shows that people working as “Bookkeeping, Accounting, and Auditing Clerks” have an automation potential of 98 percent, but only 24 percent of all employees in this occupation can perform their job with neither group work nor face-to-face interactions, which match the social intelligence bottleneck.
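A hedged sketch of the difference between the two approaches: with occupation-level data every clerk inherits the same risk score, while with worker-level data the risk depends on each person’s own task mix. The task flags, discount values, and scoring rule below are purely illustrative assumptions, not AGZ’s actual estimation procedure.

```python
# Occupation-based view: one probability for everyone in the occupation.
OCCUPATION_RISK = {"Bookkeeping clerk": 0.98}

# Task-based view: each worker reports their own task mix (illustrative flags).
workers = [
    {"occupation": "Bookkeeping clerk", "face_to_face": True,  "group_work": True},
    {"occupation": "Bookkeeping clerk", "face_to_face": False, "group_work": False},
]

def task_based_risk(worker: dict) -> float:
    """Toy rule: each bottleneck task a worker actually performs lowers their risk."""
    risk = OCCUPATION_RISK[worker["occupation"]]
    for bottleneck in ("face_to_face", "group_work"):
        if worker[bottleneck]:
            risk -= 0.25  # made-up discount per bottleneck task
    return max(risk, 0.0)

for w in workers:
    print(w, "->", round(task_based_risk(w), 2))
# The first clerk ends up far below 0.98; the second stays at the occupation average.
```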

What are the numbers from Arntz, Gregory, and Zierahn?

AGZ prediction on automation

Source: The Risk of Automation for Jobs in OECD Countries, 2016. The researchers used Survey of Adult Skills (PIAAC) conducted in 2012

They are very different! The AGZ method shows only a 9 percent automation potential in the US compared to FO’s 47 percent. And none of the developed countries even approaches such high levels. Austria, for instance, leads with only 12 percent.

The latest update in this discussion came from PwC in March 2017. They also applied AGZ’s task-based approach, but they claim to have improved the predictive power of the AGZ machine learning model, arriving at 30 percent of jobs susceptible to automation in the US.

FO vs AGZ vs PwC predictions

It’s up to everyone to decide which approach is more precise. But the point is we aren’t that close to ubiquitous automation. It will take a considerable amount of time for actual demand to catch up with what technology could supply.

But what about the bottlenecks? Are we trying to solve these problems? Maybe the 3 main bottlenecks that FO outlined and AGZ agreed on will partly disappear and we’ll have to recalculate these numbers yet again.

State of existing technologies

In our first reality check, on artificial general intelligence (AGI), we considered the chances of creating machines capable of taking charge of most occupations that humans are good at. Unlike the narrow artificial intelligence we use today, AGI could flexibly adapt to each occupation just as we humans can. But there’s no consensus on when general intelligence will arrive, and we still don’t know what it will be like when it does.

Instead, we’re bound to consider the best examples of narrow artificial intelligence and see how they try to break the existing bottlenecks.

Perception and Manipulation

If you’ve ever seen modern industrial robots at work, you know how strangely satisfying watching them can be.

fast moving robots

Source: FANUC Robots at Expo 2014

And if you consider the teamwork of these two guys, it may seem that we’re somewhat close to solving perception and manipulation problems. However, organizing randomly placed objects into neat blocks is still a narrow job, regardless of how fast these robots work. This “narrowness” is exactly what allows robots to do it so well (and allows us to make satisfying gifs).

But we are slowly approaching other types of robots. Meet Baxter.

adaptive robot baxter

Source: Rethink Robotics

It, or rhe (as Quartz suggests for “male” robots, along with rshe for “female” ones), can learn, adjust to different kinds of work, and fit into human workspaces.

Baxter may seem awkward and slow at first glance. But the idea behind this robot is to automate manual, repetitive tasks in the cheapest way possible. Baxter comes in different kits that can be adjusted to some extent to suit your particular work requirements, and the cost will be just about $20K.

Unlike the heavy and expensive industrial robots you may see on assembly lines, Baxter is not only cheap, rhe (let’s go with that newspeak) also learns by watching a job being done and recognizes human reactions to understand whether rhe misunderstood something. Basically, this is a general-purpose robot for manual jobs, a machine that can easily adapt.

Just recently, the manufacturer introduced a new way of communicating with Baxter. To streamline the robot’s learning, researchers suggest connecting the brain of a supervising human to the robot and letting the machine read positive or negative reactions to its learning attempts. It would only take wearing a special sort of helmet for us to drastically speed up the feedback loop.
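Conceptually, this boils down to turning a human’s approve/disapprove reaction into a reward signal the robot uses to adjust its behavior. The sketch below is a toy illustration of that idea, not the actual Rethink Robotics or brain-interface implementation; the actions, feedback function, and update rule are all assumptions.

```python
import random

# Toy bandit-style learner: the "robot" picks one of two ways to do a task
# and nudges its preference up or down based on binary human feedback.
actions = ["grip_from_top", "grip_from_side"]
value = {a: 0.0 for a in actions}   # learned preference per action
LEARNING_RATE = 0.1

def human_feedback(action: str) -> int:
    """Stand-in for the supervisor's reaction (e.g., an EEG-derived signal):
    +1 if the attempt looks right, -1 if it looks wrong."""
    return 1 if action == "grip_from_side" else -1  # pretend side grips work better

for _ in range(100):
    # Mostly exploit the best-looking action, occasionally explore.
    action = random.choice(actions) if random.random() < 0.1 else max(value, key=value.get)
    value[action] += LEARNING_RATE * (human_feedback(action) - value[action])

print(value)  # the preferred action's value drifts toward +1
```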

Although Baxter doesn’t solve finger or manual dexterity problems, and you wouldn’t assign this guy to do open-heart surgery, these bottlenecks are likely to become manageable rather quickly as Baxter and rhis metal brothers mature from childhood.

Creative Intelligence

“Perhaps you’re unfazed because you’re a special creative snowflake. Well, guess what? You’re not that special,” CGP Grey argues in the famous video Humans Need Not Apply. His point is rather simple: robots can do artistic jobs. They compose music that is even sold today, write movie scripts (uncanny valley alert!), and draw beautiful paintings.

Frey and Osborne mentioned originality and fine arts as the main bottleneck tasks for creative intelligence. But these are a bit misleading. We can teach robots to do fine arts, and we can even achieve originality by randomizing the learning material and tuning the algorithms to bring harmony into the mix. There’s no big issue there.

But the problem with machine-generated art is all about humans, not machines. Because there are two basic ways to perceive art.

The first way is to treat art as a means to reflect on your own thoughts, without regarding the actual artist as someone to enter into a dialogue with through his or her work. This may sound confusing, but people tend to imbue artistic works with their own understanding and interpretation of what’s seen, read, or heard. Sometimes we become artists for ourselves and use works as our personal imagination playground. And some masterpieces seem to be created hollow on purpose, to provoke us into supplying the inner meaning. Look at Pollock’s paintings: there is more of “us” in his art than of him. If you are that type of person and don’t need an author behind the medium, you’re good with machine artists.

The existence of the second way is confirmed by the fact that machines can’t make subtle jokes or write novels that people would love, because these things involve understanding the world and sharing values within the culture that you and the author both inhabit. The artistic medium, in this case, becomes a playground for two, where we cooperate with an artist and build a dialogue on shared values. This type of interaction can’t be automated yet, because machines would then have to be our equals in their perception of the world.

Social Intelligence

The most impressive result of social intelligence today is Eugene Goostman, the only claimed winner of the Turing test so far. Eugene is a bot, or, as it’s become trendy to call it, a chatbot. This chatbot managed to fool the judges by pretending to be a 13-year-old boy from Ukraine. The age and the language barrier allowed the judges to relax their judgment, and the bot was convincing. But it is nowhere near the social perceptiveness, negotiation, and persuasion that Frey and Osborne list as bottleneck challenges. But what about assisting and caring for others?

This may be partly solvable in the short run. Look at Japan, the world leader in robotization and insane commercials. Another important thing about this country is its aging population. The average age in Japan is about 46 years, well above the world average, and 26 percent of Japanese people are 65 or older. This means that the country dramatically lacks younger people to take care of its elderly, especially those with disabilities.

Back in 2015, Toyota introduced the Human Support Robot (HSR). It can be operated either remotely by a caregiver in a distant location or by the user personally. The HSR can reduce the time spent by caregivers and, with its robotic arm, bring things to a disabled person. Another Japanese robot, Robear, is aimed at substituting for multiple nursing-care workers in lifting patients from beds into wheelchairs and assisting those who have difficulty standing up. A number of European projects also focus on elder care. The robots, elements of a smart-home environment, can track human reactions, remind someone to take medicine, and immediately call for human help if the system sees dangerous signs. Even though these machines can’t lead a conversation, they already ease loneliness and helplessness. A recent announcement from Amazon suggests that Alexa, its home assistant, will sound more human and comforting as developers plan to nuance its speech patterns.

So what about the bottlenecks? Some of them may approach resolution in 10 to 20 years, especially those that relate to manipulation and perception. But it’s still too early to revise them completely.

Luddite fallacy and further steps

The Luddite fallacy is a well-known concept that evokes the Luddite movement of the early 1800s, a reaction to the Industrial Revolution in England. Luddites destroyed factory machinery believing that it would leave them unemployed. However, automation didn’t destroy jobs; rather, it led to their recomposition within the economy and subsequently drove economic growth. As Luddites of different generations lose their fight against machines, they eventually win financially. But will the Luddite fallacy remain a fallacy today?

Frankly, we don’t know.

The recent update to Frey and Osborne’s classic work, Technology at Work v.2, published in 2016 by Oxford University, stands by the 47 percent conclusion. It also provides expert survey opinions, and, good news, the techno-optimists win: 76 percent of the respondents to the Technology at Work survey believe that automation will have a positive impact on society and that the Luddite fallacy will remain just that, a fallacy. 21 percent of the respondents are on the pessimist side, and 3 percent didn’t provide an answer.

What the authors assume, though, is that automation, as usual, will raise the educational level of society by challenging it with more skill-heavy tasks. In the EU, for instance, there will be 9.5 million new job openings from 2013 to 2025, and 98 million replacement jobs. Nearly half of all of them will be high-skilled jobs.

So what should we do?

policies to combat consequences of automation

Source: Technology at Work v.2

If you support Elon Musk in introducing a universal basic income to combat the consequences of automation, you’re not following the main trend of combating automation with educational growth. As before, the growing level of education looks like the main driving force of successful change. So let’s believe that the Luddite fallacy will remain a fallacy.

Our Reality Check series aims at exploring overhyped concepts and unraveling the facts that stand behind them. Please have a look at other stories from the series to get the big picture.
