“AI will not replace you. A person using AI will.”
This phrase is being trotted out in articles and talks about AI, or posted as a standalone social media statement that presumes to sum up and clarify AI's role in the future of work. The message is everywhere, and it's unclear who said it first. Maybe it was generated by an AI system?
The more interesting questions are these: what will people who use AI at work actually be doing, and what kinds of skills will they need?
Articles about AI and human collaboration go on to describe a scenario in which workers offload routine tasks to AI, freeing them to develop their critical thinking skills, become more creative and solve more complex problems. It sounds good, but is that really what happens when we automate a work process? Do workers upskill to become more creative problem solvers? The history of automation provides some clues.
From craftsperson to factory worker
We can think about the transition of the physical production of goods from craftsperson to factory worker as an example of automation's impact. A craftsperson has a broad range of skills. A shoemaker who is a craftsperson might have design skills. They might also need to understand the materials that go into a shoe, patterns and measurements, and how to use the various tools or machines needed to make a pair of shoes. The same job viewed through the lens of the factory worker is a very different thing. The factory worker may have only one or two tasks on the assembly line: perhaps they only make the shoe's soles, or only punch eyelets.
How might this division of labour play out in the context of knowledge work?
One possible scenario is that generative AI levels up less skilled or less educated people to produce consistent knowledge outputs for a business that are "good enough". In this scenario, investment in automation has an inverse relationship with labour skill requirements. Instead of needing college graduates to fill white collar occupations, businesses that invest heavily in more sophisticated AI systems might find workers with less education, who can also be paid less, more attractive.
This bears a resemblance to the factory work scenario. The holistic skill set that made the craftsperson valuable is no longer needed. The AI does the heavy intellectual lifting in terms of "knowing" the information and generating a reasonable output. The knowledge output might not be the most interesting or appealing, but it will be good enough to meet the business's needs. Most business writing doesn't need to win any awards. Code needs to be functional, not necessarily elegant.
We can think of this automation of intellectual labour along the same lines as the McJob.
Similar to how fast food franchises deconstruct the production and service of food through automation, intellectual labour can be deconstructed by AI. The investment shifts into automation and away from highly skilled labour. The human might serve as the “assembler” of the components generated by the systems, filling in the gaps that the AI can’t yet perform and being accountable to management for the overall quality of the work product.
The biggest AI gains for business will come from replacing a larger percentage of higher skilled workers, since that is where the greatest labour cost savings lie. This may also lead higher skilled workers (those who already have degrees) to settle for lower wages in order to keep their jobs. In the long run, educational attainment might also decline: people may no longer require a four year degree, and may be less willing to go into debt to fund their education without the promise of a "good job" at the end of the process.
Another role that humans might be required to perform, particularly in highly automated processes, is that of the monitor or babysitter for the AI system. In this scenario, humans are there to "be in the loop" and make sure the automation continues to work, leaping into action only when things go off the rails. We can think of this as a Homer Simpson job. Homer's role at the nuclear power plant is to monitor the equipment. In one episode, when the plant malfunctions and faces a meltdown, Homer struggles to remember which button he is supposed to push to avert disaster.
This is the challenge with monitoring roles: humans need to be attentive, but most of the time there is nothing to attend to.
The monitor role also has another function: the people who do this work can become scapegoats in the event of an accident. We've already seen this play out with autonomous car test drivers, who are there to serve as backup but are often blamed when the system makes an error.
Finally, another role where humans excel is providing care and empathizing with other humans. While more sophisticated chatbots might be able to deal with routine customer problems, a truly irate customer or a customer with a highly unique problem might still need a human to help them. In the case of the angry customer, simply having another human to empathize with the situation might be the only way to appease them.
This emotional labour has traditionally not been well paid, and much customer service work has already moved to countries with lower wages and fewer worker protections.
Imagine the traumatic nature of a role that deals exclusively with worst case scenario customers. Is that a role anyone would want?
What about generative AI prompt engineers making six figure salaries?
At the moment, generative AI appears to perform better if given certain kinds of prompts. However, this might be a temporary phase given the newness of the technology. As the systems mature, prompt engineering will likely become less important.
Instead of focusing only on job replacement versus job retention, we also need to think about the nature of the jobs themselves. Will jobs that involve human and AI collaboration create opportunities for workers to upskill and become more creative solvers of complex problems? Or will business incentives push AI and human collaborations towards deskilling workers to the lowest acceptable level?
By Katrina Ingram, CEO, Ethically Aligned AI

Sign up for our newsletter to have new blog posts and other updates delivered to you each month! Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com

© 2023 Ethically Aligned AI Inc. All rights reserved.