
You Trained the Chatbot to Do Your Job. Why Didn’t You Get Paid?

Data from top-performing employees can create AI helpers that boost everyone’s productivity—but also create new concerns over fair pay.

In 2020, 5,000 customer service agents mostly based in the Philippines became guinea pigs in an experiment testing a question that by 2023 would feel urgent: Can an AI assistant based on OpenAI’s text-generation technology make workers more productive?

The automated helper offered agents suggested responses to small-business owners seeking tech support. The bot had been trained on previous customer chats, with a special emphasis on answers from top performers. And sure enough, when MIT and Stanford researchers analyzed the results, the AI tool had boosted the support team’s productivity by 14 percent.

When the National Bureau of Economic Research, a nonprofit, published those results in late April, they were quickly seized upon as confirmation that ChatGPT-style bots would indeed transform work. But for the researchers conducting the study, the results raised a provocative new question: Should the top workers whose chats trained the bot be compensated?

“Let’s imagine you called me with a problem, and I solved it,” says Danielle Li, an economist at MIT’s Sloan School of Management who coauthored the study with MIT PhD candidate Lindsey Raymond and Erik Brynjolfsson, director of Stanford’s Digital Economy Lab. In a world without AI chatbots, that would create what economists call productivity. But in the ChatGPT era it also produces valuable data. “Now that data can be used to solve other people's problems, so the same answer has generated more output,” Li says. “And I think it's really important to find a way to measure and compensate that.”

Raymond argues that it would be in an employer’s interest to find a way to reward workers whose data enables productivity-boosting AI systems. After all, employers will need sharp minds to stick around and continue feeding the model. “There's almost no business situation where there are no new problems. So you need those high performers to continue generating those best practices in the future.”

The question of whether workers should be compensated when their data helps train an AI system to do their job is the latest example of concerns about the way generative AI tools such as ChatGPT or image generators like Dall-E are created. The words or images needed to train these systems were crafted by people who stand to lose out when the AI system is complete. Coders and artists have sued AI companies, claiming that their copyrighted work was used without their permission. Reddit and programming site Stack Overflow say they will start charging AI companies for access to their troves of user conversations. But what happens if the company capturing the value of your data is your own employer? And what if the better you are at your job, the more valuable your data becomes?

The MIT and Stanford study shows how similar tensions could arise within companies using generative AI tools—and even between workers. The customer service agents worked for a Fortune 500 enterprise software company that the researchers did not have permission to name. The employees provided chat-based support to US small and medium-size businesses navigating administrative issues like payroll and taxes, work that was stressful and involved frequent interactions with cheesed-off customers, causing high turnover on the support team.

As a result, the company spent a lot of time training new workers hired to replace those who quit. Many of the skills needed were what the researchers called “tacit knowledge,” experiential know-how that can’t be easily codified but that large language models can absorb from chat logs and then mimic. The company’s bot helped with both technical and social skills, pointing agents to relevant technical documents and suggesting chipper phrases to soothe seething customers, such as “happy to help you get this fixed asap!”

After the bot started helping out, the number of issues the team resolved per hour jumped 14 percent. What’s more, the odds that a worker would quit in a given month went down by 9 percent, and customers’ attitudes toward employees also improved. The company also saw a 25 percent decline in customers asking to speak to a manager.

But when the researchers broke the results down by skill level, they found that most of the chatbot’s benefits accrued to the least-skilled workers, who saw a 35 percent productivity bump. The highest-skilled workers saw no gain, and their customer satisfaction scores even dipped slightly, suggesting that the bot may have been a distraction.

The value of that high-skilled work, meanwhile, multiplied as the AI assistant steered lower-skilled workers to use the same techniques. 

There’s reason to doubt that employers will reward that value of their own accord. Aaron Benanav, a historian at Syracuse University and author of the book Automation and the Future of Work, sees a historical parallel in Taylorism, a productivity system developed in the late 19th century by a mechanical engineer named Frederick Taylor and later adopted in Henry Ford’s car factories.

Using a stopwatch, Taylor broke physical processes down into their component parts to determine the most efficient way to complete them. He paid special attention to the most-skilled workers in a trade, Benanav says, “in order to be able to get less-skilled workers to work in the same way.” Now, instead of a fastidious engineer toting a stopwatch, machine learning tools can collect and disseminate workers’ best practices.

That didn’t work out so hot for some employees in Taylor’s era. His methods became associated with declining incomes for higher-skilled workers, because companies could pay less-skilled employees to do the same kind of work, says Benanav. Even if some high performers remained necessary, companies needed fewer of them, and competition between them increased.

“By some accounts, that played a pretty big role in sparking unionization among all these less-skilled or medium-skilled workers in the 1930s,” Benanav says. Some less-punitive schemes did emerge, however. One of Taylor’s adherents, the mechanical engineer Henry Gantt—yes, the chart guy—created a system that paid all workers a minimum wage but offered bonuses to those who also hit extra targets. 

Even if employers feel incentivized to pay high performers a premium for teaching AI systems, or employees win it for themselves, dividing the spoils fairly might be tricky. For one thing, data might be pooled from several workplaces and sent to an AI company that builds a model and sells it back to individual firms.

But a company that wanted to try could turn to a concept from game theory called the Shapley value, named for Nobel Prize–winning economist Lloyd Shapley, says Ruoxi Jia, an electrical engineer at Virginia Tech who has coauthored research papers on the value. It can be used to determine fair profit sharing when multiple players contribute different amounts to a group achievement and has been used to compensate patients for sharing medical data of differing values with researchers.

But calculating exact Shapley values is computationally expensive, Jia says, because it requires measuring how every possible combination of contributors affects the outcome. For that reason, the technique has yet to be applied to a large language model, the type of complex machine learning system behind bots like ChatGPT. And the sampling-based approximations typically used in machine learning contexts introduce a degree of randomness into the results.
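To make the idea concrete, here is a minimal sketch of that sampling approach, applied to a toy scenario rather than to the study or to Jia’s own methods. The agent names and the value function, which stands in for the measured quality of a model trained on each group’s chat logs, are invented for illustration.

```python
"""
Illustrative sketch only: a Monte Carlo approximation of Shapley values
for data contributors. The "value" of a coalition here is a made-up
placeholder; in practice it would be the measured performance of a model
trained on that coalition's data.
"""
import random
from typing import Callable, Dict, FrozenSet, Sequence


def shapley_monte_carlo(
    players: Sequence[str],
    value: Callable[[FrozenSet[str]], float],
    n_permutations: int = 2000,
    seed: int = 0,
) -> Dict[str, float]:
    """Estimate each player's Shapley value by sampling random orderings.

    In each sampled ordering, a player's marginal contribution is the
    change in coalition value when they are added after everyone who
    precedes them. Averaging over many orderings approximates the exact
    Shapley value, which would require evaluating every possible coalition.
    """
    rng = random.Random(seed)
    totals = {p: 0.0 for p in players}
    for _ in range(n_permutations):
        order = list(players)
        rng.shuffle(order)
        coalition: FrozenSet[str] = frozenset()
        prev_value = value(coalition)
        for p in order:
            coalition = coalition | {p}
            new_value = value(coalition)
            totals[p] += new_value - prev_value
            prev_value = new_value
    return {p: total / n_permutations for p, total in totals.items()}


if __name__ == "__main__":
    # Hypothetical per-agent data quality: the top performer's chats
    # contribute more to the trained model than the others'.
    contributions = {"agent_a": 5.0, "agent_b": 2.0, "agent_c": 1.0}

    def toy_value(coalition: FrozenSet[str]) -> float:
        # Value saturates once the pooled data starts to overlap.
        raw = sum(contributions[p] for p in coalition)
        return min(raw, 7.0)

    print(shapley_monte_carlo(list(contributions), toy_value))
```

Averaging marginal contributions over many random orderings is what introduces the randomness Jia alludes to; the estimate tightens as more permutations are sampled, at the cost of more model evaluations.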

If chatbots like the one tested in the MIT and Stanford study become common, some workers might use their own power to push for new approaches to compensation. Benanav points to companies in countries with friendlier collective bargaining laws, like Germany and Sweden, which tend to invest more in their workers than corporations in the US. Surveys indicate that Swedish citizens display less anxiety about robots taking their jobs, in part because when companies introduce new technologies, they often pay to upgrade their workers’ skills. “If you upskill workers, you pay them more,” Benanav says. “That's a more durable and sustainable process.”

The chatbot in the MIT and Stanford study appeared to make the workplace less abrasive for some workers by improving interactions between agents and customers, but one can imagine the same technology becoming a form of algorithmic management, the practice of using automated systems to surveil and control workers. Call center agents are already commonly subjected to such technology, which has been linked to lower pay and job satisfaction.

The researchers plan to continue studying the AI tool’s impact. They’re interested in whether workers learn from the chatbot or become dependent on it. “It’s like, could you drive without Google Maps?” says Li. If the answer is no, she says, it doesn’t necessarily spell doom. In her own work as an economist, statistical analysis software has replaced some of her manual calculation skills. “That’s not necessarily bad, because I do have access to that technology. And I can think about building a new set of skills.”