
The Overlooked Upsides of Algorithms in the Workplace

Author and labor lawyer Orly Lobel says AI can help mitigate human biases in hiring and compensation.

Orly Lobel believes technology can make the world a better place—and she knows in 2022, that makes her a bit of a contrarian.

Lobel, a law professor specializing in labor and employment at the University of San Diego in California, has studied how technology and the gig economy affect workers. That has made her familiar with the potential disruptions caused by tools like automated résumé screening and apps that use algorithms to assign work to people. Yet Lobel feels the discussion about automation and AI is too stuck on the harms these systems create.

In her book The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future, Lobel encourages a sunnier view. She surveys the ways AI has pervaded many of the most important and personal aspects of our lives, with job seekers increasingly placing their fate in the judgments of automated systems and home health care devices sweeping up reams of intimate data. If deployed with care, Lobel argues, such tools can create more diverse applicant pools or more effective health care. She spoke to WIRED about seeing AI as a potential force for good. This interview has been edited for length and clarity.

Jennifer Conrad: You characterize this book as contrarian. What’s wrong with the recent attention to the idea that AI can be harmful?


Orly Lobel: For the past decade, I’ve seen too much of a binary discussion. People on the inside of the tech industry are not really interested in equality, distributive justice, and fairness—they’re just celebrating technology for the sake of technology. Then there are people asking, “Who are the winners and losers, and how do we protect different rights?” I wanted to bridge the two conversations.

We need to celebrate opportunities and successes, not just have tunnel vision on the problems. And people who are interested in having these conversations are getting more discouraged. A lot of people, particularly women and minorities, are opting out of working for Big Tech. It’s a vicious circle, where we’re getting fewer of those diverse voices on the inside, and the people who are critiquing or being agnostic have less skin in the game. 

People often assume algorithms give precise or perfect answers. Is there a danger that no one will question automated hiring calls, or accusations of harassment?

I’ve been researching hiring and diversity and inclusion for a long time. We know that so much discrimination and disparity happens without algorithmic decisionmaking. The question to ask if you’re introducing a hiring algorithm is whether it is outperforming the human processes—not if it’s perfect. And when there are biases, what are the sources, and can they be corrected, for example, by adding more training data? How much can we debias as humans versus how much can we improve the different systems?

The vast majority of large companies today are using some form of automated résumé screening. It’s important for agencies like the US Equal Employment Opportunity Commission and the Labor Department to look at the claims versus the results. There hasn’t been enough nuanced conversation about the sources of the risks and whether they can be corrected.

You describe the potential of using candidate-screening technology that takes the form of an online game, like Wasabi Waiter from a company called Knack, where a person is a server in a busy sushi restaurant. How can that be effective at assessing job candidates?


It’s thinking more creatively about what we’re screening for, using insights from psychology and other research on what makes a good team player. You don’t want only what we call exploitation algorithms, which look at who became successful employees in the past, like somebody who finished an Ivy League college and was captain of a sports team.

There’s a lot of talk about the black box problem, that it’s hard to understand what the algorithm actually is doing. But from my experience as an expert witness in employment discrimination litigation, and research into hiring, it’s also very hard to pierce the black box of our human minds and trace what happened. With digital processes, we actually do have that paper trail, and can check whether a game or some kind of automated emotional screening will outperform the previous way of screening in creating a more diverse pool of people.  
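To make the paper-trail point concrete, here is a minimal sketch, in Python and with entirely hypothetical numbers, of the kind of check a digital record makes possible: comparing an older human screen against a game-based screen using the EEOC-style "four-fifths" selection-rate test. It illustrates the idea only; it is not any vendor's or regulator's actual audit method.

```python
# Illustrative sketch: comparing two screening methods on the diversity of the pool they pass.
# Uses the EEOC-style "four-fifths" selection-rate comparison; all numbers are hypothetical.

def selection_rates(passed: dict, applied: dict) -> dict:
    """Selection rate per group = candidates passed / candidates who applied."""
    return {group: passed[group] / applied[group] for group in applied}

def adverse_impact_ratio(rates: dict, reference_group: str) -> dict:
    """Each group's rate relative to the reference group; below 0.8 is a common red flag."""
    ref = rates[reference_group]
    return {group: rate / ref for group, rate in rates.items()}

applied      = {"group_a": 400, "group_b": 300}   # hypothetical applicant counts
human_screen = {"group_a": 120, "group_b": 54}    # passed by the old human process
game_screen  = {"group_a": 110, "group_b": 78}    # passed by the game-based screen

for name, passed in [("human screen", human_screen), ("game screen", game_screen)]:
    rates = selection_rates(passed, applied)
    print(name, adverse_impact_ratio(rates, reference_group="group_a"))
```

On these made-up numbers, the human screen passes group_b at only about 60 percent of group_a's rate, while the game-based screen narrows the gap to roughly 95 percent; the point is that an automated pipeline leaves behind the data needed to run such a comparison at all.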

My personal experience of applying for jobs that require aptitude tests and personality screenings is that I find them opaque and frustrating. When you’re speaking to someone face to face, you can get a bit of a sense of how you’re doing. When the whole process is automated, you don’t even really know what you’re being tested on. 

That’s what a lot of people feel. But this is where I get a little more contrarian. It’s not just about how people experience the interview, but what we know about how good people are at making assessments during an interview.

There’s quite a bit of research that shows that interviews are a bad predictor for job performance, and that interviewers consistently overestimate what they can actually glean from an interview. There’s even research that shows how in a matter of seconds, bias creeps in. If we’re serious about expanding the pool of people eligible for a job, the sheer numbers of applicants will be too much for a human to take on, at least in the initial stages.

A lot of these workplace biases are well documented. We’ve known about the gender pay gap for a long time, but it has been very hard to close. Can automation help there?

It has been frustrating to see how stagnant the gender pay gap has been, even though we have equal pay laws on the books. With the vast datasets now available, I think we can do better. Textio’s software helps companies write job ads that are more inclusive and will result in a more diverse applicant pool. Syndio can detect pay inequities across different parts of the labor force in large workplaces, which can be harder to see.

It’s kind of intuitive: If we use software to look across many different modes of pay and a lot of different job ads, we can pierce that veil of formal job descriptions in a large workforce and see what’s happening in terms of gender and race. We used to have this idea of auditing as one-shot—once a year—but here you can have continuous auditing over several months, or when there’s suddenly an increase in pay gaps introduced by things like bonuses.
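A minimal sketch of what continuous pay auditing could look like, assuming a hypothetical payroll snapshot with job group, gender, base pay, and bonus columns. It illustrates the idea of flagging gaps every pay cycle rather than once a year; it is not Syndio's or any other vendor's actual method.

```python
# Illustrative sketch of continuous pay-equity auditing (hypothetical data and thresholds).
import pandas as pd

payroll = pd.DataFrame({
    "job_group": ["engineer", "engineer", "engineer", "analyst", "analyst", "analyst"],
    "gender":    ["F", "M", "M", "F", "F", "M"],
    "base_pay":  [98000, 104000, 101000, 72000, 70000, 78000],
    "bonus":     [5000, 9000, 8000, 2000, 2500, 6000],
})

def audit_pay_gaps(df: pd.DataFrame, threshold: float = 0.05) -> pd.DataFrame:
    """Flag job groups where median total pay differs by gender beyond a threshold."""
    df = df.assign(total_pay=df["base_pay"] + df["bonus"])
    medians = df.groupby(["job_group", "gender"])["total_pay"].median().unstack()
    medians["gap"] = (medians["M"] - medians["F"]) / medians["M"]
    return medians[medians["gap"].abs() > threshold]

# Run after every payroll cycle (for example, once bonuses post), not just once a year.
print(audit_pay_gaps(payroll))
```

Because bonuses are included in total pay, a gap that only appears after bonus season, the scenario Lobel describes, would surface in that cycle's audit rather than a year later.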

That approach raises the question of how much data we should give up in order to be protected or evaluated fairly. You wrote about using AI to monitor workplace chats for harassment. My first thought was, “Do I really want a bot reading my Slack messages?” Are people going to be comfortable having so much of their information digitized in order for software to make judgments about them?

We’ve always had these tensions between more privacy as a protective measure, and privacy as something that conceals and protects the powerful. Nondisclosure agreements in the workplace have been ways to conceal a lot of wrongdoing. But the technology is actually making some of these trade-offs more salient, because we know we’re being monitored. There are now reporting apps where reports are unlocked only after a person has been flagged for harassment several times.
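Here is a rough sketch of the escrow idea behind such reporting apps, under the assumption that reports stay sealed until a set number of independent reporters flag the same person. The threshold, names, and data structure are hypothetical and do not describe any specific app's design.

```python
# Illustrative sketch of an information-escrow reporting flow (hypothetical design).
# Reports against a person stay sealed until enough independent reporters flag that person.
from collections import defaultdict

UNLOCK_THRESHOLD = 2  # hypothetical: release reports once two different reporters flag the same person

class ReportEscrow:
    def __init__(self, threshold: int = UNLOCK_THRESHOLD):
        self.threshold = threshold
        self.sealed = defaultdict(list)  # accused -> list of (reporter, report text)

    def submit(self, reporter: str, accused: str, text: str):
        """Hold the report in escrow; return all unlocked reports if the threshold is reached."""
        self.sealed[accused].append((reporter, text))
        reporters = {r for r, _ in self.sealed[accused]}
        if len(reporters) >= self.threshold:
            return self.sealed.pop(accused)  # release the full set for review
        return None

escrow = ReportEscrow()
escrow.submit("alice", "manager_x", "Inappropriate comments in a 1:1.")
released = escrow.submit("bob", "manager_x", "Similar behavior in a team meeting.")
print(released)  # both reports unlock once a second, independent reporter flags the same person
```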

What about platforms for informal or gig work? Airbnb stopped showing profile photos for hosts or guests after data showed minorities were less likely to complete successful bookings. But the company recently found that Black guests still face discrimination.

This is a story of active, continuous auditing: detecting discrimination through the digital paper trail and the computational power of machine learning. While human discrimination continues, it can be better understood, identified, isolated, and corrected by design when it happens on platforms than when it happens in the offline market.

Now that so much of our data is out there, some argue regulation should focus less on data collection and more on ways to control how that data is used.

Absolutely. I love that. While privacy is important, we need to understand that sometimes there’s tension between privacy and the kind of representative, unskewed data collection that accurate and trustworthy AI depends on. A lot of the conversations we’re having are pretty muddled. There’s this assumption that the more data we collect, the more it’s going to disproportionately put marginalized communities at risk.

We should be equally concerned about people who are what I would call data marginalized. Governments and industry make decisions about resource allocation from the data they have, and some communities are not equally represented. There are many examples of positive uses of fuller information: cities deciding where to connect roads, or United Nations initiatives investing in under-resourced schools and villages, with decisions informed by satellite imaging and even smartphone activity. The story of human progress and fairness is: the more we know, the more we can understand and correct the root causes of discrimination.
