How Humanity Can Avoid an AI Takeover

We talk to MIT professor Daron Acemoglu about his book Power and Progress and unpack why making direct human-to-AI comparisons isn’t necessarily helpful in determining our relationship with technology.

ON THIS WEEK’S episode of Have a Nice Future, Gideon Lichfield and Lauren Goode talk to Daron Acemoglu, Institute Professor at MIT, about his new book Power and Progress and why we're not necessarily destined for an AI takeover.

Show Notes

Check out our coverage of all things artificial intelligence!

Lauren Goode is @LaurenGoode. Gideon Lichfield is @glichfield. Bling the main hotline at @WIRED.

How to Listen

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:

If you're on an iPhone or iPad, just tap this link, or open the app called Podcasts and search for Have a Nice Future. If you use Android, you can find us in the Google Podcasts app just by tapping here. You can also download an app like Overcast or Pocket Casts, and search for Have a Nice Future. We’re on Spotify too.

Transcript

Note: This is an automated transcript, which may contain errors.

Gideon Lichfield: Hi, I'm Gideon Lichfield.

Lauren Goode: And I'm Lauren Goode. And this is Have a Nice Future, a show about how fast everything is changing.

Gideon Lichfield: Each week we talk to someone with big, audacious ideas about the future and we ask, is this the future we want? 

Lauren Goode: This week, our guest is Daron Acemoglu, a professor of economics at MIT and the coauthor of a new book that is helping us think about what AI is going to do to us all.

Daron Acemoglu (audio clip): I'm not against automation. I think it's good if we automate certain things, but at the same time, we have to create as many new things for humans to do productively and contribute and expand their creativity as we are automating. And that latter part is not being done.

Lauren Goode: So Gideon, I've been thinking a lot about the film and TV writers' strike that's happening right now. It's been going on for a couple of weeks. And one of the demands that the writers are making is that the studios and producers create some limits around how they'll be using AI to write scripts. Do you think the writers are right to be worried that they'll be out of a job?

Gideon Lichfield: I don't think we're going to see scripts completely written by AI, at least not in the near future. But I can see a world where AI is being used for say, the basic structure of a story and then humans go in and add to it or clean it up or make it better. AI is really designed to do a good imitation of writing that already exists. It's not so great at making something completely original.

Lauren Goode: But it's advancing really quickly. I mean, I have to imagine that someone is sitting there right now with ChatGPT open and Final Draft next to them and they're just like copying and pasting parts of scripts into the software.

Gideon Lichfield: I'm sure somebody is. And I think that's kind of the crux of the question. Is it writers who are going to be using those tools to give themselves enhanced capabilities or is it the studios and the producers who are gonna use those tools to replace the writers? That's where I think the power struggle lies. Either way, I think it's going to change the writing profession pretty profoundly. And the Writers Guild is smart to be thinking about that. And honestly, they could do a lot worse than reading Daron Acemoglu's book Power and Progress.

Lauren Goode: And why is that? What does the book have to say about all this? 

Gideon Lichfield: Well, Daron is an economics professor at MIT, and his book, which he coauthored with Simon Johnson, who's also at MIT, takes a really long view, looking back at a thousand years of technological progress. And it asks, basically, at what times did a new technology benefit the larger workforce, and at what times did it mainly benefit the rich and powerful? And what they conclude is that when workers and civil society don't have a voice, the entities that do control the tech are probably going to use it in a way that runs counter to this narrative we've all been fed: that technological progress always shakes out to the benefit of everybody.

Lauren Goode: So basically, the writers’ strike is really part of a longer history, this ongoing cycle of new tech emerging and the fight to make sure that it is actually for the benefit of all.

Gideon Lichfield: Exactly. But I also think the writers' strike is a test case for how society adopts generative AI today and how workers and capital negotiate over that adoption. And Daron really changed the way that I think about what's possible there.

Daron Acemoglu (audio clip): The way I would put it is, don't think of your labor as a cost to be cut. Think of your labor as a human resource to be used better, and AI would be an amazing tool for it. Use AI to allow workers to make better decisions.

Lauren Goode: Did you interpret some of this more keenly because you are a writer and a journalist? 

Gideon Lichfield: Yeah, I've been thinking about it for a while because as you know, we published a policy here at WIRED a few months ago limiting how we use generative AI. And part of the reason is that I think it's important for us to use these tools in a way that augments human capabilities rather than replacing them. And that is essentially the argument of Daron's book as well.

Lauren Goode: So it sounds like, as a writer, I'm toast either way. If I don't embrace ChatGPT and the like to enhance my job, I'll probably be left behind. And if I do use ChatGPT to file a story for WIRED, you are definitely gonna call me out.

Gideon Lichfield: If you use ChatGPT to write lazy copy, then sure. I don't think that's what I'm looking for. But if you use it in an intelligent way to make yourself a more powerful journalist, that's something I can get behind.

Lauren Goode: Okay. Well, just to be clear, boss, I have not … filed copy that's generated from ChatGPT or anything like it. I have no plans to.

Gideon Lichfield: Very good.

Lauren Goode: Okay. Well I can't wait to hear this conversation and it's coming up right after the break.

[Break]

Gideon Lichfield: Thank you, Daron, for joining us on Have a Nice Future.

Daron Acemoglu: Well, I'm excited. Thank you. Thanks Gideon.

Gideon Lichfield: Your book Power and Progress is very timely because everyone is so interested in generative AI, but for years we've been hearing this back-and-forth debate about whether AI will create more jobs or take them away. And I think the central thesis of the book is: Well, it depends. Your book is full of examples from a thousand years of history of where tech innovation has empowered workers and spread wealth and created new opportunities, and where it didn't. A central piece of the book is the Industrial Revolution, which was impoverishing and disempowering for a lot of workers at first, but then the tide shifted. So why was it disempowering people at the beginning, and what then changed?

Daron Acemoglu: Well, I think the best way to understand what happened during the Industrial Revolution is to first consider the social milieu in which it was taking place. Britain was a very hierarchical society. The working people were referred to as the meaner sort of people. And the way that many of the leading industrialists thought is, "Well, I'm gonna use this machinery to get rid of workers. I'm gonna use the factory system to monitor them better so that I can impose discipline on them. And if I can get away with it, I'll employ women and children and pay wages as low as possible. And if anybody wants to organize, I have the laws on my side—trade union activity, trying to even negotiate wages or, God forbid, going on strike, are punishable by … imprisonment." So that was the context in which the early phase of the British Industrial Revolution played out. And if you look at the outcomes, we are not certain, we don't have great wage data or national income data, but the available evidence suggests that for about 80 to 90 years, real incomes of the working people did not increase. But at the same time, their working hours lengthened, they were subjected to much harsher working conditions, and their living conditions worsened.

Gideon Lichfield: Right. And then what shifted? Why did it start to move in the direction of benefiting workers? 

Daron Acemoglu: I think the twin process of institutional and technological change. First of all, if you look at British society toward the end of the 19th century, it's massively different from what it was in the middle of the 18th century. It started building a government sector that regulates factories, tries to clean up cities, builds a health care system and mass education, and that's bolstered by a democratic process. Now the majority of adult males are voting, and many of the draconian laws that made bosses so much more powerful over workers have been abolished. So trade union activity is now legal, and the Master and Servant Acts that left workers essentially at the whim of their employers, suable and imprisonable, those have been lifted. So the institutional context has changed a lot. And now there's a much more balanced power equilibrium between workers and firm owners and managers.

Gideon Lichfield: So there is this common narrative that you hear among tech founders and tech leaders, which is, you can't stop progress—society has always adapted in the past to technologies that people were scared of. So what's wrong with that narrative? 

Daron Acemoglu: I think there are two things that are wrong with that narrative. The first one is that by its nature, it sort of belittles the losers from technological progress.

Gideon Lichfield: Right. They get written out of history.

Daron Acemoglu: Yes, exactly. We give the examples of the Luddites: look how wrong they were, the gales of creative destruction and progress—they didn't understand them. Well, they understood them very well. They also understood that they were the losers out of this. And their hardship was not to be belittled. But the more fundamental thing that that narrative ignores, and that is actually central to the book, is that technology is very malleable. Technology is nothing but applications of human cognition and knowledge. And human understanding of nature and of our social relations is multifaceted. There are many ways in which we can put that to work, to change how we approach nature, how we approach human relations, how we approach the production process. Digital technologies, for example, don't have a preordained direction. They can be developed in many different ways. And once you make that realization, it isn't like, "Oh, technological progress is gonna happen; there is this direction that technology is gonna go." We decide that direction, and different directions have very different consequences, both for productivity and for distribution. That's why the subtitle of our book is "Our Thousand-Year Struggle Over Technology and Prosperity." There is a struggle. We cannot ignore that struggle, and it is jointly about technology and prosperity.

Gideon Lichfield: Right. You talk in the book about machine usefulness. What does that mean? What are the tenets of a more human-centered approach to tech? 

Daron Acemoglu: Yeah, it's a term that Simon and I invented. The whole point of it is to create a different set of analogies than machine intelligence does. I think when we talk of machine intelligence, we are immediately getting into the mind frame of thinking of machines doing things that are just like humans. And that's what automation is. Take the tasks—there are billions of them—but take the tasks that humans perform and then define machine intelligence as parity or improvement relative to humans in some of those tasks. That is, to me, the wrong vision. It pushes us down the rabbit hole of excessive automation, and it doesn't leverage what we really want from machinery. Let me give you the example of a hand calculator. I think it's a fantastic machine. It's not intelligent. I don't think anybody would say that simple calculators have human-like capabilities of reasoning, but they are superbly useful. I'm not very good at multiplying seven-digit numbers, nor dividing them one by the other. As long as I put the calculator to a good use that boosts my capabilities, my productivity, the set of things that I can do, I think that's the sort of thing that we should strive for. And with that term, we are trying to encourage that sort of mind frame.

Gideon Lichfield: Right. So when you look at the kinds of uses that are being proposed now with generative AI, which ones look to you like things that enhance people and which ones look to you like they might be disempowering people or taking work away? 

Daron Acemoglu: That question's really hard to answer with generative AI, and I'll tell you why. Generative AI, or at least the large language models that have come out of generative AI, have the capability to be empowering to humans. After all, we can put them to use for information curation, filtering, and verification for humans. So we can make decisions, be creative, design new products using much better information. We can use that for creating better matches between different types of human skills. We can be in a position where we get inputs from large language models, for example in writing some simple code, on which we can build and be more creative and more expansive. But on the other hand, there is also a lot of rote automation that you can do with generative AI. And the problem is that the industry often does the automation but talks as if it's going to be human-enriching. And that's where the difficulty lies in talking about the future that generative AI will bring.

Gideon Lichfield: When you say rote automation, what's an example of that? 

Daron Acemoglu: Look at what generative AI or large language models are being used for right now. There are a lot of simple writing tasks or simple information-representation tasks that companies are already automating using large language models.

Gideon Lichfield: Like writing simple marketing copy, for example.

Daron Acemoglu: Like marketing and advertising, or news summaries like BuzzFeed used to do. I don't see anything wrong with that. I'm not against automation. I think it's good if we automate certain things, but at the same time we have to create as many new things for humans to do productively and contribute and expand their creativity as we are automating. And that latter part is not being done. And that's my sort of beef with the direction in which large language models are going right now.

Gideon Lichfield: What would it look like to do that, then? Here's something I can see: A lot of people are using image generators like Dall-E and Midjourney to create art much more quickly. And some people are saying, "This can augment my work as an artist." And some people are saying, "No, that will actually take away from the work of many illustrators or stock photographers." So how do you use it in such a way that it is augmentative rather than just diluting people's work?

Daron Acemoglu: The parts that I have emphasized, like information curation and information filtering, I think those things can really lead to many new functions and many new tasks for workers, for knowledge workers, for white-collar workers. But the problem there is that the current architecture of LLMs is not very good for that. What do LLMs do? I think they have been, so far, partly optimized for impressing humans. The tremendous, meteoric rise of ChatGPT is on the basis of giving answers that humans find intriguing, surprising, impressive. But what that also brings is that it's not sufficiently nuanced. So if, as a journalist or as an academic, I go to GPT-4 or GPT-3 and try to understand where different types of information are coming from, or how reliable different types of information are, it does not give good answers. And in fact, it gives very misleading answers.

Gideon Lichfield: Right, it hallucinates often, yes.

Daron Acemoglu: It hallucinates, it makes things up, or it refuses to recognize when two answers are contradictory, or when two answers are saying the same thing but are being represented as independent pieces of information. So there is a lot of complexity to human cognition, which has evolved over hundreds of thousands of years, that we can try to augment using these new technologies, but this sort of excessive authoritativeness of large language models is not gonna help.

Gideon Lichfield: Right now, we have the film and TV writers of Hollywood on strike, and one of the demands is that the movie studios take steps to ensure that AI doesn't replace them. So what should the studios be doing? 

Daron Acemoglu: So the fundamental issue, which, again, is central not just to large language models but to the entire AI industry, is who controls data. I think the real argument, which is very valid, that's coming from the Writers Guild is that these machines are taking our creative data and they're going to repackage it. Why is that fair? Actually, think of the large language models. If you look at the answers that they give, the correct and relevant answers, a lot of it comes from two sources: books that have been digitized, and Wikipedia. But none of that was done for the purpose of enriching OpenAI, Microsoft, or Google. People wrote books for different purposes, to communicate with their colleagues or with the broader public. People devoted their effort and time to Wikipedia for this collective project. None of them agreed that their knowledge was going to be taken over by OpenAI. So the Writers Guild is trying to articulate, I think, a deeper problem. In the age of AI, we have to be much more cognizant of whose data we are using and in what way we are using it. I think that requires both regulation and compensation.

Gideon Lichfield: Right. In other words, when you talk about data, you're also talking about the writing that AI is trained on.

Daron Acemoglu: Exactly.

Gideon Lichfield: And who gets compensated for that training? 

Daron Acemoglu: Right.

Gideon Lichfield: Well, let's come to the regulation question, because even in past eras, when tech innovation seemed to move much more slowly, it was incredibly socially disruptive. We saw that in the case of the Industrial Revolution, for example, and today it feels like these changes move faster than ever. Do you think that they are in fact moving faster? And if so, how does regulation keep pace? How does society adapt to changes that are so rapid?

Daron Acemoglu: Things are going very fast, and I think the unforeseen consequences here are just that, completely unforeseen, and we need a regulatory framework. But you're absolutely right. We have not kept pace with the developments in the tech world in such a way that regulation is gonna be easy. First of all, all of the talent is now attracted to the tech world, so there aren't amazingly knowledgeable experts working in the government sector anymore. That was very different in, you know, the 1950s or '60s. Second, I think we have gotten into a legal framework where it's going to be very difficult to implement the things that we mentioned before, like regulating who controls data, or making companies pay for the data that they use without permission. So all of these, I think, are going to require big changes in who we attract to the civil service, how we incentivize people in the civil service, and what sort of fast-track laws we need in order to make this regulation a reality.

Gideon Lichfield: If you are a legislator or a policymaker looking at generative AI and trying to think about where should the first targets of regulation be, when everything is changing so fast, what should you focus on? 

Daron Acemoglu: I think there are so many things to worry about. The way that I think about this is, first, we have to start with an aspiration. We have to agree on what we want from new technologies. There, my argument is very clear: We want new technologies to empower workers, to increase worker productivity, and to empower citizens. Now, not everybody's gonna agree on this, but if there's a broad enough agreement, that's a good goal. Then we need to form the narrative around that. How do we achieve that? Whose vision do we need to follow? What is feasible? Who do we need to empower for this? We need to build institutions around it. How do we get worker voice? How do we get writers' voice? How do we get broader civil society engaged in this? How do we build the institutional foundations of a better regulatory system? And then we need specific policies. Regulation of data, we talked about that. I think we need to put guardrails around how tech companies can take people's data. We need to perhaps support data unions, so that certain types of creative artists can form unions and sell their data products in some coherent way.

Gideon Lichfield: All of these so that the data can't just be used willy-nilly to—

[overlapping conversation]

Daron Acemoglu: Exactly. It cannot be expropriated just at the whim of tech companies and then justified ex post. I think we need to worry about the power of the biggest tech companies. So does that require more antitrust? Again, I don't think that's a panacea, but it's something to be considered.

Gideon Lichfield: If you are the leader of a company, let's say, and it doesn't matter what sphere it's in, maybe it's the law, maybe it's marketing, maybe it's something else, and you're thinking about how to bring generative AI into the workplace, what are some good or some bad choices that you could make?

Daron Acemoglu: I think there are a lot of profit opportunities for companies if they can use their workforce in a better way. It's a change of vision. The way I would put it is, don't think of your labor as a cost to be cut. Think of your labor as a human resource to be used better, and AI would be an amazing tool for that. Use AI to allow workers to make better decisions. If you are a hospital and you can use AI, now, that's gonna, again, require an institutional element, and doctors are not gonna like some of that. But if you can train your nurses better and give them AI tools so that they can provide much better care and much better diagnosis, so they can prescribe medications and take much more of a rapid task force type of approach to the care of patients in emergency rooms, I think those are gonna be much better for hospitals. In schools, don't think of AI as a way of sidelining teachers; think of it as a way of empowering teachers. We need more individualized education programs for children who are coming from diverse backgrounds, with lots of challenges, with lots of difficulties in certain parts of the curricula. I think we can do that using AI. In the entertainment industry, I think—you were hinting at this earlier on—we can use these tools to create a richer form of entertainment, not, again, sideline the writers and the creative artists.

Gideon Lichfield: One of the takeaways of the book, I think, because it covers such a broad sweep of history, is that technological gains keep being captured by elites and then recaptured by social forces, and the cycle keeps swinging back and forth. So what has to happen for a more equitable approach to the development of technology to really take root, do you think?

Daron Acemoglu: I would go back to the same answer that I gave. I think we have to first start discussing these aspirations. I think it's really central that we redirect technological change, so that start has to be an aspiration. Then we need to form the right sort of institutional framework for making that happen. I think those two are really critical. Right now we are at a point, in the United States especially, where there are no countervailing powers. The democratic process is not working as well as it used to. It wasn't perfect before, but it's in a much worse position, with parties being captured by special interests, polarization, conspiracy theories, misinformation everywhere. We are at a point where the most usual way in which worker voice was heard in the past, through the labor movement and unions, is not working anymore, and it's not clear what will replace the worker movements of the industrial age. But we need something. We need civil society to play more of a constructive role in this process, and we need a regulatory structure, as we talked about.

Gideon Lichfield: Last question. What keeps you up at night, and what makes you hopeful? 

Daron Acemoglu: All of this keeps me up at night. Look, I'm an optimist. I believe in the possibility that we can use technology to expand human capabilities. I also believe that humans are unique, distinct, and enriched by their diversity. So we need to find a humanist path for the future of AI, and I'm sure that such a path exists. But my problem is that we neither know where that path is, nor are we looking for it at the moment.

Gideon Lichfield: Well, Daron, I think you've outlined how we might have a nicer future. Whether or not we're actually pushing toward it at the moment, that is the question. Thank you for joining us.

Daron Acemoglu: Thank you. This was an amazingly fruitful, thought-provoking conversation. Thanks for having me on the show.

[Break]

Lauren Goode: So Gideon, now that you've had a little bit of time to digest your conversation with Daron, what's your biggest takeaway from it?

Gideon Lichfield: I think it's that he challenges the sense of inevitability that seems to accompany new tech developments. This idea that innovators just build the tech, put it out there, and you can't stop its progress, and society finds a way to adapt around it. He keeps on using the word "choice" in the book and also in the conversation. And his point is that there are choices that you can make as a policymaker, and there are choices you can make as an adopter of technology, and there are choices you can make as an ordinary worker around how you use or try to avoid using a technology, and all those choices influence the outcome that it will have. It's not something that is just dictated by the tech itself.

Lauren Goode: Were there specific examples in the book that stood out to you? 

Gideon Lichfield: There's a really simple, interesting example he uses of when technology doesn't benefit workers. He calls it so-so automation. And the example he uses is a supermarket with self-service checkout kiosks. He says those kiosks don't do anything to increase the overall productivity of the supermarket. It doesn't sell more goods because it has automated kiosks; it just saves some money on the salaries of the workers. And so that doesn't benefit the workers, it just benefits the company's bottom line. But then he talks about the rise of mass production of cars after the Second World War, and he says, sure, there was a lot of automation there: There were assembly lines, there were workers who were made to do very repetitive jobs. But the rise of the car industry also created a huge number of new kinds of jobs and skills, and it caused other industries to grow that provided the raw materials or the design for cars and their components. And of course, the car changed the economy and society as a whole. It made it easier to get to places and to deliver things, and it caused us to urbanize more. So the car industry, even though it involved a lot of automation, was also automation that created many, many more opportunities for work.

Lauren Goode: I like what Daron said in your conversation with him about how we shouldn't be trying so hard to establish parity between humans and machines, always defaulting to saying that a machine is going to replace X, this thing that a human does, but instead looking at how a machine is going to boost human capabilities, because it can't actually do the thing that humans do. Maybe in a way that means our concerns right now about AI replacing our knowledge jobs are a little bit overwrought. Maybe we should actually be a little more open-minded, or optimistic, about the idea that it might largely enhance rather than replace.

Gideon Lichfield: I think we should be exploring its capabilities and trying to figure out what it can help a human worker do better. I am curious about whether, as a journalist, I can use AI to, I don't know, help me collate a lot of information quickly, or learn about a topic that I don't know very well, or even suggest angles on a story, which I can then do my own reporting and my own writing on, but use the AI to help kickstart that process. What I think we should be wary of is the temptation to use AI to do a task that a human can do, and do it just well enough that you can produce something, but something that isn't very good. I think that's where we run into the risks of AI replacing humans and, in the process, just producing mediocre work, which I think is what the Hollywood writers are worried about. It's also what we've seen with some of the journalistic organizations that have tried using AI to write stories: They got stories that were full of errors and were just kind of mediocre.

Lauren Goode: Yeah, I think at the core of the writers' strike is the concern that we end up losing human ingenuity and creativity, and those are the things that are most valuable. And I think those are the things on which machines and humans don't achieve parity.

Gideon Lichfield: Yeah. I think what Daron is saying basically is when you're thinking about how to apply AI, start by thinking about the human and what AI can do to make that human a better worker, rather than thinking about the task and what AI can do to automate the task.

Lauren Goode: I love what he said about some of these gen-AI chatbots basically existing to impress.

Gideon Lichfield: Yeah, he was making a fairly basic point about the way AI functions, which is that because what it does is predict the next word in a sequence, what it's trained to do is produce the text that sounds the most plausible and the most coherent. But it's not optimizing for accuracy, it's optimizing for coherence. And so it can produce things that sound great but are actually full of errors. That's, I think, what he meant by "it's trying to impress."
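Note: To make the point above concrete, here is a minimal, purely illustrative sketch of greedy next-word selection. Every word and probability in it is invented for this example; real large language models are neural networks trained on vast text corpora, not lookup tables. The sketch shows only the mechanism Gideon is describing: the selection criterion is plausibility, not accuracy.

```python
# Toy "language model": a lookup table of invented next-word probabilities.
# Nothing here reflects a real model; it only illustrates greedy decoding.
next_word_probs = {
    ("the", "moon", "is"): {"made": 0.5, "bright": 0.3, "far": 0.2},
    ("moon", "is", "made"): {"of": 0.9, "from": 0.1},
    ("is", "made", "of"): {"cheese": 0.6, "rock": 0.4},  # plausible beats true
}

def greedy_continue(prompt: str, steps: int) -> str:
    """Extend the prompt by repeatedly picking the most probable next word.

    The objective is coherence (probability); factual truth never enters into it.
    """
    words = prompt.lower().split()
    for _ in range(steps):
        context = tuple(words[-3:])  # last three words serve as the context
        candidates = next_word_probs.get(context)
        if not candidates:
            break  # no known continuation for this context
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(greedy_continue("The moon is", 3))  # -> "the moon is made of cheese"
```

The output reads as perfectly coherent, and it is wrong, which is exactly the failure mode Gideon and Daron are describing.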

Lauren Goode: Yeah. In a way, a lot of this is like a big flex right now, because you have these big corporations elbowing each other to get to the front of the line in the generative AI race. This is technology that some of them have been working on for many years at this point, but as soon as OpenAI released its chatbot late last year, it opened the floodgates for Microsoft and Google to try to release their versions of these generative AI tools. We were just at Google's developer conference last week, and almost the entire two-hour keynote was about generative AI in Google Cloud, Google Apps, and Android. Whereas in the past, almost the entire conference was about the Android operating system, and maybe a little bit of Search, and maybe, like, Maps. But now it's just gen-AI all day long. But I'm still curious as to whether or not consumers, we the people who are on the internet and using the internet, actually want our experiences to be shaped in this way. Where is the overwhelming consumer sentiment that this is how they want chat, or search, or work, to be?

Gideon Lichfield: It sounds like you're saying that people might love the idea of having a chatbot do their work for them and make it easier, but that actually when they're looking at the work that other people do using chatbots, they're not gonna find it that useful.

Lauren Goode: Sure, or maybe people just don't want to Google-search that way. We all like our familiar interfaces. But to bring it back to Daron's point: I think right now there's probably a percentage of the population using ChatGPT and tools like it who are getting genuine value out of it. They're using it for real work. Coders come to mind, because of the way these tools can spit out code for people. That's pretty incredible, provided that the code is correct. But then I think there are a lot of other people who are still using it as a novelty. "Oh, look at what this thing can do. Oh, cool, it wrote me a love letter or a poem, or it spit out a cover letter for me," but then a lot of people say that they still go in and tweak it themselves. And some of that does feel to me like it exists right now to impress. It exists to say: Here are these large language models that have been in development for a very long time. It's still early days, and here's what they can do. It gave AI a UI, and I think by definition, when you roll something out in beta and you're like, "Hey world, look at this thing," it is to impress.

Gideon Lichfield: Did he leave you feeling optimistic about the possibility that maybe this time with generative AI, we can get it right and not have it turn into a technology that benefits just a few? 

Lauren Goode: One thing that struck me from your conversation with Daron is the idea that we still don't really know how to think about AI, but everyone is very eager to give each other a new framework for thinking about it. I think “framework” is going to be the buzzword of 2023. I would like to look up the Google search trends right now for the word framework and just see how much it has skyrocketed. Because we are just feeling our way through the dark on this—

Gideon Lichfield: I love a good framework.

Lauren Goode: And we need one. [chuckle] I, too, have found myself using it in recent weeks. I'm like, "Oh my God, stop using this word." But we are looking for structures, or blueprints, or just something that's going to help us chart a path forward.

Gideon Lichfield: It feels like 15 years ago or so when the social media companies were launching, nobody was really having these conversations about the social impact, and it took us several years to start to notice just how profound the effect of Big Tech was on society. So do you feel like we're now having that conversation a little earlier? 

Lauren Goode: Absolutely. I feel like some of this is a correction, not only on the part of tech companies, but on the part of journalists and thinkers. I don't want to use the term “thought leader” because then I would bog this podcast down with too many buzzwords. Yeah, I think that we're looking at the ways that technology has advanced over the past 20 or 25 years, and we're looking at some of the privacy nightmares and the ways that inequities have been deepened, and basically saying what were the questions that we weren't asking 15 years ago, or 20 years ago? What do we need to be asking now? And I think we have an obligation to do it, actually. And there are going to be people on the side of tech who say that we're being alarmist, or that this is slowing down innovation. Just the other day, a tech executive told me that because of new policies like GDPR, one of the first hires that a startup should probably consider making is a compliance officer, whereas in the past, you know, 10 years ago, they weren't thinking about hiring a compliance officer right out of the gate. They were using that budget for like coders and stuff.

Gideon Lichfield: Was this executive saying that is a bad thing? Horrors, we have to actually pay somebody to think about the law now.

Lauren Goode: Right, or that they wouldn't have to do that typically until a later stage of the startup, and now it's something that you have to consider right out of the gate. That's just one example of saying, you know, how all of this policy is going to slow us all down.

Gideon Lichfield: Sounds like a good thing to me.

Lauren Goode: And that's probably valid. But right now we have more information about how technology is impacting society, and it would be completely foolish not to integrate that information and use it to ask the right questions.

[Music]

Gideon Lichfield: That's our show for today. Thank you for listening. Have a Nice Future is hosted by me, Gideon Lichfield.

Lauren Goode: And me, Lauren Goode.

Gideon Lichfield: If you like the show, you should tell us. Leave us a rating and a review wherever you get your podcasts, and don't forget to subscribe for new episodes each week.

Lauren Goode: We really want to hear from you. You can also email us at nicefuture@WIRED.com. Tell us what you're worried about, what excites you, any question you have about the future, and we'll try to answer it with our guests.

Gideon Lichfield: Have a Nice Future is a production of Condé Nast Entertainment. Danielle Hewitt and Lena Richards from Prologue Projects produce the show.

Lauren Goode: See you back here next Wednesday. And until then, have a nice future.

