Photograph: Daniel Sambraus/Getty Images

Now That ChatGPT Is Plugged In, Things Could Get Weird

Letting the chatbot interact with the live internet will make it more useful—and more problematic, too.

ChatGPT has dazzled with its poetry, prose, and academic test scores. Now prepare for the precocious chatbot to find your next flight, recommend a restaurant with good seating, and fetch you a sandwich, too.

Last week, OpenAI, the company behind ChatGPT, announced that a slew of companies including Expedia, OpenTable, and Instacart have created plugins to let the chatbot access their services. Once a user activates a plugin, they will be able to ask ChatGPT to perform tasks that would normally require using the web or opening an app, and hopefully see the dutiful bot scurry off to do it.

The move potentially heralds a big shift in how people use computers, apps, and the web, with clever AI programs completing chores on their behalf. Until now, ChatGPT has been cut off from the live internet, unable to look up recent information or interact with websites. Changing that may also help cement OpenAI’s position at the center of what could rapidly become a new era for AI and personal computing.

“I think it’s a genius move,” says Linxi “Jim” Fan, an AI scientist at Nvidia who works on autonomous agents. Fan says ChatGPT’s ability to read documentation and interpret code should make the process of integrating new plugins remarkably smooth. He believes it may help OpenAI take on Apple and Google, which use their app stores to operate as gatekeepers. “The next generation of ChatGPT will be like a meta-app—an app that uses other apps,” Fan says.
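The documentation Fan refers to takes the form of a machine-readable manifest that each plugin provider hosts, which ChatGPT reads to learn what the service does and how to call it. The sketch below is paraphrased from OpenAI's plugin documentation at launch; the restaurant service, URLs, and descriptions are invented for illustration.

```json
{
  "schema_version": "v1",
  "name_for_human": "Table Finder",
  "name_for_model": "table_finder",
  "description_for_human": "Find and book restaurant tables.",
  "description_for_model": "Search restaurants by location and party size, and book tables. Use when the user asks about dining reservations.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

Notably, the `description_for_model` field is written for the AI rather than for people: the model decides on its own, from that natural-language description and the linked OpenAPI spec, when and how to invoke the service.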

But some are concerned by the prospect of ChatGPT—and OpenAI—gaining increasing dominance through its AI. If other businesses come to rely too heavily on OpenAI’s technology, the company could reap huge financial rewards and wield enormous influence over the technology industry. And if ChatGPT becomes a foundational layer of the tech industry, OpenAI will have an outsize responsibility for ensuring that a fast-moving technology is used carefully and responsibly.

“There’s some distress in the startup ecosystem among companies that were picking up pennies in front of the OpenAI steamroller,” says Sarah Guo, cofounder of Conviction VC, an investment group, in reference to businesses trying to make money by building technology similar to ChatGPT. Guo says that OpenAI’s latest maneuver “improves the staying power and strategic position” of the company’s consumer business.

OpenAI has captured the public’s imagination with ChatGPT, which is far more capable, coherent, and creative than previous chatbots, and it has also lured dozens of startups into building on top of its AI. Microsoft, which has invested $10 billion in OpenAI, has added ChatGPT to the search engine Bing and is rushing to fold it into other products, including its Office suite.

ChatGPT is built on top of an algorithm called GPT that OpenAI began developing several years ago. GPT predicts the words that should follow a prompt based on a statistical analysis of trillions of lines of text harvested from web pages, books, and other sources. Although GPT is, at heart, little more than an autocomplete program, the latest version, called GPT-4, is capable of some remarkable feats of question-answering, including scoring highly on many academic tests.
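The "autocomplete at heart" idea can be illustrated with a toy next-word predictor: count which word follows each word in a corpus, then pick the most frequent successor. GPT does something conceptually similar, but with a neural network over tokens and vastly more data; the corpus and code here are purely illustrative, not OpenAI's.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; real models train on trillions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally how often each word follows each other word.
successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" twice, more than any other word
```

The gap between this and GPT-4 is one of scale and architecture, not of kind: both turn statistical patterns in past text into a guess about what comes next.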

A number of open source projects such as LangChain and LlamaIndex are also exploring ways of building applications using the capabilities provided by large language models. The launch of OpenAI’s plugins threatens to torpedo these efforts, Guo says.

Plugins might also introduce risks that plague complex AI models. ChatGPT’s own plugin red team members found they could “send fraudulent or spam emails, bypass safety restrictions, or misuse information sent to the plugin,” according to Emily Bender, a linguistics professor at the University of Washington. “Letting automated systems take action in the world is a choice that we make,” Bender adds.

Dan Hendrycks, director of the Center for AI Safety, a non-profit, believes plugins make language models more risky at a time when companies like Google, Microsoft, and OpenAI are aggressively lobbying to limit liability via the AI Act. He calls the release of ChatGPT plugins a bad precedent and suspects it could lead other makers of large language models to take a similar route.

And while there might be a limited selection of plugins today, competition could push OpenAI to expand its selection. Hendrycks sees a distinction between ChatGPT plugins and previous efforts by tech companies to grow developer ecosystems around conversational AI—such as Amazon’s Alexa voice assistant.

GPT-4 can, for example, execute Linux commands, and the GPT-4 red-teaming process found that the model can explain how to make bioweapons, synthesize bombs, or buy ransomware on the dark web. Hendrycks suspects extensions inspired by ChatGPT plugins could make tasks like spear phishing or crafting phishing emails a lot easier.

Going from generating text to taking actions on a person’s behalf erodes an air gap that has so far kept language models from acting in the world. “We know that the models can be jailbroken and now we’re hooking them up to the internet so that it can potentially take actions,” says Hendrycks. “That isn’t to say that by its own volition ChatGPT is going to build bombs or something, but it makes it a lot easier to do these sorts of things.”

Part of the problem with plugins for language models is that they could make it easier to jailbreak such systems, says Ali Alkhatib, acting director of the Center for Applied Data Ethics at the University of San Francisco. Since you interact with the AI using natural language, there are potentially millions of undiscovered vulnerabilities. Alkhatib believes plugins carry far-reaching implications at a time when companies like Microsoft and OpenAI are muddling public perception with recent claims of advances toward artificial general intelligence.

“Things are moving fast enough to be not just dangerous, but actually harmful to a lot of people,” he says, while voicing concern that companies excited to use new AI systems may rush plugins into sensitive contexts like counseling services.

Adding new capabilities to AI programs like ChatGPT could have unintended consequences, too, says Kanjun Qiu, CEO of Generally Intelligent, a company building AI-powered agents. A chatbot might, for instance, book an overly expensive flight or be used to distribute spam, and Qiu says we will have to work out who would be responsible for such misbehavior.

But Qiu also adds that the usefulness of AI programs connected to the internet means the technology is unstoppable. “Over the next few months and years, we can expect much of the internet to get connected to large language models,” Qiu says.