[Image: warped keyboard. Photograph: Yaroslav Kushta/Getty Images]

Boston Isn’t Afraid of Generative AI

The city’s first-of-its-kind policy encourages its public servants to use the technology—and could serve as a blueprint for other governments.

After ChatGPT burst onto the scene last November, some government officials raced to prohibit its use. Italy banned the chatbot outright. The New York City, Los Angeles Unified, Seattle, and Baltimore school districts either banned or blocked access to generative AI tools, fearing that ChatGPT, Bard, and similar content-generation services could tempt students to cheat on assignments, induce rampant plagiarism, and impede critical thinking. This week, the US Congress heard testimony from OpenAI CEO Sam Altman and AI researcher Gary Marcus as it weighed whether and how to regulate the technology.

In a rapid about-face, however, a few governments are now embracing a less fearful, more hands-on approach to AI. New York City schools chancellor David Banks announced yesterday that NYC is reversing its ban because “the knee jerk fear and risk overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial.” And yesterday, City of Boston chief information officer Santiago Garces sent guidelines to every city official encouraging them to start using generative AI “to understand their potential.” The city also enabled Google Bard as part of its enterprise-wide use of Google Workspace, so that all public servants have access.

The “responsible experimentation approach” adopted in Boston—the first policy of its kind in the US—could, if used as a blueprint, cause a sea change in how governments at every level approach AI. By promoting exploration of how AI can improve government effectiveness and efficiency, and by focusing on how to use AI for governance instead of only on how to govern AI, the Boston approach might help reduce alarmism and focus attention on using AI for social good.

Boston’s policy outlines several scenarios in which public servants might want to use AI to improve how they work, and even includes specific how-tos for effective prompt writing.

Generative AI, according to the email the CIO sent to all city officials on May 18, is a great way to get started on memos, letters, and job descriptions, and might help lighten the workload of overburdened public officials.

The tools can also help public servants “translate” government-speak and legalese into plain English, making important information about public services more accessible to residents. The policy notes that public servants can specify a reading level or target audience in the prompt, so that the model generates text suitable for, say, elementary school students.
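The policy does not prescribe any particular tooling, but the prompt pattern it describes is easy to sketch. The function name and wording below are illustrative, not taken from Boston's guidelines:

```python
def build_plain_language_prompt(text: str, audience: str = "a fifth-grade reading level") -> str:
    """Wrap a piece of government text in a prompt asking a generative AI
    model to rewrite it in plain English for a specific audience."""
    return (
        f"Rewrite the following notice in plain English, suitable for {audience}. "
        "Keep every date, address, deadline, and contact detail exactly as written.\n\n"
        f"{text}"
    )

# The resulting string would be pasted into (or sent to) whatever
# chatbot the city has approved for this purpose.
prompt = build_plain_language_prompt(
    "Pursuant to Section 12, all abutters must remit payment forthwith.",
    audience="elementary school students",
)
```

Keeping the factual details out of the model's hands and asking only for a rewrite is one simple way to limit the damage a hallucination can do.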

Generative AI can also help with translation into other languages so that a city’s non-English speaking populations can enjoy equal and easier access to information about policies and services affecting them. 

City officials were also encouraged to use generative AI to condense lengthy text or audio into concise summaries, which could make it easier for government officials to engage in conversations with residents.

The Boston policy even explains how AI can help produce code snippets and assist less technical staff. As a result, even interns and student workers could begin contributing to technical projects, such as building web pages that communicate much-needed government information.
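The policy does not include sample output, but the kind of snippet in question might look like the following: a minimal Python function (the name and page content here are invented for illustration) that a non-programmer could ask a chatbot to generate and then adapt for a simple informational page.

```python
def service_notice_page(title: str, body: str) -> str:
    """Return a minimal, self-contained HTML page for a public notice."""
    return f"""<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>{title}</title>
</head>
<body>
  <h1>{title}</h1>
  <p>{body}</p>
</body>
</html>"""

page = service_notice_page(
    "Trash Pickup Delayed One Day",
    "Due to the Monday holiday, all trash and recycling pickup this week moves one day later.",
)
```

Even a snippet this small still needs a human review pass, in line with the policy's insistence on personal responsibility for AI-assisted work.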

Still, the policy advocates a critical approach to the technology and personal responsibility for its use. Public servants are encouraged to proofread any work developed with generative AI to ensure that hallucinations and mistakes do not creep into what they publish. The guidelines emphasize that privacy, security, and the public purpose should be prioritized, weighing the technology's impact on the environment and on constituents' digital rights.

These principles represent a shift from fear-mongering about the dangers of AI to a more proactive, responsible approach that offers guidance on using AI in the public workforce. Instead of rehearsing the usual narratives about AI killing jobs or dwelling only on AI bias, the city's letter explains that, by enabling better communication with residents of all kinds, AI could help repair historical harm to marginalized communities and foster inclusivity.

Boston’s generative AI policy sets a new precedent in how governments approach AI. By supporting responsible experimentation, transparency, and collective learning, it opens the door to realizing the potential of AI to do good in governance. If more public servants and politicians embrace these technologies, practical experience can inform sensible regulations. Furthermore, generative AI’s ability to simplify communication, summarize conversations, and create appealing visuals can radically enhance government inclusivity and accessibility. Boston’s vision serves as an inspiration for other governments to break free from fear and embrace the opportunities presented by generative AI.