Public Sector Practice

Unlocking the potential of generative AI: Three key questions for government agencies

Government organizations may seek to jump on the gen AI bandwagon, but the technology's complexities could sideline their efforts. Our framework addresses some critical implementation questions.

This article is a collaborative effort by Damien Bruce, Ankit Fadia, Tom Isherwood, Chiara Marcati, Aldous Mitchell, Björn Münstermann, Gayatri Shenai, Hrishika Vuppala, and Thomas Weber, representing views from McKinsey's Public Sector Practice.

December 2023

It's been just a year since generative AI (gen AI) tools first captured public attention worldwide. But already the economic value of gen AI is estimated to reach trillions of dollars annually—even as its risks begin to worry businesses and governments across the globe. Gen AI offers government leaders unique opportunities to steer national economic development (Exhibit 1). At the same time, they face the heavy burden of monitoring the technology's downsides and establishing robust guidelines and regulations for its use.

Many government agencies have started investing in transformations made possible by gen AI, but the technology's rapid evolution means that predicting where it can contribute the most value is difficult. In this article, we discuss three important questions that public sector organizations may need to consider before choosing areas for investment:

— How can government agencies address the potential risks of gen AI?
— How can public sector entities begin to transform their own service delivery?
— Should governments develop national gen AI foundation models (core models on which gen AI applications are built)?

We conclude with a suggested eight-step plan for government organizations that are just beginning to implement gen AI use cases.

1. How can government agencies address the potential risks of gen AI?

By now, the risks of gen AI—such as its tendencies toward unpredictability, inaccuracy, and bias—are widely known. Government agencies, however, face different risks than private sector companies do. For example, the technology can be misused to spread political propaganda or compromise national security. Confidential government data can be leaked or stolen if government employees inadvertently introduce that information into foundation models through prompts.

Some outputs from gen AI models might contain inaccurate information—also called "hallucinations"—that could erode public trust in government services that leverage these technologies. Like many private sector organizations, government agencies face challenges with gen AI's transparency and with the difficulty of explaining both the conceptual underpinnings of gen AI and the logic behind the models' decisions and output. Consequences might include low public acceptance of gen-AI-powered government services and unclear liability when unintended effects occur.

...