California examines benefits, risks of using artificial intelligence in state government
Artificial intelligence that can generate text, images and other content could help improve state programs but also poses risks, according to a report released by the governor’s office on Tuesday.
Generative AI could help quickly translate government materials into multiple languages, analyze tax claims to detect fraud, summarize public comments and answer questions about state services. Still, the analysis warned, deploying the technology also comes with concerns around data privacy, misinformation, equity and bias.
“When used ethically and transparently, GenAI has the potential to dramatically improve service delivery outcomes and increase access to and utilization of government programs,” the report stated.
The 34-page report, ordered by Gov. Gavin Newsom, provides a glimpse into how California could apply the technology to state programs even as lawmakers grapple with how to protect people without hindering innovation.
Concerns about AI safety have divided tech executives. Leaders such as billionaire Elon Musk have sounded the alarm that the technology could lead to the destruction of civilization, noting that if humans become too dependent on automation they could eventually forget how machines work. Other tech executives have a more optimistic view about AI’s potential to help save humanity by making it easier to fight climate change and diseases.
At the same time, major tech firms including Google, Facebook and Microsoft-backed OpenAI are competing with one another to develop and release new AI tools that can produce content.
The report also comes as generative AI is reaching another major turning point. Last week, the board of ChatGPT maker OpenAI fired Chief Executive Sam Altman for not being “consistently candid in his communications with the board,” thrusting the company and AI sector into chaos.
On Tuesday night, OpenAI said it reached “an agreement in principle” for Altman to return as CEO and the company named members of a new board. The company faced pressure to reinstate Altman from investors, tech executives and employees, who threatened to quit. OpenAI hasn’t provided details publicly about what led to the surprise ousting of Altman, but the company reportedly had disagreements over keeping AI safe while also making money. A nonprofit board controls OpenAI, an unusual governance structure that made it possible to push out the CEO.
Newsom called the AI report an “important first step” as the state weighs some of the safety concerns that come with AI.
“We’re taking a nuanced, measured approach — understanding the risks this transformative technology poses while examining how to leverage its benefits,” he said in a statement.
AI advancements could benefit California’s economy. The state is home to 35 of the world’s top 50 AI companies, and data from PitchBook suggest the GenAI market could reach $42.6 billion in 2023, the report said.
Some of the risks outlined in the report include spreading false information, giving consumers dangerous medical advice and enabling the creation of harmful chemicals and nuclear weapons. Data breaches, privacy and bias are also top concerns along with whether AI will take away jobs.
“Given these risks, the use of GenAI technology should always be evaluated to determine if this tool is necessary and beneficial to solve a problem compared to the status quo,” the report said.
As the state works on guidelines for the use of generative AI, the report said that in the interim state employees should abide by certain principles to safeguard the data of Californians. For example, state employees shouldn’t provide Californians’ data to generative AI tools such as ChatGPT or Google’s Bard or use unapproved tools on state devices, the report said.
AI’s potential use goes beyond state government. Law enforcement agencies such as the Los Angeles Police Department are planning to use AI to analyze the tone and word choice of officers in body camera videos.
California’s efforts to regulate some of the safety concerns surrounding AI, such as bias, didn’t gain much traction during the last legislative session. But lawmakers have introduced new bills to tackle some of AI’s risks when they return in January, such as protecting entertainment workers from being replaced by digital clones.
Meanwhile, regulators around the world are still figuring out how to protect people from AI’s potential risks. In October, President Biden issued an executive order that outlined standards around safety and security as developers create new AI tools. AI regulation was a major issue of discussion at the Asia-Pacific Economic Cooperation meeting in San Francisco last week.
During a panel discussion with executives from Google and Facebook’s parent company, Meta, Altman said he thought Biden’s executive order was a “good start,” even though there were areas for improvement. Current AI models, he said, are “fine” and “heavy regulation” isn’t needed, but he expressed concern about the future.
“At some point when the model can do the equivalent output of a whole company and then a whole country and then the whole world, like maybe we do want some sort of collective global supervision of that,” he said, a day before he was fired as OpenAI’s CEO.