Worried about AI? How California lawmakers plan to tackle the technology’s risks in 2024
Jodi Long was caught off guard by the cage filled with cameras meant to capture images of her face and body.
“I was a little freaked out because, before I walked in there, I said I don’t remember this being in my contract,” the actor said.
The filmmakers needed her digital scan, Long was told, because they wanted to make sure her arms were positioned correctly in a scene where she holds a computer-generated character.
That moment in 2020 stuck with Long, president of SAG-AFTRA’s Los Angeles local, while she was negotiating for protections around the use of artificial intelligence when actors went on strike. In November, the actors guild reached a deal with Hollywood studios that — among other things — required consent and compensation for the use of a worker’s digital replica.
Labor unions aren’t the only ones trying to limit AI’s potential threats. Along with Gov. Gavin Newsom signing an executive order on AI in September, California lawmakers have introduced a raft of legislation that sets the stage for more regulation in 2024. Some of the proposals focus on protecting workers, combating AI systems that can contribute to gender and racial biases and establishing new requirements to safeguard against the misuse of AI for cybercrimes, weapon development and propaganda.
Whether California lawmakers will succeed in passing AI legislation, though, remains unclear. They’ll face lobbying from multibillion-dollar tech companies including Microsoft, Google and Facebook, political powerhouses that successfully stalled several AI bills introduced this year.
Artificial intelligence has been around for decades. But as technology rapidly advances, the ability of machines to perform tasks associated with human intelligence has raised questions about whether AI will replace jobs, fuel the spread of misinformation or even lead to humanity’s extinction.
As lawmakers attempt to regulate AI, they’re also trying to understand how the technology works so they can mitigate its dangers without hindering its potential benefits.
“One of the core challenges is that this technology is dual use, meaning the same kind of technology that can, for instance, lead to massive improvements in healthcare can also be used potentially to do pretty serious harm,” said Daniel Ho, a professor at Stanford University’s law school who advises the White House on AI policy.
Politicians are feeling a sense of urgency, pointing to the resistance they’ve faced already in trying to control some of the mental health and child safety issues exacerbated by social media and other tech products. While some tech executives say they don’t oppose regulation, they’ve also said critics are exaggerating the risks and expressed concern that they’ll have to deal with a patchwork of rules that vary around the world.
TechNet — a trade group whose members include Apple, Google and Amazon — outlines on its website what its members would and wouldn’t support when it comes to AI regulation. For example, TechNet says policymakers should avoid “blanket prohibitions on artificial intelligence, machine learning, or other forms of automated decision-making” and should not force AI developers to publicly share proprietary information.
State Assemblymember Ash Kalra (D-San Jose) said policymakers don’t trust tech companies to regulate themselves.
“As a lawmaker, my intention is to protect the public and protect workers and protect against risks that may be created through unregulated AI,” Kalra said. “Those that are in the industry have different priorities.”
AI could affect 300 million full-time jobs, according to an April report by Goldman Sachs.
In September, Kalra introduced legislation that would give actors, voice artists and other workers a way to nullify vague contracts that allow studios and other companies to use artificial intelligence to digitally clone their voices, faces and bodies. Kalra said he has no plans for now to set aside the bill, which is backed by SAG-AFTRA.
Federal lawmakers also have introduced legislation aimed at protecting the voices and likenesses of workers. President Biden signed an executive order on AI in October, noting how the technology could improve productivity but also displace workers.
Duncan Crabtree-Ireland, the national executive director and chief negotiator of SAG-AFTRA, said he thinks it’s important that both state and federal lawmakers regulate AI without delay.
“It has to come from a variety of sources and [be] put together in a way that creates the ultimate picture that we all want to see,” he said.
Policymakers outside of the U.S. already have been moving forward. In December, the European Parliament and EU member states reached a landmark deal on the AI Act, calling the proposal “the world’s first comprehensive AI law.” The legislation includes a different set of rules based on how risky AI systems are and would also require AI tools that generate text, images and other content like OpenAI’s ChatGPT to publish what copyrighted data were used to train the systems.
As federal and state lawmakers fine-tune legislation, workers are seeing how AI is affecting their jobs and testing whether current laws offer enough protections.
Tech companies — including Microsoft-backed OpenAI, Stability AI, Facebook parent Meta and Anthropic — are facing lawsuits over allegations that they used copyrighted work from artists and writers to train their AI systems. On Wednesday, the New York Times filed a lawsuit against Microsoft and OpenAI accusing the tech companies of using copyrighted work to create AI products that would compete with the news outlet.
Tim Friedlander, president and co-founder of the National Assn. of Voice Actors, said his members are losing out on jobs because some companies have decided to use AI-generated voices. Actors have also alleged their voices are being cloned without their consent or compensation, a problem musicians face as well.
“One of the difficult things right now is that there’s no way to prove that something is human or synthetic or to be able to prove where the voice came from,” he said.
Worker protections are just one issue surrounding AI that California lawmakers will try to tackle in 2024.
Sen. Scott Wiener (D-San Francisco) in September introduced the Safety in Artificial Intelligence Act, which aims to address some of the biggest risks posed by AI, he said, including the technology’s potential misuse in chemical and nuclear weapons, election interference and cyberattacks. Even though lawmakers don’t want to “squelch innovation,” they also want to be proactive, Wiener said.
“If you don’t get ahead of it, then it can be too late and we’ve seen that with social media and other areas where we should have been setting up at least broad stroke regulatory systems before the problem starts,” he said.
Lawmakers are also worried that AI systems could make mistakes that lead to unequal treatment of people based on protected characteristics such as race and gender. Assemblymember Rebecca Bauer-Kahan (D-Orinda) is sponsoring a bill that would bar a person or entity from deploying an AI system or service that’s involved in making “consequential decisions” that result in “algorithmic discrimination.”
Concern that algorithms can amplify gender and racial biases because of what data are used to train the computer systems has been an ongoing issue in the tech industry. Amazon scrapped an AI recruiting tool, for example, because it showed bias against women after the computer models were trained with resumes that mostly came from men, Reuters reported in 2018.
Passing AI legislation has already proved difficult. Bauer-Kahan’s bill never even made it to the Assembly floor for a vote. An analysis of the legislation, AB 331, said various industries and businesses expressed concerns that it was too broad and would result in “overregulation in this space.”
Still, Bauer-Kahan said she plans to reintroduce the bill in 2024 despite the opposition she faced last session.
“It’s not as if I want these tools to go away, but I want to ensure that when they enter the marketplace we know they’re non-discriminatory,” she said. “That balance is not too much to ask for.”
Trying to figure out what issues to prioritize when it comes to AI’s potential risks is another challenge politicians will face in 2024, given that controversial bills can be difficult to pass in an election year.
“If there is not an agreement on at least some sense of the prioritization of harm, and which ones are the most urgent, it can become hard to figure out what the most effective form of an intervention might be,” said Ho, the Stanford Law School professor.
Despite all the fears surrounding AI, Long said, she remains optimistic about the future.
She has starred in blockbuster films such as Marvel’s “Shang-Chi and the Legend of the Ten Rings,” and in 2021 became the first Asian American to win a Daytime Emmy for outstanding performance by a supporting actress in the Netflix show “Dash and Lily.”
“My industry is a collaborative process between lots of humans,” she said. “And as long as we have humans putting out our stories, I think we’ll be OK.”