California Enacts Laws to Combat Election, Media Deepfakes
Laws Seek Removal of Deceptive Content, Labeling of Less Malicious Content

California enacted legislation to crack down on the misuse of artificial intelligence as Gov. Gavin Newsom on Tuesday signed five bills focused on curbing the impact of deepfakes.
Three of the laws focus on addressing deepfakes in elections - on social media, in political advertisements and in willful distribution of AI-altered content. The others look to protect the digital likeness of Hollywood actors, prohibiting studios from using AI to clone an artist's body or voice without consent.
Whether in spite of or because the state serves as the headquarters for many of the world's largest technology companies, California has been at the national forefront of tech regulation, enacting privacy protection statutes and now laws that could rein in AI developers. Newsom has not yet publicly said whether he will sign SB 1047, a bill that would establish first-in-the-nation safety standards for advanced artificial intelligence models, but he has voiced doubts about the legislation's utility.
"What are the demonstrable risks in AI, and what are the hypothetical risks? I can’t solve for everything. What can we solve for? And so that's the approach we’re taking across the spectrum on this," the Democratic governor said Tuesday during a tech conference. Newsom can sign or veto the bill within the next two weeks (see: California AI Safety Bill Passes Key Marker).
With fewer than 50 days until the U.S. presidential election, social media platforms are rife with AI-generated election misinformation and disinformation (see: US Targets Russian Media and Hackers Over Election Meddling). "It’s critical that we ensure AI is not deployed to undermine the public's trust through disinformation - especially in today's fraught political climate," Newsom said in a statement.
One bill that's now law, AB 2655, requires "large online platforms" such as Facebook and X, formerly Twitter, to restrict the spread of election-related deceptive deepfakes. The law requires platforms to remove election-related deepfakes and mandates that "less materially deceptive content" be labeled with a warning to users about manipulated and inauthentic content. Candidates and government officials could sue platforms in court to force them to remove deceptive content.
AB 2839 requires platforms to remove or label manipulated content within the 120 days before an election. Restrictions on fake portrayals of an election official or materially false depictions of voting machines or polling sites extend for 60 days after an election.
Another law, AB 2355, requires electoral campaigns and political action groups to label advertisements made or altered using AI.
"The availability of tools to doctor images, video and sound is not new. However, the rapid improvements in AI and large language models have made it easier to create convincingly fake images, videos and sounds," said bill sponsor Wendy Carrillo, a Los Angeles Democratic state assemblymember.
The two other AI laws Newsom signed set new standards for the media industry. AB 2602 requires movie studios to obtain consent from actors before creating AI likenesses of their voices or physical appearances, and AB 1836 restricts filmmakers from creating digital replicas of deceased performers without the performer's prior consent or consent from the performer's surviving spouse, children or grandchildren.