Through most of 2023, just a handful of states enacted laws to tackle the challenges that artificial intelligence and deepfakes posed to political campaigns.
But now that the 2024 election cycle is in full swing, state lawmakers across the country have snapped into action to address the thorny, fast-changing issue.
In just the first three weeks of 2024, lawmakers from both major parties have introduced legislation in at least 13 states to combat the kind of mis- and disinformation AI and deepfakes can create in elections.
The issue was back in the spotlight Monday after the emergence of a fake robocall featuring a voice impersonating President Joe Biden telling Democratic voters in New Hampshire not to cast their ballots in Tuesday’s primary.
It remains unclear whether the voice — an apparent imitation or digital manipulation of the president’s — was created with the use of artificial intelligence, though the New Hampshire attorney general’s office, which is investigating, said in a statement that the message appeared to have been “artificially generated based on initial indications.”
The call underscores the type of threat posed by the use of AI and deepfakes heading into the heart of the campaign season.
“The political deepfake moment is here. Policymakers must rush to put in place protections or we’re facing electoral chaos. The New Hampshire deepfake is a reminder of the many ways that deepfakes can sow confusion and perpetuate fraud,” Robert Weissman, the president of the government watchdog group Public Citizen, which has petitioned the federal government to act more aggressively against deepfakes, said in a statement.
“The good news is that states are rushing to fill the gap,” Weissman added.
State bills attempting to tackle the issue tend to fall into two categories: disclosure requirements and bans.
Disclosure measures typically mandate a disclaimer on any media created with the use of AI that is issued to influence an election within a certain time frame.
Bans often have nuanced exceptions. For example, a Michigan law enacted last year doesn’t enforce a ban if a disclosure has been shared and if the person responsible for the media doesn’t know that it “falsely represents” the people it depicts.
Since the beginning of the year, disclosure requirement bills have been introduced by Republican lawmakers in Alaska and Florida.
Meanwhile, Democrats in Hawaii, South Dakota, Massachusetts, Oklahoma and Nebraska, as well as Republicans in Indiana and Wyoming, have introduced legislation that would ban media created with the help of AI within specific time frames before elections if the media doesn’t feature a disclosure.
Democrats in Nebraska also introduced a bill that would ban disseminating all deepfakes within 60 days of an election.
In Arizona, Republican lawmakers proposed a bill that would allow any candidate for public office who will appear on the ballot, or any Arizona resident, to sue for relief or damages anyone who publishes a “digital impersonation” of that person.
Idaho Republicans proposed a bill that would ban disseminating “synthetic media” that doesn’t include a disclosure. The bill would also create rules allowing the people depicted to sue those who published it.
A Republican-proposed bill in Kentucky would define deepfakes and ban disseminating them without the consent of the people depicted, while allowing those people to sue for relief and damages.
And a Democratic-led bill in Virginia would make it a Class 1 misdemeanor for anyone to create “any deceptive audio or visual media” for the purpose of committing “a criminal offense.”
In the final days of 2023, lawmakers in three other states — Ohio, South Carolina and New Hampshire — introduced similar bills, none of which have advanced.
The bill in New Hampshire, introduced by Democrats, proposed “requiring a disclosure of deceptive artificial intelligence usage in political advertising.”
Introducing such bills doesn’t necessarily mean any will become law.
Last year, for example, just three states enacted laws regulating the use of AI and deepfakes in political campaigns, even as the scale of the threat they can pose came into clearer view throughout the year.