As artificial intelligence advances, deepfake images are becoming increasingly difficult to distinguish from real ones.
Even before explicit and violent deepfake images of Taylor Swift started circulating widely in recent days, lawmakers in various U.S. states had been exploring ways to prevent the dissemination of such nonconsensual images involving both adults and children.
But in this Taylor Swift-saturated era, the issue has drawn far more attention since she herself became a target of deepfakes: computer-generated images that use artificial intelligence to appear authentic.
Here are key points about what states have done so far and what they are considering.
Artificial intelligence surged into the mainstream last year, making it possible for nearly anyone to generate increasingly lifelike deepfakes, which are now appearing more often and in more forms online.
There is AI-generated pornography, which exploits the likenesses of celebrities such as Swift to produce fake compromising images.
In music, a song that mimicked a collaboration between Drake and The Weeknd drew millions of plays on streaming services before it was pulled from platforms because those artists had not actually performed it.
In the realm of political tactics during this election year, some New Hampshire voters reported receiving robocalls just before January’s presidential primary. These calls claimed to be from President Joe Biden, advising them not to bother casting ballots. The state attorney general’s office is currently investigating this matter.
More commonly, though, deepfake technology is used to create explicit content from the likenesses of people who are not famous, including minors.
WHAT STATES HAVE DONE SO FAR ON DEEPFAKES
Deepfakes represent just one facet of the intricate landscape of artificial intelligence that lawmakers are grappling with, attempting to determine whether and how to address the challenges it presents.
To date, at least 10 states have passed laws specifically related to deepfakes, with numerous additional measures under consideration in legislatures across the country.
States like Georgia, Hawaii, Texas, and Virginia have enacted laws that criminalize the creation of nonconsensual deepfake pornography.
In California and Illinois, victims now have the right to sue those responsible for creating images using their likenesses.
Minnesota and New York have adopted both approaches, with Minnesota’s law also specifically addressing the use of deepfakes in political contexts.
ARE THERE TECH SOLUTIONS?
Siwei Lyu, a professor of computer science at the University at Buffalo, said efforts are underway along several lines, though none is flawless.
One approach involves deepfake detection algorithms, which can flag suspected deepfakes when they appear, particularly on social media platforms.
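To make the idea concrete, here is a minimal Python sketch of how a platform might screen an upload with a trained "real vs. synthetic" image classifier. The model file, single-logit output, 0.9 threshold, and review step are all illustrative assumptions, not any platform's actual pipeline.

```python
# Hypothetical sketch: screening an uploaded image with a pretrained
# deepfake detector. The model file and 0.9 threshold are invented for
# illustration; real platforms combine many signals, not one classifier.
import torch
from torchvision import transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def flag_if_synthetic(image_path: str, model: torch.nn.Module,
                      threshold: float = 0.9) -> bool:
    """Return True when the detector scores the image as likely AI-generated."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)          # shape: (1, 3, 224, 224)
    with torch.no_grad():
        # Assumes the model emits one logit; sigmoid maps it to a probability.
        score = torch.sigmoid(model(batch)).item()
    return score >= threshold

# Hypothetical usage: load a fine-tuned detector and screen an upload.
# model = torch.load("deepfake_detector.pt")
# if flag_if_synthetic("upload.png", model):
#     pass  # e.g., route the upload to human review
```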
Another method, currently in development and not yet widely implemented, involves embedding codes in uploaded content to signal if it can be reused in AI-generated creations.
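As a toy illustration of that idea, the sketch below writes a machine-readable reuse flag into a PNG's metadata at upload time. The "ai-reuse" key is invented here; real provenance efforts, such as the C2PA standard, cryptographically sign metadata precisely because plain text chunks like this one are trivial to strip.

```python
# Toy sketch: attach a machine-readable reuse flag to a PNG at upload time.
# The "ai-reuse" key is invented for illustration; real provenance systems
# (e.g., the C2PA standard) sign metadata so it cannot be silently removed.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_reuse_policy(src: str, dst: str, allow_ai_reuse: bool) -> None:
    """Copy an image to PNG, embedding a flag downstream AI tools could honor."""
    meta = PngInfo()
    meta.add_text("ai-reuse", "allowed" if allow_ai_reuse else "denied")
    Image.open(src).save(dst, format="PNG", pnginfo=meta)

def read_reuse_policy(path: str) -> str:
    """Return the stored flag, or 'unspecified' when no flag is present."""
    return Image.open(path).text.get("ai-reuse", "unspecified")
```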
A third mechanism would have companies that offer AI tools embed digital watermarks identifying content generated with their applications.
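A bare-bones version of the watermarking idea, assuming an 8-bit image and a short generator tag, might look like the following. Production watermarks are designed to survive compression and editing; this fragile least-significant-bit toy only illustrates the concept, and the "GEN-TOOL-v1" tag is hypothetical.

```python
# Toy sketch: hide a generator ID in the least significant bits of an image.
# Real AI-provider watermarks are robust to compression and cropping; this
# fragile LSB scheme exists purely to illustrate the concept.
import numpy as np

def embed_watermark(pixels: np.ndarray, tag: bytes) -> np.ndarray:
    """Write `tag`, bit by bit, into the LSBs of the first len(tag)*8 values."""
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    flat = pixels.flatten()                       # flatten() returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, length: int) -> bytes:
    """Read back `length` bytes from the pixel LSBs."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes()

# Example: tag a random 8-bit RGB image with a hypothetical tool ID.
image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
tagged = embed_watermark(image, b"GEN-TOOL-v1")
assert extract_watermark(tagged, len(b"GEN-TOOL-v1")) == b"GEN-TOOL-v1"
```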
Lyu said there is a rationale for holding companies accountable for how their tools are used: they can enforce user agreements that bar the creation of problematic deepfakes.
WHAT SHOULD BE IN A LAW?
The American Legislative Exchange Council (ALEC) has proposed model legislation that focuses on deepfakes in the context of pornography rather than politics. ALEC urges states to take two key steps: criminalize the possession and distribution of deepfakes that depict minors in explicit acts, and allow victims to sue people who distribute nonconsensual deepfakes containing explicit content.
Jake Morabito, who oversees the communications and technology task force for ALEC, suggests that lawmakers start with targeted solutions to address clear-cut problems. He advises against targeting the underlying technology itself, which he argues could stifle innovation with broader, positive applications.
Todd Helmus, a behavioral scientist at RAND, cautioned that relying on individuals to sue is an insufficient enforcement mechanism, given the cost and difficulty of bringing such lawsuits. He argued for systemic guardrails, with government involvement, to make regulation effective.
Helmus called on companies such as OpenAI, whose platforms can be used to create synthetic content, to take preventive measures. He added that social media companies should build better systems to curb the spread of deepfakes, and that there should be legal consequences for those who create and share them.
Jenna Leventoff, a First Amendment attorney at the ACLU, acknowledges the real harm deepfakes can cause but stresses that any regulation must be consistent with free speech protections. She urged lawmakers to rely on existing exceptions to free speech, such as defamation, fraud, and obscenity, when regulating this emerging technology.
White House Press Secretary Karine Jean-Pierre addressed the issue last week, saying social media companies should establish and enforce their own rules to prevent the spread of misinformation and of nonconsensual intimate images like those made of Swift.
WHAT’S BEING PROPOSED?
In January, a bipartisan group of members of Congress introduced federal legislation that would give individuals a property right to their own likeness and voice. The measure would empower people to take legal action against those who fraudulently misuse their likenesses or voices through deepfake technology.
Several states are currently considering deepfake-related laws in their sessions this year. These bills are being introduced by lawmakers of various political affiliations, including Democrats, Republicans, and bipartisan coalitions.
Among the bills gaining traction is one in GOP-dominated Indiana that would make it a crime to distribute or create sexually indecent depictions without a person’s consent. It was unanimously approved in the House in January.
Missouri recently introduced a similar measure called “The Taylor Swift Act,” and another successfully cleared the Senate this week in South Dakota. Attorney General Marty Jackley explained that some investigations have been turned over to federal authorities because the state lacks the AI-related laws necessary to file charges.
Jackley emphasized the need for legal boundaries, saying there is no First Amendment right to steal images of someone's children from a Facebook page and use them in pornographic material.
WHAT CAN A PERSON DO?
For people with an online presence, it can be difficult to prevent becoming a deepfake victim. But Helmus suggests several steps for those who find themselves targeted:
- Ask the social media platforms where the images were shared to take them down.
- If there are applicable laws, report the incident to the police.
- Notify school or university officials if the alleged offender is a student.
- Seek mental health help when needed.