Elon Musk shared an AI video of Kamala Harris. Here's why it matters

Kamala Harris smiles as crowds of supporters cheer in a video that's making the rounds on X, the social media platform formerly known as Twitter. But there's an issue — the person speaking in the video isn't really Harris. It's artificial intelligence mimicking her voice. 

The manipulated video gained widespread attention after tech billionaire and X owner Elon Musk shared it on the social media platform on Friday without noting it was parody. Experts say it's the latest example of the influential role AI could play in the leadup to the U.S. presidential election in November.

What's in the video?

The video features many visuals from a real campaign video Harris recently released. But the voiceover makes it sound like the presidential candidate is saying things she didn't.

The voice can be heard describing Harris as "the ultimate diversity hire," calling U.S. President Joe Biden a "deep state puppet" and claiming that Harris doesn't "know the first thing about running the country." 

CBC News is not linking to the digitally altered video. 

Musk's post has since been viewed more than 130 million times and appears to violate X's policies, which prohibit sharing "synthetic, manipulated or out-of-context media that may deceive or confuse people and lead to harm." 

The video itself contains no parody disclaimer; however, the account that first uploaded it, @MrReaganUSA, described it as an "ad parody" in accompanying text.

[Photo: A man in a grey suit gestures while speaking. Musk faced widespread criticism for posting the video, responding on Monday that 'parody is legal in America.' (Kevork Djansezian/Getty Images)]

Some X users have suggested Musk's post should be labelled with a "community note" — a feature that adds context to inaccurate posts. No note had been added as of this article's publication.

Others have gone as far as suggesting that Musk's post violates the Federal Election Campaign Act, which prohibits fraudulent misrepresentation of federal candidates or political parties. The law, which was introduced in 1971, doesn't have any clear rules around technology like artificial intelligence or social media. 

Following widespread criticism over the weekend, Musk replied on Monday to a post by California Democratic Gov. Gavin Newsom, saying "parody is legal in America."

When asked for comment via its press relations email, X replied: "Busy now, please check back later."

The value of 'transparency'

The altered video confirms something Henry Ajder, a researcher and expert adviser to organizations like Meta, Adobe and the U.K. government, says he's felt for a long time. 

"Satire," he said, "is a an incredibly murky topic."

Ajder co-authored a 2020 report from the human rights organization Witness and the Co-creation Studio at MIT Open Documentary Lab that examined the political and policy implications of AI media and deepfakes. He and his colleagues analyzed 70 cases spanning a wide range of deepfake videos to understand the growing relationship between satire and deepfakes.

He says deepfakes should be clearly labelled, and points to something called the Content Authenticity Initiative he's been developing with Adobe as an example.

He describes it as a "nutrition label for media."

Labelling a deepfake is "not about saying 'This is bad or this is good,'" he said. "It's about providing transparency about how a piece of media is being created."

Many popular social media companies have rules in place to try to manage AI-generated content. Meta, the company that owns Facebook and Instagram, requires that "manipulated media" be labelled as such and that context be appended to the post. In March, Google, which owns YouTube, announced a policy requiring users posting videos to disclose when content has been made with AI.

Growing trend in politics

This isn't the first time AI has been used in relation to the upcoming U.S. presidential election.

In January, ahead of the New Hampshire Democratic primary, a robocall using AI technology mimicked Biden's voice in an attempt to discourage people from voting. Following that, the U.S. Federal Communications Commission ruled that robocalls using AI-generated voices were illegal and proposed a $6 million US fine.

During this year's Republican primary, deepfake videos depicting former U.S. secretary of state Hillary Clinton endorsing Republican Florida Gov. Ron DeSantis began popping up on social media. 

Ajder, who also points to similar instances in Slovakia and the U.K., says there is a place for satire in politics — he cites publications like the Babylon Bee and the Onion — but that it must be clearly labelled as such.

"There is, in my view, a space for AI-generated satire and deepfake satire, but it has to be created and shared in a responsible manner."


Posted: 2024-07-30 01:45:54