Microsoft, Meta Lay Down AI Rules to Safeguard Elections

Microsoft is stepping up efforts to curb deceptive AI-generated political ads, joining Facebook parent company Meta in rolling out new policies this week ahead of the 2024 election season.

In a blog post on Tuesday, Microsoft President Brad Smith and Teresa Hutson, VP of Technology for Fundamental Rights, laid out how the tech giant will approach AI in political advertising.

“The world in 2024 may see multiple authoritarian nation-states seek to interfere in electoral processes,” Microsoft said. “And they may combine traditional techniques with AI and other new technologies to threaten the integrity of electoral systems.”

Smith and Hutson said Microsoft’s election protection commitment includes giving voters “transparent and authoritative information” about elections, letting candidates confirm the origins of their campaign material, and providing recourse when AI distorts their likeness or content. The commitment also touched on safeguarding political campaigns against cyber threats.

To help campaigns maintain control of their content, Microsoft said it is launching “Content Credentials as a Service,” a new tool based on the Coalition for Content Provenance and Authenticity’s (C2PA) digital watermarking standard, which uses cryptography to encode details about a piece of content’s origin. Microsoft is also launching an Election Communications Hub to help secure elections.
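
Microsoft hasn’t published implementation details of the new service, but the core idea behind cryptographic content credentials is straightforward: hash the content, bind origin metadata to that hash, and sign the result so any later tampering invalidates the credential. The sketch below is a minimal illustration of that idea; the function names, manifest fields, and choice of Ed25519 are assumptions for the example, not the actual C2PA format or Microsoft’s API.

```python
# Illustrative sketch of cryptographic content provenance (NOT the real
# C2PA format or Microsoft's "Content Credentials as a Service" API).
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def make_credential(content: bytes, origin: dict, key: Ed25519PrivateKey) -> dict:
    """Bind origin metadata to the content's hash and sign the pair."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "origin": origin,  # e.g. {"issuer": "Example Campaign", "created": "2023-11-07"}
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": key.sign(payload).hex()}


def verify_credential(content: bytes, credential: dict, pub: Ed25519PublicKey) -> bool:
    """Recompute the hash and check the signature; any edit breaks one or both."""
    manifest = credential["manifest"]
    if hashlib.sha256(content).hexdigest() != manifest["content_sha256"]:
        return False  # the content itself was altered
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # the origin metadata was altered or forged


key = Ed25519PrivateKey.generate()
cred = make_credential(b"campaign ad video bytes", {"issuer": "Example Campaign"}, key)
assert verify_credential(b"campaign ad video bytes", cred, key.public_key())
assert not verify_credential(b"edited ad video bytes", cred, key.public_key())
```

In the real C2PA scheme, the signed manifest is embedded in the media file itself and signed with certificate-backed keys, so viewers can trace provenance without a separate lookup channel.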

“No one person, institution, or company can guarantee elections are free and fair,” Microsoft said. “But, by stepping up and working together, we can make meaningful progress in protecting everyone’s right to free and fair elections.”

Microsoft has not yet responded to Decrypt’s request for comment.

Social media giant Meta is meanwhile targeting misinformation and deceptive political ads on its various platforms, announcing on Wednesday that political campaigns must disclose the use of AI.

“In the New Year, advertisers who run ads about social issues, elections, and politics with Meta will have to disclose if image or sound has been created or altered digitally, including with AI, to show real people doing or saying things they haven’t done or said,” Nick Clegg, Meta’s President of Global Affairs, wrote on Twitter.

Clegg explained that advertisers must complete an authorization process and include a disclaimer stating who paid for the ad.

Meta’s new policy says advertisers must disclose whenever an ad about social issues, elections, or politics contains photorealistic images or video, or audio designed to sound like a real human, that has been digitally created or altered.

According to Meta, the policy includes ads that feature an AI-generated image or deepfake of a person doing or saying something they never did. The policy also includes ads that use realistic images or footage manipulated to misrepresent a “non-existent” event.

Meta said the new policy does not apply to image resizing, cropping, color correction, or sharpening unless the changes are “consequential or material to the claim, assertion, or issue raised in the ad.”

“This policy will go into effect in the new year and will be required globally,” Meta said.

Meta has not yet responded to Decrypt’s request for comment.

As generative AI advances rapidly, policymakers, corporations, and law enforcement are scrambling to catch up. With new AI tools emerging daily, however, combating deepfakes is an uphill battle. Last month, a new “face-swapping” AI called FaceFusion showed how quickly one person’s face could be swapped for another’s using the free, open-source model.

Cybercriminals have also turned to generative AI models to accelerate phishing attacks. In July, WormGPT, a malicious ChatGPT clone that can be used to launch email attacks, was discovered on the dark web. Chatbots like ChatGPT and its nefarious cousins use generative AI to create text, images, and videos based on user prompts. Other dark web AI models cybercriminals are turning to include FraudGPT, DarkBert, and DarkBart.

An October report by cybersecurity firm SlashNext said email phishing attacks have increased 1,265% since the launch of OpenAI’s ChatGPT. AI-generated ads on television may be easier to deal with, but the spread of AI deepfakes online has internet watchdogs sounding the alarm.

“So there’s that ongoing thing of you can’t trust whether things are real or not,” Internet Watch Foundation CTO Dan Sexton previously told Decrypt. “The things that will tell us whether things are real or not are not 100%, and therefore, you can’t trust them either.”

In August, the U.S. Federal Election Commission addressed AI-generated deepfakes after a petition by the non-profit organization Public Citizen asked the agency to regulate the technology.

Last month, a bipartisan group of U.S. Senators proposed a bill called the “No Fakes Act,” which, if passed, would make it illegal to use a person’s likeness in a song, photo, or video without their permission. The act would impose a $5,000 fine per violation plus damages, with protections extending to 70 years after a person’s death.

Edited by Ryan Ozawa.



Source: https://decrypt.co/204987/microsoft-facebook-meta-election-politics-ai-deepfakes
