To help voters in 2024, Meta says it will begin labeling political ads that use AI-generated images


Washington (AFP) – Facebook and Instagram will require political ads running on their platforms to disclose whether they were created using artificial intelligence, their parent company announced on Wednesday.

Under Meta’s new policy, labels that acknowledge the use of AI will appear on users’ screens when they click on ads. The rule goes into effect on January 1 and will be applied worldwide.

The development of new AI software has made it faster and easier than ever to create lifelike audio, images and video. In the wrong hands, the technology could be used to produce fake videos of a candidate or alarming images of election fraud or violence at polling places. When such fakes are amplified by the powerful algorithms of social media, they can mislead and confuse voters on an unprecedented scale.

Meta Platforms Inc. and other technology platforms have been criticized for not doing more to address this risk. Wednesday's announcement, which came on the day House lawmakers held a hearing on deepfakes, is unlikely to allay those concerns.

While officials in Europe are working on comprehensive regulations for the use of artificial intelligence, time is running out for lawmakers in the United States to pass regulations ahead of the 2024 elections.

Earlier this year, the Federal Election Commission began a process toward possible regulation of AI-generated deepfakes in political ads ahead of the 2024 election. Last week, President Joe Biden's administration issued an executive order intended to encourage the responsible development of artificial intelligence. Among other provisions, it will require AI developers to submit safety data and other information about their software to the government.

The United States is not the only country that will hold a high-profile vote next year: national elections are also scheduled in countries such as Mexico, South Africa, Ukraine, Taiwan and Pakistan.

AI-generated political ads have already appeared in the United States. In April, the Republican National Committee released an ad generated entirely by artificial intelligence that aimed to show the future of the United States if Biden, a Democrat, is re-elected. It used fake but realistic images of shuttered storefronts, armored military patrols in the streets, and waves of migrants spreading panic. The ad carried a disclosure informing viewers that it was created with artificial intelligence.

In June, Florida Governor Ron DeSantis’ presidential campaign shared an attack ad against his GOP primary opponent Donald Trump, which used artificial intelligence-generated images of the former president hugging infectious disease expert Dr. Anthony Fauci.

“It has become a very difficult task for the casual observer to figure out: What am I believing here?” said Vince Lynch, AI developer and CEO of AI company IV.AI. Lynch said a combination of federal regulation and voluntary policies by tech companies is needed to protect the public. “Companies have to take responsibility,” Lynch said.

The new Meta policy will cover any advertisement for a social issue, election or political candidate that includes a realistic image of a person or event that has been altered using artificial intelligence. More modest uses of the technology, such as resizing an image or sharpening it, will be allowed without disclosure.

Along with labels that inform the viewer when an ad contains AI-generated images, information about the ad’s use of AI will be included in Facebook’s online ad library. Meta, which is based in Menlo Park, California, says content that violates the rule will be removed.

Google announced a similar AI-labeling policy for political ads in September. Under that rule, political ads running on YouTube or other Google platforms will have to disclose the use of voices or images altered by artificial intelligence.

This article originally appeared on apnews.com
