By Aisha Counts | Bloomberg
Meta Platforms Inc. will soon require advertisers to disclose when political or social issue ads have been created or altered by artificial intelligence, aiming to prevent users from being fooled by misinformation.
The rules, which go into effect in 2024, will require advertisers to disclose when AI or other digital tools are used in Facebook or Instagram ads on social issues, elections or politics, Nick Clegg, the company’s vice president of global affairs, announced Wednesday in a blog post. Advertisers will need to say when AI is used to depict real people doing or saying something they didn’t actually do or when a digitally created person or event is made to look realistic, among other cases.
If advertisers fail to disclose their use of AI or other digital tools in these types of ads, Meta will reject the ad. Repeated failures to disclose can result in penalties against the advertiser. The policy doesn’t apply to small changes such as cropping an image or correcting the color.
Meta’s advertising policy comes as tech companies grapple with the ripple effects of AI. Earlier this year, Alphabet Inc.’s Google announced a similar policy requiring election advertisers to disclose when their messages have been altered or created by AI. Social media companies also are struggling to keep up with a surge of misinformation proliferating across platforms such as Facebook, TikTok and X, formerly Twitter. In the wake of the Israel-Hamas war, for example, video game footage passed off as war action has circulated on social media apps.
Experts are particularly concerned about the spread of misinformation because social media companies have relaxed some restrictions ahead of the 2024 US election and global elections in Russia, Taiwan, India and other countries. In June, Google’s YouTube said it would stop taking down content that promotes false claims about the 2020 US presidential election.