Meta is moving to identify and label AI-generated images shared on its platforms, including those created with third-party tools. The initiative comes as the company gears up for the 2024 election season, when artificial intelligence tools could sow confusion and misinformation.
According to a blog post by Meta President of Global Affairs Nick Clegg, Meta will soon begin applying “AI generated” labels to images produced by tools from Google, Microsoft, OpenAI, Adobe, Midjourney, and Shutterstock. Meta already applies an “Imagined with AI” label to photorealistic images made with its own AI generator tool.
Clegg mentioned that Meta is collaborating with other prominent companies in the development of artificial intelligence tools. The goal is to establish shared technical standards, specifically by incorporating hidden metadata or watermarks into images. This will enable Meta’s systems to recognize images that have been generated using their AI tools.
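The post doesn't spell out the mechanics, but provenance standards of this kind (industry efforts such as C2PA Content Credentials and IPTC metadata) work by embedding machine-readable provenance data inside the image file itself. As a minimal illustration of the idea — not Meta's actual implementation — the sketch below writes and reads a hypothetical `ai_generated` marker as a PNG `tEXt` metadata chunk, using only the Python standard library:

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: 4-byte length, 4-byte type, data, CRC-32."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_tagged_png(keyword: bytes, text: bytes) -> bytes:
    """Build a minimal 1x1 grayscale PNG carrying a tEXt provenance chunk."""
    sig = b"\x89PNG\r\n\x1a\n"
    # IHDR: width=1, height=1, bit depth 8, grayscale, defaults elsewhere.
    ihdr = png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    text_chunk = png_chunk(b"tEXt", keyword + b"\x00" + text)
    idat = png_chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + 1 pixel
    iend = png_chunk(b"IEND", b"")
    return sig + ihdr + text_chunk + idat + iend

def read_text_chunks(png: bytes) -> dict:
    """Walk the chunk list and collect tEXt keyword/value pairs."""
    assert png[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    out, pos = {}, 8
    while pos < len(png):
        length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = data.partition(b"\x00")
            out[key.decode()] = val.decode()
        pos += 12 + length  # length field + type + data + CRC
    return out

png = make_tagged_png(b"ai_generated", b"tool=example-generator")
print(read_text_chunks(png))  # {'ai_generated': 'tool=example-generator'}
```

Real provenance standards are considerably more elaborate — C2PA, for instance, cryptographically signs the metadata so it cannot be forged or silently stripped — but the underlying pattern of walking a file's metadata for a provenance marker is the same.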
Meta’s labels will be introduced on Facebook, Instagram, and Threads in various languages.
Meta’s announcement is in response to growing concerns from experts, lawmakers, and tech executives about the potential dangers of AI tools that can create realistic images. These tools, combined with the speed at which social media can spread content, have raised fears about the spread of false information that could deceive voters in upcoming elections.
Additionally, this development occurs following criticism from Meta’s Oversight Board regarding the company’s inconsistent policy on manipulated media. The board’s decision was in response to an altered video of US President Joe Biden. In a statement, Biden’s presidential campaign criticized the policy as “nonsensical and dangerous,” in response to the Oversight Board’s findings. Meta announced on Monday that it will carefully review the board’s recommendations and provide a response within 60 days.
On Tuesday, Clegg acknowledged the importance of clearly labeling AI-generated imagery for users.
“Users frequently encounter AI-generated content for the first time, and our users have expressed their appreciation for transparency regarding this emerging technology,” Clegg stated in the post.
“We will be implementing this strategy in the coming year, coinciding with several significant elections happening globally,” he stated. “During this period, we anticipate gaining a deeper understanding of how individuals are generating and disseminating AI content, the types of transparency that are most valued by people, and the ongoing development of these technologies.”
The markers Meta will rely on, now treated as an industry standard for labeling AI-generated images, are not yet embedded in AI-generated video and audio.
In the meantime, Meta is building a feature that lets users disclose when video or audio they share was created with AI. According to Clegg, people will be required to add this disclosure to any video or audio content that has been “digitally created or altered,” and may face penalties if they fail to do so.
In certain cases, if an image, video, or sound that has been digitally created or altered has the potential to significantly mislead the public on an important matter, the company may choose to include a more noticeable label.
Additionally, Meta is taking measures to deter users from removing the hidden watermarks embedded in AI-generated images, according to Clegg.
Clegg said this work matters because the space is likely to become increasingly adversarial in the years ahead: people and organizations that set out to deceive with AI-generated content will look for ways around whatever safeguards exist. He advised users weighing whether content is AI-generated to consider the trustworthiness of the account sharing it and to watch for details that look or sound unnatural.
In addition, Meta revealed on Tuesday that it is expanding its support for the “Take It Down” anti-sextortion tool developed by the National Center for Missing and Exploited Children. The tool offers a secure way for teenagers and parents to generate a unique identifier — a hash — of private images they are concerned could be shared online, without the images themselves leaving their device. Platforms like Meta can then use that identifier to quickly and effectively find and remove matching images.
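The "special identifier" is a hash: a short fingerprint computed from the image that platforms can match against uploads without anyone ever seeing the photo itself. A toy sketch of the idea, using SHA-256 purely for illustration — the hashing Take It Down actually employs is not described here, and an exact-match digest like this would miss resized or re-encoded copies, which is why platforms also use perceptual hashing in practice:

```python
import hashlib

def image_fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest that identifies an image without revealing it."""
    return hashlib.sha256(image_bytes).hexdigest()

# Only the fingerprint would be submitted; the image never leaves the device.
photo = b"...raw bytes of a private photo..."
fp = image_fingerprint(photo)
print(len(fp))  # 64 hex characters, regardless of image size
```

The key privacy property is that the fingerprint is one-way: a platform holding the hash can detect a matching upload, but cannot reconstruct the original image from it.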
According to a blog post by Meta, “Take It Down” was initially launched last year in English and Spanish. Now, it is set to expand to 25 languages and additional countries.
The “Take It Down” announcement follows a Senate hearing last week in which Meta CEO Mark Zuckerberg and other social media executives faced intense questioning about their companies’ safeguards for young users.