The Oversight Board of Meta Platforms, an independent body funded by the social media giant, is currently evaluating the company’s approach to two AI-generated images of female celebrities circulating on Facebook and Instagram.
The board has described the images in detail but withheld the celebrities’ names to prevent further harm; the cases serve as studies for assessing Meta’s policies and enforcement practices regarding sexually explicit deepfakes.
Advancements in AI technology have enabled the creation of fabricated content, including images, audio, and videos, that closely resemble real human-generated material.
This has led to a surge in the dissemination of sexualized deepfakes online, predominantly targeting women and girls.
One notable incident earlier this year involved Elon Musk’s social media platform X temporarily restricting searches for images of Taylor Swift due to difficulties in controlling the spread of fake explicit content featuring her.
Industry leaders have called for legislative measures to address the proliferation of harmful deepfakes and to hold tech companies accountable for preventing their misuse.
The Oversight Board’s review includes cases such as an AI-generated nude image resembling an Indian public figure, posted on Instagram by an account that exclusively shares such images of Indian women.
Another case involves an image posted in a Facebook group devoted to AI creations, depicting a nude woman resembling an American public figure being groped by a man.
Meta initially removed the image featuring the American woman for violating its harassment policy but left up the one depicting the Indian woman until the board selected it for review.