Meta’s Oversight Board is once again taking on the social network’s rules for AI-generated content. The board has accepted two cases that deal with AI-made explicit images of public figures.
While Meta’s rules already prohibit nudity on Facebook and Instagram, the board said in a statement that it wants to address whether “Meta’s policies and its enforcement practices are effective at addressing explicit AI-generated imagery.” Sometimes referred to as “deepfake porn,” AI-generated images of female celebrities, politicians and other public figures have become an increasingly prominent form of online harassment. With the two cases, the Oversight Board could push Meta to adopt new rules to address such harassment on its platform.
The Oversight Board said it’s not naming the two public figures at the center of each case in an effort to avoid further harassment, though it described the circumstances around each post.
One case involves an Instagram post showing an AI-generated image of a nude Indian woman that was posted by an account that “only shares AI-generated images of Indian women.” The post was reported to Meta, but the report was closed after 48 hours because it wasn’t reviewed. The same user appealed that decision, but the appeal was also closed and never reviewed. Meta eventually removed the post after the user appealed to the Oversight Board and the board agreed to take the case.
The second case involved a Facebook post in a group dedicated to AI art. The post in question showed “an AI-generated image of a nude woman with a man groping her breast.” The woman was meant to resemble “an American public figure,” whose name also appeared in the caption of the post. The post was taken down automatically because it had been reported previously and Meta’s internal systems matched it to the prior post. The user appealed the decision to take it down, but the appeal was “automatically closed.” The user then appealed to the Oversight Board, which agreed to consider the case.
In a statement, Oversight Board co-chair Helle Thorning-Schmidt said that the board took up the two cases from different countries in order to assess potential disparities in how Meta’s policies are enforced. “We know that Meta is quicker and more effective at moderating content in some markets and languages than others,” Thorning-Schmidt said. “By taking one case from the US and one from India, we want to look at whether Meta is protecting all women globally in a fair way.”
The Oversight Board is asking for public comment over the next two weeks and will publish its decision, along with policy recommendations for Meta, in the coming weeks. A similar process involving a misleadingly edited video of President Biden recently resulted in Meta agreeing to label more AI-generated content on its platform.