13 December 2020

Prudish or Plain Stupid: Are These Walla Walla Onions or "You Know What"??

Tech gone wild or what? Barely legal? ...just too much ridiculous fun!
"A seed business in Newfoundland, Canada was notified its image of onions had been removed for violating the terms of service. Its picture of onions apparently set off the auto-moderation, which flagged the image for containing "products with overtly sexual positioning." A follow-up message noted the picture of a handful of onions in a wicker basket was "sexually suggestive."

Source:

Content Moderation Case Study: Facebook's AI Continues To Struggle With Identifying Nudity (2020)

from the ai-is-not-the-answer dept

Summary: Since its inception, Facebook has attempted to be more "family-friendly" than other social media services. Its hardline stance on nudity, however, has often proved problematic, as its AI (and its human moderators) have flagged accounts for harmless images and/or failed to consider context when removing images or locking accounts.

The latest example of Facebook's AI failing to properly moderate nudity involves garden vegetables . . .

Facebook's nudity policy has been inconsistent from the start. Male breasts are treated differently than female breasts, resulting in some questionable decisions by the platform.

Its policy has also caused problems for definitively non-sexual content, like photos posted by breastfeeding groups and breast cancer awareness videos.

In this case, the round shape and flesh tones of the onions appear to have tricked the AI into thinking garden vegetables were overtly sexual content, showing the AI still has a lot to learn about human anatomy and sexual positioning . . .
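To make that failure mode concrete, here is a minimal sketch of threshold-based auto-moderation. The classifier output, labels, and threshold below are illustrative assumptions, not Facebook's actual pipeline:

```python
# Hypothetical sketch of threshold-based auto-moderation.
# The labels, scores, and threshold are illustrative assumptions,
# not Facebook's actual system.

from dataclasses import dataclass

@dataclass
class ModerationScore:
    label: str         # e.g. "adult_nudity"
    confidence: float  # classifier score in [0.0, 1.0]

def auto_moderate(scores: list[ModerationScore],
                  block_threshold: float = 0.8) -> str:
    """Remove a post if any sensitive label scores above the threshold."""
    for score in scores:
        if score.confidence >= block_threshold:
            return f"REMOVED: {score.label} ({score.confidence:.2f})"
    return "ALLOWED"

# Round shapes and flesh tones can push a benign image over the
# threshold -- the over-blocking failure described above.
onion_photo = [ModerationScore("adult_nudity", 0.83)]
print(auto_moderate(onion_photo))  # REMOVED: adult_nudity (0.83)
```

With no human in the loop, anything above the cutoff is removed automatically, no matter how implausible the classification.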

Resolution: The seed company's ad was reinstated shortly after Facebook moderators were informed of the mistake. A statement from Facebook raised at least one more question: its spokesperson did not clarify exactly what the AI thought the onions were, leaving users to speculate about what the spokesperson meant, as well as how the AI would react to future posts it mistook for "well, you know."

"We use automated technology to keep nudity off our apps," wrote Meg Sinclair, Facebook Canada's head of communications. "But sometimes it doesn't know a walla walla onion from a, well, you know. We restored the ad and are sorry for the business' trouble."

Originally posted at the Trust & Safety Foundation website.

Filed Under: ai, content moderation, nudity
Companies: facebook

Decisions to be made by Facebook:
  • Should more automated nudity/sexual content decisions be backstopped by human moderators? (A triage sketch follows this list.)
  • Is the possibility of over-blocking worth the reduction in labor costs?
  • Is over-blocking preferable to under-blocking when it comes to moderating content?
  • Is Facebook large enough to comfortably absorb any damage to its reputation or user goodwill when its moderation decisions affect content that doesn't actually violate its policies?
  • Is it even possible for a platform of Facebook's size to accurately moderate content and/or provide better options for challenging content removals?
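One way to answer the first question above is to auto-act only on high-confidence flags and route the uncertain middle band to human moderators. A minimal sketch, assuming hypothetical thresholds and a simple review queue:

```python
# Hypothetical human-in-the-loop backstop: auto-remove only
# high-confidence flags, queue the uncertain band for review.
# Thresholds and structure are illustrative assumptions.

REMOVE_THRESHOLD = 0.95  # auto-remove only when the model is very sure
REVIEW_THRESHOLD = 0.60  # uncertain band goes to human moderators

human_review_queue: list[dict] = []

def triage(post_id: str, label: str, confidence: float) -> str:
    if confidence >= REMOVE_THRESHOLD:
        return "auto_removed"
    if confidence >= REVIEW_THRESHOLD:
        human_review_queue.append(
            {"post": post_id, "label": label, "score": confidence}
        )
        return "queued_for_human"
    return "allowed"

# An onion photo scoring 0.83 reaches a human reviewer instead of
# being silently removed.
print(triage("onion_ad_123", "adult_nudity", 0.83))  # queued_for_human
```

The trade-off is exactly the second bullet: every post routed to the queue costs moderator labor, while every post auto-removed risks over-blocking.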
Questions and policy implications to consider:
  • Is handling nudity in accordance with the United States' historically Puritanical views really the best way to moderate content submitted by users all over the world?
  • Would it be more useful to users if content were hidden -- but not deleted -- when it appears to violate Facebook's terms of service, allowing posters and readers to access the content if they choose to after being notified of its potential violation? (A sketch of this flow follows the list.)
  • Would a more transparent appeals process allow for quicker reversals of incorrect moderation decisions?
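The hide-but-not-delete idea could look something like the sketch below. The states, notice text, and appeal function are assumptions for illustration, not a description of any existing Facebook feature:

```python
# Hypothetical "hide, don't delete" flow: flagged content stays
# retrievable behind a notice, and a granted appeal restores it.
# All names and states here are illustrative assumptions.

from enum import Enum

class Visibility(Enum):
    VISIBLE = "visible"
    HIDDEN_PENDING = "hidden_pending"  # behind a click-through notice
    RESTORED = "restored"

def flag(post: dict) -> dict:
    post["visibility"] = Visibility.HIDDEN_PENDING
    post["notice"] = "This post may violate our terms. View anyway?"
    return post

def appeal_granted(post: dict) -> dict:
    post["visibility"] = Visibility.RESTORED
    post.pop("notice", None)
    return post

post = {"id": "onion_ad_123", "visibility": Visibility.VISIBLE}
post = flag(post)            # hidden behind a notice, not deleted
post = appeal_granted(post)  # a transparent appeal restores it
print(post["visibility"])    # Visibility.RESTORED
```

Because nothing is destroyed at the flagging step, an incorrect moderation decision -- like the onion ad -- can be reversed quickly once an appeal succeeds.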
