YouTube Broadens the Scope of its AI-driven Content Moderation to Include Flagging Age-Inappropriate Content

On Monday 22 September, YouTube announced via its official blog that it will begin using artificial intelligence to automatically flag age-inappropriate content. YouTube began using AI content moderation in 2017 to find and flag violent extremism, and these systems were later expanded to include automated flagging of hateful conduct. Since the start of the coronavirus pandemic, social media platforms have also deployed AI to detect and flag misinformation. Now, YouTube will begin using AI to flag age-inappropriate content as well.

“Over the last several years, we’ve taken important steps to make YouTube a safer place for families, like launching the standalone YouTube Kids app for users under the age of 13. Today, we’re announcing a continuation of these efforts to live up to our regulatory obligations around the world and to provide age-appropriate experiences when people come to YouTube.”

The LawTechLab class of 2020 was tasked with developing a content-moderation AI algorithm to automatically detect coronavirus misinformation during the “lab” practical session. One of the main insights from this session was that satire, parody, and criticism can be difficult to detect. Will these difficulties play a role in the detection of age-inappropriate content?
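The classification approach the students explored can be illustrated with a minimal sketch. The example below is a toy multinomial Naive Bayes text classifier written from scratch; the training sentences and labels are hypothetical placeholders, not the class's actual dataset or algorithm. Note how a bag-of-words model like this looks only at word frequencies, which is exactly why satire or criticism that quotes a false claim tends to be flagged alongside the claim itself.

```python
from collections import Counter
import math

# Hypothetical toy training data -- NOT the dataset used in the lab session.
TRAIN = [
    ("vaccine causes microchips in your blood", "misinfo"),
    ("drinking bleach cures the virus", "misinfo"),
    ("masks reduce transmission according to studies", "legit"),
    ("health officials recommend regular hand washing", "legit"),
]

def tokenize(text):
    """Crude whitespace tokenizer; real systems use far richer features."""
    return text.lower().split()

class NaiveBayes:
    """Minimal multinomial Naive Bayes classifier with add-one smoothing."""

    def __init__(self, examples):
        self.word_counts = {}          # label -> Counter of word frequencies
        self.label_counts = Counter()  # label -> number of documents
        self.vocab = set()
        for text, label in examples:
            self.label_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            for word in tokenize(text):
                counts[word] += 1
                self.vocab.add(word)

    def predict(self, text):
        best_label, best_score = None, float("-inf")
        total_docs = sum(self.label_counts.values())
        for label, counts in self.word_counts.items():
            # log prior for the label
            score = math.log(self.label_counts[label] / total_docs)
            total_words = sum(counts.values())
            # add the smoothed log likelihood of each word
            for word in tokenize(text):
                score += math.log(
                    (counts[word] + 1) / (total_words + len(self.vocab))
                )
            if score > best_score:
                best_label, best_score = label, score
        return best_label

clf = NaiveBayes(TRAIN)
print(clf.predict("bleach cures everything"))
```

Because the model only counts words, a satirical post mocking the bleach claim would score the same as the claim itself, mirroring the difficulty the students encountered.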

[Image: The UCT Cyberlaw class of 2020, watching a demonstration of a classification algorithm for the detection of fake news.]
