When people think of content moderation, they usually imagine some kind of AI program that automatically monitors social media posts to delete inappropriate content. Though some content moderation is indeed performed by AI, a huge part of it is still done manually by people because moderation remains too difficult and nuanced for AI to perform well. In fact, over 100,000 content moderators work globally today to keep the internet safe for the rest of us. 

Moderating the internet is no easy task. In the first quarter of 2018 alone, Facebook removed 3.5 million items of uncivil or violent content. Unfortunately, sitting in front of a computer and scrolling through violent images all day can have serious consequences for mental health. A further challenge is that this massive and necessary work remains largely invisible: when we scroll through social media, most of us have no idea that tens of thousands of content moderators are working around the clock to keep our browsing safe.

In his research article “The Psychological Well-Being of Content Moderators,” UT Prof. Matthew Lease collaborated with a team of researchers to explore how content moderation affects workers’ health and what we can do to better protect them. As Prof. Lease explained, “A lot of the problem with content moderation work is that we don’t see it.”

Because most people remain unaware of both the existence of human content moderators and the health risks they face each day, the issue has received little public attention or scrutiny.

Can we ensure moderator well-being without sacrificing accuracy and efficiency? To answer this question, Prof. Lease and his team have been studying image blurring techniques that reduce moderators’ exposure to disturbing imagery without compromising their work. In the research article “Fast, Accurate, and Healthier: Interactive Blurring Helps Moderators Reduce Exposure to Harmful Content,” the team designed three different user interfaces that blur images for moderators. They then conducted user studies to understand how each interface affected the speed and accuracy of moderation work, as well as the moderators’ emotional well-being. They found that interactive blurring reduced neither the speed nor the accuracy of content moderation, and results from a psychological questionnaire showed that it also lessened the negative mental health impacts of the work.
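
To make the idea concrete, here is a minimal sketch of what an interactive blurring interface could look like; it is an illustration, not the interfaces used in the study. The image is shown blurred by default, and the moderator reveals the original only on demand by clicking it. It assumes Pillow and Tkinter are available, and “photo.jpg” is a placeholder path.

```python
# Illustrative sketch of interactive blurring (not the study's actual interface).
# Assumes Pillow and Tkinter are installed; "photo.jpg" is a placeholder path.
import tkinter as tk
from PIL import Image, ImageFilter, ImageTk

def main(path="photo.jpg", blur_radius=12):
    root = tk.Tk()
    root.title("Interactive blur sketch")

    original = Image.open(path)
    blurred = original.filter(ImageFilter.GaussianBlur(radius=blur_radius))

    # Keep references so Tkinter does not garbage-collect the images.
    images = {
        "blurred": ImageTk.PhotoImage(blurred),
        "original": ImageTk.PhotoImage(original),
    }
    state = {"revealed": False}

    label = tk.Label(root, image=images["blurred"])
    label.pack()

    def toggle(_event):
        # Clicking toggles between the blurred preview and the full image,
        # so exposure to the unblurred content is opt-in and brief.
        state["revealed"] = not state["revealed"]
        key = "original" if state["revealed"] else "blurred"
        label.configure(image=images[key])

    label.bind("<Button-1>", toggle)
    root.mainloop()

if __name__ == "__main__":
    main()
```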

While reducing exposure to disturbing imagery is important, Prof. Lease and his co-researchers also recommend creating workplace programs that build moderators’ resilience, taking further measures to prevent and limit exposure to traumatic content wherever possible, and providing better, long-term access to health care.
