Safer Internet Day: why social media has to transform content moderation

More needs to be done to eliminate harmful, false or dangerous content from social platforms. We take a look at how small businesses can take the lead.

Time and again, the giants of social media have failed to mitigate the dangers of harmful content online.

Just last month during a hearing in the US Senate, Meta’s Mark Zuckerberg publicly apologised to families whose children have been harmed through platforms such as Facebook and Instagram.

As a chorus of voices raises awareness this Safer Internet Day, one of the burning topics is how content moderation can protect users online. Whether it's violent or sexual imagery appearing on somebody's feed, or brands finding their advertising placed next to content that could cause reputational harm, the issues are more pressing than ever.

Yet, where some of the biggest businesses in tech have struggled to do enough, smaller startups are now taking a lead.

The startups working to keep the internet safe

“At the click of a button, our young people can be exposed to age-inappropriate or even the most horrendous online content imaginable,” warns Michal Karnibad, Co-CEO of VerifyMy, a tech solutions company that aims to keep children safe online.

“Despite the best efforts of websites and platforms, schools, parents, caregivers and awareness days to guide online best practice, our findings show more needs to be done,” she stresses.

By some estimates, more than 300 million photos are uploaded to the internet every day, and more than four million hours of content are uploaded to YouTube. Moderating that vast flow of new content is a Herculean task, to put it mildly.

This task has never felt more pressing, and yet it’s already grown beyond the capacity of teams of human moderators.

In this year’s Startups 100 index, the top new UK business identified was an AI content moderation specialist, Unitary. The platform can tackle online content moderation at an almost unfathomable scale, helping to keep users and brands safer online.

The business's patented technology uses machine learning to determine whether a photo or video contains explicit or offensive content – even in nuanced cases. Unitary can analyse around three billion images a day, or 25,000 frames of video per second, catching bad actors that would otherwise have slipped through the net.

Defining and policing harmful content

Beyond the sheer volume of content, moderating unsafe social media posts is complicated by the ambiguity of what counts as 'harmful.'

“When we talk about online harm, it’s not necessarily obvious what we mean. In fact, governments, social media platforms, regulators, and startups alike have dedicated enormous effort to defining what is meant by harmful content,” wrote Sasha Haco, CEO and Co-Founder of Unitary.

Some material is obviously harmful – such as terrorist propaganda or child abuse imagery. On TikTok, some of the most viewed posts that reference suicide, self-harm and highly depressive content have been viewed and liked over 1 million times.

However, other content might only be considered harmful in context. Unitary's solution puts a heavy emphasis on its context-aware AI tool: it can recognise how, for example, imagery of alcohol consumption might be perfectly harmless for one brand advertising alongside it, yet potentially damaging to the reputation of another.

As explained by Haco, harm is not a binary label – it is determined by a wide range of contextual factors. This ambiguous nature of harm makes content moderation a constant uphill battle.
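Haco's point that harm is not a binary label can be illustrated with a toy example. The sketch below is purely hypothetical and does not reflect Unitary's actual system: the labels, brand policies and function names are all illustrative assumptions. It pairs a content label (assumed to come from an upstream classifier) with a per-brand suitability policy, so the same clip can be acceptable next to one brand and unsuitable next to another.

```python
# Hypothetical sketch: labels, policies and names are illustrative
# assumptions, not Unitary's real API.

# A small universally harmful core that no context can make acceptable.
UNIVERSALLY_HARMFUL = {"terrorism", "child_abuse"}

def is_suitable(content_labels: set[str], brand_exclusions: set[str]) -> bool:
    """Decide whether content may appear next to a given brand's ads.

    Harm is contextual: beyond the universally harmful core, suitability
    depends on which labels *this particular brand* has chosen to exclude.
    """
    if content_labels & UNIVERSALLY_HARMFUL:
        return False
    return not (content_labels & brand_exclusions)

clip = {"alcohol"}                      # e.g. a video showing drinks at a bar
drinks_brand = set()                    # an adult-beverage brand: no exclusions
family_brand = {"alcohol", "violence"}  # a children's brand excludes both

print(is_suitable(clip, drinks_brand))  # True  - harmless in this context
print(is_suitable(clip, family_brand))  # False - same clip, different brand
```

In practice the interesting work sits upstream, in the classifier that produces the labels; this sketch only shows why the final suitability decision cannot be a single global yes/no.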

Investment in content moderation will be pivotal

“Rather than pointing fingers, now is the time to act and implement pragmatic solutions to solve the issue of how we best protect children online,” emphasises Karnibad. “Businesses should be engaging and partnering with subject matter experts in this area – including regulators and safety tech providers.”

“Websites must ensure they have robust content moderation technology in place which can identify and remove any illegal material before it is published. At the same time, they must invest in age assurance technologies to ensure those accessing their platforms are the correct age and only see age-appropriate content,” she continues.

Companies like Unitary are vital in strengthening a protective wall that keeps dangerous content out. With its proprietary AI model, Unitary is speeding up the process of identifying harmful material on social media before it’s too late.


