How AI can help safeguard a brand’s reputation – and the internet

Unitary is on a mission to make the internet a safer digital space for users and brands, powered by artificial intelligence and content moderation expertise.

There are 4.8 billion social media users worldwide, representing 59.9% of the global population. 95 million photos and videos are shared on Instagram each day, which translates into 65,972 every minute.

These numbers would pose no risk if every piece of content were free of harmful material that fosters hate or damages a brand’s reputation. In reality, noxious content continues to populate the web, and humans cannot moderate it all manually.

Born in 2019 from the minds of former black-hole physicist Sasha Haco and experienced Facebook and Reddit content moderator James Thewlis, Unitary is harnessing the power of AI to make the internet safer.

The AI startup builds a custom machine learning model for each client and rigorously tests it against real-world scenarios, so clients can integrate the system into their workflows through a scalable API. Every model is adapted to the client’s current policies and safety challenges, ensuring content is moderated according to each brand’s context and needs.
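To make that concrete, here is a minimal sketch of what calling a hosted moderation model over HTTP might look like. The endpoint, payload shape and label names below are hypothetical illustrations; the article does not describe Unitary’s actual API schema.

```python
import requests

# Hypothetical endpoint and credentials, for illustration only
API_URL = "https://api.example-moderation.com/v1/classify"
API_KEY = "your-api-key"

def moderate(content_url: str) -> dict:
    """Submit a piece of content for classification and return label scores."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "url": content_url,
            # Policies are assumed here to be configurable per brand
            "policies": ["hate", "violence", "adult"],
        },
        timeout=30,
    )
    response.raise_for_status()
    # e.g. {"hate": 0.02, "violence": 0.91, "adult": 0.01}
    return response.json()

scores = moderate("https://example.com/uploaded-video.mp4")
flagged = {label: s for label, s in scores.items() if s > 0.8}
print(flagged)
```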

Brands depend on a digital presence to reach customers and build a community of users. Keeping that space safe is Unitary’s guiding mission.

A safe internet is “imperative for the bottom line”

“Brands know that they have to rely on a safe internet,” stresses Zoe Steele, Director of Business Development at Unitary.

With the rise of the Global Alliance for Responsible Media (GARM), brands are increasingly aware that reaching users must go hand in hand with safeguarding their reputation.

Statistics show that 87% of customers will purchase a product because a company advocated for an issue they cared about, while 92% say they have a more positive image of companies that support social issues and environmental efforts.

Moreover, a company’s reputation accounts for 63% of its market value. A campaign gone wrong or a lack of marketing due diligence could incur hefty costs.

“It’s not about being 99% safe because when they’re spending tens of millions of dollars on their marketing budgets, 1% of impressions that might be unsafe is a major reputational risk,” warns Steele.

“No brand wants to be featured in a Wall Street Journal article where their content or their advertising showed up next to something horrific, that’s every marketer’s worst nightmare,” she adds.

Giving content moderation a makeover

Content moderation has been a concern ever since it became possible to upload content and comments to the internet. However, the tools for moderating have not necessarily evolved with the times, costing money and inflicting damage on brands along the way.

Most tools have relied on keyword blocking or frame-by-frame analysis of images and videos, techniques that cannot understand the nuance and context of objects, content and text.

During the pandemic, UK news publishers were projected to lose £50m in ad revenue as brand safety measures blocked the keyword ‘coronavirus’. The use of blocklists not only forced newspapers to make operational cost cuts, but demonstrated how content moderation that doesn’t account for context can be counterproductive.
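As a toy illustration (not any vendor’s actual implementation), the sketch below shows how a bare keyword blocklist behaves: it flags a legitimate health headline while missing a menacing message that uses no blocklisted words.

```python
# Toy keyword blocklist, for illustration only
BLOCKLIST = {"coronavirus", "attack", "death"}

def is_blocked(text: str) -> bool:
    """Block any text containing a blocklisted word, regardless of context."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return not BLOCKLIST.isdisjoint(words)

# A legitimate news headline is blocked (false positive)...
print(is_blocked("NHS issues new coronavirus vaccination guidance"))  # True
# ...while genuinely menacing text with no blocklisted words slips through (false negative)
print(is_blocked("We know where you live and we are coming for you"))  # False
```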

Unitary has taken a multimodal approach that trains AI systems to understand objects within their context, and to adapt the definition of unsafe content based on the safety parameters outlined by each client.

This approach is long overdue in the content moderation space: video makes up 80% of online traffic and has been notoriously difficult to moderate, because the technology hasn’t been trained to interpret context the way humans do.

“We think the future of moderation lies in dynamic updating with policy and dynamically updating the classification alongside new rising types of harm,” shares Steele.

“You need a system that is able to learn as quickly as possible in almost real time to be able to classify all that content and enable trust and safety teams and brand safety teams with the information they need to keep their platform safe.”

Importantly, the proliferation of generative AI tools means that harmful content is no longer created only by humans. It also means that noxious content is created and shared at greater rates than before.

“You also need major AI tools to combat potential major AI harms,” warns Steele.

Why marketing teams win with thorough content moderation

Integrating AI tools into a marketing team’s content moderation efforts has a twofold advantage: mitigating risk and boosting monetisation.

By preventing certain content from being posted because it is deemed unsafe, brands can avoid compromising scenarios where they have to offer apologies to their community or justify their advertising choices.

Effective content moderation can also unlock higher rates of monetisation.

“Let’s say that a platform wants to open up a new format that they want to monetise, like a new creative surface like an immersive video product, something that is nuanced and they don’t feel confident in opening that surface area up to advertisers,” explains Steele.

“With context based safety solutions, you make sure that instead of invoking these archaic tools like keyword blocking, you’re really understanding the nuance of that surface area to drive lower CPM (cost per mille) for advertisers and increase ad revenue for these platforms.”
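For context, CPM is the cost an advertiser pays per 1,000 impressions. Taking a hypothetical $5 CPM, a $10m campaign buys roughly two billion impressions, so the 1% of unsafe impressions Steele describes above would amount to around 20 million potentially unsafe ad placements.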

Open-sourcing content moderation

Brands that have the tools to safeguard their reputation will be better positioned to maintain stable monetisation channels and build trust with their communities.

However, a safe internet should not be gatekept. Open-sourcing data can help address the collective responsibility advertisers have in creating safe digital spaces.

Accordingly, Unitary built Detoxify, an open-source text moderation model that helps SMEs and other small brands without large budgets for content moderation.
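For teams that want to try it, Detoxify is published as a Python package; the snippet below follows the usage pattern from the project’s README (note that the exact output labels depend on which model variant you load).

```python
# pip install detoxify
from detoxify import Detoxify

# Load a pre-trained variant: 'original', 'unbiased' or 'multilingual',
# each trained on a different Jigsaw toxic-comment dataset
model = Detoxify('original')

# predict() accepts a string or a list of strings and returns a dict
# mapping each label (toxicity, insult, threat, ...) to a score per input
results = model.predict([
    "Thanks for the thoughtful reply!",
    "You are an idiot and nobody likes you.",
])

for label, scores in results.items():
    print(label, [round(float(s), 3) for s in scores])
```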

“We know that safety is not a competitive advantage in every sense,” admits Steele. “The work is never done but open sourcing is a good step forwards for that.”

This shared responsibility will become increasingly important as generative AI continues to evolve, making it even more difficult to moderate the mountains of content that can be created at the click of a button.

