The EU AI Act: what UK businesses need to know

The European Union has announced a provisional agreement on its AI Act, with important consequences for UK businesses developing the technology.


Brussels has agreed a new set of regulations that will govern how AI businesses operate in the European Union.

The world’s first legal framework on AI will follow a risk-based approach, in which AI systems are evaluated according to the level of risk they present to users: unacceptable, high, or limited/low risk.

The European agreement has leapfrogged the UK’s effort to become the first country to establish its own AI regulation, despite Westminster’s emphatic wish to build a ‘British Silicon Valley’.

As businesses react to the news, European firms are concerned the regulations will act as a straitjacket on innovation and hand the advantage to UK AI enterprises in terms of resources and investor attention.

Yet the fine print shows that, despite the regulatory EU-UK divide triggered by Brexit, the EU AI Act is set to have tangible consequences for the way UK AI businesses work.

What does the EU AI Act mean?

The regulations agreed last Friday mean that businesses will need to be more meticulous with their due diligence and compliance, particularly if their AI product is considered high-risk.

Businesses whose systems fall into this category will be required to register them in an EU-wide database managed by the European Commission before they are placed on the market.

They will also have to comply with a range of requirements, particularly around risk management, testing, technical robustness, training data and governance, transparency, human oversight, and cybersecurity.

For some AI businesses, this will mean having to ‘red team’ new models: the process of testing them against various types of risk. That could mean hiring more staff and bearing additional compliance costs.

Businesses that fail to comply face fines of up to €35m or 7% of global turnover.

Why AI is not a zero-sum game

At first glance, this regulatory labyrinth might suggest UK AI businesses are at a comparative advantage, with European innovation at risk of being stifled.

But the significant customer base UK firms have in Europe suggests otherwise.

Under the EU AI Act, providers from outside the EU will be required to appoint an authorised representative in the EU to ensure the conformity assessment has been carried out, establish a post-market monitoring system, and take corrective action as needed.

Experts note that the arrival of the EU AI Act could also encourage UK AI businesses with no ties to Europe to comply voluntarily, in a bid to differentiate themselves in the market.

By adhering to the legislation, companies signal to customers and investors that they are making a conscious effort to roll out artificial intelligence products ethically and safely.

“We expect the UK to follow the EU’s rules in practice even if not brought into force as primary legislation,” predicts David Strong, Partner and Head of Venture Capital at Marriott Harrison.

“If executed effectively, this Act may be a blueprint for wider global regulation that sets an equal playing field across the board – therefore much is resting on its effect in practice,” he continues.

Others are actively calling for the UK to follow suit.

“In light of the EU’s new framework, we would encourage the UK to be proactive,” emphasises Dr Roeland Decorte, Founder of Decorte Industries and President of the Artificial Intelligence Founders Association.

“This is a unique opportunity for the UK – where the AI summit and EU Act focused on the risks – to work with startups to focus on the economic opportunities and benefits to humanity AI can offer,” he adds.

A race to the AI top?

While the announcement of the EU AI Act might have sent some in the halls of Westminster into a cold sweat, the dynamics of AI technology call for a coordinated, multinational approach to regulation.

From big corporations like Google working across continents to the borderless sharing of data used to train Large Language Models, go-it-alone approaches to regulation will do more damage to innovation than regulation itself.

While the EU will hold the title of the first to have agreed AI regulation, the UK can still make itself a global AI force by listening to what AI businesses require to innovate and succeed, and by paving a regulatory path that facilitates those needs.

