Caution or Innovation: The drivers of EU and UK AI policy

16 Jun 2022  –  Written by Jessica Sumner

As artificial intelligence (AI) technologies advance, the EU and UK have diverged in their legal, business, and social approaches to regulating AI. Companies conducting business in both markets must carefully abide by the stringent laws of the EU while taking advantage of the UK’s innovation-first approach. While both entities recognise the potential to harness the power of artificial intelligence for their economic, healthcare, and climate change adaptation efforts, they differ on the means by which this will take place. Undoubtedly, both recognise the success that Silicon Valley has experienced, while also understanding the ways in which the foundational culture supporting that “tech boom” could be improved upon within their own approaches. Ultimately, promoting a culture of innovation shared by both the EU and UK would create economic opportunities and the potential for competition with the United States, strengthening the EU’s cohesion as an economic and information zone and advancing the UK as a viable tech competitor.

In its foundational legislation, the EU has taken a vastly different approach from the UK. The EU has had to take a cohesive approach, one that continues to promote the bloc as an organisation even as it reorganises itself without the UK. Its innovation sector therefore faces a choice: innovate within the framework of stringent policies, or pave its own path for Europe’s tech industry. As technology has transformed within the EU, regulation has been put in place in direct opposition to the “let innovation take its course” ethos of Silicon Valley. With a growing number of EU tech companies disrupting areas such as fintech and smart energy solutions, the interest in cultivating the potential of new technologies is there, but advancing to the same level as the UK will prove difficult.

To understand the potential that lies within these regulatory practices, it is crucial to analyse the frameworks the UK and EU are using to approach these technological advances. The two legislative approaches differ sharply: the UK takes an approach similar to that of Silicon Valley, one that promotes the potential of technology rather than pre-empting possible harm, whereas the EU, as a whole, continues to proceed with caution.


EU: An Infrastructure of Caution

To define the ways in which legislation will set the parameters of AI, the EU has developed a three-tier classification system. The AI White Paper sorts artificial intelligence systems into the following categories: unacceptable-risk AI systems, high-risk systems, and limited- and minimal-risk AI. Each category carries specific legislation with which companies that do business in the EU, operate within an EU member state, or intend to conduct business in the EU will need to comply.

Of the classifications, unacceptable-risk AI systems are prohibited from use in the EU because they prove too invasive. These consist of: (1) subliminal, manipulative, or exploitative systems that cause harm; (2) real-time, remote biometric identification systems used in public spaces for law enforcement; and (3) all forms of social scoring, such as AI that evaluates an individual’s trustworthiness based on social behaviour or predicted personality traits. This category in particular has been met with the greatest amount of lobbying from corporations seeking to have it amended in line with the level of risk they consider appropriate when adopting new technologies.

More broadly, the proposed legislation includes the following rules:

  • Address risks specifically created by AI applications;
  • Propose a list of high-risk applications;
  • Set clear requirements for AI systems for high-risk applications;
  • Define specific obligations for AI users and providers of high-risk applications;
  • Propose a conformity assessment before the AI system is put into service or placed on the market;
  • Propose enforcement after such an AI system is placed in the market;
  • Propose a governance structure at the European and national levels.

While these precautionary measures exist, first and foremost, to protect citizens’ rights, they remain insufficient to address the specific challenges that AI systems may bring. Only once this protection has been strengthened will companies be able to navigate the EU tech space with confidence, and will the economic benefits of more widespread AI adoption be realised.


UK: A Sector-Led Approach

In contrast to this consumer-rights-led approach, the UK’s strategy lends itself to promoting a greater sense of autonomy on the part of companies, and serves as a catalyst for encouraging tech investment outside of Silicon Valley.

The UK’s approach justifies a lack of regulation by prioritising the rights of private corporations over the potential harm that the technology itself is capable of causing. The National AI Strategy sets out three main goals intended to help the UK achieve the status of a “global AI superpower”. These are:

  • Invest and plan for the long-term needs of the AI ecosystem to continue our leadership as a science and AI Superpower;
  • Support the transition to an AI-enabled economy, capturing the benefits of innovation in the UK, and ensuring AI benefits all sectors and regions;
  • Ensure the UK gets the governance of AI technologies right to encourage innovation, investment, and protect the public and our fundamental values.

The government has specified that it is first and foremost concerned with building an innovative legacy and creating more opportunities, something that will distinguish it from the public-sector-focused policies it pursued prior to Brexit. The UK’s National AI Strategy sends a clear signal that the intent is to stray from cautious, regulation-bound advancement and instead be innovative first: to drive prosperity across the UK, ensure everyone can benefit from AI, and apply AI to help solve global challenges like climate change. While this approach contrasts with the defined infrastructure the EU has laid out for its own tech sector, it is too soon to tell how each has benefitted, and possibly suffered, from its own approach.


Which is Better & Who Does it Serve? 

Certainly, the EU’s approach clearly sets an operational framework within which companies can conduct business while remaining trans-nationally compliant. This will not necessarily dissuade companies from investing, but, as with the GDPR, to which AI policy is subject, it is likely to ensure companies abide by the regulatory frameworks in place or face fines. Conversely, the UK’s goal is clearly to bolster the innovative efforts that make it competitive with the US market, and to do so it has decided that a company-first approach is most suitable. Determining which is better depends on each regulator’s specific goal. As the UK seeks to be innovation-first, this will certainly be more attainable with fewer regulatory barriers in place. The EU, by contrast, seeks to reap the benefits of the more benevolent technologies while holding off on those that could lead to undesirable outcomes for its citizens, necessitating greater and more stringent regulation.

IDRN does not take an institutional position and we encourage a diversity of opinions and perspectives in order to maximise the public good.

Recommended citation:

Sumner, J. (2022) Caution or Innovation: The drivers of EU and UK AI policy, IDRN, 16 June. Available at: [Accessed dd/mm/yyyy].