The digital transition, accelerated by the Covid-19 crisis, brought a number of policy challenges to the EU, especially in relation to emerging technologies in the European market, with Artificial Intelligence (AI) at the forefront. AI systems, referred to as “systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals”, are gaining momentum with applications in key sectors such as automotive (connected and self-driving vehicles), robotics (automated inventory management and manufacturing) and e-commerce. The European market, in this sense, showed a high degree of readiness, with 60,346 AI patents registered as of March 2019, behind only China (66,508) and the United States (279,145). Such rapid market development immediately sparked two questions: whether to regulate AI at all, and if so, to what extent, given the impact that algorithms and intelligent robots may have on fundamental rights and ethics. While the first question quickly received a positive answer from EU policymakers, the second underwent a long regulatory design process, which ultimately led the European Commission to put forward a proposal for a Regulation on Artificial Intelligence – the AI Act.
The proposed AI Act clearly follows the regulatory trend that has shaped the Commission’s recent initiatives, such as the General Data Protection Regulation (GDPR), the Digital Services Act and the general internal market rules. Pursuing a reasoned “risk-based approach”, the EU executive aims to regulate the potential uses of AI systems rather than the technology itself, and provides a framework to ensure that those uses respect criteria such as high-quality data, traceability and human oversight.
The AI Act de facto bans all AI uses presenting an “unacceptable risk” because they violate EU values, such as fundamental rights. This provision covers practices including AI that uses subliminal techniques to manipulate a person’s behaviour, or that exploits the vulnerabilities of certain groups of people in ways that may cause harm; social scoring enacted by public authorities; and “real-time” remote biometric identification in publicly accessible places for law enforcement purposes. The latter remains possible under some exceptions, namely the search for missing children or the prevention of a terrorist attack – loosening the regulatory grip on a controversial practice.
Nonetheless, AI uses classified as ‘high-risk’ by the proposed regulation will be allowed in the EU, provided their developers comply with certain requirements and ex-ante conformity assessments. Consistently with the EU’s approach to platform regulation (e.g. the Digital Services Act), the Act does not seek to influence outcomes by mandating how AI algorithms should work; instead, it focuses on transparency, risk management and quality requirements.
On the basis of its intended purpose, an AI system will thus be considered ‘high risk’ when:
It is used as a safety component of a product, or is covered by one of 19 specified pieces of EU single market harmonisation legislation (e.g. aviation, cars, medical devices).
It has implications for health, safety and fundamental rights. This notably includes the operation of critical infrastructure (road traffic, water and electricity supply), education and vocational training, employment (recruitment, task allocation, contract termination), access to public services, law enforcement (risk assessment of individuals, predicting criminal offences), and migration and border control.
Developers of ‘high-risk’ AI systems will have to comply with a number of technical and regulatory requirements, such as establishing safeguards against various types of bias in data sets. Most importantly, Article 14 mandates human oversight throughout the design phase and the whole lifecycle of a ‘high-risk’ AI system, thereby putting accountability at the forefront.
Finally, for limited-risk AI, the regulation mandates only transparency requirements, such as notifying users that they are interacting with an AI system, what personal data it is collecting, and for what purpose.
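The tiered logic described above can be sketched as a toy classifier. To be clear, this is an illustrative sketch only: the keyword sets below are hypothetical stand-ins for the Act’s detailed lists of prohibited practices and high-risk areas, and the real legal test turns on the system’s intended purpose as assessed against the proposal’s annexes, not on string matching.

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright under the proposal
    HIGH = "high"                  # allowed, subject to ex-ante conformity assessment
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # residual tier: no additional obligations


# Hypothetical shorthand lists, loosely distilled from the examples in the
# proposal; they do NOT reproduce the Act's actual legal categories.
BANNED_PRACTICES = {"subliminal manipulation", "social scoring",
                    "real-time remote biometric identification"}
HIGH_RISK_AREAS = {"critical infrastructure", "education", "employment",
                   "public services", "law enforcement", "migration"}
LIMITED_RISK_USES = {"chatbot"}  # e.g. a system that must disclose it is an AI


def classify(intended_purpose: str) -> RiskTier:
    """Map an AI system's stated purpose to a risk tier (illustrative only)."""
    purpose = intended_purpose.strip().lower()
    if purpose in BANNED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if purpose in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    if purpose in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

The sketch mirrors the proposal’s structure: prohibitions are checked first, then the high-risk list, then transparency-only uses, with everything else falling through to a minimal-risk default.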
From an overview of the proposal, it is clear that the Commission did not simply want to tighten the regulatory grip on AI, but rather to showcase the EU as a global standard-setter when it comes to developing safe AI. This broad regulatory framework, currently a draft law that will be amended by the European Parliament and the Council under the co-decision procedure, will apply to a market of 450 million people – a factor global companies will need to take into account when developing their algorithms and systems inside and outside their respective markets. It remains to be seen whether the two other major players, the US and China, will follow the EU down the regulatory pathway, given the share of businesses that gained their frontline positions in AI technologies by mining large amounts of consumers’ personal data. On the geopolitical side, the regulation reads like a ‘hint’ to the US to develop a complementary framework on AI to counter Chinese market development – striking in this sense are the draft law’s provisions banning social scoring through AI, a practice already widely adopted in China.
IDRN does not take an institutional position and we encourage a diversity of opinions and perspectives in order to maximise the public good.
Dante De Falco, F. (2021) Do Androids Dream of Regulated Sheep: The EU AI Act, IDRN, 04 June. Available at: https://idrn.eu/economic-development/do-androids-dream-of-regulated-sheep-the-eu-ai-act [Accessed dd/mm/yyyy].