
EU AI Act: the Regulatory Framework on the Usage of Machine Learning in the European Union

Following the first publication of the regulatory framework proposal on the operation and governance of machine learning applications in 2021, negotiations to finalize the legislation began in the EU Council on June 14th. According to the press release, the EU countries are expected to reach an agreement by the end of 2023.

Within the framework, the AI Act takes a risk-based approach and aims to avoid disproportionate prescriptions when enforcing the regulations. Given the ambiguity of the term artificial intelligence, the AI Act's principles address the general risk posed by any algorithmic or model-based system, including discriminative and generative machine learning models (e.g. large language, speech, and vision deep networks), as well as possible intelligent systems of the future (i.e. AI proper, not to be confused with the current marketing use of the term).

The Act has gained significant attention this year as the plausibility of generative models' output has increased, owing to the use of internet-scale scraped data and large quantities of high-end accelerators. The legislative work is part of an ongoing global effort to set the grounding principles for future developments and applications of model-based systems, e.g. the Algorithmic Accountability Act and the NTIA's policy request for comment in the USA, the Artificial Intelligence and Data Act in Canada, and similar AI strategy laws in Japan.

The EU AI Act's principles are founded on four risk tiers. Rather than imposing blanket restrictions, this tiered approach matches the transparency requirements, rules, obligations, and monitoring for providers and users to the possible adverse effects a system can cause:

  • Minimal-risk systems: free usage is allowed, e.g. model-assisted video game character motion control or 3D rendering.
  • Limited-risk systems: providers are obliged to inform users that they are interacting with an ML system; users can then act accordingly or leave the platform if they wish, e.g. image, video, and sound editing programs.
  • High-risk systems: systems that may cause damage to public rights, health, or safety, or may cause negative environmental effects. They are subject to the highest level of obligations and monitoring.
  • Unacceptable-risk systems: these systems will be prohibited.

Platforms with unacceptable and high-level risks are the two critical categories within the EU AI Act framework. For an ML-assisted system to be classified as unacceptable risk, it must either exploit physical biometric data, emotion, gender, race, ethnicity, citizenship status, religion, or political orientation during inference in a way that may lead to discriminative operations, or manipulate cognitive and behavioural actions, e.g. toys encouraging harmful acts via generated speech. However, post-incident (i.e. not real-time) processing of biometric data to identify serious crimes (subject to court approval) is exempted at this level.

High-risk status receives special emphasis due to the common deployment of large, uninterpretable deep networks. Since these systems cover larger commercial segments, the official documents can be consulted for examples that fall under this category (e.g. products listed under the EU's product safety legislation). Accordingly, stricter assessment, regulation, and monitoring procedures are proposed:

  • The models have to be registered in the EU Database.
  • Platforms have to disclose that a given piece of content was generated by a model.
  • Platforms have to ensure the generation of illegal content is prevented.
  • Platforms have to provide summaries of the copyrighted data used during the training stage.

Considering the tendency of large deep network API providers to conceal training data details, the EU's requirements will enable more transparency and thus allow copyright owners to take the necessary actions.

It should be noted that the EU AI Act should not be confused with the EU Data Act. The Data Act pursues broader goals for the EU's data economy and is concerned with the EU's data sovereignty (e.g. preventing vendor lock-in for IoT data).
