
Regulation of Artificial Intelligence – How Long Until the Future is Here?



The regulation of artificial intelligence (AI) is not a new issue. We understand that the use of such technologies can bring many benefits - such as better healthcare, safer and cleaner transport systems, more efficient production, and cheaper and more sustainable energy - but we are also aware that they can pose significant risks if not properly regulated.

Specific objectives of the European Union

As part of the European Union's (EU) digital strategy, the European Commission proposed the first EU regulatory framework for artificial intelligence in April 2021. The European Parliament examined the Commission's proposal and set out its own objectives.

Its general aim is to ensure the proper functioning of the single market by creating the conditions for the development and use of reliable artificial intelligence systems in the EU. To that end, the proposal sets out a harmonised legal framework for the development, marketing and use of AI products and services on the EU market.

In addition, the Parliament has set out further specific objectives. In this context, it aims to:

  • ensure that AI systems placed on the EU market are safe and respect existing EU law,
  • ensure legal certainty to promote investment and innovation in AI,
  • improve the governance and effective implementation of EU law on fundamental rights and safety requirements applicable to AI systems, and
  • facilitate the development of a single market for legitimate, safe and trustworthy AI applications and prevent market fragmentation.

The Parliament also considers it essential that AI systems be overseen by people rather than left to fully automated operation, in order to prevent harmful outcomes.

Defining artificial intelligence 

There is currently no universally accepted scientific definition of "artificial intelligence"; the term is often used generically for computer applications based on various techniques that exhibit capabilities commonly associated with human intelligence.

However, the Commission has found that a clear definition of artificial intelligence is crucial for the allocation of legal liability. The Commission has also proposed that the definitions of "AI" and "AI system" be established at EU level, thus ensuring legal harmonisation and certainty. The framework defines the concept of an "artificial intelligence system", largely based on the definition already known and used by the Organisation for Economic Co-operation and Development (OECD), as follows:

...software that is developed with [specific] techniques and approaches [listed in Annex 1] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with. 

Risk-based classification

The use of AI may have a negative impact on the fundamental rights of individuals and on the safety of users, given its specific characteristics (e.g. opacity, complexity, data dependency, autonomous behaviour). To address these concerns, the framework introduces a risk-based approach, whereby the level of legal intervention is adapted to the actual level of risk. Accordingly, the framework distinguishes the following four levels:

  • Unacceptable risk – prohibited: AI systems that pose an unacceptable risk are explicitly prohibited, as they are clearly a threat to people's safety, livelihoods and fundamental rights.
  • High risk – regulatory obligations: AI systems that may harm people's safety or fundamental rights. Their development, use and distribution are not prohibited by the proposal, but new rules are to be introduced.
  • Limited risk – transparency requirements: these AI systems only have to comply with specific transparency requirements. Examples include systems that interact with humans (e.g. chatbots such as ChatGPT), certain emotion recognition systems, biometric categorisation systems, and AI systems that generate or manipulate image, audio or video content (e.g. deepfakes).
  • Low or minimal risk – no legal obligations: AI systems that do not fall into any of the above categories can be developed and used in the EU without further legal obligations.

Where are we now? 

On 14 June 2023, Members of the European Parliament adopted, by 499 votes to 28 with 93 abstentions, the Parliament's negotiating position on the proposed Artificial Intelligence Act for the upcoming negotiations with the Council of the European Union. In particular, the position calls for a complete ban on the use of biometric surveillance, predictive policing and emotion recognition in law enforcement, border control, the workplace and educational institutions. Furthermore, when generative AI systems (e.g. ChatGPT) are used, it should be clearly indicated that the content was generated by artificial intelligence. Finally, the use of AI systems to influence voters in elections should be classified as high risk.

Negotiations with the Council on the final text of the legislation have already started. The current position is that the legislation will be adopted as a regulation, meaning the new rules will be directly applicable throughout the EU without transposition into national law. Legislating by way of a regulation is intended to ensure uniform rules and identical, predictable conditions for the development, use and distribution of AI, and thus to provide a properly regulated framework for innovation. The aim is to reach agreement on the final text of the legislation by the end of the year.

By Kamilla Bodori, Junior Associate, and Istvan Solt, Attorney at Law, Act Legal