EU countries adopt a common position on Artificial Intelligence rulebook

EU ministers green-lighted a general approach to the AI Act at the Telecom Council meeting on Tuesday (6 December). EURACTIV provides an overview of the main changes.

The AI Act is a flagship legislative proposal to regulate Artificial Intelligence technology based on its potential to cause harm. The EU Council is the first co-legislator to finish the first step of the legislative process, with the European Parliament due to finalise its version around March next year.

“The Czech presidency’s final compromise text takes into account the key concerns of the member states and preserves the delicate balance between the protection of fundamental rights and the promotion of uptake of AI technology,” said Ivan Bartoš, Czechia’s Deputy Prime Minister for Digitalisation.

The position of the EU Council on the flagship legislation to regulate Artificial Intelligence was shared on Friday (18 November), with some final last-minute adjustments made by the Czech Presidency.

AI definition

How AI is defined was a critical part of the discussions, as it determines the scope of the regulation.

Member states were concerned that traditional software would be caught by the rules, so they put forward a narrower definition covering systems developed through machine learning and logic- and knowledge-based approaches, elements that the Commission can specify or update later via delegated acts.

The Czech Presidency of the EU Council pitched a narrower definition of Artificial Intelligence (AI), a revised and shortened list of high-risk systems, a stronger role for the AI Board and a reworded national security exemption.

General purpose AI

General purpose AI covers systems such as large language models that can be adapted to carry out various tasks. As such, it did not initially fall within the scope of the AI regulation, which only envisaged objective-based systems.

However, the member states deemed that leaving these critical systems out of the scope would have crippled the AI rulebook, while the specificities of this nascent market needed some tailoring.

The Czech presidency resolved the matter by tasking the Commission with carrying out an impact assessment and consultation, on the basis of which it would adapt the rules for general purpose AI via an implementing act within one and a half years of the regulation's entry into force.

The Czech Republic wants the Commission to evaluate how best to adapt the obligations of the AI Act to general purpose AI, according to the latest compromise text seen by EURACTIV.

Prohibited practices

The AI rulebook bans outright the use of the technology for subliminal techniques, the exploitation of vulnerabilities and Chinese-style social scoring.

The social scoring ban was extended to private actors to avoid the prohibition being circumvented via a contractor, whilst the concept of vulnerability was also extended to socio-economic aspects.

High-risk categories

Under Annex III, the regulation lists the uses of AI considered at high risk of harming people or property, which must therefore comply with stricter legal obligations.

Notably, the Czech presidency introduced an extra layer, meaning that, to be classified as high-risk, a system must have a decisive weight in the decision-making process and not be ‘purely accessory’, a concept left to the Commission to define via an implementing act.

The Council removed deepfake detection by law enforcement authorities, crime analytics and the verification of the authenticity of travel documents from the list. However, critical digital infrastructure and life and health insurance have been added.

Another significant change is that the Commission will be able not only to add high-risk use cases to the annex, but also to delete them under certain conditions.

Moreover, the obligation on providers of high-risk systems to register in an EU database has been extended to users that are public bodies, with the exception of law enforcement.

High-risk obligations

High-risk systems will have to comply with requirements such as dataset quality and detailed technical documentation. For the Czech presidency, these provisions “have been clarified and adjusted in such a way that they are more technically feasible and less burdensome for stakeholders to comply with”.

The general approach also attempts to clarify the allocation of responsibility along the complex AI value chains and how the AI Act will interact with existing sectorial legislation.

A new partial compromise on the AI Act, seen by EURACTIV on Friday (16 September), further elaborates on the concept of the ‘extra layer’ that would qualify an AI as high-risk only if it has a major impact on decision-making.

Law enforcement

The member states introduced several carveouts for law enforcement in the text, some of which are intended to be ‘bargaining chips’ for the negotiations with the European Parliament.

For instance, while users of high-risk systems will have to monitor them after launch and report serious incidents to the provider, this obligation does not apply to sensitive information stemming from law enforcement activities.

What the EU governments seem less keen to concede on is the exclusion of AI applications related to national security, defence and the military from the regulation’s scope, and the capacity for police agencies to use ‘real-time’ remote biometric identification systems in exceptional circumstances.

The Czech Presidency of the EU Council circulated a new compromise on the Artificial Intelligence (AI) Act on Wednesday (19 October), set to be the basis for an agreement next month.

Governance & enforcement

The Council has enhanced the AI Board, which will gather the competent national authorities, notably by introducing elements already present in the European Data Protection Board, like the pool of experts.

The general approach also mandates the Commission to designate one or more testing facilities to provide technical support for enforcement and to adopt guidance on how to comply with the legislation.

The penalties for breaching the AI Act’s obligations were made lighter for SMEs, while a set of criteria was introduced for national authorities to consider when calculating sanctions.

The AI Act includes the possibility of setting up regulatory sandboxes, controlled environments under the supervision of an authority where companies can test AI solutions.

The Council’s text allows such testing to take place in real-world conditions and, under certain conditions, even unsupervised.

The transparency requirements for emotion recognition and deepfakes have been enhanced.

[Edited by Nathalie Weatherald]