Proposal for EU regulation on artificial intelligence 1/19/21

On 21 April 2021 the European Commission published its proposal for a regulation on artificial intelligence (“AI”), the first proposed piece of legislation in Europe dedicated to governing AI. This article explores the key provisions of the proposed regulation.

The scope

The regulation will apply to businesses and organisations, whether established within or outside the EU, that offer AI solutions for use in the EU or whose AI systems affect users in the EU (individuals or entities).

The regulation will not apply to AI systems developed or used exclusively for military purposes, nor to AI use governed by international agreements on cooperation between law enforcement or judicial agencies.

The proposal offers a fairly open definition of an AI system: software developed using one or more of the techniques and approaches listed in the proposal, such as machine learning or inductive programming, that can, for a given set of human-defined objectives, generate outputs such as content, forecasts, recommendations or decisions that influence the environment the system interacts with.

An AI provider covered by the proposal can be a business, a government agency or an individual that has developed an AI system, or has had one developed on its behalf, with a view to placing it on the EU market for the first time in the course of commercial activity, whether for payment or free of charge.

Banned AI systems

To protect EU residents, the proposal lists AI systems to be banned from the EU market, for example:

  • Systems deploying subliminal techniques that distort a person’s behaviour in a way that causes or may cause physical or psychological harm to that person or to someone else;
  • Systems exploiting the vulnerabilities of a specific group of persons due to their age or physical or mental disability in order to distort the behaviour of a member of that group in a way that causes or may cause physical or psychological harm to that person or to someone else;
  • Systems used by government agencies, or by another organisation on their behalf, to assess or classify the trustworthiness of individuals based on their social behaviour or personality traits, where this may lead to unjustified or disproportionately unfavourable treatment of a person or group of persons;
  • ‘Real-time’ remote biometric identification systems used in publicly accessible spaces for law enforcement purposes, other than systems used, for instance, to search for criminal suspects or missing children, or to prevent an imminent threat to a person’s life or a terrorist attack.

High-risk systems

Taking a risk-based approach, the proposal lays down specific rules for AI systems that pose a high risk to a person’s health, safety or fundamental rights, such as biometric identification systems, systems for managing critical infrastructure (water, electricity or road traffic), systems determining access to education, systems used in recruitment and employment, and others.

Before any high-risk system can be placed on the EU market, it will have to meet certain requirements covering data governance, record-keeping, provision of information to users, effective human oversight of the system, and more.

Opportunities for businesses

To offset these fairly stringent rules, the proposal also provides mechanisms allowing AI developers to test their systems for compliance in AI regulatory sandboxes.

Although the binding rules mainly apply to high-risk systems, the proposal also encourages businesses to draw up voluntary codes of conduct for AI systems that are not considered high risk.

Liability

The proposal also sets out penalty levels: a fine for the most serious breaches may reach EUR 30 million or 6% of the company’s total worldwide annual turnover, whichever is higher.

If you have any comments on this article, please email them to lv_mindlink@pwc.com
