
Artificial Intelligence Act’s requirements for most AI system maintainers 3/32/24

Tereza Vagentroca
Associate, PwC Legal
Maris Butans
Senior Manager, PwC Legal

An artificial intelligence (AI) system can make your day-to-day work more efficient, competitive and productive in both the private and the public sector. Various AI system models are available on the market that you can put in place, tailor to your company’s needs and use in your day-to-day work. Remember, however, that a company using an AI system for its professional purposes faces various obligations that EU Regulation 2024/1689 (the ‘AI Act’) imposes on AI system maintainers.

What is an AI system maintainer?

A maintainer is a natural or legal person, a public authority, agency or other body that uses an AI system under its control, except where the AI system is used for a personal non-professional activity. In other words, an AI system maintainer is a person that uses an AI system for its own professional purposes, i.e. one that has adapted the system to its own needs and controls who uses it, how and why.

For example, a company buys a model from an AI system developer, tailors it and uses it as a virtual assistant to make customer service more efficient. The company will be treated as a maintainer because it maintains and runs this AI system.

The AI Act assigns specific risk levels to AI systems. This article explores the obligations that apply in most cases, without analysing the additional obligations that arise when high-risk AI systems are used and without looking at the prohibited AI practices. Some systems carry no additional obligations under the AI Act: these are systems that do not affect decisions capable of substantially infringing an individual’s fundamental rights, for example AI used in developing a video game.

If a company maintains a high-risk AI system, its level of responsibility rises, so companies need to check whether any of their AI systems qualify as high-risk.

What is a high-risk AI system?

This is a system used, for example, in the following areas:

  • Critical infrastructure, where a failure could threaten human life, among other things
  • Education or professional training capable of determining access to education and employment opportunities
  • Product safety components
  • Employment, HR management and access to self-employment
  • Key private and public services
  • Law enforcement, where fundamental human rights, including personal liberty, may be affected
  • Migration, asylum and border control management
  • Administration of justice and democratic processes

In most cases, companies putting AI systems in place for their daily tasks are not using any high-risk AI systems or prohibited AI practices, so there are only a few additional rules the AI Act prescribes for such limited-risk systems.

Transparency

A maintainer using an AI system that generates or manipulates image, audio or video content which, for instance, considerably resembles real persons, objects or events and could misleadingly appear authentic or real must disclose that the content has been artificially generated or manipulated. If such content forms part of an evidently artistic, fictional or similar work or programme, the disclosure should be made in a way that does not hinder enjoyment of the work.

For example, if a company in the arts business has created a manipulated image, the disclosure can be made in an accompanying informational text. If a company uses an AI system to create a manipulated image that is then published, individuals should be informed by means of a watermark.

Maintainers using an AI system to generate or manipulate text that is published to inform the public of important matters must disclose that an AI system has been used. This obligation does not apply if a human reviews the text and exercises editorial control, and a natural or legal person takes editorial responsibility for the text created by the AI system.

For example, if a media company uses an AI system to prepare an article informing the public of an important matter and the article is published without being reviewed by an editor, the article should contain a statement that an AI system was used in preparing it. If a human reviews and edits the article before publication, there is no need to state that an AI system has been used.

Even before the AI Act’s requirements take effect, such notices can already be seen in many places: several social media platforms offer an automatic option to state that an AI system was used in creating the content.

AI literacy

Every company should be enhancing AI literacy, regardless of which AI systems it uses.

The maintainer should educate the staff who handle the AI system, i.e. clearly state the goals the AI system may be used for, what data may be processed, what issues the system addresses and what principles underpin its workings, as well as any other considerations that help individuals make informed and reasonable decisions. The AI Act prescribes this as an explicit obligation for high-risk AI systems. Training is also needed for the people running internal processes and driving corporate innovation, because before a new system is adopted it is important to understand what benefits it will bring and what corporate goals it will help achieve.

AI literacy should be improved by any company adopting AI systems to improve its work, whatever the systems’ degree of risk. AI systems offer great potential, but if the individuals who use these tools in their day-to-day work do not know how to use them, or use them inefficiently, the company will not benefit, or the benefits will be smaller than planned. AI literacy is also a skill that needs regular upkeep, so regular training courses and workshops tailored to the systems the company uses and to its goals should not be forgotten, as general training does not always produce the desired effect.

High-risk AI system maintainers have the following additional obligations under the AI Act:

  • Setting up a risk management system for the entire life cycle of the high-risk AI system
  • Data management – the training, validation and test datasets must be appropriate, sufficiently representative, free of errors and complete according to the objective
  • Preparing technical documentation to demonstrate compliance and provide authorities with information for a compliance assessment
  • Providing human oversight
  • Identifying appropriate levels of accuracy, resilience and cybersecurity
  • Setting up and enforcing a quality management system

AI system maintainers face substantial penalties for breaching the AI Act: for a general breach, a maintainer faces a fine of up to EUR 750,000.

The penalty section highlights failure to meet the requirements for high-risk AI systems: such a breach attracts a fine of up to EUR 15,000,000 or, if the offender is a company, up to 3% of its total worldwide revenue for the previous financial year, whichever is greater. Of course, these are maximum penalties, and certain considerations apply in the interests of small and medium-sized enterprises, including startups, to safeguard their economic viability.
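To put the cap into perspective, here is a purely illustrative calculation with hypothetical figures: for a company whose worldwide revenue in the previous financial year was EUR 600 million, 3% comes to EUR 18 million, which exceeds EUR 15,000,000, so the maximum fine would be EUR 18 million; for a company with revenue of EUR 100 million, 3% is only EUR 3 million, so the EUR 15,000,000 ceiling would apply as the maximum instead.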

It’s important to note that no penalties under the AI Act can be imposed until 2 August 2025, so there is still time to get ready and ensure compliance.

In summary, the AI Act does not impose any significant additional obligations on most AI system maintainers, yet where a high-risk AI system is maintained, the maintainer is advised to make sure that all AI Act requirements are met to avoid penalties.
