The AI Act in practice: challenges and opportunities for the development of artificial intelligence in the EU

26 July 2024 | Knowledge, News, The Right Focus

On 12 July 2024, just over three years after work on it began, the AI Act, i.e. the Regulation laying down harmonised rules on artificial intelligence, was published in the Official Journal of the European Union. The Regulation will enter into force 20 days after publication and will apply in full from 2 August 2026, although some provisions, such as the prohibitions on certain AI practices, will apply earlier. This means that providers and deployers of artificial intelligence systems will soon face a number of new obligations.

We look at how to prepare for this effectively.

How to prepare your business for the AI Act

Before embarking on large-scale implementation work, every company should consider which AI systems, if any, it uses and what its role is in relation to them.

This is because the extent of your obligations will depend on the type of system and on whether you are the system’s provider, merely a deployer, or whether you use the system with modifications of your own.


Classification of systems

The AI Act classifies AI systems according to their level of risk:

  • Solutions deemed to pose an unacceptable risk, such as those using subliminal techniques or social scoring based on behaviour or personal characteristics, are prohibited
  • High-risk systems, such as those using biometric data or used in employee recruitment, will be allowed only after meeting additional requirements, including, without limitation:
    • Monitoring system performance
    • Ensuring that input data is relevant and representative
    • Complying with registration obligations
  • Limited-risk systems, such as chatbots or technologies that manipulate audiovisual content, will be subject to transparency obligations: users must be informed that they are dealing with an artificial intelligence system
  • Minimal-risk systems, such as spam filters, will remain free to use, although, as with all AI systems, providers and deployers should take measures to ensure that their staff and others operating the systems have a sufficient level of AI literacy

In addition, the AI Act singles out general-purpose AI models, such as the models underlying tools like ChatGPT.

Defining your role

In order to prepare adequately for the AI Act, the first step is to map and identify the processes in which AI is used. This will enable you to determine whether you are dealing with an AI system within the meaning of the Regulation and to verify its technical standards.

You will then need to classify the system according to the risk categories outlined above, define your role, i.e. whether you are a provider or a deployer and whether you are modifying the system, and identify your specific responsibilities.

The next step will be to develop appropriate procedures and documentation, including:

  • Policy on the use of AI systems
  • Technical documentation on the technologies used
  • Risk management mechanisms
  • Procedures for dealing with customers or recipients of the system

Preparation for action

As part of your operational preparation to meet your obligations under the AI Act, it is advisable to:

  • Carry out an AI risk assessment (AIRA) process and designate a structure responsible for managing AI, monitoring risks, ensuring compliance and implementing appropriate internal policies
  • Assess the AI systems in use and analyse the associated risks (e.g. discrimination, data breaches) and compliance gaps
  • Ensure that appropriate cybersecurity standards are in place
  • Protect the organisation against potential incidents, including by developing appropriate procedures for preventing, responding to and reporting incidents to the relevant authorities
  • Establish good practices, for example in terms of staff preparation or customer information standards
  • If using technology provided by an external provider, also assess that provider using an AI vendor risk assessment matrix or another methodology

In addition, providers of high-risk systems should also consider:

  • Ensuring that the system meets the requirements of the AI Act
  • Implementing a quality management system
  • Properly labelling the AI system
  • Conducting a conformity assessment and preparing a declaration of conformity
  • Fulfilling registration obligations and obligations towards supervisory authorities

Reporting obligations

The AI Act requires providers of high-risk artificial intelligence systems to report serious incidents.

Serious incidents are those that directly or indirectly lead, could have led or are likely to lead to the death of a person or serious harm to a person’s health, serious harm to property or the environment, or the serious and irreversible disruption of the management or operation of critical infrastructure.

Incident prevention and response mechanisms should therefore be developed.

In addition, it is important to remember that compliance with the obligations under the AI Act will often overlap with the requirements of other regulations, such as the GDPR, DORA, DMA, DSA, or regulations on copyright protection, among others.

The AI Act also provides for the establishment of the AI Office, which will supervise certain systems, support the development of standards and enforce the rules set at EU level.

In addition, each Member State should establish its own competent authority for AI matters or delegate such powers to an existing body. In Poland, this role will be fulfilled by the newly established Commission for Artificial Intelligence Supervision, according to the Ministry of Digital Affairs.

The AI Act – a summary

In summary, by imposing obligations on providers and deployers of AI-based solutions, the AI Act will affect not only Big Tech companies but all businesses using AI.

It is predicted that within the next two years, almost 80 per cent of businesses will be using AI-based systems and will therefore fall under the AI Act to some extent.

This will require the implementation of appropriate policies, procedures and comprehensive AI governance, as well as managing the risks of relying on AI-based solutions provided by third parties.

It is therefore advisable to make the appropriate organisational and technical preparations now and to ensure compliance with the new regulations.

Any questions? Contact us

Natalia Kotłowska-Wochna

Mikołaj Kuterek


Contact us:

Natalia Kotłowska-Wochna

Attorney-at-Law / Head of New Tech M&A / NewTech Practice Group / Head of the Poznan Office

+48 606 689 185

n.kotlowska@kochanski.pl