Artificial intelligence is now a reality, and it is starting to affect our personal and working lives. Machine builders are already looking at using artificial intelligence (AI) for a variety of tasks, including within control systems for automation and robotics. Derek Coulson, founder of Safe Machine, explains what machine builders need to know about the EU AI Act.
Depending on how AI systems are deployed, there can be safety implications; for example, AI might determine how a robot reacts when a person enters its operating zone.
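As an illustration of the kind of safety implication meant here, the sketch below shows a deterministic rule layer bounding whatever speed an AI model recommends when a person is detected in a robot's operating zone. The function and parameter names are hypothetical, not from any real robot API; the point is the design principle that AI output is never trusted to relax a safety constraint.

```python
# Hypothetical sketch: a rule-based safety layer that bounds an AI model's
# speed recommendation. Names (safe_speed, reduced_limit) are illustrative.

def safe_speed(ai_recommended_speed: float, person_in_zone: bool,
               reduced_limit: float = 0.25) -> float:
    """Clamp the AI's speed recommendation while a person is in the zone.

    The AI may suggest any speed, but a deterministic rule enforces a
    reduced limit whenever a person is detected - the AI's output can
    never override the safety constraint.
    """
    if person_in_zone:
        return min(ai_recommended_speed, reduced_limit)
    return ai_recommended_speed
```

In this pattern the AI influences behaviour within limits set by conventional, verifiable logic - one plausible way the scepticism described below might be addressed.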
If you think the idea of using AI for safety-related control systems is far-fetched, this author can remember the widespread scepticism when programmable electronic safety systems were first introduced. Many engineers were reluctant to rely on software for safety functions, yet programmable safety systems are now well proven and commonplace.
The European Commission has decided to legislate with the aim of promoting the uptake of AI that is human-centric and trustworthy. The AI Act (sometimes known as the AI Regulation) also refers to protecting fundamental rights and ethical principles. Certain types of AI system that present unacceptable risks are being outlawed, such as those that exploit individuals' vulnerabilities and those that utilise subliminal techniques to distort behaviours. Having a legislative framework in place is intended to help promote innovation, investment and the adoption of AI in general.
Key dates
On 1 August 2024, the Artificial Intelligence Regulation (EU) 2024/1689 entered into force as the AI Act and most of the legislation is applicable from 2 August 2026. However, the date that is of most interest to machine builders is 2 August 2027, as this is when high-risk AI systems used as safety components become regulated.
In the UK, the government is not introducing AI legislation, as it feels existing laws are sufficient. Nevertheless, the situation will remain under review as the government waits to see how the risks and opportunities develop.
Meanwhile, UK machine builders supplying to the EU will have to comply with the AI Act. If an AI system is operating outside the EU and its output impacts people within the EU, then it also needs to comply - though this is unlikely to be the situation for AI systems used as safety components on machines.
As with most regulations, the AI Act is not retrospective. This means an AI system does not need to comply if it is placed on the market before the AI Act becomes applicable. However, if the AI system is substantially modified after the AI Act becomes applicable, then it will have to be conformity assessed and CE marked to indicate its compliance.
Steps to compliance
Before an AI system is placed on the market or put into use for the first time in the EU, it needs to be CE marked. This is true whether an AI system is supplied on its own or embedded within a product such as a machine. If an AI system is embedded within a product, then the product manufacturer becomes responsible for the AI system and takes on the obligations of a 'provider'. Note that the AI Act differentiates between 'providers' and 'deployers'. For example, a machine builder would be the provider of an AI system and the end user would be the deployer.
The CE marking process has several similarities to that for CE marking to the Machinery Directive. Most of the steps along the route to compliance will therefore be familiar to anyone who has CE marked a machine to the Machinery Directive.
Several categories of AI system are defined in the AI Act. For machine builders, the one of interest is high-risk AI systems used as safety components. 'High-risk AI systems' are ones with the potential to have a significant impact on the health, safety or fundamental rights of persons. Much of the AI Act applies to other types of AI system, so not all 144 pages of Regulation (EU) 2024/1689 are relevant to machine builders.
As with the Machinery Directive, the AI Act lays down procedures to be followed for conformity assessment. The Act covers both self-certification and the use of third-party assessment bodies (Notified Bodies). Fortunately, self-certification should be adequate for AI systems used as safety components. However, if a Notified Body certifies compliance, then the certification expires after five years and the AI system would have to be reassessed and recertified.
The easiest way to demonstrate conformity with the requirements is to apply standards that are harmonised to the AI Act. The necessary standards have not yet been written or harmonised, but they are due to be ready by the end of April 2025. If suitable standards are not available, or if the European Commission finds them to be inadequate, the AI Act allows for Implementing Acts to establish common specifications. Complying with these common specifications will provide a presumption of conformity in the same way as complying with harmonised standards.
In addition to the harmonised standards and/or common specifications, the European Commission will publish guidelines on the application of the AI Act.
Once an AI system has been assessed as being in compliance with the requirements of the Act, a Declaration of Conformity (DoC) can be drawn up. If an AI system is embedded within another product such as a machine, the DoC can be incorporated within the machine's Machinery Directive DoC. Similarly, the technical documentation compiled for compliance with the AI Act can be incorporated within that relating to the Machinery Directive.
To indicate the claimed compliance, high-risk AI systems must have a physical CE marking applied. Where this is not possible, the CE mark should be applied to the packaging or accompanying documentation. The physical marking may be complemented by a digital CE marking. For high-risk AI systems that are only provided digitally, a digital CE marking should be used. If a CE marked machine features an embedded high-risk AI system used as a safety component, the machine's CE marking must indicate compliance with both the Machinery Directive and the AI Act.
An important point to note for providers outside the EU is that the AI Act requires an Authorised Representative (AR) to be appointed. The AR must be a natural or legal person in the EU and possess a mandate from the provider to perform certain tasks under the AI Act. Both the instructions and DoC must show the AR's identity and contact details.
A point to note about the detail of the AI Act is that high-risk AI systems must be designed so that natural persons can oversee their functioning. For an AI system used as a safety component, it will be interesting to see how machine builders fulfil this requirement.
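One conceivable way to satisfy the oversight requirement is sketched below: the AI system's proposed actions are recorded for operator visibility, and a natural person can halt the system's output at any time. The class and method names are hypothetical - this is a design sketch, not a prescribed mechanism from the Act.

```python
# Hypothetical sketch of a human-oversight hook for a high-risk AI system.
# The AI proposes actions; a human operator can inspect them and halt the
# system at any time. All names are illustrative.

class OversightController:
    def __init__(self):
        self.halted = False
        self.pending = []  # proposed actions, visible to the operator

    def propose(self, action: str) -> str:
        """The AI submits an action; it is recorded for operator review."""
        if self.halted:
            return "halted"
        self.pending.append(action)
        return "accepted"

    def operator_halt(self) -> None:
        """A natural person stops the AI system's output."""
        self.halted = True
```

For example, after `operator_halt()` is called, any further `propose()` calls return "halted" rather than taking effect.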
Ongoing obligations
After a high-risk AI system has been placed on the market, the provider is obliged to undertake post-market monitoring for the system's lifetime. If any serious incidents occur, these must be reported to the relevant market surveillance authority.
High-risk AI systems used as safety components are required to have automatic event logging, which will assist with post-market surveillance.
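A minimal sketch of what such automatic event logging might look like is shown below. The record fields are illustrative assumptions - the Act requires that relevant events be logged automatically over the system's lifetime, but does not prescribe this schema.

```python
# Hypothetical sketch of automatic event logging for a high-risk AI system.
# Field names and event types are illustrative, not mandated by the AI Act.

import json
from datetime import datetime, timezone

def log_event(log: list, event_type: str, detail: dict) -> dict:
    """Append a timestamped, machine-readable event record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        "detail": detail,
    }
    log.append(record)
    return record

events: list = []
log_event(events, "person_detected", {"zone": "A", "speed_limited_to": 0.25})
log_event(events, "person_cleared", {"zone": "A"})

# Serialised records can later support post-market monitoring or be
# provided to a market surveillance authority on request.
serialised = json.dumps(events)
```

Keeping the records machine-readable and timestamped makes it straightforward to reconstruct what the system did and when, which is the substance of post-market surveillance.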
If an AI system undergoes substantial modification, then its conformity must be reassessed. 'Substantial modification' includes using an AI system for a purpose for which it was not originally intended.
Penalties for non-compliance
Within the AI Act, there are rules governing penalties for non-compliance, which includes failing to provide the relevant authorities with information or access upon request. Penalties can apply to providers, deployers, importers, distributors and authorised representatives, as well as notified bodies. Penalties take the form of fines that can be up to EUR 15 million or 3 per cent of worldwide annual turnover. Fines relating to prohibited AI systems are substantially higher.