Machine builders are beginning to implement artificial intelligence (AI) for a variety of tasks, such as predictive maintenance and control systems. If AI is used within control systems, there can be safety implications. For example, AI might determine how a robot reacts when a person enters its operating zone. Derek Coulson, compliance specialist at Hold Tech Files, explains.
In the UK, the government is not legislating on the development and deployment of most AI systems, as it feels existing laws are sufficient (though the creation of certain deepfakes is being banned). Nevertheless, the situation will remain under review while the use of AI expands, and its risks and opportunities become clearer.
In contrast, the European Commission is legislating, with the aim of promoting the uptake of AI that is human-centric and trustworthy. AI systems that have unacceptable risks are being outlawed, such as those that exploit individuals' vulnerabilities. Having a legislative framework in place is intended to help promote innovation, investment and the adoption of AI in general.
Machine builders that embed AI systems within machines will need to comply with the AI Act (also known as the AI Regulation) if the machines are to be placed on the market in the EU. Compliance will be required, regardless of where machine builders are based.
If an AI system is operating outside the EU and its output impacts people within the EU, then it also needs to comply - though this is unlikely to be the situation for AI systems used on machinery.
On 1 August 2024, the Artificial Intelligence Regulation (EU) 2024/1689 entered into force and most of it applies from 2 August 2026. However, the date that will be of interest to machine builders is 2 August 2027, as this is the date from which high-risk AI systems used as safety components will be regulated.
As with most legislation, the AI Act is not retrospective, so AI systems do not need to comply if they are placed on the market before the AI Act becomes applicable. However, if an AI system is substantially modified after the AI Act becomes applicable, then it will have to be conformity assessed and CE marked to indicate its compliance.
CE marking
Before an AI system is placed on the market or put into use for the first time in the EU, it must be CE marked. This is the case whether an AI system is supplied on its own or embedded within a product such as a machine. If an AI system is embedded, then the product manufacturer becomes responsible for the AI system and takes on the obligations of a 'provider'. Note that the AI Act differentiates between 'providers' and 'deployers' of AI systems. For example, a machine builder would be the provider and the end user would be the deployer.
The CE marking process is similar to that under the Machinery Directive, so most of the steps along the route to compliance will be familiar to anyone who has CE marked a machine.
Several categories of AI system are defined in the AI Act. For machine builders, the category of interest is high-risk AI systems used as safety components. High-risk AI systems are those with the potential to have a significant impact on the health, safety or fundamental rights of persons.
CE marking of such AI systems involves conformity assessment; for high-risk AI systems used as safety components, self-certification should be acceptable. The easiest way to demonstrate compliance with the Act's requirements will be to apply harmonised standards, though these are yet to be published. Once the system has been assessed as compliant, a Declaration of Conformity (DoC) can be prepared and the CE mark applied. If a physical CE mark cannot be affixed to the system, it can be shown on the packaging or accompanying documentation, or a digital CE mark can be used.
Authorised representatives
Providers outside the EU are required to appoint an Authorised Representative (AR). The AR must be a natural or legal person in the EU and possess a mandate from the provider to perform certain tasks under the AI Act. Both the instructions and the DoC must show the identity and contact details for the AR.
After a high-risk AI system has been placed on the market, the provider is obliged to undertake post-market monitoring for the system's lifetime. If any serious incidents occur, these must be reported to the relevant market surveillance authority.
High-risk AI systems used as safety components are required to have automatic event logging, which will assist with post-market monitoring. If an AR has been appointed, then one of their obligations is to provide the relevant competent authority with access to these logs upon request.
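To illustrate what automatic event logging might look like in practice, below is a minimal sketch of a logger for an AI-based safety component. It is illustrative only: the class, file format and record fields (SafetyEventLogger, JSON lines, situation/decision/confidence) are assumptions, not requirements taken from the AI Act, which specifies the goal of lifetime traceability rather than any particular implementation.

```python
# Minimal illustrative sketch of automatic event logging for an AI-based
# safety component. All names and fields here are hypothetical; the AI Act
# requires traceable logging, not this particular design.
import json
import time
from pathlib import Path

class SafetyEventLogger:
    """Append-only, timestamped event log for an AI safety component."""

    def __init__(self, log_path: str = "safety_events.jsonl"):
        self.log_path = Path(log_path)

    def log_event(self, situation: str, decision: str, confidence: float) -> None:
        """Record one safety-relevant decision as a single JSON line."""
        record = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
            "situation": situation,    # e.g. "person detected in operating zone"
            "decision": decision,      # e.g. "reduced speed", "stopped"
            "confidence": confidence,  # the model's confidence in its decision
        }
        # Append-only: existing entries are never modified or removed, so the
        # log can be handed over intact if a competent authority requests it.
        with self.log_path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

# Example: a robot slows down when a person enters its operating zone.
logger = SafetyEventLogger()
logger.log_event("person detected in operating zone", "reduced speed", 0.97)
```

In a real system the log would also need protection against tampering and a defined retention period; those details are beyond the scope of this sketch.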
If an AI system undergoes substantial modification after it has been placed on the market or put into use, then its conformity must be reassessed. 'Substantial modification' includes using an AI system for a purpose for which it was not originally intended.
Penalties for non-compliance
Within the AI Act, there are rules governing penalties for non-compliance, including failing to provide the relevant authorities with information or access upon request. Penalties can apply to providers, deployers, importers, distributors and authorised representatives, as well as notified bodies. Penalties take the form of fines of up to EUR 15 million or 3 per cent of worldwide annual turnover, whichever is higher. Fines relating to prohibited AI systems are substantially higher, at up to EUR 35 million or 7 per cent of worldwide annual turnover.