The European Artificial Intelligence Act, the world’s first comprehensive regulation on artificial intelligence (AI), came into force last month (August), with implications for systems used in the employment and management of workers.

The AI Act is designed to ensure that AI developed and used in the EU is trustworthy, with safeguards to protect people’s rights. The regulation aims to establish a harmonised internal market for AI in the EU, encouraging the uptake of this technology and creating a supportive environment for innovation and investment. 

The Act defines four categories of risk for AI systems: minimal risk, specific transparency risk, high risk and unacceptable risk. Most AI systems, such as AI-enabled recommender systems and spam filters, fall into the minimal risk category and face no obligations under the Act, although organisations can voluntarily adopt additional codes of conduct.

Under the specific transparency risk category, AI systems such as chatbots must clearly disclose to users that they are interacting with a machine. AI-generated content such as deepfakes must be labelled as such, and users must be informed when biometric categorisation or emotion recognition systems are being used.

The high risk category includes AI systems used in the employment and management of workers, such as those that place targeted job advertisements, analyse and filter job applications, and evaluate candidates. These systems must meet strict requirements, including risk mitigation, high-quality data sets, activity logging, detailed documentation, clear user information and human oversight.

The unacceptable risk category covers AI systems considered a clear threat to people’s fundamental rights, which are banned outright; these include AI systems or apps that manipulate human behaviour to circumvent users’ free will.

Companies that fail to comply face fines: up to 7% of global annual turnover for violations of the ban on certain AI applications, up to 3% for violations of other obligations and up to 1.5% for supplying incorrect information.