With the development of artificial intelligence (AI) across the globe progressing at breakneck speed, it is generally accepted that humanity is within touching distance of living in a world where AI plays a ubiquitous role in our everyday lives.
From an employment perspective, concerns have been raised over the potential risks of AI, including job displacement and unlawful discrimination. In particular, internal processes such as recruitment, performance management, and disciplinary action could expose employers to claims of discrimination arising from alleged issues with visual or audio cues, or to claims of failing to make reasonable adjustments for disabled candidates. Because machine-learning systems can be opaque, employers also face the risk of being unable to demonstrate sufficient transparency in their use of AI in decision-making to counter claims of unfair treatment. Employers will also need to be aware of issues relating to the monitoring of employees, about which data regulators, including the UK’s ICO, have been vocal in recent times.
Whether law and regulation can be developed to respond adequately to advances in AI, supporting its use as a force for innovation without fettering the fundamental rights of individuals, has been a subject of debate for some time.
The AI Act
The European Union (EU), through the European Commission’s proposed AI Act, aspires to go a significant way toward resolving such issues. The AI Act will have a global impact, as it will apply to:
organisations providing or using AI systems in the EU; and
providers or users of AI systems located in a third country (including the UK and US), if the output produced by those AI systems is used in the EU.
The Act, which is currently predicted to be adopted by the end of 2023, follows a risk-based approach: it assigns applications of AI to four risk categories, imposes mandatory compliance requirements on providers and users, and regulates only as strictly as necessary to address each specific level of risk.
[Figure: the four risk categories under the AI Act. Data source: European Commission.]
For high-risk AI systems impacting the employment context, the Act proposes mandatory requirements for organisations to comply with. These include:
Establishing a risk management system
Training data and data governance
Accuracy, robustness, and cybersecurity
Conformity assessment and registration in an EU-wide database
Even for the lower-risk categories, compliance with these requirements is recommended.
The final text of the Act is subject to intense negotiation between the European Commission, the Council of the EU, and the European Parliament. Nonetheless, it is clear that organisations will be expected to explain how their use of AI works and to demonstrate transparency by keeping proper records of data sets, decisions, policies, and protocols relating to AI outputs.
Failure to do so could result in heavy fines under the proposed Act (up to €30 million or 6% of total worldwide annual turnover), depending on the severity of the infringement.
Organisations that do business in the EU and want to secure the potential benefits of AI while mitigating potential employment risks will need to take practical steps to audit and enhance their existing compliance footprint.