July 4, 2022

Volume XII, Number 185


Proposed new EU regulatory regime for Artificial Intelligence – more relevant to HR than you might think (UK)

For the last year or so the EU Commission has been working on the world’s first serious attempt to create a regulatory framework around the use of AI, the Artificial Intelligence Act.  The Proposal itself runs to over 100 pages of dense type with no pictures, and so is a fairly off-putting read at first glance.  However, it contains a number of provisions which may have significant repercussions for employers not just in the EU but also in the UK.

The underlying thinking behind the proposal is that left unchecked, AI can encroach quite severely on the fundamental rights of EU citizens.  Those rights are listed in paragraph 3.5 of the Explanatory Memo and are very much as you would expect – human dignity, private life, protection of personal data, non-discrimination, freedoms of expression and of assembly, health and safety, etc.  Moving away from traditional fundamental rights, however, paragraph 3.5 also includes workers’ rights to “fair and just working conditions”, while Article 36 of the Proposal itself appears to convey a fundamental right to future career prospects, something I wish I had known at a much earlier age.

How might employers’ use of AI threaten those rights?  There is no suggestion that that risk needs to be intentional – it is the innate characteristics of AI (opacity, complexity, dependency on data and autonomous behaviours) which by themselves create that possibility.  To head that off in what it describes as a proportionate manner, the proposed AIA will divide the use of AI into four categories of risk – unacceptable, high, limited and minimal.  Most employers would see their use of AI in the workplace as limited at best, but what makes the proposed regulatory regime more relevant than one might think is that it expressly places AI used for making decisions on recruitment, promotion and termination of work-related relationships, for task allocation and for monitoring and evaluating performance and behaviour of people in those contracts into the high risk band.
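For the technically minded, that four-tier structure can be illustrated in a few lines of code.  This is purely an illustrative sketch: the category labels and the classify() helper below are our own shorthand for the scheme described above, not anything drawn from the text of the Proposal itself.

```python
# Illustrative sketch of the AIA's four risk tiers, with the HR uses
# that the Proposal expressly places in the "high" band.  Names here
# are our own shorthand, not terms taken from the draft Act.

HIGH_RISK_HR_USES = {
    "recruitment_decisions",
    "promotion_decisions",
    "termination_decisions",
    "task_allocation",
    "performance_monitoring",
}

def classify(use_case: str) -> str:
    """Return the (sketched) AIA risk band for a workplace AI use case."""
    if use_case in HIGH_RISK_HR_USES:
        return "high"
    # Most other workplace uses would sit in the limited or minimal
    # bands; "unacceptable" uses are banned outright rather than graded.
    return "limited_or_minimal"

print(classify("recruitment_decisions"))  # high
print(classify("meeting_room_booking"))   # limited_or_minimal
```

The point of the sketch is simply that the band turns on the purpose the system is put to, not on how sophisticated the technology is.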

That will bring with it a number of consequences for employers using AI for that purpose, and that will be a substantial and growing population.  CV scanners, auto-marked non-verbal reasoning tests, work allocation software using details of an individual’s location or activity levels or capability, monitoring performance or conduct, etc. would probably all be caught.  Nothing in the AIA will make any of this illegal, but it will impose a bus-load of additional background obligations on both the providers and more particularly the users of such technology.  That may bring both cost and delay to your implementation or continued use of such systems going forward.  While none of this is yet set in stone, the current safeguards to be required include:

  • The creation of an AI compliance approval body by each member state.  This must meet requirements relating to independence, competence, and the absence of conflicts of interest, all issues with which UK Government bodies have had particular difficulties in the recent past.

  • To make the technology human-accessible, those who may be affected by it are to be entitled to “instructions of use” and “clear and concise information … including as to possible risks to fundamental rights”.  No doubt there will be plentiful guidance on this in due course, but as one who struggles with the instructions for flat-pack furniture, I can’t wait to see a layman-accessible explanation of an AI algorithm.  Leaving that detail aside, this seems to be not much more than an extension of the notices you already see under the data protection regime warning that you are on CCTV, your call is being recorded, your personal details will be kept on our site, etc. – an administrative nuisance, perhaps, but not that difficult.

  • High-risk systems should be designed such that “natural persons can oversee their functioning”.  In obvious apprehension of the Terminator’s Skynet, that means that the kit must be subject to limits which cannot be overridden by the system itself and must be “responsive to the human operator”.  Those operators must be demonstrated to have the appropriate “competence, training and authority” to carry out that role.  This all sounds very complex until you get to Article 14 of the Proposal, which indicates that making the system responsive to the operator can include the use of a stop-button “or similar procedure”, which presumably includes just pulling the plug out at the wall.

  • To gain approval, users must also demonstrate an adequate level of resilience and security against inadvertent errors, the mysterious “unexpected situations” and malicious interference.

  • Employers will need to keep records adequate to demonstrate the traceability of how, when and for what the AI system has been used, including the database against which the input data has been checked.  You will probably have much of this to hand if you have had to answer one of those DSAR questions around the automatic processing of the requester’s personal data.

  • If something goes wrong and your AI recruitment system begins to behave erratically, perhaps making clearly unsustainable decisions, not taking “stop” for an answer or showing inclinations towards world domination, users must notify the National Authority and cooperate with it in sweeping up afterwards, again not a huge step onwards from the existing data protection regime.
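The human-oversight requirement above – limits the system cannot override itself, responsiveness to the operator, a stop-button “or similar procedure” – can be sketched in a few lines.  This is an entirely hypothetical toy, not anything mandated by Article 14:

```python
# Toy illustration of Article 14-style human oversight: the stop flag
# can only be set by the human operator, and nothing in the system's
# own processing loop is able to clear it.

class OverseenSystem:
    def __init__(self) -> None:
        self._stopped = False

    def stop(self) -> None:
        """The operator's 'stop button' - the system cannot unset this."""
        self._stopped = True

    def process(self, items: list[str]) -> list[str]:
        results = []
        for item in items:
            if self._stopped:  # responsive to the human operator
                break
            results.append(item.upper())
        return results

system = OverseenSystem()
print(system.process(["cv1", "cv2"]))  # ['CV1', 'CV2']
system.stop()
print(system.process(["cv3"]))         # [] - nothing runs once stopped
```

Pulling the plug out at the wall remains, of course, the reference implementation.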
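Likewise, the record-keeping obligation lends itself to a small sketch of what a traceability log might capture – what was decided, when, by which system and against which reference database.  All field names here are assumptions for illustration only, not a format prescribed by the Proposal:

```python
# Hypothetical traceability record for an AI-assisted HR decision,
# appended to a JSON-lines audit log.  Every field name is an
# assumption for illustration; the AIA prescribes no such schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    system_name: str
    purpose: str            # e.g. "CV screening"
    input_reference: str    # pointer to the input data, not the data itself
    reference_database: str # database the input data was checked against
    outcome: str
    timestamp: str

def log_decision(record: AIDecisionRecord, path: str = "ai_audit.jsonl") -> None:
    """Append one decision record to a JSON-lines audit log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

record = AIDecisionRecord(
    system_name="cv-scanner-v2",
    purpose="CV screening",
    input_reference="candidate-4821",
    reference_database="role-requirements-2022",
    outcome="shortlisted",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
log_decision(record)
```

An append-only log of this kind would also go a long way towards answering the DSAR questions mentioned above.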

Assorted heinous penalties await those employers which get this materially wrong, especially after an initial transition period – up to the higher of €30 million or 6% of global revenue for the most serious infringements, while supplying misleading information to the relevant national authority in your application for approval of your AI system attracts a relatively trivial maximum of €10 million.

All this is very interesting, obviously, but post-Brexit, of what real interest to UK employers?  First, the AIA will apply to any use of such technology in the EU.  UK employers deploying it in relation to their operations and staff there will certainly be caught immediately even if the system itself is based in the UK or decisions arising from it are processed or acted upon here.  Second, much as on the data protection/GDPR front, UK businesses are likely to come under pressure to play by similar rules as a condition of access to EU markets, either through legislative compulsion or commercial procurement requirements.  Even wholly domestic businesses may therefore end up dealing with some variation of this.

How far the Proposal will be refined before it becomes law is open to doubt.  Our team in Brussels says that the EU Commission is aware of the risk of regulatory over-reach from the protection of fundamental rights into the mundane daily workings of a business, but has yet to work out how to address it without diluting the bigger-picture protections.  On that front the Commission is stuck in a hail of flak from all sides – from industry, fearing that these measures will put it at a competitive disadvantage against regimes with a less scrupulous approach to those rights, on the one hand, and from unions and civil rights bodies saying that the Proposal does not go far enough on the other.  That seems to suggest that it is in roughly the right place overall.  It is impossible not to think, however, that so far as employee and worker rights are concerned it is also basically unnecessary – almost all the ills it is designed to prevent seem more than adequately covered already by existing common law and the health & safety, discrimination and data protection legislation.

Actions for employers

  • If you have yet to sign the contract on a new AI system, pre-emptively discuss responsibility for compliance with the AIA with your intended supplier or third party operator.

  • Check whether any existing AI system has been reliable and if not, why not.  This may be inadequate staff training, confused authority levels or the megalomaniac fantasies of the equipment itself, in all of which cases you should take steps now to put it right.

  • If your system can generate records, review them now to see that they contain the minimum details which will be required.

  • Keep an eye open for news on the implementation of the AIA and the opening for business of any relevant national approval body.  The new law may be a couple of years off, but it is almost certainly heading this way and as matters stand the likelihood of the UK introducing an AI approval body which is not beset by delays, IT failures and political division seems on the thin side.  As and when, therefore, it probably makes sense to get your application for approval in as soon as you can.

  • Don’t think that this is just about your workforce – the AIA’s principles will extend into any area of your business using that technology, and there will be more of those in two years than there are now.

  • Put a fresh coat of red paint on that stop button.

© Copyright 2022 Squire Patton Boggs (US) LLP

About this Author

David Whincup, Employment Partner, Squire Patton Boggs

Following 10 years at a Magic Circle firm, David has been head of our London Labor & Employment Practice since 1994.

His expertise, gained from over 30 years as a specialist employment law practitioner, covers a wide variety of employment-related issues, including individual and team recruitment issues, policy and contract drafting, disciplinary and grievance procedures, individual and collective redundancies, the defence of employee discrimination and dismissal claims and other litigation, whistleblowing, employee health, data protection and matters surrounding confidentiality and...

+44 20 7655 1132