The Malicious Use Of AI – Why Those “Black Mirror” Scenarios May Be Closer Than We Think
Developments in artificial intelligence (AI) are proving hugely beneficial for society, at both a commercial and an individual level (e.g. facial recognition and medical diagnostic technology, route mappers for navigation, robotic pets, cleaners and industrial robots). However, with the advancement of such technology, and the increasing integration of AI into our everyday lives, comes the potential for its misuse or misappropriation. This is the concern expressed by a number of research bodies and industry experts in a new report, which describes scenarios that could have been taken straight out of the science fiction series “Black Mirror”.
The report – “The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation” – considers the potential threats that could arise from the malicious use of AI technology and aims to identify certain interventions which may benefit from further evaluation to help foresee, prevent and mitigate these threats. The report defines AI as “the use of digital technology to create systems that are capable of performing tasks commonly thought to require intelligence” and it considers “Malicious Use” to include all practices which are intended to “compromise the security of individuals, groups or society”.
The report cites AI technology’s “dual-use” nature as the main reason for its susceptibility to malicious use (in that the same technology can be put to both beneficial and harmful ends). It identifies the aspects of AI and machine learning technology that make it particularly attractive to malicious users, and expands on a number of specific misuse scenarios which the authors perceive as being possible now or in the near future. For example, the paper describes a fictional situation in which a cleaning robot is repurposed to deliver and detonate a bomb in the presence of a government official (whom it has identified using facial recognition). It evades suspicion in the days leading up to the attack by posing as a legitimate machine and carrying out routine cleaning tasks.
The report calls for greater collaboration between researchers, developers and policymakers to develop a best-practice regulatory framework, with the objective of reducing the potential for future AI developments to be maliciously repurposed. One suggestion is that all such developments could be required to incorporate built-in protections against cyberattacks or tampering, in line with an agreed regulatory standard.
Developers of AI solutions may want to familiarise themselves with the contents of the report. Separately, contracts between suppliers and customers for AI solutions will become increasingly important. Customers will be looking for warranties around the security of the technology. Suppliers will be wary of guaranteeing that the technology is resistant to attack or misuse, particularly given that AI as a whole, and its known uses and abuses, is still in its relative infancy. Some interesting contract negotiations could arise over the allocation of liability for malicious misuse.
This is definitely a space to watch.