News

Understanding the risks and benefits of artificial intelligence and robotics
A workshop for media and security professionals


Cambridge (UK), 7 February 2017. The potential risks and benefits associated with advancements being made in the fields of artificial intelligence (AI) and robotics were analyzed and discussed during a two-day workshop organized by the United Nations Interregional Crime and Justice Research Institute (UNICRI) in collaboration with, and hosted by, the Cambridge Centre for Risk Studies. The event took place at the University of Cambridge Judge Business School (United Kingdom) from 6 to 7 February 2017.

Journalists and representatives of academia, international organizations and the private sector from 20 countries met with leading AI and robotics experts to deepen their understanding of advancements in AI and robotics, with a special focus on their potential global security implications. “As with any other technology, AI poses both risks and benefits. However, the speed of development of this technology is exponential, and it is accelerating and intensifying both risks and benefits,” explained Konstantinos Karachalios, Managing Director of the Institute of Electrical and Electronics Engineers (IEEE) Standards Association.

The Risks and Benefits of Artificial Intelligence and Robotics

Experts agreed that while such technological developments may be instrumental in helping United Nations Member States achieve the 2030 Sustainable Development Goals, advances in this field could at the same time pose a wide range of legal, ethical and security challenges. Kay Firth-Butterfield, from the Robert S. Strauss Center for International Security and Law (University of Texas), noted that “AI is growing at a faster pace than the law can possibly react. There is not any real regulation around AI at all, apart from that developed by the European Union.”

Noel Sharkey from the University of Sheffield (UK) and co-founder of the Foundation for Responsible Robotics (FRR) noted that “AI development is not posing any threat, but it is the way it is being applied by humans that can represent a threat.” He warned that “there is an emerging arms race for autonomous weapons systems that clearly at the moment cannot comply with international humanitarian law.” During the workshop, experts shared information and tools that will aid and promote knowledgeable and reliable reporting on AI and robotics. “The role of media is pivotal to inform the public on the real potential of AI,” said Sharkey. “These kinds of workshops are essential for journalists to really understand the technology and not to run off with motifs of ‘terminators’ in their reporting,” he noted.

The workshop also focused on how to communicate advancements in AI and robotics to inform citizens and institutions in decision-making capacities. Participants stressed the importance of broader cooperation among the international, technical and political communities in the sphere of AI regulation, as well as the need to make the scientific community more responsible.

In this connection, Mr. Irakli Beridze, Senior Policy and Strategy Advisor at UNICRI, highlighted the opening of the UNICRI Centre for Artificial Intelligence and Robotics in The Hague (The Netherlands), which, he explained, “will seek to enhance understanding of the risk-benefit duality of AI and robotics through improved coordination, knowledge collection and dissemination, awareness-raising and outreach activities.” He concluded: “By utilizing the knowledge and information of experts in the field, we will ensure that policy-makers and other relevant stakeholders possess improved knowledge and understanding of both the risks and benefits. We believe that this will help to generate appropriate and balanced international discussions on this important topic.”

The speakers of the workshop, which was implemented within UNICRI's Public Information Programme on New Threats, included: Konstantinos Karachalios, Kay Firth-Butterfield and John C. Havens (IEEE); Noel Sharkey (University of Sheffield and FRR); Daniel Ralph, Simon Ruffle and Jennifer Copic (Cambridge Centre for Risk Studies, University of Cambridge); Natalie Mullin (1QBit); Dave Palmer (Darktrace); Olly Buston (Future Advocacy); Kyle Scott (Future of Humanity Institute, University of Oxford); Stephen Cave (Leverhulme Centre for the Future of Intelligence, University of Cambridge); and, as organizers/hosts, Irakli Beridze (UNICRI) and Michelle Tuveson (Cambridge Centre for Risk Studies, University of Cambridge).

More information on the programme
