The ethical implications of machines being able to read human emotions will be explored in a ground-breaking new research project, involving Northumbria University criminologist Dr Diana Miranda.
Emotional Artificial Intelligence (AI) is an emerging technology which allows machines to sense, learn and interact with people’s emotions, moods and intentions, using data from our body movements, voices, facial expressions and even body temperature.
With the rise in smart devices, buildings and cities, experts believe that emotional AI may have the capacity to make our lives safer, especially when it comes to preventing crime and improving security.
However, the technology also raises serious ethical questions: the collection of such personal data, especially in public spaces, is likely to cause mistrust and concern among citizens.
In a new project, entitled Emotional AI in Cities: Cross Cultural Lessons from UK and Japan on Designing for An Ethical Life, researchers will explore how people feel about emotional AI and what measures can be put in place in future to address those concerns.
Jointly funded by the UK’s Economic and Social Research Council (ESRC) and the Japan Science and Technology Agency (JST), the three-year project involves academics from universities in both countries, including Northumbria’s Dr Miranda.
A Lecturer in Criminology and a member of Northumbria’s Centre for Crime and Policing, Dr Miranda will be focusing on the use of emotional AI in policing and security, exploring both the benefits and potential issues.
Speaking about the project, she said: “We must understand the social and ethical implications that arise from the use of different techniques that allow our bodies, feelings, intentions and emotional states to be read by machines.
“This project aims to explore such implications by considering how these technologies could be used in different settings and the differences between the two countries we are working in, namely their social, cultural and legal particularities.”
The researchers will consider what it means to live well and ethically alongside emotional AI in smart cities, in the context of commercial settings such as shops, security settings such as policing and within the context of the media, including social media.
As well as interviewing organisations already developing or deploying emotional AI in smart cities, the team will examine existing governance for the collection and use of intimate data in relation to people’s emotions, especially in public spaces.
They will also speak to people living in smart cities to find out more about their diverse attitudes to emotional AI. This information will be used to provide recommendations for future developments and shape how emotional AI is used in the future.
Dr Miranda believes the use of emotional AI could have a significant impact on policing and security. However, as she explains, the process is far from simple.
“Emotional AI is not about identifying an individual but identifying their intention,” she said.
“This is done by ‘reading’ how a person behaves – their voice, intonation, facial expression, physiological characteristics such as their heart rate and temperature, and how they move.
“This is already being used in ways which might improve safety – for example in cars emotional AI can be used to identify when a driver is feeling tired and may need a break.
“However, it is a rapidly evolving technology and we still know very little about the ethical impact on our lives, or how people feel about it.”
One of the outcomes of the project will be the development of a think tank which will provide impartial advice on the use of emotional AI to governments around the world, as well as industry, educators and other stakeholders.
Find out more by visiting the Emotional AI Lab.
The project is one of six funded through UK Research and Innovation’s (UKRI) Fund for International Collaboration (FIC) in a joint UK-Japan initiative. The Economic and Social Research Council (ESRC) and the Arts and Humanities Research Council (AHRC), both part of UKRI, contributed £2.4m via FIC, while the Japan Science and Technology Agency (JST) contributed ¥180m.
The UK team is led by Prof Andrew McStay (expert on the social impact of emotional AI, Bangor University). The other UK Co-Investigators are Prof Vian Bakir (dataveillance and disinformation expert, Bangor University), Dr Lachlan Urquhart (multidisciplinary expert in IT law, computing and smart cities, University of Edinburgh) and Dr Diana Miranda (criminology and surveillance technology expert, Northumbria University). The Japan team is led by Prof Peter Mantello (dataveillance and predictive policing expert, Ritsumeikan Asia Pacific University). The other Japan Co-Investigators are Dr Hiromi Tanaka (digital media and gender expert, Meiji University), Prof Nader Ghotbi (cross-cultural ethics and health expert, Ritsumeikan Asia Pacific University) and Prof Hiroshi Miyashita (AI and data privacy expert, Chuo University).