Perspectives on privacy in human-robot interaction


Mona Naomi’s doctoral research investigates the ethical and legal concerns that arise from how the design and functionality of robots influence human-robot interaction. The project focuses on the use of robots in the health sector, and on advanced robots that use sensors and artificial intelligence (AI) to perform tasks and interact with humans.

About the Project

Robots depend on data to work properly: for example, spatial data from sensors to move around a room, camera data to detect and recognise people, and the data needed for deep learning with AI. Much of this data will be personal data, so the use of a robot must comply with the General Data Protection Regulation (GDPR). However, although the GDPR is deemed to apply to robots and AI, it is uncertain whether the regulation is sufficient, since the GDPR only applies to the entity choosing to use the robot (the controller) and not to the manufacturer. As a result, the development and design of a robot may not be aligned with the requirement of data protection by design and by default.

The extensive use of personal data raises the question of whether the amount or quality of the data can be reduced (data minimisation) to respect the privacy of the persons interacting with the robot. Such reduction may, however, affect the functionality of the robot, and in particular safety in human-robot interaction.
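To make this trade-off concrete, the following is a minimal, hypothetical sketch (not drawn from the VIROS project or any particular robot) of what data minimisation could look like in a robot's perception pipeline: camera frames are coarsened so that a person remains detectable for navigation and safety purposes but is no longer identifiable, and only derived detection records, rather than raw images, are retained. All function names and parameters are illustrative assumptions.

# Illustrative sketch of data minimisation in a robot's perception pipeline.
# Hypothetical example; names and parameters are not taken from the VIROS project.
import numpy as np

def minimise_frame(frame: np.ndarray, factor: int = 8) -> np.ndarray:
    """Downsample a camera frame by block averaging, so a person stays
    detectable as an obstacle but is no longer identifiable."""
    h, w = frame.shape[:2]
    h, w = h - h % factor, w - w % factor          # crop to a multiple of the factor
    blocks = frame[:h, :w].reshape(h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3))                # one averaged value per block

def to_detection_record(boxes: list) -> list:
    """Keep only derived, non-identifying data (bounding boxes), not raw pixels."""
    return [{"x": x, "y": y, "w": w, "h": h} for (x, y, w, h) in boxes]

# Example: a simulated 480x640 RGB frame is reduced to 60x80 before any storage.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
coarse = minimise_frame(frame)
print(coarse.shape)                                 # (60, 80, 3)
print(to_detection_record([(120, 40, 60, 150)]))

The downsampling factor captures the tension described above: a coarser frame is more privacy-preserving, but it also leaves the robot with less information to act on safely.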

Since the VIROS project cooperates with a Japanese robotics manufacturer, it is of interest to explore whether ethical and legal norms are embedded in the design and functionality of the robots. Research in anthropology and sociology on the understanding of robot autonomy and of appropriate human-robot interaction in Europe and Japan shows differences that may lead to diverging approaches to product safety and to normative frameworks for ethical robotics. Because features shaped by cultural norms may be “hidden” from view, such differences may not be apparent when a robot is procured. Part of the project will therefore be a comparative study of the discourse on ethical and legal concerns relating to AI and robotics in health care in Europe and Japan.

Objectives

The objective of the research is to identify regulatory blind spots with regard to robotics and AI, and their implications for safety and autonomy in human-robot interaction. The starting point is whether the current legal framework sufficiently regulates the use of robotics, and how the field can be appropriately regulated by a combination of hard law and soft law.

The research is interdisciplinary and is carried out in collaboration with the Robotics and Intelligent Systems research group (ROBIN) at the Department of Informatics, with the aim of addressing the legal concerns raised by ROBIN’s research on robotics and intelligent systems, in particular questions of privacy, security, safety and ethics. The project takes a law-on-the-ground approach, identifying and analysing the legal questions that arise from the development of robots.

The thesis is expected to be submitted in September 2024.

Financing

The project is part of the research project Vulnerability in the robot society (VIROS), financed by the Norwegian Research Council.
