Technological, Ethical, Environmental and Legal Aspects of Robotics

Robotics is examined by modern researchers from various positions. The most common is the technical approach, which surveys the current state of and achievements in the field of robotics, as well as the prospects for its development. In recent years, legal experts have also increasingly addressed problems related to the development of robotics, focusing on the legal personality of robots and artificial intelligence and on the responsibility of AI for causing harm. A separate direction of robotics research analyzes this concept, and the relations associated with it, from the standpoint of morality, ethics and technology.


Introduction
A large number of scientific papers are devoted to various types of robots, including wheeled robots, aircraft, wearable devices, nanorobots and humanoid robots. In addition, separate studies are devoted to application areas of robotics, including medicine, security, agriculture, space, industry, etc.
In recent years, researchers have shown the greatest interest in aerial robotics: quadcopters and other types of unmanned aerial vehicles. The active development of robots designed for physical interaction with people in a collaborative working environment opens new opportunities, but it also entails certain problems. Robot ethics emerged from the wider field of engineering ethics and its subfield, computer ethics. The initial impetus for the evolution of robot ethics was the problem of creating robots that would not harm people. Accordingly, its origin is related both to the laws of A. Asimov and to traditional engineering problems aimed at creating safe tools [1].
Initially, safety issues were not so pressing for robots because, at the dawn of their appearance, they worked in isolation from humans, mostly in factories and often inside safety cages. As robots became more and more capable, and it became apparent that they would soon enter the social world, the issues of safe robot-human interaction became more and more acute [2].
In this article, we tried to reveal the connection between the technological, ethical and legal aspects of the development of robotics.

Methods
The methodological basis of the research is a systematic approach to the study of the ethical, legal and technological foundations of the application and development of robotics. During the research, the main problematic aspects related to the development of robotic systems were classified and analyzed. In processing the material, traditional scientific methods were used: dialectical and logical analysis, scientific generalization, content analysis, comparative analysis, synthesis, source studies, etc. Their application made it possible to ensure the validity of the analysis, of the theoretical and practical conclusions, and of the proposals developed.

Fundamentals of roboethics
The term "roboethics", coined in 2006 by the researchers Veruggio and Operto, combines various aspects of applied engineering ethics in the context of robotics. A key aspect of robot ethics is people's feelings about and perceptions of robots. This includes a range of psychological and behavioral approaches that consider how much humans identify with robots or believe that robots have beliefs and feelings like humans, or possibly like animals [3].
A number of sources also examine the various psychological relationships that humans develop with their robot companions in clinical settings, and explore the relationship between emotions and morality, as well as the consequences of modeling emotional qualities in robots that are not capable of true emotions [4].
Some authors also highlight the most acute problem of robotics: if there is a choice between a human life and the "life" of an autonomous robot, what choice should be made [5]?
A. Tubert suggested that ethical artificial intelligence could be created. The author believes that we face a dilemma when trying to develop ethical artificial intelligence: either we must be able to codify ethics as a set of rules, or we must allow the machine to make ethical mistakes so that it can learn ethics as children do. Neither path seems very promising, although reflecting on the difficulties of each may lead to a better understanding of artificial intelligence and of ourselves. The fact that we have very little tolerance for ethical mistakes in machines is related to the possibility that robots could learn ethical behavior by imitating how children learn it.
The authors suggest using an approach similar to the one used on the Internet: assessing the AI's actions from an ethical point of view and taking into account the harm that may be caused by the robot [6].
B.P. Green believes that the problem of robot ethics should be considered in the context of transparency, positive and negative use of AI, bias, unemployment, socioeconomic inequality, and the human factor. At the same time, close contact with AI and limited communication in society can cause, as the author believes, loneliness, isolation, depression, stress, anxiety and addiction. In this regard, analyzing the above problems, the author comes to the conclusion that it is necessary to find a golden mean in the process of developing ethical principles of robotics [7].
Central to the ethics of robots is their use in the care of children, sick or disabled people, as well as the medical aspects of robotics. While many activities can be performed by robots without displaying human traits or qualities (provided that they have adequate machine intelligence and dexterity), such qualities are necessary in the care of the human being. In particular, caring for particularly vulnerable groups, including children, the elderly, the sick and the wounded, requires more humanity than other types of work [8].
Researchers look at the issue of robot caregivers from different perspectives and for different populations. Similar questions arise in the development of robots designed to provide sexual services and informal communication [9].

Moral responsibility of robots
Accordingly, the issue of the moral responsibility of robots is quite relevant today. Debate continues about whether morality is an important feature of a robot when it interacts with humans, and whether the robot needs the ability to interpret the moral significance of situations and actions, to make moral judgments, etc. The matter remains open [10].
Moral responsibility also comes to the fore in the work of G. Dodig-Crnkovic and her co-authors [11]. The authors point out that the increasing use of autonomous, learning intelligent systems is leading to a new distribution of tasks between people and technological artifacts, forcing a study of the related division of responsibility for their design, production, introduction and use.
According to the authors, the main attention should be paid to moral responsibility in the teaching of intelligent systems and in producer-user relations mediated by intelligent adaptive and autonomous technologies. The authors argue that "responsibility" can be attributed to an intelligent artifact in the same sense in which "intelligence" is attributed to it. They argue that the existence of a system that "cares" about performing certain tasks intelligently, learns from experience, both its own and others', and makes autonomous decisions gives us reason to speak of AI as a system "responsible for the task" [11].
Undoubtedly, any high technology in which a person comes into close contact with a machine is morally significant for that person, so "responsibility for a task" with moral consequences can be considered a moral responsibility. However, technological advances in robotics at the present stage of development can hardly be said to have reached that point [11].
There is an opinion in the literature that robots, or technological artifacts as they are called, are products of human design, shaped by our values and norms. They can be considered part of a social and technological system with distributed responsibility, similar to safety-critical technologies such as nuclear power or transportation, and here the developer should bear full responsibility for the consequences of robots' actions [12]. However, since not all possible abnormal operating conditions can be anticipated (no system can be tested in every possible situation of its use), it is the responsibility of the manufacturer to ensure the proper functioning of the system under reasonably foreseeable circumstances. If accidents do occur, additional safety measures should be taken to mitigate their consequences [12]. Accordingly, a number of safety barriers should be put in place for autonomous intelligent and learning systems to prevent undesirable, possibly catastrophic consequences.
According to G. Dodig-Crnkovic and her co-authors, for all practical purposes the issue of responsibility in safety-critical intelligent systems can be solved in the same way as the issue of safety in "critical intelligent systems", among which they include, for example, the nuclear industry and transport [11].

Ethical problem of using service robots
The widespread adoption of service robots was the result of several developments that allowed robots to become mobile, interactive machines. Sophisticated control algorithms were developed and combined with advances in sensor technology, nanotechnology, materials science, mechanical engineering, and high-speed miniaturized computing [13].
In the field of service robotics, Japanese and South Korean companies have developed child-care robots with built-in video games that can run word quizzes, recognize speech and faces, etc. Their mobility and semi-autonomous functioning are well suited to visual and auditory monitoring; radio-frequency identification tags provide an alert when children move out of range.
Studies of robot nannies in the United States and Japan have demonstrated children's close connection and attachment to them. However, such robots do not provide the necessary care, and children still need human attention.
Because of the physical security provided by robot nannies, children may be left without human contact for many hours a day, or possibly for several days, and the possible psychological impact of varying degrees of social isolation on development is unknown. What happens if a parent leaves a child in the safe hands of a future robot caregiver while taking practically no part in the child's upbringing? To date, there is no answer to the question of what the consequences of prolonged exposure to robots will be for infants raised without a human in the educational process [14].
Indeed, studies of early development in monkeys have shown that severe social dysfunction occurs in young animals allowed to form attachments only to inanimate objects [14]. Accordingly, the potential problem of reduced social adaptation and possible depression in children exists, but modern legislation contains no provision establishing responsibility for the "social isolation of children in the robot society". Nor is there ethical guidance from any international community on how to address this issue [14].
Service robots represent just one of many ethically problematic areas that will soon arise as a result of the rapid growth and proliferation of diverse applications of robotics. Scientists and engineers working in this field should be aware of the potential dangers of their work [15].

Ethical problem of using medical robots
The introduction of medical robots has also raised the issue of applying special ethical standards to such robots. Medical ethics itself is based on principles related to medical practice and the care and treatment of patients, which include non-maleficence, beneficence, respect for patient autonomy and justice [16].
It is the moral duty of doctors to act in the best interests of patients and not to harm them. This duty covers not only patients' medical condition but also their overall quality of life: the responsibility to ensure and maintain their well-being while taking into account each patient's individual wishes and values.
The concept of autonomy is comprehensive and thus has implications for other key ethical topics, including responsibility, informed consent and confidentiality. However, it is also a central issue in itself that arises in clinical and ethical discussions. For example, among the latest neurotechnological developments are brain-computer interfaces (BCIs), which receive brain signals, analyze them, and translate them into commands for output devices that perform the desired actions, mainly to restore useful function for people with neuromuscular disorders.
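The signal-to-command pipeline described above (acquire a window of brain signals, analyze it, translate it into a device command) can be sketched in a few lines. The feature, threshold and command names below are illustrative assumptions, not part of any cited system:

```python
import numpy as np

def extract_feature(signal):
    """Toy feature: mean signal power. A real BCI would use band-power
    estimates, spatial filtering, a trained classifier, etc."""
    return float(np.mean(np.square(signal)))

def decode_command(signal, threshold=0.5):
    """Translate a brain-signal window into a device command.
    The threshold and command names are purely illustrative."""
    return "move" if extract_feature(signal) > threshold else "rest"

rng = np.random.default_rng(1)
rest_window = 0.1 * rng.standard_normal(128)    # low-power background activity
active_window = 1.0 * rng.standard_normal(128)  # high-power activity

print(decode_command(rest_window))    # rest
print(decode_command(active_window))  # move
```

The ethical questions discussed in this section arise precisely because each stage of such a pipeline (feature choice, threshold, command mapping) embeds decisions made on the patient's behalf.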
The difficulty of making rational decisions that respond to patients' needs and rights with regard to their autonomy is that neurological interventions are accompanied by uncertainty about their likely outcomes and the nature of the risks involved [17].
Guang-Zhong Yang and his co-authors note that medical robotics represents one of the fastest growing sectors in the medical device industry. The regulatory, ethical and legal barriers imposed on medical robots require careful consideration of various levels of autonomy, as well as the context of their use [18].
Levels of automation have been defined for autonomous vehicles, but no such definitions exist for medical robots. The authors propose a definition of six levels of robot autonomy and assume that robots at levels 1-4 are executors: a human issues specific commands and supervises their operation. Robots at levels 5-6 are almost completely autonomous; their activities carry risks, just as the activities of a medical specialist do. The authors expect that, alongside the development of the technology, the risk tolerance accorded to autonomous robots will also change [19].
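The idea of gating robot behavior by autonomy level can be sketched as follows. The boundary between supervised levels 1-4 and near-autonomous levels 5-6 follows the description above, but the level names themselves are illustrative assumptions, not the authors' terminology:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    # Illustrative six-level scale; the labels are assumptions.
    NO_AUTONOMY = 1
    ROBOT_ASSISTANCE = 2
    TASK_AUTONOMY = 3
    CONDITIONAL_AUTONOMY = 4
    HIGH_AUTONOMY = 5
    FULL_AUTONOMY = 6

def requires_human_in_loop(level: AutonomyLevel) -> bool:
    """Levels 1-4 assume a human issues specific commands and
    supervises operation; levels 5-6 act almost fully autonomously."""
    return level <= AutonomyLevel.CONDITIONAL_AUTONOMY

print(requires_human_in_loop(AutonomyLevel.TASK_AUTONOMY))  # True
print(requires_human_in_loop(AutonomyLevel.FULL_AUTONOMY))  # False
```

Encoding the level explicitly is one way regulators could require that a device declare, in software, which of its actions demand human confirmation.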

Criminal behavior of robots
Society has long been concerned about the impact of robotics, even before the technology became viable. Since the first appearance of this category on the pages of literary works and scientific publications, their authors have focused on warning against programming errors, the consequences of emergent behavior, and other problems that make robots unpredictable and potentially dangerous [20].
The robotics industry is developing rapidly, which indicates the need to pay attention to robot ethics now, especially because consensus on ethical issues is usually slow to catch up with technology, which can lead to a "policy vacuum" [21].
Rolf H. Weber believes that, in order to align social and ethical values with legal norms, it is necessary to answer the following questions arising from the normative concept of society [22]:
- Do AI processes comply with such fundamental principles as human rights and non-discrimination?
- Is automated decision-making based on a sufficient legal basis, at least in relation to management-related issues?
- Does automated decision-making comply with all applicable data protection requirements?
- Who is responsible for monitoring socially responsible activities, and who bears responsibility in case of a failure caused by algorithms?
Taking into account the answers to these questions, according to the author, efforts should be intensified to develop an integrated approach to the socio-ethical and legal dimensions of these value conceptualizations, moving towards a potentially symbiotic relationship between humans and AI [22].

Discussion
A huge amount of research around the world today applies various artificial intelligence techniques in the field of robotics. Most of the works are devoted to machine learning, in particular methods of deep learning. Some studies focus on the use of artificial intelligence techniques to solve problems related to adaptive control, semantic scene recognition, intelligent control systems, etc.
Stark, Peters & Rueckert [23] proposed a new approach to teaching a robot new skills by transferring knowledge. Normally, teaching a robot a new skill requires a great deal of time to explore all possible configurations and states of the robot. To solve this problem, it is proposed to limit the search space by initializing it with the solution of a similar problem. The authors proposed an approach that adapts knowledge about an already mastered skill to the solution of a new task. To represent skills, movement primitives are linked to descriptions of their effects. A new skill is first initialized with parameters derived from the movement primitives of the learned skill and then adapted to the new problem via relative entropy policy search. To demonstrate the effectiveness of the proposed approach, the task of moving an object with a three-degree-of-freedom robot manipulator was tested in a simulation environment. It was shown that, by using past experience, the number of iterations required for training was reduced by more than 60%.
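The core idea of warm-starting a policy search with a previously learned skill can be illustrated with a toy sketch. Here deterministic gradient descent on a toy cost stands in for the (stochastic) relative entropy policy search used in the paper, and all parameters are illustrative:

```python
import numpy as np

def cost(params, target):
    # Toy stand-in for task performance: squared distance of the
    # policy parameters from the (unknown) optimum of the target task.
    return float(np.sum((params - target) ** 2))

def local_search(init, target, lr=0.1, tol=0.05, max_iters=1000):
    """Gradient descent on the toy cost, standing in for policy search.
    Returns the number of iterations needed to converge within tol."""
    params = init.astype(float).copy()
    for i in range(max_iters):
        if np.linalg.norm(params - target) < tol:
            return i
        params -= lr * 2.0 * (params - target)  # gradient step on the cost
    return max_iters

old_skill = np.array([1.0, 2.0, 3.0])  # parameters of an already learned skill
new_task = np.array([1.2, 2.1, 2.8])   # optimum of a similar, unseen task

iters_cold = local_search(np.zeros(3), new_task)  # search from scratch
iters_warm = local_search(old_skill, new_task)    # transfer initialization
print(iters_warm, iters_cold)  # 9 20: the warm start needs far fewer iterations
```

In this toy setup the warm start roughly halves the iteration count; the paper reports a reduction of more than 60% with its actual method.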
McGill et al. [24] address the problem of crossing an unregulated intersection by an intelligent driving system. Intersections are among the most dangerous sections of the road, where up to one-third of all road accidents occur. To improve the safety of drivers and passengers, adaptive driver-assistance systems that reduce the number of incidents are being developed. The effective operation of such systems is hindered by occlusions, when building facades, trees or other cars block the view. The authors propose a probabilistic model of risk assessment that takes into account the uncertainty of the perception data about the surrounding space. In contrast to existing approaches that assess risk based on observed vehicles, the proposed approach assesses risk for individual road segments. The developed system makes it possible to increase the safety of maneuvers and to reduce the vehicle's waiting time before the intersection. The system is also sufficiently compliant with the existing regulation of unmanned vehicles.
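A minimal sketch of per-segment risk assessment under occlusion, assuming a simple prior-fallback rule (the rule, thresholds and values are illustrative, not the authors' model):

```python
import numpy as np

def segment_risk(p_occupied, p_occluded, prior_occupied=0.3):
    """Occupancy belief for one road segment: where the segment is
    occluded, fall back on a prior occupancy belief instead of the
    (unreliable) observation. All values here are illustrative."""
    return p_occluded * prior_occupied + (1.0 - p_occluded) * p_occupied

def safe_to_cross(segment_risks, threshold=0.2):
    """Cross only if every segment along the planned path is below
    the risk threshold."""
    return bool(np.all(np.asarray(segment_risks) < threshold))

# Three segments: clearly visible and empty, observed-occupied,
# and heavily occluded by a building facade.
risks = [segment_risk(0.05, 0.0),  # visible, empty -> low risk
         segment_risk(0.9, 0.0),   # a vehicle is observed -> high risk
         segment_risk(0.0, 0.9)]   # occluded -> the prior dominates
print(safe_to_cross(risks))  # False: occupied and occluded segments block crossing
```

The key property this reproduces is that an occluded segment is never treated as free merely because nothing was detected in it.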

Types of robotics and applications
For example, Stefas, Bayram & Isler [25] consider the problem of minimizing the time needed to approach and land in the vicinity of a target beacon in an unknown location using an unmanned aerial vehicle (UAV). The problem is complicated by the existence of a conical region above the target within which the antenna measurements lose directionality: the signal recorded in all directions has the same strength. The authors describe a geometric model of this region based on antenna modelling and on data collected by a real system. To solve the problem, a strategy is proposed that takes advantage of the drone's ability to change altitude when approaching the target beacon from above, reducing the flight time needed to land near the beacon.
Experimental studies have shown the effectiveness of the proposed strategy and a reduction in the time required to land near the beacon.
A large number of papers are also devoted to the application of robotics in medicine, in several thematic blocks including the use of robotics in microsurgery, endovascular surgery, laparoscopy, diagnostics, etc. Endovascular procedures require real-time visual feedback on the location of inserted catheters. Currently, this is achieved by X-ray fluoroscopy, which exposes the patient to radiation.
An alternative method, using a robotic ultrasonic system to track and navigate catheters in endovascular interventions with an emphasis on endovascular aneurysm repair, is described in the work of Langsch et al. [26]. The approach developed by the authors is based on the registration of preoperative images to provide both the tracking trajectory and real-time visual feedback on the position of the catheter.
Biologically inspired robots use mechanisms and control methods that are characteristic of real biological creatures, such as animals or insects. The biologically inspired approach does not involve copying living things. Instead, the mechanisms developed are based on those observed in nature, but they tend to be simpler and more efficient. Recently, various studies have been actively conducted to control the posture of jumping robots using the inertial tail mechanism. However, the inertial tail mechanism has a high probability of collision with obstacles.
In the work of Kim & Yun [27], a pulsed wheel mechanism is proposed to achieve the same orientation-control characteristics while reducing the volume occupied by the inertial tail mechanism. To test the efficiency of the pulsed wheel mechanism, the authors proposed a model of the robot and conducted dynamic analysis, modeling and experiments on a jumping robot equipped with the developed mechanism. In addition, it was demonstrated that the proposed mechanism can help control the body angle of a jumping robot.
In addition to the above-mentioned areas of research and development, the sources of scientific information include works on motion planning [28], safety improvement [29], various sensors and methods of perception of the surrounding space [30,31], control of actuators [32], calibration [33], design and construction of robots [34], etc.
All these scientific works raise some questions related to the interaction of a robot with a human.

The interaction between robot and human
This line of research studies issues related to the different methods and interfaces for interaction between robots and people.
One of the problems that arise when integrating robots into a workflow is the need to prepare the working environment so that the robot can navigate in it. Typically, various markers are used for this purpose, as well as external monitoring systems. Chacko & Kapila [35] propose using augmented reality technologies to solve this problem. The authors have developed a mobile application that provides visualization of the workspace. The app allows the user to select any position within the robot's workspace, which is then associated with real objects. These positions are translated into the robot's coordinate system, and the robot can then independently perform tasks of grasping and moving objects.
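Translating a position picked in the app's frame into the robot's coordinate system amounts to applying a calibrated rigid transform. A minimal sketch, assuming a planar rotation plus translation (a real calibration yields a full 6-DoF pose):

```python
import numpy as np

def make_transform(rotation_z_rad, translation):
    """Homogeneous transform from the app/camera frame to the robot
    base frame: a rotation about z plus a translation (a simplifying
    assumption for illustration)."""
    c, s = np.cos(rotation_z_rad), np.sin(rotation_z_rad)
    return np.array([
        [c, -s, 0.0, translation[0]],
        [s,  c, 0.0, translation[1]],
        [0.0, 0.0, 1.0, translation[2]],
        [0.0, 0.0, 0.0, 1.0],
    ])

def to_robot_frame(T, point_app):
    """Map a 3-D point picked in the app into robot base coordinates."""
    p = np.append(np.asarray(point_app, dtype=float), 1.0)  # homogeneous coords
    return (T @ p)[:3]

# Example: robot base offset 0.5 m along x from the app origin, rotated 90 deg.
T = make_transform(np.pi / 2, [0.5, 0.0, 0.0])
print(to_robot_frame(T, [1.0, 0.0, 0.2]))  # ~[0.5, 1.0, 0.2]
```

Once the transform is known, any point the user taps in the AR view can be handed to the motion planner in the robot's own frame.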
Casalino [36] considers the problem of avoiding collisions between a robot and a human. The paths that the robot follows must be safe for humans, especially when the robot is holding dangerous tools or parts. At the same time, it is important to maintain the efficiency of the robot without imposing overly strict restrictions on its movements. The authors propose using Gaussian processes to predict the operator's movement in order to control the robot's speed and prevent collisions. An adaptive approach is proposed that takes into account a constantly updated model of human movement. The resulting approach proved to be less conservative than existing analogues while still preserving operator safety. Such works are a possible precursor to works with the opposite goal: the study of how a robot could inflict maximum harm on humans, both within the framework of national security and defense and outside it.
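A toy sketch of the speed-scaling idea behind such approaches: a numpy-only Gaussian-process regression predicts the operator's distance a short horizon ahead, and the robot's speed is scaled down as that prediction approaches a stop distance. The kernel, horizon and thresholds are illustrative assumptions, not the cited paper's parameters:

```python
import numpy as np

def rbf_kernel(a, b, length=0.5, var=1.0):
    """Squared-exponential kernel between two 1-D arrays of times."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(t_train, y_train, t_query, noise=1e-3):
    """Gaussian-process posterior mean (zero prior mean, RBF kernel)."""
    K = rbf_kernel(t_train, t_train) + noise * np.eye(len(t_train))
    k_star = rbf_kernel(t_query, t_train)
    return k_star @ np.linalg.solve(K, y_train)

def speed_scale(predicted_dist, d_stop=0.3, d_full=1.0):
    """Speed factor: 0 inside d_stop, 1 beyond d_full, linear between."""
    return float(np.clip((predicted_dist - d_stop) / (d_full - d_stop), 0.0, 1.0))

# Observed operator-to-robot distance over the last second (toy data):
t = np.linspace(0.0, 1.0, 6)
dist = np.array([1.5, 1.3, 1.1, 0.9, 0.7, 0.5])  # operator approaching

pred = gp_predict(t, dist, np.array([1.2]))[0]  # predict 0.2 s ahead
print(f"predicted distance: {pred:.2f}, speed scale: {speed_scale(pred):.2f}")
```

Because the model is refit as new observations arrive, the speed limit adapts to the operator's actual motion instead of assuming a fixed worst case, which is what makes the approach less conservative than static safety zones.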
In this regard, we should once again pay attention to the creation of a possible legal document establishing the ethical principles of robotics and aspects of their application.

Conclusion
Even if we remain realistic about the types of robots that are technologically feasible in the foreseeable future, a more detailed ethical assessment of specific robotic technologies is needed in each case: there is no universal ethical solution that applies to all types of robots. More specifically, at the political level, such an ethical assessment should be twofold. First, it should be assessed whether the development, production and use of a particular type of robot potentially violates human rights. Second, policymakers need to develop adequate structures and institutions through which users and developers of robotic technologies can be held accountable for what their machines do.