Features of legal regulation of artificial intelligence as a guarantee of sustainable development of society

The authors consider the features of the legal regulation of artificial intelligence that guarantee the sustainable development of society in the era of global digitalization. The transformation of the world order induced by artificial intelligence is reshaping the legal landscape. In this regard, the authors examine artificial intelligence as both a subject and an object of legal regulation. The article surveys foreign and Russian legislation on artificial intelligence and, on that basis, proposes a legal model of a single codified act. The authors point to the need for technical and public control when introducing artificial intelligence into operation, as well as for a priori legal regulation of artificial intelligence. Drawing on theoretical research methods such as the axiomatic method, analysis and synthesis, systematization, modeling, and forecasting, the authors conclude that comprehensive, consistent, systemic, and prospective legal regulation can remove the possible risks of introducing AI technologies, avert the threat of human destruction, and guarantee the sustainable development of society.


Introduction
The mass introduction of artificial intelligence technologies (hereinafter AI, AI technologies) capable of self-learning, creating their own algorithms, adapting to the environment, surpassing human intelligence and, ultimately, completely replacing it, puts our society on the brink of survival and requires the development of mechanisms capable of ensuring its sustainable development. This makes the presented research highly relevant.
The impetus for this study is the lack of the regulatory legal framework needed to govern public relations arising in the development, implementation, and operation of AI. The sustainable development of society in the era of global digitalization requires identifying and comprehensively investigating the possible risks of introducing AI technologies into mass industrial operation and outlining the main directions of the legal regulation of AI.

Materials and methods
Traditional legal practice continues to treat AI systems and AI-based robots as objects of law. On this view, a set of already established standards of current law can be applied to them: to their physical form, the rules on property and things; to information systems and AI technologies, the norms of information law, intellectual property law, and the like. Under such a narrow and superficial approach to the legal regulation of AI, no new legal standards are required; an appropriate interpretation of the existing ones suffices.
However, the allocation of responsibility under this approach will be inadequate, and law enforcement will be neither uniform nor grounded in formally defined interpretations. The current legal standards cannot regulate the new social relations based on the use of robotics and AI technologies, and thus cannot resolve the difficult conflict situations arising in this area.
If we assume that the practice of implanting artificial devices into the human body, including the brain, which governs the functioning of the whole organism or its parts, will soon become widespread, then treating such a person as one with exclusively natural intelligence will be problematic, or rather, wrong. Therefore, the determination of the legal personality of artificial intelligence should proceed from what it is: the basis of the functioning of an automated system (a robot) or only a part of it.
Such an abstraction may seem an impossible fantasy today; nevertheless, we should differentiate the rights of the subject before such fantasies become reality. The property of intelligence serves as a classification feature: 1) artificial intelligence, 2) natural intelligence, 3) hybrid intelligence. It is very difficult to determine the criteria by which a subject of law will be classified as possessing "natural intelligence", "artificial intelligence", or "hybrid intelligence".
On the one hand, the task of the legislator is to protect the human being, the bearer of natural intelligence, from suppression and destruction by subjects with artificial and hybrid intelligence. On the other hand, artificial intelligence must be allowed to develop and improve. At the same time, admitting the idea of the existence of hybrid forms of intelligence, it is necessary to protect such subjects from the first two groups and from one another.
Endowing artificial intelligence with a specific status of legal personality is especially important for determining the degree of responsibility for harm caused by the above-mentioned subjects. Given that such harm is not always attributable to the owners or users of AI, traditional approaches need to be reconsidered.
The degree of responsibility can be influenced, in our opinion, not only by the infliction of harm and the amount of damage caused, but also by the presence of appropriate control, including technical and public control, during the development and implementation of AI.
S.V. Garbuk, Director of Research Projects at the Higher School of Economics, notes that "the barriers associated with the use of AI systems... can be removed by standardizing the requirements for test methods of critical intelligent systems, as well as by creating a certification system..." [5]. He emphasizes that certification of AI technologies is needed to ensure trust and to make it possible to transfer part of human authority to such solutions.
In general, supporters of AI certification advocate creating a system for assessing the conformity of intelligent technologies to human capabilities, comprising technical committees for standardization, certification bodies, and testing laboratories in which the capabilities of the developed technologies are directly evaluated. To this end, in July 2019 the technical committee for standardization TC 164 "Artificial Intelligence" was formed; it consolidates about one hundred and twenty leading Russian organizations in this research area: consumers of AI technologies, higher educational institutions, research organizations, IT companies, and specialized government bodies.
Mandatory certification of AI should precede its introduction into operation. This procedure should be legally regulated, with the simultaneous establishment of the full legal responsibility of any AI developer who puts AI into operation without appropriate certification.
Foreign countries have been gradually accumulating experience in adopting regulatory legal acts in particular areas of AI application [8], while Russia is still making its first attempts in this direction. In particular, the National Strategy for the Development of Artificial Intelligence until 2030, approved by the President of the Russian Federation on October 10, 2019, is designed to ensure the superiority of AI over humans in a wide range of tasks by that date.
AI technologies have come into wide use: in recognizing passengers' faces in the metro, at railway stations, and at airports; in video recording of the license plates of vehicles on the roads; in epidemiological control, through measuring people's temperature with thermal imagers in places of heavy passenger traffic; in issuing digital passes with assigned QR codes for movement around the city; and in the automatic provision of social benefits to various categories of individuals and legal entities through mechanisms of targeted social support using Big Data from numerous state registers [6].
As M.A. Rozhkova notes, "Big Data is understood, primarily, as big data analytics, which is recognized as a "new form of knowledge production" and involves the processing and structuring of data, the creation of data analysis algorithms, data aggregation and analysis, and the identification of relationships between data, the establishment of patterns and hidden trends, forecasting, etc." [10].
We should note that Federal Law No. 168-FZ of 08.06.2020 "On the Unified Federal Information Register Containing Information about the Population of the Russian Federation" is a Big Data law. Under it, the unified federal information register (EFIR) includes personal data, comprising basic information (last name, first name, patronymic (if any), date and place of birth and death, sex (and its change, if any), details of the civil registration of birth and death, SNILS, TIN, and other data) and additional information, such as marital status, family ties, and other information about an individual. The register accumulates about 30 types of information from 12 different sources, supplied by various government departments and funds. The aggregator of these data is the Federal Tax Service of the Russian Federation (FTS of Russia).
As we can see, personal data freely flow into Big Data streams, becoming their components, which raises a large number of questions and requires detailed legal regulation.
There is an opinion that "big data itself, despite its obvious usefulness, does not contain an intellectual component and cannot be used by a person individually due to its volume. At the same time, big data allows an artificial neural network to gain experience and minimize the number of errors made" [3].
For example, in Moscow a unified medical reference center for radiation diagnostics has been established, which uses AI technologies based on "computer vision" as an experimental site [9]. The legal basis for introducing artificial intelligence technologies in the fight against the coronavirus pandemic is laid down in Federal Law No. 123-FZ of 24.04.2020 "On experimenting to establish special regulation to create the necessary conditions for the development and implementation of artificial intelligence technologies in the constituent entity of the Russian Federation, Moscow, the city of federal importance, and amendments to Articles 6 and 10 of the Federal Law On Personal Data". It defines the goals, objectives, and basic principles of establishing an experimental legal regime, as well as the features of the legal regulation of the relations arising in this connection. Thus, starting from July 1, 2020, an experiment on the development and implementation of AI technologies is being carried out in Moscow for five years; its successful experience is planned to be disseminated throughout Russia. The results of the experiment are to be presented by the coordinating council created to develop strategic directions and monitor the experimental legal regime, together with, upon its completion, proposals on the feasibility or inexpediency of corresponding changes to Russian legislation.
The legal regulation of AI should, in our view, be carried out on the basis of a single regulatory legal act, a Code of Artificial Intelligence, consisting of two parts: general and special. The general part should set out the goals, principles, and main provisions of the legal regulation of AI; provide legislative definitions; and determine the legal personality of AI, the areas of AI activity, and the conditions for introducing AI into operation, including certification and public control mechanisms. The special part, in turn, should reflect the features of legal regulation in the various areas of AI application: for example, trade; the financial sector; the use of AI-based exchange algorithms in public procurement; medical diagnostics and support for medical decision-making; the operation of unmanned vehicles (by type of transport), construction equipment, and hazardous industrial equipment; the functioning of a "smart city"; industry; state and municipal administration; space activities; defense and security; and so on. As the scope of AI application in public relations expands, the special part would be amended accordingly. This approach will always make it possible to find and apply a legal standard to a specific legal relationship. Here, one should adhere to the rule of the priority of a special rule over a general one and, in its absence, apply the analogy of the law (as, for example, under Part 6, Article 13 of the Arbitration Procedure Code of the Russian Federation).
In the process of the legal regulation of artificial intelligence, the legislator should provide for mechanisms for exercising public control over the development and implementation of systems endowed with artificial intelligence.
The creation of artificial intelligence systems must proceed under public control to prevent infringement of human rights and freedoms. Algorithmic system designers have varying levels of discretion when making decisions, such as what training data to use or how to respond to false positives. The task of the public controller is to verify the structure of the data set underlying the operation of the algorithm, not to understand the exact workings of the algorithms themselves.
Artificial intelligence is developed by specialists with specialized knowledge of programming and data analysis, information theory, mathematics, mathematical statistics, probability theory, neural networks, and so on, knowledge that, unfortunately, is not widespread. Therefore, AI developers must visualize the algorithms and code they use and translate them from "machine" into "human" language. Ensuring the transparency of AI design is necessary, first of all, for the subjects of public control, who are called upon to check the primary algorithm and the initial data from which the artificial intelligence will self-learn. It is important to make sure that the designed system will not lead to violations of human rights and freedoms when it is put into operation.
One of the challenges of AI development is AI's ability to adapt to its environment. Adaptability manifests itself in self-learning algorithms that use data to develop new models and knowledge, as well as new decision-making rules, using machine learning methods [2, 11]. However, this problem is connected more with the design of such systems and with human perception and interpretation of their implementation and results than with algorithms as a tool. Latent, unintentional bias is increasingly present in the learning algorithms of automated systems [4].
In our opinion, it is the social aspect and the specific norms and values embedded in the algorithms that need verification, doubt, criticism and dispute. Indeed, not the algorithms themselves, but the decision-making processes of artificial intelligence based on input data require their deeper investigation by experts in terms of how they affect human rights.
The source data should be independent and neutral; the choice of such data should not reflect so-called "discretion, prejudices, and predispositions". Depending on the types of crimes recorded and selected for inclusion in the analysis and on the analytical tools used, predictive algorithms can thus contribute to biased decisions and discriminatory outcomes.
For example, the practice of ethnic profiling in predicting future crimes persists in Europe: a particular race or ethnic background becomes decisive without objective and reasonable grounds. Such persons are several times more likely to be stopped for checks and searches, and they are punished much more severely than persons without such characteristics. As Commissioner for Human Rights Dunja Mijatović notes, "according to the available data, arbitrary document checks of people from the North Caucasus are widespread in the Russian Federation. In Ukraine and in the Republic of Moldova, police misconduct and surveillance of migrants and black people were noted" [4]. The US civil and criminal justice system increasingly uses artificial intelligence to support complex judicial decisions, promising judges an "error-free and objective" decision amid the flow of cases resolved under pressure, high workload, and lack of resources. Such systems are currently being tested to determine the outcomes of decision-making and to identify patterns in the decision-making process when granting probation and parole. However, independent research into this software suggests that "the software [...] used to predict future criminals [...] is biased against black people" [1].
Public control is designed to eliminate the risks of violation of constitutional rights and freedoms by artificial intelligence, such as the right to privacy, the right to equality, and freedom of thought and speech; to prevent new forms of discrimination by artificial intelligence; and to curb discrimination based on gender, race, language, nationality, ethnicity, social and property status, origin, official position, place of residence, attitude to religion, beliefs, membership in public associations, and other circumstances.
Only after the "algorithmic transparency" of AI systems has been ensured can the issue of limiting developers' liability for harm and damage resulting from the activity of artificial intelligence be considered. Until then, artificial intelligence developers should bear full responsibility for the actions of their product.
Consequently, considering the potential risks of introducing artificial intelligence technologies into public relations, the legislator must ensure an appropriate legal regime for the development, implementation, and operation of artificial intelligence and oblige developers to undergo the public control procedure before putting artificial intelligence systems into industrial operation. The mechanisms of public control must be effective, which requires amendments to the federal law on the foundations of public control.
With regard to the need for the legal regulation of AI, attention should be paid to the existence of two approaches to this task: a priori (prospective) and a posteriori (current) legal regulation. In particular, according to A.V. Neznamov, head of the Research Center for Problems of Regulation of Robotics and AI, who adheres to a posteriori (current) legal regulation, "there is no talk of large-scale legislative regulation of AI yet. At this stage, there is no need for complex regulation". So-called artificial intelligence systems are of an applied nature; accordingly, regulation is required depending on how AI technologies are used in a particular area [7]. However, we cannot agree with this position.
The point of disagreement is that the challenges posed by the impact of AI algorithms and automated data processing on human rights will inevitably grow as the systems become more complex and interact with each other. Problems may also arise that are currently incomprehensible to the human mind. In other words, there is no upper limit to the complexity of the problems that may emerge.

Conclusion
Thus, on the basis of the above, legal regulation should guarantee the safety of individuals, society, and the state, and at the same time promote the development of AI technologies. The rapid development of AI is changing the world order, and the task of prospective legal regulation of AI therefore becomes paramount. Solving this task will guarantee the sustainable development of society and ensure its security from AI.