The relevance of the chosen subject of research, ensuring the safety of robotic systems for various purposes (primarily those controlled by artificial intelligence), and of its object, the problem of sharing responsibility for the development and operation of robotic systems, stems from the contradiction between the need for autonomous operation of robotic systems and the complexity of implementing this requirement in software. In robotics, errors in control algorithms are quite often the source of most problems. Based on an analysis of the regulatory documents governing the development of artificial intelligence tools, potential problems in ensuring the safe use of autonomous robotic systems are examined. The conclusion is drawn that, in their current state, these documents do not provide a solution to the safety problem of artificial intelligence systems. A systematic approach was chosen as the methodological basis of the study. The combination of a systematic approach, decomposition methods, and comparative analysis made it possible to examine comprehensively the problem of dividing the areas of responsibility between the developers and operators of autonomous and partially autonomous robots whose control is based on artificial intelligence. The source base of the research consists of publicly available scientific articles and regulatory and legislative documents. It is concluded that the existing approaches to the training and self-learning of artificial intelligence systems that control autonomous robots "blur" the boundaries of responsibility among the participants in the process, which can, in theory, lead to critical situations during operation. With this in mind, and based on the analysis of a typical development and application process, it is proposed to clarify the distribution of responsibility and to add new participants to the process: specialists focused on the safety and impartiality of artificial intelligence (AI Alignment), as well as a group approach to the development of artificial intelligence and machine learning algorithms that reduces the factor of subjectivity. In theory, applying the principles of responsibility sharing formulated in the article will increase the safety of robotic systems based on artificial intelligence.