RISKS OF ARTIFICIAL INTELLIGENCE USED IN SOCIOTECHNICAL SYSTEMS
- Authors: Mikheev M.Y., Prokofiev O.V., Semochkina I.Y.
- Affiliations: Penza State Technological University
- Issue: No 1 (2025)
- Pages: 12-19
- Section: FUNDAMENTALS OF RELIABILITY AND QUALITY ISSUES
- URL: https://journal-vniispk.ru/2307-4205/article/view/289653
- DOI: https://doi.org/10.21685/2307-4205-2025-1-2
- ID: 289653
Abstract
Background. The use of autonomous devices with artificial intelligence in sociotechnical systems has given rise to new problems whose solutions are considerably more complex than the tasks of the previous stage, which focused on improving the human-machine interface. The authors conducted a study to identify sources of risk in sociotechnical systems and ways to reduce them across the life-cycle stages of autonomous devices. Materials and methods. The study used data from openly published statistical surveys of developers and users of critical devices and from investigation reports on accidents involving the Uber autonomous vehicle (Uber AV). Results. Sources of risk arising from the use of autonomous devices with artificial intelligence in sociotechnical systems are formulated. An example of a device life cycle is presented that provides for human control at the development and operation stages. Conclusions. Risk management in critical applications of a device is possible if the actual life-cycle stages satisfy the safety criteria listed in the article.
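The conclusion above amounts to a gate over the life cycle: critical use is admissible only when every actual stage meets its safety criteria and keeps a human in control. Below is a minimal, hypothetical sketch of such a checklist-style gate; the stage names, criteria, and the critical_use_permitted function are illustrative assumptions, not the article's own method.

```python
# Hypothetical sketch: a conjunctive "gate" over life-cycle stages.
# Stage names, criteria, and identifiers are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class LifeCycleStage:
    name: str
    human_oversight: bool  # is a human controller kept in the loop at this stage?
    criteria: Dict[str, bool] = field(default_factory=dict)  # safety criterion -> satisfied

    def is_safe(self) -> bool:
        # A stage passes only with human oversight and all criteria satisfied.
        return self.human_oversight and all(self.criteria.values())


def critical_use_permitted(stages: List[LifeCycleStage]) -> bool:
    """Critical use of the device is admissible only if every actual stage passes."""
    return bool(stages) and all(stage.is_safe() for stage in stages)


if __name__ == "__main__":
    life_cycle = [
        LifeCycleStage("development", human_oversight=True,
                       criteria={"training data audited": True,
                                 "failure modes documented": True}),
        LifeCycleStage("operation", human_oversight=True,
                       criteria={"operator override available": True,
                                 "incident logging enabled": False}),
    ]
    # Prints False: one operational criterion is unmet, so critical use is blocked.
    print(critical_use_permitted(life_cycle))
```

The point of the sketch is only that the check is conjunctive: a single unmet safety criterion, or the loss of human control at any stage, blocks critical use of the device.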
About the authors
Mikhail Yu. Mikheev
Penza State Technological University
Author for correspondence.
Email: mix1959@gmail.com
Doctor of technical sciences, professor, head of the sub-department of information technologies and systems
(1a/11 Baydukov passage / Gagarin street, Penza, Russia)

Oleg V. Prokofiev
Penza State Technological University
Email: prokof_ow@mail.ru
Candidate of technical sciences, associate professor, associate professor of the sub-department of information technologies and systems
(1a/11 Baydukov passage / Gagarin street, Penza, Russia)

Irina Yu. Semochkina
Penza State Technological University
Email: ius1961@gmail.com
Candidate of technical sciences, associate professor, associate professor of the sub-department of information technologies and systems
(1a/11 Baydukov passage / Gagarin street, Penza, Russia)