No 2 (2025)

Articles

Dynamic RACH-Slot Allocation for Collision Minimization in NB-IoT Networks Based on Reinforcement Learning Algorithms

Shabrova A.S., Knyazev M.A., Kolesnikov A.V.

Abstract

The subject of this research is the adaptive management of access to Random Access Channels (RACH) in Narrowband Internet of Things (NB-IoT) networks, which frequently face congestion due to high device density and limited channel capacity. The study focuses on the practical application of Reinforcement Learning algorithms, specifically Q-learning and Deep Q-Network (DQN), to address this issue. The authors thoroughly examine the problem of RACH overload and the resulting collisions that cause delays in data transmission and increased energy consumption in connected devices. The article analyzes the limitations and inefficiency of traditional static slot allocation methods and justifies the necessity of implementing a dynamic, learning-based approach capable of adapting to constantly changing network conditions. The research aims to significantly minimize collision rates, improve connection success rates, and reduce the overall energy consumption of NB-IoT devices. The research methodology involved the use of advanced machine learning methods, including Q-learning and DQN, together with simulation modeling conducted in the NS-3 environment, integrating a dedicated RL-agent for dynamic and intelligent RACH slot allocation. The main conclusions of the study highlight the demonstrated effectiveness of the adaptive RL-based approach for optimizing access to communication slots in NB-IoT networks. The scientific novelty lies in the development and integration of a specialized RL-agent capable of dynamically managing slot distribution based on real-time network conditions. As a result of implementing the proposed approach, the number of collisions was reduced by 74%, the number of successful connections increased by 16%, and the energy efficiency of the devices improved by 15% in comparison with traditional static methods. 
These results clearly demonstrate the practical applicability and scalability of adaptive RL-based management techniques for enhancing both the performance and reliability of real-world NB-IoT networks.
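The core idea of the abstract's approach can be illustrated with a toy sketch (not the authors' NS-3 simulator): a tabular Q-learning agent picks one of N RACH slots and receives +1 when the chosen slot had no collision, -1 otherwise. The slot count, per-slot contention probabilities, and reward values below are illustrative assumptions.

```python
import random

N_SLOTS = 4
ALPHA, GAMMA, EPSILON = 0.05, 0.9, 0.1

def train(episodes=20_000, seed=0):
    rng = random.Random(seed)
    q = [0.0] * N_SLOTS                 # single-state Q-table: one value per slot
    busy_prob = [0.9, 0.6, 0.1, 0.7]    # assumed collision probability per slot
    for _ in range(episodes):
        # epsilon-greedy slot choice
        if rng.random() < EPSILON:
            slot = rng.randrange(N_SLOTS)
        else:
            slot = max(range(N_SLOTS), key=q.__getitem__)
        reward = -1.0 if rng.random() < busy_prob[slot] else 1.0
        # one-step Q-update (a stateless, bandit-like special case)
        q[slot] += ALPHA * (reward + GAMMA * max(q) - q[slot])
    return q

q = train()
best = max(range(N_SLOTS), key=q.__getitem__)   # the least congested slot wins
```

In the full setting described in the abstract, the state would encode observed network load and a DQN would replace the table; this sketch only shows the update rule the method family rests on.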
Software systems and computational methods. 2025;(2):1-11

WebAssembly performance in the Node.js environment

Karpovich V.D., Gosudarev I.B.

Abstract

Modern runtime environments such as browsers, Node.js, and others provide developers with tools that go beyond traditional JavaScript. This study focuses on a modern approach to building web applications where components written in different programming languages can be executed and shared using WebAssembly. The subject of the research is the testing and analysis of performance benchmarks comparing JavaScript and WebAssembly modules in the Node.js runtime. The focus is on evaluating performance in computational tasks, memory interaction, data processing, and cross-language communication. The authors thoroughly explore topics such as WebAssembly integration in applications, its advantages for resource-intensive tasks like image processing, and the objectivity, representativeness, and reproducibility of the tests. The work follows an applied, experimental approach. It includes performance comparisons between pure JavaScript and WebAssembly modules. Metrics like response time and system resource consumption were used to assess efficiency. The scientific novelty of this work lies in the development and theoretical grounding of testing approaches for web applications using WebAssembly. Unlike most existing studies focused on WebAssembly's performance and security in browser environments, this work emphasizes automated testing of WebAssembly modules outside the browser — a relatively unexplored area until now. A methodological approach is proposed for testing WebAssembly modules in Node.js, including principles for test structuring, integration with JavaScript components, and execution analysis. This approach takes into account the specifics of the server environment, where WebAssembly is increasingly used — particularly for high-load computational modules, cross-language logic, and secure isolated execution.
The novelty also lies in defining criteria to evaluate whether certain application components are suitable for migration to WebAssembly in terms of testability, providing developers with a tool for making architectural decisions. The proposed ideas are backed by experimental results, including test case implementations for WebAssembly and JavaScript interaction scenarios.
Software systems and computational methods. 2025;(2):12-34

Analysis of DOM update methods in modern web frameworks: Virtual DOM and Incremental DOM

Bondarenko O.S.

Abstract

The article presents an analysis of modern methods for updating the Document Object Model (DOM) in popular client-side web frameworks such as Angular, React, and Vue. The main focus is on comparing the concepts of Virtual DOM and Incremental DOM, which underlie the architectural solutions of the respective frameworks. The Virtual DOM used in React and Vue operates on a virtual tree, comparing its versions in order to identify differences and minimize changes in the real DOM. This approach provides a relatively simple implementation of a reactive interface but incurs additional computational and resource costs. In contrast, Angular uses Incremental DOM, which creates no intermediate structures: changes are applied directly through the Change Detection mechanism. This approach achieves high performance through targeted updates of DOM elements without the need for a virtual representation. The study uses a comparative analysis of architectural approaches to updating the DOM, based on official documentation, practical experiments with code, and visualization of rendering processes in Angular and React. The methodology includes a theoretical justification, a step-by-step analysis of the update mechanisms, and an assessment of their impact on performance. The scientific novelty of the article lies in the systematic comparison of architectural approaches to updating the DOM in leading frameworks, with an emphasis on the implementation of the signal model in Angular version 17+. The impact of using signals on the abandonment of the Zone.js library is analyzed in detail, along with the formation of a more predictable, deterministic rendering model and lower-level performance management capabilities. The article contains not only a theoretical description but also practical examples that reveal the behavior of updates in real-world scenarios.
The nuances of template compilation and the operation of the effect() and computed() functions are also considered. The comparison of Virtual DOM and Incremental DOM makes it possible to identify key differences, evaluate the applicability of each approach depending on the tasks and complexity of the project, and suggest ways to optimize frontend architecture.
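The Virtual DOM comparison discussed above can be illustrated with a deliberately simplified tree diff. The node shape and patch-operation names below are invented for illustration; real React and Vue reconcilers additionally handle keys, components, and batching.

```python
def h(tag, text="", *children):
    """Build a tiny virtual node: a plain dict tree."""
    return {"tag": tag, "text": text, "children": list(children)}

def diff(old, new, path="root"):
    """Compare two virtual trees and emit only the minimal patch operations."""
    if old is None:
        return [("CREATE", path, new)]
    if new is None:
        return [("REMOVE", path)]
    if old["tag"] != new["tag"]:
        return [("REPLACE", path, new)]
    ops = []
    if old["text"] != new["text"]:
        ops.append(("SET_TEXT", path, new["text"]))
    # walk children pairwise; extra children on either side become CREATE/REMOVE
    for i in range(max(len(old["children"]), len(new["children"]))):
        o = old["children"][i] if i < len(old["children"]) else None
        n = new["children"][i] if i < len(new["children"]) else None
        ops += diff(o, n, f"{path}/{i}")
    return ops

v1 = h("ul", "", h("li", "a"), h("li", "b"))
v2 = h("ul", "", h("li", "a"), h("li", "c"), h("li", "d"))
patch = diff(v1, v2)   # only the changed second item and the new third item
```

Incremental DOM, by contrast, skips building `v1`/`v2` entirely and writes changes to the real tree as the template executes, which is the trade-off the article examines.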
Software systems and computational methods. 2025;(2):35-43

Analysis of the impact of prompt obfuscation on the effectiveness of language models in detecting prompt injections

Krohin A.S., Gusev M.M.

Abstract

The article addresses the issue of prompt obfuscation as a means of circumventing protective mechanisms in large language models (LLMs) designed to detect prompt injections. Prompt injections represent a method of attack in which malicious actors manipulate input data to alter the model's behavior and cause it to perform undesirable or harmful actions. Obfuscation involves various methods of changing the structure and content of text, such as replacing words with synonyms, scrambling letters in words, inserting random characters, and others. The purpose of obfuscation is to complicate the analysis and classification of text in order to bypass filters and protective mechanisms built into language models. The study conducts an analysis of the effectiveness of various obfuscation methods in bypassing models trained for text classification tasks. Particular attention is paid to assessing the potential implications of obfuscation for security and data protection. The research utilizes different text obfuscation methods applied to prompts from the AdvBench dataset. The effectiveness of the methods is evaluated using three classifier models trained to detect prompt injections. The scientific novelty of the research lies in analyzing the impact of prompt obfuscation on the effectiveness of language models in detecting prompt injections. During the study, it was found that the application of complex obfuscation methods increases the proportion of requests classified as injections, highlighting the need for a thorough approach to testing the security of large language models. The conclusions of the research indicate the importance of balancing the complexity of the obfuscation method with its effectiveness in the context of attacks on models. Excessively complex obfuscation methods may increase the likelihood of injection detection, which requires further investigation to optimize approaches to ensuring the security of language models. 
The results underline the need for the continuous improvement of protective mechanisms and the development of new methods for detecting and preventing attacks on large language models.
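Two of the obfuscation transforms named in the abstract (scrambling letters within words and inserting random characters) can be sketched as follows. These exact implementations are illustrative assumptions; the study's full method set and the AdvBench prompts are not reproduced here.

```python
import random

def scramble_words(text, rng):
    """Shuffle the inner letters of each word, keeping the first and last fixed."""
    out = []
    for word in text.split():
        if len(word) > 3:
            inner = list(word[1:-1])
            rng.shuffle(inner)
            word = word[0] + "".join(inner) + word[-1]
        out.append(word)
    return " ".join(out)

def insert_noise(text, rng, every=4, alphabet="#*~"):
    """Insert one random noise character after every `every` characters."""
    chunks = [text[i:i + every] for i in range(0, len(text), every)]
    return "".join(c + rng.choice(alphabet) for c in chunks)

rng = random.Random(42)
obfuscated = insert_noise(scramble_words("ignore all previous instructions", rng), rng)
```

A classifier-side evaluation, as in the article, would then measure how often such transformed prompts are still flagged as injections.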
Software systems and computational methods. 2025;(2):44-62

Research on performance in modern client-side web frameworks

Ratushniak E.A.

Abstract

The subject of the study is the comparative rendering performance of three modern frameworks — React, Angular, and Svelte — in typical scenarios of building and updating user interfaces in web applications. The object of the study is the frameworks themselves as complexes of technological solutions, including change detection mechanisms, virtual or compiled DOM structures, and accompanying optimizations. The author thoroughly examines aspects of the topic such as initial and subsequent rendering, element update and deletion operations, and working with linear and deeply nested data structures. Special attention is paid to the practical significance of choosing a framework for commercial products, where performance differences directly impact conversion, user experience, and the financial efficiency of the project. Key internal mechanisms are described — React's virtual DOM, Angular's change detector, and Svelte's compiled code — which determine their behavior in various load scenarios. The methodology is based on an automated benchmark: a unified set of test scenarios is executed by client applications on React, Angular, and Svelte, a reference JavaScript solution, and an Express JS orchestrator server; operation times are recorded using performance.now() in Chrome 126, with Time To First Paint as the performance criterion. The novelty of the research lies in the comprehensive laboratory comparison of the three frameworks across four critically important scenarios (initial rendering, subsequent rendering, updating, and deleting elements) considering two types of data structures and referencing the current versions of 2025. 
The main conclusions of the study are as follows: Svelte provides the lowest TTFP and leads in deep hierarchy scenarios due to the compilation of DOM operations; React shows better results in re-rendering long lists, using an optimized diff algorithm and element keys; Angular ensures predictability and architectural integrity but increases TTFP by approximately 60% due to the change detector. There is no universal leader; a rational choice should rely on the analytical profile of the operations of a specific application, which is confirmed by the results of the presented experiment.
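The measurement idea behind the benchmark above can be sketched in Python rather than the study's JavaScript harness: run each scenario repeatedly with a high-resolution timer and report the median, which is less sensitive than the mean to outliers such as GC pauses or scheduler noise. The scenario function and repeat count are illustrative.

```python
import statistics
import time

def bench(fn, repeats=30):
    """Time `fn` repeatedly and summarize in milliseconds."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)  # ms
    return {"median_ms": statistics.median(samples),
            "mean_ms": statistics.fmean(samples)}

def build_list():
    # stand-in for an "initial rendering" scenario: build 10,000 row models
    return [{"id": i, "label": f"row {i}"} for i in range(10_000)]

result = bench(build_list)
```

The study's actual harness uses `performance.now()` in Chrome 126 with Time To First Paint as the criterion; this sketch only mirrors the repeat-and-aggregate structure.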
Software systems and computational methods. 2025;(2):63-78

Analysis of Microservices Granularity: Effectiveness of Architectural Approaches

Chikaleva Y.S.

Abstract

Modern information systems require scalable architectures for processing big data and ensuring availability. Microservice architecture, based on decomposing applications into autonomous services focused on business functions, addresses these challenges. However, the optimal granularity of microservices impacts performance, scalability, and manageability. Suboptimal decomposition leads to anti-patterns, such as excessive fineness or cosmetic microservice architecture, complicating maintenance. The aim of the study is a comparative analysis of methods for determining the granularity of microservices to identify approaches that provide a balance of performance, flexibility, and manageability in high-load systems. The object of the study is the microservice architecture of high-load systems. The subject of the research is the comparison of granularity methods, including monolith, DDD, Data-Driven Approach, Monolith to Microservices, and their impact on the system. The study employs an experimental approach, including the implementation of a Task Manager application in four architectural configurations. Load testing was conducted using Apache JMeter under a load of 1000 users. Performance metrics (response time, throughput, CPU), availability, scalability, security, and consistency were collected via Prometheus and processed to calculate averages and standard deviations. The scientific novelty lies in the development of a methodology for comparative analysis of decomposition methods using unified metrics adapted for high-load systems, setting this study apart from works that focus on qualitative assessments. The results of the experiment showed that the monolithic architecture provides the minimum response time (0.76 s) and high throughput (282.5 requests/s) under a load of 1000 users, but is limited in scalability. 
The Data-Driven Approach ensures data consistency, DDD is effective for complex domains, while Monolith to Microservices demonstrates low performance (response time 15.99 s) due to the overload of the authorization service. A limitation of the study is the use of a single host system (8 GB RAM), which may restrict the scalability of the experiment. The obtained data are applicable for designing architectures of high-load systems. It is recommended to optimize network calls in DDD (based on response time of 1.07 s), data access in Data-Driven (response time of 5.49 s), and to carefully plan decomposition for Monolith to Microservices to reduce the load on services (response time of 15.99 s).
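The aggregation step described above (averages and standard deviations over collected metrics) can be sketched as follows. The response-time samples are invented for illustration, not the article's Prometheus measurements.

```python
import statistics

def summarize(samples_by_config):
    """Per configuration: (mean, sample standard deviation), rounded to 2 dp."""
    return {name: (round(statistics.fmean(xs), 2), round(statistics.stdev(xs), 2))
            for name, xs in samples_by_config.items()}

summary = summarize({
    "monolith": [0.71, 0.74, 0.80, 0.79],   # hypothetical response times, seconds
    "ddd":      [1.02, 1.10, 1.05, 1.11],
})
```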
Software systems and computational methods. 2025;(2):79-93

Intellectual infrastructure for automated control and interoperability of microservices in cloud environments

Rogov D.V., Alpatov A.N.

Abstract

In the context of rapid growth in the scale and complexity of information systems, the questions of effective integration and support of microservices architectures are becoming increasingly relevant. One of the key challenges is ensuring the interoperability of software components, which implies the ability to reliably exchange data and share information between various services implemented using heterogeneous technologies, protocols, and data formats. In this work, the subject of research is the formalization and construction of an intelligent system ensuring the interoperability of microservice components within cloud infrastructure. A formalized approach is proposed, based on graph, categorical, and algebraic models, which allows for a strict description of data transmission routes, conditions for interface compatibility, and the procedure for automated agreement on interaction formats. An operation for interface agreement is introduced, which identifies the need to use adapters and converters for the integration of various services. Special attention is paid to the task of building a universal interface through which any data streams can be routed, significantly simplifying the process of scaling and refining the microservice system. The developed system architecture encompasses the stages of creation, publication, and deployment of container microservices, automatic verification of data transmission routes, and dynamic management of service states based on load forecasting using artificial intelligence models. The application of the proposed methodology allows for a significant increase in the flexibility, reliability, and scalability of the infrastructure, reduction of operational costs, and automation of the processes of support and integration of new components. The proposed solution is based on a formalized approach to ensuring the interoperability of microservice components within cloud infrastructure. 
A graph and categorical model is used as a foundation, allowing for a strict definition of data transmission routes and interface agreement procedures between various services. To unify interaction and enhance system flexibility, an interface agreement operation is introduced, as well as the capability for automated identification of the need for data adapters and converters. The developed intelligent load forecasting algorithm allows for dynamic management of component states and rapid adaptation of the infrastructure to changing operating conditions.
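A minimal sketch of the interface-agreement operation described above: services form a graph whose endpoints carry data formats, and "agreement" decides whether two interfaces are directly compatible or require an adapter. The service names and formats below are illustrative assumptions, not the article's categorical model.

```python
# toy service registry: each service declares the format it emits or accepts
SERVICES = {
    "orders":  {"out": "json"},    # producer
    "billing": {"in": "json"},     # directly compatible consumer
    "legacy":  {"in": "xml"},      # consumer that needs format conversion
}

def agree(producer, consumer):
    """Return 'direct' when formats match, otherwise name the required adapter."""
    out_fmt = SERVICES[producer]["out"]
    in_fmt = SERVICES[consumer]["in"]
    return "direct" if out_fmt == in_fmt else f"adapter:{out_fmt}->{in_fmt}"

plan = {c: agree("orders", c) for c in ("billing", "legacy")}
```

In the proposed system this decision is made automatically when a route is verified, so adding a service with a new format only requires registering the corresponding converter.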
Software systems and computational methods. 2025;(2):94-114

Areal data types in instrumental approach to programming

Dagaev D.V.

Abstract

For a large class of tasks, the classical structured programming approach is preferable to the object-oriented one. These preferences are typical of the deterministic world and of machine-representation-oriented systems. The modular Oberon programming language was oriented toward such tasks. It demonstrates a minimalistic path to reliability, which differs from the vast majority of program systems that maximize the number of supported features. The use of an instrumental approach instead of OOP was previously proposed for solving the problems of the deterministic world. The data-code separation principle assumes that the data lifecycle is controlled independently and that data lifetime exceeds code lifetime. The areal data types proposed by the author are intended for implementation within the instrumental approach. Areal data types provide orthogonal persistence and are integrated with code defined in the type hierarchy for instruments. Areal data types are implemented in the MultiOberon system compilers. Reference-to-address conversion methods are based on runtime system metadata. The integration of areal types resulted in the development of an additional test in MultiOberon. The restrictive semantics of MultiOberon make it possible to turn off pointer usage permissions and switch on the use of areal types. An areal is fixed for a specifically marked data type. Areal references are implemented as persistent references into an areal array. This paradigm solves the problem of preserving references across software restarts. The novelty of the work lies in the use of areal references, which combine the advantages of index types and pointer types. This approach implements the principles of generic programming without creating dependencies on type extensions and template specifications. An example of a generic sorting algorithm is provided for areal types. The new data type is distinguished by compactness and simplicity in comparison with dynamic structures, and it demonstrates advantages for systems whose complex technological-object data structures fit within relatively static bounds.
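The key property of areal references (combining index-type and pointer-type advantages: a reference is a stable index into a persistent array rather than a memory address) can be mirrored in a short Python sketch. This only mirrors the idea; the real mechanism is an Oberon-language construct in the MultiOberon compilers, and the record layout below is invented.

```python
records = [{"key": 30}, {"key": 10}, {"key": 20}]   # the "areal" array of records

# references are plain indices, so they remain valid across a restart
# as long as the array itself is persisted
refs = list(range(len(records)))

# a generic sort over references: the records themselves are never moved
refs.sort(key=lambda r: records[r]["key"])

ordered_keys = [records[r]["key"] for r in refs]
```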
Software systems and computational methods. 2025;(2):115-131

The use of neural networks for real-time big data analysis

Makarov I.S., Raikov A.V., Kazantsev A.A., Nekhaev M.V., Romanov M.A.

Abstract

The article is devoted to the exploration of the possibilities of using neural networks for real-time big data analysis in the field of information security. The relevance of the topic is due to the rapid growth of generated data volumes, the complexity of cyberattack methods, and the necessity to develop new effective approaches to information protection. The work examines in detail the key tasks addressed using neural network technologies, including anomaly detection in network traffic, prevention of distributed DDoS attacks, classification of malware, and forecasting new cyber threats. Special attention is paid to the unique advantages of neural networks, such as the ability to process extremely large volumes of heterogeneous data, identify complex non-obvious attack patterns, continuously learn and adapt to rapidly changing conditions in the cyber environment. The study utilizes deep learning methods, including convolutional and recurrent neural networks, for big data analysis and cyber threat identification. Approaches for real-time data processing and model robustness assessment are applied. The conducted research demonstrates that modern neural network architectures possess significant potential for revolutionary transformation of information security systems. Key advantages include ultra-high speed of streaming data processing, the capability to detect previously unknown types of attacks through the identification of complex correlations, as well as the ability to predict threats based on historical data analysis. However, the research also revealed serious technological challenges: excessive demand for computational resources to train complex models, the "black box" problem in interpreting decisions, the vulnerability of the neural network models themselves to specialized attacks (adversarial attacks), and ethical aspects of automated decision-making in cybersecurity. 
Successful implementation cases are presented, including next-generation intrusion detection systems and malware analysis platforms. The authors see promising directions for further research in the development of energy-efficient neural network models, creation of explainable AI methods for security, and advancement of adaptive systems capable of evolving alongside cyber threats. The results obtained are valuable for cybersecurity specialists, developers of protective solutions, and researchers in the field of artificial intelligence.
Software systems and computational methods. 2025;(2):132-147

Implementation of Drag&Drop behavior in an Android application based on the gesture processing API

Petrovsky A.A., Rysin M.L.

Abstract

The subject of the study is the organization of the movement of user interface objects (User Interface, UI) in Android applications. The focus of the research is the development of a software solution for implementing Drag&Drop behavior in mobile Android applications using the modern user interface framework Jetpack Compose. The relevance of the presented work is due to the need to create flexible and intuitive mechanisms for user interaction with the interface of mobile Android applications. The main results of the research include: 1. Development of a set of Composable functions for managing the Drag&Drop state of user interface objects. 2. Integration of the behavior of the "source" and "receiver" of the moving UI objects with the possibility of decoration. 3. Overcoming the limitations of the built-in tools of the Jetpack Compose framework. 4. Creation of a mechanism for handling user movement gestures. 5. Formation of a universal approach to implementing interactive interaction with interface elements. The methodology is based on the application of the architectural pattern MVI (Model-View-Intent), which provides effective management of interface state, and the use of object-oriented design patterns, in particular, the "decorator" pattern. Research methods include analyzing existing approaches to implementing Drag&Drop, designing a software solution, developing a prototype, and testing it within a mobile application. The scientific novelty of the research lies in the development of an innovative approach to organizing Drag&Drop interaction, which allows overcoming the limitations of built-in tools of the Jetpack Compose framework. The proposed solution is characterized by: - complete isolation of Drag&Drop components; - possibility of decorating moving UI objects; - flexible configuration of the behavior of the source and receiver of interface objects; - absence of rigid connections between user interface components. 
The practical significance of the work lies in the development of tools that can be successfully applied in various mobile software projects requiring complex user interactions. The conclusions of the research demonstrate the effectiveness of the proposed solution in overcoming the existing limitations of Jetpack Compose and open new opportunities for creating more dynamic and user-friendly interfaces in mobile applications.
Software systems and computational methods. 2025;(2):148-164

Simulation modeling of the functional twin of the microclimate control system of an intelligent building

Dushkin R.V., Klimov V.V.

Abstract

The presented work is dedicated to the development of an intelligent microclimate control system for buildings (an HVAC system). The research focuses on addressing the problem of insufficient adaptability of traditional approaches (PID controllers, knowledge-based systems) in the context of dynamically changing internal environmental parameters of a building. The main emphasis is on creating a hybrid method that combines the advantages of functional programming and artificial intelligence. The study examines issues of energy efficiency, accuracy in maintaining comfortable conditions for visitors of intelligent buildings, and the robustness of the HVAC system to external disturbances. A crucial task is to minimize operational costs while ensuring the safety and reliability of equipment operation. The presented research covers all stages of software development—from designing its architecture to practical testing. The core of the research is based on the approach of a functional twin implemented in Haskell. LSTM networks are used for forecasting, genetic algorithms for optimization, and the RETE algorithm for rule processing. Verification is conducted through simulation modeling, generating 1440 data points. The scientific novelty of the presented work lies in the application of a categorical-theoretic approach to model the functional twin, where each device (both sensors and actuators) is represented as a composition of pure functions. Results demonstrate a 14.7% reduction in energy consumption, an increase in the operational time within a comfortable range to 94.7%, and a threefold reduction in the switching frequency of the HVAC system modes. Practical significance is confirmed by a 15% decrease in operational costs and improved cyber resilience through the use of immutable data structures. The conclusions indicate that the combination of functional programming with a hybrid approach in artificial intelligence provides a balance of key system parameters. 
The proposed architecture can serve as a benchmark for integrating IoT and cyber-physical systems within the framework of Industry 4.0.
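The functional-twin idea above, where each device is represented as a composition of pure functions, can be reduced to a short sketch. The device behaviors below (a sensor calibration offset, a simple proportional controller, an actuator clamp) are invented stand-ins, not the article's Haskell models.

```python
from functools import reduce

def compose(*fns):
    """Left-to-right composition of pure functions."""
    return reduce(lambda f, g: lambda x: g(f(x)), fns)

sensor     = lambda true_temp: true_temp + 0.3            # assumed calibration offset
controller = lambda measured: (22.0 - measured) * 0.5     # P-control toward 22 °C
actuator   = lambda signal: max(-1.0, min(1.0, signal))   # clamp to valve range

twin = compose(sensor, controller, actuator)
command = twin(20.0)   # a cold room yields a positive heating command
```

Because every stage is a pure function over immutable values, the twin is trivially testable and replayable, which is the property the abstract links to cyber resilience.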
Software systems and computational methods. 2025;(2):165-174

Development of the PLAY VISION AI project for watching sports matches using artificial intelligence

Kovalev S.V., Smirnova T.N., Zverev R.E., Rakov I.V.

Abstract

With the development of digital technologies, the sports industry faces a growing need for advanced analytical tools. In football, the use of computer vision and machine learning technologies to analyze games is becoming not just a trend but a necessity for maintaining competitiveness. Computer vision and machine learning in sports analytics make it possible to automatically extract meaningful data from match videos, which significantly increases the speed and accuracy of analysis compared to traditional methods. Such technologies can provide coaches with detailed reports on the movements, positioning, and tactics of players in real time. The goal is to create a system that allows for a comprehensive analysis of football matches using the latest advances in artificial intelligence and computer vision. The main method is a review and analysis of publications on the research topic, together with an analysis of modern technologies that allow automatic processing of video data. The main methodology is the concept of developing the PLAY VISION AI project as a way to watch sports matches using artificial intelligence to evaluate the effectiveness of game strategies. The relevance of this work stems from the adaptation of modern technical means to improve analytical capabilities in sports. Main results: algorithms for calibration and correction of video distortions from matches have been developed; methods for detecting and tracking reference points and players have been developed; algorithms for mapping image coordinates to real coordinates on the field have been implemented; and the developed methods have been integrated into a single system with an interface for end users. The developed system will provide coaches and analysts with tools to evaluate the effectiveness of game strategies and prepare for upcoming matches, and it will also contribute to the further development of analysis technologies in sports.
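The image-to-field mapping step mentioned in the results is commonly done by applying a 3x3 homography H to a pixel: field = H · [u, v, 1], then dividing by the homogeneous coordinate. The matrix below is an invented example (a pure scale and shift), not one estimated from real match footage, and whether the project uses exactly this formulation is an assumption.

```python
H = [
    [0.1, 0.0, -5.0],   # 10 px per metre horizontally, origin shifted by 5 m
    [0.0, 0.1, -3.0],   # 10 px per metre vertically, origin shifted by 3 m
    [0.0, 0.0,  1.0],
]

def to_field(u, v):
    """Map a pixel (u, v) to field coordinates via the homography H."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)   # divide out the homogeneous coordinate

player_xy = to_field(250, 130)   # pixel position -> metres on the pitch
```

In practice H is estimated from the detected reference points (pitch lines, corners) the abstract mentions, which is where the calibration and distortion-correction algorithms come in.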
Software systems and computational methods. 2025;(2):175-189

Modern methods of preventing DDoS attacks and protecting web servers

Kozyreva N.I., Muhtulov M.O., Ershov S.A., Novoseltseva S.V., Akhmadullin D.A.

Abstract

The object of the study is modern methods and technologies for protecting web servers from distributed denial of service (DDoS) attacks. The subject of the research covers current strategies for preventing and mitigating DDoS threats, including a detailed classification of attacks by types and vectors of impact. Special attention is paid to the mechanisms of DDoS attack effects on information systems, addressing both the technical aspects of operational disruptions and their consequences for business processes. The field of study analyzes modern technological protection solutions: Anycast routing, rate limiting, behavioral analysis systems for network traffic, and CAPTCHA mechanisms. Additionally, the integration of innovative approaches with traditional cybersecurity tools—such as firewalls, intrusion prevention systems (IPS), and protective proxy servers—is explored. The relevance of the research is determined by the rapid digitization and exponential growth in the complexity of cyberattacks, making the issue of DDoS protection critically important for ensuring the resilience of web infrastructures. The methodology includes an analysis of DDoS attacks at the network, transport, and application levels, assessing their impact on IT systems. Modern protective technologies are examined, including anomaly detection systems, load balancing, ML traffic filtering, and cloud solutions. Special attention is given to the adaptability and scalability of protection. The scientific novelty of the work lies in a comprehensive analysis of the economic and technical aspects of countering DDoS threats, including an assessment of the cost and effectiveness of various solutions for businesses of different scales. The research offers practical recommendations for building multi-layered protection that combines innovative approaches (machine learning, cloud services) with proven methods (firewalls, IPS). 
An analysis of real cases demonstrates the effectiveness of adaptive strategies against modern complex attacks. The conclusions emphasize the need for a proactive approach to security that considers both technological and organizational protective measures. The results obtained have practical value for cybersecurity specialists, system administrators, and developers of protective solutions, providing them with a methodological basis for creating DDoS-resistant web infrastructures. The work also outlines promising directions for further research in the field of intelligent detection and neutralization systems for attacks.
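One of the mitigation techniques named above, rate limiting, is often implemented as a token bucket: each client gets `capacity` tokens that refill at `rate` per second, and a request is served only if a token is available. The parameter values below are illustrative.

```python
class TokenBucket:
    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate               # tokens added per second
        self.tokens = float(capacity)  # start full
        self.last = 0.0

    def allow(self, now):
        """Admit one request at time `now` if a token is available."""
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=3, rate=1.0)
burst = [bucket.allow(0.0) for _ in range(5)]   # a flood at t = 0
later = bucket.allow(2.0)                       # after a 2-second pause
```

A burst exhausts the bucket after `capacity` requests, while a well-behaved client that paces its traffic is never throttled, which is exactly the asymmetry that makes the technique useful against volumetric floods.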
Software systems and computational methods. 2025;(2):190-203

Analysis of Spatiotemporal Motion Patterns in Aerial Images Using Optical Flow

Rodionov D.G., Sergeev D.A., Konnikov E.A., Pashinina P.A.

Abstract

This study focuses on the analysis of spatiotemporal motion patterns in aerial imagery using the optical flow method. With the advancement of remote sensing technologies and the widespread use of unmanned aerial vehicles (UAVs), the need for accurate and automated analysis of natural and anthropogenic dynamics is increasing. The work emphasizes the detailed examination of motion direction and intensity in high-resolution images. Existing methods of optical flow estimation are considered, including classical approaches such as the Lucas-Kanade and Horn-Schunck methods, as well as dense optical flow calculated using the Farnebäck method. The latter is applied as a core technique for constructing velocity vector fields, which serve as the basis for segment-wise motion distribution analysis, direction visualization, and heatmap generation. The proposed approach enables the identification of structural patterns and local motion features, which is particularly relevant for infrastructure monitoring and environmental risk assessment. It is also demonstrated that median motion estimates are more robust to noise and local outliers than mean values, thus improving analysis reliability. The research method is based on the calculation of dense optical flow using the Farnebäck algorithm, followed by statistical analysis of motion velocity characteristics and directional patterns across image segments. The scientific novelty of this study lies in the development of a comprehensive approach for analyzing spatiotemporal motion characteristics in aerial images using dense optical flow computed via the Farnebäck method. Unlike traditional techniques focused on global motion estimation, the proposed methodology emphasizes local patterns, enabling detailed segment-based evaluation of motion direction and intensity. 
For the first time, this study integrates both quantitative and visual analysis methods: histograms, heatmaps, calculations of median and mean velocities, and metrics such as structural similarity index (SSIM) and mean squared error (MSE) between image segments. This approach allows for detecting motion anomalies, identifying highly dynamic regions, and assessing structural stability. The method is tailored to UAV imagery and does not require large training datasets, making it suitable for low-resource environments. The results have practical relevance for automated infrastructure monitoring and environmental risk assessment.
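The abstract's claim that median motion estimates are more robust to outliers than means can be checked with a toy, self-contained sketch. The flow vectors below are hypothetical, not data from the article; in practice they would come from a dense Farnebäck flow field within one image segment.

```python
from math import atan2, degrees, hypot
from statistics import mean, median

# Hypothetical dense-flow vectors (u, v) in pixels/frame for one image
# segment: coherent rightward motion plus two noisy outlier vectors.
flow = [(1.0, 0.1), (0.9, 0.0), (1.1, 0.2), (1.0, -0.1),
        (0.8, 0.1), (1.0, 0.0), (12.0, 9.0), (-10.0, 11.0)]

speeds = [hypot(u, v) for u, v in flow]

# Component-wise medians give an outlier-robust segment motion vector.
med_u = median(u for u, _ in flow)
med_v = median(v for _, v in flow)

print(f"mean speed:   {mean(speeds):.2f} px/frame")    # inflated by outliers
print(f"median speed: {median(speeds):.2f} px/frame")  # near the true ~1 px/frame
print(f"robust direction: {degrees(atan2(med_v, med_u)):.1f} deg")
```

The two outlier vectors pull the mean speed several times above the true segment motion, while the median speed and the direction of the component-wise median vector stay close to the underlying rightward flow.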
Software systems and computational methods. 2025;(2):204-216

Method of UAV Aerial Image Analysis Based on SSIM and MSE for Assessing the Reliability of Technical Systems

Rodionov D.G., Sergeev D.A., Konnikov E.A., Popova S.D.

Abstract

This article presents an automated method for analyzing aerial images from unmanned aerial vehicles (UAVs), aimed at improving the reliability of technical systems and tracking changes in natural and anthropogenic processes. The objective of this work is to develop an algorithm that ensures accurate detection of anomalies and prediction of potential failure threats based on image processing. The methodology involves the application of the Structural Similarity Index (SSIM) and Mean Squared Error (MSE) for assessing spatial variations between adjacent segments of the imagery. The proposed approach is robust to illumination changes, has low computational cost, and can be integrated into autonomous UAV systems. This work is based on computer modeling and statistical analysis of anomaly detection accuracy. The algorithm was tested on various datasets of aerial images using machine vision techniques and mathematical statistics to evaluate the effectiveness of the proposed method. The results include the development and validation of the algorithm, the construction of SSIM and MSE heatmaps, and the evaluation of the accuracy and reliability of the method. The obtained data confirm its effectiveness in automated monitoring of infrastructure facilities and the assessment of environmental risks. The developed method can be applied to automated surveillance of engineering structures, monitoring the condition of agricultural lands, analyzing the consequences of natural disasters, and environmental control, and it can be integrated into intelligent systems for managing the reliability of technical objects. In conclusion, the developed algorithm significantly enhances the accuracy of anomaly detection, minimizes the influence of external factors, and automates the aerial image processing workflow.
Its application contributes to improving the reliability of technical systems and reduces the probability of failures through the early identification of potential threats. The scientific novelty lies in the development of a new method for assessing spatial variations based on a combination of the Structural Similarity Index (SSIM) and Mean Squared Error (MSE), which provides high accuracy in anomaly detection. In contrast to traditional image analysis methods, the proposed algorithm is robust to changing imaging conditions, and its computational efficiency allows real-time application. Furthermore, the method can be integrated into autonomous monitoring systems, expanding the capabilities of intelligent data analysis from UAVs. The obtained results and proposed solutions can be used to improve technologies for automated condition monitoring of objects and analysis of the dynamics of natural processes.
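The segment-wise SSIM/MSE comparison described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it uses the standard single-window SSIM formula with the usual constants C1 = (0.01·L)² and C2 = (0.03·L)², and the pixel values are hypothetical.

```python
from statistics import mean

def mse(x, y):
    """Mean squared error between two equally sized pixel segments."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def ssim(x, y, L=255):
    """Single-window SSIM over two flat pixel segments, with the
    conventional stabilizing constants C1=(0.01*L)^2, C2=(0.03*L)^2."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = mean(x), mean(y)
    vx = sum((a - mx) ** 2 for a in x) / len(x)
    vy = sum((b - my) ** 2 for b in y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx * mx + my * my + c1) * (vx + vy + c2)
    return num / den

seg = [10, 12, 11, 13, 12, 11, 10, 12]        # reference segment
shifted = [v + 1 for v in seg]                # slight brightness shift
anomaly = [10, 12, 200, 210, 12, 11, 10, 12]  # localized structural change

print(ssim(seg, shifted), ssim(seg, anomaly))
print(mse(seg, shifted), mse(seg, anomaly))
```

A uniform brightness shift keeps SSIM close to 1 and MSE small, while a localized structural change collapses SSIM and inflates MSE, which is exactly the contrast the heatmaps in the abstract exploit for anomaly detection.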
Software systems and computational methods. 2025;(2):217-230
