No 3 (2025)

Articles

Analysis of modern optimization methods in React

Ratushniak E.A.

Abstract

The article presents a comprehensive analysis of performance optimization methods for React applications: memoization (React.memo, useMemo, useCallback), list virtualization (react-window, react-virtualized), code splitting (React.lazy, Suspense), optimization of the Context API, and the new capabilities of React 18 (automatic batching, concurrent rendering). The analysis and performance data are based on existing published research as well as the documentation of the optimization methods themselves. Developers are advised to use the React Profiler API and the Chrome DevTools Performance panel to measure the performance of React applications. A test SPA with dynamic data filtering and three rendering variants was developed. Using the React Profiler API, the processing time for 1,000 to 20,000 elements was measured ten times, followed by statistical processing. The methodology includes a comprehensive theoretical analysis, an examination of the mechanisms of the various optimization methods, and their impact on performance. The scientific novelty of the article lies in the comprehensive analysis and practical comparison of the key approaches to optimization within the React framework. The practical significance of the work is justified by the fact that the results can be used directly in commercial software development. Additionally, the article presents an experimental comparison of list virtualization libraries by means of a computer experiment followed by statistical analysis. The results showed that react-window provides up to a 95% increase in speed and remains stable under increasing load, while react-virtualized offers richer functionality at the cost of slightly higher latency, which is confirmed by other studies. The article contains not only a theoretical description but also practical examples that demonstrate the optimization methods in real applications, which confirms its practical significance.
Software systems and computational methods. 2025;(3):1-9

Two-step semantic clustering of embeddings as an alternative to LDA for informetric analysis of industry news

Konnikov E.A., Kryzhko D.A.

Abstract

The subject of the research is the development and validation of an alternative approach to thematic modeling of texts aimed at overcoming the limitations of classical Latent Dirichlet Allocation (LDA). The object of the study is short Russian-language news texts about nuclear energy, presented in the form of the "AtomicNews" corpus. The authors thoroughly examine various aspects of the topic, such as the impact of sparsity on the quality of thematic modeling, issues of theme interpretability, and the limitations of a priori fixing the number of topics. Special attention is paid to the geometric interpretation of text semantics, in particular, the transformation of lexical units into the space of pre-trained embeddings and subsequent clustering aimed at forming document thematic profiles. The research focuses on the comparative analysis of the new method and LDA using coherence, perplexity, and thematic diversity metrics. The proposed approach aims to create an interpretable, computationally lightweight, and noise-resistant model suitable for online monitoring of news flows. The research methodology is based on a two-stage semantic smoothing process—embedding representation of lemmas using Sentence-BERT and agglomerative cosine clustering, followed by the application of K-means to the thematic profiles of documents. The scientific novelty of the study lies in the development and empirical justification of a thematic modeling scheme that replaces probabilistic word generation with geometric smoothing of embeddings. The proposed approach departs from the assumptions of the "bag of words" and a fixed number of topics, forming thematic coordinates of documents through density clusters in semantic space. This enhances theme interpretability, reduces sensitivity to text sparsity, and avoids the collapse of topic distribution in short messages. Experiments on the "AtomicNews" corpus demonstrated a statistically significant improvement compared to classical LDA: a 5% reduction in perplexity, a 0.15-point increase in topic coherence, and an increase in thematic diversity. The method also demonstrated computational efficiency—the entire procedure takes seconds on a CPU, making it suitable for application in resource-constrained environments. Thus, the transition from probabilistic decomposition to geometric analysis of embeddings represents a promising direction in thematic modeling of industry texts.
Software systems and computational methods. 2025;(3):10-19
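
As an illustration of the two-stage scheme described in the abstract (not the authors' exact pipeline), the following minimal Python sketch embeds lemmas with a generic multilingual Sentence-BERT checkpoint, merges them by agglomerative cosine clustering, builds per-document thematic profiles, and clusters the profiles with K-means; the model name, distance threshold, topic count, and the corpus file atomicnews.txt are illustrative assumptions.

```python
# A minimal sketch of the two-stage scheme; requires sentence-transformers
# and scikit-learn >= 1.2 (for the "metric" parameter name).
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering, KMeans

# Hypothetical corpus file: one lemmatized document per line.
docs = [line.strip() for line in open("atomicnews.txt", encoding="utf-8")]
lemmas = sorted({w for d in docs for w in d.split()})

# Stage 1: embed lemmas and merge them into dense semantic clusters
# using cosine distance (the "geometric smoothing" step).
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
emb = model.encode(lemmas, normalize_embeddings=True)
agg = AgglomerativeClustering(n_clusters=None, distance_threshold=0.4,
                              metric="cosine", linkage="average")
lemma_cluster = dict(zip(lemmas, agg.fit_predict(emb)))
n_sem = max(lemma_cluster.values()) + 1

# Thematic profile per document: the share of its lemmas
# falling into each semantic cluster.
profiles = np.zeros((len(docs), n_sem))
for i, d in enumerate(docs):
    for w in d.split():
        profiles[i, lemma_cluster[w]] += 1
profiles /= profiles.sum(axis=1, keepdims=True).clip(min=1)

# Stage 2: K-means over document profiles yields the final topics.
topics = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(profiles)
```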

Designing the architecture of the client-server interaction protocol for web applications based on WebSocket

Fedulov A.A.

Abstract

The subject of the research is the design of the architecture of a client-server protocol for web applications based on WebSocket technology. The object of the study is the mechanisms of bidirectional real-time data exchange and existing solutions based on HTTP/HTTPS, SSE, SignalR, gRPC-Web, Socket.IO, and WS. The author examines in detail such aspects of the topic as the formalization and unification of logical channel structures, message routing, connection activity monitoring, and automatic session recovery. Special attention is given to analyzing the advantages and limitations of each approach in order to derive requirements for a new protocol architecture. The work includes a comparative analysis of existing libraries and technologies, which helped identify the key parameters of an effective, scalable, and fault-tolerant implementation of bidirectional interaction. The design method used is the MVCS (Model-View-Controller-Service) pattern, supplemented by the modular organization of the SingleSocket library and an analytical comparison of existing technological solutions for client-server interaction. The main result of the conducted research is the development and implementation of a protocol architecture that provides a balanced and technically justified approach to bidirectional real-time data exchange. A significant contribution of the author to the research topic is the formalization of a declarative structure of channels tied to controllers, the implementation of a configurable server-side connection activity monitoring mechanism with automatic termination of inactive sessions, and built-in connection recovery logic on the client. The novelty of the research lies in the application of the MVCS architectural model to improve structuring, the use of a universal JSON format with channel routing, and the modular implementation of components for reuse and simplified integration. The proposed architecture serves as a solid foundation for creating scalable, reliable, and flexible modern information systems.
Software systems and computational methods. 2025;(3):20-30
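
The following minimal sketch illustrates the general idea of channel-routed JSON messaging with server-side activity monitoring; it uses the third-party Python websockets package rather than the author's SingleSocket library, and the envelope fields ("channel", "payload") and the example "chat" channel are illustrative assumptions, not the article's wire format.

```python
# A minimal sketch of channel routing over WebSocket; requires a recent
# version of the "websockets" package (single-argument handler signature).
import asyncio, json
import websockets

HANDLERS = {}

def channel(name):
    """Declaratively bind a handler to a logical channel (controller-style)."""
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@channel("chat")
async def chat_handler(ws, payload):
    # Echo back on the same logical channel.
    await ws.send(json.dumps({"channel": "chat", "payload": payload}))

async def router(ws):
    # Route every incoming JSON envelope to its channel handler;
    # unknown channels get an error envelope back.
    async for raw in ws:
        msg = json.loads(raw)
        handler = HANDLERS.get(msg.get("channel"))
        if handler is None:
            await ws.send(json.dumps({"channel": "error",
                                      "payload": "unknown channel"}))
        else:
            await handler(ws, msg.get("payload"))

async def main():
    # ping_interval/ping_timeout provide server-side activity monitoring:
    # unresponsive connections are closed automatically.
    async with websockets.serve(router, "localhost", 8765,
                                ping_interval=20, ping_timeout=20):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```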

Specification of a regression analysis of the impact of the information environment on a company's financial indicators

Konnikov E.A., Polyakov P.A., Rodionov D.G.

Abstract

The subject of the research is the development and experimental validation of a comprehensive regression specification designed for the quantitative assessment of the elasticity of market stock prices with respect to thematic information flows. The object of the research comprises daily time series of thematic intensities, extracted by the Latent Dirichlet Allocation algorithm from an industry news corpus, and the close-to-open exchange differential of the shares of PJSC "GMK Norilsk Nickel". The authors thoroughly examine aspects such as Corr–γ–split normalization, which eliminates the bimodality of the distributions; "scale-asymmetry" orthogonalization, which reduces multicollinearity; Partial Least Squares projection for aggregating features; and regularized Ridge regression for robust forecasting. Special attention is given to how the combination of these stages forms a statistically sound and interpretable bridge between textual signals and financial metrics, ensuring the practical applicability of the model to the dynamics of high-frequency informational disturbances. The methodological foundation consists of Corr–γ–split normalization, "Sum/Diff" orthogonalization, Partial Least Squares projection, and Ridge regression with cross-validation, combined in a full factorial experiment over forty-five alternative specifications. The main conclusion of the conducted research is the confirmation that only the comprehensive integration of Corr–γ–split normalization, "Sum/Diff" orthogonalization, PLS projection, and Ridge regression forms a statistically robust and practically applicable model of the influence of the news background on the market price. The novelty of the work lies in the introduction of a metrically justified threshold T*, which eliminates the inherent bimodality of LDA intensity distributions, as well as in the development of interpretable decompositions of flows into size and asymmetry, which enhances the explanatory power of the elasticity coefficients. Empirical testing on data from PJSC "GMK Norilsk Nickel" showed a 13% reduction in RMSE, an increase in CV-R² to 0.78, and an improvement of 0.32 in the aggregate quality score compared to the baseline model. The obtained results show that the proposed specification is scalable to various corporate or industry information flows and can serve as a reliable tool for monitoring and forecasting market indicators under high-frequency informational disturbances.
Software systems and computational methods. 2025;(3):31-44
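
A minimal sketch of the final two modeling stages (PLS projection followed by cross-validated Ridge regression) is shown below, assuming X already contains the normalized and orthogonalized thematic intensities; synthetic data stands in for the proprietary series, and the component count and alpha grid are arbitrary choices, not the article's.

```python
# PLS projection for feature aggregation, then RidgeCV on the scores.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))   # stand-in for processed thematic intensities
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Aggregate correlated thematic features into a few latent components.
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)

# Regularized final regression over the PLS scores.
ridge = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(pls.transform(X_tr), y_tr)
print("held-out R^2:", ridge.score(pls.transform(X_te), y_te))
```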

Processing requests with hierarchical nature of shared resources

Kirilov V.S.

Abstract

The subject of this research is the development and analysis of a data structure and algorithm for managing parallel message execution in a microservice architecture without a message broker. In the context of the transition to microservice architectures and asynchronous messaging, especially in the absence of a centralized broker, effective methods are needed to ensure the processing order of messages that affect shared resources. The problem is that traditional methods, such as segmentation, do not guarantee that the message processing order is respected during parallel execution, and they become more complicated when access to resources must be synchronized. As an alternative to traditional approaches, a method using a shared queue and a thread pool is considered. The paper proposes a data structure that enables parallel message processing in the absence of lock conflicts, thereby ensuring the correct order of operations on shared resources and avoiding deadlocks. The main goal is to create resource access control mechanisms adapted to microservice architectures that neither complicate the message processing logic nor introduce the usual multithreading problems. The paper uses an analytical approach to the development of the data structure and algorithm, based on a formalization of the synchronization problem, as well as a theoretical analysis of the algorithmic complexity and correctness of the proposed solution. The scientific novelty of the work lies in a new data structure that uses ordered sets and waiting lists for the effective management of parallel processing of asynchronous messages in microservice architectures, especially where there is no message broker. The proposed algorithm makes it possible to determine dynamically the locks associated with messages and to separate blocking from non-blocking messages, allowing them to be executed in parallel. The proposed data structure and algorithm make it possible to change the granularity of locked resources without complicating the message processing procedures, and they simplify multithreaded programming by allowing each message processing procedure to be treated as single-threaded. The algorithm is free of deadlocks on resources, which increases the overall fault tolerance of the system. The identified shortcomings related to resource locking are proposed to be addressed in further research.
Software systems and computational methods. 2025;(3):45-58
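
The following Python sketch conveys the core idea rather than the author's exact structure: each message declares the shared resources it touches, messages are enqueued atomically on per-resource FIFO queues, and a message starts only when it heads every queue it occupies and all of its resources are free. Unrelated messages run in parallel, per-resource order is preserved, and because enqueueing is atomic across all queues, waiting messages cannot deadlock. All names here are illustrative.

```python
import threading
from collections import deque
from concurrent.futures import ThreadPoolExecutor

class OrderedDispatcher:
    """Parallel message execution with per-resource FIFO ordering."""

    def __init__(self, workers=4):
        self._pool = ThreadPoolExecutor(max_workers=workers)
        self._mutex = threading.Lock()
        self._queues = {}   # resource -> deque of waiting messages
        self._busy = set()  # resources held by in-flight messages

    def submit(self, resources, handler, *args):
        msg = (frozenset(resources), handler, args)
        with self._mutex:
            if not msg[0]:                  # non-blocking message: run at once
                self._pool.submit(self._run, msg)
                return
            for r in msg[0]:                # atomic enqueue on all queues
                self._queues.setdefault(r, deque()).append(msg)
            self._drain(msg[0])

    def _drain(self, resources):
        # Start every waiting message that heads all of its queues
        # and whose resources are all free.
        for r in list(resources):
            q = self._queues.get(r)
            if not q:
                continue
            msg = q[0]
            res = msg[0]
            if res & self._busy or any(self._queues[x][0] is not msg
                                       for x in res):
                continue
            for x in res:
                self._queues[x].popleft()
            self._busy |= res
            self._pool.submit(self._run, msg)

    def _run(self, msg):
        res, handler, args = msg
        try:
            handler(*args)   # handler can be reasoned about as single-threaded
        finally:
            with self._mutex:
                self._busy -= res
                self._drain(res)
```

Because two messages sharing resources are always appended to the shared queues in one consistent order, the head-of-every-queue condition can never produce the circular wait that causes classical deadlock.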

On the division of responsibility for errors in the operation of robotic systems

Tikhanychev O.V.

Abstract

The subject of the research is ensuring the safety of robotic systems for various purposes, primarily those controlled by artificial intelligence; the object of the research is the problem of sharing responsibility for the development and operation of such systems. The relevance of both is determined by the existing contradiction between the need for autonomous operation of robotic systems and the complexity of implementing this requirement in software. At the same time, in robotics it is quite often the errors of control algorithms that are the source of most problems. Based on an analysis of the regulatory documents governing the development of artificial intelligence tools, possible problems of ensuring the safe use of autonomous robotic systems are analyzed. The conclusion is drawn that, in their current state, these documents do not solve the security problem of artificial intelligence systems. A systematic approach was chosen as the methodological basis of the study. The use of a systematic approach, decomposition methods, and comparative analysis made it possible to consider as a whole the problems of dividing the areas of responsibility of developers and operators of autonomous and partially autonomous robots that implement control principles based on artificial intelligence. The source base of the research consists of publicly available scientific articles, regulatory documents, and legislation. It is concluded that existing approaches to the training and self-learning of artificial intelligence systems that control autonomous robots "blur" the boundaries of responsibility of the participants in the process, which, in theory, can lead to critical situations during operation. With this in mind, based on an analysis of a typical development and application process, it is proposed to clarify the distribution of responsibility and to add new participants to the process: specialists focused on the safety and impartiality of artificial intelligence (AI alignment), as well as a group approach to the development of artificial intelligence and machine-learning algorithms that reduces the subjectivity factor. In theory, applying the principles of responsibility sharing synthesized in the article will increase the safety of robotic systems based on artificial intelligence.
Software systems and computational methods. 2025;(3):59-71

Automating Windows application deployment using Alt Domain GPO

Tomilova S.D., Shibaev D.O.

Abstract

The subject of the study is the process of automating the deployment of Windows applications in mixed IT infrastructures that include Alt Linux and Windows systems. Special attention is paid to the possibilities of the Alt Domain for applying group policies (GPO) in order to centrally manage the installation and configuration of applications such as Yandex Browser and Yandex Telemost. The paper analyzes the effectiveness of using GPO in the Alt Domain to ensure network stability, ease of administration, and reduced setup time. The authors examine in detail the integration of GPO with Samba, which provides compatibility between Linux and Windows. This approach may be useful for organizations with a hybrid IT infrastructure. The topic is relevant in the context of the growing need to automate management processes, which makes it in demand for both commercial and educational institutions. The methodology includes configuring the Alt Domain to work with GPO, testing the automated deployment of Windows applications, and analyzing compatibility and stability in a mixed environment. The scientific novelty of the research lies in the adaptation and application of Alt Domain tools for the centralized management of Windows application deployment using group policies (GPO). For the first time, the possibility of successful automated deployment of applications such as Yandex Browser and Yandex Telemost in mixed IT environments including Alt Linux and Windows has been demonstrated. The results confirmed the hypotheses about the compatibility of GPO with Alt Linux through integration with Samba, which makes it possible not only to unify application configuration processes but also to increase the stability and reliability of the network infrastructure. This approach reduces the burden on administrators, optimizes time costs, and provides flexibility in managing IT systems. The main conclusions emphasize the promise of using the Alt Domain in organizations with a hybrid infrastructure. In the future, it is planned to expand the study to include an analysis of system scalability, compatibility with new versions of Windows, and the introduction of additional applications and automation tools. The results of the work are valuable for organizations seeking effective integration and automated management of mixed IT infrastructures.
Software systems and computational methods. 2025;(3):72-85

Methodological Concept for Organizing an Information System of Public and Professional Expertise in the Field of Ecology and Environmental Improvement

Zotov V.V., Zotov M.V., Aleksiadis N.F., Yushin V.V.

Abstract

Today there is a gap between the requirement for independent and objective analysis of decisions by public authorities and the mechanisms for involving stakeholders in this process. The key tool for achieving such objectivity is public and professional expertise. The object of the study is a system for organizing public and professional expertise of management decisions in the fields of ecology and urban environment improvement using digital platforms. The goal of the work is to create a concept for designing an information system that ensures the effective organization of public and professional expertise. Such a system should allow stakeholders to participate collectively in assessing the public importance of environmental and landscaping issues, contributing to management decisions through comprehensive discussion. The paper presents an integrated methodology for reconciling assessments in an information system of public and professional expertise, based on a synthesis of fuzzy logic, graph theory, and the Delphi method. Within the framework of developing the information system of public and professional expertise in the field of ecology and environmental improvement, conceptual models of operation were created for: 1) a subsystem for the selection and ranking of stakeholder representatives based on the processing of self-assessments through fuzzy logic; 2) a subsystem for configuring a relevant network from a pool of stakeholder representatives based on rooted trees from graph theory; 3) a subsystem for the expert evaluation of problems of public importance based on the Delphi method. The scientific novelty is defined by the combined use of mathematical and organizational methods, which makes it possible to overcome the uncertainty and subjectivity of expert self-assessments, to structure information flows in expert pools, and to ensure the consistency of decisions. The developed information system increases the quality and legitimacy of management decisions on issues of public importance.
Software systems and computational methods. 2025;(3):86-102
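
As a toy illustration of the first subsystem's principle (fuzzy processing of expert self-assessments), the sketch below fuzzifies 0-10 self-assessment scores with triangular membership functions and defuzzifies them into a single competence score used for ranking; the membership breakpoints, centroids, and sample experts are hypothetical and not taken from the article.

```python
# Fuzzify self-assessments, then rank experts by a defuzzified score.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical linguistic levels and their defuzzification centroids.
LEVELS = {"low": (0, 0, 5), "medium": (2, 5, 8), "high": (5, 10, 10)}
CENTROIDS = {"low": 2.0, "medium": 5.0, "high": 8.5}

def competence(self_scores):
    # Average membership across answers, then a weighted-centroid defuzzification.
    mu = {lvl: sum(tri(s, *abc) for s in self_scores) / len(self_scores)
          for lvl, abc in LEVELS.items()}
    total = sum(mu.values()) or 1.0
    return sum(mu[lvl] * CENTROIDS[lvl] for lvl in mu) / total

experts = {"E1": [7, 8, 6], "E2": [4, 5, 5], "E3": [9, 9, 10]}
ranking = sorted(experts, key=lambda e: competence(experts[e]), reverse=True)
print(ranking)  # experts ordered by defuzzified competence
```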

A general algorithm for eliminating critical conditions for solving the problem of controlling a real walking robot based on deep reinforcement learning methods

Kashko V.V., Oleinikova S.A.

Abstract

The object of the study is a mobile walking robot with two or more movable articulated limbs. The concept of a "critical condition" is introduced: a state in which the mechanism balances on the verge of falling (but does not fall) or in which mechanical components may be damaged by the generation of unacceptable joint angles. The subject of the study is a general algorithm for eliminating critical conditions, which makes it possible to train an agent based on a deep reinforcement learning algorithm directly on a real robot, without the risk of damaging its mechanisms and without interrupting the process of interaction with the environment to restore a stable state. The purpose of this work is to develop a general algorithm for the elimination of critical conditions in the context of adaptive control of a walking robot based on deep reinforcement learning algorithms. A comparison was made between the proposed and standard methods of applying deep reinforcement learning on a real robot. The experiments were conducted over 6,000 episodes of 300 steps each. The following quality metrics were selected for evaluation: the percentage of episodes without an actual fall, the percentage of fully completed episodes, and the maximum episode length. The algorithm is built around the concept of a "critical condition" and uses the following principles and methods: the trial-and-error method, the feedback principle, and keeping the projection of the center of gravity within the support polygon formed by the contact points of the limbs with the working surface, which ensures the balance of the structure and makes it possible to determine the boundary regions in which the robot is still stable. The scientific novelty of the work lies in the proposed approach, which allows an intelligent agent to control a physical robot directly, without pretraining in a simulation environment and subsequent sim-to-real transfer. The proposed algorithm is not aimed at improving the agent's performance; rather, it is intended to provide greater autonomy in the robot's learning process, directly on the hardware. The basic idea is to respond to a critical condition immediately with the fastest possible sequential return a certain number of steps back along the decision-making trajectory, ensuring that the agent remains in a stable and safe state at all times. Proximal policy optimization (PPO) was used as the deep reinforcement learning method. The comparative analysis showed that the proposed algorithm delivers a hundredfold increase in the stability of the mechanism.
Software systems and computational methods. 2025;(3):103-114
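
A minimal sketch of the rollback principle is given below. The environment and agent interfaces (env.step, env.inverse, env.observe, agent.act, agent.observe) are stub assumptions standing in for the real robot and the PPO agent, and the critical-condition test is reduced to a single stability-margin check; the real algorithm would also check joint limits.

```python
import random
from collections import deque

class StubEnv:
    """Toy stand-in for the real robot interface (not the authors' API)."""
    def reset(self):
        self.margin = 0.5
        return {"margin": self.margin}
    def step(self, action):
        self.margin = max(0.0, min(1.0, self.margin + action))
        return {"margin": self.margin}, self.margin, False
    def inverse(self, action):
        return -action               # undo one step along the trajectory
    def observe(self):
        return {"margin": self.margin}

class StubAgent:
    """Placeholder for the PPO agent."""
    def act(self, state):
        return random.uniform(-0.2, 0.2)
    def observe(self, *transition):
        pass                         # PPO rollout buffer update would go here

K_BACK = 5                           # how many steps to retrace

def is_critical(state):
    # Critical condition: centre-of-gravity projection margin too small.
    return state["margin"] < 0.05

def train_episode(env, agent, max_steps=300):
    history = deque(maxlen=K_BACK)   # recent (state, action) pairs
    state = env.reset()
    for _ in range(max_steps):
        action = agent.act(state)
        next_state, reward, done = env.step(action)
        if is_critical(next_state):
            # Fastest sequential return: undo the last K_BACK actions in
            # reverse order instead of letting the robot fall and restarting.
            while history:
                _, prev_action = history.pop()
                env.step(env.inverse(prev_action))
            agent.observe(state, action, -1.0, False)
            state = env.observe()    # stable state after the rollback
            continue
        agent.observe(state, action, reward, done)
        history.append((state, action))
        state = next_state
        if done:
            break

train_episode(StubEnv(), StubAgent())
```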

Models and algorithms of nonlinear regression analysis of time series

Sklyar A.Y.

Abstract

The analysis of data describing objects and processes is primarily intended to find dependencies within them and to identify the dynamics of their development. The goals of analysis and forecasting are to prepare materials for making informed decisions. This work examines the stages, methods, and algorithms of an analysis aimed at obtaining, first of all, functional dependencies suitable not only for describing but also for predicting the behavior of the studied objects and processes. The analysis itself is viewed as a multi-stage process that includes data preparation; identifying and, as far as possible, removing noise from the data; finding a long-term trend; identifying oscillatory components and the periodicity of fluctuations; assessing the dynamics of the fluctuation amplitude; and evaluating the accuracy of possible approximations of the process and the feasibility of forecasting given the level of noise in the data. A number of procedures are proposed to ensure a reasoned verification of hypotheses about the course of processes and to obtain analytical, including differential, dependencies based on optimization methods for fitting the parameters of nonlinear dependencies. The methods discussed allow a sufficiently objective data analysis and create the conditions for building a sound forecast. A numerical analysis was conducted based on multi-year statistics of production dynamics. The scientific novelty of the study lies in the development of a methodology for decomposing a process into trend and oscillatory components. Unlike most existing studies of process dynamics, significant attention is given to accounting for and assessing the noise level: the limits of accuracy of the obtained results, and especially of forecasts, are determined, which helps avoid unfounded conclusions and decisions and the construction of "overly" precise results from insufficiently precise initial data, while respecting the smoothness requirements on the functions at the existing noise level. The use of limited-growth functions and the identification of trend shift points allow correct qualitative long-term forecasting without unreasonable predictions of a catastrophic course of the studied process. The results obtained allow an analytical expression of the studied processes, primarily economic ones, which not only enables approximation of the process behavior but also reveals its physical essence, thus allowing these solutions to be applied to a whole class of processes of a similar nature.
Software systems and computational methods. 2025;(3):115-128
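
A compact sketch of the decomposition pipeline follows: a limited-growth (logistic) trend is fitted by nonlinear least squares, the residual is inspected for a dominant oscillation period, and the residual spread serves as a noise bound before any forecast is trusted. The synthetic series and all constants are illustrative, not the article's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic series: logistic trend + 8-step oscillation + noise.
t = np.arange(40, dtype=float)
y = (100 / (1 + np.exp(-0.25 * (t - 20)))
     + 3 * np.sin(2 * np.pi * t / 8)
     + np.random.default_rng(1).normal(0, 1, 40))

def logistic(t, L, k, t0):
    return L / (1 + np.exp(-k * (t - t0)))   # limited-growth trend

(L, k, t0), _ = curve_fit(logistic, t, y, p0=[y.max(), 0.1, t.mean()])
resid = y - logistic(t, L, k, t0)

# Dominant oscillation period from the residual spectrum.
freqs = np.fft.rfftfreq(len(t), d=1.0)
spec = np.abs(np.fft.rfft(resid - resid.mean()))
period = 1 / freqs[1:][spec[1:].argmax()]

# Residual spread bounds the attainable accuracy of any forecast.
noise = np.std(resid)
print(f"trend asymptote L={L:.1f}, period={period:.1f}, residual std={noise:.2f}")
```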

Informetric method for determining the effective drop point of humanitarian cargo from UAVs under limited computational resources

Konnikov E.A., Polyakov P.A., Starchenkova O.D., Sergeev D.A.

Abstract

The subject of the research is the calculation of high-precision drop points for humanitarian cargo delivered from unmanned aerial vehicles (UAVs) in challenging atmospheric conditions and under strict limitations on onboard computational resources. The object of the study is the airdrop process, which includes the ballistic, aerodynamic, and informational factors that determine the final trajectory of the container. The authors examine in detail aspects such as differential-geometric modeling of the atmosphere based on the Ricci flow, quantum-inspired global optimization of the drop point, and lightweight neural-network real-time trajectory correction on an ESP32 microcontroller. Special attention is given to the distribution of the computational load between a Raspberry Pi 5 single-board computer, which performs the resource-intensive calculations, and the energy-efficient controller responsible for online corrections. Thus, the research aims to establish a unified informetric approach that minimizes the uncertainty of the landing coordinate and ensures metric-level delivery accuracy for the cargo. The research methodology is based on the combination of the Ricci flow for adaptive atmospheric modeling, quantum-inspired particle swarm optimization for the CARP search, and TinyML corrections of the cargo trajectory on the ESP32 during descent. The main findings of the study include the confirmed feasibility of metrically accurate airdrop without heavy navigation systems and the demonstrated effectiveness of the proposed informetric concept QRNA. The authors' special contribution to the research is the development of a hybrid algorithm that combines methods of differential geometry, quantum-inspired optimization, and lightweight neural-network training, as well as its practical implementation on accessible single-board devices. The novelty of the study lies in the integration of the Ricci flow for dynamic distortion of the metric model of the atmosphere directly in the drop point calculation task, and in the application of quantum swarm search in the CARP coordinate space. Additional novelty lies in the use of a TinyML network for online trajectory correction of the cargo, which has not previously been applied in the context of humanitarian UAVs. The modeling results demonstrate a reduction of the root mean square landing error to 0.15 m, which is an order of magnitude better than advanced ML approaches and two orders of magnitude more accurate than classical ballistic methods, confirming the high practical value of the developed algorithm.
Software systems and computational methods. 2025;(3):129-140
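
For illustration, the sketch below runs a classical particle swarm over candidate release coordinates against a deliberately simplified constant-wind ballistic model; the article's Ricci-flow atmosphere model, quantum-inspired update rule, and TinyML correction are far richer, so every function and constant here is a hypothetical stand-in rather than the QRNA algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([0.0, 0.0])   # desired landing point, metres

def landing_error(release_xy, drop_h=50.0, v_fall=5.0,
                  wind=np.array([2.0, -1.0])):
    # Toy ballistics: constant descent speed, constant wind drift.
    drift = wind * (drop_h / v_fall)
    return np.linalg.norm(release_xy + drift - TARGET)

# Standard PSO loop over 2-D release coordinates.
n, iters = 30, 100
pos = rng.uniform(-50, 50, size=(n, 2))
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_f = np.array([landing_error(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([landing_error(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("release point:", gbest.round(2), "error:", landing_error(gbest).round(3))
```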

Interpolating and extrapolating memoization in the Planning C language

Pekunov V.V.

Abstract

This paper discusses the possibilities of increasing the execution speed of programs that implement mainly mathematical algorithms by using several special types of memoization. A brief overview of the existing basic approaches to memoization is given, and it is concluded that the possibilities of explicit program memoization based on one or another method of approximating results missing from the memoization cache remain insufficiently studied. The possibilities of such memoization are analyzed, and the syntax and semantics of the program constructions that enable it for functions and procedures (void functions) are described. The proposed variants of memoization are tested; it is shown that for some mathematical algorithms a significant acceleration is possible with a fairly low error. The novelty of this study lies in the fact that, for the first time, the syntax, semantics, and basic implementation mechanisms of explicit program memoization based on interpolation (by feedforward neural networks or by the group method of data handling, GMDH) or linear extrapolation are proposed and described. This memoization is introduced into the Planning C language. The conditions under which memoization is justified are formulated. For the proposed variant of memoization, the concept of a grouping parameter is introduced, which allows sets of interpolators to be used for different combinations of the input arguments of a memoized procedure/function, in order to reduce the additional time spent on training the interpolator and to improve the reliability of its results. The concept of an ordinal parameter, used to establish the order of the key points of extrapolating memoization, is also introduced. The adequacy of the proposed approaches and memoization algorithms is shown by a number of examples from the field of numerical modeling.
Software systems and computational methods. 2025;(3):141-154
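
Since the article's constructions live in Planning C, the following conceptual Python sketch only mirrors the idea of interpolating memoization: exact cache hits return directly, and queries close enough to cached points are answered by inverse-distance interpolation over the nearest neighbours instead of calling the expensive function. The tolerance, neighbour count, and decorator interface are arbitrary, and inverse-distance weighting is a cheap stand-in for the feedforward-network/GMDH interpolators described in the article.

```python
import math

def interp_memo(n_neighbors=3, radius=0.5):
    def wrap(fn):
        cache = {}                    # argument tuple -> exact result
        def memo(*args):
            if args in cache:         # exact hit
                return cache[args]
            if len(cache) >= n_neighbors:
                pts = sorted(cache,
                             key=lambda p: math.dist(p, args))[:n_neighbors]
                d = [math.dist(p, args) for p in pts]
                if max(d) < radius:   # interpolate only near known points
                    w = [1.0 / (x + 1e-9) for x in d]
                    return sum(wi * cache[p]
                               for wi, p in zip(w, pts)) / sum(w)
            cache[args] = fn(*args)   # fall back to the real computation
            return cache[args]
        return memo
    return wrap

@interp_memo()
def slow_f(x, y):
    return math.sin(x) * math.cos(y)  # stands in for a costly routine

for x in (0.0, 0.1, 0.2, 0.3):
    slow_f(x, x)                      # warm up the cache
print(slow_f(0.15, 0.15), math.sin(0.15) * math.cos(0.15))
```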
