Informatics and Automation
ISSN (print): 2713-3192, ISSN (online): 2713-3206
Media registration certificate: ПИ № ФС 77 - 79228, dated 25.09.2020
Founder: St. Petersburg Federal Research Center of the Russian Academy of Sciences
Editor-in-Chief: Ronzhin Andrey Leonidovich, Dr. Sci., Professor of RAS
Frequency / Access: 6 issues per year / Open
Included in: White List (2nd level), Higher Attestation Commission List, RISC, Scopus
"Informatics and Automation" is a scientific, educational, and interdisciplinary journal primarily intended for papers from the fields of computer science, automation, and applied mathematics. The journal is published in both printed and online versions. The printed version has been published since 2002, the online one since 2010. Frequency: 6 times in year. Fee for the processing and publication of an article is not charged. The maximum term of the paper’s reviewing comprises 3 months.
Current Issue
Volume 24, No. 2 (2025)
Mathematical modeling and applied mathematics
Adaptive Regression Model Construction Based on the Functional Quality Analysis of the Sequence Segment Processing



Two-Level Optimization of Task Distribution into Batches and Scheduling Their Execution in Pipeline Systems with Limited Buffers



An AudioCodec Based on the Perceptual Equality between the Original and Restored Audio Signals



Solving Multi-Objective Rational Placement of Load-Bearing Walls Problem via Genetic Algorithm



Routing of Autonomous Devices in Three-Dimensional Space



Invasive Approach to Verification of Functional and Structural Specifications Implemented in Custom Integrated Circuits



Artificial intelligence, knowledge and data engineering
Building Predictive Smell Models for Virtual Reality Environments
Abstract
In a sensory-rich environment, human experiences are shaped by the complex interplay of multiple senses. However, digital interactions predominantly engage the visual and auditory modalities, leaving other sensory channels, such as olfaction, largely unutilized. Virtual Reality (VR) technology holds significant potential for addressing this limitation by incorporating a wider range of sensory inputs to create more immersive experiences. This study introduces a novel approach for integrating olfactory stimuli into VR environments through the development of predictive odor models, termed SPRF (Sensory Predictive Response Framework). The objective is to enhance the sensory dimension of VR by tailoring scent stimuli to specific content and context: information about the location of scent sources is collected, and the sources are identified through their features so that they can be reproduced within the space of the VR environment, thereby enriching user engagement and immersion. The research also investigates the influence of various scent-related factors on user perception and behavior in VR, aiming to develop predictive models optimized for olfactory integration. Empirical evaluations demonstrate that the SPRF model achieves superior performance, with an accuracy of 98.13%, significantly outperforming conventional models such as Convolutional Neural Networks (CNN, 79.46%), Long Short-Term Memory (LSTM, 80.37%), and Support Vector Machines (SVM, 85.24%). SPRF also delivers notable improvements in F1-score (13.05%-21.38%) and accuracy (12.89%-18.67%) over these alternatives. These findings highlight the efficacy of SPRF in advancing olfactory integration within VR and offer actionable insights for the design of multisensory digital environments.
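The abstract does not describe SPRF's implementation, but the reported comparison rests on standard classification metrics. As a minimal, hypothetical sketch of how such an accuracy and macro-F1 comparison is typically computed (the class labels and model predictions below are illustrative stand-ins, not data from the study):

```python
# Sketch of the metric comparison reported in the abstract: accuracy and
# macro-F1 for several classifiers over the same test labels.
# All arrays here are toy placeholders, not the SPRF study's data.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 1, 0, 2, 1, 0]           # ground-truth odor classes (toy)
predictions = {
    "SPRF": [0, 1, 2, 1, 0, 2, 1, 0],        # hypothetical model outputs
    "CNN":  [0, 1, 1, 1, 0, 2, 0, 0],
    "LSTM": [0, 2, 2, 1, 0, 1, 1, 0],
    "SVM":  [0, 1, 2, 0, 0, 2, 1, 1],
}

for name, y_pred in predictions.items():
    acc = accuracy_score(y_true, y_pred)
    f1 = f1_score(y_true, y_pred, average="macro")  # averaged over classes
    print(f"{name}: accuracy={acc:.4f}, macro-F1={f1:.4f}")
```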



Enhanced People Re-identification in CCTV Surveillance Using Deep Learning: A Framework for Real-World Applications
Abstract
People re-identification (ReID) plays a pivotal role in modern surveillance, enabling continuous tracking of individuals across various CCTV cameras and enhancing the effectiveness of public security systems. However, ReID in real-world CCTV footage presents challenges, including changes in camera angles, variations in lighting, partial occlusions, and similar appearances among individuals. In this paper, we propose a robust deep learning framework that leverages convolutional neural networks (CNNs) with a customized triplet loss function to overcome these obstacles and improve re-identification accuracy. The framework is designed to generate unique feature embeddings for individuals, allowing precise differentiation even under complex environmental conditions. To validate our approach, we perform extensive evaluations on benchmark ReID datasets, achieving state-of-the-art results in terms of both accuracy and processing speed. Our model's performance is assessed using key metrics, including Cumulative Matching Characteristic (CMC) and mean Average Precision (mAP), demonstrating its robustness in diverse surveillance scenarios. Our approach consistently outperforms existing methods in both accuracy and scalability, making it suitable for integration into large-scale CCTV systems. Furthermore, we discuss practical considerations for deploying AI-based ReID models in surveillance infrastructure, including system scalability, real-time capabilities, and privacy concerns. By advancing techniques for re-identifying people, this work not only contributes to the field of intelligent surveillance but also provides a framework for enhancing public safety in real-world applications through automated and reliable tracking capabilities.
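The paper's customized triplet loss is not given in this abstract; the sketch below shows the standard triplet margin formulation such losses build on, assuming PyTorch. The margin value, embedding dimension, and batch size are illustrative assumptions, not values from the paper.

```python
# Standard triplet loss over feature embeddings, as commonly used in ReID;
# the paper's customized variant is not specified in the abstract.
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Pull same-identity embeddings together, push different ones apart.
    margin=0.3 is a conventional choice, not taken from the paper."""
    d_pos = F.pairwise_distance(anchor, positive)   # distance to same identity
    d_neg = F.pairwise_distance(anchor, negative)   # distance to other identity
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage with random, L2-normalized 128-D embeddings (sizes are assumptions)
emb = lambda: F.normalize(torch.randn(16, 128), dim=1)
print(triplet_loss(emb(), emb(), emb()).item())
```

Minimizing this loss drives same-identity embeddings closer than different-identity ones by at least the margin, which is what makes nearest-neighbor matching of a person across cameras feasible.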



Use of Pre-Trained Multilingual Models for Karelian Speech Recognition



Detection of Student Engagement via Transformer-Enhanced Feature Pyramid Networks on Channel-Spatial Attention
Abstract
One of the most important aspects of contemporary educational systems is student engagement detection, which involves determining how involved, attentive, and active students are in class activities. For educators, this approach is essential as it provides insights into students' learning experiences, enabling tailored interventions and instructional enhancements. Traditional techniques for evaluating student engagement are often time-consuming and subjective. This study proposes a novel real-time detection framework that leverages Transformer-enhanced Feature Pyramid Networks (FPN) with Channel-Spatial Attention (CSA), referred to as BiusFPN_CSA. The proposed approach automatically analyzes student engagement patterns, such as body posture, eye contact, and head position, from visual data streams by integrating cutting-edge deep learning and computer vision techniques. Integrating the CSA attention mechanism with the hierarchical feature representation capabilities of FPN allows the model to detect student engagement levels accurately by capturing contextual and spatial information in the input data. Additionally, by incorporating the Transformer architecture, the model achieves better overall performance by effectively capturing long-range dependencies and semantic relationships within the input sequences. Evaluation using the WACV dataset demonstrates that the proposed model outperforms baseline techniques in terms of accuracy. Specifically, the FPN_CSA_Trans_EH variant of the proposed model improves accuracy over FPN_CSA by 3.28% and 4.98%. These findings underscore the efficacy of the BiusFPN_CSA framework in real-time student engagement detection, offering educators a valuable tool for enhancing instructional quality, fostering active learning environments, and ultimately improving student outcomes.
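The internal structure of BiusFPN_CSA is not detailed in the abstract; as a hedged illustration of the channel-spatial attention idea it builds on, the sketch below implements a common CBAM-style block in PyTorch. The module name, reduction ratio, and kernel size are assumptions, not the paper's design.

```python
# Minimal CBAM-style channel-spatial attention block: a common realization of
# the channel-spatial attention idea, not the paper's exact BiusFPN_CSA module.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel attention: squeeze spatial dims, reweight each channel
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: reweight each location from pooled channel maps
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)                       # channel reweighting
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial_conv(pooled)              # spatial reweighting

feat = torch.randn(1, 64, 32, 32)   # toy FPN feature map (sizes are assumptions)
print(ChannelSpatialAttention(64)(feat).shape)
```

Applied to each level of a feature pyramid, such a block emphasizes the channels and spatial regions most relevant to cues like posture and gaze before the features reach the detection head.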



Factoring Decision Support System Based on Optimized Quantum Algorithms QMC



An Assembled Model of Multilayer Geoinformation Space-Time