


Volume 65, No. 3 (2025)
GENERAL NUMERICAL METHODS
SIMULTANEOUS DIAGONALIZABILITY OF A PAIR OF MATRICES: SIMILARITIES AND CONGRUENCES



INDEXING IN THE GOOD–THOMAS FAST FOURIER TRANSFORM ALGORITHM



OPTIMAL CONTROL
ANTISYMMETRIC EXTREMAL MAPPING AND LINEAR DYNAMICS



SPARSE AND TRANSFERABLE UNIVERSAL SINGULAR VECTORS ATTACK
Abstract
Mounting concerns about the safety and robustness of neural networks call for a deeper understanding of model vulnerability and for research on adversarial attacks. Motivated by this, we propose a novel universal attack that is highly efficient in terms of transferability. In contrast to the existing (p, q)-singular vectors approach, we focus on finding sparse singular vectors of the Jacobian matrices of the hidden layers by employing the truncated power iteration method. We found that using the resulting vectors as adversarial perturbations effectively attacks both the original model and models with entirely different architectures, highlighting the importance of the sparsity constraint for attack transferability. Moreover, we achieve results comparable to dense baselines while perturbing less than 1% of the pixels and using only 256 samples to fit the perturbation. Our algorithm also admits a higher attack magnitude without affecting a human's ability to solve the task, and perturbing 5% of the pixels attains more than a 50% fooling rate on average across models. Finally, our findings demonstrate the vulnerability of state-of-the-art models to universal sparse attacks and highlight the importance of developing robust machine learning systems.
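
The key computational step described above, extracting a sparse leading singular vector of a hidden-layer Jacobian, can be illustrated with a generic truncated power iteration. The sketch below is a minimal NumPy illustration under assumptions: the function name, the hard-thresholding update, and the sparsity budget k are hypothetical, and the paper's (p, q)-norm constraints, the batching over the 256 fitting samples, and the Jacobian computation itself are omitted.

import numpy as np

def truncated_power_iteration(J, k, iters=100, seed=0):
    # Sparse leading right singular vector of J via truncated power iteration.
    # J: (m, n) hidden-layer Jacobian; k: number of nonzero entries kept.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(J.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        # One power step on J^T J (without forming J^T J explicitly).
        v = J.T @ (J @ v)
        # Truncation: zero out all but the k largest-magnitude coordinates.
        v[np.argsort(np.abs(v))[:-k]] = 0.0
        norm = np.linalg.norm(v)
        if norm == 0.0:
            break
        v /= norm
    return v

A vector produced by such a routine, rescaled to the chosen attack magnitude, would then act as the universal perturbation added to every input image.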



A METHOD FOR SUMMING THE FOURIER SERIES ASSOCIATED WITH A MIXED PROBLEM FOR THE INHOMOGENEOUS TELEGRAPH EQUATION



APPLICATION OF INTERVAL SLOPES TO NONSMOOTH ONE-DIMENSIONAL OPTIMIZATION PROBLEMS



OPTIMAL APPROXIMATION OF AVERAGE REWARD MARKOV DECISION PROCESSES
Abstract
We continue to develop the approach of finding an ε-optimal policy for Average Reward Markov Decision Processes (AMDPs) by reducing them to Discounted Markov Decision Processes (DMDPs). Existing research often stipulates that the discount factor must not fall below a certain threshold. Typically, this threshold is close to one, and, as is well known, iterative methods for finding the optimal policy of a DMDP become less effective as the discount factor approaches this value. Our work differs from existing studies in that it allows for inexact solutions of the empirical Bellman equation; despite this, we maintain a sample complexity that matches the latest results. In the upper bound, we separate the contributions of the error in approximating the transition matrix and of the residual in solving the Bellman equation, so our findings make it possible to determine the total complexity of the ε-optimal policy analysis for DMDPs with any method that has a theoretical bound on its iterative complexity. Bibl. 17. Fig. 5. Table 1.
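
To make concrete why the discount factor and the Bellman-equation residual enter the analysis, the following is a minimal NumPy sketch of value iteration for a tabular DMDP; the variable names, the tensor layout, and the use of plain value iteration are assumptions made for illustration only, not the method studied in the paper. The update is a contraction with rate gamma, which is why iterative solvers slow down as the discount factor approaches one, and eps plays the role of the residual with which the (empirical) Bellman equation is solved.

import numpy as np

def value_iteration(P, r, gamma, eps, max_iters=100_000):
    # Approximately solve the Bellman optimality equation of a DMDP.
    # P: (A, S, S) transition tensor, P[a, s, t] = Pr(t | s, a).
    # r: (S, A) reward matrix; gamma: discount factor in [0, 1).
    # eps: target sup-norm residual of the Bellman equation.
    S, A = r.shape
    V = np.zeros(S)
    for _ in range(max_iters):
        # Bellman optimality operator: (T V)(s) = max_a [ r(s, a) + gamma * E V(next state) ].
        Q = r + gamma * np.einsum("ast,t->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) <= eps:
            V = V_new
            break
        V = V_new
    return V, Q.argmax(axis=1)  # approximate values and greedy policy

Each sweep contracts the error by a factor of gamma, so reaching a fixed residual eps takes on the order of log(1/eps) / (1 - gamma) iterations, which is exactly the blow-up near gamma = 1 mentioned above.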



EVERYTHING NEW IS WELL-FORGOTTEN OLD: OPTIMIZATION OF THE F4 ALGORITHM



ERROR ANALYSIS OF NUMERICAL METHODS FOR SOLVING OPTIMIZATION PROBLEMS



AN ADAPTIVE FRANK–WOLFE ALGORITHM FOR MINIMIZATION OF RELATIVELY SMOOTH CONVEX FUNCTIONS



PARTIAL DIFFERENTIAL EQUATIONS
ON THE UNIQUENESS OF DETERMINING DISCRETE GRAVITATIONAL AND MAGNETIC POTENTIALS



THE CAUCHY PROBLEM FOR THE ONE-DIMENSIONAL EQUATION OF MOTION IN A METAMATERIAL



COMPUTER SCIENCE
MATHEMATICAL RECONSTRUCTION OF SIGNALS AND IMAGES USING TEST MEASUREMENTS: A NON-BLIND APPROACH


