Robust ADAM optimizer based on averaging aggregation functions

Abstract

Background. Training on contaminated data (outliers, heavy tails, label noise, preprocessing artifacts) makes the arithmetic averaging in the empirical risk unstable: multiple anomalies bias the estimates, destabilize the optimization steps, and degrade generalization. A way to improve robustness without changing the loss function or the model architecture is therefore needed.

Aim. To develop and demonstrate an alternative to arithmetic batch averaging in ADAM: replacing it with a robust penalty-based averaging aggregation function that mitigates the influence of outliers while preserving the benefits of moment-based and coordinate-wise step adaptation.
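
What follows is a minimal sketch of this idea, not the author's reference implementation: an ADAM-style step in which the arithmetic mean of per-sample gradients is replaced by a pluggable robust center. The names robust_adam_step and robust_center are illustrative assumptions; one possible Huber-based robust_center is sketched after the Methods paragraph below.

import numpy as np

def robust_adam_step(theta, m, v, per_sample_grads, t,
                     lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8,
                     robust_center=None):
    # per_sample_grads: array of shape (batch_size, n_params), one gradient per example.
    # robust_center: callable mapping (batch_size, n_params) -> (n_params,);
    # None falls back to the arithmetic mean, i.e. standard ADAM aggregation.
    if robust_center is None:
        g = per_sample_grads.mean(axis=0)        # standard batch averaging
    else:
        g = robust_center(per_sample_grads)      # robust averaging aggregation
    # The usual ADAM exponential moments and bias correction follow unchanged.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v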

Methods. Penalized averaging aggregation functions (M-means) with the Huber dissimilarity function are used. Newton's method finds the optimal center and the corresponding weights of the batch elements. Performance is evaluated in a controlled experiment with synthetic outliers by comparing training stability against the standard ADAM algorithm.
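
A minimal sketch, under assumptions, of how such a robust center can be computed: the coordinate-wise Huber M-estimate of location found with a few Newton iterations. The threshold delta and the starting point are illustrative choices, not values taken from the paper.

import numpy as np

def huber_center(x, delta=1.0, n_iter=3, start=None):
    # x: array of shape (batch_size, n_params); the center is found per coordinate.
    # Newton's method on F(c) = sum_i rho_delta(x_i - c), where
    #   F'(c) = -sum_i psi(x_i - c),  F''(c) = sum_i psi'(x_i - c),
    #   psi(r) = clip(r, -delta, delta),  psi'(r) = 1 if |r| <= delta else 0.
    c = np.median(x, axis=0) if start is None else start
    for _ in range(n_iter):                        # roughly three iterations suffice
        r = x - c                                  # residuals of the batch elements
        psi = np.clip(r, -delta, delta)            # Huber influence function
        w = (np.abs(r) <= delta).astype(float)     # psi'(r)
        c = c + psi.sum(axis=0) / np.maximum(w.sum(axis=0), 1.0)  # guarded Newton step
    return c

In the equivalent reweighting view, each batch element receives the weight w_i = min(1, delta / |r_i|), so elements with large residuals (outliers) are down-weighted in the aggregate.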

Results. Robust ADAM showed more stable training on synthetic linear regression, with the resulting model remaining stable even with up to 20% outliers. The method preserves computational efficiency and compatibility: it adds only a small number of robust-center-search iterations per batch while keeping the same asymptotic behavior. With a quadratic penalty function it degenerates into standard ADAM, confirming that the method is a valid generalization.
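
A small illustrative check of the degeneracy claim, reusing the huber_center sketch above with synthetic gradients (an assumption, not the paper's experiment): the minimizer of the quadratic penalty sum_i (g_i - c)^2 is the arithmetic mean, so with an effectively quadratic penalty the robust aggregation coincides with standard ADAM's batch averaging.

import numpy as np

rng = np.random.default_rng(0)
grads = rng.normal(size=(32, 5))              # hypothetical per-sample gradients

mean_center = grads.mean(axis=0)              # closed-form minimizer of the quadratic penalty
quad_like = huber_center(grads, delta=1e9)    # Huber with a huge threshold acts quadratically

assert np.allclose(mean_center, quad_like)    # the two aggregations agree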

Conclusion. A modification of the ADAM optimization algorithm based on M-means has been developed. The method keeps linear regression training stable with up to 20% outliers; its exact limitations remain to be determined. The computational overhead comes from finding the optimal center for each batch, but because Newton's method converges rapidly (in approximately three iterations), the slowdown of the algorithm is not significant.

About the authors

M. A. Kazakov

Institute of Applied Mathematics and Automation - branch of Kabardino-Balkarian Scientific Center of the Russian Academy of Sciences

Author for correspondence.
Email: kasakow.muchamed@gmail.com
ORCID iD: 0000-0002-5112-5079
SPIN-code: 6983-1220

Junior Researcher, Department of Neuroinformatics and Machine Learning

89 A, Shortanov Street, Nalchik, 360000, Russian Federation



Copyright (c) 2025 Kazakov M.A.

This work is licensed under a Creative Commons Attribution 4.0 International License.
