On the Properties of the Limit Set of a Repeated Machine Learning Process under Feature Space Transformations
- Authors: Veprikov A.S.1,2, Khritankov A.S.1,2
- Affiliations:
- Moscow Institute of Physics and Technology (National Research University)
- A.A. Kharkevich Institute for Information Transmission Problems, Russian Academy of Sciences
- Issue: No 1 (2025)
- Pages: 56-66
- Section: Machine Learning, Neural Networks
- URL: https://journal-vniispk.ru/2071-8594/article/view/293493
- DOI: https://doi.org/10.14357/20718594250105
- EDN: https://elibrary.ru/RKXLLY
- ID: 293493
Abstract
Recommender systems, decision support systems, intelligent control systems, AI assistants in medicine, and search engines are widely used in practice and can influence their users and the properties of the environment in which they are deployed. The repeated machine learning process describes such systems, in which machine learning models are continuously improved over time using training data obtained from the users. In this paper, we study how feature space transformations affect the properties of the repeated machine learning process. In particular, we investigate the conditions under which a prediction of the asymptotic behavior of a system over time, obtained in the original space, carries over to a similar system in the transformed space. The results indicate that simpler systems in lower-dimensional spaces can be used to study processes in more complex systems.
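To make the setting concrete, the sketch below simulates a toy repeated learning loop once in the original feature space and once after an invertible linear transformation of the features. It is a minimal illustration under assumed conditions, not the model analyzed in the paper; the function repeated_learning, the influence parameter, and the transformation T are illustrative assumptions.

```python
# A minimal sketch (not the authors' model) of a repeated machine learning
# process: a regressor is retrained at every step on targets influenced by its
# own previous predictions, once in the original feature space and once in a
# linearly transformed space. All names and parameters are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def repeated_learning(X, y, transform=None, steps=20, influence=0.3):
    """Repeatedly retrain a model; its predictions feed back into the targets."""
    if transform is not None:
        X = X @ transform  # apply a feature space transformation T
    model = LinearRegression()
    for _ in range(steps):
        model.fit(X, y)
        # users partially adopt the model's predictions, shifting future data
        y = (1 - influence) * y + influence * model.predict(X)
    return y

X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

# an invertible linear transformation of the feature space (assumed for illustration)
T = rng.normal(size=(5, 5))

y_orig = repeated_learning(X, y.copy(), transform=None)
y_trans = repeated_learning(X, y.copy(), transform=T)

# if the asymptotic behaviour is preserved under T, the limit targets should coincide
print("max difference between limit targets:", np.abs(y_orig - y_trans).max())
```

For an invertible linear transformation and an ordinary least-squares learner, the fitted predictions coincide in both spaces, so the feedback dynamics and their limit are the same; this is a simple instance of the kind of correspondence between the original and transformed systems that the paper studies.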
About the authors
Andrey Veprikov
Moscow Institute of Physics and Technology (National Research University); A.A. Kharkevich Institute for Information Transmission Problems, Russian Academy of Sciences
Author responsible for correspondence.
Email: veprikov.as@phystech.edu
Graduate student, Department of Intelligent Data Analysis, Junior researcher
Russian Federation, Moscow; Moscow
Anton Khritankov
Moscow Institute of Physics and Technology (National Research University); A.A. Kharkevich Institute for Information Transmission Problems, Russian Academy of Sciences
Email: anton.khritankov@phystech.edu
Candidate of physical and mathematical sciences, Associate professor, Senior researcher
Russian Federation, Moscow; Moscow
