OPTIMAL APPROXIMATION OF AVERAGE REWARD MARKOV DECISION PROCESSES
- Authors: Sapronov Y.F. (1, 2, 3), Yudin N.E. (1, 2, 3, 4, 5, 6)
Affiliations:
1. Moscow Institute of Physics and Technology
2. Higher School of Economics University
3. Innopolis University
4. Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute)
5. Ivannikov Institute for System Programming of the Russian Academy of Sciences
6. Federal Research Center “Informatics and Control” of the Russian Academy of Sciences
- Issue: Vol 65, No 3 (2025)
- Pages: 325-337
- Section: Optimal control
- URL: https://journal-vniispk.ru/0044-4669/article/view/293542
- DOI: https://doi.org/10.31857/S0044466925030074
- EDN: https://elibrary.ru/HSPLIH
- ID: 293542
Abstract
We continue to develop the approach of finding an ε-optimal policy for Average Reward Markov Decision Processes (AMDP) by reducing the problem to Discounted Markov Decision Processes (DMDP). Existing work typically requires the discount factor to stay above a threshold that is close to one, and, as is well known, the iterative methods used to find the optimal DMDP policy become less effective as the discount factor approaches one. Unlike previous studies, our analysis allows the empirical Bellman equation to be solved inexactly, while still matching the sample complexity of the most recent results. In the upper bound we separate the contribution of the error in approximating the transition matrix from the contribution of the residual in solving the Bellman equation, so our results give the total complexity of finding an ε-optimal DMDP policy for any method that comes with a theoretical iteration-complexity guarantee. Bibl. 17. Fig. 5. Table 1.
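To make the reduction concrete, the sketch below shows one way the pipeline described above could be instantiated in Python: estimate the transition matrix from a generative model, then solve the resulting empirical discounted Bellman equation only approximately, stopping once the residual falls below a prescribed tolerance. This is an illustration under our own assumptions, not the authors' algorithm; the names estimate_transition_model and inexact_value_iteration, and the parameters n_samples, gamma, and tol, are placeholders for the quantities that the paper's bounds actually prescribe.

    import numpy as np

    def estimate_transition_model(sample_next_state, n_states, n_actions, n_samples, rng):
        # Empirical transition matrix P_hat[s, a, s']: draw n_samples next states
        # from the generative model for every (s, a) pair and normalize the counts.
        p_hat = np.zeros((n_states, n_actions, n_states))
        for s in range(n_states):
            for a in range(n_actions):
                for _ in range(n_samples):
                    p_hat[s, a, sample_next_state(s, a, rng)] += 1.0
        return p_hat / n_samples

    def inexact_value_iteration(p_hat, r, gamma, tol):
        # Approximately solve the empirical Bellman equation of the DMDP (p_hat, r, gamma):
        # iterate the Bellman optimality operator until the sup-norm residual drops
        # below tol, then return the greedy policy and the value estimate.
        n_states = p_hat.shape[0]
        v = np.zeros(n_states)
        while True:
            q = r + gamma * (p_hat @ v)   # q[s, a] = r[s, a] + gamma * sum_s' p_hat[s, a, s'] * v[s']
            v_new = q.max(axis=1)
            if np.max(np.abs(v_new - v)) <= tol:
                return q.argmax(axis=1), v_new
            v = v_new

In such reductions the discount factor is typically taken close to one (on the order of 1 − Θ(ε/H) for a span- or bias-type parameter H), which is exactly the regime where plain value iteration slows down. In the error decomposition described in the abstract, the statistical error enters only through p_hat (controlled by n_samples), while the optimization error enters only through tol, so the two contributions to the final ε-accuracy can be budgeted independently for any solver with a known iteration-complexity bound.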
About the authors
Y. F. Sapronov
Moscow Institute of Physics and Technology; Higher School of Economics University; Innopolis University
Author for correspondence.
Email: sapronov.iuf@phystech.edu
Dolgoprudny, Russia; Moscow, Russia; Innopolis, Russia
N. E. Yudin
Moscow Institute of Physics and Technology; Higher School of Economics University; Innopolis University; Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute); Ivannikov Institute for System Programming of the Russian Academy of Sciences; Federal Research Center “Informatics and Control” of the Russian Academy of Sciences
Email: iudin.ne@phystech.edu
Dolgoprudny, Russia; Moscow, Russia; Innopolis, Russia; Moscow, Russia; Moscow, Russia; Moscow, Russia
References
- Gheshlaghi Azar M., Munos R., Kappen H.J. Minimax PAC bounds on the sample complexity of reinforcement learning with a generative model. Machine Learning, vol. 91 (2013), pp. 325–349.
- Zurek M., Chen Y. Span-Based Optimal Sample Complexity for Average Reward MDPs. arXiv preprint arXiv:2311.13469.
- Tuynman A., Degenne R., Kaufmann E. Finding good policies in average-reward Markov Decision Processes without prior knowledge. arXiv preprint arXiv:2405.17108.
- Wang S., Blanchet J., Glynn P. Optimal sample complexity for average reward Markov decision processes. arXiv preprint arXiv:2310.08833.
- Bartlett P.L., Tewari A. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. arXiv preprint arXiv:1205.2661.
- Wang J., Wang M., Yang L.F. Near sample-optimal reduction-based policy learning for average reward MDP. arXiv preprint arXiv:2212.00603.
- Goyal V., Grand-Clement J. A first-order approach to accelerated value iteration. Operations Research, vol. 71, no. 2 (2023), pp. 517–535.
- Grand-Clement J. From convex optimization to MDPs: A review of first-order, second-order and quasi-Newton methods for MDPs. arXiv preprint arXiv:2104.10677.
- Farahmand A.-M., Ghavamzadeh M. PID accelerated value iteration algorithm. In International Conference on Machine Learning. PMLR, pp. 3143–3153.
- Weissman T., Ordentlich E., Seroussi G., Verdu S., Weinberger M.J. Inequalities for the L1 deviation of the empirical distribution. Hewlett-Packard Labs, Tech. Rep. (2003), p. 125.
- Singh S.P., Yee R.C. An upper bound on the loss from approximate optimal-value functions. Machine Learning, vol. 16 (1994), pp. 227–233.
- Li G., Wei Y., Chi Y., Gu Y., Chen Y. Breaking the sample size barrier in model-based reinforcement learning with a generative model. Advances in Neural Information Processing Systems, vol. 33 (2020), pp. 12861–12872.
- Puterman M.L. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014.
- Wang S., Blanchet J., Glynn P. Optimal Sample Complexity of Reinforcement Learning for Mixing Discounted Markov Decision Processes. arXiv preprint arXiv:2302.07477.
- Li T., Wu F., Lan G. Stochastic first-order methods for average-reward Markov decision processes. arXiv preprint arXiv:2205.05800.
- Jin Y., Sidford A. Towards tight bounds on the sample complexity of average-reward MDPs. In International Conference on Machine Learning. PMLR, pp. 5055–5064.
- Tiapkin D., Gasnikov A. Primal-dual stochastic mirror descent for MDPs. In International Conference on Artificial Intelligence and Statistics. PMLR, pp. 9723–9740.
