Adaptive neural network method for multidimensional integration in arbitrary subdomains


Abstract

Multidimensional integration is a fundamental problem in computational mathematics with numerous applications in physics, engineering, and data science. Traditional numerical methods such as Gauss–Legendre quadrature [1] and Monte Carlo techniques face significant challenges in high-dimensional spaces due to the curse of dimensionality, often requiring substantial computational resources and suffering from accuracy degradation. This study proposes an adaptive neural network-based method for efficient multidimensional integration over arbitrary subdomains. The approach optimizes the composition of the training sample through a balancing parameter $\rho$, which controls the proportion of points generated by a Metropolis–Hastings-inspired method versus uniform sampling. This enables the neural network to capture complex integrand behavior effectively, particularly in regions with sharp variations. A key innovation of the method is its "train once, integrate anywhere" capability: a single neural network trained on a large domain can subsequently compute integrals over any arbitrary subdomain without retraining, significantly reducing computational overhead. Experiments were conducted on three function types (quadratic, Corner Peak, and sine of a sum of squares) in dimensions from 2 to 6. Integration accuracy was evaluated using the Correct Digits (CD) metric. Results show that the neural network method achieves accuracy comparable or superior to traditional methods (Gauss–Legendre, Monte Carlo, Halton) for complex functions, while substantially reducing computation time. Optimal $\rho$ ranges were identified: 0.0–0.2 for smooth functions and 0.3–0.5 for functions with sharp features. In higher-dimensional scenarios (4D, 6D), the method is stable for $\rho = 0.2$–$0.6$, outperforming stochastic methods though slightly less accurate than Latin hypercube sampling [2]. The proposed method offers a scalable, efficient alternative to classical integration techniques, particularly beneficial in high-dimensional settings and in applications requiring repeated integration over varying subdomains.
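
The abstract describes two ingredients that a short sketch can make concrete: building the training sample as a $\rho$-weighted mix of Metropolis–Hastings-style points and uniform points, and reusing the trained network to integrate over arbitrary subdomains without retraining. The following is a minimal illustrative sketch, not the authors' implementation: the names (mh_samples, build_training_set, integrate_subdomain), the choice of $|f|$ as the target of the Metropolis–Hastings-inspired walk, the network architecture, and the Monte Carlo averaging of the surrogate over the subdomain are all assumptions made for illustration; the paper's actual sampling, training, and integration rules may differ.

```python
import numpy as np
import tensorflow as tf

def mh_samples(f, dim, n, lo, hi, step=0.1, rng=None):
    """Metropolis-Hastings-inspired random walk targeting |f| (illustrative choice),
    so that points concentrate where the integrand varies sharply."""
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(lo, hi, size=dim)
    pts = []
    for _ in range(n):
        prop = np.clip(x + step * rng.normal(size=dim), lo, hi)
        # Accept with probability min(1, |f(prop)| / |f(x)|).
        if rng.uniform() < min(1.0, (abs(f(prop)) + 1e-12) / (abs(f(x)) + 1e-12)):
            x = prop
        pts.append(x.copy())
    return np.array(pts)

def build_training_set(f, dim, n_train, rho, lo=0.0, hi=1.0, rng=None):
    """Mix rho*n_train MH-style points with (1 - rho)*n_train uniform points."""
    rng = rng or np.random.default_rng(0)
    n_mh = int(rho * n_train)
    X_uni = rng.uniform(lo, hi, size=(n_train - n_mh, dim))
    X = np.vstack([mh_samples(f, dim, n_mh, lo, hi, rng=rng), X_uni]) if n_mh > 0 else X_uni
    y = np.apply_along_axis(f, 1, X)
    return X, y

# --- Train once on the full domain [0, 1]^d ---
dim, rho = 2, 0.4
f = lambda x: (1.0 + 5.0 * np.sum(x)) ** (-(dim + 1))   # Corner-Peak-like test integrand
X, y = build_training_set(f, dim, n_train=4000, rho=rho)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(dim,)),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=200, batch_size=256, verbose=0)

# --- Integrate over any subdomain without retraining ---
def integrate_subdomain(model, lo, hi, n=200_000, rng=None):
    """Monte Carlo average of the trained surrogate over the box [lo, hi]."""
    rng = rng or np.random.default_rng(1)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    pts = rng.uniform(lo, hi, size=(n, lo.size))
    return float(np.prod(hi - lo) * np.mean(model.predict(pts, verbose=0)))

print(integrate_subdomain(model, lo=[0.2, 0.3], hi=[0.7, 0.9]))
```

In this sketch, increasing $\rho$ shifts training points toward regions where $|f|$ is large, which is consistent with the reported benefit of $\rho$ in the range 0.3–0.5 for sharply varying integrands; the subdomain integral is obtained by querying the already trained surrogate, which is the sense of "train once, integrate anywhere" used above.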

About the authors

Margarita Shcherbak

RUDN University

Email: 1032216537@rudn.ru
ORCID iD: 0000-0002-9229-2535

Student of the Department of Computational Mathematics and Artificial Intelligence

6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation

Laysan Abdullina

RUDN University

Email: 1032216538@rudn.ru
ORCID iD: 0000-0002-3918-3620

Student of the Department of Computational Mathematics and Artificial Intelligence

6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation

Soltan Salpagarov

RUDN University

Email: salpagarov-si@rudn.ru
ORCID iD: 0000-0002-5321-9650
Scopus Author ID: 57201380251

Candidate of Physical and Mathematical Sciences, Associate Professor of the Department of Computational Mathematics and Artificial Intelligence

6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation

Vyacheslav Fedorishchev

RUDN University

Principal contact for editorial correspondence.
Email: 1142230295@rudn.ru
ORCID iD: 0009-0003-5906-9993

Student of the Department of Computational Mathematics and Artificial Intelligence

6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation

References

  1. Press, W. H., Teukolsky, S. A., Vetterling, W. T. & Flannery, B. P. Numerical Recipes: The Art of Scientific Computing 3rd (Cambridge University Press, Cambridge, UK, 2007).
  2. McKay, M. D., Beckman, R. J. & Conover, W. J. A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code. Technometrics 21, 239–245. doi: 10.1080/00401706.1979.10489755 (1979).
  3. Bassi, H., Zhu, Y., Liang, S., Yin, J., Reeves, C. C., Vlček, V. & Yang, C. Learning nonlinear integral operators via recurrent neural networks and its application in solving integro-differential equations. Machine Learning with Applications 15, 100524. doi: 10.1016/j.mlwa.2023.100524 (Mar. 2024).
  4. Maître, D. & Santos-Mateos, R. Multi-variable integration with a neural network. Journal of High Energy Physics 2023, 221. doi: 10.1007/JHEP03(2023)221 (Mar. 2023).
  5. Li, S., Huang, X., Wang, X., et al. A new reliability analysis approach with multiple correlation neural networks method. Soft Computing 27, 7449–7458. doi: 10.1007/s00500-022-07685-6 (June 2023).
  6. Subr, K. Q-NET: A Network for Low-dimensional Integrals of Neural Proxies. Computer Graphics Forum 40, 61–71. doi: 10.1111/cgf.14341 (2021).
  7. Beck, C., Becker, S., Cheridito, P., Jentzen, A. & Neufeld, A. Deep Splitting Method for Parabolic PDEs. SIAM Journal on Scientific Computing 43, A3135–A3154. doi: 10.1137/19M1297919 (2021).
  8. Wan, M., Pan, Y. & Zhang, Z. A Physics-Informed Neural Network Integration Framework for Efficient Dynamic Fracture Simulation in an Explicit Algorithm. Applied Sciences 15, 10336. doi: 10.3390/app151910336 (2025).
  9. Nowak, A., Kustal, D., Sun, H. & Blaszczyk, T. Neural network approximation of the composition of fractional operators and its application to the fractional Euler-Bernoulli beam equation. Applied Mathematics and Computation 501, 129475. doi: 10.1016/j.amc.2025.129475 (2025).
  10. Brunner, K. J., Fuchert, G., de Amorim Resende, F. B. L., Knauer, J., Hirsch, M., Wolf, R. C. & the W7-X Team. Auto-encoding quadrature components of modulated dispersion interferometers. Plasma Physics and Controlled Fusion 67. Special Issue on the 6th European Conference on Plasma Diagnostics (ECPD 2025), 105007. doi: 10.1088/1361-6587/ae0a80 (Oct. 2025).
  11. Saxena, S., Bastek, J.-H., Spinola, M., Gupta, P. & Kochmann, D. M. GNN-assisted phase space integration with application to atomistics. Mechanics of Materials 182, 104681. doi: 10.1016/j.mechmat.2023.104681 (July 2023).
  12. Saz Ulibarrena, V., Horn, P., Portegies Zwart, S., Sellentin, E., Koren, B. & Cai, M. X. A hybrid approach for solving the gravitational N-body problem with Artificial Neural Networks. Journal of Computational Physics 496, 112596. doi: 10.1016/j.jcp.2023.112596 (Jan. 2024).
  13. Hu, Z., Shukla, K., Karniadakis, G. E. & Kawaguchi, K. Tackling the curse of dimensionality with physics-informed neural networks. Neural Networks 176, 106369. doi: 10.1016/j.neunet.2024.106369 (Aug. 2024).
  14. Cho, J., Nam, S., Yang, H., Yun, S.-B., Hong, Y. & Park, E. Separable PINN: Mitigating the Curse of Dimensionality in Physics-Informed Neural Networks (2023).
  15. Ayriyan, A., Grigorian, H. & Papoyan, V. Sampling of Integrand for Integration Using Shallow Neural Network. Discrete and Continuous Models and Applied Computational Science 32, 38–47 (2024).
  16. Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. & Teller, E. Equation of State Calculations by Fast Computing Machines. The Journal of Chemical Physics 21, 1087–1092. doi: 10.1063/1.1699114 (1953).
  17. Hastings, W. K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57. _eprint: https://academic.oup.com/biomet/article-pdf/57/1/97/23940249/57-197.pdf, 97–109. doi: 10.1093/biomet/57.1.97 (Apr. 1970).
  18. Lloyd, S. Using Neural Networks for Fast Numerical Integration and Optimization. IEEE Access 8, 84519–84531. doi: 10.1109/ACCESS.2020.2991966 (2020).
  19. Cybenko, G. Approximation by superpositions of a sigmoidal function. Mathematics of Control Signals and Systems 2, 303–314. doi: 10.1007/BF02551274 (Dec. 1989).
  20. Marquardt, D. W. An Algorithm for Least-Squares Estimation of Nonlinear Parameters. Journal of the Society for Industrial and Applied Mathematics 11. Publisher: Society for Industrial and Applied Mathematics, 431–441. doi: 10.1137/0111030 (June 1963).
  21. Genz, A. A Package for Testing Multiple Integration Subroutines in Numerical Integration: Recent Developments, Software and Applications (eds Keast, P. & Fairweather, G.) 337–340 (Springer, 1987). doi: 10.1007/978-94-009-3889-2_33.
  22. Anikina, A. et al. Structure and Features of the Software and Information Environment of the HybriLIT Heterogeneous Platform in Distributed Computer and Communication Networks (eds Vishnevsky, V. M., Samouylov, K. E. & Kozyrev, D. V.) 444–457 (Springer Nature Switzerland, Cham, 2025). doi: 10.1007/978-3-031-80853-1_33.
  23. Abadi, M. et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. Software available from tensorflow.org (2015).
  24. Halton, J. H. On the efficiency of certain quasi-random sequences of points in evaluating multidimensional integrals. Numerische Mathematik 2, 84–90. doi: 10.1007/BF01386213 (1960).
