Method for Training Decision Trees with Non-Linear Splitters
- Authors: Devyatkin D.A., Grigoriev O.G.
- Affiliations: Computer Science and Control Federal Research Center of the Russian Academy of Sciences
- Issue: No 3 (2022)
- Pages: 96-105
- Section: Analysis of Textual and Graphical Information
- URL: https://journal-vniispk.ru/2071-8594/article/view/270475
- DOI: https://doi.org/10.14357/20718594220308
- ID: 270475
Abstract
Univariate decision trees have low computational efficiency when processing sparse, high-dimensional data. Multivariate decision trees are more expressive classifiers but tend to overfit on small datasets. The paper proposes a method for training trees with multivariate nonlinear splitters that improves classification accuracy on image and text datasets. This is achieved by jointly optimizing the distance from the training objects to the separating hyperplane and the data impurity criterion when building each node of the tree. Experimental results confirm the effectiveness of the method.
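The abstract describes jointly optimizing a margin criterion and an impurity criterion at each node, but does not give the objective itself. The sketch below is not the authors' implementation; it only illustrates the idea in Python under explicit assumptions: a soft-margin hinge term stands in for the distance-to-hyperplane criterion, Gini for the impurity criterion, one-vs-rest pseudo-targets define the margin signs, a quadratic feature map supplies the nonlinearity, and a derivative-free local search replaces whatever optimizer the paper actually uses. All function names (`fit_splitter`, `joint_objective`, `quadratic_features`) are illustrative.

```python
import numpy as np

def gini(y):
    """Gini impurity of a label array."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def weighted_child_impurity(X, y, w, b):
    """Weighted Gini impurity of the two children induced by sign(X @ w + b)."""
    mask = X @ w + b > 0
    total = 0.0
    for part in (y[mask], y[~mask]):
        if part.size:
            total += part.size / y.size * gini(part)
    return total

def joint_objective(X, y_signed, y, w, b, C=1.0, lam=1.0):
    """Margin term (soft-margin hinge, SVM-style) plus impurity term.

    The weighting and exact margin formulation here are assumptions;
    the point is only that both criteria enter a single objective."""
    margins = y_signed * (X @ w + b)
    hinge = np.mean(np.maximum(0.0, 1.0 - margins))
    return 0.5 * w @ w + C * hinge + lam * weighted_child_impurity(X, y, w, b)

def fit_splitter(X, y, n_iter=2000, step=0.1, seed=0):
    """Derivative-free hill climbing over (w, b); the impurity term is
    non-smooth, so a local search keeps the sketch self-contained."""
    rng = np.random.default_rng(seed)
    # crude one-vs-rest pseudo-targets for the margin term (hypothetical choice)
    y_signed = np.where(y == y[0], 1.0, -1.0)
    w, b = rng.normal(size=X.shape[1]), 0.0
    best = joint_objective(X, y_signed, y, w, b)
    for _ in range(n_iter):
        dw, db = rng.normal(size=w.shape) * step, rng.normal() * step
        cand = joint_objective(X, y_signed, y, w + dw, b + db)
        if cand < best:
            w, b, best = w + dw, b + db, cand
    return w, b

def quadratic_features(X):
    """Explicit quadratic feature map: a hyperplane in this lifted space is a
    nonlinear (quadric) splitter in the original space."""
    return np.hstack([X, X ** 2])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)  # not linearly separable
    Z = quadratic_features(X)
    w, b = fit_splitter(Z, y)
    print("weighted child impurity:", weighted_child_impurity(Z, y, w, b))
```

A full tree would apply `fit_splitter` recursively to the two child partitions; the hill climbing here is deliberately crude and exists only to keep the sketch runnable.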
About the authors
Dmitry A. Devyatkin
Computer Science and Control Federal Research Center of the Russian Academy of Sciences
Author for correspondence.
Email: devyatkin@isa.ru
Researcher
Russian Federation, Moscow
Oleg G. Grigoriev
Computer Science and Control Federal Research Center of the Russian Academy of Sciences
Email: oleggpolikvart@yandex.ru
Doctor of Technical Sciences, Chief Researcher
Russian Federation, Moscow
References
- Breiman L. et al. Classification and regression trees. – Routledge, 2017.
- Chen T., Guestrin C. XGBoost: A scalable tree boosting system // Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. – 2016. – pp. 785-794.
- Breiman L. Random forests // Machine learning. – 2001. – Vol. 45. – No. 1. – pp. 5-32.
- Golea M. et al. Generalization in decision trees and DNF: Does size matter? // Advances in Neural Information Processing Systems. – 1997. – Vol. 10.
- Vapnik V. N. An overview of statistical learning theory // IEEE transactions on neural networks. – 1999. – Vol. 10. No. 5. – pp. 988-999.
- Breiman L. Some properties of splitting criteria // Machine learning. – 1996. – Vol. 24. – No. 1. – pp. 41-47.
- Liu W., Tsang I. W. Sparse perceptron decision tree for millions of dimensions // Thirtieth AAAI Conference on Artificial Intelligence. – 2016.
- Liu W., Tsang I. W. Making decision trees feasible in ultrahigh feature and label dimensions // Journal of Machine Learning Research. – 2017.
- Bennett K. P., Blue J. A. A support vector machine approach to decision trees // 1998 IEEE International Joint Conference on Neural Networks Proceedings. IEEE World Congress on Computational Intelligence (Cat. No. 98CH36227). – IEEE, 1998. – Vol. 3. – pp. 2396-2401.
- Menze B. H. et al. On oblique random forests // Joint European Conference on Machine Learning and Knowledge Discovery in Databases. – Springer, Berlin, Heidelberg, 2011. – pp. 453-469.
- Tibshirani R., Hastie T. Margin Trees for High-dimensional Classification // Journal of Machine Learning Research. – 2007. – Vol. 8. – No. 3.
- Manwani N., Sastry P. S. Geometric decision tree // IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics). – 2011. – Vol. 42. – No. 1. – pp. 181-192.
- Hofmann T., Schölkopf B., Smola A. J. Kernel methods in machine learning // The Annals of Statistics. – 2008. – Vol. 36. – No. 3. – pp. 1171-1220.
- Norouzi M. et al. CO2 forest: Improved random forest by continuous optimization of oblique splits // arXiv preprint arXiv:1506.06155. – 2015.
- Tsochantaridis I. et al. Large margin methods for structured and interdependent output variables // Journal of machine learning research. – 2005. – Vol. 6. – No. 9.
- Yuille A. L., Rangarajan A. The concave-convex procedure // Neural computation. – 2003. – Vol. 15. – No. 4. – pp. 915-936.
- DeSalvo G., Mohri M. Random composite forests // Proceedings of the AAAI Conference on Artificial Intelligence. – 2016. – Vol. 30. – No. 1.
- Hehn T. M., Kooij J. F. P., Hamprecht F. A. End-to-end learning of decision trees and forests // International Journal of Computer Vision. – 2020. – Vol. 128. – No. 4. – pp. 997-1011.
- Irsoy O., Alpaydin E. Autoencoder trees // Asian Conference on Machine Learning. – PMLR, 2016. – pp. 378-390.
- Chai Z., Zhao C. Multiclass oblique random forests with dual-incremental learning capacity // IEEE transactions on neural networks and learning systems. – 2020. – Vol. 31. – No. 12. – pp. 5192-5203.
- Hecht-Nielsen R. Theory of the backpropagation neural network // Neural networks for perception. – Academic Press, 1992. – pp. 65-93.
- Yang B. B., Shen S. Q., Gao W. Weighted oblique decision trees // Proceedings of the AAAI Conference on Artificial Intelligence. – 2019. – Vol. 33. – No. 1. – pp. 5621-5627.
- Carreira-Perpinán M. A., Tavallali P. Alternating optimization of decision trees, with application to learning sparse oblique trees // Advances in neural information processing systems. – 2018. – Vol. 31.
- Kumar M. A., Gopal M. A hybrid SVM based decision tree // Pattern Recognition. – 2010. – Vol. 43. – No. 12. – pp. 3977-3987.
- Krizhevsky A. Learning Multiple Layers of Features from Tiny Images // Master's thesis, University of Toronto. – 2009.
- Youtube channels dataset. URL: http://keen.isa.ru/youtube (accessed 14.07.2022).
- Blake C. UCI repository of machine learning databases. URL: http://www.ics.uci.edu/~mlearn/MLRepository.html (accessed 14.07.2022).