Active Learning and Crowdsourcing: A Survey of Optimization Methods for Data Labeling
- Authors: Gilyazev R. A. (1, 2), Turdakov D. Yu. (1, 3, 4)
- Affiliations:
  1. Ivannikov Institute for System Programming, Russian Academy of Sciences
  2. Moscow Institute of Physics and Technology
  3. Moscow State University
  4. National Research University Higher School of Economics
- Issue: Vol 44, No 6 (2018)
- Pages: 476-491
- Section: Article
- URL: https://journal-vniispk.ru/0361-7688/article/view/176707
- DOI: https://doi.org/10.1134/S0361768818060142
- ID: 176707
Abstract
High-quality annotated collections are a key element in constructing systems that use machine learning. In most cases, these collections are created through manual labeling, which is expensive and tedious for annotators. To optimize data labeling, a number of methods based on active learning and crowdsourcing have been proposed. This paper surveys the currently available approaches, discusses their combined use, and describes existing software systems designed to facilitate the data labeling process.
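To give a sense of the kind of technique such surveys cover, the sketch below illustrates pool-based active learning with uncertainty sampling: the model repeatedly asks for labels on the examples it is least confident about. This is not code from the paper; the dataset, classifier, seed size, and query budget are assumptions chosen only for demonstration.

```python
# Illustrative sketch of pool-based active learning with uncertainty sampling.
# Not taken from the surveyed paper; model, dataset, and budget are assumed.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

labeled = list(rng.choice(len(X), size=20, replace=False))   # small seed set
pool = [i for i in range(len(X)) if i not in set(labeled)]   # unlabeled pool

model = LogisticRegression(max_iter=1000)
for _ in range(10):                      # assumed labeling budget: 10 queries
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    # Query the pool example with the lowest maximum class probability,
    # i.e. the one the current model is least certain about.
    query = pool[int(np.argmin(proba.max(axis=1)))]
    labeled.append(query)                # in practice, an annotator labels it
    pool.remove(query)

print("accuracy on remaining pool:", model.score(X[pool], y[pool]))
```

In a crowdsourcing setting, the line that appends the queried example would instead send it to human workers, whose possibly noisy answers are then aggregated before retraining.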
About the authors
R. A. Gilyazev
Ivannikov Institute for System Programming, Russian Academy of Sciences; Moscow Institute of Physics and Technology
Author for correspondence.
Email: gilyazev@ispras.ru
Russian Federation, ul. Solzhenitsyna 25, Moscow, 109004; Institutskii per. 9, Dolgoprudnyi, Moscow oblast, 141701
D. Yu. Turdakov
Ivannikov Institute for System Programming, Russian Academy of Sciences; Moscow State University; National Research University Higher School of Economics
Author for correspondence.
Email: turdakov@ispras.ru
Russian Federation, ul. Solzhenitsyna 25, Moscow, 109004; Moscow, 119991; ul. Myasnitskaya 20, Moscow, 101000