Active Learning and Crowdsourcing: A Survey of Optimization Methods for Data Labeling


Abstract

High-quality annotated collections are a key element in building machine learning systems. In most cases, such collections are created through manual labeling, which is expensive and tedious for annotators. To optimize data labeling, a number of methods based on active learning and crowdsourcing have been proposed. This paper surveys currently available approaches, discusses their combined use, and describes existing software systems designed to facilitate the data labeling process.
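
One of the families of methods the survey covers is pool-based active learning with uncertainty sampling, where the learner repeatedly asks an annotator to label the example it is currently least confident about. Below is a minimal sketch of that loop; the synthetic dataset, the logistic-regression learner, the seed-set size, and the query budget are illustrative assumptions, not details from the paper.

    # Minimal sketch of pool-based active learning with uncertainty sampling.
    # The dataset, model, and budget are assumptions chosen for illustration.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Synthetic pool whose hidden ground-truth labels stand in for a human annotator.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X), size=10, replace=False))  # small labeled seed set
    pool = [i for i in range(len(X)) if i not in labeled]

    model = LogisticRegression(max_iter=1000)
    for _ in range(20):  # labeling budget: 20 annotator queries
        model.fit(X[labeled], y[labeled])
        # Query the pool example the model is least certain about
        # (the one with the smallest maximum class probability).
        probs = model.predict_proba(X[pool])
        query = pool[int(np.argmin(probs.max(axis=1)))]
        labeled.append(query)  # the "annotator" supplies the label y[query]
        pool.remove(query)

    model.fit(X[labeled], y[labeled])  # refit on the final labeled set
    print("accuracy after active labeling:", model.score(X, y))

In practice, the uncertainty criterion can be swapped for other query strategies discussed in the active learning literature (e.g., query-by-committee or expected error reduction), and the simulated annotator replaced by a crowdsourcing platform.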

About the authors

R. A. Gilyazev

Ivannikov Institute for System Programming, Russian Academy of Sciences; Moscow Institute of Physics and Technology

Author for correspondence.
Email: gilyazev@ispras.ru
Russian Federation, ul. Solzhenitsyna 25, Moscow, 109004; Institutskii per. 9, Dolgoprudnyi, Moscow oblast, 141701

D. Yu. Turdakov

Ivannikov Institute for System Programming, Russian Academy of Sciences; Moscow State University; National Research University Higher School of Economics

Author for correspondence.
Email: turdakov@ispras.ru
Russian Federation, ul. Solzhenitsyna 25, Moscow, 109004; Moscow, 119991; ul. Myasnitskaya 20, Moscow, 101000

Copyright (c) 2018 Pleiades Publishing, Ltd.