Analytical Review of Methods for Automatic Analysis of Extra-Linguistic Units in Spontaneous Speech
- Authors: Povolotskaia A.A., Karpov A.A.
- Affiliation: St. Petersburg Federal Research Center of the Russian Academy of Sciences (SPC RAS)
- Issue: Vol 23, No 1 (2024)
- Pages: 5-38
- Section: Artificial intelligence, knowledge and data engineering
- URL: https://journal-vniispk.ru/2713-3192/article/view/267186
- DOI: https://doi.org/10.15622/ia.23.1.1
- ID: 267186
About the authors
A.A. Povolotskaia
St. Petersburg Federal Research Center of the Russian Academy of Sciences (SPC RAS)
Email: anastasiia.povolotskaia@gmail.com
14th Line V.O., 39
A.A. Karpov
St. Petersburg Federal Research Center of the Russian Academy of Sciences (SPC RAS)
Email: karpov@iias.spb.su
14th Line V.O., 39
