TY - JOUR
T1 - Self-Supervised Speech Representation Learning
T2 - A Review
AU - Mohamed, Abdelrahman
AU - Lee, Hung-yi
AU - Borgholt, Lasse
AU - Havtorn, Jakob D.
AU - Edin, Joakim
AU - Igel, Christian
AU - Kirchhoff, Katrin
AU - Li, Shang-Wen
AU - Livescu, Karen
AU - Maaløe, Lars
AU - Sainath, Tara N.
AU - Watanabe, Shinji
N1 - Publisher Copyright:
© 2007-2012 IEEE.
PY - 2022/10/1
Y1 - 2022/10/1
AB - Although supervised deep learning has revolutionized speech and audio processing, it has necessitated the building of specialist models for individual tasks and application scenarios. It is likewise difficult to apply this to dialects and languages for which only limited labeled data is available. Self-supervised representation learning methods promise a single universal model that would benefit a wide variety of tasks and domains. Such methods have shown success in natural language processing and computer vision domains, achieving new levels of performance while reducing the number of labels required for many downstream scenarios. Speech representation learning is experiencing similar progress in three main categories: generative, contrastive, and predictive methods. Other approaches rely on multi-modal data for pre-training, mixing text or visual data streams with speech. Although self-supervised speech representation is still a nascent research area, it is closely related to acoustic word embedding and learning with zero lexical resources, both of which have seen active research for many years. This review presents approaches for self-supervised speech representation learning and their connection to other research areas. Since many current methods focus solely on automatic speech recognition as a downstream task, we review recent efforts on benchmarking learned representations to extend the application beyond speech recognition.
KW - Self-supervised learning
KW - speech representations
UR - http://www.scopus.com/inward/record.url?scp=85139425711&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85139425711&partnerID=8YFLogxK
DO - 10.1109/JSTSP.2022.3207050
M3 - Review article
AN - SCOPUS:85139425711
SN - 1932-4553
VL - 16
SP - 1179
EP - 1210
JO - IEEE Journal on Selected Topics in Signal Processing
JF - IEEE Journal on Selected Topics in Signal Processing
IS - 6
ER -