TY - GEN
T1 - Towards Neural Diarization for Unlimited Numbers of Speakers Using Global and Local Attractors
AU - Horiguchi, Shota
AU - Watanabe, Shinji
AU - Garcia, Paola
AU - Xue, Yawen
AU - Takashima, Yuki
AU - Kawaguchi, Yohei
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
AB - Attractor-based end-to-end diarization is achieving comparable accuracy to the carefully tuned conventional clustering-based methods on challenging datasets. However, the main drawback is that it cannot deal with the case where the number of speakers is larger than the one observed during training. This is because its speaker counting relies on supervised learning. In this work, we introduce an unsupervised clustering process embedded in the attractor-based end-to-end diarization. We first split a sequence of frame-wise embeddings into short subsequences and then perform attractor-based diarization for each subsequence. Given subsequence-wise diarization results, inter-subsequence speaker correspondence is obtained by unsupervised clustering of the vectors computed from the attractors from all the subsequences. This makes it possible to produce diarization results of a large number of speakers for the whole recording even if the number of output speakers for each subsequence is limited. Experimental results showed that our method could produce accurate diarization results of an unseen number of speakers. Our method achieved 11.84 %, 28.33 %, and 19.49 % on the CALLHOME, DIHARD II, and DIHARD III datasets, respectively, each of which is better than the conventional end-to-end diarization methods.
KW - EDA
KW - EEND
KW - attractor
KW - clustering
KW - speaker diarization
UR - http://www.scopus.com/inward/record.url?scp=85124681861&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124681861&partnerID=8YFLogxK
U2 - 10.1109/ASRU51503.2021.9687875
DO - 10.1109/ASRU51503.2021.9687875
M3 - Conference contribution
AN - SCOPUS:85124681861
T3 - 2021 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2021 - Proceedings
SP - 98
EP - 105
BT - 2021 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2021 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2021
Y2 - 13 December 2021 through 17 December 2021
ER -