TY - GEN
T1 - SemSeq
T2 - 16th International Conference of the Pacific Association for Computational Linguistics, PACLING 2019
AU - Tsuyuki, Hiroaki
AU - Ogawa, Tetsuji
AU - Kobayashi, Tetsunori
AU - Hayashi, Yoshihiko
N1 - Funding Information:
The present work was partially supported by JSPS KAKENHI Grant Number 17H01831.
Publisher Copyright:
© 2020, Springer Nature Singapore Pte Ltd.
PY - 2020
Y1 - 2020
N2 - A sentence encoder that can be readily employed in many applications or effectively fine-tuned to a specific task/domain is in high demand. Such a sentence encoding technique would find a broader range of applications if it could handle almost arbitrary word-sequences. This paper proposes a training regime that enables encoders to effectively handle word-sequences of various kinds, including complete sentences as well as incomplete sentences and phrases. The proposed training regime differs from existing methods in that it first extracts word-sequences of arbitrary length from an unlabeled corpus of ordered or unordered sentences; an encoding model is then trained to predict the adjacency between these word-sequences. Here, an unordered sentence denotes an individual sentence without neighboring contextual sentences. In some NLP tasks, such as sentence classification, the semantic content of an isolated sentence must be encoded properly. Further, by employing largely unconstrained word-sequences extracted from a large corpus, rather than relying heavily on complete sentences, linguistic expressions of various kinds are expected to appear in the training data. This property enhances the applicability of the resulting word-sequence/sentence encoders. Experimental results on supervised evaluation tasks demonstrate that the trained encoder achieves performance comparable to existing encoders, while exhibiting superior performance on unsupervised evaluation tasks involving incomplete sentences and phrases.
AB - A sentence encoder that can be readily employed in many applications or effectively fine-tuned to a specific task/domain is in high demand. Such a sentence encoding technique would find a broader range of applications if it could handle almost arbitrary word-sequences. This paper proposes a training regime that enables encoders to effectively handle word-sequences of various kinds, including complete sentences as well as incomplete sentences and phrases. The proposed training regime differs from existing methods in that it first extracts word-sequences of arbitrary length from an unlabeled corpus of ordered or unordered sentences; an encoding model is then trained to predict the adjacency between these word-sequences. Here, an unordered sentence denotes an individual sentence without neighboring contextual sentences. In some NLP tasks, such as sentence classification, the semantic content of an isolated sentence must be encoded properly. Further, by employing largely unconstrained word-sequences extracted from a large corpus, rather than relying heavily on complete sentences, linguistic expressions of various kinds are expected to appear in the training data. This property enhances the applicability of the resulting word-sequence/sentence encoders. Experimental results on supervised evaluation tasks demonstrate that the trained encoder achieves performance comparable to existing encoders, while exhibiting superior performance on unsupervised evaluation tasks involving incomplete sentences and phrases.
KW - Semantic tasks
KW - Sentence encoding
KW - Unsupervised representation learning
UR - http://www.scopus.com/inward/record.url?scp=85088503453&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85088503453&partnerID=8YFLogxK
U2 - 10.1007/978-981-15-6168-9_4
DO - 10.1007/978-981-15-6168-9_4
M3 - Conference contribution
AN - SCOPUS:85088503453
SN - 9789811561672
T3 - Communications in Computer and Information Science
SP - 43
EP - 55
BT - Computational Linguistics - 16th International Conference of the Pacific Association for Computational Linguistics, PACLING 2019, Revised Selected Papers
A2 - Nguyen, Le-Minh
A2 - Tojo, Satoshi
A2 - Phan, Xuan-Hieu
A2 - Hasida, Kôiti
PB - Springer
Y2 - 11 October 2019 through 13 October 2019
ER -