TY - JOUR
T1 - A study of analogical density in various corpora at various granularity
AU - Fam, Rashel
AU - Lepage, Yves
N1 - Funding Information:
Funding: This work was supported by a JSPS grant, number 18K11447 (Kakenhi Kiban C), entitled “Self-explainable and fast-to-train example-based machine translation”.
Publisher Copyright:
© 2021 by the authors. Licensee MDPI, Basel, Switzerland.
PY - 2021/8
Y1 - 2021/8
N2 - In this paper, we inspect the theoretical problem of counting the number of analogies between sentences contained in a text. Based on this, we measure the analogical density of the text. We focus on analogy at the sentence level, based on the level of form rather than on the level of semantics. Experiments are carried out on two different corpora in six European languages known to have various levels of morphological richness. The corpora are tokenised using several tokenisation schemes: character, sub-word and word. For the sub-word tokenisation scheme, we employ two popular sub-word models: the unigram language model and byte-pair encoding. The results show that a corpus with a higher Type-Token Ratio tends to have a higher analogical density. We also observe that masking tokens based on their frequency helps to increase the analogical density. As for the tokenisation scheme, the results show that analogical density decreases from the character level to the word level. However, this is not true when tokens are masked based on their frequencies. We find that tokenising the sentences using sub-word models and masking the least frequent tokens increases analogical density.
AB - In this paper, we inspect the theoretical problem of counting the number of analogies between sentences contained in a text. Based on this, we measure the analogical density of the text. We focus on analogy at the sentence level, based on the level of form rather than on the level of semantics. Experiments are carried out on two different corpora in six European languages known to have various levels of morphological richness. The corpora are tokenised using several tokenisation schemes: character, sub-word and word. For the sub-word tokenisation scheme, we employ two popular sub-word models: the unigram language model and byte-pair encoding. The results show that a corpus with a higher Type-Token Ratio tends to have a higher analogical density. We also observe that masking tokens based on their frequency helps to increase the analogical density. As for the tokenisation scheme, the results show that analogical density decreases from the character level to the word level. However, this is not true when tokens are masked based on their frequencies. We find that tokenising the sentences using sub-word models and masking the least frequent tokens increases analogical density.
KW - Automatic acquisition
KW - Language productivity
KW - Proportional analogy
UR - http://www.scopus.com/inward/record.url?scp=85112526141&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85112526141&partnerID=8YFLogxK
U2 - 10.3390/info12080314
DO - 10.3390/info12080314
M3 - Article
AN - SCOPUS:85112526141
SN - 2078-2489
VL - 12
JO - Information (Switzerland)
JF - Information (Switzerland)
IS - 8
M1 - 314
ER -