Abstract
Prompt learning, a recent thread in few-shot learning for pre-trained language models (PLMs), has been explored for completing word analogies in an extractive way. In this paper, we reformulate the analogy task as a masked analogy completion task and use prompting to derive a generative model for analogies beyond words. We introduce a simple prompt-based fine-tuning paradigm that performs language modeling on answered prompts of analogies in the sequence-to-sequence framework. To convert the discrete terms of analogies into linear sequences, we present a symbolic prompt template. The sequence-to-sequence model is fine-tuned to fill in the missing span of masked prompts derived from different masking schemes applied to phrase analogies extracted from a small corpus. We analyze out-of-distribution performance on sentence analogies, which are unseen cases. Our experiments demonstrate that prompt-based fine-tuning with a language modeling objective enables models to achieve significantly better performance on in-distribution cases than PLMs. Masked prompt learning with one-term masking exhibits the best out-of-distribution generalization on sentence analogies, with outputs differing from the references by only 3 characters.
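For illustration, the sketch below shows one way the masked analogy completion setup could look with a sequence-to-sequence PLM: an analogy is linearized with a simple symbolic template, one term is replaced by a sentinel token (one-term masking), and the model is fine-tuned or queried to fill in the missing span. The template, model checkpoint, and helper names are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of the idea described in the abstract, not the authors' code.
# Assumptions (hypothetical): a T5-style seq2seq PLM with sentinel tokens and an
# "A : B :: C : D" linearization; the paper's exact symbolic template, masking
# schemes, and checkpoint are not specified on this page.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "t5-small"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def one_term_masked_pair(a, b, c, d):
    """Linearize an analogy A : B :: C : D, mask the last term with a
    sentinel token, and return a (source, target) pair for span-filling
    fine-tuning (one-term masking scheme)."""
    source = f"{a} : {b} :: {c} : <extra_id_0>"
    target = f"<extra_id_0> {d}"
    return source, target

# Example pair that could be used to fine-tune the model with the usual
# seq2seq language-modeling loss, or to query it directly as below.
source, target = one_term_masked_pair("walk", "walked", "sing", "sang")
inputs = tokenizer(source, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```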
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-14 |
| Number of pages | 14 |
| Journal | CEUR Workshop Proceedings |
| Volume | 3174 |
| Publication status | Published - 2022 |
| Event | 1st Workshop on the Interactions between Analogical Reasoning and Machine Learning at 31st International Joint Conference on Artificial Intelligence - 25th European Conference on Artificial Intelligence, IARML@IJCAI-ECAI 2022 - Vienna, Austria. Duration: 2022 Jul 23 → … |
Keywords
- Prompt learning
- analogies beyond words
- fine-tuning
- masked analogy completion
ASJC Scopus subject areas
- Computer Science (all)