TY - JOUR
T1 - Improved prior selection using semantics in maximum a posteriori for few-shot learning
AU - Wu, Jiaying
AU - Hu, Jinglu
N1 - Funding Information:
This work is financially supported by the China Scholarship Council (No. 201908320571).
Publisher Copyright:
© 2021 Elsevier B.V.
PY - 2022/2/15
Y1 - 2022/2/15
N2 - Few-shot learning aims to recognize novel concepts from only a few labeled samples. Recently, significant progress has been made in addressing the overfitting caused by data scarcity, especially by methods that model the distribution of a novel category from a single data point. However, these methods rely heavily on prior knowledge from the base set, which is generally hard to define, and its selection can easily bias the learning. A popular pipeline is to pretrain a feature extractor on the base set and generate statistics from it as prior information. Since the pretrained feature extractor cannot produce accurate representations for categories it has never seen, and only 1 or 5 support images are available per novel category, it is hard to acquire accurate priors, especially when the support samples are far from the class center. To address these issues, in this paper we base our network on maximum a posteriori (MAP) estimation and propose a strategy for better prior selection from the base set. In particular, we introduce semantic information, which is learned from unsupervised text corpora and is easily available, to alleviate the bias caused by unrepresentative support samples. Our intuition is that when the support from visual information is biased, semantics can provide strong prior knowledge to assist learning. Experimental results on four few-shot benchmarks show that our method outperforms state-of-the-art methods by a large margin, improving on the best previous result for each dataset by around 2.08%–12.68% on both 1- and 5-shot tasks.
AB - Few-shot learning aims to recognize novel concepts from only a few labeled samples. Recently, significant progress has been made in addressing the overfitting caused by data scarcity, especially by methods that model the distribution of a novel category from a single data point. However, these methods rely heavily on prior knowledge from the base set, which is generally hard to define, and its selection can easily bias the learning. A popular pipeline is to pretrain a feature extractor on the base set and generate statistics from it as prior information. Since the pretrained feature extractor cannot produce accurate representations for categories it has never seen, and only 1 or 5 support images are available per novel category, it is hard to acquire accurate priors, especially when the support samples are far from the class center. To address these issues, in this paper we base our network on maximum a posteriori (MAP) estimation and propose a strategy for better prior selection from the base set. In particular, we introduce semantic information, which is learned from unsupervised text corpora and is easily available, to alleviate the bias caused by unrepresentative support samples. Our intuition is that when the support from visual information is biased, semantics can provide strong prior knowledge to assist learning. Experimental results on four few-shot benchmarks show that our method outperforms state-of-the-art methods by a large margin, improving on the best previous result for each dataset by around 2.08%–12.68% on both 1- and 5-shot tasks.
KW - Few-shot learning
KW - Maximum a posteriori
KW - Prior selection
KW - Semantics
UR - http://www.scopus.com/inward/record.url?scp=85119919988&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85119919988&partnerID=8YFLogxK
U2 - 10.1016/j.knosys.2021.107688
DO - 10.1016/j.knosys.2021.107688
M3 - Article
AN - SCOPUS:85119919988
SN - 0950-7051
VL - 237
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 107688
ER -