TY - GEN
T1 - A mixture of multiple linear classifiers with sample weight and manifold regularization
AU - Li, Weite
AU - Chen, Benhui
AU - Zhou, Bo
AU - Hu, Jinglu
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/6/30
Y1 - 2017/6/30
AB - A mixture of multiple linear classifiers is known for its efficiency and effectiveness in tackling nonlinear classification problems. Each component classifier consists of a linear function multiplied by a gating function, which restricts that classifier to a local region. Previous research has mainly focused on the partitioning of local regions, since its quality directly determines the performance of mixture models. However, in real-world data sets, imbalanced class distributions and insufficient labeled data are two frequently encountered problems that also strongly affect the performance of learned classifiers, yet they are seldom considered or explored in the context of mixture models. In this paper, these missing components are introduced into the original formulation of mixture models, namely a sample weighting scheme for imbalanced data distributions and a manifold regularization term to leverage unlabeled data. Two closed-form solutions are then provided for parameter optimization. Experimental results demonstrate the significance of the added components. As a result, a mixture of multiple linear classifiers can be extended to imbalanced and semi-supervised learning problems.
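A minimal sketch of the gated mixture described in the abstract, assuming K local linear classifiers and writing g_k for the gating function of the k-th classifier (the symbols and the normalization constraint are illustrative, not taken from the paper):

f(x) = \sum_{k=1}^{K} g_k(x) \, \bigl( w_k^{\top} x + b_k \bigr), \qquad g_k(x) \ge 0, \quad \sum_{k=1}^{K} g_k(x) = 1

Each g_k concentrates its classifier's influence on a local region of the input space, so the mixture as a whole realizes a nonlinear decision boundary from purely linear parts.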
UR - http://www.scopus.com/inward/record.url?scp=85031040239&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85031040239&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2017.7966328
DO - 10.1109/IJCNN.2017.7966328
M3 - Conference contribution
AN - SCOPUS:85031040239
T3 - Proceedings of the International Joint Conference on Neural Networks
SP - 3747
EP - 3752
BT - 2017 International Joint Conference on Neural Networks, IJCNN 2017 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2017 International Joint Conference on Neural Networks, IJCNN 2017
Y2 - 14 May 2017 through 19 May 2017
ER -