Improving multi-label classification performance by label constraints

Benhui Chen, Xuefen Hong, Lihua Duan, Jinglu Hu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Citations (Scopus)

Abstract

Multi-label classification is an extension of the traditional classification problem in which each instance is associated with a set of labels. In many multi-label classification tasks the labels overlap and are correlated, and implicit constraint rules exist among them. This paper presents an improved multi-label classification method based on a label ranking strategy and label constraints. First, the one-against-all decomposition technique is used to divide a multi-label classification task into multiple independent binary classification sub-problems, and one binary SVM classifier is trained for each label. Second, label constraint rules are mined from the training data by an association rule learning method. Third, a correction model based on the label constraints is used to adjust the probabilistic outputs of the SVM classifiers for label ranking. Experimental results on three well-known multi-label benchmark datasets show that the proposed method outperforms several conventional multi-label classification methods.
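The abstract outlines a three-step pipeline: per-label binary SVMs, association-rule mining over the training labels, and a constraint-based correction of the probabilistic outputs. The following is a minimal sketch of that pipeline, assuming scikit-learn SVMs with probability estimates; the pairwise rule mining and the correction weighting are illustrative assumptions, not the paper's actual association-rule learner or correction model.

```python
# Hypothetical sketch of the three-step pipeline described in the abstract.
# Assumes X is an (n, d) feature matrix and Y an (n, q) binary label matrix.
import numpy as np
from sklearn.svm import SVC

def train_binary_svms(X, Y):
    """Step 1: one-against-all decomposition, one probabilistic SVM per label."""
    return [SVC(probability=True).fit(X, Y[:, j]) for j in range(Y.shape[1])]

def mine_label_rules(Y, min_support=0.1, min_confidence=0.9):
    """Step 2: mine simple pairwise implication rules (label i -> label j)
    from the training label matrix. A full association-rule learner (e.g. Apriori)
    would also consider larger label sets; pairwise rules keep the sketch short."""
    n, q = Y.shape
    rules = []
    for i in range(q):
        support_i = Y[:, i].sum()
        if support_i / n < min_support:
            continue
        for j in range(q):
            if i == j:
                continue
            confidence = (Y[:, i] * Y[:, j]).sum() / support_i
            if confidence >= min_confidence:
                rules.append((i, j, confidence))  # "label i implies label j"
    return rules

def correct_scores(P, rules, alpha=0.5):
    """Step 3: hypothetical correction model -- pull the score of a consequent
    label up toward its antecedent's score, weighted by rule confidence."""
    P = P.copy()
    for i, j, confidence in rules:
        P[:, j] = np.maximum(P[:, j], alpha * confidence * P[:, i])
    return P

def predict(models, X, rules, threshold=0.5):
    """Rank labels by corrected probabilities and threshold them."""
    P = np.column_stack([m.predict_proba(X)[:, 1] for m in models])
    P = correct_scores(P, rules)
    return (P >= threshold).astype(int)
```

Thresholding the corrected scores at 0.5 is only one way to turn the label ranking into a label set; any ranking-based cut-off could be substituted.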

Original language: English
Title of host publication: 2013 International Joint Conference on Neural Networks, IJCNN 2013
DOIs
Publication status: Published - 2013
Event: 2013 International Joint Conference on Neural Networks, IJCNN 2013 - Dallas, TX, United States
Duration: 2013 Aug 4 – 2013 Aug 9

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks

Conference

Conference: 2013 International Joint Conference on Neural Networks, IJCNN 2013
Country/Territory: United States
City: Dallas, TX
Period: 13/8/4 – 13/8/9

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
