An interpretable neural network ensemble

Pitoyo Hartono*, Shuji Hashimoto

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    4 Citations (Scopus)

    Abstract

    The objective of this study is to build a neural network classifier that is not only reliable but also, unlike most presently available neural networks, logically interpretable in a human-plausible manner. Most existing studies of rule extraction from trained neural networks focus on extracting rules from network models that were designed without rule extraction in mind, so that after training they are effectively treated as black boxes; consequently, rule extraction becomes a hard task. In this study we construct a neural network ensemble with rule extraction taken into account from the outset. The function of the ensemble can be readily interpreted to generate logical rules that are understandable to humans. We believe that the interpretability of neural networks contributes to improving their reliability and usability when they are applied to critical real-world problems.
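
    The abstract does not detail the ensemble's architecture. As a rough, hypothetical illustration of the general idea only (an ensemble whose members are kept simple enough that each member's decision reads directly as an IF-THEN rule, combined by majority vote), here is a minimal Python sketch; the class names, the single-feature threshold members, and the voting scheme are assumptions for illustration, not the authors' model.

```python
# A minimal, hypothetical sketch of a rule-interpretable ensemble.
# Each member classifies on one feature and one learned threshold,
# so its decision reads directly as an IF-THEN rule. This is NOT the
# architecture of Hartono & Hashimoto (2007); details are illustrative.
import numpy as np

class ThresholdMember:
    """One ensemble member: a single-feature threshold unit."""
    def __init__(self, feature_index):
        self.feature_index = feature_index
        self.threshold = 0.0
        self.polarity = 1  # +1: predict class 1 above threshold; -1: below

    def fit(self, X, y):
        # Exhaustively pick the (threshold, polarity) pair with the
        # best training accuracy on this member's feature.
        values = X[:, self.feature_index]
        best_acc = -1.0
        for t in np.unique(values):
            for pol in (1, -1):
                pred = (pol * (values - t) > 0).astype(int)
                acc = (pred == y).mean()
                if acc > best_acc:
                    best_acc, self.threshold, self.polarity = acc, t, pol
        return self

    def predict(self, X):
        v = X[:, self.feature_index]
        return (self.polarity * (v - self.threshold) > 0).astype(int)

    def rule(self, feature_names):
        # The member's whole decision function as a human-readable rule.
        op = ">" if self.polarity == 1 else "<="
        return f"IF {feature_names[self.feature_index]} {op} {self.threshold:.2f} THEN class=1"

class InterpretableEnsemble:
    """Majority vote over members; the rule set explains every vote."""
    def __init__(self, n_features):
        self.members = [ThresholdMember(i) for i in range(n_features)]

    def fit(self, X, y):
        for m in self.members:
            m.fit(X, y)
        return self

    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.members])
        return (votes.mean(axis=0) > 0.5).astype(int)

    def rules(self, feature_names):
        return [m.rule(feature_names) for m in self.members]

# Usage on toy data: class 1 when both features are large.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))
y = ((X[:, 0] > 0.5) & (X[:, 1] > 0.5)).astype(int)
ens = InterpretableEnsemble(n_features=2).fit(X, y)
for r in ens.rules(["x0", "x1"]):
    print(r)  # e.g. "IF x0 > 0.50 THEN class=1"
```

    The design point the sketch tries to capture is the one the abstract makes: when members are constrained to an interpretable form before training, the rules fall out of the model directly, rather than being reverse-engineered from an opaque trained network.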

    Original language: English
    Title of host publication: IECON Proceedings (Industrial Electronics Conference)
    Pages: 228-232
    Number of pages: 5
    DOIs
    Publication status: Published - 2007
    Event: 33rd Annual Conference of the IEEE Industrial Electronics Society, IECON - Taipei
    Duration: 2007 Nov 5 – 2007 Nov 8

    Other

    Other: 33rd Annual Conference of the IEEE Industrial Electronics Society, IECON
    City: Taipei
    Period: 07/11/5 – 07/11/8

    ASJC Scopus subject areas

    • Electrical and Electronic Engineering
