On uncertain logic based upon information theory

Toshiyasu Matsushima*, Joe Suzuki, Hiroshige Inazumi, Shigeichi Hirasawa

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review


Summary form only given, as follows. The authors propose a semantic generalized predicate logic, based on probability theory and information theory, that provides theoretical methods for processing uncertain knowledge in artificial intelligence (AI) applications. The basic concept of the proposed logic is that the interpretation of a well-formed formula (wff) containing uncertainty is represented by a conditional probability. Under this interpretation model, many problems that conventional AI methods cannot handle can be explained in terms of information theory. From this definition, the self-information of a wff, the mutual information between a pair of predicates, and the information gain from reasoning can be derived. Next, reasoning rules are evaluated using the information gain, which expresses the difference between the prior and posterior information of the consequent wff. Finally, the authors give a new calculation method for reasoning that yields the most unbiased probability estimate given the available evidence, and prove that the proposed method is optimal under the principle of maximum entropy, subject to the given marginal probability condition.
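The abstract's information-theoretic quantities can be illustrated with a minimal sketch. The script below is not the authors' method; it only demonstrates, for hypothetical binary predicates A and B with assumed marginals, the standard definitions of self-information, entropy, and mutual information, and the textbook fact underlying the maximum-entropy argument: among all joint distributions consistent with given marginals alone, the product distribution has the highest entropy (and zero mutual information).

```python
import math

# Hypothetical marginals for two binary predicates A and B (illustrative only).
p_a = [0.7, 0.3]   # P(A=0), P(A=1)
p_b = [0.6, 0.4]   # P(B=0), P(B=1)

def entropy(dist):
    """Shannon entropy in bits of a list of probabilities."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def self_information(p):
    """Self-information -log2 P(event), the 'surprise' of a wff being true."""
    return -math.log2(p)

def mutual_information(joint, pa, pb):
    """I(A;B) = sum_{a,b} P(a,b) log2 [ P(a,b) / (P(a) P(b)) ]."""
    return sum(
        joint[i][j] * math.log2(joint[i][j] / (pa[i] * pb[j]))
        for i in range(len(pa))
        for j in range(len(pb))
        if joint[i][j] > 0
    )

def flatten(joint):
    return [p for row in joint for p in row]

# Maximum-entropy joint consistent with the marginals alone:
# the product distribution P(a)P(b).
maxent_joint = [[pa * pb for pb in p_b] for pa in p_a]

# Another joint with the same marginals but positive correlation
# (rows sum to p_a, columns sum to p_b).
corr_joint = [[0.5, 0.2],
              [0.1, 0.2]]

print(self_information(p_a[1]))                     # surprise of A=1
print(entropy(flatten(maxent_joint)))               # maximal given the marginals
print(entropy(flatten(corr_joint)))                 # strictly lower
print(mutual_information(maxent_joint, p_a, p_b))   # ~0 for the product
print(mutual_information(corr_joint, p_a, p_b))     # > 0: B informs about A
```

The gap between the two entropies equals the mutual information of the correlated joint, mirroring the abstract's use of information gain as the difference between prior and posterior information.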

Original language: English
Number of pages: 2
Publication status: Published - 1988

ASJC Scopus subject areas

  • General Engineering

