Situated dialog model for software agents

Hideyuki Nakashima*, Yasunari Harada

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

When we communicate through (natural) languages, we do not explicitly say everything. Both the speaker and the hearer utilize information available from the utterance situation, which includes the mental states of the speaker and the hearer. Interesting cases are frequently observed in the use of Japanese in dialog situations. Syntactic (or configurational) constraints in Japanese are weaker than those in English, in the sense that the speaker may omit almost any element of a sentence. In this paper we present a model of the hearer's interpretation mechanism in the light of situated reasoning and show how the missing information can be supplied from the situation. Although we believe that the model captures the essential nature of human communication, it may be too naive as a model of human cognition. Rather, the model is intended for the design of software agents that communicate with each other in a mechanical but flexible and efficient way.
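The core idea of the abstract can be sketched in code. The toy interpreter below is an illustration only, not the paper's formalism: the hearer fills in arguments omitted from an utterance (as Japanese freely allows) by consulting the utterance situation, taking the speaker as the default agent and the most salient object as the default patient. All names and the role inventory here are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Situation:
    """The utterance situation: entities the hearer can draw on."""
    speaker: str
    hearer: str
    salient_objects: list = field(default_factory=list)

def interpret(verb, overt_args, situation):
    """Supply omitted arguments from the situation.

    `overt_args` maps roles ('agent', 'object') to phrases actually
    uttered; missing roles are filled from the situation: the speaker
    is the default agent, the most salient object the default object.
    """
    resolved = dict(overt_args)
    if "agent" not in resolved:
        resolved["agent"] = situation.speaker
    if "object" not in resolved and situation.salient_objects:
        resolved["object"] = situation.salient_objects[0]
    return {"verb": verb, **resolved}

# "Tabeta." -- roughly "(I) ate (it)."; both arguments are omitted,
# yet the hearer recovers them from the situation.
s = Situation(speaker="Taro", hearer="Hanako", salient_objects=["the cake"])
print(interpret("eat", {}, s))
```

In this sketch resolution is a fixed default rule; the situated-reasoning model the paper describes would instead reason over the (possibly changing) mental states of both participants.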

Original language: English
Pages (from-to): 275-281
Number of pages: 7
Journal: Speech Communication
Volume: 15
Issue number: 3-4
Publication status: Published - December 1994

Keywords

  • Agents
  • Dialog model
  • Situated reasoning

ASJC Scopus subject areas

  • Software
  • Modelling and Simulation
  • Communication
  • Language and Linguistics
  • Linguistics and Language
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
