Situated dialog model for software agents

Hideyuki Nakashima*, Yasunari Harada

*Corresponding author for this work

Research output: Article › peer-review

Abstract

When we communicate through (natural) languages, we do not explicitly say everything. Both the speaker and the hearer utilize information available from the utterance situation, which includes the mental states of the speaker and the hearer. Interesting cases are frequently observed in the use of Japanese (in dialogue situations). Syntactic (or configurational) constraints of Japanese are weaker than those of English, in the sense that the speaker may omit almost any element in a sentence. In this paper we present a model of the hearer's interpretation mechanism in the light of situated reasoning and show how the missing information can be supplied from the situation. Although we believe that the model captures the essential nature of human communication, it may be too naive as a model of human cognition. Rather, the model is intended to be used in the design of software agents that communicate with each other in a mechanical but flexible and efficient way.
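The paper itself does not give an implementation; as an illustration only, the following is a minimal Python sketch, under assumed data structures and entirely hypothetical names, of the idea described in the abstract: a hearer agent that fills in arguments the speaker omitted by consulting the utterance situation (salient entities plus the speaker/hearer roles).

```python
from dataclasses import dataclass, field

# Hypothetical sketch (not from the paper): a hearer agent resolves
# arguments omitted from an utterance against the utterance situation.

@dataclass
class Situation:
    """The utterance situation: participants plus currently salient entities."""
    speaker: str
    hearer: str
    salient: dict = field(default_factory=dict)   # role -> most salient filler

@dataclass
class Utterance:
    predicate: str
    args: dict                                    # explicitly realized arguments only

def interpret(utt: Utterance, sit: Situation) -> dict:
    """Fill omitted arguments from the situation (situated reasoning).

    Any role the speaker left unexpressed is supplied from the most salient
    candidate in the situation; as a fallback, an omitted agent defaults to
    the speaker and an omitted recipient to the hearer.
    """
    resolved = dict(utt.args)
    for role in ("agent", "recipient", "theme"):
        if role not in resolved:
            if role in sit.salient:
                resolved[role] = sit.salient[role]
            elif role == "agent":
                resolved[role] = sit.speaker
            elif role == "recipient":
                resolved[role] = sit.hearer
    return resolved

# Example: Japanese "okutta yo" ("(I) sent (it) (to you)") omits every argument.
sit = Situation(speaker="A", hearer="B", salient={"theme": "the report"})
utt = Utterance(predicate="send", args={})
print(interpret(utt, sit))
# -> {'agent': 'A', 'recipient': 'B', 'theme': 'the report'}
```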

Original language: English
Pages (from-to): 275-281
Number of pages: 7
Journal: Speech Communication
Volume: 15
Issue number: 3-4
DOI
Publication status: Published - December 1994

ASJC Scopus subject areas

  • Software
  • Modelling and Simulation
  • Communication
  • Language and Linguistics
  • Linguistics and Language
  • Computer Vision and Pattern Recognition
  • Computer Science Applications

