Abstract
In this paper, we provide an overview of our research on multimodal media and content using embodied lifelike agents. In particular, we describe our research centered on MPML (Multimodal Presentation Markup Language). MPML allows people to write and produce multimodal content easily, and serves as a core for integrating various components and functionalities important for multimodal media. To demonstrate the benefits and usability of MPML in a variety of environments, including the animated Web, 3D VRML space, mobile phones, and the physical world with a humanoid robot, several versions of MPML have been developed while keeping its basic format. Since the emotional behavior of an agent is an important factor in making agents lifelike and in having them accepted by people as an attractive and friendly style of human-computer interaction, emotion-related functions are emphasized in MPML. To reduce the workload of authoring content, it is also necessary to endow the agents with a certain level of autonomy. We present some of our approaches toward this end.
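As a rough, hypothetical sketch (not taken from the paper), an MPML-style script pairs presentation content with agent behavior and emotion annotations; the tag and attribute names used below are illustrative assumptions and may differ from the published MPML specification:

```xml
<!-- Hypothetical MPML-style example; element and attribute names are assumptions for illustration only -->
<mpml>
  <head>
    <title>Product introduction</title>
    <!-- Declare a lifelike character that will present the content -->
    <agent id="presenter" character="genie"/>
  </head>
  <body>
    <!-- Present a Web page and have the agent narrate it -->
    <page ref="http://www.example.com/product.html">
      <!-- Sequential behavior: point at the page, then speak with an emotional coloring -->
      <seq>
        <act agent="presenter" action="point" target="headline"/>
        <speak agent="presenter" emotion="joy">
          Welcome! Let me show you our new product.
        </speak>
      </seq>
    </page>
  </body>
</mpml>
```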
Original language | English |
---|---|
Pages (from-to) | 97-128 |
Number of pages | 32 |
Journal | New Generation Computing |
Volume | 24 |
Issue number | 2 |
Publication status | Published - 2006 |
Externally published | Yes |
Keywords
- Affective computing
- Content description language
- Emotion
- Lifelike agent
- Multimodal contents
ASJC Scopus subject areas
- Hardware and Architecture
- Theoretical Computer Science
- Computational Theory and Mathematics