Abstract
This paper proposes the design of a shared task whose ultimate goal is the automatic evaluation of multi-turn, dyadic, textual helpdesk dialogues. The proposed task takes the form of an offline evaluation, where participating systems are given a dialogue as input and output at least one of the following: (1) an estimated distribution of the annotators' quality ratings for that dialogue; and (2) an estimated distribution of the annotators' nugget type labels for each utterance block (i.e., a maximal sequence of consecutive posts by the same utterer) in that dialogue. This shared task should help researchers build automatic helpdesk dialogue systems that respond appropriately to inquiries by taking into account the diverse views of customers. The proposed task has been accepted as part of the NTCIR-14 Short Text Conversation (STC-3) task. While estimated and gold distributions are traditionally compared by means of root mean squared error, Jensen-Shannon divergence, and the like, we propose a pilot measure for the dialogue quality subtask that considers the order of the probability bins, which we call Symmetric Normalised Order-aware Divergence (SNOD).
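As a rough illustration of the kind of comparison the abstract describes, the sketch below contrasts RMSE and Jensen-Shannon divergence with a simple order-aware distance computed on cumulative distributions. This captures the general idea behind order-aware measures such as SNOD (misplacing probability mass in a distant bin should cost more than misplacing it in an adjacent one), but it is not the paper's actual SNOD definition; the gold distribution and system estimates are hypothetical.

```python
import numpy as np

def rmse(p, q):
    """Root mean squared error between two probability vectors over the same bins."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sqrt(np.mean((p - q) ** 2))

def js_divergence(p, q, base=2):
    """Jensen-Shannon divergence; 0 when the distributions are identical."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * (np.log(a[mask] / b[mask]) / np.log(base)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def order_aware_distance(p, q):
    """Illustrative order-aware distance over ordinal bins:
    mean absolute difference of the cumulative distributions
    (an earth-mover-style distance on a 1-D ordinal scale).
    NOT the SNOD formula from the paper; it only shows how bin order
    can be taken into account."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.mean(np.abs(np.cumsum(p) - np.cumsum(q)))

# Hypothetical gold annotator ratings over an ordinal quality scale (-2 .. 2),
# and two system estimates that each misplace 0.1 of probability mass.
gold = [0.0, 0.2, 0.5, 0.3, 0.0]
near = [0.0, 0.3, 0.4, 0.3, 0.0]   # mass moved to an adjacent bin
far  = [0.0, 0.2, 0.4, 0.3, 0.1]   # same amount of mass moved two bins away

for name, est in [("near", near), ("far", far)]:
    print(name,
          round(rmse(gold, est), 4),
          round(js_divergence(gold, est), 4),
          round(order_aware_distance(gold, est), 4))
```

In this toy example the two estimates have identical RMSE, while the order-aware distance roughly doubles for the estimate that places the misplaced mass further from the gold bins, which is the behaviour an order-aware measure is meant to reward or penalise.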
Original language | English |
---|---|
Pages (from-to) | 24-30 |
Number of pages | 7 |
Journal | CEUR Workshop Proceedings |
Volume | 2008 |
Publication status | Published - 2017 |
Event | 8th International Workshop on Evaluating Information Access, EVIA 2017 - Tokyo, Japan. Duration: 2017 Dec 5 → … |
Keywords
- Dialogues
- Divergence
- Evaluation
- Nuggets
- Probability distributions
- Test collections
ASJC Scopus subject areas
- Computer Science (all)