TAILIEUCHUNG - Scientific paper: "Assessing Dialog System User Simulation Evaluation Measures Using Human Judges"

Assessing Dialog System User Simulation Evaluation Measures Using Human Judges

Hua Ai, University of Pittsburgh, Pittsburgh, PA 15260, USA, hua@
Diane J. Litman, University of Pittsburgh, Pittsburgh, PA 15260, USA, litman@

Abstract

Previous studies evaluate simulated dialog corpora using evaluation measures that can be automatically extracted from the dialog systems' logs. However, the validity of these automatic measures has not been fully proven. In this study, we first recruit human judges to assess the quality of three simulated dialog corpora and then use the human judgments as a gold standard to validate the conclusions drawn from the automatic measures. We observe that it is hard for the human judges to reach good agreement when asked to rate the quality of the dialogs from given perspectives. However, the human ratings give a consistent ranking of the quality of the simulated corpora generated by different simulation models. When building prediction models of human judgments using previously proposed automatic measures, we find that we cannot reliably predict human ratings using a regression model, but we can predict human rankings with a ranking model.

1 Introduction

User simulation has been widely used in different phases of spoken dialog system development. In the system development phase, user simulation is used to train different system components.
For example, Levin et al. (2000) and Scheffler (2002) exploit user simulations to generate large corpora for developing dialog strategies with Reinforcement Learning, while Chung (2004) implements user simulation to train the speech recognition and understanding components. While user simulation is considered more low-cost and time-efficient than experiments with human subjects, one major concern is how well state-of-the-art user simulations can mimic human user behaviors and how well they can substitute for human users in a variety of tasks. Schatzmann et al. (2005) propose a set of evaluation measures to …
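The abstract's contrast between predicting human ratings with a regression model and predicting human rankings with a ranking model can be illustrated with a minimal sketch. All measure values and ratings below are invented for illustration, and the "ranking model" here is simplified to ordering corpora by predicted score; the paper's actual models and features are not reproduced.

```python
import numpy as np

# Hypothetical automatic measures for three simulated corpora (rows):
# e.g., dialog length, turn-level precision, turn-level recall.
X = np.array([[12.0, 0.61, 0.55],
              [ 9.0, 0.72, 0.70],
              [15.0, 0.48, 0.41]])
# Hypothetical mean human quality ratings (1-5 scale), one per corpus.
y = np.array([3.1, 4.0, 2.2])

# Regression: ordinary least squares fit of ratings on the measures.
X1 = np.hstack([X, np.ones((len(X), 1))])   # append intercept column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)
predicted_ratings = X1 @ w

# Ranking (simplified): order corpora by predicted score and compare
# with the order induced by the human ratings.
predicted_order = list(np.argsort(-predicted_ratings))
human_order = list(np.argsort(-y))
print(predicted_order, human_order)
```

The point of the paper's finding is that even when exact rating values are hard to predict reliably, the relative ordering of corpora can still be recovered, which the order comparison above makes concrete.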
