Mixture Model POMDPs for Efficient Handling of Uncertainty in Dialogue Management

James Henderson
University of Geneva, Department of Computer Science

Oliver Lemon
University of Edinburgh, School of Informatics
olemon@

Abstract

In spoken dialogue systems, Partially Observable Markov Decision Processes (POMDPs) provide a formal framework for making dialogue management decisions under uncertainty, but efficiency and interpretability considerations mean that most current statistical dialogue managers are only MDPs. These MDP systems encode uncertainty explicitly in a single state representation. We formalise such MDP states in terms of distributions over POMDP states, and propose a new dialogue system architecture (Mixture Model POMDPs) which uses mixtures of these distributions to efficiently represent uncertainty. We also provide initial evaluation results (with real users) for this architecture.

1 Introduction

Partially Observable Markov Decision Processes (POMDPs) provide a formal framework for making decisions under uncertainty. Recent research in spoken dialogue systems has used POMDPs for dialogue management (Williams and Young, 2007; Young et al., 2007). These systems represent the uncertainty about the dialogue history using a probability distribution over dialogue states, known as the POMDP's belief state, and they use approximate POMDP inference procedures to make dialogue management decisions. However, these inference procedures are too computationally intensive for most domains, and the system's behaviour can be difficult to predict. Instead, most current statistical dialogue managers use a single state to represent the dialogue history, thereby making them only Markov Decision Process models (MDPs). These state representations have been fine-tuned over many development cycles so that common types of uncertainty can be encoded in a single state. Examples of such representations include unspecified values, confidence scores, and confirmed …
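For reference, the belief state mentioned above is maintained with the standard POMDP belief update; this equation is the textbook formulation rather than a reproduction of the paper's own formalisation, which is cut off in this excerpt. After taking action a in belief state b and observing o', the updated distribution over dialogue states s' is given below, and a mixture-model belief of the kind the abstract describes informally can be written as a weighted sum of component distributions b_k (the weights w_k and components b_k here are generic notation, not the authors' exact definitions):

\[
b'(s') \;=\; \frac{P(o' \mid s', a)\,\sum_{s} P(s' \mid s, a)\, b(s)}{P(o' \mid a, b)},
\qquad
b(s) \;=\; \sum_{k=1}^{K} w_k\, b_k(s), \quad \sum_{k=1}^{K} w_k = 1 .
\]

Under such a representation, each mixture component can be thought of as playing the role of a single MDP-style state summary, with the weights capturing the system's uncertainty over which summary is correct.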
