Scientific report: "Complexity Metrics in an Incremental Right-corner Parser"

Complexity Metrics in an Incremental Right-corner Parser

Stephen Wu (Department of Computer Science, University of Minnesota), Asaf Bachrach (Unité de Neuroimagerie Cognitive, INSERM-CEA), Carlos Cardenas (Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology), William Schuler (University of Minnesota and The Ohio State University)

Abstract

Hierarchical HMM (HHMM) parsers make promising cognitive models: while they use a bounded model of working memory and pursue incremental hypotheses in parallel, they still achieve parsing accuracies competitive with chart-based techniques. This paper aims to validate that a right-corner HHMM parser is also able to produce complexity metrics, which quantify a reader's incremental difficulty in understanding a sentence. Besides defining standard metrics in the HHMM framework, a new metric, embedding difference, is also proposed, which tests the hypothesis that HHMM store elements represent syntactic working memory. Results show that HHMM surprisal outperforms all other evaluated metrics in predicting reading times, and that embedding difference makes a significant independent contribution.

1 Introduction

Since the introduction of a parser-based calculation for surprisal by Hale (2001), statistical techniques have become common as models of reading difficulty and linguistic complexity. Surprisal has received a great deal of attention in the recent literature due to its nice mathematical properties (Levy, 2008) and its predictive ability on eye-tracking movements (Demberg and Keller, 2008; Boston et al., 2008a). Many other complexity metrics have been suggested as mutually contributing to reading difficulty, for example entropy reduction (Hale, 2006), bigram probabilities (McDonald and Shillcock, 2003), and split syntactic/lexical versions of other metrics (Roark et al., 2009). A parser-derived complexity metric such as surprisal can only be as good empirically as the model of language …
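For readers unfamiliar with the metric discussed above, the sketch below illustrates the standard Hale (2001) notion of surprisal as a difference of prefix log-probabilities. It is a minimal, generic illustration and not the HHMM implementation evaluated in the paper; the `prefix_prob` callback and the toy unigram model in the demonstration are hypothetical.

```python
import math

def surprisals(prefix_prob, words):
    """Per-word surprisal (in bits) from prefix probabilities, following Hale (2001).

    `prefix_prob(ws)` is assumed to return P(w_1 ... w_t) for the prefix `ws`
    under some incremental parser or language model (a hypothetical interface,
    not the paper's HHMM parser).
    """
    scores = []
    prev_logp = 0.0  # log2 P(empty prefix) = 0
    for t in range(1, len(words) + 1):
        logp = math.log2(prefix_prob(words[:t]))
        # surprisal(w_t) = -log2 P(w_t | w_1 .. w_{t-1})
        #                = log2 P(w_1 .. w_{t-1}) - log2 P(w_1 .. w_t)
        scores.append(prev_logp - logp)
        prev_logp = logp
    return scores


if __name__ == "__main__":
    # Toy unigram "model" purely for demonstration: P(prefix) is a product of
    # made-up word probabilities, so the resulting numbers are not meaningful.
    toy_probs = {"the": 0.1, "editor": 0.01, "slept": 0.005}

    def prefix_prob(ws):
        p = 1.0
        for w in ws:
            p *= toy_probs.get(w, 1e-4)
        return p

    print(surprisals(prefix_prob, ["the", "editor", "slept"]))
```

Under this formulation, the parser's contribution lies entirely in how it estimates the prefix probabilities, which is where the HHMM's bounded model of working memory enters.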
