Paper: Exact Decoding for Jointly Labeling and Chunking Sequences

Exact Decoding for Jointly Labeling and Chunking Sequences

Nobuyuki Shimizu, Department of Computer Science, State University of New York at Albany, Albany, NY 12222, USA. nobuyuki@shimi
Andrew Haas, Department of Computer Science, State University of New York at Albany, Albany, NY 12222, USA. haas@

Abstract

There are two decoding algorithms essential to the area of natural language processing. One is the Viterbi algorithm for linear-chain models, such as HMMs or CRFs. The other is the CKY algorithm for probabilistic context-free grammars. However, tasks such as noun phrase chunking and relation extraction seem to fall between the two, neither being the best fit. Ideally we would like to model entities and relations with two layers of labels. We present a tractable algorithm for exact inference over two layers of labels and chunks with time complexity O(n²), and provide empirical results comparing our model with linear-chain models.

1 Introduction

The Viterbi algorithm and the CKY algorithm are two decoding algorithms essential to the area of natural language processing. The former models a linear chain of labels, such as part-of-speech tags, and the latter models a parse tree. Both are used to extract the best prediction from the model (Manning and Schutze, 1999).
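To make the first of these concrete, here is a minimal sketch of Viterbi decoding for a toy HMM. The tag set, transition, and emission probabilities below are invented for illustration and are not taken from the paper; the recurrence (best score per state per position, extended one observation at a time) is the standard one.

```python
# Viterbi decoding for a toy two-state HMM (illustrative probabilities).
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence for the observation sequence."""
    # V[t][s] = (best score of any path ending in state s at time t, that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            # Extend the best predecessor path by state s.
            prob, path = max(
                (V[t - 1][prev][0] * trans_p[prev][s] * emit_p[s][obs[t]],
                 V[t - 1][prev][1] + [s])
                for prev in states)
            V[t][s] = (prob, path)
    prob, path = max(V[-1].values())
    return path

# Hypothetical model: noun/verb tagging of a two-word sentence.
states = ("N", "V")
start_p = {"N": 0.6, "V": 0.4}
trans_p = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.8, "V": 0.2}}
emit_p = {"N": {"dogs": 0.6, "bark": 0.1},
          "V": {"dogs": 0.1, "bark": 0.7}}

print(viterbi(("dogs", "bark"), states, start_p, trans_p, emit_p))  # → ['N', 'V']
```

With k states and n observations this runs in O(nk²), which is the linear-chain baseline the paper's O(n²) joint algorithm is compared against.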
However, some tasks seem to fall between the two, having more than one layer but being flatter than the trees created by parsers. For example, in relation extraction we have entities in one layer and relations between entities as another layer. Another such task is shallow parsing: we may want to model part-of-speech tags and noun/verb chunks at the same time, since simultaneous labeling may increase joint accuracy by sharing information between the two layers of labels. To apply the Viterbi decoder to such tasks we need two models, one for each layer, and we must feed the output of one layer to the next. In such an approach, errors in earlier processing nearly always accumulate and …
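One simple (if naive) way to see how two layers can be decoded jointly rather than in a cascade is to form the product of the two label sets and run a single decoder over the combined states, pruning incoherent pairs. This is only a sketch of the general idea, not the paper's algorithm; the tag sets and the coherence constraint below are hypothetical.

```python
# Sketch: a joint state space over a hypothetical POS layer and a BIO chunk
# layer. A single sequence decoder over these product states would label both
# layers at once, avoiding pipeline error propagation.
from itertools import product

pos_tags = ["N", "V"]
chunk_tags = ["B-NP", "I-NP", "O"]

def coherent(pos, chunk):
    # Illustrative constraint: assume a verb never occurs inside a noun chunk.
    return not (pos == "V" and chunk in ("B-NP", "I-NP"))

joint_states = [(p, c) for p, c in product(pos_tags, chunk_tags)
                if coherent(p, c)]
print(joint_states)
```

The cost of this naive product construction grows with the product of the two label-set sizes, which is one motivation for a dedicated algorithm over labels and chunks such as the O(n²) method the paper presents.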
