Distributional Representations for Handling Sparsity in Supervised Sequence-Labeling

Fei Huang
Temple University
1805 N. Broad St., Wachman Hall 324
tub58431@

Alexander Yates
Temple University
1805 N. Broad St., Wachman Hall 324
yates@

Abstract

Supervised sequence-labeling systems in natural language processing often suffer from data sparsity because they use word types as features in their prediction tasks. Consequently, they have difficulty estimating parameters for types which appear in the test set but seldom (or never) appear in the training set. We demonstrate that distributional representations of word types, trained on unannotated text, can be used to improve performance on rare words. We incorporate aspects of these representations into the feature space of our sequence-labeling systems. In an experiment on a standard chunking dataset, our best technique improves a chunker from F1 to F1 on chunks beginning with rare words. On the same dataset, it improves our part-of-speech tagger from 74% to 80% accuracy on rare words. Furthermore, our system improves significantly over a baseline system when applied to text from a different domain, and it reduces the sample complexity of sequence labeling.

1 Introduction

Data sparsity and high dimensionality are the twin curses of statistical natural language processing (NLP). In many traditional supervised NLP systems, the feature space includes dimensions for each word type in the data, or perhaps even combinations of word types. Since vocabularies can be extremely large, this leads to an explosion in the number of parameters. To make matters worse, language is Zipf-distributed, so that a large fraction of any training data set will be hapax legomena: very many word types will appear only a few times, and many word types will be left out of the training set altogether. As a consequence, for many word types, supervised NLP systems have very few or even zero labeled examples from which to estimate parameters.
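To make the Zipfian sparsity claim concrete, the following minimal sketch (an illustration added here, not from the paper) counts the fraction of word types in a corpus that are hapax legomena, i.e., that appear exactly once; on real corpora this fraction is typically large.

from collections import Counter

def hapax_fraction(tokens):
    # Count occurrences of each word type, then measure what share
    # of the vocabulary appears exactly once (hapax legomena).
    counts = Counter(tokens)
    hapaxes = sum(1 for c in counts.values() if c == 1)
    return hapaxes / len(counts)

tokens = "the dog ran and the cat sat while a dog barked".split()
print("hapax fraction: %.2f" % hapax_fraction(tokens))   # 7 of 9 types

Even in this toy corpus, 7 of the 9 word types occur exactly once; a supervised labeler would see one labeled example (or none) for each of them.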
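To illustrate the remedy the abstract proposes (incorporating aspects of distributional representations into the feature space of a sequence labeler), here is a minimal sketch under stated assumptions. It is not the authors' exact feature construction; the names (token_features, distributional_rep) and the toy vectors are hypothetical. The key point is that a rare word with few labeled examples still fires the shared distributional features, so it can borrow statistical strength from frequent words with similar distributions.

def token_features(sent, i, distributional_rep):
    # Standard sparse features for position i of sentence `sent`
    # (a list of word strings): the word type and its neighbors.
    word = sent[i]
    feats = {
        "word=" + word.lower(): 1.0,
        "prev=" + (sent[i - 1].lower() if i > 0 else "<s>"): 1.0,
        "next=" + (sent[i + 1].lower() if i + 1 < len(sent) else "</s>"): 1.0,
    }
    # Hypothetical distributional features: real-valued dimensions of a
    # representation learned from unannotated text, shared across all
    # word types, so rare words inherit informative parameters.
    rep = distributional_rep.get(word.lower())
    if rep is not None:
        for j, value in enumerate(rep):
            feats["dist_%d" % j] = value
    return feats

# Usage with a toy two-dimensional representation:
toy_rep = {"dog": [0.9, 0.1], "cat": [0.85, 0.15], "ran": [0.1, 0.9]}
print(token_features(["the", "dog", "ran"], 1, toy_rep))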
