Bootstrapping Statistical Parsers from Small Datasets

Mark Steedman, Miles Osborne, Stephen Clark, Julia Hockenmaier (Division of Informatics, University of Edinburgh); Anoop Sarkar (School of Computing Science, Simon Fraser University); Rebecca Hwa (Institute for Advanced Computer Studies, University of Maryland); Paul Ruhlen, Jeremiah Crim (Center for Language and Speech Processing, Johns Hopkins University); Steven Baker (Department of Computer Science, Cornell University)

Abstract

We present a practical co-training method for bootstrapping statistical parsers using a small amount of manually parsed training material and a much larger pool of raw sentences. Experimental results show that unlabelled sentences can be used to improve the performance of statistical parsers. In addition, we consider the problem of bootstrapping parsers when the manually parsed training material is in a different domain to either the raw sentences or the testing material. We show that bootstrapping continues to be useful, even though no manually produced parses from the target domain are used.
1 Introduction

In this paper we describe how co-training (Blum and Mitchell, 1998) can be used to bootstrap a pair of statistical parsers from a small amount of annotated training data. Co-training is a weakly supervised learning algorithm in which two (or more) learners are iteratively retrained on each other's output. It has been applied to problems such as word-sense disambiguation (Yarowsky, 1995), web-page classification (Blum and Mitchell, 1998), and named-entity recognition (Collins and Singer, 1999). However, these tasks typically involved a small set of labels (around 2-3) and a relatively small parameter space. It is therefore instructive to consider co-training for more complex models. Compared to these earlier models, a statistical parser has a larger parameter space, and instead
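The iterative retraining described above can be sketched in a few lines. This is a minimal, hypothetical toy illustration of the Blum and Mitchell (1998) co-training loop (simple lookup classifiers over two feature "views"), not the paper's statistical parsers; all function names and the data format are assumptions for the sketch.

```python
def train_view(labelled, view):
    """Toy learner: majority-label lookup per feature value in one view.
    (A stand-in for a real model such as a statistical parser.)"""
    counts = {}
    for features, label in labelled:
        f = features[view]
        counts.setdefault(f, {}).setdefault(label, 0)
        counts[f][label] += 1
    return {f: max(c, key=c.get) for f, c in counts.items()}

def predict(model, features, view):
    # Returns None when the learner has no basis for a prediction.
    return model.get(features[view])

def co_train(labelled, unlabelled, rounds=5):
    """Each round, retrain both learners, then move examples that either
    learner can label from the unlabelled pool into the labelled set."""
    labelled, pool = list(labelled), list(unlabelled)
    for _ in range(rounds):
        m0 = train_view(labelled, 0)
        m1 = train_view(labelled, 1)
        newly = []
        for x in pool:
            # A learner "confidently" labels x here iff it has seen
            # x's value in its view; real systems use a proper score.
            p0, p1 = predict(m0, x, 0), predict(m1, x, 1)
            if p0 is not None:
                newly.append((x, p0))
            elif p1 is not None:
                newly.append((x, p1))
        if not newly:
            break
        for x, y in newly:
            labelled.append((x, y))
            pool.remove(x)
    return train_view(labelled, 0), train_view(labelled, 1)

# Toy usage: two labelled examples, three unlabelled. The view-1 learner
# labels ("c", "x") via feature "x" even though view 0 never saw "c".
seed = [(("a", "x"), 1), (("b", "y"), 0)]
raw = [("a", "y"), ("b", "x"), ("c", "x")]
m0, m1 = co_train(seed, raw)
```

The key design point is that each view's learner extends the other's coverage: an example unseen in one view may still be confidently labelled through the other view.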
