Updating a Name Tagger Using Contemporary Unlabeled Data

Cristina Mota, L2F INESC-ID / IST / NYU, Rua Alves Redol 9, 1000-029 Lisboa, Portugal, cmota@
Ralph Grishman, New York University, Computer Science Department, New York, NY 10003, USA, grishman@

Abstract

For many NLP tasks, including named entity tagging, semi-supervised learning has been proposed as a reasonable alternative to methods that require annotating large amounts of training data. In this paper, we address the problem of analyzing new data given a semi-supervised NE tagger trained on data from an earlier time period. We will show that updating the unlabeled data is sufficient to maintain quality over time, and outperforms updating the labeled data. Furthermore, we will also show that augmenting the unlabeled data with older data in most cases does not result in better performance than simply using a smaller amount of current unlabeled data.

1 Introduction

Brill (2003) observed large gains in performance for different NLP tasks solely by increasing the size of the unlabeled data, but stressed that for other NLP tasks, such as named entity recognition (NER), we still need to focus on developing tools that help to increase the size of the annotated data. This problem is particularly crucial when processing languages such as Portuguese, for which labeled data is scarce. For instance, in the first NER evaluation for Portuguese, HAREM (Santos and Cardoso, 2007), only two out of the nine participants presented systems based on machine learning, and they both argued that they could have achieved significantly better results if they had had larger training sets. Semi-supervised methods are commonly chosen as an alternative to overcome the lack of annotated resources because they offer a good trade-off between the amount of labeled data needed and the performance achieved. Co-training is one of those methods and has been extensively studied in NLP (Nigam and Ghani, 2000; Pierce and Cardie, 2001; Ng and Cardie, 2003; Mota and Grishman, 2008). In ...
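The excerpt does not describe the co-training procedure itself, but the general scheme it refers to is two classifiers trained on different feature "views" of the same examples, each promoting its most confident labels on unlabeled data into a shared training set. The following is a minimal sketch of that loop, assuming hypothetical spelling and context views, scikit-learn Naive Bayes classifiers as stand-ins, and invented names such as spelling_view, context_view, cotrain, and grow; it illustrates the general technique, not the tagger studied in the paper.

# Minimal co-training sketch in Python (illustration of the general method only,
# not the authors' system). The two "views" -- spelling features and
# local-context features -- and all function names here are hypothetical.
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def spelling_view(token):
    # View 1: features computed from the word form itself.
    return {"lower": token.lower(), "is_cap": token[:1].isupper(), "suffix3": token[-3:]}

def context_view(tokens, i):
    # View 2: features computed from the neighbouring words only.
    return {"prev": tokens[i - 1] if i > 0 else "<s>",
            "next": tokens[i + 1] if i + 1 < len(tokens) else "</s>"}

def cotrain(labeled, unlabeled, rounds=5, grow=10):
    # labeled: list of ((view1_feats, view2_feats), tag)
    # unlabeled: list of (view1_feats, view2_feats)
    pool = list(unlabeled)
    for _ in range(rounds):
        # Retrain one classifier per view on the current labeled set.
        c1 = make_pipeline(DictVectorizer(), MultinomialNB()).fit(
            [v1 for (v1, _), _ in labeled], [tag for _, tag in labeled])
        c2 = make_pipeline(DictVectorizer(), MultinomialNB()).fit(
            [v2 for (_, v2), _ in labeled], [tag for _, tag in labeled])
        # Each classifier labels the pool and promotes its most confident
        # predictions into the shared labeled set for the next round.
        for clf, view in ((c1, 0), (c2, 1)):
            if not pool:
                break
            probs = clf.predict_proba([views[view] for views in pool])
            best = sorted(range(len(pool)), key=lambda j: probs[j].max(), reverse=True)[:grow]
            for j in best:
                labeled.append((pool[j], clf.classes_[probs[j].argmax()]))
            pool = [x for j, x in enumerate(pool) if j not in set(best)]
    return c1, c2

In the paper's terms, the quantity being varied is the unlabeled data fed to the semi-supervised learner; the sketch above only shows where such an unlabeled pool enters a co-training loop.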
