Scientific paper: "Dialogue Segmentation with Large Numbers of Volunteer Internet Annotators"

Dialogue Segmentation with Large Numbers of Volunteer Internet Annotators

T. Daniel Midgley
Discipline of Linguistics, School of Computer Science and Software Engineering
University of Western Australia, Perth, Australia

Abstract

This paper shows the results of an experiment in dialogue segmentation. In this experiment, segmentation was done on a level of analysis similar to adjacency pairs. The method of annotation was somewhat novel: volunteers were invited to participate over the Web, and their responses were aggregated using a simple voting method. Though volunteers received a minimum of training, the aggregated responses of the group showed very high agreement with expert opinion. The group as a unit performed at the top of the list of annotators, and in many cases performed as well as or better than the best annotator.

1 Introduction

Aggregated human behaviour is a valuable source of information. The Internet shows us many examples of collaboration as a means of resource creation. Wikipedia, reviews, and Yahoo Answers are just some examples of large repositories of information powered by individuals who voluntarily contribute their time and talents. Some NLP projects are now using this idea, notably the ESP Game (von Ahn, 2004), a data collection effort presented as a game in which players label images from the Web. This paper presents an extension of this collaborative volunteer ethic in the area of dialogue annotation.
For dialogue researchers, the prospect of using volunteer annotators from the Web can be an attractive option. The task of training annotators can be time-consuming, expensive, and, if inter-annotator agreement turns out to be poor, risky. Getting Internet volunteers for annotation has its own pitfalls, however. Dialogue annotation is often not very interesting, so it can be difficult to attract willing participants. Experimenters have little control over the conditions of the annotation and the skill of the annotators. Training ...
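The "simple voting method" mentioned in the abstract can be illustrated with a minimal sketch. The representation below is an assumption, not the paper's actual implementation: each volunteer's annotation is modelled as a set of boundary positions between utterances, and a boundary is accepted when at least half of the annotators marked it.

```python
def aggregate_boundaries(annotations, threshold=0.5):
    """Majority-vote aggregation of segment boundaries (hypothetical sketch).

    annotations: list of sets, one set of boundary indices per annotator.
    threshold: fraction of annotators that must mark a position for it
               to be accepted (the paper's exact threshold is assumed here).
    """
    if not annotations:
        return set()
    # Count how many annotators marked each boundary position.
    votes = {}
    for boundaries in annotations:
        for pos in boundaries:
            votes[pos] = votes.get(pos, 0) + 1
    needed = threshold * len(annotations)
    return {pos for pos, count in votes.items() if count >= needed}

# Example: three volunteers segment a short dialogue.
volunteers = [{2, 5, 8}, {2, 5}, {2, 7, 8}]
print(sorted(aggregate_boundaries(volunteers)))  # → [2, 5, 8]
```

With three annotators and a 0.5 threshold, any boundary marked by at least two volunteers survives, which is one simple way to let a group of minimally trained annotators behave as a single, more reliable annotator.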
