The Back-translation Score: Automatic MT Evaluation at the Sentence Level without Reference Translations

Reinhard Rapp
Universitat Rovira i Virgili
Avinguda Catalunya 35, 43002 Tarragona, Spain
reinhard.rapp@urv.cat

Abstract

Automatic tools for machine translation (MT) evaluation such as BLEU are well established, but have the drawbacks that they do not perform well at the sentence level and that they presuppose manually translated reference texts. Assuming that the MT system to be evaluated can deal with both directions of a language pair, in this research we suggest conducting automatic MT evaluation by determining the orthographic similarity between a back-translation and the original source text. This way we eliminate the need for human-translated reference texts. By correlating BLEU and back-translation scores with human judgments, it could be shown that the back-translation score gives an improved performance at the sentence level.

1 Introduction

The manual evaluation of the results of machine translation systems requires considerable time and effort. For this reason, fast and inexpensive automatic methods were developed. They are based on the comparison of a machine translation with a reference translation produced by humans. The comparison is done by determining the number of matching word sequences between both translations. It could be shown that such methods, of which BLEU (Papineni et al., 2002) is the most common, can deliver evaluation results that show a high agreement with human judgments (Papineni et al., 2002; Coughlin, 2003; Koehn & Monz, 2006). Disadvantages of BLEU and related methods are that a human reference translation is required and that the results are reliable only at the corpus level, i.e. when computed over many sentence pairs (see e.g. Callison-Burch et al., 2006). However, at the sentence level, the results tend to be unsatisfactory due to data sparseness (Agarwal & Lavie, 2008; Callison-Burch et al., 2008). Papineni et al. (2002) describe this as follows: "BLEU's …
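The method described in the abstract can be sketched in a few lines: translate the source sentence into the target language, translate the result back, and measure how orthographically similar the back-translation is to the original. This excerpt does not specify the exact similarity measure, so the sketch below uses difflib's character-level matching ratio as one plausible stand-in, and `translate` is a hypothetical MT interface rather than anything from the paper:

```python
import difflib

def back_translation_score(source: str, back_translation: str) -> float:
    """Orthographic similarity between a source sentence and its
    back-translation, used as a reference-free proxy for MT quality.
    difflib's ratio (matching characters relative to total length) is
    an illustrative choice; the paper's exact measure may differ."""
    matcher = difflib.SequenceMatcher(None, source.lower(), back_translation.lower())
    return matcher.ratio()

# Hypothetical usage, with `translate` standing in for an MT system
# that handles both directions of the language pair:
#   forward = translate(source, src="en", tgt="de")
#   back    = translate(forward, src="de", tgt="en")
#   score   = back_translation_score(source, back)
```

A score of 1.0 means the back-translation reproduces the source exactly; errors introduced in either translation direction lower it.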
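For contrast, the n-gram matching that the introduction attributes to BLEU can be illustrated with a simplified sentence-level variant. This is not the corpus-level formulation of Papineni et al. (2002), but it makes the sentence-level sparseness problem visible: a single n-gram order with no matches drives the geometric mean, and hence the whole score, to zero.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous word n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(candidate: str, reference: str, max_n: int = 4) -> float:
    """Simplified sentence-level BLEU: clipped n-gram precisions for
    n = 1..max_n combined by a geometric mean, times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        precisions.append(overlap / max(sum(cand_counts.values()), 1))
    if min(precisions) == 0.0:  # data sparseness: common for single sentences
        return 0.0
    brevity = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)
```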
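The evaluation methodology mentioned in the abstract, correlating automatic scores with human judgments, then reduces to computing a correlation coefficient over paired per-sentence values. A minimal sketch with made-up placeholder numbers (Pearson's r via statistics.correlation, Python 3.10+):

```python
from statistics import correlation

# Placeholder values for illustration only, not results from the paper:
bleu_scores = [0.12, 0.00, 0.34, 0.08, 0.21]
bt_scores   = [0.55, 0.41, 0.78, 0.36, 0.62]
human       = [3, 2, 4, 2, 3]  # e.g. adequacy judgments on a 1-5 scale

print(correlation(bleu_scores, human))  # Pearson's r for BLEU
print(correlation(bt_scores, human))    # Pearson's r for back-translation score
```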
