Research paper: "Character-Level Machine Translation Evaluation for Languages with Ambiguous Word Boundaries"


Chang Liu and Hwee Tou Ng
Department of Computer Science, National University of Singapore
13 Computing Drive, Singapore 117417
{liuchan1, nght}@comp.nus.edu.sg

Abstract

In this work, we introduce the TESLA-CELAB metric (Translation Evaluation of Sentences with Linear-programming-based Analysis – Character-level Evaluation for Languages with Ambiguous word Boundaries) for automatic machine translation evaluation. For languages such as Chinese, where words usually have meaningful internal structure and word boundaries are often fuzzy, TESLA-CELAB acknowledges the advantage of character-level evaluation over word-level evaluation. By reformulating the problem in the linear programming framework, TESLA-CELAB addresses several drawbacks of the character-level metrics, in particular the modeling of synonyms spanning multiple characters. We show empirically that TESLA-CELAB significantly outperforms character-level BLEU in the English-Chinese translation evaluation tasks.

1 Introduction

Since the introduction of BLEU (Papineni et al., 2002), automatic machine translation (MT) evaluation has received a lot of research interest. The Workshop on Statistical Machine Translation (WMT) hosts regular campaigns comparing different machine translation evaluation metrics (Callison-Burch et al., 2009; Callison-Burch et al., 2010; Callison-Burch et al., 2011).
In the WMT shared tasks, many new-generation metrics such as METEOR (Banerjee and Lavie, 2005), TER (Snover et al., 2006), and TESLA (Liu et al., 2010) have consistently outperformed BLEU as judged by the correlations with human judgments.

The research on automatic machine translation evaluation is important for a number of reasons. Automatic translation evaluation gives machine translation researchers a cheap and reproducible way to guide their research, and makes it possible to compare machine translation methods across different studies. In addition, machine translation
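The advantage of character-level matching that the abstract argues for can be illustrated with a toy clipped n-gram precision over characters. This is a minimal sketch of the core idea behind the character-level BLEU baseline mentioned above, not of the TESLA-CELAB metric itself; the function names are illustrative, not from the paper:

```python
from collections import Counter

def char_ngrams(text, n):
    # Operate on characters directly, so no Chinese word segmentation
    # (with its ambiguous word boundaries) is required.
    chars = [c for c in text if not c.isspace()]
    return Counter(tuple(chars[i:i + n]) for i in range(len(chars) - n + 1))

def char_ngram_precision(candidate, reference, n):
    # Clipped n-gram precision over characters: one component of a
    # character-level BLEU-style score.
    cand = char_ngrams(candidate, n)
    ref = char_ngrams(reference, n)
    total = sum(cand.values())
    if total == 0:
        return 0.0
    matched = sum(min(count, ref[g]) for g, count in cand.items())
    return matched / total

# "机器翻译" (machine translation) vs. "机器学习" (machine learning):
# the shared characters 机 and 器 receive partial credit, whereas a
# word-level metric would score the two words as a complete mismatch.
print(char_ngram_precision("机器翻译", "机器学习", 1))  # 0.5
```

Note that this baseline credits only surface character overlap; the synonym groups spanning multiple characters that TESLA-CELAB handles via linear programming are exactly what such a simple counting scheme cannot model.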
