Randomised Language Modelling for Statistical Machine Translation

David Talbot and Miles Osborne
School of Informatics, University of Edinburgh
2 Buccleuch Place, Edinburgh EH8 9LW, UK
miles@

Abstract

A Bloom filter (BF) is a randomised data structure for set membership queries. Its space requirements are significantly below lossless information-theoretic lower bounds, but it produces false positives with some quantifiable probability. Here we explore the use of BFs for language modelling in statistical machine translation. We show how a BF containing n-grams can enable us to use much larger corpora and higher-order models, complementing a conventional n-gram LM within an SMT system. We also consider (i) how to include approximate frequency information efficiently within a BF and (ii) how to reduce the error rate of these models by first checking for lower-order sub-sequences in candidate n-grams. Our solutions in both cases retain the one-sided error guarantees of the BF while taking advantage of the Zipf-like distribution of word frequencies to reduce the space requirements.

1 Introduction

Language modelling (LM) is a crucial component in statistical machine translation (SMT). Standard n-gram language models assign probabilities to translation hypotheses in the target language, typically as smoothed trigram models (e.g. Chiang, 2005). Although it is well known that higher-order LMs and models trained on additional monolingual corpora can yield better translation performance, the challenges in deploying large LMs are not trivial. Increasing the order of an n-gram model can result in an exponential increase in the number of parameters: for corpora such as the English Gigaword corpus, for instance, there are 300 million distinct trigrams and over a billion 5-grams. Since an LM may be queried millions of times per sentence, it should ideally reside locally in memory to avoid time-consuming remote or disk-based look-ups. Against this background we …
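To make the core data structure concrete, here is a minimal Bloom filter sketch in Python. It is not the authors' implementation: the class name, the double-hashing scheme, and all parameters are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit array probed by k hash functions.

    Queries have one-sided error: an inserted item is always found, but a
    non-member may be reported present with a small, quantifiable
    probability (roughly (1 - e^(-kn/m))^k after n insertions).
    """

    def __init__(self, m, k):
        self.m = m                          # number of bits
        self.k = k                          # number of hash functions
        self.bits = bytearray((m + 7) // 8)

    def _positions(self, item):
        # Double hashing: k indices h_i(x) = (h1 + i * h2) mod m derived
        # from one SHA-256 digest (an illustrative choice, not the paper's).
        digest = hashlib.sha256(item.encode("utf-8")).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1  # odd step size
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter(m=10_000_000, k=5)
bf.add("the quick brown")            # store a trigram
assert "the quick brown" in bf       # members are never missed
print("purple monkey dish" in bf)    # usually False; True only on a false positive
```

Because only set bits are ever consulted, the filter can hold hundreds of millions of n-grams in a fraction of the space an exact hash table would need, at the cost of the false positives quantified above.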

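Point (i) of the abstract can be sketched with a logarithmic count quantisation in the spirit of the paper's approach: store each n-gram once per quantised-count level, then recover an approximate count by probing levels until the first miss. The `ngram|level` key composition, the base, and the function names are illustrative assumptions; the sketch reuses the `BloomFilter` class above and also shows one simple variant of the sub-sequence check from point (ii).

```python
import math

BASE = 2  # quantisation base; a raw count c >= 1 is coded as 1 + floor(log_BASE c)

def quantise(count, base=BASE):
    # Zipf-like frequencies mean most n-grams have tiny counts, so a
    # logarithmic codebook keeps insertions per n-gram small.
    return 1 + int(math.log(count, base))

def add_ngram(bf, ngram, count):
    # Insert the n-gram once for each level up to its quantised count.
    for level in range(1, quantise(count) + 1):
        bf.add(f"{ngram}|{level}")

def approx_count(bf, ngram, base=BASE, max_level=64):
    # Probe levels 1, 2, ... until the first miss. The BF's one-sided error
    # means the returned estimate can be too high, but never too low.
    level = 0
    while level < max_level and f"{ngram}|{level + 1}" in bf:
        level += 1
    return 0 if level == 0 else base ** (level - 1)

def checked_count(bf, tokens):
    # Sub-sequence filtering: accept a higher-order n-gram only if its
    # (n-1)-gram prefix is also present, pruning many false positives.
    if len(tokens) > 1 and approx_count(bf, " ".join(tokens[:-1])) == 0:
        return 0
    return approx_count(bf, " ".join(tokens))
```

Both refinements preserve the one-sided guarantee: every level of a stored n-gram is always found, so errors can only inflate a count or admit a spurious n-gram, never lose one that was inserted.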