Research Paper: "Contextualizing Semantic Representations Using Syntactically Enriched Vector Models"

Contextualizing Semantic Representations Using Syntactically Enriched Vector Models

Stefan Thater, Hagen Fürstenau and Manfred Pinkal
Department of Computational Linguistics, Saarland University
stth hagenf pinkal @

Abstract

We present a syntactically enriched vector model that supports the computation of contextualized semantic representations in a quasi-compositional fashion. It employs a systematic combination of first- and second-order context vectors. We apply our model to two different tasks and show that (i) it substantially outperforms previous work on a paraphrase ranking task, and (ii) it achieves promising results on a word sense similarity task; to our knowledge, this is the first time that an unsupervised method has been applied to this task.

1 Introduction

In the logical paradigm of natural-language semantics originating from Montague (1973), semantic structure, composition and entailment have been modelled to an impressive degree of detail and formal consistency. These approaches, however, lack coverage and robustness, and their impact on realistic natural-language applications is limited. The logical framework suffers from overspecificity and is inappropriate for modelling the pervasive vagueness, ambivalence and uncertainty of natural-language semantics. Also, the handcrafting of resources covering the huge amounts of content required for deep semantic processing is highly inefficient and expensive.
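The distinction between first- and second-order context vectors mentioned in the abstract can be illustrated with a toy sketch. This is not the authors' actual model (which enriches vectors with syntactic dependency information); it only shows the general idea: a first-order vector counts the words co-occurring with a target, and a second-order vector is built by summing the first-order vectors of those context words.

```python
from collections import Counter

# Toy corpus: each "sentence" is a list of tokens (illustrative only).
corpus = [
    ["acquire", "company", "buy"],
    ["acquire", "skill", "learn"],
    ["buy", "company", "share"],
]

def first_order(word):
    """First-order context vector: counts of words co-occurring with `word`."""
    vec = Counter()
    for sent in corpus:
        if word in sent:
            vec.update(w for w in sent if w != word)
    return vec

def second_order(word):
    """Second-order context vector: sum of the first-order vectors of the
    word's context words, weighted by how often each context word occurs."""
    vec = Counter()
    for ctx, count in first_order(word).items():
        for w, c in first_order(ctx).items():
            vec[w] += count * c
    return vec
```

Second-order vectors let two words be similar even when they never co-occur directly, because they are compared through the contexts of their contexts.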
Co-occurrence-based semantic vector models offer an attractive alternative. In the standard approach, word meaning is represented by feature vectors, with large sets of context words as dimensions and their co-occurrence frequencies as values. Semantic similarity information can be acquired using unsupervised methods at virtually no cost, and the information gained is soft and gradual. Many NLP tasks have been modelled successfully using vector-based models. Examples include information retrieval (Manning et al., 2008) and word sense discrimination (Schütze).
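The standard setup described above can be sketched in a few lines: build a co-occurrence vector per word and compare vectors with cosine similarity, the usual graded similarity measure. The corpus and the sentence-level co-occurrence window are illustrative assumptions, not part of the paper's model.

```python
import math
from collections import Counter

# Tiny stand-in for a large text collection (illustrative only).
sentences = [
    "the bank approved the loan",
    "the bank raised interest rates",
    "the river bank was muddy",
    "money in the bank earns interest",
]

def vector(word):
    """Co-occurrence vector: frequency of each context word that appears
    in the same sentence as `word` (dimensions = context words)."""
    vec = Counter()
    for sent in sentences:
        tokens = sent.split()
        if word in tokens:
            vec.update(t for t in tokens if t != word)
    return vec

def cosine(u, v):
    """Cosine similarity: dot product of the vectors over the product
    of their Euclidean norms; yields a soft, graded similarity score."""
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda x: math.sqrt(sum(c * c for c in x.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0
```

For instance, `cosine(vector("loan"), vector("interest"))` is positive because both words share the contexts "the" and "bank", even though they never co-occur in a sentence.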
