Scientific report: "Techniques to incorporate the benefits of a Hierarchy in a modified hidden Markov model"

Lin-Yi Chou, University of Waikato, Hamilton, New Zealand, lc55@

Abstract

This paper explores techniques to take advantage of the fundamental difference in structure between hidden Markov models (HMM) and hierarchical hidden Markov models (HHMM). The HHMM structure allows repeated parts of the model to be merged together. A merged model takes advantage of the recurring patterns within the hierarchy, and of the clusters that exist in some sequences of observations, in order to increase the extraction accuracy. This paper also presents a new technique for reconstructing grammar rules automatically. This work builds on the idea of combining a phrase extraction method with an HHMM to expose patterns within English text. The reconstruction is then used to simplify the complex structure of an HHMM. The models discussed here are evaluated by applying them to natural language tasks based on CoNLL-2004 [1] and a sub-corpus of the Lancaster Treebank [2].

[1] The 2004 Conference on Computational Natural Language Learning (http conll2004)
[2] Lancaster/IBM Treebank (http EAGLES96 synlex)

Keywords: information extraction, natural language, hidden Markov models.

1 Introduction

Hidden Markov models (HMMs) were introduced in the late 1960s and are widely used as a probabilistic tool for modeling sequences of observations (Rabiner and Juang, 1986). They have proven to be capable of assigning semantic labels to tokens over a wide variety of input types. This is useful for text-related tasks that involve some uncertainty, including part-of-speech tagging (Brill, 1995), text segmentation (Borkar et al., 2001), named entity recognition (Bikel et al., 1999) and information extraction tasks (McCallum et al., 1999). However, most natural language processing tasks are dependent on discovering a hierarchical structure hidden within the source information. An example would be predicting semantic …
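To make the idea of an HMM assigning semantic labels to tokens concrete, the following minimal sketch decodes a short token sequence with a two-state HMM using the Viterbi algorithm. The states, tokens, and probabilities are toy values invented for illustration; they are not the models, label sets, or corpora used in this paper.

# Toy HMM sequence labeling with Viterbi decoding.
# All states, tokens, and probabilities below are illustrative assumptions.

states = ["NOUN", "VERB"]                      # hidden labels
start_p = {"NOUN": 0.6, "VERB": 0.4}           # initial label probabilities
trans_p = {                                    # P(next label | current label)
    "NOUN": {"NOUN": 0.3, "VERB": 0.7},
    "VERB": {"NOUN": 0.8, "VERB": 0.2},
}
emit_p = {                                     # P(token | label)
    "NOUN": {"dogs": 0.5, "cats": 0.4, "run": 0.1},
    "VERB": {"dogs": 0.1, "cats": 0.1, "run": 0.8},
}

def viterbi(tokens):
    """Return the most likely label sequence for the given tokens."""
    # V[t][s] = probability of the best label path ending in state s at position t
    V = [{s: start_p[s] * emit_p[s].get(tokens[0], 1e-6) for s in states}]
    back = [{}]
    for t in range(1, len(tokens)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s].get(tokens[t], 1e-6), p)
                for p in states
            )
            V[t][s] = prob
            back[t][s] = prev
    # Trace back from the best final state to recover the full label sequence.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(tokens) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

print(viterbi(["dogs", "run"]))  # -> ['NOUN', 'VERB']

A hierarchical HMM extends this flat state space with states that expand into sub-models, which is what allows the repeated parts of the structure discussed in the abstract to be shared rather than duplicated.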
