Self-Training for Biomedical Parsing

David McClosky and Eugene Charniak
Brown Laboratory for Linguistic Information Processing (BLLIP)
Brown University
Providence, RI 02912
{dmcc,ec}@cs.brown.edu

Abstract

Parser self-training is the technique of taking an existing parser, parsing extra data, and then creating a second parser by treating the extra data as further training data. Here we apply this technique to parser adaptation. In particular, we self-train the standard Charniak/Johnson Penn-Treebank parser using unlabeled biomedical abstracts. This achieves an f-score of 84.3% on a standard test set of biomedical abstracts from the Genia corpus. This is a 20% error reduction over the best previous result on biomedical data (80.2% on the same test set).

1 Introduction

Parser self-training is the technique of taking an existing parser, parsing extra data, and then creating a second parser by treating the extra data as further training data. While for many years it was thought not to help state-of-the-art parsers, more recent work has shown otherwise. In this paper we apply this technique to parser adaptation. In particular, we self-train the standard Charniak/Johnson Penn-Treebank (C/J) parser using unannotated biomedical data. As is well known, biomedical data is hard on parsers because it is so far from more standard English. To our knowledge, this is the first application of self-training where the gap between the training and self-training data is so large.

In section two we look at previous work. In particular, we note that there is in fact very little data on self-training when the corpus used for self-training is so different from the original labeled data. Section three describes our main experiment on standard test data (Clegg and Shepherd, 2005). Section four looks at some preliminary results we obtained on development data, which show in slightly more detail how self-training improved the parser. We conclude in section five.

2 Previous Work

While self-training has worked in several …
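The self-training loop defined in the introduction is simple enough to state in a few lines of code. The sketch below uses NLTK's toy PCFG tools purely as a stand-in for the Charniak/Johnson reranking parser; the helper names and the PCFG machinery are illustrative assumptions, not the authors' actual setup.

```python
# A minimal sketch of parser self-training. NLTK's PCFG induction and
# Viterbi parser stand in for the Charniak/Johnson parser used in the
# paper; only the train -> parse -> retrain loop mirrors the technique.
from nltk import Nonterminal, induce_pcfg
from nltk.parse import ViterbiParser


def train_parser(trees):
    # Induce a PCFG from the productions of the given parse trees.
    productions = [p for tree in trees for p in tree.productions()]
    return ViterbiParser(induce_pcfg(Nonterminal("S"), productions))


def self_train(labeled_trees, unlabeled_sentences):
    # 1. Train an initial parser on hand-annotated treebank trees.
    parser = train_parser(labeled_trees)

    # 2. Parse the unlabeled sentences (for the paper: biomedical
    #    abstracts), keeping the single best parse for each one.
    auto_trees = []
    for tokens in unlabeled_sentences:
        try:
            best = next(parser.parse(tokens), None)
        except ValueError:  # sentence contains words outside the grammar
            continue
        if best is not None:
            auto_trees.append(best)

    # 3. Treat the automatic parses as further training data and train
    #    a second parser on the combined corpus.
    return train_parser(list(labeled_trees) + auto_trees)
```

In the paper's adaptation setting, `labeled_trees` would be Penn Treebank trees and `unlabeled_sentences` tokenized biomedical abstracts; the point of the experiment is that the imperfect automatic parses from step 2 still improve the retrained parser on Genia test data.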
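For readers checking the abstract's numbers, the "20% error reduction" is a relative error reduction, treating a parser's error mass as 100 minus its f-score. The arithmetic below is implied by the reported scores rather than spelled out in the excerpt.

```latex
% Error mass falls from 100 - 80.2 = 19.8 to 100 - 84.3 = 15.7 points,
% a relative reduction of roughly 20%:
\[
  \frac{(100 - 80.2) - (100 - 84.3)}{100 - 80.2}
  = \frac{19.8 - 15.7}{19.8}
  \approx 20.7\%
\]
```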