A Maximum Expected Utility Framework for Binary Sequence Labeling

Martin Jansche*
jansche@acm.org

* Current affiliation: Google Inc. Former affiliation: Center for Computational Learning Systems, Columbia University.

Abstract

We consider the problem of predictive inference for probabilistic binary sequence labeling models under F-score as utility. For a simple class of models, we show that the number of hypotheses whose expected F-score needs to be evaluated is linear in the sequence length, and we present a framework for efficiently evaluating the expectation of many common loss/utility functions, including the F-score. This framework includes both exact and faster inexact calculation methods.

1 Introduction

1.1 Motivation and Scope

The weighted F-score (van Rijsbergen, 1974) plays an important role in the evaluation of binary classifiers, as it neatly summarizes a classifier's ability to identify the positive class. A variety of methods exists for training classifiers that optimize the F-score or some similar trade-off between false positives and false negatives (precision and recall, sensitivity and specificity, type I error rate and type II error rate, etc.). Among the most general methods are those of Mozer et al. (2001), whose constrained optimization technique is similar to those of Gao et al. (2006) and Jansche (2005). More specialized methods also exist, for example for support vector machines (Musicant et al., 2003) and for conditional random fields (Gross et al., 2007; Suzuki et al., 2006).

All of these methods are about classifier training. In this paper we focus primarily on the related but orthogonal issue of predictive inference with a fully trained probabilistic classifier. Using the weighted F-score as our utility function, predictive inference amounts to choosing an optimal hypothesis which maximizes the expected utility. We refer to this as the prediction or decoding task. In general, decoding can be a hard computational problem (Casacuberta and de la Higuera, 2000; Knight, 1999). In this ...
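To make the decoding objective concrete, the following is a minimal sketch of the weighted F-score and the maximum expected utility decision rule it induces; the notation (y for a hypothesized label sequence, z for the reference labels, beta for the precision/recall weight, p(z | x) for the trained model) is assumed here for illustration and need not match the definitions given later in the paper.

\[
P(y, z) = \frac{\sum_{i=1}^{n} y_i z_i}{\sum_{i=1}^{n} y_i}, \qquad
R(y, z) = \frac{\sum_{i=1}^{n} y_i z_i}{\sum_{i=1}^{n} z_i}, \qquad
F_\beta(y, z) = \frac{(1 + \beta^2)\, P(y, z)\, R(y, z)}{\beta^2\, P(y, z) + R(y, z)}
\]

\[
\hat{y} = \operatorname*{argmax}_{y \in \{0,1\}^n} \mathbb{E}_{z \sim p(\cdot \mid x)}\!\bigl[ F_\beta(y, z) \bigr]
        = \operatorname*{argmax}_{y \in \{0,1\}^n} \sum_{z \in \{0,1\}^n} p(z \mid x)\, F_\beta(y, z)
\]

Evaluated naively, the inner sum ranges over 2^n reference sequences and the outer maximization over 2^n hypotheses; the claims of the abstract are precisely that, for the class of models considered, only linearly many hypotheses need to be scored, and that the expectation itself can be computed efficiently, either exactly or approximately.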