A Comparison of Event Models for Naive Bayes Anti-Spam E-Mail Filtering

Karl-Michael Schneider
University of Passau, Department of General Linguistics
Innstr. 40, D-94032 Passau
schneide@phil.uni-passau.de

Abstract

We describe experiments with a Naive Bayes text classifier in the context of anti-spam E-mail filtering, using two different statistical event models: a multi-variate Bernoulli model and a multinomial model. We introduce a family of feature ranking functions for feature selection in the multinomial event model that take account of the word frequency information. We present evaluation results on two publicly available corpora of legitimate and spam E-mails. We find that the multinomial model is less biased towards one class and achieves slightly higher accuracy than the multi-variate Bernoulli model.

1 Introduction

Text categorization is the task of assigning a text document to one of several predefined categories. Text categorization plays an important role in natural language processing (NLP) and information retrieval (IR) applications. One particular application of text categorization is anti-spam E-mail filtering, where the goal is to block unsolicited messages with commercial or pornographic content (UCE, "spam") from a user's E-mail stream while letting other, legitimate messages pass. Here the task is to assign a message to one of two categories, legitimate and spam, based on the message's content.
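To make the contrast between the two event models concrete, the following sketch (not the paper's code; the toy training interface, Laplace smoothing, and function names are illustrative assumptions) scores a message under both a multi-variate Bernoulli model, where every vocabulary word contributes as present or absent, and a multinomial model, where each token occurrence contributes, so within-message word frequency matters:

```python
import math
from collections import Counter

def train(docs_by_class):
    """docs_by_class: {class: [list of token lists]} -> (vocabulary, per-class counts)."""
    vocab = {w for docs in docs_by_class.values() for d in docs for w in d}
    model = {}
    for c, docs in docs_by_class.items():
        n_docs = len(docs)
        doc_freq = Counter(w for d in docs for w in set(d))   # Bernoulli: #docs containing w
        word_freq = Counter(w for d in docs for w in d)       # multinomial: #occurrences of w
        model[c] = (n_docs, doc_freq, word_freq, sum(word_freq.values()))
    return vocab, model

def bernoulli_log_score(tokens, c, vocab, model):
    # Multi-variate Bernoulli: iterate over the whole vocabulary;
    # absent words also contribute (with probability 1 - p).
    n_docs, doc_freq, _, _ = model[c]
    present = set(tokens)
    score = 0.0
    for w in vocab:
        p = (doc_freq[w] + 1) / (n_docs + 2)  # Laplace-smoothed P(w present | c)
        score += math.log(p if w in present else 1 - p)
    return score

def multinomial_log_score(tokens, c, vocab, model):
    # Multinomial: iterate over token occurrences in the message,
    # so a word occurring twice counts twice.
    _, _, word_freq, total = model[c]
    return sum(math.log((word_freq[w] + 1) / (total + len(vocab)))
               for w in tokens if w in vocab)
```

A classifier would then pick the class maximizing the log prior plus the respective log score; the difference between the models is only in how the message is represented as an event.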
In recent years a growing body of research has applied machine learning techniques to text categorization and anti-spam E-mail filtering, including rule learning (Cohen, 1996), Naive Bayes (Sahami et al., 1998; Androutsopoulos et al., 2000b; Rennie, 2000), memory-based learning (Androutsopoulos et al., 2000b), decision trees (Carreras and Màrquez, 2001), support vector machines (Drucker et al., 1999), and combinations of different learners (Sakkis et al., 2001). In these approaches a classifier is learned from training data rather than constructed by hand, which results in classifiers that are better and more robust.