International Encyclopedia of Statistical Science

lasso estimates can be obtained at the same computational cost as that of an ordinary least squares estimation (Hastie et al. 2009). Further, the lasso estimator remains numerically feasible for dimensions m that are much higher than the sample size n. Zou and Hastie (2005) introduced a hybrid PLS regression with the so-called elastic net penalty, defined as λ Σⱼ₌₁ᵖ (α βⱼ² + (1 − α)|βⱼ|). Here the penalty function is a linear combination of the ridge regression penalty function and the lasso penalty function. A different type of PLS, called the garrote, is due to Breiman (1993). Further, PLS estimation provides a generalization of both nonparametric least squares and weighted projection estimators, and ...

SPRINGER REFERENCE
Miodrag Lovric, Editor
International Encyclopedia of Statistical Science
Springer

Absolute Penalty Estimation

Ejaz S. Ahmed (1), Enayetur Raheem (2), Shakhawat Hossain (2)
(1) Professor and Department Head, Mathematics and Statistics, University of Windsor, Windsor, ON, Canada
(2) University of Windsor, Windsor, ON, Canada

In statistics, the technique of least squares is used for estimating the unknown parameters in a linear regression model (see Linear Regression Models). This method minimizes the sum of squared distances between the observed responses in a set of data and the fitted responses from the regression model. Suppose we observe a collection of data (yᵢ, xᵢ), i = 1, …, n, on n units, where the yᵢ are responses and xᵢ = (xᵢ₁, xᵢ₂, …, xᵢₚ)ᵀ is a vector of predictors. It is convenient to write the model in matrix notation as

    y = Xβ + ε,    (1)

where y is the n × 1 vector of responses, X is the n × p matrix known as the design matrix, β = (β₁, β₂, …, βₚ)ᵀ is the unknown parameter vector, and ε is the vector of random errors. In ordinary least squares (OLS) regression, we estimate β by minimizing the residual sum of squares, RSS = (y − Xβ)ᵀ(y − Xβ), giving β̂_OLS = (XᵀX)⁻¹Xᵀy. This estimator is simple and has some good statistical properties.
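As a quick illustration of the OLS formula above (not part of the original article), the estimate β̂_OLS = (XᵀX)⁻¹Xᵀy can be computed on synthetic data with NumPy; the data and coefficient values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))                    # design matrix (n x p)
beta_true = np.array([1.5, -2.0, 0.5])         # arbitrary "true" coefficients
y = X @ beta_true + 0.1 * rng.normal(size=n)   # y = X beta + eps

# Normal equations: beta_hat = (X^T X)^{-1} X^T y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# lstsq solves the same least-squares problem and is preferred numerically
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With n much larger than p and a well-conditioned X, the two solutions agree; the difficulties described next arise precisely when X is rank-deficient or nearly collinear.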
However, the estimator suffers from a lack of uniqueness if the design matrix X is less than full rank, and it is unstable if the columns of X are nearly collinear. To achieve better prediction and to alleviate the ill-conditioning problem of XᵀX, Hoerl and Kennard (1970) introduced ridge regression (see Ridge and Surrogate Ridge Regressions), which minimizes the RSS subject to the constraint Σⱼ βⱼ² ≤ t; in other words,

    β̂_ridge = argmin_β { Σᵢ₌₁ⁿ (yᵢ − β₀ − Σⱼ₌₁ᵖ xᵢⱼβⱼ)² + λ Σⱼ₌₁ᵖ βⱼ² },    (2)

where λ ≥ 0 is known as the complexity parameter that controls the amount of shrinkage: the larger the value of λ, the greater the amount of shrinkage. The quadratic penalty term makes β̂_ridge a linear function of y. Frank and Friedman (1993) introduced bridge regression, a generalized version of penalty (or absolute penalty type) estimation, which includes ridge ...
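To make the shrinkage behavior of these penalties concrete, here is a sketch using scikit-learn (a library not mentioned in the article); note that scikit-learn calls the complexity parameter `alpha` rather than λ, and its `ElasticNet` mixes the L1 and L2 penalties through `l1_ratio`:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]        # sparse truth: only 3 active predictors
y = X @ beta_true + rng.normal(size=n)

# Ridge: quadratic penalty; larger alpha (i.e. lambda) means more shrinkage
ridge_small = Ridge(alpha=0.01).fit(X, y)
ridge_large = Ridge(alpha=100.0).fit(X, y)

# Lasso: absolute penalty; can set coefficients exactly to zero
lasso = Lasso(alpha=0.5).fit(X, y)

# Elastic net: convex combination of the quadratic and absolute penalties
enet = ElasticNet(alpha=0.5, l1_ratio=0.5).fit(X, y)
```

Ridge shrinks all coefficients toward zero but leaves them nonzero, while the lasso typically zeroes out the inactive predictors, which is why the absolute penalty also performs variable selection.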
