Kalman Filtering and Neural Networks - Chapter 2: PARAMETER-BASED KALMAN FILTER TRAINING: THEORY AND IMPLEMENTATION

Chapter 2 of Kalman Filtering and Neural Networks, edited by Simon Haykin (Copyright 2001, John Wiley & Sons, Inc.; ISBNs: 0-471-36998-5 hardback, 0-471-22154-6 electronic).

PARAMETER-BASED KALMAN FILTER TRAINING: THEORY AND IMPLEMENTATION
Gintaras V. Puskorius and Lee A. Feldkamp
Ford Research Laboratory, Ford Motor Company, Dearborn, Michigan

INTRODUCTION

Although the rediscovery in the mid-1980s of the backpropagation algorithm by Rumelhart, Hinton, and Williams [1] has long been viewed as a landmark event in the history of neural network computing and has led to a sustained resurgence of activity, the relative ineffectiveness of this simple gradient method has motivated many researchers to develop enhanced training procedures. In fact, the neural network literature has been inundated with papers proposing alternative training methods that are claimed to exhibit superior capabilities in terms of training speed, mapping accuracy, generalization, and overall performance relative to standard backpropagation and related methods.

Amongst the most promising and enduring of the enhanced training methods are those whose weight update procedures are based upon second-order derivative information (whereas standard backpropagation exclusively utilizes first-derivative information). A variety of second-order methods began to be developed and appeared in the published neural network literature shortly after the seminal article on backpropagation was published. The vast majority of these methods can be characterized as batch update methods, where a single weight update is based on a matrix of second derivatives that is approximated on the basis of many training patterns. Popular second-order methods have included weight updates based on quasi-Newton, Levenberg-Marquardt, and conjugate gradient techniques. Although these methods have shown promise, they are often plagued by convergence to poor local optima.
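To make the distinction concrete, here is a minimal NumPy sketch (not taken from the chapter; the function and variable names are illustrative) of the two kinds of weight update: a first-order, backpropagation-style step that uses only the gradient, and a Levenberg-Marquardt batch step that solves against the Gauss-Newton approximation to the matrix of second derivatives, accumulated over a whole batch of training patterns.

```python
# Illustrative sketch only; not the chapter's algorithm.
import numpy as np

def gradient_step(w, jac, err, lr=0.01):
    """First-order (backpropagation-style) update: w <- w - lr * J^T e."""
    return w - lr * jac.T @ err

def lm_step(w, jac, err, damping=1e-2):
    """Levenberg-Marquardt batch update.

    For a sum-of-squares loss, the Hessian is approximated over the
    whole batch by the Gauss-Newton matrix J^T J, damped toward
    gradient descent by damping * I.
    """
    n = w.size
    hess_approx = jac.T @ jac + damping * np.eye(n)  # approximate second-derivative matrix
    return w - np.linalg.solve(hess_approx, jac.T @ err)

# Toy batch: each row of jac is the derivative of one training pattern's
# error with respect to the weights; err stacks the residuals.
rng = np.random.default_rng(0)
w = rng.standard_normal(3)
jac = rng.standard_normal((50, 3))   # 50 training patterns, 3 weights
err = jac @ w - rng.standard_normal(50)
w_new = lm_step(w, jac, err)
```

The damping term interpolates between plain gradient descent (large damping) and a full Gauss-Newton step (damping near zero), which is one common way such batch second-order updates are stabilized.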
