TAILIEUCHUNG - Recurrent Neural Networks for Prediction, P4

Recurrent Neural Networks for Prediction. Authored by Danilo P. Mandic, Jonathon A. Chambers. Copyright 2001 John Wiley & Sons Ltd. ISBNs: 0-471-49517-4 (Hardback); 0-470-84535-X (Electronic).

4 Activation Functions Used in Neural Networks

Perspective

The choice of nonlinear activation function has a key influence on the complexity and performance of artificial neural networks; note that the term neural network will be used interchangeably with the term artificial neural network. The brief introduction to activation functions given in Chapter 3 is therefore extended. Although sigmoidal nonlinear activation functions are the most common choice, there is no strong a priori justification why models based on such functions should be preferred to others. We therefore introduce neural networks as universal approximators of functions and trajectories, based upon the Kolmogorov universal approximation theorem, which is valid for both feedforward and recurrent neural networks. From these universal approximation properties, we then demonstrate the need for a sigmoidal activation function within a neuron. To reduce computational complexity, approximations to sigmoid functions are further discussed. The use of nonlinear activation functions suitable for hardware realisation of neural networks is also considered.
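As a brief illustration of the kind of low-complexity approximation alluded to above, the sketch below compares the logistic sigmoid with a piecewise-linear "hard sigmoid", a common choice when the exponential is too expensive (e.g. in hardware). The specific slope and clipping interval here are one conventional choice, not taken from this book.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid: 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def hard_sigmoid(x):
    """Piecewise-linear approximation: clip 0.25*x + 0.5 to [0, 1].
    Avoids the exponential entirely; exact at x = 0, saturates for |x| >= 2."""
    return np.clip(0.25 * x + 0.5, 0.0, 1.0)

# Worst-case deviation over a typical operating range
x = np.linspace(-6.0, 6.0, 121)
max_err = np.max(np.abs(sigmoid(x) - hard_sigmoid(x)))  # roughly 0.12, near |x| = 2
```

The approximation error is largest near the knees of the clipping (around |x| = 2) and vanishes in both the linear region near the origin and deep in saturation, which is why such approximations are often acceptable in fixed-point hardware realisations.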
For rigour, we extend the analysis to complex activation functions and recognise that a suitable complex activation function is a Möbius transformation. In that context, a framework for the rigorous analysis of some inherent properties of neural networks, such as fixed points, nesting and invertibility, based upon the theory of modular groups of Möbius transformations, is provided. All the relevant definitions, theorems and other mathematical terms are given in Appendix B and Appendix C.

Introduction

A century ago, a set of 23 originally unsolved problems in mathematics was proposed by David Hilbert (Hilbert 1901-1902). In his lecture "Mathematische Probleme" at the second International Congress of Mathematicians,
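A Möbius transformation is the map w = (az + b)/(cz + d) with ad - bc ≠ 0. As a minimal sketch (not the book's derivation), the snippet below implements the map and checks the standard identity tanh(z) = (1 - e^{-2z})/(1 + e^{-2z}), which exhibits the complex tanh as a Möbius transformation of u = exp(-2z) with coefficients (a, b, c, d) = (-1, 1, 1, 1).

```python
import cmath

def mobius(z, a, b, c, d):
    """Mobius transformation w = (a*z + b) / (c*z + d); requires ad - bc != 0."""
    if a * d - b * c == 0:
        raise ValueError("degenerate map: ad - bc must be nonzero")
    return (a * z + b) / (c * z + d)

# tanh(z) = (1 - e^{-2z}) / (1 + e^{-2z}): a Mobius transformation
# of u = exp(-2z) with (a, b, c, d) = (-1, 1, 1, 1), ad - bc = -2 != 0.
z = 0.3 + 0.2j
u = cmath.exp(-2 * z)
assert abs(mobius(u, -1, 1, 1, 1) - cmath.tanh(z)) < 1e-12
```

The nondegeneracy condition ad - bc ≠ 0 is what makes the map invertible, which connects to the fixed-point and invertibility properties the text mentions.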
