
Journal of Computer Science and Information Technology
December 2018, Vol. 6, No. 2, pp. 1-14
ISSN: 2334-2366 (Print), 2334-2374 (Online)
Copyright © The Author(s). All Rights Reserved.
Published by American Research Institute for Policy Development
DOI: URL:

Linear Neurons and Their Learning Algorithms

Ying Liu1

Abstract

In this paper, we introduce the concept of Linear neurons and new learning algorithms based on Linear neurons, with an explanation of the reasons behind these algorithms. First, we briefly review the Boltzmann Machine and the fact that the Boltzmann Machine generates Markov chains with invariant distributions. We then review the θ-transformation and its completeness, i.e., any function can be expanded by the θ-transformation. We further review the ABM (Attrasoft Boltzmann Machine). The invariant distribution of the ABM is a θ-transformation; therefore, an ABM can simulate any distribution. We then argue that the ABM algorithm is only the first algorithm in a family of new algorithms based on the θ-transformation, and we introduce the simplest algorithm in this family, based on Linear neurons. We also discuss the advantages of this algorithm: accuracy, stability, and low time complexity.

Keywords: AI, Boltzmann machine, Markov chain, invariant distribution, completeness, Deep Neural Network.

1. Introduction

Neural networks and deep learning currently provide the best solutions to many supervised learning problems. In 2006, a publication by Hinton, Osindero, and Teh [1] introduced the idea of a "deep" neural network, which first trains a simple supervised model and then adds a new layer on top, training the parameters of the new layer alone. Layers are added and trained in this fashion until the network is deep. Later, this restriction of training one layer at a time was removed: after Hinton's initial attempt of training one layer at a time, deep neural networks train all layers simultaneously.
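To make this layer-by-layer scheme concrete, the following is a minimal sketch of greedy layer-wise training under illustrative assumptions: each layer is a toy least-squares linear map followed by tanh (a stand-in for the more elaborate per-layer models used in practice), and the data, dimensions, and function names are hypothetical.

```python
# A minimal sketch of greedy layer-wise training: each new layer is
# trained on top of the (frozen) layers below it. The per-layer model
# here (least-squares linear map + tanh) is illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def train_layer(X, Y):
    """Fit one layer W by least squares so that tanh(X @ W) ~ Y."""
    # Solve in pre-activation space; clip targets to keep arctanh finite.
    W, *_ = np.linalg.lstsq(X, np.arctanh(np.clip(Y, -0.99, 0.99)), rcond=None)
    return W

def forward(layers, X):
    """Propagate X through the frozen stack of already-trained layers."""
    for W in layers:
        X = np.tanh(X @ W)
    return X

# Toy data: inputs X and targets Y.
X = rng.normal(size=(200, 8))
Y = np.tanh(X @ rng.normal(size=(8, 4)))

layers = []
for depth in range(3):                 # keep adding layers ...
    H = forward(layers, X)             # features from the frozen layers below
    W = train_layer(H, Y)              # ... and train only the new top layer
    layers.append(W)
    err = np.mean((forward(layers, X) - Y) ** 2)
    print(f"depth {depth + 1}: mse = {err:.4f}")
```

Each iteration freezes everything below and fits only the newly added layer, which is the defining property of the greedy scheme described above.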

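As a concrete illustration of the fact reviewed in the abstract, that a Boltzmann Machine generates a Markov chain whose invariant distribution is the Boltzmann distribution p(s) ∝ exp(-E(s)/T), here is a minimal Gibbs-sampling sketch. The weights, biases, and temperature are illustrative choices, and this toy network is not the paper's ABM.

```python
# A minimal sketch: the Boltzmann Machine's stochastic update rule is a
# Markov chain whose invariant distribution is p(s) ~ exp(-E(s)/T).
from itertools import product
import numpy as np

rng = np.random.default_rng(1)

n = 4
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                  # symmetric weights
np.fill_diagonal(W, 0.0)           # no self-connections
b, T = rng.normal(size=n), 1.0

def energy(s):
    return -0.5 * s @ W @ s - b @ s

def gibbs_step(s):
    """One sweep: each binary unit turns on with probability
    sigmoid(net input / T), given the current states of the others."""
    for i in range(n):
        p_on = 1.0 / (1.0 + np.exp(-(W[i] @ s + b[i]) / T))
        s[i] = 1.0 if rng.random() < p_on else 0.0
    return s

# Run the chain (no burn-in, for brevity) and count visited states.
s = rng.integers(0, 2, size=n).astype(float)
counts = {}
for t in range(20000):
    s = gibbs_step(s)
    key = tuple(s.astype(int))
    counts[key] = counts.get(key, 0) + 1

# Compare empirical frequencies with the exact Boltzmann probabilities,
# normalized by enumerating all 2^n states.
Z = sum(np.exp(-energy(np.array(cfg, float)) / T)
        for cfg in product([0, 1], repeat=n))
for state, c in sorted(counts.items(), key=lambda kv: -kv[1])[:4]:
    exact = np.exp(-energy(np.array(state, float)) / T) / Z
    print(f"{state} empirical: {c / 20000:.3f} exact: {exact:.3f}")
```

Run long enough, the chain's empirical state frequencies approach the exact Boltzmann probabilities, which is the invariant-distribution property the paper builds on.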