Scientific report: "Deep Learning for NLP"

Deep Learning for NLP without Magic
Richard Socher, Yoshua Bengio, Christopher D. Manning
richard@ bengioy@ manning@
Computer Science Department, Stanford University; DIRO, Université de Montréal, Montreal, QC, Canada

Abstract

Machine learning is everywhere in today's NLP, but by and large machine learning amounts to numerical optimization of weights for human-designed representations and features. The goal of deep learning is to explore how computers can take advantage of data to develop features and representations appropriate for complex interpretation tasks. This tutorial aims to cover the basic motivation, ideas, models, and learning algorithms in deep learning for natural language processing. Recently, these methods have been shown to perform very well on various NLP tasks such as language modeling, POS tagging, named entity recognition, sentiment analysis, and paraphrase detection, among others. The most attractive quality of these techniques is that they can perform well without any external hand-designed resources or time-intensive feature engineering. Despite these advantages, many researchers in NLP are not familiar with these methods. Our focus is on insight and understanding, using graphical illustrations and simple, intuitive derivations. The goal of the tutorial is to make the inner workings of these techniques transparent and intuitive, and their results interpretable, rather than black boxes labeled "magic here". The first part of the tutorial presents the basics of neural networks, neural word vectors, several simple models based on local windows, and the math and algorithms of training via backpropagation. In this section, applications include language modeling and POS tagging. In the second section we present recursive neural networks, which can learn structured tree outputs as well as vector representations for phrases and sentences. We cover both the equations and applications. We show how training can be achieved by a modified version of backpropagation.
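To make the first part of the abstract concrete, the following is a minimal sketch, in numpy, of the kind of window-based model described there: each word is mapped to a learned vector, the vectors of a small context window are concatenated, fed through one tanh hidden layer, and a softmax predicts a POS tag, with every parameter, including the word vectors, updated by hand-coded backpropagation. The toy vocabulary, tag set, layer sizes, and learning rate are illustrative assumptions, not the tutorial's reference implementation.

```python
# Sketch of a window-based neural tagger with learned word vectors.
# Everything here (vocabulary, tags, sizes, learning rate) is a toy assumption.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["<pad>", "the", "cat", "sat", "on", "mat"]
tags = ["DET", "NOUN", "VERB", "ADP"]
word2id = {w: i for i, w in enumerate(vocab)}
tag2id = {t: i for i, t in enumerate(tags)}

d, window, hidden = 8, 3, 16                                   # embedding dim, window size, hidden units
L = rng.standard_normal((len(vocab), d))                       # neural word vectors
W = rng.standard_normal((hidden, window * d)) / np.sqrt(window * d)
b = np.zeros(hidden)
U = rng.standard_normal((len(tags), hidden)) / np.sqrt(hidden)
c = np.zeros(len(tags))

def forward(ids):
    """Concatenate the window's word vectors, apply a tanh layer and a softmax."""
    x = L[ids].reshape(-1)                                     # (window * d,)
    h = np.tanh(W @ x + b)
    scores = U @ h + c
    p = np.exp(scores - scores.max())
    return x, h, p / p.sum()

# One tiny (sentence, tags) example; the centre word of each window is tagged.
sent = ["the", "cat", "sat", "on", "the", "mat"]
gold = ["DET", "NOUN", "VERB", "ADP", "DET", "NOUN"]
padded = ["<pad>"] + sent + ["<pad>"]

lr = 0.1
for epoch in range(300):
    for i, tag in enumerate(gold):
        ids = [word2id[w] for w in padded[i:i + window]]
        t = tag2id[tag]
        x, h, p = forward(ids)

        # Backpropagation of the cross-entropy loss, written out by hand.
        dscores = p.copy(); dscores[t] -= 1.0
        dU = np.outer(dscores, h); dc = dscores
        dh = U.T @ dscores
        dz = dh * (1.0 - h ** 2)                               # derivative of tanh
        dW = np.outer(dz, x); db = dz
        dx = W.T @ dz

        U -= lr * dU; c -= lr * dc
        W -= lr * dW; b -= lr * db
        L[ids] -= lr * dx.reshape(window, d)                   # also update the word vectors

pred = [tags[int(np.argmax(forward([word2id[w] for w in padded[i:i + window]])[2]))]
        for i in range(len(sent))]
print(list(zip(sent, pred)))   # on this toy data the predictions should typically match the gold tags
```

For the second part, the recursive neural networks mentioned in the abstract build phrase and sentence vectors bottom-up along a tree by repeatedly merging two child vectors into a parent vector of the same dimensionality. The sketch below shows only that composition step under the same illustrative assumptions; the tree-structured scoring and the modified backpropagation used for training are omitted.

```python
# Sketch of one composition step of a recursive neural network (toy sizes).
import numpy as np

rng = np.random.default_rng(1)
d = 8
W_comp = rng.standard_normal((d, 2 * d)) / np.sqrt(2 * d)
b_comp = np.zeros(d)

def compose(left, right):
    """parent = tanh(W [left; right] + b): merge two child vectors into one node."""
    return np.tanh(W_comp @ np.concatenate([left, right]) + b_comp)

# e.g. a vector for the phrase "the cat" built from its two word vectors
v_the, v_cat = rng.standard_normal(d), rng.standard_normal(d)
v_the_cat = compose(v_the, v_cat)        # a phrase vector of the same dimensionality d
```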
