and a reward is given for a short interval around t = 200. Initially, w(τ) = 0 for all τ. Figure 9.2A shows that the temporal difference error starts off being nonzero only at the time of the reward, t = 200, and then, over trials, moves backward in time, eventually stabilizing around the time of the stimulus, where it takes the value 2. This is equal to the integrated total reward provided over the course of each trial. Figure 9.2B shows the behavior of a number of variables during a trial, before and after learning. After learning, the prediction v(t) is 2 from the time the stimulus is first presented (t = 100) until the time the reward starts to be delivered. Thus, the temporal difference prediction error has a spike at t = 99. This spike persists because u(t) = 0 for t < 100. The temporal difference term Δv(t) is negative around t = 200, exactly compensating for the delivery of the reward and so making δ = 0. As the peak in δ moves backward from the time of the reward to the time of the stimulus, the weights w(τ) for τ = 100, 99, ... successively grow. This gradually extends the prediction of future reward v(t) from an initial ...

Figure 9.2: Learning to predict a reward. (A) The surface plot shows the prediction error δ(t) as a function of time within a trial, across trials. In the early trials the peak error occurs at the time of the reward (t = 200), while in later trials it occurs at the time of the stimulus (t = 100). (B) The rows show the stimulus u(t), the reward r(t), the prediction v(t), the temporal difference between predictions Δv(t − 1) = v(t) − v(t − 1), and the full temporal difference error δ(t − 1) = r(t − 1) + Δv(t − 1). The reward is presented over a short interval, and the prediction v sums the total reward. The left column shows the behavior before training and the right column after training. Δv(t − 1) and δ(t − 1) are plotted instead of Δv(t) and δ(t) because the latter quantities cannot be computed until time t + 1, when v(t + 1) is available.
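To make these dynamics concrete, the following minimal sketch simulates the same protocol, assuming a temporal difference rule consistent with the quantities in the figure: v(t) = Σ_τ w(τ) u(t − τ), δ(t) = r(t) + v(t + 1) − v(t), and w(τ) → w(τ) + ε δ(t) u(t − τ). This is not the code used to produce Figure 9.2; the trial length, learning rate, and number of trials are assumed values chosen so that learning converges within the simulated trials.

# A sketch of the Figure 9.2 protocol (not the authors' code): a stimulus pulse
# at t = 100 is followed by a reward spread over a short interval around t = 200,
# and stimulus-locked weights w(tau) are trained with the temporal difference rule.
import numpy as np

T = 250                                   # time steps per trial
num_trials = 500                          # training trials (assumed value)
eps = 0.5                                 # learning rate epsilon (assumed value)

u = np.zeros(T); u[100] = 1.0             # stimulus u(t): unit pulse at t = 100
r = np.zeros(T); r[195:205] = 0.2         # reward r(t): integrated total reward = 2

def predict(w, u):
    """Prediction v(t) = sum_{tau=0..t} w(tau) u(t - tau)."""
    return np.array([np.dot(w[:t + 1], u[t::-1]) for t in range(len(u))])

w = np.zeros(T)                           # weights w(tau) = 0 initially
delta_history = np.zeros((num_trials, T))

for trial in range(num_trials):
    v = predict(w, u)
    v_next = np.append(v[1:], 0.0)        # v(t + 1); zero after the trial ends
    delta = r + v_next - v                # TD error delta(t) = r(t) + v(t+1) - v(t)
    delta_history[trial] = delta
    for t in range(T):                    # w(tau) <- w(tau) + eps * delta(t) * u(t - tau)
        tau = np.arange(t + 1)
        w[tau] += eps * delta[t] * u[t - tau]

v_final = predict(w, u)
print("peak of delta, first trial:", delta_history[0].argmax())        # ~200 (reward time)
print("peak of delta, last trial :", delta_history[-1].argmax())       # ~100 (stimulus time)
print("prediction just after the stimulus:", round(v_final[100], 2))   # ~2, the total reward

Run as written, the error peak should start near the reward time and end up near the stimulus time, with the asymptotic prediction close to the integrated reward of 2. The persistent spike at t = 99 is never trained away because u(t) = 0 for t < 100, so no stimulus-locked weight can be updated at that time, which is the point made in the text above.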