Vision-only Motion Controller for Omni-directional Mobile Robot Navigation

8. Navigation experiments

Navigation experiments were scheduled in two different environments: the 3rd-floor corridor and the 1st-floor hall of the Mechanical Engineering Department building. The layout of the corridor environment is shown in Fig. 14, and the layout of the hall environment, with representative images, is presented in Fig. 21. The corridor was prepared with a total of 3 nodes separated from each other by about 22.5 m; the corridor is about 52.2 m long and 1.74 m wide. In the hall environment, 5 nodes were arranged. The distance between nodes varies; the longest gap, between node 2 and node 3, is about 4 m, as shown in Fig. 21.

Fig. 21. Experiment layout of the hall environment with representative images of each node

8.1 Experimental setup

For the real-world experiments outlined in this study, the ZEN360 autonomous mobile robot was used. It is equipped with a CCD colour video camera; the robot system is explained in section 4. Each image acquired by the system has a resolution of 320 x 240. In the corridor environment, the robot is scheduled to navigate from Node 1 to Node 3, passing through Node 2 at the middle of the run. In the hall environment, the robot has to navigate from Node 1 to Node 5 following the node sequence, and is expected to perform a turning task at most of the nodes.

The robot was first brought to each environment and a recording run was executed. The robot captured images to supply environmental visual features for both position and orientation identification. The images were captured around each specified node following the method explained in sections 7.1 and 7.3. Then the robot generated a topological map and the visual features were used for training the NNs; minimal sketches of both steps are given below. After the recording run the
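The topological map mentioned above can be pictured as a small graph whose vertices are the recorded nodes and whose edges carry approximate travel distances. The following is a minimal sketch under that reading, not the authors' implementation: the class and method names (TopoMap, add_edge, route) are illustrative, and of the hall distances only the ~4 m edge between nodes 2 and 3 comes from the text.

```python
from collections import deque

# Minimal sketch of a topological map as an undirected weighted graph.
# Node numbering follows the hall experiment (nodes 1..5); only the
# 4 m edge between nodes 2 and 3 is taken from the text, the other
# distances are placeholder assumptions.

class TopoMap:
    def __init__(self):
        self.edges = {}  # node -> {neighbour: distance in metres}

    def add_edge(self, a, b, distance_m):
        self.edges.setdefault(a, {})[b] = distance_m
        self.edges.setdefault(b, {})[a] = distance_m

    def route(self, start, goal):
        """Breadth-first search for the node sequence from start to goal."""
        frontier = deque([[start]])
        seen = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges.get(path[-1], {}):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

hall = TopoMap()
hall.add_edge(1, 2, 3.0)   # assumed distance
hall.add_edge(2, 3, 4.0)   # longest gap, ~4 m per the text
hall.add_edge(3, 4, 3.0)   # assumed distance
hall.add_edge(4, 5, 3.0)   # assumed distance

print(hall.route(1, 5))    # -> [1, 2, 3, 4, 5]
```

With such a map, the hall task of section 8.1 reduces to following the node sequence returned by route(1, 5).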
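The recording run itself, capturing 320 x 240 images around each node, extracting visual features, and training neural networks for position identification, can likewise be sketched as a simple pipeline. The snippet below is an illustrative outline only: capture_image and extract_features are hypothetical placeholders for the camera interface and the feature extraction of section 7, and scikit-learn's MLPClassifier stands in for the chapter's NN, whose architecture the excerpt does not detail; the number of views per node is also assumed.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier  # stand-in for the chapter's NN

IMAGE_SHAPE = (240, 320)  # 320 x 240 images, as stated in the text

def capture_image():
    """Placeholder for grabbing a frame from the robot's CCD camera."""
    return np.random.rand(*IMAGE_SHAPE)  # dummy frame for the sketch

def extract_features(image):
    """Placeholder for the visual-feature extraction of sections 7.1/7.3."""
    return image.mean(axis=0)  # e.g. a 320-element column-average profile

# Recording run: collect several views around each node and label them.
features, labels = [], []
for node_id in range(1, 6):          # hall environment, nodes 1..5
    for _ in range(10):              # views per node is an assumption
        features.append(extract_features(capture_image()))
        labels.append(node_id)

# Train a position classifier on the recorded features.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
clf.fit(np.array(features), np.array(labels))

# During navigation, the current view is classified to identify the node.
current = extract_features(capture_image())
print("estimated node:", clf.predict(current.reshape(1, -1))[0])
```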