
Detailed Record

Author (Chinese): 張洵豪
Author (English): Chang, Hsun-Hao
Title (Chinese): 利用雙眼視覺影像於室內場景之機器人同步定位與環境地圖建立
Title (English): SLAM for Indoor Environment Using Stereo Vision
Advisor (Chinese): 陳永昌
Advisor (English): Chen, Yung-Chang
Degree: Master
University: National Tsing Hua University
Department: Department of Electrical Engineering
Student ID: 9761534
Publication Year: 2010 (ROC 99)
Graduation Academic Year: 98 (2009-2010)
Language: English
Pages: 52
Keywords (Chinese): 機器人同步定位與建立地圖、擴展式卡爾曼濾波器
Keywords (English): SLAM; Extended Kalman Filter
In recent years, real-time simultaneous localization and mapping for robots has become an important research topic. The ability to localize itself and navigate in an unknown environment in real time is indispensable for an autonomous mobile robot. The traditional localization method uses odometry data to predict the robot's position and orientation, but its error accumulates over time. Many methods have been proposed to correct this error, such as the particle filter and the Kalman filter. In our system, we use the Extended Kalman Filter to correct the localization error.
In this thesis, we propose a system that extracts image features with a stereo camera and uses the stereo geometry to recover 3D vertical-line landmarks for the system. To handle measurement uncertainty, we use two different observation models: one for nearby landmarks and another for faraway landmarks. The system comprises a wheeled robot, odometry data, image line-segment extraction, coordinate transformation of landmarks, and an Extended Kalman Filter that corrects the odometry error.
Our method achieves real-time robot localization with a single sensor and can operate in environments with little texture. The robot localizes itself in real time while moving at 0.1 m/s, and the system error and computation time are within acceptable ranges.
In recent years, simultaneous localization and mapping (SLAM) has become an important topic in robotics research. The ability of an autonomous mobile robot to simultaneously localize itself and navigate in an unknown indoor environment is indispensable. The simplest localization method uses only the odometer to estimate the robot's position and pose, but the accumulated error grows with the execution time of the system. Many algorithms, such as the Particle Filter and the Kalman Filter, can be used to reduce this error. In our system, we use the Extended Kalman Filter (EKF) to correct the error in the SLAM problem.
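The prediction-correction cycle described above can be sketched in a few lines. This is a minimal, generic EKF for a planar pose (x, y, θ) with an odometry input, not the thesis's exact motion and observation models: the function names, the simplified unicycle model, and the scalar observation used below are illustrative assumptions.

```python
import numpy as np

def ekf_predict(x, P, u, Q):
    """Propagate the pose (x, y, theta) with odometry input u = (d, dtheta).

    Simplified unicycle motion model for illustration only.
    """
    d, dth = u
    th = x[2]
    # Nonlinear motion model f(x, u)
    x_pred = x + np.array([d * np.cos(th), d * np.sin(th), dth])
    # Jacobian of f with respect to the state, evaluated at x
    F = np.array([[1.0, 0.0, -d * np.sin(th)],
                  [0.0, 1.0,  d * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    P_pred = F @ P @ F.T + Q   # covariance grows: odometry alone drifts
    return x_pred, P_pred

def ekf_update(x, P, z, h, H, R):
    """Correct the predicted state with a landmark observation z."""
    y = z - h(x)                     # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P   # covariance shrinks after update
    return x_new, P_new
```

Each landmark observation pulls the pose estimate back toward the measurement and reduces the covariance that the odometry-only prediction step inflated.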
In this thesis, we propose a system that extracts image line features from a stereo camera as landmarks and uses the stereo geometry to obtain 3D vertical-line landmarks. The system is based on the EKF. To handle measurement uncertainty, we use two different observation models: one for nearby landmarks and the other for faraway landmarks. Our algorithm involves a wheeled robot, UBOT, which serves as our experimental platform, odometer data, image line segmentation, 3D reconstruction of landmark positions, and the Extended Kalman Filter.
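Recovering a 3D landmark from a rectified stereo pair follows standard pinhole/disparity geometry, which also explains why near and far landmarks need different observation models: a faraway point has a disparity of only a pixel or two, so its depth estimate is far more uncertain than a nearby one's. A sketch, with placeholder calibration values (fx, fy, cx, cy, baseline) rather than the thesis's actual camera parameters:

```python
import numpy as np

def triangulate_point(u_left, u_right, v, fx, fy, cx, cy, baseline):
    """Triangulate a point on a vertical edge from a rectified stereo pair.

    u_left / u_right are the matched column coordinates in the two images;
    v is the shared row coordinate (rectified images align rows).
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at infinity or bad match")
    Z = fx * baseline / disparity    # depth falls as disparity shrinks
    X = (u_left - cx) * Z / fx       # lateral offset in the left camera frame
    Y = (v - cy) * Z / fy            # vertical offset
    return np.array([X, Y, Z])
```

Since depth error grows roughly with Z squared for a fixed disparity error, it is natural to treat landmarks with large disparity (nearby) and tiny disparity (faraway) under separate noise models, as the abstract describes.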
Our system implements EKF-based SLAM in real time with a single sensor and can work in environments lacking texture. The robot localizes itself while moving at a speed of 0.1 m/s, and the estimated error and computation time of the system are acceptable.
Chapter 1: Introduction
1.1 Overview of Autonomous Mobile Robotics
1.2 Motivation
1.3 Thesis Organization
Chapter 2: Related Works
2.1 Overview of Simultaneous Localization and Mapping (SLAM)
2.2 Probabilistic Models in SLAM
2.2.1 The Extended Kalman Filter SLAM
2.3 Multi-Modal Sensor-based SLAM
2.3.1 Range Sensor based SLAM
2.3.2 Vision SLAM
Chapter 3: System Overview
3.1 System Architecture
3.1.1 System Flowchart
3.1.2 Landmarks in Indoor Environment
3.2 Stereo Vision in Our System
3.2.1 Feature Extraction
3.2.2 Stereo Vision Problem in Visual SLAM
Chapter 4: Landmark Extraction and Extended Kalman Filter based SLAM Algorithm
4.1 Landmark Extraction Method
4.1.1 Landmark Extraction Flowchart
4.1.2 Image Line Extraction Method
4.1.3 Edge Detection and Edge Thinning
4.1.4 Line Segmentation
4.1.5 3D Line Parameters
4.2 System Models
4.2.1 Motion Model
4.2.2 Landmark Model
4.2.3 Observation Model
4.3 Extended Kalman Filter in SLAM
4.3.1 State Description
4.3.2 Prediction
4.3.3 Update
4.3.4 Data Association
4.3.5 Map Management
4.4 Summary
Chapter 5: Experimental Results and Discussion
5.1 Experimental Platform
5.2 Experimental Results
5.3 Discussions
Chapter 6: Conclusion and Future Work
6.1 Conclusion
6.2 Future Work