

A Positioning System for Agricultural Vehicles in Greenhouses Based on Image and Laser Methods

Abstract


Taiwan's subtropical climate is favorable for crop production, but crops there are also vulnerable to pests, diseases, and natural disasters. Protected (facility) agriculture makes it possible to monitor the production environment, automate work processes, and reduce the impact of pests, diseases, and disasters. Agricultural work frequently requires vehicles to carry equipment, facilities, or crops. Following the intelligent agriculture 4.0 concept, this paper establishes a positioning system for an unmanned agricultural vehicle to support path planning and obstacle avoidance. The positioning method combines simultaneous localization and mapping (SLAM) with visual positioning. For visual positioning, two camera installations are proposed: in the first, the camera's image plane is parallel to the ground; in the second, it is not. The former allows a simple and stable object-tracking algorithm, but the monitored area is limited by the camera's field of view; the latter requires a deep-learning method for object tracking and covers a wider area, but with larger error. The paper also compares two methods of converting image coordinates into world coordinates: the bird's-eye-view transformation and the fully connected layer transformation. Across three different camera positions, the average error distance was 0.107347 m for the former and 0.033525 m for the latter. Finally, combining SLAM with visual positioning yields positioning information more accurate than either method alone, improving accuracy by about 3% to 5%.
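The bird's-eye-view transformation mentioned in the abstract maps pixel coordinates from an overhead camera to world (floor-plane) coordinates via a planar homography. The following is a minimal self-contained sketch of that idea; the thesis does not specify its calibration procedure, so the four marker correspondences, frame size, and 1 cm-per-pixel scale below are purely hypothetical.

```python
# Sketch of a bird's-eye-view (homography) mapping from image pixels to
# world coordinates in metres. All calibration values are hypothetical.

def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_homography(img_pts, world_pts):
    """Estimate the 3x3 homography H (with h33 fixed to 1) from
    four image-to-world point correspondences."""
    A, b = [], []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        A.append([x, y, 1, 0, 0, 0, -x * X, -y * X]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -x * Y, -y * Y]); b.append(Y)
    h = solve_linear(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def to_world(H, x, y):
    """Project an image pixel through H onto the ground plane."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Hypothetical calibration: four floor markers seen at the corners of a
# 640x480 frame, with roughly 1 cm of floor per pixel.
img_pts = [(0, 0), (640, 0), (640, 480), (0, 480)]
world_pts = [(0.0, 0.0), (6.4, 0.0), (6.4, 4.8), (0.0, 4.8)]
H = fit_homography(img_pts, world_pts)
print(to_world(H, 320, 240))  # vehicle detected at the frame centre
```

In practice a library routine (e.g. OpenCV's `getPerspectiveTransform` and `perspectiveTransform`) would replace the hand-rolled solver; the explicit version above only illustrates that four ground-plane correspondences fully determine the image-to-world mapping.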

