Author(s) |
Sadgrove, Edmund J
Falzon, Gregory
Miron, David J
Lamb, David
|
Publication Date |
2018-10-27
|
Abstract |
Machine vision is an essential function of autonomous robotics, especially those which use visual mechanisms to navigate the complexities of the outside world. Agricultural environments such as pastures present diverse and complex visual scenes containing flora, fauna and farm machinery. The ability to detect key objects within this environment greatly assists autonomous robotic navigation and operations. Operation of agricultural robotics such as quad-copters requires real-time (milliseconds) analysis of visual data to ensure performance. Current machine vision systems lack either the processing speed or the detection accuracy required within such environments. To address these limitations, this thesis presents a customised class of extreme learning machine algorithms intended for use within remote laptop or fog computing settings.

Colour was often observed to be a key visual cue for object detection in pasture scenes. The colour-feature extreme learning machine (CF-ELM) was introduced for image classification and was demonstrated to outperform existing extreme learning machine (ELM) algorithms that did not use colour information for object detection. The CF-ELM exploited the small memory footprint and fast training times of the ELM to provide a real-time classification algorithm with the added benefit of colour information. This allowed the CF-ELM to classify objects within pastoral scenarios in 0.06 to 0.18 seconds with 82% to 96% accuracy. These scenarios included weed detection, cattle detection and farm vehicle detection.

The multiple expert colour-feature extreme learning machine (MEC-ELM) was then introduced to both enhance detection and further reduce processing time. The MEC-ELM used multiple instances of the CF-ELM and a summed area table to produce real-time classification of objects within video frames. Object detection was performed on both quad-copter and surveillance camera video to demonstrate the wide utility of the MEC-ELM algorithm. Detection scenarios included stock monitoring, weed scouting and vehicle tracking, with the MEC-ELM producing 78% to 95% precision and recall at processing times between 0.5 and 2.0 seconds per frame. Performance of the MEC-ELM was compared and contrasted with other suitable machine vision algorithms. The results of this research indicate that the MEC-ELM is a highly competitive algorithm suitable for real-time object detection in video, particularly for agricultural robotics applications.
|
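The abstract refers to two underlying techniques: the extreme learning machine (ELM) that the CF-ELM classifier is built on, and the summed area table the MEC-ELM uses to evaluate image regions quickly. As an illustrative aside, the sketch below shows a generic ELM trained on a simple per-channel colour histogram feature; the histogram feature, hidden-layer size and sigmoid activation are assumptions for illustration only and are not the thesis's exact CF-ELM design.

```python
import numpy as np

def colour_histogram(image, bins=16):
    """Flattened per-channel (R, G, B) histogram as a colour feature vector.
    This feature choice is an illustrative assumption, not the CF-ELM's exact input."""
    feats = []
    for c in range(3):
        hist, _ = np.histogram(image[..., c], bins=bins, range=(0, 256), density=True)
        feats.append(hist)
    return np.concatenate(feats)

class ELM:
    """Single-hidden-layer extreme learning machine: random input weights and biases,
    output weights solved in closed form with a pseudoinverse (the fast ELM training step)."""
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Sigmoid activations of the randomly weighted hidden layer.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        T = np.eye(int(y.max()) + 1)[y]          # one-hot class targets
        self.beta = np.linalg.pinv(H) @ T        # least-squares output weights
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Hypothetical usage:
#   X = np.stack([colour_histogram(img) for img in training_images])
#   clf = ELM(n_hidden=200).fit(X, labels)
#   preds = clf.predict(np.stack([colour_histogram(img) for img in test_images]))
```

The second sketch shows the standard summed area table (integral image) construction, which lets the sum of any rectangular window be computed in constant time regardless of window size; this is the generic technique, not code from the thesis.

```python
import numpy as np

def summed_area_table(gray):
    """Integral image: sat[i, j] holds the sum of gray[:i, :j]."""
    return np.pad(gray.astype(np.float64), ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def window_sum(sat, top, left, height, width):
    """Constant-time sum of the window gray[top:top+height, left:left+width]."""
    b, r = top + height, left + width
    return sat[b, r] - sat[top, r] - sat[b, left] + sat[top, left]
```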
Link | |
Title |
The MEC-ELM and its Application in Robotic Vision for Pastoral Landscapes
|
Type of document |
Thesis Doctoral
|
Entity Type |
Publication
|