Please use this identifier to cite or link to this item:
https://hdl.handle.net/1959.11/56752
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Shepley, Andrew Jason | en |
dc.contributor.author | Falzon, Gregory | en |
dc.contributor.author | Kwan, Paul | en |
dc.date.accessioned | 2023-11-27T22:42:05Z | - |
dc.date.available | 2023-11-27T22:42:05Z | - |
dc.date.created | 2021 | - |
dc.date.issued | 2021-10-06 | - |
dc.identifier.uri | https://hdl.handle.net/1959.11/56752 | - |
dc.description | Please contact rune@une.edu.au if you require access to this thesis for the purpose of research or study. | en |
dc.description.abstract | <p>Artificially intelligent computer vision systems are becoming increasingly prevalent in an ever-expanding range of applications, providing greater automation in data-driven tasks, reducing resource expenditure, and enabling new insights to be gained.</p> <p>Underpinning these systems are Deep Convolutional Neural Networks (DCNNs), which learn discriminative features present in data, allowing classification and object detection tasks to be performed. For object detection tasks, which are the focus of this thesis, DCNNs are usually trained by computer scientists on large numbers of domain-specific images, using site-specific and target object features to locate objects in images. Greedy Non-Maxima Suppression (NMS) is used to return one optimal bounding box per object in a given image.</p> <p>Although the capabilities and benefits of DCNNs have heralded a new age of automation, access to and performance of these networks in small projects is often inadequate, inhibiting widespread adoption. Individuals who are not trained in computer science and do not have access to copious quantities of annotated images struggle to train robust object detectors, often resorting to time-consuming manual processing or resource-expensive collaboration with or employment of computer scientists. This is particularly true of ecological image processing tasks, which are characterised by large volumes of complex image data that must be classified and interpreted to enable effective ecological monitoring and management.</p> <p>This thesis aims to facilitate broader adoption of DCNN-based computer vision in ecology and beyond, by improving object detection performance and bridging the technical gap between ecology and computer science. 
We identified poor domain adaptability caused by reliance on large numbers of similar camera trap images in training, inadequate performance of NMS in the task of bounding box retention and removal, and technical barriers to object detector development. Accordingly, a training protocol was developed that leverages high inter- and intra-class variability in training data to enable the development of robust object detectors using relatively small, publicly available image datasets with minimal ‘infusion’ of domain-specific images for optimisation. A novel non-Intersection-over-Union alternative to NMS, dubbed Confluence, is proposed, which uses the normalised Manhattan Distance between confluent candidate bounding boxes to reach a better balance between retention of true positives and removal of false positives. These contributions were brought together in the development of an open-source desktop application dubbed U-Infuse, which allows those not trained in computer science to use the location invariance training protocol and Confluence to develop and use their own high-performance custom object detectors. Confluence was evaluated on standardised object detection benchmarks including MS COCO and PASCAL VOC, using multiple DCNN architectures, achieving state-of-the-art results and validating its use as a replacement for NMS in any object detection application. The proposed training protocol was extensively evaluated out-of-sample and in-sample on a range of challenging datasets, including Snapshot Serengeti, Wildlife Conservation Society datasets and Camera CATalogue, demonstrating its robustness in single-class and multi-class object detection for any species. 
Finally, U-Infuse was evaluated in a real-life case study: the task of feral cat detection in camera trap data collected from the New England Gorges.</p> <p>This thesis has succeeded in its aims by advancing the democratisation of artificially intelligent object detection: it delivers an open-source, freely available app that leverages the power of the location invariance training method and the optimal performance of Confluence, allowing non-computer scientists to develop and deploy their own object detectors using their own data, on their own devices. Furthermore, the findings of this research indicate that broad adoption of Confluence would have extensive benefits in applications ranging from autonomous vehicles to aerial surveying and crowd counting.</p> | en |
dc.language | en | en |
dc.publisher | University of New England | - |
dc.relation.uri | https://hdl.handle.net/1959.11/56753 | en |
dc.title | Promoting Usage of Deep Learning Object Detection in Ecology by Improving Performance and Accessibility | en |
dc.type | Thesis Doctoral | en |
local.contributor.firstname | Andrew Jason | en |
local.contributor.firstname | Gregory | en |
local.contributor.firstname | Paul | en |
local.subject.for2008 | 070702 Veterinary Anatomy and Physiology | en |
local.subject.for2008 | 080104 Computer Vision | en |
local.subject.for2008 | 080108 Neural, Evolutionary and Fuzzy Computation | en |
local.subject.seo2008 | 830301 Beef Cattle | en |
local.subject.seo2008 | 830311 Sheep - Wool | en |
local.subject.seo2008 | 890299 Computer Software and Services not elsewhere classified | en |
local.hos.email | st-sabl@une.edu.au | en |
local.thesis.passed | Passed | en |
local.thesis.degreelevel | Doctoral | en |
local.thesis.degreename | Doctor of Philosophy - PhD | en |
local.contributor.grantor | University of New England | - |
local.profile.school | School of Science and Technology | en |
local.profile.school | School of Science and Technology | en |
local.profile.school | School of Science and Technology | en |
local.profile.email | asheple2@une.edu.au | en |
local.profile.email | gfalzon2@une.edu.au | en |
local.profile.email | wkwan2@une.edu.au | en |
local.output.category | T2 | en |
local.record.place | au | en |
local.record.institution | University of New England | en |
local.publisher.place | Armidale, Australia | - |
local.contributor.lastname | Shepley | en |
local.contributor.lastname | Falzon | en |
local.contributor.lastname | Kwan | en |
dc.identifier.staff | une-id:asheple2 | en |
dc.identifier.staff | une-id:gfalzon2 | en |
dc.identifier.staff | une-id:wkwan2 | en |
local.profile.orcid | 0000-0001-7511-4967 | en |
local.profile.orcid | 0000-0002-1989-9357 | en |
local.profile.role | author | en |
local.profile.role | supervisor | en |
local.profile.role | supervisor | en |
local.identifier.unepublicationid | une:1959.11/56752 | en |
dc.identifier.academiclevel | Student | en |
dc.identifier.academiclevel | Academic | en |
dc.identifier.academiclevel | Academic | en |
local.thesis.bypublication | Yes | en |
local.title.maintitle | Promoting Usage of Deep Learning Object Detection in Ecology by Improving Performance and Accessibility | en |
local.output.categorydescription | T2 Thesis - Doctorate by Research | en |
local.relation.doi | 10.1002/ece3.7344 | en |
local.relation.doi | 10.48550/arXiv.2012.00257 | en |
local.relation.doi | 10.3390/s21082611 | en |
local.school.graduation | School of Science & Technology | en |
local.thesis.borndigital | Yes | - |
local.search.author | Shepley, Andrew Jason | en |
local.search.supervisor | Falzon, Gregory | en |
local.search.supervisor | Kwan, Paul | en |
local.uneassociation | Yes | en |
local.atsiresearch | No | en |
local.sensitive.cultural | No | en |
local.year.conferred | 2021 | - |
local.profile.affiliationtype | UNE Affiliation | en |
local.profile.affiliationtype | UNE Affiliation | en |
local.profile.affiliationtype | UNE Affiliation | en |
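The abstract contrasts greedy IoU-based NMS with Confluence's use of a normalised Manhattan Distance between confluent candidate boxes. As an illustrative sketch only — greedy NMS below is the standard algorithm, but the `manhattan_proximity` function, its names, and its min-max normalisation over the box pair are assumptions, not the thesis's actual Confluence formulation — the two ideas might look like:

```python
import numpy as np

def iou(a, b):
    """Intersection over Union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def greedy_nms(boxes, scores, iou_thresh=0.5):
    """Standard greedy NMS: keep the highest-scoring box, discard any
    remaining box whose IoU with it exceeds the threshold, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        order = np.array([j for j in order[1:]
                          if iou(boxes[i], boxes[j]) < iou_thresh])
    return keep

def manhattan_proximity(a, b):
    """Normalised Manhattan Distance between two [x1, y1, x2, y2] boxes.
    Coordinates are min-max scaled over the pair (an assumed
    normalisation; the thesis defines Confluence's exact scheme).
    0 means identical boxes; smaller values mean more confluent pairs."""
    xs = [a[0], a[2], b[0], b[2]]
    ys = [a[1], a[3], b[1], b[3]]
    def scale(v, lo, hi):
        return (v - lo) / (hi - lo) if hi > lo else 0.0
    na = [scale(a[0], min(xs), max(xs)), scale(a[1], min(ys), max(ys)),
          scale(a[2], min(xs), max(xs)), scale(a[3], min(ys), max(ys))]
    nb = [scale(b[0], min(xs), max(xs)), scale(b[1], min(ys), max(ys)),
          scale(b[2], min(xs), max(xs)), scale(b[3], min(ys), max(ys))]
    return sum(abs(p - q) for p, q in zip(na, nb))
```

The intuition the abstract describes: greedy NMS discards boxes purely on IoU overlap with the top-scoring box, whereas a confluence-style measure scores how tightly a cluster of candidate boxes agrees, which can better separate true positives from false positives.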
Appears in Collections: | School of Science and Technology Thesis Doctoral |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
Items in Research UNE are protected by copyright, with all rights reserved, unless otherwise indicated.