Please use this identifier to cite or link to this item: https://hdl.handle.net/1959.11/45542
Title: U-Infuse: Democratization of Customizable Deep Learning for Object Detection
Contributor(s): Shepley, Andrew (author); Falzon, Greg (author); Lawson, Christopher (author); Meek, Paul (author); Kwan, Paul (author)
Publication Date: 2021-04
Open Access: Yes
DOI: 10.3390/s21082611
Handle Link: https://hdl.handle.net/1959.11/45542
Related Research Outputs: https://github.com/u-infuse/u-infuse
Abstract: 

Image data is one of the primary sources of ecological data used in biodiversity conservation and management worldwide. However, classifying and interpreting large numbers of images is time- and resource-intensive, particularly in the context of camera trapping. Deep learning models have been used to achieve this task but are often not suited to specific applications due to their inability to generalise to new environments and their inconsistent performance. Models need to be developed for specific species cohorts and environments, but the technical skills required to achieve this are a key barrier to the accessibility of this technology to ecologists. Thus, there is a strong need to democratize access to deep learning technologies by providing an easy-to-use software application that allows non-technical users to train custom object detectors. U-Infuse addresses this issue by providing ecologists with the ability to train customised models using publicly available images and/or their own images, without specific technical expertise. Auto-annotation and annotation-editing functionalities minimize the burden of manually annotating and pre-processing large numbers of images. U-Infuse is a free and open-source software solution that supports both multi-class and single-class training and object detection, allowing ecologists to access deep learning technologies usually available only to computer scientists, on their own device, customised for their application, without sharing intellectual property or sensitive data. It provides ecological practitioners with the ability to (i) easily achieve object detection within a user-friendly GUI, generating a species distribution report and other useful statistics, (ii) custom-train deep learning models using publicly available and custom training data, and (iii) achieve supervised auto-annotation of images for further training, with the ability to edit annotations to ensure quality datasets. Broad adoption of U-Infuse by ecological practitioners will improve ecological image analysis and processing by allowing significantly more image data to be processed with minimal expenditure of time and resources, particularly for camera trap images. Ease of training and the use of transfer learning mean that domain-specific models can be trained rapidly and updated frequently without the need for computer science expertise or data sharing, protecting intellectual property and privacy.
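The custom-training workflow described in the abstract (adapting a pretrained object detector to a user's own images via transfer learning) can be illustrated with a minimal sketch. The code below is not U-Infuse's implementation or interface; it assumes the torchvision library, a Faster R-CNN model pretrained on COCO, and a hypothetical annotated camera-trap data loader, purely to show what fine-tuning a detector on domain-specific data involves.

# Illustrative sketch only: transfer-learning fine-tuning of a pretrained
# object detector on a small custom dataset. Model choice, class count and
# the data loader are assumptions for illustration, not U-Infuse internals.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # hypothetical: background + two species of interest

# Start from a detector pretrained on a large generic dataset and replace its
# classification head, so only a small amount of domain-specific data is needed.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=0.005, momentum=0.9, weight_decay=0.0005,
)

def train_one_epoch(model, data_loader, device="cpu"):
    """One pass over an annotated dataset (images plus bounding-box targets)."""
    model.train()
    model.to(device)
    for images, targets in data_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)   # detector returns component losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

In U-Infuse itself, these steps are driven from the GUI rather than written by the user; the sketch is only meant to convey the underlying transfer-learning idea, in which a detector pretrained on a large generic dataset is adapted to a small, domain-specific species cohort, and the resulting model can then be used to auto-annotate further images for review.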

Publication Type: Journal Article
Source of Publication: Sensors, 21(8), pp. 1-17
Publisher: MDPI AG
Place of Publication: Switzerland
ISSN: 1424-8220
Fields of Research (FoR) 2020: 460304 Computer vision
460202 Autonomous agents and multiagent systems
460103 Applications in life sciences
Socio-Economic Objective (SEO) 2020: 220402 Applied computing
220403 Artificial intelligence
220404 Computer systems
Peer Reviewed: Yes
HERDC Category Description: C1 Refereed Article in a Scholarly Journal
Appears in Collections: Journal Article
School of Environmental and Rural Science
School of Science and Technology

Files in This Item:
2 files
File: openpublished/UInfuseShepleyFalzonLawsonMeek2021JournalArticle.pdf
Description: Published Version
Size: 63.92 MB
Format: Adobe PDF
This item is licensed under a Creative Commons License.