Please use this identifier to cite or link to this item:
https://hdl.handle.net/1959.11/45476
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Shepley, Andrew | en |
dc.contributor.author | Falzon, Greg | en |
dc.contributor.author | Meek, Paul | en |
dc.contributor.author | Kwan, Paul | en |
dc.date.accessioned | 2022-02-28T22:08:52Z | - |
dc.date.available | 2022-02-28T22:08:52Z | - |
dc.date.issued | 2021-05 | - |
dc.identifier.citation | Ecology and Evolution, 11(9), pp. 4494-4506 | en |
dc.identifier.issn | 2045-7758 | en |
dc.identifier.uri | https://hdl.handle.net/1959.11/45476 | - |
dc.description.abstract | 1. A time-consuming challenge faced by camera trap practitioners is the extraction of meaningful data from images to inform ecological management. An increasingly popular solution is automated image classification software. However, most solutions are not sufficiently robust to be deployed on a large scale because they lack location invariance when models are transferred between sites. This prevents optimal use of ecological data, resulting in significant expenditure of time and resources to annotate and retrain deep learning models. 2. We present a method ecologists can use to develop optimized location invariant camera trap object detectors by (a) evaluating publicly available image datasets characterized by high intradataset variability in training deep learning models for camera trap object detection and (b) using small subsets of camera trap images to optimize models for high-accuracy, domain-specific applications. 3. We collected and annotated three datasets of images of striped hyena, rhinoceros, and pigs from the image-sharing websites FlickR and iNaturalist (FiN) to train three object detection models. We compared the performance of these models to that of three models trained on the Wildlife Conservation Society and Camera CATalogue datasets when tested on out-of-sample Snapshot Serengeti datasets. We then increased FiN model robustness by infusing small subsets of camera trap images into training. 4. In all experiments, the mean Average Precision (mAP) of the FiN-trained models was significantly higher (82.33%-88.59%) than that of the models trained only on camera trap datasets (38.5%-66.74%). Infusion further improved mAP by 1.78%-32.08%. 5. Ecologists can use FiN images to train deep learning object detection solutions for camera trap image processing, yielding location invariant, robust, out-of-the-box software. Models can be further optimized by infusing 5%-10% camera trap images into the training data. This would allow AI technologies to be deployed on a large scale in ecological applications. Datasets and code related to this study are open source and available at https://doi.org/10.5061/dryad.1c59zw3tx. | en |
dc.language | en | en |
dc.publisher | John Wiley & Sons Ltd | en |
dc.relation.ispartof | Ecology and Evolution | en |
dc.rights | Attribution 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | * |
dc.title | Automated location invariant animal detection in camera trap images using publicly available data sources | en |
dc.type | Journal Article | en |
dc.identifier.doi | 10.1002/ece3.7344 | en |
dc.identifier.pmid | 33976825 | en |
dcterms.accessRights | UNE Green | en |
local.contributor.firstname | Andrew | en |
local.contributor.firstname | Greg | en |
local.contributor.firstname | Paul | en |
local.contributor.firstname | Paul | en |
local.profile.school | School of Science and Technology | en |
local.profile.school | School of Science and Technology | en |
local.profile.school | School of Environmental and Rural Science | en |
local.profile.email | asheple2@une.edu.au | en |
local.profile.email | gfalzon2@une.edu.au | en |
local.profile.email | pmeek5@une.edu.au | en |
local.output.category | C1 | en |
local.record.place | au | en |
local.record.institution | University of New England | en |
local.publisher.place | United Kingdom | en |
local.format.startpage | 4494 | en |
local.format.endpage | 4506 | en |
local.identifier.scopusid | 85102255278 | en |
local.peerreviewed | Yes | en |
local.identifier.volume | 11 | en |
local.identifier.issue | 9 | en |
local.access.fulltext | Yes | en |
local.contributor.lastname | Shepley | en |
local.contributor.lastname | Falzon | en |
local.contributor.lastname | Meek | en |
local.contributor.lastname | Kwan | en |
dc.identifier.staff | une-id:asheple2 | en |
dc.identifier.staff | une-id:gfalzon2 | en |
dc.identifier.staff | une-id:pmeek5 | en |
local.profile.orcid | 0000-0001-7511-4967 | en |
local.profile.orcid | 0000-0002-1989-9357 | en |
local.profile.role | author | en |
local.profile.role | author | en |
local.profile.role | author | en |
local.profile.role | author | en |
local.identifier.unepublicationid | une:1959.11/45476 | en |
local.date.onlineversion | 2021-03-10 | - |
dc.identifier.academiclevel | Academic | en |
dc.identifier.academiclevel | Academic | en |
dc.identifier.academiclevel | Academic | en |
dc.identifier.academiclevel | Academic | en |
local.title.maintitle | Automated location invariant animal detection in camera trap images using publicly available data sources | en |
local.relation.fundingsourcenote | Andrew Shepley is supported by an Australian Postgraduate Award. We would like to thank the Australian Department of Agriculture and Water Resources, the Centre for Invasive Species Solutions, NSW Environmental Trust, University of New England, and the NSW Department of Primary Industries for supporting this project. | en |
local.output.categorydescription | C1 Refereed Article in a Scholarly Journal | en |
local.search.author | Shepley, Andrew | en |
local.search.author | Falzon, Greg | en |
local.search.author | Meek, Paul | en |
local.search.author | Kwan, Paul | en |
local.open.fileurl | https://rune.une.edu.au/web/retrieve/5dadc2a5-0f6a-4c93-9b87-27bc299f2d7b | en |
local.uneassociation | Yes | en |
local.atsiresearch | No | en |
local.sensitive.cultural | No | en |
local.identifier.wosid | 000626984400001 | en |
local.year.available | 2021 | en |
local.year.published | 2021 | en |
local.fileurl.open | https://rune.une.edu.au/web/retrieve/5dadc2a5-0f6a-4c93-9b87-27bc299f2d7b | en |
local.fileurl.openpublished | https://rune.une.edu.au/web/retrieve/5dadc2a5-0f6a-4c93-9b87-27bc299f2d7b | en |
local.subject.for2020 | 460304 Computer vision | en |
local.subject.for2020 | 460202 Autonomous agents and multiagent systems | en |
local.subject.for2020 | 460103 Applications in life sciences | en |
local.subject.seo2020 | 220402 Applied computing | en |
local.subject.seo2020 | 220403 Artificial intelligence | en |
local.subject.seo2020 | 220404 Computer systems | en |
Appears in Collections: Journal Article; School of Environmental and Rural Science; School of Science and Technology
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
openpublished/AutomatedShepleyFalzonMeek2021JournalArticle.pdf | Published Version | 1.28 MB | Adobe PDF |
Scopus™ Citations: 19 (checked on Feb 15, 2025)
Page view(s): 1,110 (checked on Jun 18, 2023)
Download(s): 6 (checked on Jun 18, 2023)
This item is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).