Multi-source Multimodal Data and Deep Learning for Disaster Response: A Systematic Review

dc.citation.issue: 1
dc.citation.volume: 3
dc.contributor.author: Algiriyage N
dc.contributor.author: Prasanna R
dc.contributor.author: Stock K
dc.contributor.author: Doyle EEH
dc.contributor.author: Johnston D
dc.coverage.spatial: Singapore
dc.date.available: 2022
dc.date.available: 2021-11-11
dc.date.issued: 2022-01
dc.description.abstract: Mechanisms for sharing information in a disaster situation have changed drastically due to new technological innovations throughout the world. The use of social media applications and collaborative technologies for information sharing has become increasingly popular. With these advancements, the amount of data collected increases daily across different modalities, such as text, audio, video, and images. However, to date, practical Disaster Response (DR) activities have mostly depended on textual information, such as situation reports and email content, and the benefit of other media is often not realised. Deep Learning (DL) algorithms have recently demonstrated promising results in extracting knowledge from multiple modalities of data, but the use of DL approaches for DR tasks has thus far mostly been pursued in an academic context. This paper conducts a systematic review of 83 articles to identify the successes, current and future challenges, and opportunities in using DL for DR tasks. Our analysis is centred around the components of learning, a set of aspects that govern the application of Machine Learning (ML) to a given problem domain. A flowchart and guidance for future research are developed as an outcome of the analysis to ensure the benefits of DL for DR activities are realised.
dc.description.publication-status: Published
dc.format.extent: 92 - ?
dc.identifier: https://www.ncbi.nlm.nih.gov/pubmed/34870241
dc.identifier: 971
dc.identifier.citation: SN Comput Sci, 2022, 3 (1), pp. 92 - ?
dc.identifier.doi: 10.1007/s42979-021-00971-4
dc.identifier.eissn: 2661-8907
dc.identifier.elements-id: 450035
dc.identifier.harvested: Massey_Dark
dc.identifier.uri: https://hdl.handle.net/10179/17034
dc.language: eng
dc.publisher: Springer Nature
dc.relation.isPartOf: SN Comput Sci
dc.subject: Deep learning
dc.subject: Disaster management
dc.subject: Disaster response
dc.subject: Literature review
dc.title: Multi-source Multimodal Data and Deep Learning for Disaster Response: A Systematic Review
dc.type: Journal article
pubs.notes: Not known
pubs.organisational-group: /Massey University
pubs.organisational-group: /Massey University/College of Humanities and Social Sciences
pubs.organisational-group: /Massey University/College of Humanities and Social Sciences/Joint Centre for Disaster Research
pubs.organisational-group: /Massey University/College of Sciences
pubs.organisational-group: /Massey University/College of Sciences/School of Mathematical and Computational Sciences
Files
Original bundle
Name: Multi-source Multimodal Data and Deep Learning for Disaster Response A Systematic Review.pdf
Size: 1.81 MB
Format: Adobe Portable Document Format