Autonomous Fingerprinting and Large Experimental Data Set for Visible Light Positioning

dc.citation.issue: 9
dc.citation.volume: 21
dc.contributor.author: Glass T
dc.contributor.author: Alam F
dc.contributor.author: Legg M
dc.contributor.author: Noble F
dc.date.available: 2021-05
dc.date.available: 2021-05-04
dc.date.issued: 2021-05-08
dc.description.abstract: This paper presents an autonomous method of collecting data for Visible Light Positioning (VLP) and a comprehensive investigation of VLP using a large set of experimental data. Received Signal Strength (RSS) data are efficiently collected using a novel method that utilizes consumer-grade Virtual Reality (VR) tracking for accurate ground truth recording. An investigation into the accuracy of the ground truth system showed median and 90th percentile errors of 4.24 and 7.35 mm, respectively. Co-locating a VR tracker with a photodiode-equipped VLP receiver on a mobile robotic platform allows fingerprinting at a scale and accuracy that has not been possible with traditional manual collection methods. RSS data at 7344 locations within a 6.3 × 6.9 m test space fitted with 11 VLP luminaires are collected and have been made available to researchers. The quality and volume of the data allow for a robust study of Machine Learning (ML)- and channel-model-based positioning using visible light. Among the ML-based techniques, ridge regression is found to be the most accurate, outperforming Weighted k-Nearest Neighbor, Multilayer Perceptron, and random forest, among others. Model-based positioning is more accurate than ML techniques when only a small data set is available for calibration and training. However, when a large data set is available for training, ML-based positioning outperforms its model-based counterparts in terms of localization accuracy.
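As a rough illustration of the fingerprint-based approach compared in the abstract, the sketch below fits a ridge regression from RSS fingerprint vectors to (x, y) position and reports median and 90th percentile localization errors. The synthetic RSS model, luminaire placement, and hyperparameters are illustrative assumptions only; this is not the authors' code and does not use the published data set.

```python
# Minimal sketch: ridge-regression fingerprint positioning on synthetic RSS data.
# Assumptions (not from the paper): inverse-square RSS falloff, random luminaire
# layout, alpha=1.0, 80/20 train/test split.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# One RSS value per luminaire (11 in the paper's 6.3 x 6.9 m test space),
# measured at each ground-truth (x, y) location (7344 in the paper).
n_locations, n_luminaires = 7344, 11
positions = rng.uniform(low=[0.0, 0.0], high=[6.3, 6.9], size=(n_locations, 2))

# Synthetic stand-in for measured photodiode RSS: inverse-square falloff from
# each luminaire (assumed ~2 m above the receiver plane) plus noise.
luminaire_xy = rng.uniform(low=[0.0, 0.0], high=[6.3, 6.9], size=(n_luminaires, 2))
d2 = ((positions[:, None, :] - luminaire_xy[None, :, :]) ** 2).sum(axis=-1) + 4.0
rss = 1.0 / d2 + rng.normal(scale=1e-3, size=(n_locations, n_luminaires))

X_train, X_test, y_train, y_test = train_test_split(
    rss, positions, test_size=0.2, random_state=0)

# Ridge regression maps the RSS fingerprint vector directly to (x, y).
model = Ridge(alpha=1.0).fit(X_train, y_train)
pred = model.predict(X_test)

err = np.linalg.norm(pred - y_test, axis=1)
print(f"median error: {np.median(err):.3f} m, "
      f"90th percentile: {np.percentile(err, 90):.3f} m")
```

With the real data set, the measured RSS vectors and VR-tracked ground-truth positions would replace the synthetic `rss` and `positions` arrays; the regression step itself is unchanged.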
dc.description.publication-status: Published
dc.identifier: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000650778600001&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=c5bb3b2499afac691c2e3c1a83ef6fef
dc.identifier: ARTN 3256
dc.identifier.citation: SENSORS, 2021, 21 (9)
dc.identifier.doi: 10.3390/s21093256
dc.identifier.eissn: 1424-8220
dc.identifier.elements-id: 444970
dc.identifier.harvested: Massey_Dark
dc.identifier.uri: https://hdl.handle.net/10179/16372
dc.publisher: MDPI (Basel, Switzerland)
dc.relation.isPartOf: SENSORS
dc.relation.uri: https://www.mdpi.com/1424-8220/21/9/3256/pdf
dc.subject: fingerprint
dc.subject: Indoor Localization
dc.subject: Indoor Positioning Systems (IPS)
dc.subject: Virtual Reality (VR)
dc.subject: ground truth
dc.subject: Visible Light Positioning
dc.subject.anzsrc: 0301 Analytical Chemistry
dc.subject.anzsrc: 0805 Distributed Computing
dc.subject.anzsrc: 0906 Electrical and Electronic Engineering
dc.subject.anzsrc: 0502 Environmental Science and Management
dc.subject.anzsrc: 0602 Ecology
dc.title: Autonomous Fingerprinting and Large Experimental Data Set for Visible Light Positioning
dc.type: Journal article
pubs.notes: Not known
pubs.organisational-group: /Massey University
pubs.organisational-group: /Massey University/College of Sciences
pubs.organisational-group: /Massey University/College of Sciences/School of Food and Advanced Technology