Data

WildScenes Dataset

Commonwealth Scientific and Industrial Research Organisation
Vidanapathirana, Kavisha; Knights, Joshua; Hausler, Stephen; Cox, Mark; Ramezani, Milad; Jooste, Jason; Griffiths, Ethan; Shaheer, Shaheer; Sridharan, Sridha; Fookes, Clinton; Moghadam, Peyman

Licence & Rights:

Non-Commercial Licence
CC-BY-NC-SA

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 Licence
https://creativecommons.org/licenses/by-nc-sa/4.0/

Data is accessible online and may be reused in accordance with licence conditions

All Rights (including copyright) CSIRO 2024.

Access:

Open

Accessible for free

Brief description

WildScenes is a large-scale 2D and 3D semantic segmentation dataset containing labelled images and lidar point clouds captured in natural environments. The data was collected from two natural environments in Brisbane, Australia, across multiple revisits. Our release includes 2D images, 2D annotated images, 3D point cloud submaps, and 3D annotated point cloud submaps, alongside accurate 6-DoF poses.
Lineage: The data was collected using a handheld sensor payload consisting of a spinning lidar sensor mounted at an angle of 45 degrees to maximise the field of view, a motor, an encoder, an IMU, and four cameras. For each collected sequence we use the Wildcat SLAM system to produce an accurate 6-DoF estimate of the sensor pose and to process the lidar data into a globally registered map, from which we produce our submaps. The collected images were manually annotated with per-pixel labels, and label transfer using Paintcloud was used to project the 2D annotations into our 3D lidar maps.
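
As a rough illustration of the label-transfer step, the sketch below projects a 3D submap into a single annotated camera image with a pinhole model and assigns each visible point the class of the pixel it lands on. This is a minimal NumPy example, not the Paintcloud tool used for WildScenes; the function name, intrinsic matrix K, camera-to-world pose convention, and ignore label are assumptions for illustration only.

import numpy as np

def transfer_labels(points_world, label_image, K, T_world_cam, ignore_label=255):
    """Assign a semantic label to each 3D point by projecting it into one
    annotated camera image (simplified 2D-to-3D label transfer sketch).

    points_world : (N, 3) array of 3D points in the map frame.
    label_image  : (H, W) array of per-pixel class ids.
    K            : (3, 3) pinhole camera intrinsic matrix (assumed).
    T_world_cam  : (4, 4) camera-to-world pose from the 6-DoF trajectory.
    """
    H, W = label_image.shape

    # Transform points from the map frame into the camera frame.
    T_cam_world = np.linalg.inv(T_world_cam)
    pts_h = np.hstack([points_world, np.ones((points_world.shape[0], 1))])
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]

    labels = np.full(points_world.shape[0], ignore_label, dtype=label_image.dtype)

    # Keep only points in front of the camera.
    in_front = pts_cam[:, 2] > 0.1
    uvw = (K @ pts_cam[in_front].T).T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)

    # Keep only points that project inside the image bounds.
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)

    # Look up the per-pixel class for every valid projection.
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = label_image[uv[valid, 1], uv[valid, 0]]
    return labels

In practice, label transfer across a full sequence would repeat this over many annotated frames and resolve conflicts and occlusions between views; the sketch omits those steps.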

Available: 2024-09-25

Data time period: 2021-06-11 to 2021-12-14

This dataset is part of a larger collection