
Datasets created by members of the lab.

Please follow the links below.

Each entry below lists the dataset name, a description, and its authors.

City sunset drive


Description: GoPro vision-only dataset gathered on an approximately 10 km drive (one way) into and out of the Brisbane metropolitan area in the late afternoon and evening. It contains plenty of varied traffic and interesting pedestrian situations. The map shows the inbound route (part 1); the return route is approximately the reverse, but includes some extra suburban streets at the end.

Settings: 1080p, 30 fps, wide FOV setting on a GoPro 4 Silver.

Paper reference: If you use this dataset, please cite the below paper:

Authors: Michael Milford

Night time drive

Description: Night-time drive of roughly 13 km in Brisbane with a Sony A7s camera mounted on the roof. A mixture of highway and suburban driving, with some light traffic and stop-start driving at traffic lights.

Settings: 1080p, 25 fps.

Paper reference: If you use this dataset, please cite the below paper:

Authors: Michael Milford

Gold Coast drive

Description: GoPro vision-only dataset gathered along an approximately 87 km drive from Brisbane to the Gold Coast in sunny weather (no ground truth, but a reference trajectory is provided in the accompanying image). It contains a wide variety of traffic conditions, and some interesting pedestrian and dangerous-driving situations were captured on camera.

Settings: 1080p, 30 fps, wide FOV setting on a GoPro 4 Silver.

Paper reference: If you use this dataset, please cite the below paper:

Authors: Michael Milford

Day-night vacuum-cleaner robot and lawn datasets

Vision datasets gathered within a townhouse (Indooroopilly, Brisbane) and a suburban backyard (Gaythorne, Brisbane) in varying conditions over the same area: one set during the day and one at night. The dataset includes all the extracted frames, as well as a text document containing their ground-truthed locations; a minimal frame-loading and comparison sketch is given after this entry. The dataset was used in a paper accepted to ICRA 2016:

J. Mount, M. Milford, "2D Vision Place Recognition for Domestic Service Robots at Night", in IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 2016.

The code used to compare images and perform place recognition is also contained within the files.

If you use this dataset, or the provided code, please cite the above paper.

Authors: James Mount and Michael Milford
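As a quick illustration of working with the extracted frames and the ground-truth text file, here is a minimal Python sketch. The folder layout, file names and ground-truth line format are assumptions made for the example, not the documented structure of the download, and the comparison is a simple sum-of-absolute-differences over downsampled greyscale frames rather than the exact method used in the provided code.

# Minimal sketch: load one day frame and one night frame and compare them.
# File names and the ground-truth format are hypothetical; adapt them to
# the layout of the downloaded archive.
import numpy as np
from PIL import Image

def load_frame(path, size=(64, 48)):
    """Load an image, convert it to greyscale and downsample it."""
    img = Image.open(path).convert("L").resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

def frame_difference(a, b):
    """Mean absolute pixel difference between two equally sized frames."""
    return float(np.mean(np.abs(a - b)))

def load_ground_truth(path):
    """Hypothetical ground-truth format: one 'frame_id x y' line per frame."""
    locations = {}
    with open(path) as f:
        for line in f:
            frame_id, x, y = line.split()
            locations[frame_id] = (float(x), float(y))
    return locations

if __name__ == "__main__":
    day = load_frame("day/frame_0001.png")      # hypothetical file names
    night = load_frame("night/frame_0001.png")
    print("difference:", frame_difference(day, night))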

Alderley Day/Night Dataset

A vision dataset gathered from a car driven around Alderley, Queensland in two different conditions for the same route: one on a sunny day and one during a rainy night. The dataset includes extracted frames from the original .avi video files, as well as manually ground-truthed frame correspondences. The dataset was first used in the ICRA2012 Best Robot Vision Paper:

M. Milford, G. Wyeth, "SeqSLAM: Visual route-based navigation for sunny summer days and stormy winter nights", in IEEE International Conference on Robotics and Automation, St Paul, United States, 2012.

If you use this dataset, please cite the above paper. BibTeX, EndNote, RefMan and CSV citation options are also available. A small sketch of scoring matches against the frame correspondences follows this entry.

Authors: Michael Milford, Obadiah Lam, Gordon Wyeth, Arren Glover
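The manually ground-truthed frame correspondences make it straightforward to score a place-recognition result. The sketch below assumes, purely for illustration, that the correspondences are stored as a two-column CSV mapping each day-traverse frame index to its matching night-traverse frame index; the file name and format in the actual download may differ.

# Minimal sketch: score predicted day->night frame matches against the
# ground-truthed correspondences. The two-column CSV format, the file name
# and the tolerance value are assumptions, not part of the dataset spec.
import csv

def load_correspondences(path):
    """Return {day_frame: night_frame} from a two-column CSV file."""
    mapping = {}
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) >= 2:
                mapping[int(row[0])] = int(row[1])
    return mapping

def match_precision(predictions, ground_truth, tolerance=10):
    """Fraction of predicted matches within `tolerance` frames of ground truth."""
    if not predictions:
        return 0.0
    correct = sum(
        1 for day, night in predictions.items()
        if day in ground_truth and abs(ground_truth[day] - night) <= tolerance
    )
    return correct / len(predictions)

if __name__ == "__main__":
    gt = load_correspondences("frame_matches.csv")  # hypothetical file name
    perfect = dict(list(gt.items())[:100])          # pretend predictions
    print("precision:", match_precision(perfect, gt))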

Kagaru Airborne Vision Dataset

A vision dataset gathered from a radio-controlled aircraft flown at Kagaru, Queensland, Australia on 31/08/10. The data consists of visual data from a pair of downward-facing cameras, ground-truth translation and orientation from an Xsens MTi-G INS/GPS, and additional information from a USB NMEA GPS. The dataset traverses farmland and includes views of grass, an airstrip, roads, trees, ponds, parked aircraft and buildings.

Authors: Michael Warren, Michael Shiel, Ben Upcroft

OpenRATSLAM Datasets

RatSLAM is a robot navigation system based on models of the rodent brain.

Authors: Michael Milford, David Ball

UQ St Lucia Vision Dataset

A vision dataset gathered from a car driven on a 9.5 km circuit around the University of Queensland's St Lucia campus on 15/12/10. The data consists of visual data from a calibrated stereo pair, ground-truth translation and orientation from an Xsens MTi-G INS/GPS, and additional information from a USB NMEA GPS. The dataset traverses local roads and encounters a number of varying scenarios, including roadworks, speed bumps, bright scenes, dark scenes, reverse traverses, a number of loop-closure events, multi-lane roads, roundabouts and speeds of up to 60 km/h. A sketch of converting the GPS ground truth to along-route distance is included after this entry.

Authors: Michael Warren, Michael Shiel, Ben Upcroft
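Because the INS/GPS stream provides latitude/longitude ground truth, a common first step is converting it into cumulative along-route distance. The sketch below does this with the haversine formula on a hard-coded, illustrative trace; in practice the coordinates would be parsed from the INS/GPS or NMEA logs supplied with the dataset.

# Minimal sketch: cumulative along-route distance from a lat/lon trace.
# The trace here is hard-coded and illustrative only.
import math

EARTH_RADIUS_M = 6371000.0

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points (degrees)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def cumulative_distance(trace):
    """Running distance along a list of (lat, lon) tuples."""
    total, out = 0.0, [0.0]
    for (lat1, lon1), (lat2, lon2) in zip(trace, trace[1:]):
        total += haversine(lat1, lon1, lat2, lon2)
        out.append(total)
    return out

if __name__ == "__main__":
    # A few nearby points around the St Lucia campus (illustrative only).
    trace = [(-27.4975, 153.0137), (-27.4980, 153.0150), (-27.4990, 153.0160)]
    print(cumulative_distance(trace))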

Indoor Level 7 S-Block Dataset

A vision dataset taken on level 7 of S Block at QUT's Gardens Point campus. The data contains stereo images, laser data and wheel odometry, in addition to secondary data such as camera calibrations and transforms between sensors. The data was collected in a single continuous run over the level with the Guiabot platform under manual control.

Authors: Timothy Morris, Liz Murphy

St Lucia Multiple Times of Day

A vision dataset of a single route through the suburb of St Lucia, Queensland, Australia. The visual data was collected with a forward facing webcam attached to the roof of a car. The route was traversed at five different times during the day to capture the difference in appearance between early morning and late afternoon. The route was traversed again, another five times, two weeks later for a total of ten datasets. GPS data is included for each dataset.


Authors: Will Maddern, Arren Glover

Day and Night with Lateral Pose Change

Two vision datasets of a single route through the Gardens Point campus of the Queensland University of Technology and along the Brisbane River, Brisbane, Australia. One route is traversed on the left-hand side of the path during the day and the other on the right-hand side of the path at night, to capture both pose and condition change.

Authors: Arren Glover

Fish Image Dataset

A large-scale fish dataset used for fine-grained image classification. The dataset currently contains 3960 real-world images collected from 468 fish species. A sketch of indexing the images by species for a train/test split follows this entry.


Authors: Zongyuan Ge, Christopher McCool, Peter Corke
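For fine-grained classification experiments, a per-species index and a stratified train/test split are usually the first step. The sketch below assumes, hypothetically, a one-folder-per-species layout under a root directory called fish_dataset; the released archive may be organised differently.

# Minimal sketch: build a species -> image list index and a stratified
# train/test split. The one-folder-per-species layout is an assumption.
import os
import random

def index_by_species(root):
    """Map each species folder name to the image paths it contains."""
    index = {}
    for species in sorted(os.listdir(root)):
        folder = os.path.join(root, species)
        if os.path.isdir(folder):
            index[species] = [
                os.path.join(folder, f)
                for f in sorted(os.listdir(folder))
                if f.lower().endswith((".jpg", ".jpeg", ".png"))
            ]
    return index

def stratified_split(index, test_fraction=0.2, seed=0):
    """Hold out a fraction of each species for testing."""
    rng = random.Random(seed)
    train, test = [], []
    for species, paths in index.items():
        paths = paths[:]
        rng.shuffle(paths)
        n_test = max(1, int(len(paths) * test_fraction))
        test += [(p, species) for p in paths[:n_test]]
        train += [(p, species) for p in paths[n_test:]]
    return train, test

if __name__ == "__main__":
    idx = index_by_species("fish_dataset")  # hypothetical root folder
    train, test = stratified_split(idx)
    print(len(idx), "species,", len(train), "train,", len(test), "test images")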

KITTI Semantic Labels

41 labelled images from the KITTI datasets.

Authors: Ben Upcroft

2014 Multi-Lane Road Sideways-Camera Datasets

Vehicular road datasets consisting of day-night imagery from multiple passes in different lanes of four-lane highway and suburban roads.

Authors: Edward Pepperell, Peter Corke, Michael Milford

IJRR2015 CBD and Highway Datasets

The paper is currently in press and expected to be published in 2016. Please cite the following paper if you use these datasets (use the correct year once it has been published):

Pepperell, E., Corke, P. & Milford, M. (in press). Routed Roads: Probabilistic Vision-Based Place Recognition for Changing Conditions, Split Streets and Varied Viewpoints. The International Journal of Robotics Research (IJRR)

Links to datasets can be found here:

CBD dataset:
https://mega.co.nz/#F!FEM2zBzb!D72oxkUG2jDhaIDxsig1iQ

Highway dataset:
https://mega.co.nz/#F!xRsxCZ4Y!s1Lq4KmtmZfR5MLBLw4a2g

Authors: Edward Pepperell, Peter Corke, Michael Milford

Multimodal Rock Surface Image Dataset

Presented with the permission of NASA's Jet Propulsion Laboratory (JPL) in:

J. Sergeant, G. Doran, D. R. Thompson, C. Lehnert, A. Allwood, B. Upcroft, M. Milford, "Towards Multimodal and Condition-Invariant Vision-based Registration for Robot Positioning on Changing Surfaces," Proceedings of the Australasian Conference on Robotics and Automation, 2016.

Further referenced in:

J. Sergeant, G. Doran, D. R. Thompson, C. Lehnert, A. Allwood, B. Upcroft, M. Milford, "Appearance-Invariant Surface Registration for Robot Positioning," International Conference on Robotics and Automation 2017 [under review], 2017.

Associated code for the above papers can be obtained at the following repository:

https://github.com/jamessergeant/seqreg_tpp.git

Authors: James Sergeant, Gary Doran (JPL), David R. Thompson, Chris Lehnert, Alison Allwood, Ben Upcroft, Michael Milford

Trip Hazards on a Construction Site

Contains RGB, depth and HHA images of a construction site with the trip hazards labelled.

The dataset spans 2,000 m² of construction site over four floors and contains ~629 trip hazards. A sketch of loading aligned RGB, depth and HHA frames is given below.

https://cloudstor.aarnet.edu.au/plus/index.php/s/kVAh7G8V4mwdtp4

Presented in a paper under review:

McMahon, S., Sünderhauf, N., Upcroft, B. & Milford, M. (2017). Trip Hazard Detection on Construction Sites Using Colour and Depth Information. Submitted to the International Conference on Intelligent Robots and Systems (IROS) with RA-L option, 2017.
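A natural way to consume the RGB, depth and HHA images is as an aligned multi-channel input to a classifier or segmentation network. The sketch below stacks the three modalities for a single frame; the folder names and file naming scheme are assumptions for illustration, not the documented layout of the archive.

# Minimal sketch: load aligned RGB, depth and HHA images for one frame and
# stack them into a single multi-channel array. Folder and file names are
# hypothetical; check the downloaded archive for the real layout.
import numpy as np
from PIL import Image

def load_rgb(path):
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0

def load_depth(path):
    # Depth stored as a single-channel image; units depend on the sensor export.
    return np.asarray(Image.open(path), dtype=np.float32)[..., np.newaxis]

def load_hha(path):
    # HHA: horizontal disparity, height above ground, angle with gravity,
    # encoded as a 3-channel image.
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0

def load_frame(frame_id, root="trip_hazards"):
    rgb = load_rgb(f"{root}/rgb/{frame_id}.png")
    depth = load_depth(f"{root}/depth/{frame_id}.png")
    hha = load_hha(f"{root}/hha/{frame_id}.png")
    # 3 (RGB) + 1 (depth) + 3 (HHA) = 7 channels, height x width x 7.
    return np.concatenate([rgb, depth, hha], axis=-1)

if __name__ == "__main__":
    frame = load_frame("frame_0001")
    print(frame.shape)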


Raw Image Low-Light Object Dataset

For 28 objects (22 within the ImageNet class set and 6 within the PASCAL VOC class set), a set of raw images (DNG format) has been obtained under a variety of lighting conditions (1-40 lx), ISO settings (3200-409600) and exposure times (1/8000-1/10 s), for comparing the influence of demosaicing techniques on feature point detectors and CNNs in low light and in the presence of noise. Each object set has a reference image captured at ~380 lx. All images were captured with a Sony α7s in a dark room with controlled lighting. A sketch of demosaicing one of the DNG files follows this entry.

Presented in:

D. Richards, J. Sergeant, M. Milford, P. Corke, "Seeing in the Dark: the Demosaicing Difference", IEEE Conference on Computer Vision and Pattern Recognition [under review], 2017.

Authors: Dan Richards, James Sergeant, Michael Milford, Peter Corke
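To compare demosaicing techniques on the DNG files, a raw-processing library such as rawpy (a LibRaw wrapper) can be used. The sketch below is a minimal example that demosaics one image with two different algorithms; the file name is hypothetical.

# Minimal sketch: load a DNG from the dataset and demosaic it with two
# different algorithms for comparison. Requires the rawpy package
# (pip install rawpy); the file name is hypothetical.
import rawpy

def demosaic(path, algorithm):
    with rawpy.imread(path) as raw:
        # postprocess() demosaics and returns an 8-bit RGB numpy array.
        return raw.postprocess(
            demosaic_algorithm=algorithm,
            use_camera_wb=True,
            no_auto_bright=True,
        )

if __name__ == "__main__":
    path = "object01_iso3200_exp0005.dng"  # hypothetical file name
    linear = demosaic(path, rawpy.DemosaicAlgorithm.LINEAR)
    ahd = demosaic(path, rawpy.DemosaicAlgorithm.AHD)
    print(linear.shape, ahd.shape)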

Related Links

Each entry lists the dataset name, a description, and the hosting institution.

ETHZ Laser Registration Datasets

The "Laser Registration Datasets" cover a large spectrum of environmental structures with precise ground-truth measurements. Recorded information: 3D point clouds, gravity, magnetic north, GPS, and global poses measured with a theodolite.

Institution: Autonomous Systems Lab, ETH Zürich
