
Michael Milford

Photos: 1) robot dancing in Japan; 2) as the T-800; 3) driving in the Google autonomous car; 4) sunset at the FSR conference in Matsushima.

Contact Details

Michael Milford | ARC DECRA Fellow | Microsoft Faculty Fellow | Senior Lecturer | School of Electrical Engineering and Computer Science (EECS)
Science and Engineering Faculty | Queensland University of Technology
phone: +61 7 3138 9969 | fax: +61 7 3138 1469 | email: michael.milford@qut.edu.au
Gardens Point, S Block 1110 | 2 George Street, Brisbane, QLD 4000

 

Brief Bio

I hold a PhD in Electrical Engineering and a Bachelor of Mechanical and Space Engineering from the University of Queensland (UQ), awarded in 2006 and 2002 respectively. After a brief postdoc in robotics at UQ, I worked for three years at the Queensland Brain Institute as a Research Fellow on the Thinking Systems Project. In 2010 I moved to the Queensland University of Technology (QUT) to finish my Thinking Systems postdoc, and was appointed as a Lecturer in 2011. In 2012 I was awarded an inaugural Australian Research Council Discovery Early Career Researcher Award, which provides a research-intensive fellowship salary and extra funding support for 3 years. In 2013 I became a Microsoft Faculty Fellow and spent a sabbatical in Boston working with researchers at Harvard University and Boston University. I am currently a Senior Lecturer at QUT with a research focus, although I continue to teach Introduction to Robotics every year. From 2014 to 2020 I will be a Chief Investigator on the $19,000,000 Australian Research Council Centre of Excellence for Robotic Vision.

My research interests include:

  • Vision-based mapping and navigation
  • Computational modelling of the rodent hippocampus and entorhinal cortex, especially with respect to mapping and navigation
  • Computational modelling of human visual recognition
  • Biologically inspired robot navigation and computer vision
  • Simultaneous Localisation And Mapping (SLAM) 

News

Here I highlight recent news of interest. I keep a fairly extensive archive of all older news here.

14 January 2014: 5 papers, 2 workshops, 1 organized session and 1 Pecha Kucha presentation accepted to the 2014 International Conference on Robotics and Automation

5 papers led by my research group have been accepted into the 2014 International Conference on Robotics and Automation, to be held in Hong Kong in May/June. The papers are:

  • "Condition-Invariant, Top-Down Visual Place Recognition," Michael J Milford, Walter Scheirer, Eleonora Vig, David Cox (an ongoing collaboration with neuroscientists and computer vision researchers at Harvard University)

  • "Towards Training-Free Appearance-Based Localization: Probabilistic Models for Whole-Image Descriptors," Stephanie Lowry, Gordon Wyeth, Michael J Milford
  • "Transforming Morning to Afternoon using Linear Regression Techniques," Stephanie Lowry, Michael J Milford, Gordon Wyeth
  • "All-Environment Visual Place Recognition with SMART," Edward Pepperell, Peter Corke, Michael J Milford
  • "Multi-scale Bio-inspired Place Recognition," Zetao Chen, Adam Jacobson, Ugur M. Erdem, Michael E. Hasselmo and Michael Milford (an ongoing collaboration with neuroscientists at Boston University)

Well done to PhD students Steph, Ed, Zetao and Adam for their papers.

I am also co-chairing two accepted workshops and an organized session with the University of California, Boston University, the University of Plymouth, the Hong Kong University of Science and Technology and the Chemnitz University of Technology.

Finally, I have been selected as one of six inaugural Pecha Kucha speakers at the conference, out of a pool of sixteen proposals. 20 slides, 20 seconds per slide, should be very manic but fun. My provisional title is:

  • "Superhuman Robot Navigation with a Frankenstein Model"

19 December 2013: Australian Research Council Centre of Excellence for Robotic Vision funded for $19,000,000 over the next 7 years.


Yesterday we received the happy news that our Australian Research Council Centre of Excellence for Robotic Vision has been funded for $19,000,000 over the next 7 years.

The project will be led by Professor Peter Corke at the Queensland University of Technology and involves a total of 13 chief investigators and 6 partner investigators from 10 organisations spanning robotics and computer vision across the globe.

Centre Overview: The Centre’s research will allow robots to see, to understand their environment using the sense of vision. This is the missing capability that currently prevents robots from performing useful tasks in the complex, unstructured and dynamically changing environments in which we live and work.

The entire team of investigators comprises: Peter Corke, Ian Reid, Tom Drummond, Robert Mahony, Gordon Wyeth, Michael Milford, Ben Upcroft, Anton van den Hengel, Chunhua Shen, Richard Hartley, Hongdong Li, Stephen Gould, Gustavo Carneiro, Paul Newman, Philip Torr, Francois Chaumette, Frank Dellaert, Andrew Davison and Marc Pollefeys.

The organization list includes: Queensland University of Technology, the University of Adelaide, Monash University, the Australian National University, the University of Oxford, INRIA Rennes-Bretagne, the Georgia Institute of Technology, Imperial College London, the Swiss Federal Institute of Technology Zurich and National ICT Australia.

18 December 2013: ISRR2013, Singapore

I was an invited speaker at the 2013 International Symposium on Robotics Research in Singapore this week, photos below. I also gave an invited presentation at A*Star.

ISRR2013, Singapore

9 December 2013: ICCV2013 and ACRA2013, Sydney

I've just returned from an interesting week at the 2013 Australasian Conference on Robotics and Automation and the 2013 International Conference on Computer Vision, both held in Sydney. Photos in the photo section. I had the pleasure of seeing some of the very top computer vision researchers talk at ICCV, including Jitendra Malik. I also had a great time at the Computer Vision in Vehicle Technology workshop, where I gave an invited presentation.

3 December 2013: New Robotnik Robot

We've taken delivery of our new Robotnik Summit XL robot.

3 December 2013: ACRA Best Paper Award and Finalist

My PhD student Zetao Chen has won the inaugural Ray Jarvis best paper award at the 2013 Australasian Conference on Robotics and Automation for the paper:

"Towards Bio-inspired Place Recognition over Multiple Spatial Scales," Zetao Chen, Adam Jacobson, Ugur Murat Erdem, Michael Hasselmo and Michael Milford

This paper was a collaborative paper led by QUT with neuroscientists at Boston University. The award has been established in memory of Emeritus Professor Ray Jarvis, a pioneer in Australian robotics.

We also had two other best paper finalists:

Best Paper Finalist:

"Towards Condition-Invariant, Top-Down Visual Place Recognition," Michael Milford, Walter Scheirer, Eleonora Vig and David Cox

This paper was a result of collaborative work led by QUT with computer vision and neuroscience researchers at Harvard University.

Best Student Paper Finalist:

"Towards Bio-inspired Place Recognition over Multiple Spatial Scales," Zetao Chen, Adam Jacobson, Ugur Murat Erdem, Michael Hasselmo and Michael Milford

 

Older news can be found here.

Competitive Grant Funding

I have brought in almost $20,000,000 in competitive chief investigator / fellowship grant funding to date. The funding represents a mixture of sole investigator funding (fellowships), international, multidisciplinary collaborative grants and funding from industry.

  • Peter Corke, Ian Reid, Tom Drummond, Robert Mahony, Gordon Wyeth, Michael Milford, Ben Upcroft, Anton van den Hengel, Chunhua Shen, Richard Hartley, Hongdong Li, Stephen Gould, Gustavo Carneiro, Paul Newman, Philip Torr, Francois Chaumette, Frank Dellaert, Andrew Davison and Marc Pollefeys, Australian Research Council Centre of Excellence for Robotic Vision, 2014-2020, $19,000,000
  • M. Milford, Microsoft Research Faculty Fellowship 2013-2014, $100,000
  • M. Milford, Equipment Grant, 2013, $11,600
  • M. Milford, ARC Discovery Early Career Researcher Award 2012-2014, "Visual navigation for sunny summer days and stormy winter nights", $375,000
  • M. Milford, ARC Discovery Project Grant 2012-2014, "Brain-based Sensor Fusion for Navigating Robots", $140,000
  • M. Milford, Small Teaching and Learning Grant, 2011-2012, "No Student Left Behind", $5,500
  • M. Milford, Equipment Grant, 2012, $15,000
  • M. Milford, Early Career Academic Recruitment and Development (ECARD) Grant, 2011-2012, $15,000
  • M. Milford, Equipment Grant, 2011, $20,000
  • M. Milford, Staff Start Up Grant, 2007-2008, $11,000

Research

A brief overview of my current research interests and projects.


Visual navigation for sunny summer days and stormy winter nights (2012-2014 ARC DECRA Fellowship)

This project will develop novel visual navigation algorithms that can recognize places along a route, whether travelled on a bright sunny summer day or in the middle of a dark and stormy winter night. Visual recognition under any environmental conditions is a holy grail for robotics and computer vision, and is a task far beyond current state-of-the-art algorithms. Consequently, robot and personal navigation systems use GPS or laser range finders, missing out on the advantages of visual sensors such as low cost and small size. This project will set a new benchmark in visual route recognition, and in doing so enable the extensive use of low cost visual sensors in robot and personal navigation systems under wide-ranging environmental conditions. The DECRA award runs for 3 years, is worth $375,000, and enables me to conduct research full-time while helping fund PhD students and essential robotics and computer vision research equipment.

Selection of relevant papers:

  • Milford, Michael J., and Gordon Fraser Wyeth. "SeqSLAM: Visual route-based navigation for sunny summer days and stormy winter nights." Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012. (Best Robot Vision Award)
  • Milford, Michael. "Vision-based place recognition: how low can you go?" The International Journal of Robotics Research 32.7 (2013): 766-789.
  • Milford, Michael, Ian Turner, and Peter Corke. "Long exposure localization in darkness using consumer cameras." Proceedings of the 2013 IEEE International Conference on Robotics and Automation. IEEE, 2013.
  • Milford, Michael. "Visual route recognition with a handful of bits." Proceedings of Robotics Science and Systems Conference 2012. University of Sydney, 2012.
  • Edward Pepperell, Peter Corke and Michael Milford, "Towards Persistent Visual Navigation using SMART," in Proceedings of the 2013 Australasian Conference on Robotics and Automation, ARAA, 2013.
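The core idea behind SeqSLAM, matching short sequences of heavily downsampled, patch-normalised images rather than single frames, can be sketched roughly as follows. This is an illustrative toy version, not the published implementation: the image size, patch size and the equal-speed assumption between query and reference traverses are all simplifications.

```python
import numpy as np

def preprocess(img, size=(32, 24), patch=8):
    """Downsample a grayscale image (2-D array) and patch-normalise it."""
    h, w = img.shape
    ys = np.linspace(0, h, size[1] + 1, dtype=int)
    xs = np.linspace(0, w, size[0] + 1, dtype=int)
    # Crude block-average downsample to the target resolution.
    small = np.array([[img[ys[i]:ys[i+1], xs[j]:xs[j+1]].mean()
                       for j in range(size[0])] for i in range(size[1])])
    # Patch normalisation (zero mean, unit variance per local patch)
    # discounts global illumination changes between day and night.
    out = np.empty_like(small)
    for i in range(0, small.shape[0], patch):
        for j in range(0, small.shape[1], patch):
            block = small[i:i+patch, j:j+patch]
            out[i:i+patch, j:j+patch] = (block - block.mean()) / (block.std() + 1e-6)
    return out

def sequence_score(query_seq, ref_frames, start):
    """Sum of mean absolute differences between the query sequence and the
    reference sub-sequence beginning at `start` (equal speed assumed)."""
    return sum(np.abs(q - ref_frames[start + k]).mean()
               for k, q in enumerate(query_seq))

def best_match(query_seq, ref_frames):
    """Reference start index whose sub-sequence best matches the query."""
    n = len(ref_frames) - len(query_seq) + 1
    scores = [sequence_score(query_seq, ref_frames, s) for s in range(n)]
    return int(np.argmin(scores))
```

Matching over sequences rather than single images is what makes this approach tolerant of extreme appearance change: any one frame may match poorly, but the correct sub-sequence still accumulates the lowest total difference.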

Brain-based sensor fusion for navigating robots (2012-14 ARC Discovery Project)

This project will develop new methods for sensor fusion using brain-based algorithms for calibration, learning and recall. Current robotic sensor fusion techniques are primarily based on fusing depth or feature data from range and vision sensors. These approaches require manual calibration and are restricted to environments with structured geometry and reliable visual features. In contrast, rats rapidly calibrate a wide range of sensors to learn and navigate in environments ranging from a pitch-black sewer in Cairo to a featureless desert in America. The project will produce robots that, like rats, autonomously learn how best to use their sensor suites, enabling unsupervised, rapid deployment in a range of environments. The award runs for 3 years, is worth $140,000, and helps fund PhD students and essential robotics research equipment.

Selection of relevant papers:

  • Michael Milford and Adam Jacobson, "Brain-based Sensor Fusion for Navigating Robots," Proceedings of the 2013 IEEE International Conference on Robotics and Automation, IEEE, 2013.
  • Adam Jacobson, Zetao Chen, Michael J Milford, "Autonomous Movement-Driven Place Recognition Calibration for Generic Multi-Sensor Robot Platforms," Proceedings of the 2013 IEEE International Conference on Intelligent Robotics and Systems, IEEE, 2013.
  • Zetao Chen, Adam Jacobson, Ugur M. Erdem, Michael E. Hasselmo and Michael Milford, "Towards Bio-inspired Place Recognition over Multiple Spatial Scales," in Proceedings of the 2013 Australasian Conference on Robotics and Automation, ARAA, 2013.
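As a generic illustration of score-level sensor fusion (not the brain-based calibration approach this project actually develops), match scores from several sensors over a set of candidate places can be converted to pseudo-likelihoods and combined multiplicatively, with per-sensor reliability weights. The weighting scheme here is a standard naive-Bayes-style sketch of my own, included only to make the fusion idea concrete.

```python
import numpy as np

def to_likelihood(similarities):
    """Convert raw similarity scores over candidate places into a
    pseudo-likelihood distribution via a softmax."""
    s = np.asarray(similarities, dtype=float)
    e = np.exp(s - s.max())          # subtract max for numerical stability
    return e / e.sum()

def fuse(sensor_scores, weights=None):
    """Weight each sensor's distribution by a reliability estimate and
    combine multiplicatively (independent-sensor assumption)."""
    weights = weights or [1.0] * len(sensor_scores)
    fused = np.ones_like(to_likelihood(sensor_scores[0]))
    for scores, w in zip(sensor_scores, weights):
        fused *= to_likelihood(scores) ** w   # w < 1 softens an unreliable sensor
    return fused / fused.sum()
```

For example, a camera that strongly favours one place and a laser that weakly agrees will together produce a sharper fused belief over that place than either sensor alone.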

Condition- and Pose-invariant Visual Place Recognition (partly funded by a Microsoft Research Faculty Fellowship)

This project addresses the challenge of developing a place recognition system with performance exceeding that of humans and current state-of-the-art robotic, computer vision and artificial intelligence systems. Place recognition is a well-defined but extremely challenging problem to solve in the general sense: given sensory information about a place, such as a photo, can a human, animal, robot or personal navigation aid decide whether that place is the same as any place it has previously visited or learnt, despite the vast range of ways in which the appearance of that place can change? Current approaches to the problem using GPS, cameras or lasers have one or more severe theoretical, technological or application-based limitations, including high cost, sensitivity to changing environmental conditions, lack of generality, training requirements and long recognition latencies. This project will solve these problems by developing a generally applicable, single-shot place recognition system based on recent discoveries in human and rodent studies of visual recognition and place memory.

Selection of relevant papers:

  • Michael Milford, Eleonora Vig, Walter Scheirer and David Cox, "Towards Condition-Invariant, Top-Down Visual Place Recognition," in Proceedings of the 2013 Australasian Conference on Robotics and Automation, ARAA, 2013.

RatSLAM and OpenRatSLAM

RatSLAM is a robot navigation system based on models of the rodent brain. The project has been ongoing for more than a decade and has been integrated into many other projects, such as the Lingodroids project. Most recently we released an open source version, OpenRatSLAM, along with an accompanying journal paper in Autonomous Robots.

Selection of relevant papers:

  • Ball, David, et al. "OpenRatSLAM: an open source brain-based SLAM system." Autonomous Robots (2013): 1-28.
  • Milford, Michael J., Gordon F. Wyeth, and David Prasser. "RatSLAM: a hippocampal model for simultaneous localization and mapping." Robotics and Automation (ICRA), 2004 IEEE International Conference on. Vol. 1. IEEE, 2004.
  • Milford, Michael J., and Gordon F. Wyeth. "Mapping a suburb with a single camera using a biologically inspired SLAM system." Robotics, IEEE Transactions on 24.5 (2008): 1038-1053.
  • Milford, Michael, and Gordon Wyeth. "Persistent navigation and mapping using a biologically inspired SLAM system." The International Journal of Robotics Research 29.9 (2010): 1131-1153.
  • Milford, Michael John. "Robot navigation from nature." Springer Tracts in Advanced Robotics 41 (2008).

Persistent Mapping and Navigation

The world is a constantly changing place. If robots are ever to be a permanent fixture in our daily lives, they must be able to map and navigate their environments autonomously over long periods of time. We are approaching this problem in a number of ways - attacking the "Place Recognition Quad" from all four sides so to speak.

Selection of relevant papers:

  • Stephanie M. Lowry, Gordon F. Wyeth, and Michael J. Milford, "Odometry-driven Inference to Link Multiple Exemplars of a Location," Proceedings of the 2013 IEEE International Conference on Intelligent Robotics and Systems, IEEE, 2013.
  • Stephanie Lowry, Gordon Wyeth and Michael Milford, "Training-Free Probability Models for Whole-Image Based Place Recognition," in Proceedings of the 2013 Australasian Conference on Robotics and Automation, ARAA, 2013.
  • Glover, Arren J., et al. "FAB-MAP + RatSLAM: appearance-based SLAM for multiple times of day." Robotics and Automation (ICRA), 2010 IEEE International Conference on. IEEE, 2010.
  • Murphy, Liz, et al. "Experimental comparison of odometry approaches." Proceedings of the 13th International Symposium on Experimental Robotics (ISER 2012). 2012.

Computational Neuroscience and Modelling

I'm interested in a wide range of computational neuroscience areas with a focus on those related to processes like navigation, mapping, learning and recall, and mathematical modelling of these processes.

Selection of relevant papers:

  • Nolan, Christopher R., et al. "The race to learn: spike timing and STDP can coordinate learning and recall in CA3." Hippocampus 21.6 (2011): 647-660.
  • Cheung, Allen, et al. "Maintaining a cognitive map in darkness: the need to fuse boundary knowledge with path integration." PLoS Computational Biology 8.8 (2012): e1002651.
  • Milford, Michael J., Janet Wiles, and Gordon F. Wyeth. "Solving navigational uncertainty using grid cells on robots." PLoS Computational Biology 6.11 (2010): e1000995.
  • Stratton, Peter, et al. "Using strategic movement to calibrate a neural compass: A spiking network for tracking head direction in rats and robots." PLoS ONE 6.10 (2011): e25687.

CAT-SLAM: Continuous Appearance-based Simultaneous Localization And Mapping

Work led by Will Maddern on creating a (nearly) parameter-free, highly capable SLAM system: essentially a lightweight, probabilistic, mathematically rigorous version of RatSLAM without all the neural dynamics and parameters.

Selection of relevant papers:

  • Maddern, Will, Michael Milford, and Gordon Wyeth. "CAT-SLAM: probabilistic localisation and mapping using a continuous appearance-based trajectory." The International Journal of Robotics Research 31.4 (2012): 429-451.
  • Maddern, Will, Michael Milford, and Gordon Wyeth. "Continuous appearance-based trajectory SLAM." Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011.
  • Maddern, Will, Michael Milford, and Gordon Wyeth. "Capping computation time and storage requirements for appearance-based localization with CAT-SLAM." Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012.
  • Maddern, William, Michael Milford, and Gordon Wyeth. "Towards persistent localization and mapping with a continuous appearance-based topology." Proceedings of Robotics Science and Systems Conference 2012. University of Sydney, 2012.
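The general flavour of appearance-based trajectory filtering can be illustrated with a minimal 1-D particle filter along a learnt route. This sketch is my own simplification and omits CAT-SLAM's continuous trajectory representation and local appearance model; it just shows the predict/update/resample cycle driven by odometry and an appearance match score.

```python
import numpy as np

def particle_filter_step(particles, weights, motion, appearance_sim,
                         noise=0.5, rng=None):
    """One predict/update/resample cycle of a 1-D appearance-based filter.

    `particles` are candidate positions along the learnt trajectory (metres
    travelled); `appearance_sim(pos)` scores how well the current camera
    image matches the appearance stored near that trajectory position."""
    rng = rng or np.random.default_rng()
    # Predict: shift particles by the odometry reading plus diffusion noise.
    particles = particles + motion + rng.normal(0.0, noise, particles.size)
    # Update: reweight each particle by the appearance match at its position.
    weights = weights * np.array([appearance_sim(p) for p in particles])
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < particles.size / 2:
        idx = rng.choice(particles.size, size=particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights
```

The weighted mean of the particles then serves as the position estimate along the route; repeated cycles keep the particle cloud locked onto the trajectory location whose stored appearance best explains the incoming images.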

Semantic Mapping and Cognitive Science

RatSLAM has formed the base for a number of research projects in cognitive science including the Lingodroids project. I'm interested in the interdisciplinary interface between robotics, computer vision and cognitive science.

Selection of relevant papers:

  • Milford, Michael, et al. "Learning spatial concepts from RatSLAM representations." Robotics and Autonomous Systems 55.5 (2007): 403-410.
  • Schulz, Ruth, et al. "Lingodroids: Studies in spatial cognition and language." Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011.

PlaceRecognition.com

I have started an online resource dedicated to the place recognition problem: given sensory information about a place, such as a photo, can a human, animal, robot or personal navigation aid decide whether that place is the same as any place it has previously visited or learnt, despite the vast range of ways in which its appearance can change? Current approaches using GPS, cameras or lasers have one or more significant theoretical, technological or application-based limitations, including high cost, sensitivity to changing environmental conditions, lack of generality, training requirements and long recognition latencies.

The website is intended to act as a focused hub for place recognition research, and especially vision-based place recognition research.

You can view it at PlaceRecognition.com.

YouTube Channel

My YouTube channel is "MilfordRobotics", where I post both research and teaching material.


Current Research Staff

Arren Glover (Postdoctoral Research Fellow)

Obadiah Lam (Postdoctoral Research Fellow, 2014)

Current PhD Students

Will Maddern - Continuous Appearance-based Localisation and Mapping
Stephanie Lowry - Spatio-temporal Mapping for Mobile Robots
Zetao Chen - Brain-based Sensor Fusion for Navigating Robots
Edward Pepperell - Visual navigation for sunny summer days and stormy winter nights
Adam Jacobson - Brain-based Sensor Fusion for Navigating Robots

Current Undergraduate Students (Final Year Projects)

Mara Smeathers - Brain-based sensor fusion for navigating robots
Stephen Hausler - Brain-based sensor fusion for navigating robots

PhD Projects

I am looking for enthusiastic, talented people who are passionate about embarking on a long term research project and journey (i.e. a PhD).

I currently have PhD positions available. Click here for further information, especially concerning minimum thresholds for applying.

Undergraduate Projects

I have a range of undergraduate final year projects as part of QUT BEB801/802, and possibly vacation projects. Go here to check them out.

Teaching

I teach ENB339 Introduction to Robotics in Semester 2 each year. In it we learn:

  • Robot construction: build a robot using Lego and the NXT controller brick
  • Fundamentals of robotics: how to control the end point of a simple robot arm and make it follow a path
  • Computer vision: how to interpret images to work out the size, shape and color of objects in the scene
  • Connecting computer vision to robotics: make your robot move to objects of a specific size, shape and color
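The last two steps, using vision to steer a robot toward a colored target, can be sketched in a few lines. The color thresholds and command names below are illustrative placeholders, not the unit's actual prac code:

```python
import numpy as np

def find_red_object(img):
    """Return the (row, col) centroid of red pixels in an RGB image
    (H x W x 3, values 0-255), or None if nothing red is visible."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    mask = (r > 150) & (g < 100) & (b < 100)   # crude "red" threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def steering_command(img, tol=10):
    """Turn toward the detected object: 'left', 'right', 'forward',
    or 'search' when no object is in view."""
    hit = find_red_object(img)
    if hit is None:
        return "search"
    _, col = hit
    centre = img.shape[1] / 2
    if col < centre - tol:
        return "left"
    if col > centre + tol:
        return "right"
    return "forward"
```

The same pattern (segment by color, find the centroid, steer to centre it in the image) carries directly over to the Lego/NXT robots used in the pracs.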

Here's a video showing prac footage from the 2013 incarnation of the unit. Music is from: Pamgaea by Kevin MacLeod (incompetech.com). Licensed under Creative Commons: By Attribution 3.0 http://creativecommons.org/licenses/by/3.0/

 

Collaborators

I am actively collaborating with or have collaborated with the following researchers and institutions:

Harvard University, United States 
David Cox, Walter Scheirer and Eleonora Vig
School of Engineering and Applied Sciences and Department of Molecular and Cellular Biology at Harvard University

Boston University , United States
Michael Hasselmo and Ugur M. Erdem
Center for Memory and Brain and Graduate Program for Neuroscience at Boston University

The University of Nottingham, United Kingdom 
Robert Oates, Graham Kendall and Jonathan M Garibaldi

Victoria University of Wellington, New Zealand
Henry Williams and Will Browne

University of Antwerp, Belgium
Rafael Berkvens, Herbert Peremans and Maarten Weyn

The Australian National University, Australia
Robert Mahony and Felix Schill

The University of Queensland, Australia
Janet Wiles, Allen Cheung, Peter Stratton and Christopher Nolan

Commonwealth Scientific and Industrial Research Organisation, Australia
Jonathan Roberts and Kane Usher

Photos

A collection of work-related photo albums and videos. I've noticed that we're usually so engrossed in our work that we rarely photograph the process itself, only our free time.

  

ISRR2013, Singapore

ICCV2013, Sydney


ACRA2013, Sydney


MIT Museum, Boston 2013


ICRA2012 and USA Work Trip


RSS2012, Sydney


FSR2012, Japan


ICRA2011 Shanghai


ACRA2011 Melbourne

MSR Summit 2013, Seattle

Thinking Systems Retreat 2010

Couran Cove, Queensland

 

QUT ECARD Retreat 2011

Kingscliff, Queensland

Moser lab and Hippocampus Workshop

Portugal and Norway, 2008

IROS2006, Beijing

ACRA2005, Sydney

Thinking Systems Retreat 2009

O'Reilly's, Queensland

ICRA2004

New Orleans, USA

 

Peer-Reviewed Publications

I have switched to Google Scholar to manage publication lists and citation analysis. The publicly visible link is available here. A selection of significant publications is listed below.

You can also access most of my publications at the QUT ePrints repository.

Some Select Publications

Here is a selection of my significant publications. If any of the links are broken, just search the title on Google or Google Scholar.

M. J. Milford, J. Wiles, G. Wyeth, "Solving Navigational Uncertainty Using Grid Cells on Robots", PLoS Computational Biology 6(11), 2010. Impact Factor = 5.8. ERA ranking: A*.

M. J. Milford, G. Wyeth, "Persistent Navigation and Mapping using a Biologically Inspired SLAM System", The International Journal of Robotics Research, 2009. Impact Factor = 2.882. ERA ranking: A*.

M. J. Milford, G. Wyeth, "Mapping a Suburb with a Single Camera using a Biologically Inspired SLAM System", IEEE Transactions on Robotics, 24 (5), pp. 1038-1053, 2008. Impact Factor = 2.656. ERA ranking: A*.

M. Milford, Robot Navigation From Nature, Springer Tracts in Advanced Robotics, Volume 41, Springer-Verlag, 2008.

Software

I have produced open source software including OpenRatSLAM. There are also open source implementations of algorithms used in my other research, including OpenSeqSLAM by Niko Sunderhauf.

OpenRatSLAM is a fully open source implementation of the RatSLAM system, with the most recent release integrated with the Robot Operating System (ROS), and an older vanilla C++ version. Click on the link below to get it and find out more:

http://code.google.com/p/ratslam/

You can also find OpenRatSLAM on OpenSLAM.com.

OpenSeqSLAM is an open source Matlab implementation of the original SeqSLAM algorithm, written by Niko Sunderhauf.
openFABMAP is an open, modifiable source-code implementation of the Fast Appearance-based Mapping algorithm (FAB-MAP) originally developed by Mark Cummins and Paul Newman. openFABMAP was designed from published FAB-MAP theory and is for personal and research use.
I have also written a simple Matlab script to estimate your current-year citation count using Google Scholar.

Datasets and Downloads

Click here to access the datasets and downloads webpage.

Media Coverage

A smattering of media coverage of my research over the years.

Robots

Research Mapping

My research "Wordle" (generated at Wordle.net):
