
Michael Milford

Contact Details

Michael Milford | ARC Future Fellow | Microsoft Faculty Fellow | Associate Professor | School of Electrical Engineering and Computer Science (EECS)
Science and Engineering Faculty | Queensland University of Technology
phone: +61 7 3138 9969 | fax: +61 7 3138 1469 | email: michael.milford@qut.edu.au
Gardens Point, S Block 1110 | 2 George Street, Brisbane, QLD 4000

 

Brief Bio

I am a leading robotics researcher conducting interdisciplinary research at the boundary between robotics, neuroscience and computer vision, and a multi-award-winning educational entrepreneur. I am currently an Associate Professor at the Queensland University of Technology, an Australian Research Council Future Fellow, a Microsoft Research Faculty Fellow and a Chief Investigator on the Australian Centre for Robotic Vision.

My research has attracted more than twenty million dollars in research and industry funding, both as sole-investigator fellowships and as large team grants. My papers have won four best paper awards and been finalists for another seven, for a total of 11, including the Best Vision Paper award at ICRA 2012. My citation h-index is 19, with 1722 citations as of April 11, 2016. I have given more than 30 invited presentations across ten countries at top international conferences, universities (including Harvard, MIT, CMU, Boston University, Cambridge and Imperial College London) and corporations (including Google and Microsoft).

As an educational entrepreneur, I have written and produced innovative textbooks for high school students for fifteen years, with more than 6,000 physical sales and more than 1.5 million educational website and YouTube views. I am currently launching the company Math Thrills, an initiative combining mass market entertainment and STEM education. Math Thrills received pre-seed funding on Kickstarter ($2,500) and seed funding ($50,000) from QUT Bluebox, and is in initial school trials. The initiative has led to recognition including the 2015 Queensland Young Tall Poppy of the Year Award and a 2015 TEDxQUT talk.

I hold dual Australian and United States citizenship, and have lived and worked in locations including Boston in the USA, and Edinburgh and London in the UK, collaborating with organisations including Harvard University, Boston University, Oxford University, MIT, Edinburgh University and Imperial College London.

For a 40-second overview of what my research group does, watch the video below:

News

Here I highlight recent news of interest. I keep a fairly extensive archive of all older news here.

25 February 2016: Two workshops accepted to Robotics: Science and Systems 2016

We are leading two accepted workshops to be run at Robotics: Science and Systems 2016:

  • Are the skeptics right? Limits and Potentials of Deep Learning in Robotics

  • Visual Place Recognition: What is it Good For?

15 January 2016: Three papers accepted to the 2016 International Conference on Robotics and Automation

  • James Mount, Michael J Milford, "2D Visual Place Recognition for Domestic Service Robots at Night"
  • Thomas Stone, Dario Differt, Michael J Milford, Barbara Webb, "Skyline-based Localisation for Aggressively Manoeuvring Robots using UV sensors and Spherical Harmonics"
  • Niko Sünderhauf, Feras Dayoub, Sean Michael McMahon, Ben Talbot, Ruth Schulz, Gordon Wyeth, Peter Corke, Ben Upcroft, Michael J Milford, "Place Categorization and Semantic Mapping on a Mobile Robot"

03 November 2015: International Journal of Robotics Research paper accepted

Our paper "Routed roads: Probabilistic vision-based place recognition for changing conditions, split streets and varied viewpoints" has been accepted and published in The International Journal of Robotics Research - click here to go have a read.

24 October 2015: IEEE Transactions on Robotics paper accepted

Our paper "Supervised and Unsupervised Linear Learning Techniques for Visual Place Recognition in Changing Environments" has been accepted for publication in IEEE Transactions on Robotics.

24 October 2015: IEEE Transactions on Robotics survey paper accepted and published

Our survey paper "Visual Place Recognition: A Survey" has been accepted and published in IEEE Transactions on Robotics - click here to go have a read.

04 October 2015: TEDxQUT talk online: How Hollywood can save math education

"Mathematical and scientific illiteracy costs both individuals and society dearly. Michael Milford thinks we can solve our declining mathematical and science standards by blowing stuff up. He is using the books, video games and junky Hollywood blockbusters teenagers consume every day to get them excited about mathematics.

Michael Milford is the founding director of the Math Thrills initiative and a QUT Associate Professor, and Chief Investigator for the Australian Centre for Robotic Vision. He is currently in the process of launching his Math Thrills company, after a successful Kickstarter campaign. Math Thrills aims to stealthily embed key mathematical and scientific concepts throughout mass media entertainment including books, games and movies, and to enable educators to teach to that material. When he isn't trying to revolutionize how we learn math, he leads a research group that uses neuroscience discoveries to create new algorithms and technologies for robotics, specializing in the area of robotic and personal navigation systems.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at http://ted.com/tedx"

21 August 2015: Queensland Young Tall Poppy of the Year Award

Last night I was honoured to be awarded the 2015 Queensland Young Tall Poppy of the Year award, presented by the Minister for Science, the Hon Leeanne Enoch, and the Queensland Chief Scientist Geoff Garrett, accompanied by my head of school Prof David Lovell.

29 April 2015: Robotics Science and Systems (RSS) 2015 paper accepted

Our paper "Place Recognition with ConvNet Landmarks: Viewpoint-Robust, Condition-Robust, Training-Free" has been accepted for publication at Robotics Science and Systems 2015. This is joint work with a great team here at the +Australian Centre for Robotic Vision at QUT: Niko Sunderhauf,  Ben Upcroft, Adam Jacobson, Edward Pepperell, Feras Dayoub and Sareh Shirazi. 

 

Older news can be found here.

Competitive Grant Funding

I have been awarded more than $20,000,000 in competitive chief investigator and fellowship grant funding to date. The funding is a mixture of sole-investigator fellowships, international multidisciplinary collaborative grants, and funding from industry.

  • M. Milford, "Superhuman place recognition with a unified model of human visual processing and rodent spatial memory", ARC Future Fellowship, 2014-2018, $676,000.
  • Peter Corke, Ian Reid, Tom Drummond, Robert Mahony, Gordon Wyeth, Michael Milford, Ben Upcroft, Anton van den Hengel, Chunhua Shen, Richard Hartley, Hongdong Li, Stephen Gould, Gustavo Carneiro, Paul Newman, Philip Torr, Francois Chaumette, Frank Dellaert, Andrew Davison and Marc Pollefeys, Australian Research Council Centre of Excellence for Robotic Vision, 2014-2020, $19,000,000
  • M. Milford, M. Dunbabin, J. Firn, Pilot Project Grant, "Automated Environmental Change Monitoring", $20,000
  • M. Milford, Microsoft Research Faculty Fellowship 2013-2014, $100,000
  • M. Milford, Equipment Grant, 2013, $11,600
  • M. Milford, ARC Discovery Early Career Researcher Award 2012-2014, "Visual navigation for sunny summer days and stormy winter nights", $375,000
  • M. Milford, ARC Discovery Project Grant 2012-2014, "Brain-based Sensor Fusion for Navigating Robots", $140,000
  • M. Milford, Small Teaching and Learning Grant, 2011-2012, "No Student Left Behind", $5,500
  • M. Milford, Equipment Grant, 2012, $15,000
  • M. Milford, Early Career Academic Recruitment and Development (ECARD) Grant, 2011-2012, $15,000
  • M. Milford, Equipment Grant, 2011, $20,000
  • M. Milford, Staff Start Up Grant, 2007-2008, $11,000

Research

A brief overview of my current research interests and projects.


Visual navigation for sunny summer days and stormy winter nights (2012-2014 ARC DECRA Fellowship)

Click here to visit the main SeqSLAM project page

This project will develop novel visual navigation algorithms that can recognize places along a route, whether travelled on a bright sunny summer day or in the middle of a dark and stormy winter night. Visual recognition under any environmental conditions is a holy grail for robotics and computer vision, and is a task far beyond current state-of-the-art algorithms. Consequently, robot and personal navigation systems use GPS or laser range finders, missing out on the advantages of visual sensors such as low cost and small size. This project will set a new benchmark in visual route recognition, and in doing so enable the extensive use of low-cost visual sensors in robot and personal navigation systems under wide-ranging environmental conditions. The DECRA award runs for 3 years and is worth $375,000; it enables me to conduct research full-time and helps fund PhD students and essential robotics and computer vision research equipment.
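
To make the sequence-matching idea concrete, here is a minimal Python/NumPy sketch (an illustration only, not the released OpenSeqSLAM or SeqSLAM code): images are downsampled and patch-normalised, compared whole-image against a database of route images, and the match is chosen over a short sequence of frames rather than a single frame. All function names and the candidate trajectory speeds are illustrative assumptions.

```python
import numpy as np

def patch_normalise(img, patch=8):
    """Normalise each patch of a downsampled grayscale image to zero mean, unit variance."""
    out = img.astype(np.float64)
    h, w = img.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            p = out[y:y+patch, x:x+patch]
            std = p.std()
            out[y:y+patch, x:x+patch] = (p - p.mean()) / std if std > 0 else 0
    return out

def difference_matrix(query_imgs, db_imgs):
    """Mean absolute difference between every query image and every database image."""
    D = np.zeros((len(query_imgs), len(db_imgs)))
    for i, q in enumerate(query_imgs):
        for j, d in enumerate(db_imgs):
            D[i, j] = np.abs(q - d).mean()
    return D

def best_matching_sequence(D, seq_len=10):
    """For the most recent seq_len query frames, find the database index whose
    constant-velocity sequence of frames gives the lowest summed difference."""
    n_q, n_db = D.shape
    q_idx = np.arange(n_q - seq_len, n_q)
    best_score, best_j = np.inf, -1
    for j in range(n_db - seq_len):
        for v in (0.8, 1.0, 1.25):   # candidate trajectory speeds (illustrative)
            db_idx = np.clip(j + np.round(v * np.arange(seq_len)).astype(int), 0, n_db - 1)
            score = D[q_idx, db_idx].sum()
            if score < best_score:
                best_score, best_j = score, j
    return best_j, best_score
```

With real data the difference matrix spans thousands of frames and the trajectory search is vectorised and locally contrast-enhanced, as in the released implementations listed under Software below.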

Selection of relevant papers:

  • Milford, Michael J., and Gordon Fraser Wyeth. "SeqSLAM: Visual route-based navigation for sunny summer days and stormy winter nights." Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012. (Best Robot Vision Award)
  • Milford, Michael. "Vision-based place recognition: how low can you go?" The International Journal of Robotics Research 32.7 (2013): 766-789.
  • Milford, Michael, Ian Turner, and Peter Corke. "Long exposure localization in darkness using consumer cameras." Proceedings of the 2013 IEEE International Conference on Robotics and Automation. IEEE, 2013.
  • Milford, Michael. "Visual route recognition with a handful of bits." Proceedings of Robotics Science and Systems Conference 2012. University of Sydney, 2012.
  • Edward Pepperell, Peter Corke and Michael Milford, "Towards Persistent Visual Navigation using SMART," in Proceedings of the 2013 Australasian Conference on Robotics and Automation, ARAA, 2013.

Brain-based sensor fusion for navigating robots (2012-14 ARC Discovery Project)

This project will develop new methods for sensor fusion using brain-based algorithms for calibration, learning and recall. Current robotic sensor fusion techniques are primarily based on fusing depth or feature data from range and vision sensors. These approaches require manual calibration and are restricted to environments with structured geometry and reliable visual features. In contrast, rats rapidly calibrate a wide range of sensors to learn and navigate in environments ranging from a pitch-black sewer in Cairo to a featureless desert in America. The project will produce robots that, like rats, autonomously learn how best to use their sensor suites, enabling unsupervised, rapid deployment in a range of environments. The award runs for 3 years and is worth $140,000, and helps fund PhD students and essential robotics research equipment.
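
As a simple illustration of multi-sensor fusion for place recognition (a generic baseline sketch, not the brain-based method developed in this project), the Python snippet below converts each sensor's per-place match scores into a normalised pseudo-likelihood, weights each sensor by how peaked its likelihood is (a crude online reliability proxy), and fuses the weighted log-likelihoods. All names and weightings are illustrative assumptions.

```python
import numpy as np

def to_likelihood(scores):
    """Convert raw per-place match scores (lower = better) into a normalised pseudo-likelihood."""
    s = np.asarray(scores, dtype=float)
    lik = np.exp(-(s - s.min()))
    return lik / lik.sum()

def reliability(likelihood):
    """Reliability proxy: a peaked distribution (low entropy) earns more weight."""
    p = np.clip(likelihood, 1e-12, None)
    entropy = -(p * np.log(p)).sum()
    return 1.0 / (1.0 + entropy)

def fuse(sensor_scores):
    """Weighted log-linear fusion of per-sensor place likelihoods; returns best place index."""
    fused = None
    for scores in sensor_scores:
        p = to_likelihood(scores)
        w = reliability(p)
        logp = w * np.log(np.clip(p, 1e-12, None))
        fused = logp if fused is None else fused + logp
    return int(np.argmax(fused)), fused

# Example: a camera and a laser each score 5 candidate places (lower = better match).
camera = [0.9, 0.2, 1.1, 1.3, 0.8]
laser = [0.7, 0.3, 0.9, 1.5, 1.2]
best_place, _ = fuse([camera, laser])
```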

Selection of relevant papers:

  • Michael Milford and Adam Jacobson, "Brain-based Sensor Fusion for Navigating Robots." Proceedings of the 2013 IEEE International Conference on Robotics and Automation. IEEE, 2013.
  • Adam Jacobson, Zetao Chen, Michael J Milford, "Autonomous Movement-Driven Place Recognition Calibration for Generic Multi-Sensor Robot Platforms", Proceedings of the 2013 IEEE International Conference on Intelligent Robots and Systems, IEEE, 2013.
  • Zetao Chen, Adam Jacobson, Ugur M. Erdem, Michael E. Hasselmo and Michael Milford, "Towards Bio-inspired Place Recognition over Multiple Spatial Scales," in Proceedings of the 2013 Australasian Conference on Robotics and Automation, ARAA, 2013.

Condition- and Pose-invariant Visual Place Recognition (partly funded by a Microsoft Research Faculty Fellowship)

This project addresses the challenge of developing a place recognition system with performance exceeding that of humans and current state-of-the-art robotic, computer vision and artificial intelligence systems. Place recognition is a well-defined but extremely challenging problem to solve in the general sense: given sensory information about a place, such as a photo, can a human, animal, robot or personal navigation aid decide whether that place is the same as any place it has previously visited or learnt, despite the vast range of ways in which the appearance of that place can change? Current approaches to the problem using GPS, cameras or lasers have one or more severe theoretical, technological or application-based limitations, including high cost, sensitivity to changing environmental conditions, lack of generality, training requirements and long recognition latencies. This project will solve these problems by developing a generally applicable, single-shot place recognition system based on recent discoveries in human and rodent studies of visual recognition and place memory.

Selection of relevant papers:

  • Michael Milford, Eleonora Vig, Walter Scheirer and David Cox, "Towards Condition-Invariant, Top-Down Visual Place Recognition," in Proceedings of the 2013 Australasian Conference on Robotics and Automation, ARAA, 2013.

RatSLAM and OpenRatSLAM

RatSLAM is a robot navigation system based on models of the rodent brain. The project has been ongoing for more than a decade and has been integrated into many other projects such as the Lingodroids project. Most recently we have released an open source version, OpenRatSLAM, along with an accompanying journal paper in Autonomous Robots.
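
The Python sketch below is a highly simplified illustration of the pose-cell idea at the core of RatSLAM, reduced to a 1D ring of cells: an activity packet is maintained by local excitation and global inhibition, shifted by path integration, and corrected by injecting activity at poses associated with recognised views. The real system uses a 3D pose cell structure and is available as OpenRatSLAM (see Software below); the numbers and function names here are illustrative only.

```python
import numpy as np

N = 60                               # number of pose cells around a 1D ring (illustrative)
cells = np.zeros(N); cells[0] = 1.0  # activity packet starts at cell 0

def excite(cells, sigma=2.0):
    """Local excitation: circularly convolve the packet with a Gaussian kernel,
    then apply global inhibition and renormalise."""
    idx = np.arange(N)
    kernel = np.exp(-0.5 * (np.minimum(idx, N - idx) / sigma) ** 2)
    out = np.array([np.dot(np.roll(kernel, i), cells) for i in range(N)])
    out -= out.min()
    return out / out.sum()

def path_integrate(cells, shift):
    """Shift the packet by a (possibly fractional) number of cells to follow self-motion."""
    whole, frac = int(np.floor(shift)), shift - np.floor(shift)
    return (1 - frac) * np.roll(cells, whole) + frac * np.roll(cells, whole + 1)

def local_view_update(cells, view_cell, strength=0.1):
    """Inject activity at the pose previously associated with a recognised local view."""
    cells = cells.copy()
    cells[view_cell] += strength
    return cells / cells.sum()

# One update step: attractor dynamics, path integration, then a visual correction.
cells = excite(cells)
cells = path_integrate(cells, shift=1.5)
cells = local_view_update(cells, view_cell=2)
```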

Selection of relevant papers:

  • Ball, David, et al. "OpenRatSLAM: an open source brain-based SLAM system." Autonomous Robots (2013): 1-28.
  • Milford, Michael J., Gordon F. Wyeth, and David Prasser. "RatSLAM: a hippocampal model for simultaneous localization and mapping." Robotics and Automation, 2004. Proceedings. ICRA'04. 2004 IEEE International Conference on. Vol. 1. IEEE, 2004.
  • Milford, Michael J., and Gordon F. Wyeth. "Mapping a suburb with a single camera using a biologically inspired SLAM system." Robotics, IEEE Transactions on 24.5 (2008): 1038-1053.
  • Milford, Michael, and Gordon Wyeth. "Persistent navigation and mapping using a biologically inspired SLAM system." The International Journal of Robotics Research 29.9 (2010): 1131-1153.
  • Milford, Michael John. "Robot navigation from nature." Springer Tracts in Advanced Robotics 41 (2008).

Persistent Mapping and Navigation

The world is a constantly changing place. If robots are ever to be a permanent fixture in our daily lives, they must be able to map and navigate their environments autonomously over long periods of time. We are approaching this problem in a number of ways - attacking the "Place Recognition Quad" from all four sides so to speak.

Selection of relevant papers:

  • Stephanie M. Lowry, Gordon F. Wyeth, and Michael J. Milford, "Odometry-driven Inference to Link Multiple Exemplars of a Location," Proceedings of the 2013 IEEE International Conference on Intelligent Robots and Systems, IEEE, 2013.
  • Stephanie Lowry, Gordon Wyeth and Michael Milford, "Training-Free Probability Models for Whole-Image Based Place Recognition," in Proceedings of the 2013 Australasian Conference on Robotics and Automation, ARAA, 2013.
  • Glover, Arren J., et al. "FAB-MAP + RatSLAM: appearance-based SLAM for multiple times of day." Robotics and Automation (ICRA), 2010 IEEE International Conference on. IEEE, 2010.
  • Murphy, Liz, et al. "Experimental comparison of odometry approaches." Proceedings of the 13th International Symposium on Experimental Robotics (ISER 2012). 2012.

Computational Neuroscience and Modelling

I'm interested in a wide range of computational neuroscience areas with a focus on those related to processes like navigation, mapping, learning and recall, and mathematical modelling of these processes.

Selection of relevant papers:

  • Nolan, Christopher R., et al. "The race to learn: spike timing and STDP can coordinate learning and recall in CA3." Hippocampus 21.6 (2011): 647-660.
  • Cheung, Allen, et al. "Maintaining a cognitive map in darkness: the need to fuse boundary knowledge with path integration." PLoS Computational Biology 8.8 (2012): e1002651.
  • Milford, Michael J., Janet Wiles, and Gordon F. Wyeth. "Solving navigational uncertainty using grid cells on robots." PLoS Computational Biology 6.11 (2010): e1000995.
  • Stratton, Peter, et al. "Using strategic movement to calibrate a neural compass: A spiking network for tracking head direction in rats and robots." PLoS ONE 6.10 (2011): e25687.

CAT-SLAM: Continuous Appearance-based Simultaneous Localization And Mapping

Work led by Will Maddern on creating a (nearly) parameter-free, highly capable SLAM system - essentially a lightweight, probabilistic, mathematically rigorous version of RatSLAM without all the neural dynamics and parameters.

Selection of relevant papers:

  • Maddern, Will, Michael Milford, and Gordon Wyeth. "CAT-SLAM: probabilistic localisation and mapping using a continuous appearance-based trajectory." The International Journal of Robotics Research 31.4 (2012): 429-451.
  • Maddern, Will, Michael Milford, and Gordon Wyeth. "Continuous appearance-based trajectory SLAM." Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011.
  • Maddern, Will, Michael Milford, and Gordon Wyeth. "Capping computation time and storage requirements for appearance-based localization with CAT-SLAM." Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012.
  • Maddern, William, Michael Milford, and Gordon Wyeth. "Towards persistent localization and mapping with a continuous appearance-based topology." Proceedings of Robotics Science and Systems Conference 2012. University of Sydney, 2012.

Semantic Mapping and Cognitive Science

RatSLAM has formed the basis for a number of research projects in cognitive science, including the Lingodroids project. I'm interested in the interdisciplinary interface between robotics, computer vision and cognitive science.

Selection of relevant papers:

  • Milford, Michael, et al. "Learning spatial concepts from RatSLAM representations." Robotics and Autonomous Systems 55.5 (2007): 403-410.
  • Schulz, Ruth, et al. "Lingodroids: Studies in spatial cognition and language." Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011.

PlaceRecognition.com

I have started an online resource dedicated specifically to the place recognition problem. Place recognition is a well-defined but extremely challenging problem to solve in the general sense: given sensory information about a place, such as a photo, can a human, animal, robot or personal navigation aid decide whether that place is the same as any place it has previously visited or learnt, despite the vast range of ways in which the appearance of that place can change? Current approaches to the problem using GPS, cameras or lasers have one or more significant theoretical, technological or application-based limitations, including high cost, sensitivity to changing environmental conditions, lack of generality, training requirements and long recognition latencies.

The website is intended to act as a focused hub for place recognition research, and especially vision-based place recognition research.

You can view it at PlaceRecognition.com.

YouTube Channel

My YouTube channel is "MilfordRobotics". I post both research and teaching material on this channel.


Current Postdoctoral Fellows and Research Staff

Current PhD Students

  • Zetao Chen - Brain-based Sensor Fusion for Navigating Robots
  • Adam Jacobson - Brain-based Sensor Fusion for Navigating Robots
  • Jacob Bruce
  • James Mount
  • James Sergeant
  • Sean McMahon
  • Sourav Garg
  • John Skinner
  • Adam Tow
  • Fahimeh Rezazadegan
  • Daniel Richards

Alumni

  • Will Maddern - now project lead for the RobotCar project at Oxford University - did his PhD on Continuous Appearance-based Localisation and Mapping
  • Edward Pepperell - Visual navigation for sunny summer days and stormy winter nights, now in medical training
  • Stephanie Lowry - now a postdoc at Örebro University in Sweden
  • Arren Glover (Postdoctoral Research Fellow, now doing a postdoc with Dr Chiara Bartolozzi at the Italian Institute of Technology)
  • Mara Smeathers - Brain-based sensor fusion for navigating robots
  • Stephen Hausler - Brain-based sensor fusion for navigating robots
  • Julian Pattie - GPU-based spatial navigation algorithms 
  • Obadiah Lam (Research Assistant, 2014)
  • Mei Weng Brough-Smyth - Place Recognition Challenge
  • Dominic Dall'Osto - Sequence-based Image Alignment
  • Vick Noronha, Marina Zavarnitsyna, Kaitlyn Pickard, Mark Robinson, Mei Weng Brough-Smyth - Place Recognition Challenge for Humans

PhD Projects

I am looking for enthusiastic, talented people who are passionate about embarking on a long-term research project and journey (i.e. a PhD).

I currently have PhD positions available. Click here for further information, especially concerning minimum thresholds for applying.

Undergraduate Projects

I have a range of undergraduate final-year projects as part of QUT BEB801/802, and possibly vacation projects. Go here to check them out.

Teaching

I teach ENB339 Introduction to Robotics in Semester 2 each year. In it we learn:

  • Robot construction: build a robot using Lego and the NXT controller brick
  • Fundamentals of robotics: control the end point of a simple robot arm and make it follow a path (a minimal kinematics sketch follows this list)
  • Computer vision: interpret images to determine the size, shape and color of objects in the scene
  • Connecting computer vision to robotics: make your robot move to objects of a specific size, shape and color
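
The Python sketch below illustrates the arm-kinematics part of the unit: forward and inverse kinematics for a two-link planar arm, used to step the end point along a straight-line path. The link lengths and waypoints are hypothetical values for illustration, not the actual prac setup.

```python
import numpy as np

L1, L2 = 0.12, 0.10  # hypothetical link lengths in metres (e.g. a small Lego arm)

def forward(theta1, theta2):
    """End-point position of a two-link planar arm from its joint angles (radians)."""
    x = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
    y = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """Closed-form (elbow-down) inverse kinematics for the same arm."""
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    c2 = np.clip(c2, -1.0, 1.0)   # guard against points just outside the workspace
    theta2 = np.arccos(c2)
    theta1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(theta2), L1 + L2 * np.cos(theta2))
    return theta1, theta2

# Follow a short straight-line path by solving the inverse kinematics at each waypoint.
for x in np.linspace(0.10, 0.18, 5):
    t1, t2 = inverse(x, 0.05)
    print(f"x={x:.2f} m: theta1={np.degrees(t1):.1f} deg, theta2={np.degrees(t2):.1f} deg")
```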

Here's a video showing prac footage from the 2013 incarnation of the unit. Music is from: Pamgaea by Kevin MacLeod (incompetech.com). Licensed under Creative Commons: By Attribution 3.0 http://creativecommons.org/licenses/by/3.0/

 

Collaborators

I am actively collaborating with or have collaborated with the following researchers and institutions:

Harvard University, United States 
David Cox, Walter Scheirer and Eleonora Vig
School of Engineering and Applied Sciences and Department of Molecular and Cellular Biology at Harvard University

Boston University, United States
Michael Hasselmo and Ugur M. Erdem
Center for Memory and Brain and Graduate Program for Neuroscience at Boston University

Imperial College London, United Kingdom
Andrew Davison, Stefan Leutenegger, Hanme Kim

Edinburgh University, United Kingdom
Barbara Webb, Thomas Stone, Michael Mangan

Oxford University, United Kingdom
Paul Newman

MIT, United States
John Leonard

NASA JPL, United States
Abigail Allwood, David Thompson and Gary Doran

Caterpillar, Australia
Nigel Boswell and David Smith

The University of Adelaide, Australia
Chunhua Shen, Ian Reid

The University of Nottingham, United Kingdom 
Robert Oates, Graham Kendall and Jonathan M Garibaldi

Victoria University of Wellington, New Zealand
Henry Williams and Will Browne

University of Antwerp, Belgium
Rafael Berkvens, Herbert Peremans and Maarten Weyn

The Australian National University, Australia
Robert Mahony and Felix Schill

The University of Queensland, Australia
Janet Wiles, Allen Cheung, Peter Stratton, Christopher Nolan, Jason Mattingley and Oliver Baumann

Commonwealth Scientific and Industrial Research Organisation, Australia
Jonathan Roberts and Kane Usher

Photos

A collection of work-related photo albums and videos. I notice that we're usually so engrossed in our work that we rarely take photos of the process itself - most of these are from our free time.

ACRA2014, Melbourne


ICRA2014, Hong Kong

ISRR2013, Singapore

ICCV2013, Sydney


ACRA2013, Sydney


MIT Museum, Boston 2013


ICRA2012 and USA Work Trip


RSS2012, Sydney


FSR2012, Japan


ICRA2011 Shanghai


ACRA2011 Melbourne

MSR Summit 2013, Seattle

Thinking Systems Retreat 2010

Couran Cove, Queensland

 

QUT ECARD Retreat 2011

Kingscliff, Queensland

Moser lab and Hippocampus Workshop

Portugal and Norway, 2008

IROS2006, Beijing

ACRA2005, Sydney

Thinking Systems Retreat 2009

O'Reilly's, Queensland

ICRA2004

New Orleans, USA

 

Peer-Reviewed Publications

I have switched to Google Scholar to manage publication lists and citation analysis. The publicly visible link is available here. A selection of significant publications is listed below.

You can also access most of my publications at the QUT ePrints repository.

Software

I have produced open source software including OpenRatSLAM. There are also open source implementations of algorithms used in my other research, including OpenSeqSLAM by Niko Sunderhauf.

OpenRatSLAM is a fully open source implementation of the RatSLAM system, with the most recent release integrated with the Robot Operating System (ROS), and an older vanilla C++ version. Click on the link below to get it and find out more:

http://code.google.com/p/ratslam/

You can also find OpenRatSLAM on OpenSLAM.org.

  • OpenSeqSLAM is an open source Matlab implementation of the original SeqSLAM algorithm, written by Niko Sunderhauf.
  • openFABMAP is an open and modifiable source-code implementation of the Fast Appearance-based Mapping algorithm (FAB-MAP) originally developed by Mark Cummins and Paul Newman. openFABMAP was designed from published FAB-MAP theory and is for personal and research use.
  • A simple Matlab script I wrote to estimate your current-year citation count using Google Scholar.

Datasets and Downloads

Click here to access the datasets and downloads webpage.

Media Coverage

A smattering of media coverage of my research over the years.

Robots

Research Mapping

My research "Wordle" (Wordle.net) (click on the image to get a larger version):
