
News

This page is an archive of older news posts.

28 April 2015: CVPR Workshop Paper Accepted

My paper "Sequence Searching with Deep-learnt Depth for Condition- and Viewpoint-invariant Route-based Place Recognition" has been accepted to the 6th Computer Vision in Vehicle Technology Workshop (CVVT) at CVPR 2015. This is collaborative work with researchers at the Australian Centre for Visual Technologies at the University of Adelaide, our partners on the Australian Centre for Robotic Vision.


15 December 2014: Running dual accepted Place Recognition Workshops at CVPR2015 and ICRA2015


We (+Niko Sünderhauf, +Peter Corke, +Torsten Sattler) will be running dual Place Recognition workshops at CVPR2015 and ICRA2015, the premier large robotics and computer vision conferences.

These workshops follow the very well attended inaugural Place Recognition workshop we ran at ICRA2014. Guest speakers include computer vision and visual neuroscience expert +David Cox from Harvard University and robotic mapping expert +Paul Newman from Oxford University.

Watch Google Plus and the robotics and computer vision mailing lists for more details.

27 October 2014: National news interviews on ABC

Robotics and neuroscience are exciting fields to be in at the moment - I've done a lot of news interviews recently, including these on ABC National News and Nine News. The December one was a commentary on the recent exciting PNAS paper "Place cells in the hippocampus: Eleven maps for eleven rooms" by the 2014 Nobel Prize winners Edvard and May-Britt Moser, while the September one was on drawing inspiration from nature to create robotic technology.

  • December 9th - ABC News 24
  • September 29th - ABC News 24
  • September 22nd - Channel Nine News

18 November, 2014: Three best paper finalists for the 2014 Australasian Conference on Robotics and Automation

Three of my group's papers are finalists for the best paper award at the 2014 Australasian Conference on Robotics and Automation, to be held at the University of Melbourne December 2-4:

  • M. Milford, J. Firn, J. Beattie, A. Jacobson, E. Pepperell, E. Mason, M. Kimlin and M. Dunbabin, "Automated Sensory Data Alignment for Environmental and Epidermal Change Monitoring"
  • Z. Chen, O. Lam, A. Jacobson and M. Milford, "Convolutional Neural Network-based Place Recognition"
  • E. Pepperell, P. Corke and M. Milford, "Towards Vision-Based Pose- and Condition-Invariant Place Recognition along Routes"

27 October 2014: 4 papers accepted to the 2014 Australasian Conference on Robotics and Automation

My research group has just had 4/4 papers accepted to the 2014 Australasian Conference on Robotics and Automation, to be held at the University of Melbourne December 2-4.

One of the papers is a highly interdisciplinary collaboration between roboticists, environmental monitoring scientists, ecologists and skin cancer researchers:

  • M. Milford, J. Firn, J. Beattie, A. Jacobson, E. Pepperell, E. Mason, M. Kimlin and M. Dunbabin, "Automated Sensory Data Alignment for Environmental and Epidermal Change Monitoring"

My students have led 3 of the papers, involving research on camera-based car navigation, automated learning processes and deep learning:

  • S. Lowry, G. Wyeth and M. Milford, "Unsupervised Online Learning of Condition-Invariant Images for Place Recognition"
  • Z. Chen, O. Lam, A. Jacobson and M. Milford, "Convolutional Neural Network-based Place Recognition"
  • E. Pepperell, P. Corke and M. Milford, "Towards Vision-Based Pose- and Condition-Invariant Place Recognition along Routes"


16 September, 2014: Front page of The Australian IT

The Australian just ran a feature article on the front page of the IT section on the rodent- and human-inspired place recognition research I'm carrying out as part of my Australian Research Council Future Fellowship.

25 July, 2014: Awarded an Australian Research Council Future Fellowship


I've just received the news that I've been fortunate enough to be awarded an Australian Research Council Future Fellowship. ARC media release here.

Title: Superhuman place recognition with a unified model of human visual processing and rodent spatial memory

Brief Summary: This project will revolutionize our understanding of how humans and animals use vision to determine their location in the world. This understanding will lead to new computer algorithms that enable robots to navigate in any environmental conditions using cheap visual sensors, as well as to breakthroughs in our knowledge of the brain.

The fellowship and associated institutional support provide funding for salary, a postdoc, PhD students and collaborative travel from 2014 to 2018. Collaborators include Harvard University, Boston University, The University of Queensland and the University of Antwerp.


24 July 2014: Paper accepted to Philosophical Transactions of the Royal Society B

We've just had a review paper accepted to the journal Philosophical Transactions of the Royal Society B (the B is for Biological Sciences). We review common and contrasting principles of goal-directed navigation in the natural world and in artificial systems such as robots in an attempt to arrive at a common understanding.

Michael Milford, Ruth Schulz, "Principles of Goal-Directed Spatial Robot Navigation in Biomimetic Models", Philosophical Transactions of the Royal Society B (in press).


14 July, 2014: Collaborative paper with Boston University accepted to the journal Neurobiology of Learning and Memory

We've just had a collaborative paper accepted for publication in the journal Neurobiology of Learning and Memory with Professor Michael Hasselmo and Dr Murat Erdem of Boston University, combining and advancing BU's Hierarchical Look-Ahead Trajectory Model (HiLAM) with RatSLAM.


11 July 2014: ARC Centre of Excellence in Robotic Vision Retreat at O'Reillys

The chief investigators just spent two days at O'Reillys Rainforest Retreat at the first Centre of Excellence retreat, planning all aspects of how the centre will run. Photo album available here. Sessions ran from 8:30 am until about 9 pm, but we managed to fit in a couple of team bonding activities as well (Segways!).


24 June 2014: $20,000 Pilot Project Grant on "Automated Environmental Change Monitoring"

We've just been awarded a grant to conduct some innovative research and pilot trials in Automated Environmental Change Monitoring, in collaboration with expert field roboticist Matthew Dunbabin and leading ecologist Jennifer Firn. We are adapting some of the robotic navigation algorithms (SeqSLAM for example) to automatically align data from multiple surveying passes of an environment that is changing over time. Monitoring of environments and in particular environmental change is a highly topical scientific research area, both internationally and in Australia. This project will develop technology demonstrators for monitoring and quantifying environmental change over time in a range of ecological and environmental scenarios.

Pilot study results

16 June 2014: ICRA2014 Hong Kong Presentations on YouTube and Photos

ICRA was a great conference as usual this year. I've re-recorded two of my presentations - "Superhuman robot navigation with a Frankenstein model...and why I think it's a great idea" and "Condition-Invariant, Top-Down Visual Place Recognition" - and made them available on YouTube:

A photo album is available by clicking here, with highlights below. Photos show the two workshops I helped run, general conference activities, and the bits of Hong Kong I got to see.

 

19 May 2014: 1 Journal of Field Robotics paper with Harvard University accepted, 1 IROS paper with Antwerp University accepted

We've had two international collaborative papers accepted over the weekend:

The first paper is a collaborative venture with Harvard University and has been accepted into the Journal of Field Robotics:

  • Michael Milford, Eleonora Vig, Walter Scheirer and David Cox, "Vision-based SLAM in Changing Outdoor Environments", Journal of Field Robotics.

The second paper is a collaborative venture with Antwerp University, and is accepted into the 2014 International Conference on Intelligent Robots and Systems, to be held in Chicago in September 2014:

  • Rafael Berkvens, Adam Jacobson, Michael J Milford, Herbert Peremans, Maarten Weyn, "Biologically Inspired SLAM Using Wi-Fi", International Conference on Intelligent Robots and Systems.

30 April 2014: QUT is recruiting 5 Postdoctoral Research Fellows for the Australian Research Council Centre of Excellence for Robotic Vision!



The centre is a major long-term initiative for advancing robotic vision, based in Australia but with an international investigator team as well. The fellows will work and collaborate with 19 leading researchers in robotics and computer vision from 10 top universities and research institutions around Australia and the globe. Over the course of the centre, a total of 16 postdoctoral fellows will be hired.
 
Investigator Team

+Peter Corke, Ian Reid, +Tom Drummond, +Robert Mahony, +Gordon Wyeth, +Michael Milford, +Ben Upcroft, Anton van den Hengel, Chunhua Shen, Richard Hartley, Hongdong Li, +Stephen Gould, Gustavo Carneiro, +Paul Newman, Philip Torr, Francois Chaumette, Frank Dellaert, +Andrew Davison and Marc Pollefeys.

Organizations
Queensland University of Technology, The University of Adelaide, Monash University, the Australian National University, University of Oxford, INRIA Rennes Bretagne, Georgia Institute of Technology, Imperial College London, the Swiss Federal Institute of Technology Zurich, and National ICT Australia

The centre website and description is here: http://www.roboticvision.org
Job description: http://www.seek.com.au/job/26434943

I'm leading a 7 year project on Robust Vision which will employ one of the postdocs and graduate students, involving extensive collaboration with other projects and themes in the centre: http://roboticvision.org/projects/rv3-learning-spatio-temporally-robust-visual-representations-of-place/

We're also interested in top potential graduate students; please see my general graduate requirements here: Applications Milford

M

14 January 2014: 5 papers, 2 workshops, 1 organized session and 1 Pecha Kucha presentation accepted to the 2014 International Conference on Robotics and Automation

5 papers led by my research group have been accepted into the 2014 International Conference on Robotics and Automation, to be held in Hong Kong in May/June. The papers are:

  • "Condition-Invariant, Top-Down Visual Place Recognition," Michael J Milford, Walter Scheirer, Eleonora Vig, David Cox (an ongoing collaboration with neuroscientists and computer vision researchers at Harvard University)

  • "Towards Training-Free Appearance-Based Localization: Probabilistic Models for Whole-Image Descriptors," Stephanie Lowry, Gordon Wyeth, Michael J Milford
  • "Transforming Morning to Afternoon using Linear Regression Techniques," Stephanie Lowry, Michael J Milford, Gordon Wyeth
  • "All-Environment Visual Place Recognition with SMART," Edward Pepperell, Peter Corke, Michael J Milford
  • "Multi-scale Bio-inspired Place Recognition," Zetao Chen, Adam Jacobson, Ugur M. Erdem, Michael E. Hasselmo and Michael Milford (an ongoing collaboration with neuroscientists at Boston University)

Well done to PhD students Steph, Ed, Zetao and Adam for their papers.

I am also co-chairing two accepted workshops and an organized session with the University of California, Boston University, the University of Plymouth, the Hong Kong University of Science and Technology and the Chemnitz University of Technology.

Finally, I have been selected as one of six inaugural Pecha Kucha speakers at the conference, out of a pool of sixteen proposals. 20 slides, 20 seconds per slide, should be very manic but fun. My provisional title is:

  • "Superhuman Robot Navigation with a Frankenstein Model"

19 December 2013: Australian Research Council Centre of Excellence for Robotic Vision funded for $19,000,000 over the next 7 years.


Yesterday we received the happy news that our Australian Research Council Centre of Excellence for Robotic Vision has been funded for $19,000,000 over the next 7 years.

The project will be led by Professor Peter Corke at the Queensland University of Technology and involves a total of 13 chief investigators and 6 partner investigators from 10 organisations spanning robotics and computer vision across the globe.

Centre Overview: The Centre’s research will allow robots to see, to understand their environment using the sense of vision. This is the missing capability that currently prevents robots from performing useful tasks in the complex, unstructured and dynamically changing environments in which we live and work.

The entire team of investigators comprises: Peter Corke, Ian Reid, Tom Drummond, Robert Mahony, Gordon Wyeth, Michael Milford, Ben Upcroft, Anton van den Hengel, Chunhua Shen, Richard Hartley, Hongdong Li, Stephen Gould, Gustavo Carneiro, Paul Newman, Philip Torr, Francois Chaumette, Frank Dellaert, Andrew Davison and Marc Pollefeys.

The organization list includes: Queensland University of Technology, The University of Adelaide, Monash University, the Australian National University, University of Oxford, INRIA Rennes Bretagne, Georgia Institute of Technology, Imperial College London, the Swiss Federal Institute of Technology Zurich, and National ICT Australia.

18 December 2013: ISRR2013, Singapore

I was an invited speaker at the 2013 International Symposium on Robotics Research in Singapore this week, photos below. I also gave an invited presentation at A*Star.

ISRR2013, Singapore

3 December 2013: New Robotnik Robot

We've taken delivery of our new Robotnik Summit XL robot.

09 December 2013: ICCV2013 and ACRA2013, Sydney

I've just returned from an interesting week at the 2013 Australasian Conference on Robotics and Automation and the 2013 International Conference on Computer Vision, both held in Sydney. Photos in the photo section. I had the pleasure of seeing some of the very top computer vision researchers talk at ICCV including Jitendra Malik. Also had a great time at the Computer Vision in Vehicle Technology workshop and gave an invited presentation.

3 December 2013: ACRA Best Paper Award and Finalist

My PhD student Zetao Chen has won the inaugural Ray Jarvis best paper award at the 2013 Australasian Conference on Robotics and Automation for the paper:

"Towards Bio-inspired Place Recognition over Multiple Spatial Scales," Zetao Chen, Adam Jacobson, Ugur Murat Erdem, Michael Hasselmo and Michael Milford

This paper was a collaborative paper led by QUT with neuroscientists at Boston University. The award has been established in memory of Emeritus Professor Ray Jarvis, a pioneer in Australian robotics.

We also had two other best paper finalists:

Best Paper Finalist:

"Towards Condition-Invariant, Top-Down Visual Place Recognition," Michael Milford, Walter Scheirer, Eleonora Vig and David Cox

This paper was a result of collaborative work led by QUT with computer vision and neuroscience researchers at Harvard University.

Best Student Paper Finalist:

"Towards Bio-inspired Place Recognition over Multiple Spatial Scales," Zetao Chen, Adam Jacobson, Ugur Murat Erdem, Michael Hasselmo and Michael Milford

22 November 2013:   Adam's paper accepted to the Journal of Field Robotics

Adam Jacobson's paper "Autonomous Multi-Sensor Calibration and Closed Loop Fusion for SLAM" has just been fully accepted to the Journal of Field Robotics - an especially notable achievement for Adam considering he is only in the first year of his PhD studies!

17 October 2013: 4 papers accepted to the 2013 Australasian Conference on Robotics and Automation (ACRA)


Our research group hit 100% for ACRA, with 4 out of 4 submitted papers being accepted. Congratulations to Steph, Ed, Zetao and Adam for their great work in leading papers.

Two of these papers resulted from ongoing collaborations with top international neuroscience and computer vision laboratories at Harvard University (Professor David Cox, Dr Walter Scheirer and Dr Eleonora Vig) and Boston University (Professor Michael Hasselmo and Dr Ugur Murat Erdem). Paper titles are as follows:

  • "Towards Condition-Invariant, Top-Down Visual Place Recognition," Michael Milford, Walter Scheirer, Eleonora Vig and David Cox
  • "Bio-inspired Place Recognition over Multiple Spatial Scales," Zetao Chen, Adam Jacobson, Ugur Murat Erdem, Michael Hasselmo and Michael Milford
  • "Towards Persistent Visual Navigation using SMART," Edward Pepperell, Peter Corke and Michael Milford
  • "Training-Free Probability Models for Whole-Image Based Place Recognition," Stephanie Lowry, Gordon Wyeth and Michael Milford

27 September 2013: Invited Talk at the 2013 International Symposium of Robotics Research (ISRR) in Singapore, December 16-19th

I will be giving one of the invited talks at the 2013 International Symposium of Robotics Research (ISRR) in Singapore, December 16-19th. ISRR is the flagship symposium of the International Foundation of Robotics Research (IFRR, http://www.ifrr.org/). Past invited speakers have included Sebastian Thrun and Hugh Durrant-Whyte.

16 September 2013: Back home from my Sabbatical in Boston

Today was my first day back at QUT after a 6 month sabbatical in Boston working primarily with neuroscience and computer science researchers at Harvard University and Boston University. Hopefully you'll be able to see some of the exciting collaborative research outcomes in press in the near future. Here's a preview of some of the work:

The first image shows some computer vision research I've been working on with David Cox, Walter Scheirer and Eleonora Vig at Harvard University. The second shows some hybrid modelling work combining RatSLAM with a high fidelity rodent navigation model with Michael Hasselmo and Murat Erdem at Boston University, as part of a large ONR MURI grant with John Leonard at MIT.

31 August 2013: Invited Talk at 4th IEEE Workshop on Computer Vision for Vehicle Technology at ICCV

I will be giving an invited talk at the 4th IEEE Workshop on Computer Vision for Vehicle Technology at the 2013 International Conference on Computer Vision on December 8th in Sydney. Hope to see you there!

19 July 2013: I have just returned from the 2013 Microsoft Research Faculty Summit held in Redmond, Washington where I received my Microsoft Faculty Fellowship. Highlights included:

  • Front row seats to Bill Gates' talk - his first appearance at the summit in 5 years - and Gordon asking him how the robot revolution was going
  • Meeting and chatting with Paul Ginsparg, creator of ArXiv
  • Meeting Rick Rashid, co-creator of Alto Trek, one of the first networked multiplayer computer games
  • Getting to see a range of other luminaries in computer science history, including Leslie Lamport, creator of LaTeX amongst other things
  • Seeing computer vision researchers such as Fei-Fei Li talk
  • Getting to visit the Microsoft Robotics and Augmented Reality groups

There's a video of proceedings available here, and below is a photo of the Faculty Fellows, the lake cruise with Mt Rainier in the background, and Bill Gates giving us a smile.

01 July 2013: Steph and Adam have just had their papers accepted to the 2013 Intelligent Robots and Systems conference to be held in Tokyo, Japan in November 2013. Their two papers are:

Adam Jacobson, Zetao Chen and Michael Milford, "Autonomous Movement-Driven Place Recognition Calibration for Generic Multi-Sensor Robot Platforms"

and

Stephanie Lowry, Gordon Wyeth and Michael Milford, "Odometry-driven Inference to Link Multiple Exemplars of a Location".

17 June 2013: I have been awarded a 2013 Microsoft Research Faculty Fellowship, one of seven fellowships awarded internationally. The fellowship is intended to "stimulate and support creative research undertaken by promising researchers who have the potential to make a profound impact on the field of computing in their research disciplines" and provides a cash award of $100,000 over two years. The fellowship was only made possible through the generous support of Gordon Wyeth, David Cox, Michael Hasselmo and Peter Corke.

15 May 2013: Just arrived back in Boston after a fun week at the International Conference on Robotics and Automation in Karlsruhe, Germany. Lots of interesting research and cool robots. Highlights included inspiring plenaries by Rodney Brooks and an unexpected (and a first for me in 11 years of going to conferences) massive fireworks display on the opening night. There's also a short video of the robots, talks and conference events available here.

24 April 2013: I will be presenting four papers at the International Conference on Robotics and Automation in Karlsruhe, Germany May 6-10.

Wednesday

11:30 AM - 12:45 PM: Paper WeCInt.37, M. Milford and A. Jacobson, "Brain-Inspired Sensor Fusion for Navigating Robots".

5:00 PM - 6:15 PM: Paper WeFInt.16, M. Milford, I. Turner and P. Corke, "Long Exposure Localization in Darkness Using Consumer Cameras".

Friday

11:00 AM - 11:20 AM: Workshop on Long-Term Autonomy, M. Milford, A. Jacobson, "Calibration and Continual Weighting of Multisensory Input for Lifelong Robot Mapping and Navigation"

12:00 PM - 12:30 PM: Workshop on Unconventional Approaches to Robotics, Automation and Control Inspired by Nature, M. Milford, A. Jacobson, G. Wyeth, "Using Performance-Based Evaluation to Close the Loop between Biological and Robotic Navigation".

If you want to meet up to have a chat / drink / meal, please ping me on e-mail and we'll work out a time.

23 April 2013: My paper titled "Vision-based Place Recognition: How Low Can You Go?" has just been accepted to the International Journal of Robotics Research, the top ranked robotics journal.

Abstract: In this paper we use the algorithm SeqSLAM to address the question: how little visual information, and of what quality, is needed to localize along a familiar route? We conduct a comprehensive investigation of place recognition performance on seven datasets while varying image resolution (primarily 1 to 512 pixel images), pixel bit depth, field of view, motion blur, image compression and matching sequence length. Results confirm that place recognition using single images or short image sequences is poor, but improves to match or exceed current benchmarks as the matching sequence length increases. We then present place recognition results from two experiments where low-quality imagery is directly caused by sensor limitations; in one, place recognition is achieved along an unlit mountain road using noisy, long-exposure blurred images, and in the other, two single-pixel light sensors are used to localize in an indoor environment. We also show failure modes caused by pose variance and sequence aliasing, and discuss ways in which they may be overcome. By showing how place recognition along a route is feasible even with severely degraded image sequences, we hope to provoke a re-examination of how we develop and test future localization and mapping systems.
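The core idea behind sequence-based matching can be sketched in a few lines. This is not the paper's SeqSLAM implementation (which also normalizes image patches and searches over a range of trajectory speeds); it is a minimal illustration, with function names of my own choosing, of why summing match scores over a sequence of frames is more robust than matching a single image:

```python
import numpy as np

def image_difference(a, b):
    """Mean absolute pixel difference between two equally sized images."""
    return float(np.mean(np.abs(a.astype(float) - b.astype(float))))

def best_route_match(query_seq, reference_imgs, seq_len):
    """Slide a window of seq_len frames along the reference route and return
    the offset whose summed frame-by-frame difference to the query is lowest.
    Summing over a sequence means no single ambiguous frame can dominate."""
    best_offset, best_score = None, float("inf")
    for offset in range(len(reference_imgs) - seq_len + 1):
        score = sum(
            image_difference(query_seq[i], reference_imgs[offset + i])
            for i in range(seq_len)
        )
        if score < best_score:
            best_offset, best_score = offset, score
    return best_offset, best_score
```

Even with tiny, heavily downsampled images, individual frames that match several places equally well are disambiguated by their neighbours in the sequence, which is the effect the abstract describes.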

A couple of screenshots from videos of the work are shown here:

 

06 February 2013: Adam Jacobson and I spent yesterday afternoon shooting a story with Channel 10's national science show Scope, the second story we've done with them in the past couple of years. It generally takes half a day or a full day to film a story that will air for about four minutes.

8 January 2013: I have just had 2 papers accepted to the 2013 International Conference on Robotics and Automation, to be held in Karlsruhe, Germany. Details of the accepted papers are below:

  • Long Exposure Localization in Darkness Using Consumer Cameras, Michael Milford, Ian Turner, Peter Corke. This was a collaborative paper between the School of Electrical Engineering and Computer Science (Corke, Milford) and Ian Turner, who is head of the Mathematical Sciences school at QUT. We present research in which we dial up the maximum exposure and gain ratings of cheap consumer cameras to enable localization in almost pitch-black environments.
  • Brain-inspired Sensor Fusion for Navigating Robots, Michael Milford, Adam Jacobson. This was a paper written together with final-year undergraduate student Adam Jacobson, who is commencing a PhD with me in 2013. We present research showing how a robot can perform SLAM in an environment without any knowledge of what or how many sensors it has, simply by analyzing the utility of each sensor's raw data readings for performing localization.
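One way to picture "analyzing the utility of each sensor's raw data readings for localization" is to score a sensor by how well its readings separate same-place from different-place comparisons across two traverses of a route. The scoring heuristic and function names below are my own illustration, not the method from the paper:

```python
import numpy as np

def localization_utility(readings_a, readings_b):
    """Score a sensor by comparing two traverses of the same route.

    readings_a, readings_b: (n_places, dim) arrays, row i recorded at place i.
    A useful sensor gives small distances between revisits of the same place
    and large distances between different places. Returns roughly [0, 1]."""
    diffs = np.linalg.norm(readings_a[:, None, :] - readings_b[None, :, :], axis=2)
    same = np.diag(diffs).mean()                            # same-place distances
    other = diffs[~np.eye(len(diffs), dtype=bool)].mean()   # cross-place distances
    return max(0.0, 1.0 - same / (other + 1e-9))

def fuse_scores(per_sensor_scores, weights):
    """Combine per-sensor place-match scores, weighted by estimated utility."""
    w = np.asarray(weights) / (np.sum(weights) + 1e-9)
    return float(np.dot(w, per_sensor_scores))
```

A sensor whose readings are pure noise scores near zero and contributes little to the fused place-match score, so the robot needs no prior model of what each sensor is.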

7 January 2013: Our Autonomous Robots paper "OpenRatSLAM: An Open Source Brain-Based SLAM System", David Ball, Scott Heath, Janet Wiles, Gordon Wyeth, Peter Corke and Michael Milford, is now available online here. It is part of an Autonomous Robots Special Issue on Open Source Software-Supported Robotics Research. This was a paper led by a colleague David Ball collaborating with researchers from the University of Queensland Scott Heath and Janet Wiles.

Abstract: RatSLAM is a navigation system based on the neural processes underlying navigation in the rodent brain, capable of operating with low resolution monocular image data. Seminal experiments using RatSLAM include mapping an entire suburb with a web camera and a long term robot delivery trial. This paper describes OpenRatSLAM, an open-source version of RatSLAM with bindings to the Robot Operating System framework to leverage advantages such as robot and sensor abstraction, networking, data playback, and visualization. OpenRatSLAM comprises connected ROS nodes to represent RatSLAM's pose cells, experience map, and local view cells, as well as a fourth node that provides visual odometry estimates. The nodes are described with reference to the RatSLAM model and salient details of the ROS implementation such as topics, messages, parameters, class diagrams, sequence diagrams, and parameter tuning strategies. The performance of the system is demonstrated on three publicly available open-source datasets.

Here's a screenshot of OpenRatSLAM in action on an iRat robot, as well as sequence diagram:

OpenRatSLAM is a fully open source implementation of the RatSLAM system, with the most recent release integrated with the Robot Operating System (ROS), and an older vanilla C++ version. Click on the link below to get it and find out more:

http://code.google.com/p/ratslam/

You can also find OpenRatSLAM on OpenSLAM.com.

20 November 2012: My Professional Development Leave has been approved by QUT, so I'm very much looking forward to spending 6 months in Boston collaborating with the David Cox laboratory at Harvard University, the Michael Hasselmo lab at Boston University and the John Leonard lab at the Massachusetts Institute of Technology. I will be combining the bio-inspired rodent vision systems being developed in the Cox lab with RatSLAM to create a complete bio-inspired robot navigation system and investigating any navigation performance breakthroughs that can be leveraged from recent neuroscience discoveries regarding grid cells with the Hasselmo and Leonard labs.

MIT Stata center, view of Boston, parks lining the Charles River and Fenway Park.

14 November 2012: I've just received delivery of an Optrix PI450 Infrared Camera for use in our robot and personal navigation projects as well as environmental monitoring research. It is able to pick up very fine differences in temperature and should be especially useful at night. Best not to point it at oneself when testing it as the image is a little freaky.

12 October 2012: I've just had the good news that three student-led papers have been accepted to the 2012 Australasian Conference on Robotics and Automation (ACRA). The papers are:

S. Lowry, G. Wyeth and M. Milford, "CAT-GRAPH+: Towards Odometry-driven Place Consolidation in Changing Environments"

A. Jacobson and M. Milford, "Towards Brain-based Sensor Fusion for Navigating Robots"

H. Williams, W. Browne and M. Milford, "Image Salience Weighting for Improving Appearance-Based Place Recognition using a Supervised Classifier System"

04 October 2012: I've recently finished teaching Introduction to Robotics ENB339 for 2012. Below is a video of the students' robots performing the practical task, using many varying robot designs and algorithmic approaches. Many of the students have minimal programming or Lego experience but are able to, in only a few hours, build a fully functional robot that can perform the task accurately and at relatively high speed (we used a Matlab toolbox to control the robot so there was some unavoidable latency).

20 August 2012: We've just had a collaborative paper led by colleague Allen Cheung at the University of Queensland "Maintaining a Cognitive Map in Darkness: The Need to Fuse Boundary Knowledge with Path Integration" published in PLoS Computational Biology.

In the paper we show that by combining idiothetic path integration and boundary information in an arena, a robot is able to indefinitely maintain stable place fields.
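The intuition can be shown with a toy one-dimensional model (my own illustration, not the model from the paper): path integration alone accumulates odometry error without bound, while fusing boundary contact with a known wall keeps the position estimate, and hence the place fields, stable:

```python
def navigate(steps, arena_len=10.0, odom_bias=0.005):
    """Toy 1-D arena: the robot shuttles between two walls.

    odom_bias models a small systematic odometry error per step.
    Returns (max error with path integration only,
             max error when boundary contact corrects the estimate)."""
    true_pos = est_pi = est_fused = arena_len / 2
    v = 0.3
    err_pi = err_fused = 0.0
    for _ in range(steps):
        if not (0.0 <= true_pos + v <= arena_len):
            v = -v                 # the robot turns at the wall...
            est_fused = true_pos   # ...and the known boundary resets its estimate
        true_pos += v
        measured = v + odom_bias   # biased odometry reading
        est_pi += measured         # path integration only: error grows forever
        est_fused += measured      # fused: error grows only between wall contacts
        err_pi = max(err_pi, abs(est_pi - true_pos))
        err_fused = max(err_fused, abs(est_fused - true_pos))
    return err_pi, err_fused
```

With path integration alone the error grows linearly with the number of steps, whereas the boundary-corrected estimate's error is bounded by the drift accumulated in a single wall-to-wall traverse, which is the indefinite stability the paper demonstrates in a far richer setting.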

07 August 2012: I've posted a video presentation of the SeqSLAM system on my YouTube channel - click here to go watch it.

03 August 2012: I've posted a video presentation of the RatSLAM system on my YouTube channel - click here to go watch it.

26 July 2012: I have been fortunate enough to be promoted to "Senior Lecturer".

25 July 2012: I just got back from a week and a half in Japan, where I attended the international conference on Field and Service Robotics (FSR), giving one presentation, and visited the National Institute of Advanced Industrial Science and Technology (AIST) in Tsukuba outside of Tokyo where I gave another presentation. Japan has a wonderful robotics culture. A few photos below, from top left clockwise they are: the HRP-4C humanoid robot, presenting at FSR, group photo after my talk at AIST, dancing with a robot at Tohoku University in Sendai, one of the many rovers and rescue robots at Tohoku University, and the amazing view (one of the "three views of Japan") from the conference hotel.

18 July 2012: Last week I attended Robotics: Science and Systems in Sydney along with most of the QUT robotics lab. RSS is a boutique robotics conference in between the huge ICRA and IROS conferences and small local conferences such as ACRA. I gave one of the 25-minute award candidate talks, a presentation at the Bio-Inspired Robotics workshop and a poster presentation. A collage of photos from the conference is below, in clockwise order from the top left: lunch at the bio-inspired workshop, hiding behind a sniper target robot, the conference banquet at the Australian Museum, chatting with Anders Sandberg, an invited speaker from the Future of Humanity Institute, at the Sydney Harbour Bridge, running the robot trivia quiz on the opening night, giving the main conference talk, giving the workshop talk, and the aesthetically pleasing quadrangle where the conference was held at the University of Sydney.

06 July 2012: A special "Future Transport" episode of Channel 10's Scope national science television show did a story on my SeqSLAM vision-based GPS research.

You can watch the story (including the awesome segue from Andy Keir of QUT's ARCAA) at this link, starting at 1:30 in.

28 May 2012: I just got back from a whirlwind work trip to the United States including visits to Los Angeles, San Francisco, Pittsburgh, Boston and Saint Paul. The trip was centered around the International Conference on Robotics and Automation (ICRA) in Saint Paul, but also included visits to laboratories at Harvard, MIT, CMU, Google, the National Robotics Engineering Center, Boston University, the University of Minnesota and the University of California. Needless to say, after nine presentations and all those lab visits I'm a bit worn out but very happy with how the trip went. Highlights include being taken for a drive by the Google Self Driving Car, and winning the Best Robot Vision Paper Award at the conference.

There's a gallery of work-related photos available here (click on the first image), and a single collage of non-work related stuff I did in my spare time on the trip (second picture):



Recreational photos show: a huge cruise ship sliding under the Golden Gate Bridge, model battleships fighting at the Maker Faire festival in the San Francisco Bay Area, relaxing on a Californian beach, Pigeon Point lighthouse on the Californian coast, a riverside park vista in Boston, a very relaxed half-tonne polar bear at the Saint Paul zoo, the vista from the Getty Center in Los Angeles, manning the machine gun in a B-24 Liberator bomber, the Getty Center again, picking locks at the Maker Faire festival, standing in front of a huge coastal redwood in Henry Cowell Redwoods State Park, standing in front of an active B-17 bomber, a poor photo capture of the annular solar eclipse, and relaxing on the Golden Gate Bridge.

02 May 2012: I've just been informed that one of my papers accepted to the Robotics: Science and Systems conference, M. Milford, "Visual Route Recognition with a Handful of Bits", has been nominated for the best paper award. I'll be giving a special best paper candidate oral presentation of 20-25 minutes, rather than the customary e-poster.

19 April 2012: I have just had two papers accepted to the Robotics: Science and Systems conference, to be held in Sydney from July 9-13. The two papers are:

  • M. Milford, "Visual Route Recognition with a Handful of Bits"
  • W. Maddern, M. Milford, G. Wyeth, "Persistent Localization and Mapping with a Continuous Appearance-based Topology"

Here's a picture from the first paper, which is about pushing the lower boundaries of how much information you really need to do vision-based localization:

Hope to see you in Sydney!

15 April 2012: I've just been informed that one of my papers accepted to the International Conference on Robotics and Automation, Michael Milford, Gordon Wyeth, "SeqSLAM: Visual Route-Based Navigation for Sunny Summer Days and Stormy Winter Nights", has been nominated for the Best Robot Vision paper award. If you're interested, please come along to the presentation, which is on Tuesday May 15th at 5:15 pm in session TuD07, "Perception for Autonomous Vehicles".

31 March 2012: I have just had a paper accepted to the International Conference on Field and Service Robotics, to be held in Matsushima, Japan. The paper is titled "Featureless Visual Processing for SLAM in Changing Outdoor Environments"; the abstract is given below:

Vision-based SLAM is mostly a solved problem providing clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can result in these conditions not being met. High speed transit on rough terrain can lead to image blur and under/over exposure, problems that cannot easily be dealt with using low cost hardware. Furthermore, recently there has been a growth in interest in lifelong autonomy for robots, which brings with it the challenge in outdoor environments of dealing with a moving sun and lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low cost cameras. The approach combines low resolution imagery with the SLAM algorithm, RatSLAM. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to only learning visual scenes at one time of day, and show that the system is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
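The localization described in the abstract compares heavily downsampled, low-resolution images directly, rather than extracting visual features. A minimal sketch of that style of comparison is below (this is my own illustrative code, not the paper's implementation; the image size, the patch size and the `compare_scenes` name are all assumptions):

```python
import numpy as np

def compare_scenes(img_a, img_b, patch_size=8):
    """Compare two low-resolution greyscale scenes.

    Both images are assumed to be already downsampled (e.g. to ~32x24
    pixels). Normalising each small patch independently gives some
    robustness to local lighting change. Returns a mean absolute
    difference: lower means more similar.
    """
    def patch_normalise(img):
        img = img.astype(np.float64)
        out = np.zeros_like(img)
        h, w = img.shape
        for y in range(0, h, patch_size):
            for x in range(0, w, patch_size):
                patch = img[y:y + patch_size, x:x + patch_size]
                std = patch.std()
                # Zero-mean, unit-variance within each patch; flat
                # patches (zero variance) are mapped to zero.
                out[y:y + patch_size, x:x + patch_size] = (
                    (patch - patch.mean()) / std if std > 0 else 0.0
                )
        return out

    return np.mean(np.abs(patch_normalise(img_a) - patch_normalise(img_b)))
```

A stored map of visual scenes could then be searched by computing this score against every stored image and taking the minimum, which is the general flavour of appearance-based localization at low resolutions.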

Here's a picture of the "experimental platform" (just an RC car with a webcam strapped to the top) and the type of images the SLAM system processed:

13 March 2012: The Australian newspaper just ran a piece on my vision-based GPS research as the main story on the front page of its IT section. The full article can be accessed by clicking here (if you have a subscription).

10 February 2012: I just received delivery of our new Pioneer 3DX robot. The robot will be a critical research platform for both my vision-based and sensor fusion research over the next three years. This particular model is equipped with front and rear sonar and bump sensors, and Will Maddern has attached a SICK laser as well. A panoramic and fish-eye imaging set-up will be added next, and we are considering other sensory modalities such as auditory sensors.

09 January 2012: I have just had three papers accepted to the 2012 International Conference on Robotics and Automation, to be held in St. Paul, Minnesota. Details of the accepted papers are below:

  • SeqSLAM: Visual Route-Based Navigation for Sunny Summer Days and Stormy Winter Nights, Michael Milford, Gordon Wyeth
  • Capping Computation Time and Storage Requirements for Appearance-based Localization with CAT-SLAM, William Maddern, Michael Milford, Gordon Wyeth
  • OpenFABMAP: An Open Source Toolbox for Appearance-based Loop Closure Detection, Arren Glover, William Maddern, Michael Warren, Stephanie Reid, Michael J Milford, Gordon Wyeth

The SeqSLAM paper presents some of the foundation theory that underpins my Discovery Early Career Researcher Award fellowship for 2012-2014.

As well as attending the conference, I am giving an invited presentation at Boston University and plan to visit several other laboratories in North America as well.

13 December 2011: I have just returned from the Australasian Conference on Robotics and Automation, 2011, which was held at Monash University in Melbourne, Australia. I presented two papers:

Towards Condition-Invariant Sequence-Based Route Recognition, Michael Milford

and

Feature-based Visual Odometry and Featureless Place Recognition for SLAM in 2.5D Environments, Michael Milford, David McKinnon, Michael Warren, Gordon Wyeth and Ben Upcroft

I also took a number of photos, which can be viewed here.


14 November 2011: I have just been awarded an Australian Research Council Discovery Early Career Researcher Award fellowship for 2012 to 2014. Details below:

Visual navigation for sunny summer days and stormy winter nights

This project will develop novel visual navigation algorithms that can recognize places along a route, whether travelled on a bright sunny summer day or in the middle of a dark and stormy winter night. Visual recognition under any environmental conditions is a holy grail for robotics and computer vision, and is a task far beyond current state of the art algorithms. Consequently robot and personal navigation systems use GPS or laser range finders, missing out on visual sensor advantages such as cheap cost and small size. This project will set a new benchmark in visual route recognition, and in doing so enable the extensive use of low cost visual sensors in robot and personal navigation systems under wide ranging environmental conditions.

The award is for three years and worth $375,000; it will enable me to conduct research full-time and will help fund PhD students and essential robotics and computer vision research equipment.


1 November 2011: I have just been awarded an Australian Research Council Discovery Project Grant for 2012 to 2014. Details below:

Brain-based sensor fusion for navigating robots

This project will develop new methods for sensor fusion using brain-based algorithms for calibration, learning and recall. Current robotic sensor fusion techniques are primarily based on fusing depth or feature data from range and vision sensors. These approaches require manual calibration and are restricted to environments with structured geometry and reliable visual features. In contrast, rats rapidly calibrate a wide range of sensors to learn and navigate in environments ranging from a pitch-black sewer in Cairo to a featureless desert in America. The project will produce robots that, like rats, autonomously learn how best to use their sensor suites, enabling unsupervised, rapid deployment in a range of environments.

The award is for three years and worth $140,000; it will enable me to fund PhD students and essential robotics research equipment.


18 October 2011: I have just had two papers accepted to the Australasian Conference on Robotics and Automation (ACRA). The two papers are:

  • Towards Condition-Invariant Sequence-Based Route Recognition, Michael Milford - this paper is all about pushing the boundaries on visual navigation under changing environmental conditions, by exploiting some key properties of image sequences.
  • Feature-based Visual Odometry and Featureless Place Recognition for SLAM in 2.5D Environments, Michael Milford, David McKinnon, Michael Warren, Gordon Wyeth, Ben Upcroft - this paper is all about combining the RatSLAM brain-based robot navigation system with some state-of-the-art stereo visual odometry to create accurate, pseudo-3D metric maps.

ACRA is being held at Monash University in Melbourne from 7-9 December, 2011 and I'll be attending the entire conference. Drop me a line if you want to meet up to discuss work, collaboration, or even just to have a friendly drink.

5 October 2011: We have just had a paper published in the journal PLoS ONE, headed by my collaborator Peter Stratton at the University of Queensland. The paper, "Using Strategic Movement to Calibrate a Neural Compass: A Spiking Network for Tracking Head Direction in Rats and Robots", is about how an animal or robot can use a series of movements to calibrate and tune its navigation system. The paper can be found here. Full citation data:

Stratton P, Milford M, Wyeth G, Wiles J, 2011 Using Strategic Movement to Calibrate a Neural Compass: A Spiking Network for Tracking Head Direction in Rats and Robots. PLoS ONE 6(10): e25687. doi:10.1371/journal.pone.0025687

Here's one of the figures:

7 September 2011: We have a book chapter in a fantastic new book coming out; I just received the author's copy in the mail today. The book is titled "Neuromorphic and Brain-Based Robots", edited by Jeffrey L. Krichmar and Hiroaki Wagatsuma. Our chapter is in the section "Brain-based robots: architectures and approaches" and is titled "The RatSLAM Project: robot spatial navigation". There's a picture of the cover below; you can see more details and get it from Cambridge University Press or from Amazon.

06 July 2011: I have just started a new Lecturer faculty position in the School of Engineering Systems at the Queensland University of Technology. As a QUT Early Career Academic Recruitment and Development member, I will be continuing to focus on research in robotics and neuroscience, but will also be doing some teaching. In semester 2, 2011 I am teaching ENB339 - Introduction to Robotics.

17 May 2011: I just got back from the 2011 International Conference on Robotics and Automation in Shanghai, China. It was a fantastic week: I gave three presentations and a poster, and had two other co-author presentations. Photo collage below (click on the thumbnail to view the full-size collage):


21 March 2011: I just attended a 2 day residential for the QUT Early Career Academic Recruitment and Development (ECARD) Program with 63 other young academics. The residential was held at Peppers Salt Resort & Spa at Kingscliff, on the Tweed coast. A few photos below:

15 March 2011: I will be giving three presentations and presenting a poster at the International Conference on Robotics and Automation in Shanghai, China this May:

  • 2011 ICRA Workshop on Long-term Autonomy - invited presentation, Monday May 9th
  • 2011 Main ICRA Program - "Aerial SLAM with a Single Camera Using Visual Expectation", 08:50-09:05, Paper WeA101.3, Wednesday May 11th
  • 2011 ICRA Workshop on Bio-mimetic and Hybrid Approaches to Robotics - Presentation on "RatSLAM: Using Models of Rodent Hippocampus for Robot Navigation", Friday, May 13th
  • 2011 ICRA Workshop on Bio-mimetic and Hybrid Approaches to Robotics - Poster on "Closing the Loop Between Neural Navigation Mechanisms and Robotics", Friday, May 13th

Workshop websites

Please feel free to come and say hello, or catch up at one of the social events, or even share a relaxing drink afterwards!

7 January 2011: I just had three papers accepted to ICRA 2011, the International Conference on Robotics and Automation. ICRA will be held in May in Shanghai, China. The three papers accepted are:

  • Aerial SLAM with a Single Camera using Visual Expectation, Michael Milford, Felix Schill, Peter Corke, Robert Mahony, Gordon Wyeth. This paper is the result of collaborative research involving the Queensland University of Technology and the Australian National University.
  • Continuous Appearance-based Trajectory SLAM, William Maddern, Michael Milford, Gordon Wyeth. This paper was the result of core work carried out by one of our PhD students, Will Maddern.
  • Lingodroids: Studies in Spatial Cognition and Language, Ruth Jennifer Schulz, Arren Glover, Michael Milford, Gordon Wyeth, Janet Wiles. This paper was the result of collaborative work between the Complex Systems group at the University of Queensland and the Queensland University of Technology.

15 November 2010: We have just had an article published in the journal PLoS Computational Biology. The article presents cross-disciplinary research involving robotics, animal navigation and neuroscience. Browse down to the publications section of this page to see the article.
